Cisco APIC Layer 3 Networking Configuration Guide, Release 6.1(x)
First Published: 2024-08-01
Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 527-0883
© 2022–2024 Cisco Systems, Inc. All rights reserved.
Trademarks
THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS REFERENCED IN THIS
DOCUMENTATION ARE SUBJECT TO CHANGE WITHOUT NOTICE. EXCEPT AS MAY OTHERWISE
BE AGREED BY CISCO IN WRITING, ALL STATEMENTS, INFORMATION, AND
RECOMMENDATIONS IN THIS DOCUMENTATION ARE PRESENTED WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED.
The Cisco End User License Agreement and any supplemental license terms govern your use of any Cisco
software, including this product documentation, and are located at: https://www.cisco.com/c/en/us/about/legal/
cloud-and-software/software-terms.html. Cisco product warranty information is available at
https://www.cisco.com/c/en/us/products/warranty-listing.html. US Federal Communications Commission
Notices are found here https://www.cisco.com/c/en/us/products/us-fcc-notice.html.
IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL,
CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST
PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE
THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY
OF SUCH DAMAGES.
Any products and features described herein as in development or available at a future date remain in varying
stages of development and will be offered on a when-and-if-available basis. Any such product or feature
roadmaps are subject to change at the sole discretion of Cisco and Cisco will have no liability for delay in the
delivery or failure to deliver any products or feature roadmap items that may be set forth in this document.
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual
addresses and phone numbers. Any examples, command display output, network topology diagrams, and
other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses
or phone numbers in illustrative content is unintentional and coincidental.
The documentation set for this product strives to use bias-free language. For the purposes of this documentation
set, bias-free is defined as language that does not imply discrimination based on age, disability, gender, racial
identity, ethnic identity, sexual orientation, socioeconomic status, and intersectionality. Exceptions may be
present in the documentation due to language that is hardcoded in the user interfaces of the product software,
language used based on RFP documentation, or language that is used by a referenced third-party product.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and
other countries. To view a list of Cisco trademarks, go to this URL: https://www.cisco.com/c/en/us/about/
legal/trademarks.html. Third-party trademarks mentioned are the property of their respective owners. The use
of the word partner does not imply a partnership relationship between Cisco and any other company. (1721R)
Layer 3 Prerequisites 13
Bridge Domain Configurations 13
CHAPTER 5 IP Aging 19
Overview 19
Configuring the IP Aging Policy Using the GUI 19
Neighbor Discovery 29
Configuring IPv6 Neighbor Discovery on a Bridge Domain 30
Creating the Tenant, VRF, and Bridge Domain with IPv6 Neighbor Discovery on the Bridge Domain
Using the GUI 30
Configuring IPv6 Neighbor Discovery on a Layer 3 Interface 32
Guidelines and Limitations 32
Configuring an IPv6 Neighbor Discovery Interface Policy with RA on a Layer 3 Interface Using the
GUI 32
Configuring IPv6 Neighbor Discovery Duplicate Address Detection 33
The APIC IGMP Snooping Function, IGMPv1, IGMPv2, and the Fast Leave Feature 49
The APIC IGMP Snooping Function and IGMPv3 49
Cisco APIC and the IGMP Snooping Querier Function 50
Guidelines and Limitations for the APIC IGMP Snooping Function 50
Configuring and Assigning an IGMP Snooping Policy 50
Configuring and Assigning an IGMP Snooping Policy to a Bridge Domain in the Advanced GUI 50
Configuring an IGMP Snooping Policy Using the GUI 51
Assigning an IGMP Snooping Policy to a Bridge Domain Using the GUI 52
Enabling IGMP Snooping Static Port Groups 52
Enabling IGMP Snooping Static Port Groups 52
Prerequisite: Deploy EPGs to Static Ports 53
Enabling IGMP Snooping and Multicast on Static Ports Using the GUI 53
Enabling IGMP Snoop Access Groups 54
Enabling IGMP Snoop Access Groups 54
Enabling Group Access to IGMP Snooping and Multicast Using the GUI 54
Configuring BGP External Routed Network with Autonomous System Override Enabled Using
the GUI 307
BGP Neighbor Shutdown and Soft Reset 307
About BGP Neighbor Shutdown and Soft Reset 307
Configuring BGP Neighbor Shutdown Using the GUI 308
Configuring BGP Neighbor Soft Reset Using the GUI 309
Configuring Per VRF Per Node BGP Timer Values 311
Per VRF Per Node BGP Timer Values 311
Configuring a Per VRF Per Node BGP Timer Using the Advanced GUI 311
Troubleshooting Inconsistency and Faults 312
Configuring BFD Support 313
Bidirectional Forwarding Detection 313
CHAPTER 22 Route Control with Route Maps and Route Profiles 343
Configuring a Route Control Protocol to Use Import and Export Controls, With the GUI 360
Interleak Redistribution for MP-BGP 362
Overview of Interleak Redistribution for MP-BGP 362
Configuring a Route Map for Interleak Redistribution Using the GUI 362
Applying a Route Map for Interleak Redistribution Using the GUI 363
Configure Remote Leaf Switches Using the NX-OS Style CLI 471
Part II: External Routing (L3Out) Configuration 474
Routed Connectivity to External Networks 474
Configuring an MP-BGP Route Reflector Using the NX-OS Style CLI 474
Node and Interface for L3Out 474
Configuring Layer 3 Routed and Sub-Interface Port Channels Using the NX-OS Style CLI 474
Configuring a Switch Virtual Interface Using the NX-OS Style CLI 481
Associating a Track List with a Next Hop Profile Using the NX-OS Style CLI 522
Viewing Track List and Track Member Status Using the CLI 522
Viewing Track List and Track Member Detail Using the CLI 523
Configuring HSRP Using the NX-OS Style CLI 525
Configuring HSRP in Cisco APIC Using Inline Parameters in NX-OS Style CLI 525
Configuring HSRP in Cisco APIC Using Template and Policy in NX-OS Style CLI 526
Configuring Cisco ACI GOLF Using the NX-OS Style CLI 527
Recommended Shared GOLF Configuration Using the NX-OS Style CLI 527
Cisco ACI GOLF Configuration Example, Using the NX-OS Style CLI 528
Enabling Distributing BGP EVPN Type-2 Host Routes to a DCIG Using the NX-OS Style CLI 530
Configuring and Assigning an MLD Snooping Policy to a Bridge Domain using the REST API 542
Configuring IP Multicast Using REST API 542
Configuring Layer 3 Multicast Using REST API 542
Configuring Layer 3 IPv6 Multicast Using REST API 545
Configuring Multicast Filtering Using the REST API 546
Configuring Multi-Pod Using REST API 548
Setting Up Multi-Pod Fabric Using the REST API 548
Configuring Remote Leaf Switches Using REST API 550
Configure Remote Leaf Switches Using the REST API 550
Configuring SR-MPLS Handoff Using REST API 553
Configuring an SR-MPLS Infra L3Out Using the REST API 553
Configuring an SR-MPLS VRF L3Out Using the REST API 555
Creating SR-MPLS Custom QoS Policy Using REST API 556
Part II: External Routing (L3Out) Configuration 558
Routed Connectivity to External Networks 558
Configuring an MP-BGP Route Reflector Using REST API 558
Configuring the BGP Domain-Path Feature for Loop Prevention Using the REST API 559
Node and Interface for L3Out 559
Configuring Layer 3 Routed and Sub-Interface Port Channels Using REST API 559
Configuring a Switch Virtual Interface Using REST API 562
Configuring Routing Protocols Using REST API 563
Configuring BGP External Routed Networks with BFD Support Using REST API 563
Configuring OSPF External Routed Networks Using REST API 575
Configuring EIGRP External Routed Networks Using REST API 576
Configuring Route Summarization Using REST API 578
Configuring Route Summarization for BGP, OSPF, and EIGRP Using the REST API 578
Configuring Route Control with Route Maps and Route Profile Using REST API 580
Configuring Route Control Per BGP Peer Using the REST API 580
Configuring Route Map/Profile with Explicit Prefix List Using REST API 581
Configuring a Route Control Protocol to Use Import and Export Controls, With the REST API 582
Configuring Interleak Redistribution Using the REST API 583
Configuring Transit Routing Using REST API 584
Configuring Transit Routing Using the REST API 584
REST API Example: Transit Routing 588
Table 1: New Features and Changed Behavior in Cisco APIC Release 6.1(2)

• OSPFv3 authentication: Support for encryption and authentication for OSPFv3 sessions. See Create an OSPF IPsec Policy, on page 278.
• VXLAN Site ID: Specify a VXLAN site ID while configuring the border gateway set policy. See VXLAN Site ID, on page 228.
• VRF in Enforced Mode: VRFs can now be configured in enforced mode. The endpoints and prefixes that are advertised from the remote VXLAN EVPN fabrics can be classified into endpoint groups that are represented through Endpoint Security Group (ESG) objects. Use the newly supported selectors that are only applicable for remote VXLAN endpoints. See VXLAN Stretched Bridge Domain Selector, on page 233, and VXLAN External Subnet Selectors, on page 233.
Table 2: New Features and Changed Behavior in Cisco APIC Release 6.1(1)

• Cisco ACI border gateways: With the Cisco ACI border gateway (BGW) solution, you can now have a seamless extension of a Virtual Routing and Forwarding (VRF) instance and bridge domain between fabrics. The Cisco ACI BGW is a node that interacts with nodes within a site and with nodes that are external to the site. The Cisco ACI BGW feature can be conceptualized as multiple site-local EVPN control planes and IP forwarding domains interconnected by a single common EVPN control and forwarding domain. See ACI Border Gateways, on page 207.
• Deploying remote leaf switch fabric ports on L3Outs as a routed sub-interface: You can now deploy remote leaf switch fabric ports on user tenant L3Outs and on SR-MPLS infra L3Outs as a routed sub-interface. See SR-MPLS Handoff, on page 151.
• OSPFv2 authentication: For enhanced security with OSPFv2, you can specify the OSPFv2 authentication key. The authentication key is a password of up to 8 characters that you can assign on a per-interface basis. See Create OSPF Interface Profile, on page 275.
• OSPFv2 rotating keys: For enhanced security with OSPFv2, you can use rotating keys by specifying a lifetime for each key. When the lifetime expires for a key, it automatically rotates to the next key. If you do not specify an algorithm, OSPF uses MD5, which is the default cryptographic authentication algorithm. See Create Key Policy, on page 277.
Table 3: New Features and Changed Behavior in Cisco APIC Release 6.0(4)

Table 4: New Features and Changed Behavior in Cisco APIC Release 6.0(3)

Table 5: New Features and Changed Behavior in Cisco APIC Release 6.0(2)

• BGP additional paths: BGP supports the additional paths feature, which allows the BGP speaker to propagate and accept multiple paths for the same prefix without the new paths replacing any previous paths. This feature allows BGP speaker peers to negotiate whether they support advertising and receiving multiple paths per prefix and advertising such paths. See BGP Additional-Paths, on page 296.
• Config Stripe Winner Policy: The fabric now supports a configurable stripe winner policy where you can select a pod for a specific multicast group, group range and/or source, source range. This ensures that the border leaf elected as the stripe winner is from the selected pod. See About Config Stripe Winner Policy, on page 72.
• Proportional equal-cost multi-path (ECMP) routing: You can use the next-hop propagate and redistribute attached host features to avoid sub-optimal routing in the Cisco ACI fabric. When these features are enabled, packet flows from a non-border leaf switch are forwarded directly to the leaf switch connected to the next-hop address. All next-hops are now used for ECMP forwarding from the hardware. In addition, Cisco ACI now redistributes ECMP paths into BGP for both directly connected next-hops and recursive next-hops. See About Equal-Cost Multi-Path Routing in Cisco ACI, on page 291.
Table 6: New Features and Changed Behavior in Cisco APIC Release 6.0(1)

• Remote pools with subnet mask of up to /28: You can now configure remote pools with a subnet mask of up to /28. See Remote Leaf Switches, on page 121.
• BGP autonomous system (AS) enhancements: You can now use the Remove Private AS option to remove private AS numbers from the AS_path in an eBGP route. There is also support for an AS-Path match clause while creating a BGP per-peer route-map. See Configuring Bidirectional Forwarding Detection on a Secondary IP Address Using the GUI, on page 314, and Configuring Route Control Per BGP Peer Using the GUI, on page 346.
As traffic enters the fabric, ACI encapsulates and applies policy to it, forwards it as needed across the fabric through a spine switch (a maximum of two hops), and de-encapsulates it upon exiting the fabric. Within the fabric, ACI uses Intermediate System-to-Intermediate System Protocol (IS-IS) and Council of Oracle Protocol (COOP) for all forwarding of endpoint-to-endpoint communications. This keeps all ACI links active, provides equal-cost multipath (ECMP) forwarding in the fabric, and allows fast reconvergence. For propagating routing information between software-defined networks within the fabric and routers external to the fabric, ACI uses the Multiprotocol Border Gateway Protocol (MP-BGP).
VXLAN in ACI
VXLAN is an industry-standard protocol that extends Layer 2 segments over Layer 3 infrastructure to build
Layer 2 overlay logical networks. The ACI infrastructure Layer 2 domains reside in the overlay, with isolated
broadcast and failure bridge domains. This approach allows the data center network to grow without the risk
of creating too large a failure domain.
All traffic in the ACI fabric is normalized as VXLAN packets. At ingress, ACI encapsulates external VLAN,
VXLAN, and NVGRE packets in a VXLAN packet. The following figure shows ACI encapsulation
normalization.
Forwarding in the ACI fabric is not limited to or constrained by the encapsulation type or encapsulation
overlay network. An ACI bridge domain forwarding policy can be defined to provide standard VLAN behavior
where required.
Because every packet in the fabric carries ACI policy attributes, ACI can consistently enforce policy in a fully
distributed manner. ACI decouples application policy EPG identity from forwarding. The following illustration
shows how the ACI VXLAN header identifies application policy within the fabric.
Figure 3: ACI VXLAN Packet Format
The ACI VXLAN packet contains both Layer 2 MAC address and Layer 3 IP address source and destination
fields, which enables efficient and scalable forwarding within the fabric. The ACI VXLAN packet header
source group field identifies the application policy endpoint group (EPG) to which the packet belongs. The
VXLAN Instance ID (VNID) enables forwarding of the packet through tenant virtual routing and forwarding
(VRF) domains within the fabric. The 24-bit VNID field in the VXLAN header provides an expanded address
space for up to 16 million unique Layer 2 segments in the same network. This expanded address space gives
IT departments and cloud providers greater flexibility as they build large multitenant data centers.
VXLAN enables ACI to deploy Layer 2 virtual networks at scale across the fabric underlay Layer 3
infrastructure. Application endpoint hosts can be flexibly placed in the data center network without concern
for the Layer 3 boundary of the underlay infrastructure, while maintaining Layer 2 adjacency in a VXLAN
overlay network.
VXLAN uses VTEP devices to map tenant end devices to VXLAN segments and to perform VXLAN
encapsulation and de-encapsulation. Each VTEP function has two interfaces:
• A switch interface on the local LAN segment to support local endpoint communication through bridging
• An IP interface to the transport IP network
The IP interface has a unique IP address that identifies the VTEP device on the transport IP network known
as the infrastructure VLAN. The VTEP device uses this IP address to encapsulate Ethernet frames and transmit
the encapsulated packets to the transport network through the IP interface. A VTEP device also discovers the
remote VTEPs for its VXLAN segments and learns remote MAC Address-to-VTEP mappings through its IP
interface.
The VTEP in ACI maps the internal tenant MAC or IP address to a location using a distributed mapping
database. After the VTEP completes a lookup, the VTEP sends the original data packet encapsulated in
VXLAN with the destination address of the VTEP on the destination leaf switch. The destination leaf switch
de-encapsulates the packet and sends it to the receiving host. With this model, ACI uses a full mesh, single
hop, loop-free topology without the need to use the spanning-tree protocol to prevent loops.
The VXLAN segments are independent of the underlying network topology; conversely, the underlying IP
network between VTEPs is independent of the VXLAN overlay. It routes the encapsulated packets based on
the outer IP address header, which has the initiating VTEP as the source IP address and the terminating VTEP
as the destination IP address.
The following figure shows how routing within the tenant is done.
For each tenant VRF in the fabric, ACI assigns a single L3 VNID. ACI transports traffic across the fabric
according to the L3 VNID. At the egress leaf switch, ACI routes the packet from the L3 VNID to the VNID
of the egress subnet.
Traffic arriving at the fabric ingress that is sent to the ACI fabric default gateway is routed into the Layer 3
VNID. This provides very efficient forwarding in the fabric for traffic routed within the tenant. For example,
with this model, traffic between 2 VMs belonging to the same tenant, on the same physical host, but on
different subnets, only needs to travel to the ingress switch interface before being routed (using the minimal
path cost) to the correct destination.
To distribute external routes within the fabric, ACI route reflectors use multiprotocol BGP (MP-BGP). The
fabric administrator provides the autonomous system (AS) number and specifies the spine switches that
become route reflectors.
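For reference, the route reflector policy can also be set through the REST API (see Configuring an MP-BGP Route Reflector Using REST API). The following payload is a minimal sketch that assumes an example AS number of 65001 and spine node IDs 101 and 102; substitute the values for your fabric.

POST https://apic-ip/api/mo/uni/fabric.xml

<bgpInstPol name="default">
  <!-- fabric autonomous system number (example value) -->
  <bgpAsP asn="65001"/>
  <bgpRRP>
    <!-- spine node IDs chosen as route reflectors (example IDs) -->
    <bgpRRNodePEp id="101"/>
    <bgpRRNodePEp id="102"/>
  </bgpRRP>
</bgpInstPol>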
Note Cisco ACI does not support IP fragmentation. Therefore, when you configure Layer 3 Outside (L3Out)
connections to external routers, or Multi-Pod connections through an Inter-Pod Network (IPN), it is
recommended that the interface MTU is set appropriately on both ends of a link.
IGP protocol packets (EIGRP, OSPFv3) are constructed by components based on the interface MTU size. In Cisco ACI, if the CPU MTU size is less than the interface MTU size and the constructed packet size is greater than the CPU MTU, the packet is dropped by the kernel, especially in IPv6. To avoid such control packet drops, always configure the same MTU values on both the control plane and the interface.
On some platforms, such as Cisco ACI, Cisco NX-OS, and Cisco IOS, the configurable MTU value does not
take into account the Ethernet headers (matching the IP MTU, and excluding the 14-18 byte Ethernet header),
while other platforms, such as IOS-XR, include the Ethernet header in the configured MTU value. A configured
value of 9000 results in a max IP packet size of 9000 bytes in Cisco ACI, Cisco NX-OS, and Cisco IOS, but
results in a max IP packet size of 8986 bytes for an IOS-XR untagged interface.
For the appropriate MTU values for each platform, see the relevant configuration guides.
We highly recommend that you test the MTU using CLI-based commands. For example, on the Cisco NX-OS
CLI, use a command such as ping 1.1.1.1 df-bit packet-size 9000 source-interface ethernet 1/1.
Layer 3 Prerequisites
Before you begin to perform the tasks in this guide, complete the following:
• Ensure that the ACI fabric and the APIC controllers are online, and the APIC cluster is formed and
healthy—For more information, see Cisco APIC Getting Started Guide, Release 2.x.
• Ensure that fabric administrator accounts for the administrators that will configure Layer 3 networks are
available—For instructions, see the User Access, Authentication, and Accounting and Management
chapters in Cisco APIC Basic Configuration Guide.
• Ensure that the target leaf and spine switches (with the necessary interfaces) are available—For more
information, see Cisco APIC Getting Started Guide, Release 2.x.
For information about installing and registering virtual switches, see Cisco ACI Virtualization Guide.
• Configure the tenants, bridge domains, VRFs, and EPGs (with application profiles and contracts) that
will consume the Layer 3 networks—For instructions, see the Basic User Tenant Configuration chapter
in Cisco APIC Basic Configuration Guide.
• Configure NTP, DNS Service, and DHCP Relay policies—For instructions, see the Provisioning Core
ACI Fabric Services chapter in Cisco APIC Basic Configuration Guide, Release 2.x.
Caution If you install 1 Gigabit Ethernet (GE) or 10GE links between the leaf and spine switches in the fabric, there
is risk of packets being dropped instead of forwarded, because of inadequate bandwidth. To avoid the risk,
use 40GE or 100GE links between the leaf and spine switches.
• Unicast Routing: If this setting is enabled and a subnet address is configured, the fabric provides the
default gateway function and routes the traffic. Enabling unicast routing also instructs the mapping
database to learn the endpoint IP-to-VTEP mapping for this bridge domain. The IP learning is not
dependent upon having a subnet configured under the bridge domain.
• Subnet Address: This option configures the SVI IP addresses (default gateway) for the bridge domain.
• Limit IP Learning to Subnet: This option is similar to a unicast reverse-forwarding-path check. If this
option is selected, the fabric will not learn IP addresses from a subnet other than the one configured on
the bridge domain.
Caution Enabling Limit IP Learning to Subnet is disruptive to the traffic in the bridge domain.
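For reference, these bridge domain settings map directly to attributes in the REST API object model. The following payload is a minimal sketch that assumes a tenant named T1, a VRF named V1, and an example gateway address; adjust the names and addresses for your environment.

POST https://apic-ip/api/mo/uni/tn-T1.xml

<fvBD name="BD1" unicastRoute="yes" limitIpLearnToSubnets="yes">
  <!-- unicastRoute enables the default gateway function; limitIpLearnToSubnets restricts IP learning to configured subnets -->
  <fvRsCtx tnFvCtxName="V1"/>
  <!-- the subnet below becomes the SVI default gateway address for the bridge domain -->
  <fvSubnet ip="192.0.2.1/24" scope="private"/>
</fvBD>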
Overview
Note The Common Pervasive Gateway feature is being deprecated and is not actively maintained anymore.
When operating more than one Cisco ACI fabric, we highly recommend that you deploy Multi-Site instead
of interconnecting multiple individual ACI fabrics to each other through leaf switches using the Common
Pervasive Gateway feature. The Common Pervasive Gateway feature is currently not supported because no
validations and quality assurance tests are performed in this topology for many other new features, such as
L3 multicast. Hence, although Cisco ACI had the Common Pervasive Gateway feature for interconnecting
ACI fabrics prior to Multi-Site, we highly recommend that you design a new ACI fabric with Multi-Site
instead when there is a requirement to interconnect separate APIC domains.
This example shows how to configure Common Pervasive Gateway for IPv4 when using the Cisco APIC.
Two ACI fabrics can be configured with an IPv4 common gateway on a per bridge domain basis. Doing so
enables moving one or more virtual machine (VM) or conventional hosts across the fabrics while the host
retains its IP address. VM host moves across fabrics can be done automatically by the VM hypervisor. The
ACI fabrics can be co-located, or provisioned across multiple sites. The Layer 2 connection between the ACI
fabrics can be a local link, or can be across a bridged network. The following figure illustrates the basic
common pervasive gateway topology.
Note Depending upon the topology used to interconnect two Cisco ACI fabrics, the interconnecting devices must filter out traffic sourced from the virtual MAC address of the gateway switch virtual interface (SVI).
Procedure
a) In the Main tab, in the Name field, enter a name for the bridge domain, and choose the desired values for the remaining
fields.
b) In the L3 configurations tab, expand Subnets, and in the Create Subnets dialog box, in the Gateway IP field, enter
the IP address.
For example, 192.0.2.1/24.
c) In the Treat as virtual IP address field, check the check box.
d) In the Make this IP address primary field, check the check box to specify this IP address for DHCP relay.
Checking this check box affects DHCP relay only.
e) Click OK, then click Next to advance to the Advanced/Troubleshooting tab, then click Finish.
Step 5 Double click the Bridge Domain that you just created in the Work pane, and perform the following action:
a) Click the Policy tab, then click the L3 Configurations subtab.
b) Expand Subnets again, and in the Create Subnets dialog box, to create the physical IP address in the Gateway IP
field, use the same subnet which is configured as the virtual IP address.
For example, if you used 192.0.2.1/24 for the virtual IP address, you might use 192.0.2.2/24 here for the physical IP
address.
Note
The physical IP address must be unique across the Cisco ACI fabric.
Note
This step essentially ties the virtual MAC address that you enter in this field with the virtual IP address that you entered
in the previous step. If you were to delete the virtual MAC address at some point in the future, you should also remove
the check from the Treat as virtual IP address field for the IP address that you entered in the previous step.
Step 7 To create an L2Out EPG to extend the bridge domain to another fabric, in the Navigation pane, right-click L2Outs, click
Create L2Out, and perform the following actions:
a) In the Name field, enter a name for the bridged outside.
b) In the Bridge Domain field, select the bridge domain that you previously created.
c) In the Encap field, enter the VLAN encapsulation; it must match the L2Out encapsulation of the other fabric.
d) In the Path Type field, select Port, PC, or VPC to deploy the EPG and click Next.
e) To create an external EPG network, click in the Name field, enter a name for the network, optionally specify the
QoS class, and click Finish to complete the Common Pervasive Gateway configuration.
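For reference, the virtual IP address, physical IP address, and virtual MAC address entered in the preceding steps correspond to bridge domain and subnet attributes in the REST API. The following payload is a minimal sketch that assumes a tenant named T1 and the example addresses used above; the virtual MAC value is an example only and must match on both fabrics.

POST https://apic-ip/api/mo/uni/tn-T1.xml

<fvBD name="BD1" vmac="00:22:BD:F8:19:DD">
  <!-- vmac is the common virtual MAC address shared by both fabrics (example value) -->
  <fvSubnet ip="192.0.2.1/24" virtual="yes" preferred="yes"/>
  <!-- virtual="yes" marks the common virtual IP address; preferred="yes" corresponds to Make this IP address primary (DHCP relay) -->
  <fvSubnet ip="192.0.2.2/24" virtual="no"/>
  <!-- the physical IP address, which must be unique in each fabric -->
</fvBD>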
Overview
The IP Aging policy tracks and ages unused IP addresses on an endpoint. Tracking is performed using the
endpoint retention policy configured for the bridge domain to send ARP requests (for IPv4) and neighbor
solicitations (for IPv6) at 75% of the local endpoint aging interval. When no response is received from an IP
address, that IP address is aged out.
This document explains how to configure the IP Aging policy.
Procedure
What to do next
To specify the interval used for tracking IP addresses on endpoints, create an End Point Retention policy by
navigating to Tenants > tenant-name > Policies > Protocol, right-click End Point Retention, and choose
Create End Point Retention Policy.
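If you prefer to script this setting, the IP Aging policy can also be toggled through the REST API. The following payload is a minimal sketch; the class name (epIpAgingP) and its location under uni/infra reflect the standard object model, but verify them against your APIC version before use.

POST https://apic-ip/api/mo/uni/infra.xml

<epIpAgingP name="default" adminSt="enabled"/>
<!-- adminSt="enabled" turns on IP aging fabric-wide; the default is "disabled" (class and name assumed) -->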
Procedure
• For EPG-to-EPG intra-VRF instance Layer 3 traffic, the policy is always applied on the egress leaf switch
because the ingress leaf switch cannot resolve the destination class. The remote IP address is not learned.
• For EPG-to-EPG intra-VRF instance Layer 2 traffic, the policy can be applied on the ingress leaf switch
because the switch can still learn the remote MAC address, but not the remote IP address.
• When dataplane IP address learning is enabled for an endpoint or subnet, a dataplane IP address is not
learned using an endpoint-to-endpoint ARP request that does not reach a CPU. However, an ARP request
to a bridge domain SVI gateway is still learned.
• When dataplane IP address learning is enabled for a VRF instance, local and remote MAC addresses are
learned using an endpoint-to-endpoint ARP request.
The following guidelines and limitations apply to disabling dataplane IP address learning per endpoint or
subnet:
• If there is communication between endpoints in the same bridge domain, the L2 unknown Unicast
property must be set to Flood on the bridge domain. ARP flooding must also be enabled. Otherwise,
ARP between endpoints in the same bridge domain does not work because the local MAC address and
remote MAC address are not learned through an endpoint-to-endpoint ARP request.
• Instead of flushing, the local IP address is converted to the dp-lrn-dis (dataplane learn disabled) state.
• You cannot have endpoint dataplane IP address learning enabled when the subnet for an endpoint is
configured with dataplane IP address learning disabled. For example, you cannot have a bridge domain
with subnet 100.10.0.1/24 with learning disabled and an EPG with 100.10.0.100/32 with learning enabled.
• When dataplane IP address learning is disabled for an endpoint or subnet, the switch will not learn/refresh
Layer 2 MAC addresses from routed Layer 3 data traffic. Layer 2 MAC addresses will only be learned
from Layer 2 data traffic or ARP packets.
• When dataplane IP address learning is disabled for an endpoint or subnet, an IP address learn or move
triggered from a GARP packet is only possible with the ARP flood mode along with GARP-based
endpoint move detection enabled.
• Disabled: Remote IP addresses are flushed and rogue IP addresses are aged out. Rogue IP address
are not detected on local moves. The only moves that are detected are from control traffic. Bounce
is learned from COOP, but these are dropped once the bounce timer expires.
Clients (IP endpoints) behind the EPG are learned through the data/control plane. The VIP address
is learned only through the control plane on the load balancer EPG. Even though it is through the
control plane, the VIP address is not learned on other EPGs.
• Disabled:
• Client to load balancer: No remote IP address is learned for the VIP address. The remote IP address is cleared, and the spine-proxy is used. If the VIP address is learned, the spine-proxy lookup succeeds; otherwise, the spine switch generates a glean request for the VIP address and learns it through the control plane.
• Load balancer to server: No effect. Only bridging between the load balancer/server is supported
for the DSR use case.
• Server to client: The remote IP address for the client is cleared and the spine-proxy will be
used. If the remote IP address for the client entry is deleted in the spine switch, it is re-learned
through glean. For clients behind an L3Out, there is no Layer 3 remote IP address.
Procedure
Step 1 Navigate to Tenants > tenant_name > Networking > VRFs > vrf_name.
Step 2 On the VRF - vrf_name work pane, click the Policy tab.
Step 3 Scroll to the bottom of the Policy work pane and locate IP Data-plane Learning.
Step 4 Click one of the following:
• Disabled: Disables dataplane IP address learning on the VRF instance.
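The same setting is exposed as a single attribute on the VRF object in the REST API. The following payload is a minimal sketch, assuming a tenant named T1 and a VRF named V1.

POST https://apic-ip/api/mo/uni/tn-T1.xml

<fvCtx name="V1" ipDataPlaneLearning="disabled"/>
<!-- set to "enabled" to restore the default dataplane IP address learning behavior for the VRF instance -->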
Procedure
b) In the Work pane, for the IP Data-plane Learning toggle, choose Enable or Disable, as desired.
This enables or disables IP address dataplane learning for the endpoint.
Step 4 If you are creating a new subnet, perform the following substeps:
a) In the Navigation pane, choose Tenant tenant_name > Application Profiles > app_profile_name > Application
EPGs > app_epg_name > Subnets.
b) Right-click Subnets and choose Create EPG Subnet.
c) For the Default Gateway IP field, you must specify a mask of /32 for an IPv4 address or /128 for an IPv6 address.
d) Put a check in the No Default SVI Gateway checkbox.
e) For the Type Behind Subnet buttons, choose None or Anycast MAC.
f) For the IP Data-plane Learning toggle, choose Enable or Disable, as desired.
This enables or disables IP address dataplane learning for the endpoint.
g) Fill out the remaining fields as necessary.
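For reference, the equivalent EPG subnet can be created with a REST payload similar to the following minimal sketch. The tenant, application profile, and EPG names are examples, and the ipDPLearning attribute name is an assumption based on the standard object model; verify it against your APIC version.

POST https://apic-ip/api/mo/uni/tn-T1/ap-AP1/epg-EPG1.xml

<fvSubnet ip="192.0.2.100/32" ctrl="no-default-gateway" ipDPLearning="disabled"/>
<!-- /32 host subnet under the EPG with no default SVI gateway and dataplane IP learning disabled (attribute name assumed) -->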
Procedure
Step 4 If you are creating a new subnet, perform the following substeps:
a) In the Navigation pane, choose Tenant tenant_name > Networking > Bridge Domains > bridge_domain_name >
Subnets.
b) Right-click Subnets and choose Create Subnet.
c) For the Default Gateway IP field, enter the IP address and mask.
d) If you want to disable dataplane IP address learning, do not put a check in the No Default SVI Gateway checkbox.
e) For the IP Data-plane Learning toggle, choose Enable or Disable, as desired.
This enables or disables IP address dataplane learning for the subnet.
f) Fill out the remaining fields as necessary.
Step 5 Click Submit.
Neighbor Discovery
The IPv6 Neighbor Discovery (ND) protocol is responsible for the address auto configuration of nodes,
discovery of other nodes on the link, determining the link-layer addresses of other nodes, duplicate address
detection, finding available routers and DNS servers, address prefix discovery, and maintaining reachability
information about the paths to other active neighbor nodes.
ND-specific Neighbor Solicitation or Neighbor Advertisement (NS or NA) and Router Solicitation or Router
Advertisement (RS or RA) packet types are supported on all ACI fabric Layer 3 interfaces, including physical,
Layer 3 sub interface, and SVI (external and pervasive). Up to APIC release 3.1(1x), RS/RA packets are used
for auto configuration for all Layer 3 interfaces but are only configurable for pervasive SVIs.
Starting with APIC release 3.1(2x), RS/RA packets are used for auto configuration and are configurable on
Layer 3 interfaces including routed interface, Layer 3 sub interface, and SVI (external and pervasive).
ACI bridge domain ND always operates in flood mode; unicast mode is not supported.
The ACI fabric ND support includes the following:
• Interface policies (nd:IfPol) control ND timers and behavior for NS/NA messages.
• ND prefix policies (nd:PfxPol) control RA messages.
• Configuration of IPv6 subnets for ND (fv:Subnet).
• ND interface policies for external networks.
• Configurable ND subnets for external networks. Arbitrary subnet configurations for pervasive bridge
domains are not supported.
• Configurable Static Adjacencies: (<vrf, L3Iface, ipv6 address> --> mac address)
• Dynamic Adjacencies: Learned via exchange of NS/NA packets
• Per Interface
• Control of ND packets (NS/NA)
• Neighbor Solicitation Interval
• Neighbor Solicitation Retry count
• Control of RA packets
• Suppress RA
• Suppress RA MTU
• RA Interval, RA Interval minimum, Retransmit time
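As a concrete reference, the two policy objects listed above (nd:IfPol for NS/NA and RA control, nd:PfxPol for RA prefix options) can be created under a tenant with a REST payload similar to this minimal sketch; the tenant name and timer values are examples, and the attribute names should be verified against your APIC version.

POST https://apic-ip/api/mo/uni.xml

<fvTenant name="T1">
  <!-- ND interface policy: NS interval and retries plus the RA interval (example values) -->
  <ndIfPol name="ND-IfPol1" nsIntvl="1000" nsRetries="3" raIntvl="600"/>
  <!-- ND RA prefix policy: valid and preferred prefix lifetimes, in seconds (example values) -->
  <ndPfxPol name="ND-PfxPol1" lifetime="2592000" prefLifetime="604800"/>
</fvTenant>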
Procedure
d) In the Create Tenant dialog box, check the check box for the security domain that you created, and click Submit.
Step 3 In the Navigation pane, expand Tenant-name > Networking.
Step 4 In the Work pane, drag the VRF icon to the canvas to open the Create VRF dialog box, and perform the following
actions:
a) In the Name field, enter a name.
b) Click Submit to complete the VRF configuration.
Step 5 In the Networking area, drag the Bridge Domain icon to the canvas while connecting it to the VRF icon. In the Create
Bridge Domain dialog box that displays, perform the following actions:
a) In the Name field, enter a name.
b) Click the L3 Configurations tab, and expand Subnets to open the Create Subnet dialog box. Enter the IPv6 gateway
address and mask in the Gateway IP field.
Step 6 In the Subnet Control field, ensure that the ND RA Prefix check box is checked.
Step 7 In the ND Prefix policy field drop-down list, click Create ND RA Prefix Policy.
Note
There is already a default policy available that will be deployed on all IPv6 interfaces. Alternatively, you can create an
ND prefix policy to use as shown in this example. By default, the IPv6 gateway subnets are advertised as ND prefixes
in the ND RA messages. A user can choose to not advertise the subnet in ND RA messages by un-checking the ND
RA prefix check box.
Step 8 In the Create ND RA Prefix Policy dialog box, perform the following actions:
a) In the Name field, enter the name for the prefix policy.
Note
For a given subnet there can only be one prefix policy. It is possible for each subnet to have a different prefix policy,
although subnets can use a common prefix policy.
Step 9 In the ND policy field drop-down list, click Create ND Interface Policy and perform the following tasks:
a) In the Name field, enter a name for the policy.
b) Click Submit.
Step 10 Click OK to complete the bridge domain configuration.
Similarly you can create additional subnets with different prefix policies as required.
A subnet with an IPv6 address is created under the BD and an ND prefix policy has been associated with it.
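The tenant, VRF, bridge domain, and IPv6 subnet created in this procedure can also be expressed as a single REST payload. The following is a minimal sketch with example names and an example prefix; the relation class names used to attach the ND policies (fvRsBDToNdP and fvRsNdPfxP) are taken from the standard object model and should be verified for your APIC version.

POST https://apic-ip/api/mo/uni.xml

<fvTenant name="T1">
  <fvCtx name="V1"/>
  <fvBD name="BD1">
    <fvRsCtx tnFvCtxName="V1"/>
    <!-- ND interface policy for the bridge domain (relation name assumed) -->
    <fvRsBDToNdP tnNdIfPolName="ND-IfPol1"/>
    <!-- ctrl="nd" corresponds to the ND RA Prefix check box on the subnet -->
    <fvSubnet ip="2001:db8:1::1/64" ctrl="nd">
      <!-- ND RA prefix policy for this subnet (relation name assumed) -->
      <fvRsNdPfxP tnNdPfxPolName="ND-PfxPol1"/>
    </fvSubnet>
  </fvBD>
</fvTenant>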
Note The steps here show how to associate an IPv6 neighbor discovery interface policy with a Layer 3 interface.
The specific example shows how to configure using the non-VPC interface.
Procedure
Step 1 In the Navigation pane, navigate to the appropriate external routed network under the appropriate Tenant.
Step 2 Under L3Outs, expand > Logical Node Profiles > Logical Node Profile_name > Logical Interface Profiles.
Step 3 Double-click the appropriate Logical Interface Profile, and in the Work pane, click Policy > Routed Interfaces.
Note
If you do not have a Logical Interface Profile created, you can create a profile here.
Step 4 In the Routed Interface dialog box, perform the following actions:
a) In the ND RA Prefix field, check the check box to enable ND RA prefix for the interface.
When enabled, the routed interface is available for auto configuration.
Also, the ND RA Prefix Policy field is displayed.
b) In the ND RA Prefix Policy field, from the drop-down list, choose the appropriate policy.
c) Choose other values on the screen as desired. Click Submit.
Note
When you configure using a VPC interface, you must enable the ND RA prefix for both side A and side B as both
are members in the VPC configuration. In the Work Pane, in the Logical Interface Profile screen, click the SVI tab.
Under Properties, check the check boxes to enable the ND RA Prefix for both Side A and Side B. Choose the identical
ND RA Prefix Policy for Side A and Side B.
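If you prefer the REST API, the ND interface policy can be attached to the L3Out logical interface profile with a payload similar to the following minimal sketch. The L3Out, node profile, and interface profile names are examples, and the relation class (l3extRsNdIfPol) is an assumption based on the standard object model.

POST https://apic-ip/api/mo/uni/tn-T1/out-L3Out1/lnodep-NodeProf1/lifp-IfProf1.xml

<l3extLIfP name="IfProf1">
  <!-- attaches the ND interface policy to the interfaces in this logical interface profile (relation name assumed) -->
  <l3extRsNdIfPol tnNdIfPolName="ND-IfPol1"/>
</l3extLIfP>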
Procedure
Step 1 Navigate to the appropriate page to access the DAD field for that interface. For example:
a) Navigate to Tenants > Tenant > Networking > L3Outs > L3Out > Logical Node Profiles > node > Logical
Interface Profiles, then select the interface that you want to configure.
b) Click on Routed Sub-interfaces or SVI, then click on the Create (+) button to configure that interface.
Step 2 For this interface, make the following settings for the DAD entries:
• For the primary address, set the value for the DAD entry to enabled.
• For the shared secondary address, set the value for the DAD entry to disabled. Note that if the secondary address
is not shared across border leaf switches, then you do not need to disable the DAD for that address.
Example:
For example, if you were configuring this setting for the SVI interface, you would:
• Set the Side A IPv6 DAD to enabled.
• Set the Side B IPv6 DAD to disabled.
Example:
As another example, if you were configuring this setting for the routed sub-interface interface, you would:
• In the main Select Routed Sub-Interface page, set the value for IPv6 DAD for the routed sub-interface to enabled.
• Click on the Create (+) button on the IPv4 Secondary/IPv6 Additional Addresses area to access the Create Secondary
IP Address page, then set the value for IPv6 DAD to disabled. Then click on the OK button to apply the changes
in this screen.
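For reference, the same DAD settings can be applied through the REST API on the L3Out interface path. The following payload is a minimal sketch with example names, path, and addresses; the ipv6Dad attribute name on the interface path and on the secondary address object is an assumption based on the standard object model and should be verified for your APIC version.

POST https://apic-ip/api/mo/uni/tn-T1/out-L3Out1/lnodep-NodeProf1/lifp-IfProf1.xml

<l3extLIfP name="IfProf1">
  <l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/10]" ifInstT="sub-interface"
      encap="vlan-100" addr="2001:db8:1::2/64" ipv6Dad="enabled">
    <!-- primary address keeps DAD enabled (attribute name assumed) -->
    <l3extIp addr="2001:db8:1::99/64" ipv6Dad="disabled"/>
    <!-- shared secondary address with DAD disabled, because the same address exists on the other border leaf switch -->
  </l3extRsPathL3OutAtt>
</l3extLIfP>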
In this figure, Server 1 and Server 2 are in the MS NLB cluster. These servers appear as a single host to outside clients. All servers in the MS NLB cluster receive all incoming requests, and MS NLB then distributes the load between the servers.
Microsoft NLB functions in three different operational modes:
• Unicast Mode: In this mode, each NLB cluster VIP is assigned a unicast MAC address. This mode relies
on unknown unicast flooding to deliver traffic to the cluster.
• Multicast Mode: In this mode, each NLB cluster VIP is assigned a non-Internet Assigned Numbers
Authority (IANA) multicast MAC address (03xx.xxxx.xxxx).
• IGMP Mode: In this mode, an NLB cluster VIP is assigned a unique IPv4 multicast group address. The
multicast MAC address for this is derived from the standard MAC derivation for IPv4 multicast addresses.
The use of a common MAC address would normally create a conflict, since Layer 2 switches expect to see
unique source MAC addresses on all switch ports. To avoid this problem, Network Load Balancing uniquely
modifies the source MAC address for outgoing packets. If the cluster MAC address is 02-BF-1-2-3-4, then
each host's source MAC address is set to 02-x-1-2-3-4, where x is the host's priority within the cluster, as
shown in the following figure.
adapter's MAC address. For example, the multicast MAC address could be set to 03-BF-0A-14-1E-28 for a
cluster's primary IP address of 10.20.30.40. Cluster communication doesn't require a separate adapter.
Cisco ACI as a Layer 2 Network, With External Router as Layer 3 Gateway:
• Unicast mode: Supported on leaf switch models with -EX, -FX, or -FX2 at the end of the switch name.
• Multicast mode: Supported on leaf switch models with -EX, -FX, or -FX2 at the end of the switch name, as well as leaf switch models that do not have a suffix at the end of the switch name.
• IGMP mode: Supported on leaf switch models with -EX, -FX, or -FX2 at the end of the switch name, as well as leaf switch models that do not have a suffix at the end of the switch name. However, Microsoft NLB traffic is not scoped by IGMP, but rather is flooded instead.
Cisco ACI as a Layer 3 Gateway: Supported on Release 4.1 and later in all three Microsoft NLB modes.
The following table provides more information on the configuration options available for deploying Microsoft
NLB using Cisco ACI as Layer 2.
Table 8: External Router and ACI Bridge Domain Configuration for the Three Microsoft NLB Modes (1)

ACI Bridge Domain Configuration:
• Unicast mode: Bridge domain configured for unknown unicast flooding (not hw-proxy); no IP routing; IGMP snooping configuration: not applicable.
• Multicast mode: Bridge domain configured for unknown unicast flooding (not hw-proxy); no IP routing; Layer 3 unknown multicast set to flood (even with optimized multicast flooding, Microsoft NLB traffic is flooded); IGMP snooping configuration: optional, but can be enabled for future compatibility.
• IGMP mode: Bridge domain configured for unknown unicast flooding (not hw-proxy); no IP routing; Layer 3 unknown multicast: optional, but can be configured for future compatibility; querier configuration: optional, but can be enabled for future compatibility (configure a subnet under the bridge domain; IP routing is not needed).

External Router ARP Table Configuration:
• Unicast mode: No special ARP configuration; the external router learns the VIP-to-VMAC mapping.
• Multicast mode: Static ARP configuration mapping the unicast VIP to the multicast MAC address.
• IGMP mode: Static ARP configuration mapping the unicast VIP to the multicast MAC address.

(1) As of Release 3.2, using Microsoft NLB IGMP mode compared with Microsoft NLB multicast mode offers no benefits in terms of scoping of the multi-destination traffic.
Beginning with Release 4.1, configuring Cisco ACI to connect Microsoft NLB servers consists of the following
general tasks:
• Configuring the VRF, where you can configure the VRF in egress or ingress mode.
• Configuring a bridge domain (BD) for the Microsoft NLB servers, with L2 unknown unicast in flooding
mode and not in hardware-proxy mode.
• Defining an EPG for all the Microsoft NLB servers that share the same VIP. You must associate this
EPG with the previously defined BD.
• Entering the Microsoft NLB VIP as a subnet under the EPG. You can configure the Microsoft NLB in
the following modes:
• Unicast mode: You will enter the unicast MAC address as part of the Microsoft NLB VIP
configuration. In this mode, the traffic from the client to the Microsoft NLB VIP is flooded to all
the EPGs in the Microsoft NLB BD.
• Multicast mode: You will enter the multicast MAC address while configuring the Microsoft NLB
VIP itself. You will go to the static ports under the Microsoft NLB EPG and add the Microsoft NLB
multicast MAC to the EPG ports where the Microsoft NLB servers are connected. In this mode, the
traffic is forwarded to the ports that have the static MAC binding.
• IGMP mode: You will enter a Microsoft NLB group address while configuring the Microsoft NLB
VIP itself. In this mode, the traffic from the client to the Microsoft NLB VIP is forwarded to the
ports where the IGMP join is received for the Microsoft NLB group address.
• Configuring a contract between the Microsoft NLB EPG and the client EPG. You must configure the
Microsoft NLB EPG as the provider side of the contract and the client EPG as the consumer side of the
contract.
Microsoft NLB is a route plus flood solution. Traffic from the client to the Microsoft NLB VIP is first routed
at the consumer ToR switch, and is then flooded on the Microsoft NLB BD toward the provider ToR switch.
Once traffic leaves the consumer ToR switch, traffic is flooded and contracts cannot be applied to flooded traffic.
Therefore, contract enforcement must be done on the consumer ToR switch.
For a VRF in ingress mode, intra-VRF traffic from the L3Out to the Microsoft NLB EPG may be dropped on
the consumer ToR switch because the border leaf switch (consumer ToR switch) does not have a policy. To
work around this issue, use one of the following options:
• Option 1: Configure the VRF in egress mode. When you configure the VRF in egress mode, the policy
is downloaded on the border leaf switch.
• Option 2: Add the Microsoft NLB EPG and L3external of the L3Out in a preferred group. Traffic will
hit the default-allow policy on the consumer ToR switch.
• Option 3: Deploy the Microsoft NLB EPG on an unused port that is in an up state, or on a port connected
to a Microsoft NLB server on the border leaf switch. By doing so, the Microsoft NLB EPG becomes a
local endpoint on the border leaf switch. The policy is downloaded for local endpoints, so the border leaf
switch would therefore have the policy downloaded.
• Option 4: Use a shared service. Deploy an L3Out in the consumer VRF, which is different from the
provider Microsoft NLB VRF. For the Microsoft NLB VIP under the Microsoft NLB EPG, check the
Shared between VRFs box. Configure a contract between L3Out from the consumer VRF and the
Microsoft NLB EPG. By using a shared service, the policy is downloaded on the border leaf switch.
The following table provides more information on supported EPG and BD configurations for the Microsoft
NLB modes.
Table 9: Cisco ACI EPG and BD Configurations for the Microsoft NLB Modes

EPG Configuration:
• Unicast mode: Subnet for the VIP; unicast MAC address defined as part of the subnet.
• Multicast mode: Subnet for the VIP; multicast MAC address defined as part of the subnet; static binding to the ports where the servers are; static group MAC address on each path.
• IGMP mode: Subnet for the VIP; no need to enter a MAC address; you can choose dynamic group or static group; if you choose the static group option, enter static paths and enter the multicast group in each path.

VMM Domain:
• Unicast mode: You can enter a VMM domain.
• Multicast mode: Multicast mode requires a static path, so you cannot use a VMM domain in this situation.
• IGMP mode: In dynamic group mode, you can use a VMM domain.
• You should configure the Microsoft NLB bridge domain with the default SVI MAC address. Under the Layer 3
configurations, configure the bridge domain MAC address with the default setting of 00:22:BD:F8:19:FF.
Do not modify this default SVI MAC address for the Microsoft NLB bridge domain.
• There is a hardware limit of 128 Microsoft NLB VIPs per fabric.
• Virtualized servers that are configured for Microsoft NLB can connect to Cisco ACI with static binding
in all modes (unicast, multicast, and IGMP).
• Virtualized servers that are configured for Microsoft NLB can connect to Cisco ACI through VMM
integration in unicast mode and IGMP mode.
• Microsoft NLB unicast mode is not supported with VMM integration behind Cisco UCS B-Series Blade
Servers in end-host mode.
Microsoft NLB in unicast mode relies on unknown unicast flooding for delivery of cluster-bound packets.
Unicast mode will not work on Cisco UCS B-Series Blade Servers when the fabric interconnect is in
end-host mode, because unknown unicast frames are not flooded as required by this mode. For more
details on the layer 2 forwarding behavior of Cisco UCS B-Series Blade Servers in end-host mode, see:
https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/unified-computing/
whitepaper_c11-701962.html
Procedure
Step 1 In the Navigation pane, choose Tenant > tenant_name > Application Profiles > application_profile_name > Application
EPGs > application_EPG_name > Subnets.
Step 2 Right-click Subnets and select Create EPG Subnet.
Step 3 In the Create EPG Subnet dialog box, fill in the following fields:
a) In the Default Gateway IP field, enter the Microsoft NLB cluster VIP.
For example, 192.0.2.1/32.
b) In the Scope area, for shared services, check Shared between VRFs.
Uncheck Private to VRF, if it is selected.
c) Under Subnet Control, check the No Default SVI Gateway check box.
d) In the Type Behind Subnet area, click EpNlb.
Procedure
Step 1 In the Navigation pane, choose Tenant > tenant_name > Application Profiles > application_profile_name > Application
EPGs > application_EPG_name > Subnets.
Step 2 Right-click Subnets and select Create EPG Subnet.
Step 3 In the Create EPG Subnet dialog box, fill in the following fields:
a) In the Default Gateway IP field, enter the Microsoft NLB cluster VIP.
For example, 192.0.2.1/32.
b) In the Scope area, for shared services, check Shared between VRFs.
Uncheck Private to VRF, if it is selected.
c) Under Subnet Control, check the No Default SVI Gateway check box.
d) In the Type Behind Subnet area, click MSNLB.
The Mode field appears.
e) From the Mode drop-down list, choose NLB in static multicast mode.
The MAC Address field appears.
f) In the MAC Address field, enter the Microsoft NLB cluster MAC address.
For the Microsoft NLB cluster MAC address for the multicast mode, the cluster MAC address has to start with 03.
For example, 03:BF:01:02:03:04.
g) Copy the Microsoft NLB cluster MAC address that you entered in this field for the multicast mode.
Step 4 Click Submit.
Step 5 In the Navigation pane, choose Tenant tenant_name > Application Profiles > application_profile_name > Application
EPGs > application_EPG_name > Static Ports > static_port.
Choose the static port that you want to configure Microsoft NLB to flood onto in the bridge domain.
Step 6 On the Static Path page for this port, fill in the following field:
a) In the NLB Static Group area, click + (Create), then paste the MAC address that you copied from 3.g, on page 45
into the Mac Address field.
b) Click Update underneath the Mac Address field.
Step 7 In the Static Path page, click Submit.
Any traffic to this Microsoft NLB cluster MAC address will now go out on this static port.
Procedure
Step 1 In the Navigation pane, choose Tenant > tenant_name > Application Profiles > application_profile_name > Application
EPGs > application_EPG_name > Subnets.
Step 2 Right-click Subnets and select Create EPG Subnet.
Step 3 In the Create EPG Subnet dialog box, fill in the following fields:
a) In the Default Gateway IP field, enter the Microsoft NLB cluster VIP.
For example, 192.0.2.1/32.
b) In the Scope area, for shared services, check Shared between VRFs.
Uncheck Private to VRF, if it is selected.
c) Under Subnet Control, check the No Default SVI Gateway check box.
d) In the Type Behind Subnet area, click EpNlb.
The Mode field appears.
e) From the Mode drop-down list, choose NLB in IGMP mode.
The Group Id field appears.
f) In the Group Id field, enter the Microsoft NLB multicast group address.
For the Microsoft NLB multicast group address, the last two octets of the address correspond to the last two octets
of the instance cluster IP address. For example, if the instance cluster IP address is 10.20.30.40, then the Microsoft
NLB multicast group address that you would enter into this field might be 239.255.30.40.
IGMP snooping is enabled by default on the bridge domain because the IGMP snooping policy default that is associated
with the bridge domain has Enabled as the administrative state of the policy. For more information, see Configuring an
IGMP Snooping Policy Using the GUI, on page 51.
Note We recommend that you do not disable IGMP snooping on bridge domains. If you disable IGMP snooping,
you may see reduced multicast performance because of excessive false flooding within the bridge domain.
IGMP snooping software examines IP multicast traffic within a bridge domain to discover the ports where
interested receivers reside. Using the port information, IGMP snooping can reduce bandwidth consumption
in a multi-access bridge domain environment to avoid flooding the entire bridge domain. By default, IGMP
snooping is enabled on the bridge domain.
This figure shows the IGMP routing functions and IGMP snooping functions both contained on an ACI leaf
switch with connectivity to a host. The IGMP snooping feature snoops the IGMP membership reports and Leave
messages, and forwards them only when necessary to the IGMP router function.
IGMP snooping operates upon IGMPv1, IGMPv2, and IGMPv3 control plane packets where Layer 3 control
plane packets are intercepted and influence the Layer 2 forwarding behavior.
IGMP snooping has the following proprietary features:
• Source filtering that allows forwarding of multicast packets based on destination and source IP addresses
• Multicast forwarding based on IP addresses rather than the MAC address
• Multicast forwarding alternately based on the MAC address
The ACI fabric supports IGMP snooping only in proxy-reporting mode, in accordance with the guidelines
provided in Section 2.1.1, "IGMP Forwarding Rules," in RFC 4541.
As a result, the ACI fabric sends IGMP reports with a source IP address of 0.0.0.0.
Note For more information about IGMP snooping, see RFC 4541.
Virtualization Support
You can define multiple virtual routing and forwarding (VRF) instances for IGMP snooping.
On leaf switches, you can use the show commands with a VRF argument to provide a context for the information
displayed. The default VRF is used if no VRF argument is supplied.
The APIC IGMP Snooping Function, IGMPv1, IGMPv2, and the Fast Leave
Feature
Both IGMPv1 and IGMPv2 support membership report suppression, which means that if two hosts on the
same subnet want to receive multicast data for the same group, the host that receives a member report from
the other host suppresses sending its report. Membership report suppression occurs for hosts that share a port.
If no more than one host is attached to each switch port, you can configure the fast leave feature in IGMPv2.
The fast leave feature does not send last member query messages to hosts. As soon as APIC receives an IGMP
leave message, the software stops forwarding multicast data to that port.
IGMPv1 does not provide an explicit IGMP leave message, so the APIC IGMP snooping function must rely
on the membership message timeout to indicate that no hosts remain that want to receive multicast data for a
particular group.
Note The IGMP snooping function ignores the configuration of the last member query interval when you enable
the fast leave feature because it does not check for remaining hosts.
Note The IP address for the querier should not be a broadcast IP address, multicast IP address, or 0 (0.0.0.0).
When an IGMP snooping querier is enabled, it sends out periodic IGMP queries that trigger IGMP report
messages from hosts that want to receive IP multicast traffic. IGMP snooping listens to these IGMP reports
to establish appropriate forwarding.
The IGMP snooping querier performs querier election as described in RFC 2236. Querier election occurs in
the following configurations:
• When there are multiple switch queriers configured with the same subnet on the same VLAN on different
switches.
• When the configured switch querier is in the same subnet as other Layer 3 SVI queriers.
Procedure
Step 1 Click the Tenants tab and the name of the tenant on whose bridge domain you intend to configure IGMP snooping
support.
Step 2 In the Navigation pane, click Policies > Protocol > IGMP Snoop.
Step 3 Right-click IGMP Snoop and select Create IGMP Snoop Policy.
Step 4 In the Create IGMP Snoop Policy dialog, configure a policy as follows:
a) In the Name and Description fields, enter a policy name and optional description.
b) In the Admin State field, select Enabled or Disabled to enable or disable IGMP snooping for this particular policy.
c) Select or unselect Fast Leave to enable or disable IGMP V2 immediate dropping of queries through this policy.
d) Select Enable querier to enable or disable the IGMP querier activity through this policy.
Note
For this option to be effectively enabled, the Subnet Control: Querier IP setting must also be enabled in the subnets
assigned to the bridge domains to which this policy is applied. The navigation path to the properties page on which
this setting is located is Tenants > tenant_name > Networking > Bridge Domains > bridge_domain_name >
Subnets > subnet_name.
e) In the Querier Version field, select Version 2 or Version 3 to choose the IGMP snooping querier version for this
particular policy.
f) Specify in seconds the Last Member Query Interval value for this policy.
IGMP uses this value when it receives an IGMPv2 Leave report, which indicates that at least one host wants to leave
the group. After it receives the Leave report, it checks whether the interface is configured for IGMP Fast Leave and,
if not, it sends out an out-of-sequence query.
g) Specify in seconds the Query Interval value for this policy.
This value is used to define the amount of time the IGMP function will store a particular IGMP state if it does not
hear any reports on the group.
h) Specify in seconds the Query Response Interval value for this policy.
When a host receives the query packet, it starts a countdown timer set to a random value that is less than the maximum
response time. When this timer expires, the host replies with a report.
i) Specify the Start Query Count value for this policy.
Number of queries sent at startup that are separated by the startup query interval. Values range from 1 to 10. The
default is 2.
j) Specify in seconds a Start Query Interval for this policy.
By default, this interval is shorter than the query interval so that the software can establish the group state as quickly
as possible. Values range from 1 to 18,000 seconds. The default is 31 seconds.
The new IGMP Snoop policy is listed in the Protocol Policies - IGMP Snoop summary page.
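The same IGMP Snoop policy can also be created through the APIC REST API, as an alternative to the GUI procedure above.
The following Python sketch is illustrative only: the igmpSnoopPol class and its adminSt/ctrl attributes, as well as the
APIC address, credentials, and tenant name, are assumptions that you should verify against the APIC Management Information
Model for your release.

import requests

APIC = "https://apic.example.com"    # hypothetical APIC address
TENANT = "Tenant1"                   # hypothetical tenant name

session = requests.Session()
session.verify = False               # lab sketch only; use valid certificates in production

# Log in to the APIC REST API
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Create an IGMP Snoop policy under the tenant (class and attribute names are assumptions)
payload = {
    "igmpSnoopPol": {
        "attributes": {
            "name": "igmp-snoop-policy1",
            "adminSt": "enabled",    # Admin State field in the GUI
            "ctrl": "fast-leave"     # Fast Leave control
        }
    }
}
response = session.post(f"{APIC}/api/mo/uni/tn-{TENANT}.json", json=payload)
print(response.status_code, response.text)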
What to do next
To put this policy into effect, assign it to any bridge domain.
Note For the Enable Querier option on the assigned policy to be effectively enabled, the Subnet Control: Querier
IP setting must also be enabled in the subnets assigned to the bridge domains to which this policy is applied.
The navigation path to the properties page on which this setting is located is Tenants > tenant_name >
Networking > Bridge Domains > bridge_domain_name > Subnets > subnet_name.
Procedure
Step 1 Click the APIC Tenants tab and select the name of the tenant whose bridge domains you intend to configure with an
IGMP Snoop policy.
Step 2 In the APIC navigation pane, click Networking > Bridge Domains, then select the bridge domain to which you intend
to apply your policy-specified IGMP Snoop configuration.
Step 3 On the main Policy tab, scroll down to the IGMP Snoop Policy field and select the appropriate IGMP policy from the
drop-down menu.
Step 4 Click Submit.
The target bridge domain is now associated with the specified IGMP Snooping policy.
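If you prefer to script the association, the following Python sketch posts the bridge domain relation to the IGMP Snoop
policy through the REST API. The fvRsIgmpsn class and its tnIgmpSnoopPolName attribute, together with the APIC address,
credentials, tenant, and bridge domain names, are assumptions to verify against your APIC object model.

import requests

APIC, TENANT, BD = "https://apic.example.com", "Tenant1", "BD1"   # hypothetical values

session = requests.Session()
session.verify = False
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Point the bridge domain at the IGMP Snoop policy (class/attribute names are assumptions)
payload = {"fvRsIgmpsn": {"attributes": {"tnIgmpSnoopPolName": "igmp-snoop-policy1"}}}
response = session.post(f"{APIC}/api/mo/uni/tn-{TENANT}/BD-{BD}.json", json=payload)
print(response.status_code, response.text)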
Static group membership can be configured through the APIC GUI, CLI, and REST API interfaces.
Enabling IGMP Snooping and Multicast on Static Ports Using the GUI
You can enable IGMP snooping and multicast on ports that have been statically assigned to an EPG. Afterwards
you can create and assign access groups of users that are permitted or denied access to the IGMP snooping
and multicast traffic enabled on those ports.
Note For details on static port assignment, see Deploying an EPG on a Specific Node
or Port Using the GUI in the Cisco APIC Layer 2 Networking Configuration
Guide.
• Identify the IP addresses that you want to be recipients of IGMP snooping and multicast traffic.
Procedure
Step 1 Click Tenant > tenant_name > Application Profiles > application_name > Application EPGs > epg_name > Static
Ports.
Navigating to this spot displays all the ports you have statically assigned to the target EPG.
Step 2 Click the port to which you intend to statically assign group members for IGMP snooping.
This action displays the Static Path page.
Step 3 On the IGMP Snoop Static Group table, click + to add an IGMP Snoop Address Group entry.
Adding an IGMP Snoop Address Group entry associates the target static port with a specified multicast IP address and
enables it to process the IGMP snoop traffic received at that address.
a) In the Group Address field, enter the multicast IP address to associate with this interface and this EPG.
b) In the Source Address field, enter the IP address of the source of the multicast stream, if applicable.
c) Click Submit.
When configuration is complete, the target interface is enabled to process IGMP Snooping protocol traffic sent to its
associated multicast IP address.
Note
You can repeat this step to associate additional multicast addresses with the target static port.
Enabling Group Access to IGMP Snooping and Multicast Using the GUI
After you enable IGMP snooping and multicasting on ports that have been statically assigned to an EPG, you
can then create and assign access groups of users that are permitted or denied access to the IGMP snooping
and multicast traffic enabled on those ports.
Note For details on static port assignment, see Deploying an EPG on a Specific Node or Port Using the GUI in the
Cisco APIC Layer 2 Networking Configuration Guide.
Procedure
Step 1 Click Tenant > tenant_name > Application Profiles > application_name > Application EPGs > epg_name >
Static Ports.
Navigating to this spot displays all the ports you have statically assigned to the target EPG.
Step 2 Click the port to which you intend to assign multicast group access, to display the Static Port Configuration page.
Step 3 Click Actions > Create IGMP Access Group to display the IGMP Snoop Access Group table.
Step 4 Locate the IGMP Snoop Access Group table and click + to add an access group entry.
Adding an IGMP Snoop Access Group entry creates a user group with access to this port, associates it with a multicast
IP address, and permits or denies that group access to the IGMP snoop traffic received at that address.
a) Select Create Route Map Policy for Multicast to display the Create Route Map Policy for Multicast window.
b) In the Name field assign the name of the group that you want to allow or deny multicast traffic.
c) In the Route Maps table click + to display the route map dialog.
d) In the Order field, if multiple access groups are being configured for this interface, select a number that reflects the
order in which this access group will be permitted or denied access to the multicast traffic on this interface.
Lower-numbered access groups are ordered before higher-numbered access groups.
e) In the Group IP field enter the multicast IP address whose traffic is to be allowed or blocked for this access group.
f) In the Source IP field, enter the IP address of the source if applicable.
g) In the Action field, choose Deny to deny access for the target group or Permit to allow access for the target group.
h) Click OK.
i) Click Submit.
When the configuration is complete, the configured IGMP snoop access group is assigned a multicast IP address through
the target static port and permitted or denied access to the multicast streams that are received at that address.
Note
• You can repeat this step to configure and associate additional access groups with multicast IP addresses through the
target static port.
• To review the settings for the configured access groups, navigate to the following location: Tenant > tenant_name >
Policies > Protocol > Route Maps for Multicast > route_map_access_group_name.
When MLD snooping is disabled, all multicast traffic is flooded to all the ports, whether or not they have interested
receivers. When MLD snooping is enabled, the fabric forwards IPv6 multicast traffic based on MLD interest. Unknown
IPv6 multicast traffic is flooded based on the bridge domain's IPv6 L3 unknown multicast flood setting.
There are two modes for forwarding unknown IPv6 multicast packets:
• Flooding mode: All EPGs and all ports under the bridge domain will get the flooded packets.
• OMF (Optimized Multicast Flooding) mode: Only multicast router ports will get the packet.
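The flood setting described above can also be set on the bridge domain programmatically. In this Python sketch, the fvBD
class is standard, but the v6unkMcastAct attribute name and its "flood"/"opt-flood" values, along with the APIC address,
credentials, and object names, are assumptions to confirm in your APIC object model.

import requests

APIC, TENANT, BD = "https://apic.example.com", "Tenant1", "BD1"   # hypothetical values

session = requests.Session()
session.verify = False
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Set the IPv6 L3 unknown multicast mode on the bridge domain:
# "flood" = Flooding mode, "opt-flood" = Optimized Multicast Flooding (OMF) mode
payload = {"fvBD": {"attributes": {"name": BD, "v6unkMcastAct": "opt-flood"}}}
response = session.post(f"{APIC}/api/mo/uni/tn-{TENANT}/BD-{BD}.json", json=payload)
print(response.status_code, response.text)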
Procedure
Step 1 Click the Tenants tab and the name of the tenant on whose bridge domain you intend to configure MLD snooping support.
Step 2 In the Navigation pane, click Policies > Protocol > MLD Snoop.
Step 3 Right-click MLD Snoop and select Create MLD Snoop Policy.
Step 4 In the Create MLD Snoop Policy dialog, configure a policy as follows:
a) In the Name and Description fields, enter a policy name and optional description.
b) In the Admin State field, select Enabled or Disabled to enable or disable this entire policy.
The default entry for this field is Disabled.
c) In the Control field, select or unselect Fast Leave to enable or disable MLD v1 immediate dropping of queries
through this policy.
d) In the Control field, select or unselect Enable querier to enable or disable the MLD querier activity through the
MLD Snoop Policy.
Note
For this option to be effectively enabled, you must enable Querier in the MLD Snoop Policy of the bridge domains
to which this policy is applied. The navigation path to the properties page on which this setting is located is Tenants
> tenant_name > Networking > Bridge Domains > bridge_domain_name > MLD Snoop Policy.
The new MLD Snoop policy is listed in the Protocol Policies - MLD Snoop summary page.
What to do next
To put this policy into effect, assign it to any bridge domain.
Note For the Enable Querier option on the assigned policy to be effectively enabled, the Subnet Control: Querier
IP setting must also be enabled in the subnets assigned to the bridge domains to which this policy is applied.
The navigation path to the properties page on which this setting is located is Tenants > tenant_name >
Networking > Bridge Domains > bridge_domain_name > Subnets > bd_subnet.
Procedure
Step 1 Click the APIC Tenants tab and select the name of the tenant whose bridge domains you intend to configure with an
MLD Snoop policy.
Step 2 In the APIC navigation pane, click Networking > Bridge Domains, then select the bridge domain to which you intend
to apply your policy-specified MLD Snoop configuration.
Step 3 On the main Policy tab, scroll down to the MLD Snoop Policy field and select the appropriate MLD policy from the
drop-down menu.
Step 4 Click Submit.
The target bridge domain is now associated with the specified MLD Snooping policy.
Step 5 To configure the node forwarding parameter for Layer 3 unknown IPv6 Multicast destinations for the bridge domain:
a) Select the bridge domain that you just configured.
b) Click the Policy tab, then click the General sub-tab.
c) In the IPv6 L3 Unknown Multicast field, select either Flood or Optimized Flood.
Step 6 To change the Link-Local IPv6 address for the switch-querier feature:
a) Select the bridge domain that you just configured.
b) Click the Policy tab, then click the L3 Configurations sub-tab.
c) In the Link-local IPv6 Address field, enter a Link-Local IPv6 address, if necessary.
The default Link-Local IPv6 address for the bridge domain is internally generated. Configure a different Link-Local
IPv6 address for the bridge domain in this field, if necessary.
In the Cisco ACI fabric, most unicast and IPv4/IPv6 multicast routing operate together on the same border
leaf switches, with the IPv4/IPv6 multicast protocol operating over the unicast routing protocols.
In this architecture, only the border leaf switches run the full Protocol Independent Multicast (PIM) or PIM6
protocol. Non-border leaf switches run PIM/PIM6 in a passive mode on the interfaces. They do not peer with
any other PIM/PIM6 routers. The border leaf switches peer with other PIM/PIM6 routers connected to them
over L3Outs and also with each other.
The following figure shows border leaf switch 1 and border leaf switch 2 connecting to router 1 and router 2
in the IPv4/IPv6 multicast cloud. Each virtual routing and forwarding (VRF) instance in the fabric that requires
IPv4/IPv6 multicast routing will peer separately with external IPv4/IPv6 multicast routers.
Figure 7: Overview of Multicast Cloud
• Layer 3 multicast between local leaf switches in a single fabric is forwarded as VXLAN multicast packets
where the outer destination IP address is the VRF GIPo multicast address
• Layer 3 multicast packets sent to or sent by remote leaf switches are encapsulated as VXLAN unicast
head-end replicated packets
When Layer 3 multicast routing is enabled for a VRF, the VRF GIPo multicast address is programmed on all
leaf switches where the VRF is deployed. Layer 3 multicast packets will be forwarded across the pod or
between pods as multicast packets and will be received by all leaf switches where the VRF is deployed. For
remote leaf switches, the Layer 3 multicast packets will be forwarded using head-end replication to all remote
leaf switches where the VRF is deployed. This head-end replication occurs on the pod or remote leaf where
the multicast source is connected. For example, if the multicast source is connected to a local leaf switch, one
of the spine switches in that pod will be selected to replicate these multicast packets to every remote leaf
switch where the VRF is deployed, even if these remote leaf switches are associated with other pods. When
a Layer 3 multicast source is connected to a remote leaf switch, the remote leaf switch will also use head-end
replication to send a copy of the multicast packet to a spine in every pod as well as all other remote leaf
switches where the VRF is deployed.
Multicast forwarding using head-end replication replicates the multicast packet as a separate unicast packet
for every head-end replication tunnel. Layer 3 multicast in a remote leaf switch design should ensure that the
IP network (IPN) where the remote leaf switches are connected has sufficient bandwidth to support multicast
traffic requirements.
Remote leaf switches support L3Out connections with or without PIM enabled. All leaf switches in a VRF
that have PIM-enabled L3Outs are eligible to send PIM joins from the fabric towards external sources and
rendezvous points. When a multicast receiver connected to the fabric sends an IGMP join for a group, the
fabric will select one of the PIM-enabled border leaf switches to send the join (known as the stripe winner).
A remote leaf switch with a PIM-enabled L3Out can be selected as the stripe winner for a group even when
the receivers for that group are connected to local leaf switches in the main pod. Due to potential sub-optimal
forwarding of Layer 3 multicast traffic, deploying PIM-enabled L3Outs on remote leaf switches is not
recommended.
2
The GIPo (Group IP outer address) is the destination multicast IP address used in the outer IP header of the VXLAN packet for all multi-destination
(broadcast, unknown unicast, and multicast) packets forwarded within the fabric.
in the outgoing interface (OIF) list for the group. There is no equivalent for the interface in hardware. The
operational state of the fabric interface should follow the state published by the intermediate
system-to-intermediate system (IS-IS).
Note Each multicast-enabled VRF requires one or more border leaf switches configured with a loopback interface.
You must configure a unique IPv4 loopback address on all nodes in a PIM-enabled L3Out. The Router-ID
loopback or another unique loopback address can be used.
Any loopback configured for unicast routing can be reused. This loopback address must be routed from the
external network and will be injected into the fabric MP-BGP (Multiprotocol Border Gateway Protocol) routes
for the VRF. The fabric interface uses this loopback address as its source IP address. The following
figure shows the fabric for IPv4/IPv6 multicast routing.
Figure 8: Fabric for IPv4/IPv6 Multicast Routing
At the top level, IPv4/IPv6 multicast routing must be enabled on the VRF instance that has any multicast
routing-enabled bridge domains. On an IPv4/IPv6 multicast routing-enabled VRF instance, there can be a
combination of IPv4/IPv6 multicast routing-enabled bridge domains and bridge domains where IPv4/IPv6
multicast routing is disabled. A bridge domain with IPv4/IPv6 multicast routing disabled will not show on
the VRF IPv4/IPv6 multicast panel. An L3Out with IPv4/IPv6 multicast routing-enabled will show up on the
panel, but any bridge domain that has IPv4/IPv6 multicast routing enabled will always be a part of a VRF
instance that has IPv4/IPv6 multicast routing enabled.
IPv4/IPv6 multicast routing is not supported on first-generation leaf switches such as the Cisco Nexus 93128TX, 9396PX,
and 9396TX. Deploy IPv4/IPv6 multicast routing and any IPv4/IPv6 multicast-enabled VRF instance only on the
switches with -EX and -FX in their product IDs.
Note L3Out ports and sub-interfaces are supported. Support for external SVIs varies, depending on the release:
• For releases prior to release 5.2(3), external SVIs are not supported.
• Beginning with release 5.2(3), support is available for Layer 3 multicast on an SVI L3Out. PIM is
supported on SVI L3Outs for physical ports and port channels but not for vPCs. PIM6 is not supported
on L3Out SVIs.
Note For the same VRF, VRF GIPo is common for both IPv4 and IPv6.
All multicast traffic for PIM/PIM6 enabled BDs will be forwarded using the VRF GIPo. This includes both
Layer 2 and Layer 3 IPv4/IPv6 multicast. Any broadcast or unicast flood traffic on the multicast enabled BDs
will continue to use the BD GIPo. Non-IPv4/IPv6 multicast enabled BDs will use the BD GIPo for all multicast,
broadcast, and unicast flood traffic.
The APIC GUI will display a GIPo multicast address for all BDs and VRFs. The address displayed is always
a /28 network address (the last four bits are zero). When the VXLAN packet is sent in the fabric, the destination
multicast GIPo address will be an address within this /28 block and is used to select one of 16 FTAG trees.
This achieves load balancing of multicast traffic across the fabric.
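As a worked example of the statement above, the /28 GIPo base address leaves the last four bits free, so the fabric can
address one of 16 FTAG trees by adding an offset of 0 through 15 to the base. The base address in this Python sketch is
hypothetical.

import ipaddress

# Hypothetical VRF GIPo base address as displayed by the APIC GUI (/28, last four bits zero)
vrf_gipo_base = ipaddress.IPv4Address("225.1.192.0")

# The outer destination of the VXLAN packet is the base address plus the FTAG tree number (0-15)
for ftag in range(16):
    print(f"FTAG {ftag:2d} -> outer multicast destination {vrf_gipo_base + ftag}")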
multicast network and forwarding it to the fabric. This prevents multiple copies of the traffic and it balances
the load across the multiple BL switches.
This is done by striping ownership for groups across the available BL switches, as a function of the group
address and the VRF virtual network ID (VNID). A BL that is responsible for a group sends PIM/PIM6 joins
to the external network to attract traffic into the fabric on behalf of receivers in the fabric.
Each BL in the fabric has a view of all the other active BL switches in the fabric in that VRF. So each of the
BL switches can independently stripe the groups consistently. Each BL monitors PIM/PIM6 neighbor relations
on the fabric interface to derive the list of active BL switches. When a BL switch is removed or discovered,
the groups are re-striped across the remaining active BL switches. The striping is similar to the method used
for hashing the GIPos to external links in multi-pod deployments, so that the group-to-BL mapping is sticky
and results in fewer changes when a BL switch goes up or down.
Figure 9: Model for Multiple Border Leafs as Designated Forwarder
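The striping behavior can be illustrated with a small sketch. This is not the actual hash that the fabric uses; it only
shows how each border leaf can independently derive the same group-to-BL owner from the group address, the VRF VNID, and
a shared view of the active BL switches.

import ipaddress
import zlib

def stripe_owner(group: str, vrf_vnid: int, active_bls: list[str]) -> str:
    """Deterministically pick an owner BL for (group, VRF VNID); illustrative only."""
    key = f"{ipaddress.ip_address(group)}/{vrf_vnid}".encode()
    candidates = sorted(active_bls)                 # every BL must order the list the same way
    return candidates[zlib.crc32(key) % len(candidates)]

active_bls = ["BL1", "BL2", "BL3"]                  # hypothetical border leaf names
print(stripe_owner("239.1.1.1", 2097154, active_bls))   # same result on every BL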
First-Hop Functionality
The directly connected leaf switch will handle the first-hop functionality needed for PIM/PIM6 sparse mode.
The Last-Hop
The last-hop router is connected to the receiver and is responsible for doing a Shortest-Path Tree (SPT)
switchover in case of PIM/PIM6 any-source multicast (ASM). The border leaf switches will handle this
functionality. The non-border leaf switches do not participate in this function.
3
All IPv4/IPv6 multicast group membership information is stored in the COOP database on the spines. When a border leaf boots up it pulls this information
from the spine
Fast-Convergence Mode
The fabric supports a configurable fast-convergence mode where every border leaf switch with external
connectivity towards the root (RP for (*,G) and source for (S, G)) pulls traffic from the external network. To
prevent duplicates, only one of the BL switches forwards the traffic to the fabric. The BL that forwards the
traffic for the group into the fabric is called the designated forwarder (DF) for the group. The stripe winner
for the group decides on the DF. If the stripe winner has reachability to the root, then the stripe winner is also
the DF. If the stripe winner does not have external connectivity to the root, then that BL chooses a DF by
sending a PIM/PIM6 join over the fabric interface. All non-stripe winner BL switches with external reachability
to the root send out PIM/PIM6 joins to attract traffic but continue to have the fabric interface as the RPF
interface for the route. This results in the traffic reaching the BL switch on the external link, but getting
dropped.
The advantage of the fast-convergence mode is that when there is a stripe owner change, due to the loss of a BL
switch for example, the only action needed is for the new stripe winner to program the correct Reverse Path
Forwarding (RPF) interface. There is no latency incurred by joining the PIM/PIM6 tree from the new stripe winner.
This comes at the cost of additional bandwidth usage on the non-stripe winners' external links.
Note Fast-convergence mode can be disabled in deployments where the cost of additional bandwidth outweighs
the convergence time saving.
In a typical data center with multicast networks, the multicast sources and receivers are in the same VRF, and
all multicast traffic is forwarded within that VRF. However, there are use cases where the multicast sources and
receivers may be located in different VRFs:
• Surveillance cameras are in one VRF while the people viewing the camera feeds are on computers in a
different VRF.
• A multicast content provider is in one VRF while different departments of an organization are receiving
the multicast content in different VRFs.
ACI release 4.0 adds support for inter-VRF multicast, which enables sources and receivers to be in different
VRFs. This allows the receiver VRF to perform the reverse path forwarding (RPF) lookup for the multicast
route in the source VRF. When a valid RPF interface is formed in the source VRF, this enables an outgoing
interface (OIF) in the receiver VRF. All inter-VRF multicast traffic will be forwarded within the fabric in the
source VRF. The inter-VRF forwarding and translation is performed on the leaf switch where the receivers
are connected.
Note • For any-source multicast, the RP used must be in the same VRF as the source.
• Inter-VRF multicast supports both shared services and shared L3Out configurations. Sources and receivers
can be connected to EPGs or L3Outs in different VRFs.
For ACI, inter-VRF multicast is configured per receiver VRF. Every NBL/BL that has the receiver VRF will
get the same inter-VRF configuration. Each NBL that may have directly connected receivers, and BLs that
may have external receivers, need to have the source VRF deployed. Control plane signaling and data plane
forwarding will do the necessary translation and forwarding between the VRFs inside the NBL/BL that has
receivers. Any packets forwarded in the fabric will be in the source VRF.
Beginning with ACI release 6.0(2), the fabric supports a configurable stripe winner policy where you can
select a pod for a specific multicast group or group range, and optionally a source or source range. This ensures that
the border leaf elected as the stripe winner is from the selected pod, solving the scenarios described above.
This feature also supports the option to exclude any remote leaf switches. When this option is enabled, remote
leaf switches with PIM enabled L3Outs will be excluded from the stripe winner election.
Config Based Stripe Winner Election Guidelines and Requirements:
• Only the BLs in the selected pod are considered for stripe winner election; without this configuration, all BLs
from all pods would be considered.
• Among the BLs in the selected pod, only one BL is elected as the config-based stripe winner.
• If you select the exclude RL option, remote leaf switches are excluded from the config stripe winner election.
• All BLs in the selected pod are considered candidates, and the regular stripe winner election logic is then used
to elect one BL (in the pod) as the stripe winner.
• If there are no BLs in the configured pod, or if none of the BLs are candidates for config stripe winner
election, the election falls back to the default stripe winner election logic, which considers all BLs in all
pods as candidates.
• When you perform a VRF delete and re-add operation, do not add the config stripe winner configuration back
together with the VRF configuration.
• You must add the VRF configuration first, and then add the config stripe winner configuration after four
minutes.
• The config stripe winner may result in a scenario where the configured (S,G) stripe winner is a different
border leaf than the (*,G) stripe winner. In this case, the BL that is the (*,G) stripe winner will also install
an (S,G) mroute. Both the configured (S,G) stripe winner and the (*,G) stripe winner will receive multicast
traffic from the external source but only the configured (S,G) stripe winner will forward multicast into
the fabric.
• Overlapping address ranges are not supported. For example, if 224.1.0.0/16 is already configured, then
you cannot configure 224.1.0.0/24. However, you can have any number of configurations with different
source ranges for 224.1.0.0/16.
• Config stripe winner policy is not supported for IPv6 multicast.
• The maximum number of ranges that can be configured is 500 per VRF.
• Config stripe winner policy is not supported with Inter-VRF Multicast in ACI release 6.0(2).
IGMP Features
• Allow V3 ASM (ip igmp allow-v3-asm): Allow accepting IGMP version 3 source-specific reports for multicast groups
outside of the SSM range. When this feature is enabled, the switch will create an (S,G) mroute entry if it receives an
IGMP version 3 report that includes both the group and source, even if the group is outside of the configured SSM
range. This feature is not required if hosts send (*,G) reports outside of the SSM range, or send (S,G) reports for
the SSM range.
• Fast Leave (ip igmp immediate-leave): Option that minimizes the leave latency of IGMPv2 group memberships on a
given IGMP interface because the device does not send group-specific queries. When immediate leave is enabled, the
device removes the group entry from the multicast routing table immediately upon receiving a leave message for the
group. The default is disabled. Note: Use this command only when there is one receiver behind the BD/interface for a
given group.
• Report Link Local Groups (ip igmp report-link-local-groups): Enables sending reports for groups in 224.0.0.0/24.
Reports are always sent for non-link-local groups. By default, reports are not sent for link-local groups.
• Group Timeout (sec) (ip igmp group-timeout): Sets the group membership timeout for IGMPv2. Values can range from
3 to 65,535 seconds. The default is 260 seconds.
• Query Interval (sec) (ip igmp query-interval): Sets the frequency at which the software sends IGMP host query
messages. Values can range from 1 to 18,000 seconds. The default is 125 seconds.
• Query Response Interval (sec) (ip igmp query-max-response-time): Sets the response time advertised in IGMP queries.
Values can range from 1 to 25 seconds. The default is 10 seconds.
• Last Member Count (ip igmp last-member-query-count): Sets the number of times that the software sends an IGMP
query in response to a host leave message. Values can range from 1 to 5. The default is 2.
• Last Member Response Time (sec) (ip igmp last-member-query-response-time): Sets the query interval waited after
sending membership reports before the software deletes the group state. Values can range from 1 to 25 seconds. The
default is 1 second.
• Startup Query Count (ip igmp startup-query-count): Sets the query count used when the software starts up. Values
can range from 1 to 10. The default is 2.
• Querier Timeout (ip igmp querier-timeout): Sets the query timeout that the software uses when deciding to take over
as the querier. Values can range from 1 to 65,535 seconds. The default is 255 seconds.
• Robustness Variable (ip igmp robustness-variable): Sets the robustness variable. You can use a larger value for a
lossy network. Values can range from 1 to 7. The default is 2.
• Version (ip igmp version <2-3>): IGMP version that is enabled on the bridge domain or interface. The IGMP version
can be 2 or 3. The default is 2.
• Report Policy Route Map* (ip igmp report-policy <route-map>): Access policy for IGMP reports that is based on a
route-map policy. IGMP group reports will only be selected for groups allowed by the route-map.
• Static Report Route Map* (ip igmp static-oif): Statically binds a multicast group to the outgoing interface, which
is handled by the switch hardware. If you specify only the group address, the (*, G) state is created. If you specify
the source address, the (S, G) state is created. You can specify a route-map policy name that lists the group prefixes,
group ranges, and source prefixes. Note: A source tree is built for the (S, G) state only if you enable IGMPv3.
• Maximum Multicast Entries (ip igmp state-limit): Limits the mroute states for the BD or interface that are created
by IGMP reports. Default is disabled (no limit enforced). Valid range is 1 to 4294967295.
• Reserved Multicast Entries (ip igmp state-limit <limit> reserved <route-map>): Specifies to use the route-map policy
name for the reserve policy and sets the maximum number of (*, G) and (S, G) entries allowed on the interface.
• State Limit Route Map* (ip igmp state-limit <limit> reserved <route-map>): Used with the Reserved Multicast Entries
feature.
• IGMP snooping admin state ([no] ip igmp snooping): Enables/disables the IGMP snooping feature. Cannot be disabled
for PIM-enabled bridge domains.
• Fast Leave (ip igmp snooping fast-leave): Option that minimizes the leave latency of IGMPv2 group memberships on a
given IGMP interface because the device does not send group-specific queries. When immediate leave is enabled, the
device removes the group entry from the multicast routing table immediately upon receiving a leave message for the
group. The default is disabled. Note: Use this command only when there is one receiver behind the BD/interface for a
given group.
• Enable Querier (ip igmp snooping querier <ip address>): Enables the IP IGMP snooping querier feature on the bridge
domain. Used along with the BD subnet Querier IP setting to configure an IGMP snooping querier for bridge domains.
Note: Should not be used with PIM-enabled bridge domains. The IGMP querier function is automatically enabled when PIM
is enabled on the bridge domain.
• Query Interval (ip igmp snooping query-interval): Sets the frequency at which the software sends IGMP host query
messages. Values can range from 1 to 18,000 seconds. The default is 125 seconds.
• Query Response Interval (ip igmp snooping query-max-response-time): Sets the response time advertised in IGMP
queries. Values can range from 1 to 25 seconds. The default is 10 seconds.
• Last Member Query Interval (ip igmp snooping last-member-query-interval): Sets the query interval waited after
sending membership reports before the software deletes the group state. Values can range from 1 to 25 seconds. The
default is 1 second.
• Start Query Count (ip igmp snooping startup-query-count): Configures the number of snooping queries sent at startup
when you do not enable PIM because multicast traffic does not need to be routed. Values can range from 1 to 10. The
default is 2.
• Start Query Interval (sec) (ip igmp snooping startup-query-interval): Configures a snooping query interval at startup
when you do not enable PIM because multicast traffic does not need to be routed. Values can range from 1 to 18,000
seconds. The default is 31 seconds.
• MLD snooping admin state (ipv6 mld snooping): Enables/disables the IPv6 MLD snooping feature. Default is disabled.
• Fast Leave (ipv6 mld snooping fast-leave): Allows you to turn on or off the fast-leave feature on a per bridge domain
basis. This applies to MLDv2 hosts and is used on ports that are known to have only one host doing MLD behind that
port. This command is disabled by default.
• Enable Querier (ipv6 mld snooping querier): Enables or disables IPv6 MLD snooping querier processing. MLD snooping
querier supports MLD snooping in a bridge domain where PIM and MLD are not configured because the multicast traffic
does not need to be routed.
• Query Interval (ipv6 mld snooping query-interval): Sets the frequency at which the software sends MLD host query
messages. Values can range from 1 to 18,000 seconds. The default is 125 seconds.
• Query Response Interval (ipv6 mld snooping query-interval): Sets the response time advertised in MLD queries. Values
can range from 1 to 25 seconds. The default is 10 seconds.
• Last Member Query Interval (ipv6 mld snooping last-member-query-interval): Sets the query response time after
sending membership reports before the software deletes the group state. Values can range from 1 to 25 seconds. The
default is 1 second.
• Authentication (ip pim hello-authentication ah-md5): Enables MD5 hash authentication for PIM IPv4 neighbors.
• Multicast Domain Boundary (ip pim border): Enables the interface to be on the border of a PIM domain so that no
bootstrap, candidate-RP, or Auto-RP messages are sent or received on the interface. The default is disabled.
• Passive (ip pim passive): If the passive setting is configured on an interface, it will enable the interface for IP
multicast. PIM will operate on the interface in passive mode, which means that the leaf will not send PIM messages on
the interface, nor will it accept PIM messages from other devices across this interface. The leaf will instead consider
that it is the only PIM device on the network and thus act as the DR. IGMP operations are unaffected by this command.
• Strict RFC Compliant (ip pim strict-rfc-compliant): When configured, the switch will not process joins from unknown
neighbors and will not send PIM joins to unknown neighbors.
• Designated Router Delay (sec) (ip pim dr-delay): Delays participation in the designated router (DR) election by
setting the DR priority that is advertised in PIM hello messages to 0 for a specified period. During this delay, no DR
changes occur, and the current switch is given time to learn all of the multicast states on that interface. After the
delay period expires, the correct DR priority is sent in the hello packets, which retriggers the DR election. Values
are from 1 to 65,535. The default value is 3. Note: This command delays participation in the DR election only upon
bootup or following an IP address or interface state change. It is intended for use with multicast-access non-vPC
Layer 3 interfaces only.
• Designated Router Priority (ip pim dr-priority): Sets the designated router (DR) priority that is advertised in PIM
hello messages. Values range from 1 to 4294967295. The default is 1.
• Hello Interval (milliseconds) (ip pim hello-interval): Configures the interval at which hello messages are sent in
milliseconds. The range is from 1000 to 18724286. The default is 30000.
• Join-Prune Interval Policy (seconds) (ip pim jp-interval): Interval for sending PIM join and prune messages in
seconds. Valid range is from 60 to 65520. The value must be divisible by 60. The default value is 60.
• Interface-level Inbound Join-Prune Filter Policy* (ip pim jp-policy): Enables inbound join-prune messages to be
filtered based on a route-map policy where you can specify group, group and source, or group and RP addresses. The
default is no filtering of join-prune messages.
• Interface-level Outbound Join-Prune Filter Policy* (ip pim jp-policy): Enables outbound join-prune messages to be
filtered based on a route-map policy where you can specify group, group and source, or group and RP addresses. The
default is no filtering of join-prune messages.
• Interface-level Neighbor Filter Policy* (ip pim neighbor-policy): Controls which PIM neighbors to become adjacent
to, based on a route-map policy where you specify the source address or address range of the permitted PIM neighbors.
• Static RP (ip pim rp-address): Configures a PIM static RP address for a multicast group range. You can specify an
optional route-map policy that lists multicast group ranges for the static RP. If no route-map is configured, the
static RP will apply to all multicast group ranges excluding any configured SSM group ranges. The mode is ASM.
• Fabric RP (n/a): Configures an anycast RP on all multicast-enabled border leaf switches in the fabric. Anycast RP is
implemented using PIM anycast RP. You can specify an optional route-map policy that lists multicast group ranges for
the static RP.
• Auto-RP Forward Auto-RP Updates (ip pim auto-rp forward): Enables the forwarding of Auto-RP messages. The default is
disabled.
• Auto-RP Listen to Auto-RP Updates (ip pim auto-rp listen): Enables listening for Auto-RP messages. The default is
disabled.
• Auto-RP MA Filter* (ip pim auto-rp mapping-agent-policy): Enables Auto-RP discover messages to be filtered by the
border leaf based on a route-map policy where you can specify mapping agent source addresses. This feature is used
when the border leaf is configured to listen for Auto-RP messages. The default is no filtering of Auto-RP messages.
• BSR Forward BSR Updates (ip pim bsr forward): Enables forwarding of BSR messages. The default is disabled, which
means that the leaf does not forward BSR messages.
• BSR Listen to BSR Updates (ip pim bsr listen): Enables listening for BSR messages. The default is disabled, which
means that the leaf does not listen for BSR messages.
• BSR Filter (ip pim bsr bsr-policy): Enables BSR messages to be filtered by the border leaf based on a route-map
policy where you can specify the BSR source. This command can be used when the border leaf is configured to listen to
BSR messages. The default is no filtering of BSR messages.
• ASM Source, Group Expiry Timer Policy* (ip pim sg-expiry-timer <timer> sg-list): Applies a route map to the ASM
Source, Group Expiry Timer to specify a group or range of groups for the adjusted expiry timer.
• ASM Source, Group Expiry Timer Expiry (sec) (ip pim sg-expiry-timer): Adjusts the (S,G) expiry timer interval for
Protocol Independent Multicast sparse mode (PIM-SM) (S,G) multicast routes. This command creates persistency of the
SPT (source-based tree) beyond the default 180 seconds for intermittent sources. Range is from 180 to 604801 seconds.
• Register Traffic Policy: Max Rate (ip pim register-rate-limit): Configures the rate limit in packets per second. The
range is from 1 to 65,535. The default is no limit.
• Register Traffic Policy: Source IP (ip pim register-source): Used to configure a source IP address for register
messages. This feature can be used when the source address of register messages is not routed in the network where the
RP can send messages. This may happen if the bridge domain where the source is connected is not configured to advertise
its subnet outside of the fabric.
• SSM Group Range Policy* (ip pim ssm route-map): Can be used to specify SSM group ranges other than the default range
232.0.0.0/8. This command is not required if you want to only use the default group range. You can configure a maximum
of four ranges for SSM multicast including the default range.
• SSM Group Range (ip pim ssm-range none): Can be used to deny the default SSM group range 232.0.0.0/8 so that it is
instead handled as an ASM group range.
• Fast Convergence (n/a): When fast-convergence mode is enabled, every border leaf in the fabric will send PIM joins
towards the root (RP for (*,G) and source for (S,G)) in the external network. This allows all PIM-enabled BLs in the
fabric to receive the multicast traffic from external sources, but only one BL will forward traffic onto the fabric.
The BL that forwards the multicast traffic onto the fabric is the designated forwarder. The stripe winner BL decides
on the DF. The advantage of the fast-convergence mode is that when there is a change of the stripe winner due to a BL
failure, there is no latency incurred in the external network by having the new BL send joins to create multicast
state. Note: Fast-convergence mode can be disabled in deployments where the cost of additional bandwidth outweighs the
convergence time saving.
• Strict RFC Compliant (ip pim strict-rfc-compliant): When configured, the switch will not process joins from unknown
neighbors and will not send PIM joins to unknown neighbors.
• MTU Port (ip pim mtu): Enables bigger frame sizes for the PIM control plane traffic and improves convergence. Range
is from 1500 to 9216 bytes.
• Resource Policy Maximum Limit (ip pim state-limit): Sets the maximum (*,G)/(S,G) entries allowed per VRF. Range is
from 1 to 4294967295.
• Resource Policy Reserved Route Map* (ip pim state-limit <limit> reserved <route-map>): Configures a route-map policy
matching multicast groups, or groups and sources, to be applied to the Resource Policy Maximum Limit reserved entries.
• Resource Policy Reserved Multicast Entries (ip pim state-limit <limit> reserved <route-map>): Maximum reserved (*, G)
and (S, G) entries allowed in this VRF. Must be less than or equal to the maximum states allowed. Used with the
Resource Policy Reserved Route Map policy.
• For Layer 3 IPv6 multicast support, when the ingress leaf switch receives a packet from a source
that is attached on a bridge domain, and the bridge domain is enabled for IPv6 multicast routing,
the ingress leaf switch sends only a routed VRF instance copy to the fabric (routed implies that the
TTL is decremented by 1, and the source MAC is rewritten with a pervasive subnet MAC). The egress
leaf switch also routes the packet to the receivers and decrements the TTL in the
packet by 1. This results in the TTL being decremented two times. Also, for ASM the multicast group
must have a valid RP configured.
Note Cisco ACI does not support IP fragmentation. Therefore, when you configure Layer 3 Outside (L3Out)
connections to external routers, or Multi-Pod connections through an Inter-Pod Network (IPN), it is
recommended that the interface MTU is set appropriately on both ends of a link. On some platforms, such as
Cisco ACI, Cisco NX-OS, and Cisco IOS, the configurable MTU value does not take into account the Ethernet
header (it matches the IP MTU and excludes the 14-18 byte Ethernet header), while other platforms, such as
IOS-XR, include the Ethernet header in the configured MTU value. A configured value of 9000 results in a
maximum IP packet size of 9000 bytes in Cisco ACI, Cisco NX-OS, and Cisco IOS, but results in a maximum IP
packet size of 8986 bytes for an IOS-XR untagged interface.
For the appropriate MTU values for each platform, see the relevant configuration guides.
We highly recommend that you test the MTU using CLI-based commands. For example, on the Cisco NX-OS
CLI, use a command such as ping 1.1.1.1 df-bit packet-size 9000 source-interface ethernet 1/1.
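The arithmetic behind the note above can be checked with a short sketch: a platform that excludes the Ethernet header
from the configured MTU allows a 9000-byte IP packet, while a platform that counts the 14-byte untagged Ethernet header
inside the same configured value allows only 8986 bytes.

UNTAGGED_ETHERNET_HEADER = 14   # bytes; VLAN-tagged frames add 4 more

def max_ip_packet(configured_mtu: int, mtu_includes_ethernet_header: bool) -> int:
    """Return the largest IP packet that fits for a given MTU convention."""
    if mtu_includes_ethernet_header:
        return configured_mtu - UNTAGGED_ETHERNET_HEADER
    return configured_mtu

print(max_ip_packet(9000, mtu_includes_ethernet_header=False))   # Cisco ACI / NX-OS / IOS -> 9000
print(max_ip_packet(9000, mtu_includes_ethernet_header=True))    # IOS-XR untagged interface -> 8986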
Note Click the help icon (?) located in the top-right corner of the Work pane and of each dialog box for information
about a visible tab or a field.
Procedure
Step 1 Navigate to Tenants > Tenant_name > Networking > VRFs > VRF_name > Multicast.
In the Work pane, a message is displayed as follows: PIM is not enabled on this VRF. Would you like to enable
PIM?.
Step 2 Click YES, ENABLE MULTICAST.
Step 3 Configure interfaces:
a) From the Work pane, click the Interfaces tab.
b) Expand the Bridge Domains table to display the Create Bridge Domain dialog and enter the appropriate value
in each field.
c) Click Select.
d) Expand the Interfaces table to display the Select an L3 Out dialog.
e) Click the L3 Out drop-down arrow to choose an L3 Out.
f) Click Select.
Step 4 Configure a rendezvous point (RP):
a) In the Work pane, click the Rendezvous Points tab and choose from the following rendezvous point (RP) options:
• Static RP
a. Expand the Static RP table.
b. Enter the appropriate value in each field.
c. Click Update.
• Fabric RP
a. Expand the Fabric RP table.
b. Enter the appropriate value in each field.
c. Click Update.
• Auto-RP
a. Enter the appropriate value in each field.
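Enabling multicast on the VRF, as done through the GUI above, can also be sketched through the REST API. The pimCtxP
class name used below for the VRF-level PIM settings is an assumption, as are the APIC address, credentials, tenant, and
VRF names; confirm them against the APIC Management Information Model before use.

import requests

APIC, TENANT, VRF = "https://apic.example.com", "Tenant1", "VRF1"   # hypothetical values

session = requests.Session()
session.verify = False
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Enabling PIM on the VRF corresponds to creating the PIM context under the VRF (assumed class name)
payload = {"pimCtxP": {"attributes": {}}}
response = session.post(f"{APIC}/api/mo/uni/tn-{TENANT}/ctx-{VRF}.json", json=payload)
print(response.status_code, response.text)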
Procedure
Step 1 On the menu bar, navigate to Tenants > Tenant_name > Networking > VRFs > VRF_name > Multicast IPv6.
In the Work pane, a message is displayed as follows: PIM6 is not enabled on this VRF. Would you like to enable
PIM6?.
Step 2 Click YES, ENABLE MULTICAST IPv6.
Step 3 Configure interfaces:
a) From the Work pane, click the Interfaces tab.
b) Expand the Bridge Domains table to display the Create Bridge Domain dialog, and choose the appropriate BD
from drop-down list.
c) Click Select.
d) Expand the Interfaces table to display the Select an L3 Out dialog box.
e) Click the L3 Out drop-down arrow to choose an L3 Out.
f) Click Select.
Step 4 Configure a rendezvous point (RP).
a) In the Work pane, click the Rendezvous Points tab, choose Static RP.
b) Enter the appropriate value in each field.
c) Click Update.
Step 5 Configure the pattern policy.
a) From the Work pane, click the Pattern Policy tab and choose Any Source Multicast (ASM).
b) Enter the appropriate values in each field.
Step 6 Configure the PIM settings.
a) Click the PIM Setting tab.
b) Enter the appropriate value in each field.
Step 7 When finished, click Submit.
Step 8 On the menu bar, navigate to Tenants > Tenant_name > Networking > VRFs > VRF_name > Multicast IPv6, and
perform the following actions:
a) In the Work pane, Interfaces tab, choose the appropriate L3 Out and from the PIM Policy drop-down list, choose
the appropriate PIM policy to attach.
b) Click Submit.
Step 9 To verify the configuration perform the following actions:
a) In the Work pane, click Interfaces to display the associated Bridge Domains.
b) In the Navigation pane, navigate to the associated BD with IPv6 multicast.
In the Work pane, the configured PIM functionality is displayed as configured earlier.
c) In the Navigation pane, navigate to the associated L3 Out interface.
In the Work pane, the PIM6 check box is checked.
d) In the Work pane, navigate to Fabric > Inventory > Pod > Node > Protocols > PIM6 and expand PIM6.
Under the appropriate PIM6 protocol that was created earlier, you can view information about the associated Neighbors,
PIM Interfaces, Routes, Group Ranges, and RPs. You can verify that all these objects are set up.
Note The IPv4 version of the BGP IPv4/IPv6 multicast address-family feature was available as part of Cisco APIC
Release 4.1.
Beginning with Cisco APIC release 4.2(1), the BGP multicast address-family feature adds support for IPv6
for BGP peers towards external routers in the tenant VRF on the border leaf switch. You can specify if the
peer will also be used separately to carry multicast routes in the IPv4/IPv6 multicast address-family.
Guidelines and Restrictions for the BGP Multicast Address-Family Feature for Both IPv4 and IPv6
• There is no support for BGPv4/v6 multicast address-family within the Cisco ACI fabric.
• RP reachability should be present in the unicast address-family, if that is being used. For PIM
Source-Specific Multicast (SSM), there is no need for RP.
Procedure
Step 1 Locate the VRF that you will be using with the L3Out, or create the VRF, if necessary.
Tenants > tenant > Networking > VRFs
Step 2 Enable PIMv6 under the VRF, if it is not already enabled:
• To enable PIMv6 under the VRF, on the menu bar, navigate to Tenants > Tenant_name > Networking > VRFs >
VRF_name > Multicast IPv6.
• If you see the message PIMv6 is not enabled on this VRF. Would you like to enable PIMv6?, then click
Yes, enable multicast IPv6.
• If you see the main Multicast IPv6 window, check the Enable box, if it is not already checked.
Step 3 Create the L3Out and configure the BGP for the L3Out:
a) On the Navigation pane, expand Tenant and Networking.
b) Right-click L3Outs and choose Create L3Out.
c) Enter the necessary information to configure BGP for the L3Out.
In the Identity page:
• Select the VRF that you configured in the previous step.
• Select BGP in the Identity page in the L3Out creation wizard to configure the BGP protocol for this L3Out.
d) Continue through the remaining pages (Nodes and Interfaces, Protocols, and External EPG) to complete the
configuration for the L3Out.
Step 4 After you have completed the L3Out configuration, configure the BGP IPv4/IPv6 multicast address-family feature:
a) Navigate to the BGP Peer Connectivity Profile screen:
Tenants > tenant > Networking > L3Outs > L3out-name > Logical Node Profiles > logical-node-profile-name >
Logical Interface Profiles > logical-interface-profile-name > BGP Peer Connectivity Profile IP-address
b) Scroll down to the Address Type Controls field and make the following selections:
• Select AF Mcast.
• Leave AF Ucast selected, if it is already selected.
c) Click Submit.
d) Navigate to the bridge domain with the subnet that needs to be redistributed to the peer’s IPv4 or IPv6 multicast
address-family:
Tenants > tenant > Networking > Bridge Domains > bridge_domain-name
e) In the main pane, click the Policy/General tabs.
f) Enable PIMv4 or PIMv6 on the bridge domain.
• To enable PIMv4 on the bridge domain, scroll down to the PIM field and check the box next to that field to
enable it.
• To enable PIMv6 on the bridge domain, scroll down to the PIMv6 field and check the box next to that field to
enable it.
g) Click Submit.
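The Address Type Controls selection from this procedure can also be sketched through the REST API. The bgpPeerP class,
the addrTCtrl attribute and its "af-ucast,af-mcast" value, and the peer DN, APIC address, and credentials used below are
assumptions; verify the exact names for your release before relying on them.

import requests

APIC = "https://apic.example.com"   # hypothetical APIC address
# Hypothetical DN of the BGP Peer Connectivity Profile under the logical interface profile
PEER_DN = "uni/tn-Tenant1/out-L3Out1/lnodep-NodeProf1/lifp-IntProf1/peerP-[192.0.2.1]"

session = requests.Session()
session.verify = False
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Enable both the unicast and multicast address families on the peer (AF Ucast + AF Mcast)
payload = {"bgpPeerP": {"attributes": {"addrTCtrl": "af-ucast,af-mcast"}}}
response = session.post(f"{APIC}/api/mo/{PEER_DN}.json", json=payload)
print(response.status_code, response.text)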
You can configure multiple entries in a single route map, where some entries can be configured with a Permit
action and other entries can be configured with a Deny action, all within the same route map.
Note When a source filter is applied to a bridge domain, it will filter multicast traffic at the source. The filter will
prevent multicast from being received by receivers in different bridge domains, the same bridge domain, and
external receivers.
You can configure multiple entries in a single route map, where some entries can be configured with a Permit
action and other entries can be configured with a Deny action, all within the same route map.
You can restrict sources from sending traffic to a group range, and you can also restrict receivers from receiving
traffic that is sent from sources to a group range.
• Source and receiver filtering use an ordered list of route-map entries. Route-map entries are executed
with the lowest number first until there is a match. If there is a match, even if it is not the longest match
in the list, it will exit the program and will not consider the rest of the entries.
For example, assume that you have the following route map for a specific source (192.0.3.1/32), with
these entries:
1 192.0.0.0/16 Permit
2 192.0.3.0/24 Deny
The route map is evaluated based on the order number. Therefore, even though the second entry
(192.0.3.0/24) is a longer match for the source IP, the first entry (192.0.0.0/16) will be matched because
of the earlier order number.
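A small sketch of this first-match behavior, using the hypothetical entries above, may make the evaluation order easier
to see:

import ipaddress

# Route-map entries as (order, prefix, action); evaluation is by ascending order, first match wins
route_map = [
    (1, ipaddress.ip_network("192.0.0.0/16"), "Permit"),
    (2, ipaddress.ip_network("192.0.3.0/24"), "Deny"),
]

def evaluate(source_ip: str) -> str:
    address = ipaddress.ip_address(source_ip)
    for order, prefix, action in sorted(route_map):
        if address in prefix:
            # Matching stops here even if a later entry is a longer prefix match
            return f"entry {order} ({prefix}) -> {action}"
    return "no match"

print(evaluate("192.0.3.1"))   # entry 1 (192.0.0.0/16) -> Permit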
Procedure
Step 1 Navigate to the bridge domain where you want to configure multicast filtering.
Tenant > tenant-name > Networking > Bridge Domains > bridge-domain-name
The Summary page for this bridge domain appears.
Step 2 Select the Policy tab, then select the General subtab.
Step 3 In the General window, locate the PIM field and verify that PIM is enabled (that there is a check in the box next to the
PIM field).
If PIM is not enabled, put a check in the box next to the PIM field to enable that now. The Source Filter and Destination
Filter fields become available.
Note
Multicast filtering is supported only for IPv4 (PIM), and is not supported for IPv6 (PIM6) at this time.
Step 4 Determine whether you want to enable multicast source or receiver filtering.
Note
You can also enable both source and receiver filtering on the same bridge domain.
• If you want to enable multicast source filtering at the first-hop router, in the Source Filter field, make one of the
following selections:
• Existing route map policy: Select an existing route map policy for multicast for the source filtering, then go
to Step 7, on page 95.
• New route map policy: Select Create Route Map Policy for Multicast, then proceed to Step 5, on page 94.
• If you want to enable multicast receiver filtering at the last-hop router, in the Destination Filter field, make one of
the following selections:
• Existing route map policy: Select an existing route map policy for multicast for the receiver filtering, then go
to Step 7, on page 95.
• New route map policy: Select Create Route Map Policy for Multicast, then proceed to Step 6, on page 94.
Step 5 If you selected the Create Route Map Policy for Multicast option to enable multicast source filtering at the first-hop
router, the Create Route Map Policy for Multicast window appears. Enter the following information in this window:
a) In the Name field, enter a name for this route map, and enter a description in the Description field, if desired.
b) In the Route Maps area, click +.
The Create Route Map Entry window appears.
c) In the Order field, if multiple access groups are being configured for this interface, select a number that reflects the
order in which this access group will be permitted or denied access to the multicast traffic on this interface.
Lower-numbered entries are ordered before higher-numbered entries. The range is from 0 to 65535.
d) Determine how you want to allow or deny traffic to be sent for multicast source filtering.
• If you want to allow or deny multicast traffic to be sent from a specific source to any group, in the Source IP
field, enter the IP address of the specific source from which the traffic is sent, and leave the Group IP field
empty.
• If you want to allow or deny multicast traffic to be sent from any source to a specific group, in the Group IP
field, enter the multicast IP address to which the traffic is sent, and leave the Source IP field empty.
• If you want to allow or deny multicast traffic to be sent from a specific source to a specific group, enter the
necessary information in both the Group IP and the Source IP fields.
Note
The RP IP field is not applicable for multicast source filtering or multicast receiver filtering. Any entry in this field
will be ignored for multicast filtering, so do not enter a value in this field for this feature.
e) In the Action field, choose Deny to deny access or Permit to allow access for the target source.
f) Click OK.
The Create Route Map Policy for Multicast window appears again, with the route map entry that you configured
displayed in the Route Maps table.
g) Determine if you want to create additional route map entries for this route map.
You can create multiple route map entries for a route map, each with their own IP addresses and related actions. For
example, you might want to have one set of IP addresses with a Permit action applied, and another set of IP addresses
with a Deny action applied, all within the same route map.
If you want to create additional route map entries for this route map, click + in the Route Maps area again, then go
to 5.c, on page 94 to repeat the steps for filling in the necessary information in the Create Route Map Entry window
for the additional route map entries for this route map.
h) When you have completed all of the route map entries for this route map, click Submit. Go to Step 7, on page 95.
Step 6 If you selected the Create Route Map Policy for Multicast option to enable multicast destination (receiver) filtering
at the last-hop router, the Create Route Map Policy for Multicast window appears. Enter the following information in
this window:
a) In the Name field, enter a name for this route map, and enter a description in the Description field, if desired.
b) In the Route Maps area, click +.
Note
The RP IP field is not applicable for multicast source filtering or multicast receiver filtering. Any entry in this field
will be ignored for multicast filtering, so do not enter a value in this field for this feature.
e) In the Action field, choose Deny to deny access or Permit to allow access for the target group.
f) Click OK.
The Create Route Map Policy for Multicast window appears again, with the route map entry that you configured
displayed in the Route Maps table.
g) Determine if you want to create additional route map entries for this route map.
You can create multiple route map entries for a route map, each with their own IP addresses and related actions. For
example, you might want to have one set of IP addresses with a Permit action applied, and another set of IP addresses
with a Deny action applied, all within the same route map.
If you want to create additional route map entries for this route map, click + in the Route Maps area again, then go
to 6.c, on page 95 to repeat the steps for filling in the necessary information in the Create Route Map Entry window
for the additional route map entries for this route map.
h) When you have completed all of the route map entries for this route map, click Submit. Go to Step 7, on page 95.
Step 7 At the bottom right-hand corner of the Policy/General page, click Submit.
The Policy Usage Warning window appears.
Step 8 Verify that it is acceptable that the nodes and policies displayed in the table in the Policy Usage Warning window will
be affected by this policy change to enable multicast source and/or destination filtering, then click Submit Changes.
Firewalls are usually deployed in active/standby pairs, where both firewalls connect to the fabric on the same
VLAN and subnet, as shown below.
Because this is a LAN-like topology, it requires an SVI L3Out on the fabric side. Beginning with release
5.2(3), support is available for Layer 3 multicast on an SVI L3Out.
An L3Out SVI is an interface type where a Layer 3 SVI interface is configured on every border leaf switch
where the SVI is deployed. When PIM is enabled on an L3Out that is configured with an SVI, the PIM protocol
will be enabled on the border leaf switch that is part of the SVI. All SVIs will then form PIM adjacencies with
each other and any external PIM-enabled devices.
In this example, BL1 and BL2 are the border leaf switches on the fabric. Both border leaf switches are on the
same SVI L3Out that connects to the external firewalls. Each firewall is connected to one of the two border
leaf switches over a port-channel (non-vPC).
• Each border leaf switch will form a PIM neighbor adjacency to the active firewall.
• BL2 in the example will peer to the active firewall over the fabric tunnel for the L3Out external bridge
domain.
• The active firewall can send PIM joins/prunes to both BL1 and BL2.
• One of the two border leaf switches will send the PIM joins towards the firewall. The border leaf switch
that sends the PIM join towards the firewall is determined by the stripe winner selection for the multicast
group (group and source for SSM).
• BL2 can be selected as the stripe winner for a multicast group. BL2 in the example topology is not directly
connected to the active firewall. BL1 will notify BL2 that it is the directly connected reverse path
forwarding (RPF) neighbor to the source. BL2 can then send the PIM join via BL1. BL2 must be able to perform
a recursive lookup for the IP address of the firewall. This functionality is provided by the attached-host
redistribution feature. A route-map matching the firewall subnet must be configured for attached-host
redistribution on the L3Out.
With respect to the Layer 3 multicast states and multicast data traffic, the components in the figure above are
affected in the following manner:
• BL1, BL2, BL3, and BL4 are the border leaf switches on the fabric. All of these border leaf switches are
on the same SVI L3Out, which connects to the external devices; these could be any external switch or router.
• Logically, the Layer 3 link is up between the border leaf switches and the external routers. So a full mesh
adjacency exists with regard to the unicast routing protocols and PIM across the border leaf switches and
the external switches/routers on the SVI L3Out.
• Since the SVI L3Out is a bridge domain, even if there are multiple physical connections from border
leaf switches to the external switches/routers, only one link among them will be up at the Layer 2 level
to each external switch/router. All of the other links will be blocked by STP.
For example, in the figure above, only the following links at the Layer 2 level are up:
• The link between BL1 and external router 1
• The link between BL3 and external router 2
So for all of the other border leaf switches, this makes the IP address 10.1.1.10 reachable only through
BL1 and 10.1.1.20 reachable only through BL3.
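As an illustration of the full-mesh adjacency described above, you can confirm which adjacencies actually formed over the SVI L3Out from the border leaf CLI. This is a hedged sketch only: the VRF name is a placeholder in the tenant:VRF format used on ACI switches, and the neighbors you would expect to see are the external router addresses from the example, such as 10.1.1.10 and 10.1.1.20.
leaf-BL1# show ip pim neighbor vrf Example:VRF1
leaf-BL1# show ip ospf neighbors vrf Example:VRF1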
• Supported:
• Protocol Independent Multicast (PIM) Any Source Multicast (ASM) and Source-Specific
Multicast (SSM)
• SVI with physical interfaces
• SVI with direct port-channels (non-vPC)
• All topology combinations:
• Source inside receiver inside (SIRI)
• Source inside receiver outside (SIRO)
• Source outside receiver inside (SORI)
• Source outside receiver outside (SORO)
• Unsupported:
• Layer 3 multicast with VPC over an SVI L3Out
• Source or receiver hosts connected directly on the SVI subnet (source or receiver hosts must
be connected behind a router on the SVI L3Out)
• Stretched SVI L3Out between local leaf switches (ACI main data center switches) and remote
leaf switches
• Stretched SVI L3Out across sites (Cisco ACI Multi-Site)
• SVI L3Out for PIMv6
• Secondary IP addresses. PIM joins/prunes will not be processed if sent to the secondary IP
address of the border leaf switch. Secondary IP addresses are typically used for configuring a
shared (virtual) IP address across border leaf switches for static routing. We recommend that
you use dynamic routing when configuring PIM over SVIs, or create static routes to each border
leaf switch primary address (see the sketch after this list).
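The following is a minimal sketch of the static-route alternative mentioned above, configured on the external router or firewall in NX-OS-style syntax. All values are hypothetical: 10.1.1.251 and 10.1.1.252 stand in for the primary (not secondary) SVI addresses of the two border leaf switches, and 192.168.10.0/24 stands in for a destination subnet reachable through the fabric.
! Hypothetical addresses: point one static route at each border leaf primary SVI address
ip route 192.168.10.0/24 10.1.1.251
ip route 192.168.10.0/24 10.1.1.252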
Procedure
Step 1 Configure a standard L3Out using the Create L3Out wizard with SVI set as the Layer 3 interface type.
a) In the GUI Navigation pane, under the Tenant Example, navigate to Networking > L3Outs.
b) Right-click and choose Create L3Out.
c) In the Create L3Out screen, in the Identity window, enter a name for the L3Out and select a VRF and L3 domain
to associate with this L3Out.
d) Click Next when you have entered the necessary information in the Identity window.
The Nodes and Interfaces window appears.
e) In the Nodes and Interfaces window, in the Interface Types: Layer 3 field, choose SVI as the Layer 3 interface
type.
f) Continue configuring the individual fields through the Create L3Out wizard until you have completed the L3Out
configuration.
Step 2 Navigate to the configured L3Out:
Tenants > tenant_name > Networking > L3Outs > L3Out_name
The Summary page for the configured L3Out is displayed.
Step 4 In the Route Profile for Redistribution field, click + to configure a route profile for redistribution.
Step 5 In the Source field, choose attached-host.
Step 6 In the Route Map field, configure a route map that permits all.
a) Click Create Route Maps for Route Control.
The Create Route Maps for Route Control window is displayed.
b) Enter a name and description for this route map, then click + in the Contexts area.
The Create Route Control Context window is displayed.
c) Configure the necessary parameters in the Create Route Control Context window, with the value in the Action
field set to Permit.
d) Click + in the Associated Match Rules area, then choose Create Match Rule for a Route Map to configure the
match rules for this route control context.
The Create Match Rule window is displayed.
e) Click + in the Match Prefix area.
The Create Match Route Destination Rule window is displayed.
f) In the Create Match Route Destination Rule window, enter the following values in these fields to configure a match
rule with an aggregate route matching the subnet or 0.0.0.0/0 route and aggregate setting:
• IP: 0.0.0.0/0
• Aggregate: Check the box in this field. The Greater Than Mask and Less Than Mask fields appear.
• Greater Than Mask: 0
• Less Than Mask: 0
g) Click Submit.
Note • If a multicast l3ext:InstP exists on the IFC, we can check whether a corresponding fv:RtdEpP is created
and deployed on each switch where there is an interface in that L3Out.
• We do not support an L3Out SVI interface for PIM.
Note For interaction with IGMP snooping, when PIM is enabled on a pervasive BD, the routing bit should be
automatically enabled for the corresponding igmpsnoop:If.
Note Beginning with Cisco APIC Release 5.2(3), a fabric consisting of only two pods can be connected directly,
without an IPN. For information about this Multi-Pod Spines Back-to-Back topology, see About Multi-Pod
Spines Back-to-Back, on page 117.
Multi-Pod Provisioning
The IPN is not managed by the APIC. It must be preconfigured with the following information:
• Configure the interfaces connected to the spines of all pods. Use Layer 3 sub-interfaces that tag traffic
with VLAN-4, and set the MTU at least 50 bytes above the maximum MTU required for inter-site
control plane and data plane traffic, to account for the VXLAN encapsulation overhead.
If remote leaf switches are included in any pods, see Remote Leaf Switches, on page 121 and the Cisco
ACI Remote Leaf Architecture White Paper.
• If the IPN underlay protocol will be OSPF, enable OSPF on sub-interfaces with the correct area ID.
Beginning with Cisco APIC Release 5.2(3), the IPN underlay protocol can be either OSPF or BGP (eBGP
only).
• Enable DHCP Relay on IPN interfaces connected to all spines.
• Enable PIM.
• Add bridge domain GIPO range as PIM Bidirectional (bidir) group range (default is 225.0.0.0/15).
A group in bidir mode has only shared tree forwarding capabilities.
• Add 239.255.255.240/28 as PIM bidir group range.
• Enable PIM and IGMP on the interfaces connected to all spines.
Note When deploying PIM bidir, at any given time it is only possible to have a single active RP (Rendezvous
Point) for a given multicast group range. RP redundancy is hence achieved by leveraging a Phantom RP
configuration. Because multicast source information is no longer available in Bidir, the Anycast or MSDP
mechanism used to provide redundancy in sparse-mode is not an option for bidir.
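The following is a minimal sketch of a Phantom RP configuration on a pair of NX-OS IPN routers. The RP address, subnet, and loopback number are hypothetical; the VRF fabric-mpod and the OSPF process a1 reuse values from the sample IPN configuration later in this chapter. The RP address (192.168.100.2 here) is deliberately not assigned to any device; each IPN router advertises a loopback in the same subnet with a different prefix length, and the router advertising the longest prefix that covers the RP address becomes the active RP.
! On both IPN routers: point the bidir group ranges at the phantom RP address
vrf context fabric-mpod
  ip pim rp-address 192.168.100.2 group-list 225.0.0.0/15 bidir
  ip pim rp-address 192.168.100.2 group-list 239.255.255.240/28 bidir

! IPN router 1: longest prefix (/30) covering the RP address, so it is the active RP
interface loopback100
  vrf member fabric-mpod
  ip address 192.168.100.1/30
  ip ospf network point-to-point
  ip router ospf a1 area 0.0.0.0
  ip pim sparse-mode

! IPN router 2: shorter prefix (/29); it takes over only if router 1 withdraws its loopback
interface loopback100
  vrf member fabric-mpod
  ip address 192.168.100.3/29
  ip ospf network point-to-point
  ip router ospf a1 area 0.0.0.0
  ip pim sparse-mode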
• Create the associated node group and Layer 3 Outside (L3Out) policies.
• Before you make any changes to a spine switch, ensure that there is at least one operationally “up”
external link that is participating in the Cisco ACI Multi-Pod topology. Failure to do so could bring down
the Cisco ACI Multi-Pod connectivity.
• If you have to convert a Cisco ACI Multi-Pod setup to a single pod (containing only Pod 1), the Cisco
Application Policy Infrastructure Controllers (APICs) connected to the pods that are decommissioned
should be re-initialized and connected to the leaf switches in Pod 1, which will allow them to re-join the
cluster after going through the initial setup script. See Moving an APIC from One Pod to Another Pod,
on page 115 for those instructions. The TEP pool configuration should not be deleted.
• Cisco ACI GOLF (also known as Layer 3 EVPN Services for Fabric WAN) and Cisco ACI Multi-Pod
can be deployed together over all the switches used in the Cisco ACI Multi-Pod and EVPN topologies.
For more information on GOLF, see Cisco ACI GOLF, on page 429.
• In a Cisco ACI Multi-Pod fabric, the Pod 1 configuration (with the associated TEP pool) must always
exist on Cisco APIC, as the Cisco APIC nodes are always addressed from the Pod 1 TEP pool. This
remains valid also in the scenario where the Pod 1 is physically decommissioned (which is a fully
supported procedure) so that the original Pod 1 TEP pool is not re-assigned to other pods that may be
added to the fabric.
• In a Cisco ACI Multi-Pod fabric setup, if a new spine switch is added to a pod, it must first be connected
to at least one leaf switch in the pod. This enables the Cisco APIC to discover the spine switch and join
it to the fabric.
• After a pod is created and nodes are added in the pod, deleting the pod results in stale entries from the
pod that are active in the fabric. This occurs because the Cisco APIC uses open source DHCP, which
creates some resources that the Cisco APIC cannot delete when a pod is deleted.
• If you connect spine switches belonging to separate pods with direct back-to-back links, an OSPF
neighborship might get established on the peer interface between the two spine switches. If there is a
mismatch between the peer interfaces, with one of the peers having the Cisco ACI Multi-Pod direct flag
disabled, the session won't be up and forwarding will not happen. Even though the system will throw a
fault in this situation, this is expected behavior.
• Beginning with Cisco APIC release 5.2(3), the IPN underlay protocol can be external BGP (eBGP).
Internal BGP (iBGP) is not supported as the underlay protocol.
When preparing to migrate a Cisco ACI Multi-Pod fabric between OSPF and BGP as the IPN underlay,
follow these guidelines:
• A BGP underlay is not supported if the Cisco ACI fabric is connected to a cloud site or to a GOLF
router.
• A BGP underlay supports only an IPv4 address family, not an IPv6 address family.
• When deploying Cisco APIC cluster connectivity to the fabric over a Layer 3 network, which was
introduced in Cisco APIC release 5.2(1), the IPN network can use OSPF as the underlay protocol, or a
BGP underlay if the Cisco APIC connects to the fabric using the same network that provides Cisco ACI
Multi-Pod or remote leaf switch connectivity.
• If you delete and recreate the Cisco ACI Multi-Pod L3Out, for example to change the name of a policy,
a clean reload of some of the spine switches in the fabric must be performed. The deletion of the Cisco
ACI Multi-Pod L3Out causes one or more of the spine switches in the fabric to lose connectivity to the
Cisco APICs and these spine switches are unable to download the updated policy from the Cisco APIC.
Which spine switches get into such a state depends upon the deployed topology. To recover from this
state, a clean reload must be performed on these spine switches. The reload is performed using the
setup-clean-config.sh command, followed by the reload command on the spine switch.
Note Cisco ACI does not support IP fragmentation. Therefore, when you configure Layer 3 Outside (L3Out)
connections to external routers, or Multi-Pod connections through an Inter-Pod Network (IPN), it is
recommended that the interface MTU is set appropriately on both ends of a link. On some platforms, such as
Cisco ACI, Cisco NX-OS, and Cisco IOS, the configurable MTU value does not take the Ethernet headers into
account (the configured value matches the IP MTU and excludes the 14-18 bytes of Ethernet header), while
other platforms, such as IOS-XR, include the Ethernet header in the configured MTU value. A configured value
of 9000 results in a maximum IP packet size of 9000 bytes in Cisco ACI, Cisco NX-OS, and Cisco IOS, but
results in a maximum IP packet size of 8986 bytes for an IOS-XR untagged interface.
For the appropriate MTU values for each platform, see the relevant configuration guides.
We highly recommend that you test the MTU using CLI-based commands. For example, on the Cisco NX-OS
CLI, use a command such as ping 1.1.1.1 df-bit packet-size 9000 source-interface ethernet 1/1.
Note Cisco APIC will always establish a TCP connection to fabric switches with an MTU of 1496 bytes (TCP MSS
1456) regardless of the CP-MTU setting. The IPN network for remote pods and remote leaf switches must
support at least 1500 byte MTU for fabric discovery.
• You can set the global MTU for control plane (CP) packets sent by the nodes (Cisco APIC and the
switches) in the fabric at System > System Settings > Control Plane MTU.
• In a Cisco ACI Multi-Pod topology, the MTU set for the fabric external ports must be greater than or
equal to the CP MTU value set. Otherwise, the fabric external ports might drop the CP MTU packets.
• If you change the IPN or CP MTU, we recommend changing the CP MTU value first, then changing the
MTU value on the spine of the remote pod. This reduces the risk of losing connectivity between the pods
due to MTU mismatch. This is to ensure that the MTU across all the interfaces of the IPN devices between
the pods is large enough for both control plane and VXLAN data plane traffic at any given time. For
data traffic, keep in mind the extra 50 bytes due to VXLAN.
• To decommission a pod, decommission all the nodes in the pod. For instructions, see Decommissioning
and Recommissioning a Pod in Cisco APIC Troubleshooting Guide.
• In the Cisco APIC 6.0(2) release and later, when you configure an OSPF Cisco ACI Multi-Pod session
and the session is running, do not configure a passive interface for the L3Out interface properties.
Procedure
f) Click Next.
Step 7 In the Configure Interpod Connectivity STEP 3 > Routing Protocols dialog box, you configure the underlay protocol
to peer between the physical spines and the IPN. In releases before Cisco APIC Release 5.2(3), Open Shortest Path First
(OSPF) is the only supported underlay. For those earlier releases, or in a later release if you choose OSPF as the Underlay,
complete the following substeps in the OSPF area:
a) Leave the Use Defaults checked or uncheck it.
When the Use Defaults check box is checked, the GUI conceals the optional fields for configuring OSPF. When it
is unchecked, it displays all the fields. The check box is checked by default.
b) In the Area ID field, enter the OSPF area ID.
c) In the Area Type area, choose an OSPF area type.
You can choose NSSA area or Regular area. Stub area is not supported.
d) (Optional) With the Area Cost selector, choose an appropriate OSPF area cost value. This field appears only when
the Use Defaults checkbox is unchecked.
e) From the Interface Policy drop-down list, choose or configure an OSPF interface policy.
You can choose an existing policy, or you can create one with the Create OSPF Interface Policy dialog box.
Step 8 Beginning with Cisco APIC Release 5.2(3), the underlay protocol can be either OSPF or BGP. For releases before Cisco
APIC Release 5.2(3), or if you chose OSPF as the Underlay in the preceding step, skip this step. If you choose BGP as
the Underlay in the Configure Interpod Connectivity STEP 3 > Routing Protocols dialog box, complete the following
substeps in the BGP area to configure the BGP underlay:
In the MP-BGP area, leave the Use Defaults check box checked. The GUI conceals the fields for configuring Multiprotocol
Border Gateway Protocol (MP-BGP).
a) Note the nonconfigurable values in the Spine ID, Interface, and IPv4 Address fields.
b) In the Peer Address field, enter the IP address of the BGP neighbor.
c) In the Remote AS field, enter the Autonomous System (AS) number of the BGP neighbor.
d) Click Next.
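For reference, this is a hedged sketch of what the matching configuration might look like on an NX-OS IPN router. The AS numbers are hypothetical placeholders (65100 for the IPN, 65001 for the ACI fabric); the neighbor address 201.1.2.1 assumes the spine side of the 201.1.2.0/30 sub-interface subnet used in the sample IPN configuration later in this chapter, and fabric-mpod is the IPN VRF from that same sample.
feature bgp

router bgp 65100
  vrf fabric-mpod
    ! eBGP peering to the spine VLAN-4 sub-interface; mirrors the Peer Address and
    ! Remote AS values entered in the wizard
    neighbor 201.1.2.1
      remote-as 65001
      address-family ipv4 unicast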
Step 9 In the Configure Interpod Connectivity STEP 4 > External TEP dialog box, complete the following steps:
a) Leave the Use Defaults checked or uncheck it.
When the Use Defaults check box is checked, the GUI conceals the optional fields for configuring the external TEP
pool. When it is unchecked, it displays all the fields. The check box is checked by default.
b) Note the nonconfigurable values in the Pod and Internal TEP Pool fields.
c) In the External TEP Pool field, enter the external TEP pool for the physical pod.
The external TEP pool must not overlap the internal TEP pool or external TEP pools belonging to other pods.
d) In the Data Plane TEP IP field, enter the address that is used to route traffic between pods. This address must have
a /32 subnet mask.
You can accept the default address that is generated when you configure the External TEP Pool. Alternatively, you
can enter another address, but it must be outside of the external TEP pool.
e) In the Router ID field, enter the IPN router IP address.
f) (Optional) In the Loopback Address field, enter the IPN router loopback IP address.
If you uncheck the Use Defaults check box, the Cisco APIC displays the nonconfigurable Unicast TEP IP and Spine ID fields.
g) Click Finish.
The Summary panel appears, displaying details of the IPN configuration. You can also click View JSON to view
the REST API for the configuration. You can save the REST API for later use.
What to do next
Take one of the following actions:
• You can proceed directly with adding a pod, continuing with the procedure Adding a Pod to Create a
Multi-Pod Fabric, on page 112 in this guide.
• Close the Configure Interpod Connectivity dialog box and add the pod later, returning to the procedure
Adding a Pod to Create a Multi-Pod Fabric, on page 112 in this guide.
Procedure
Step 7 In the Add Physical Pod STEP 3 > External TEP dialog box, complete the following steps:
a) Leave the Use Defaults check box checked or uncheck it to display the optional fields to configure an external TEP
pool.
b) Note the values in the Pod and Internal TEP Pool fields, which are already configured.
c) In the External TEP Pool field, enter the external TEP pool for the physical pod.
The external TEP pool must not overlap the internal TEP pool.
d) In the Dataplane TEP IP field, enter the address that is used to route traffic between pods.
e) (Optional) In the Unicast TEP IP field, enter the unicast TEP IP address.
Cisco APIC automatically configures the unicast TEP IP address when you enter the data plane TEP IP address.
f) (Optional) Note the value in the nonconfigurable Node field.
g) (Optional) In the Router ID field, enter the IPN router IP address.
Cisco APIC automatically configures the router IP address when you enter the data plane TEP address.
h) In the Loopback Address field, enter the router loopback IP address.
Leave the Loopback Address blank if you use a router IP address.
i) Click Finish.
Note • The deployment of a dedicated VRF in the IPN for Inter-Pod connectivity is optional, but is a best practice
recommendation. You can also use a global routing domain as an alternative.
• For the area of the sample configuration that shows ip dhcp relay address 10.0.0.1, this configuration
is valid based on the assumption that the TEP pool of Pod 1 is 10.0.0.0/x.
feature dhcp
feature pim
service dhcp
ip dhcp relay
ip pim ssm range 232.0.0.0/8

interface Ethernet2/7
  no switchport
  mtu 9150
  no shutdown

interface Ethernet2/7.4
  description pod1-spine1
  mtu 9150
  encapsulation dot1q 4
  vrf member fabric-mpod
  ip address 201.1.2.2/30
  ip router ospf a1 area 0.0.0.0
  ip pim sparse-mode
  ip dhcp relay address 10.0.0.1
  ip dhcp relay address 10.0.0.2
  ip dhcp relay address 10.0.0.3
  no shutdown

interface Ethernet2/9
  no switchport
  mtu 9150
  no shutdown

interface Ethernet2/9.4
  description to pod2-spine1
  mtu 9150
  encapsulation dot1q 4
  vrf member fabric-mpod
  ip address 203.1.2.2/30
  ip router ospf a1 area 0.0.0.0
  ip pim sparse-mode
  ip dhcp relay address 10.0.0.1
  no shutdown

interface loopback29
  vrf member fabric-mpod
  ip address 12.1.1.1/32

router ospf a1
  vrf fabric-mpod
    router-id 29.29.29.29
Procedure
Step 4 In the APIC setup script, specify the pod ID where the APIC node has been moved.
a) Log in to Cisco Integrated Management Controller (CIMC).
b) In the pod ID prompt, enter the pod ID.
Note
Do not modify the TEP Pool address information.
Note When both OSPF and BGP are used in the underlay for Multi-Pod, Multi-Site, or Remote Leaf, do not
redistribute router-ids into BGP from OSPF on IPN routers. Doing so may cause a routing loop and bring
down OSPF and BGP sessions between the spine switch and IPN routers.
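If an IPN router must redistribute OSPF into BGP for other reasons, one hedged way to stay within this guideline is to filter the router-id loopback prefixes out of the redistribution with a route-map. This is a sketch in NX-OS-style syntax; the prefix 10.255.255.1/32 is a hypothetical router-id loopback, the AS number is a placeholder, and the OSPF process a1 and VRF fabric-mpod reuse values from the sample IPN configuration earlier in this chapter.
! Hypothetical router-id loopback prefix to exclude from redistribution
ip prefix-list ROUTER-IDS seq 5 permit 10.255.255.1/32

route-map OSPF-TO-BGP deny 10
  match ip address prefix-list ROUTER-IDS
route-map OSPF-TO-BGP permit 20

router bgp 65100
  vrf fabric-mpod
    address-family ipv4 unicast
      redistribute ospf a1 route-map OSPF-TO-BGP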
Note Migrating the underlay protocol is a disruptive action and should be done only during a maintenance window.
Procedure
Step 1 From the APIC menu bar, navigate to Tenants > infra > Networking > L3Outs > your IPN L3Out, where your IPN
L3Out is the L3Out that connects to the IPN.
Step 2 In the Navigation pane, expand your IPN L3Out and navigate to Logical Node Profiles > your IPN node profile >
Logical Interface Profiles > your IPN interface, where your IPN interface is the Logical Interface Profile for the
current IPN connection.
The Logical Interface Profile table appears in the work pane.
Step 3 In the work pane, click the Policy tab and the Routed Sub-Interfaces tab below the Policy tab.
Step 4 In the Routed Sub-Interfaces table, double-click the interface of the current IPN connection.
The Routed Sub-Interface dialog box opens.
Step 5 In the Routed Sub-Interface dialog box, perform the following actions:
a) Click the + icon in the BGP Peer Connectivity Profiles bar to add a BGP peer connection.
The Create Peer Connectivity Profiles dialog box opens.
b) In the Peer IPv4 Address field, enter the IP address of the BGP peer.
c) Configure any other desired settings for the BGP peer connection.
Note
If you are configuring for migration but not actually migrating at this time, you can set the Admin State to Disabled
for now and return to this step when you are ready to migrate. Migration should be done during a maintenance
window.
d) Click Submit to return to the Routed Sub-Interface dialog box.
Step 6 In the Routed Sub-Interface dialog box, click Submit.
Step 7 In the Navigation pane, navigate to Logical Node Profiles > your IPN node profile > Configured Nodes > your IPN
node. Follow these steps to verify that the BGP neighbor is UP.
a) Expand your IPN node and locate the BGP entry, such as BGP for VRF-overlay-1.
b) Expand the BGP entry and click Neighbors.
c) In the Neighbors table, find the peer IP address that you configured in Peer IPv4 Address, and verify that the
State is "established."
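As an additional, hedged check, you can verify the underlay BGP session from the spine switch CLI. The infra L3Out peering runs in VRF overlay-1; the spine name below is a placeholder, and the peer that you configured in Peer IPv4 Address should appear in the output with an established session.
spine1# show bgp ipv4 unicast summary vrf overlay-1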
Step 8 In the Navigation pane, under Logical Interface Profiles, right-click the current OSPF Interface Profile and select
Delete.
Note
Before deleting the OSPF Interface Profile, make sure that the BGP neighbor is UP.
Step 9 In the Navigation pane, navigate to Tenants > infra > Networking > L3Outs > your IPN L3Out.
Step 10 In the work pane, click the Policy tab and the Main tab below the Policy tab.
Step 11 In the work pane, in the Enable BGP/EIGRP/OSPF section, uncheck OSPF, leaving BGP checked.
Step 12 Click Submit.
Back-to-Back also brings operational simplification and end-to-end fabric visibility, as there are no external
devices to configure.
In the Multi-Pod Spines Back-to-Back topology, the back-to-back spine link interfaces are implemented as
L3Outs in the infra tenant. These links are typically carried on direct cable or dark fiber connections between
the Pods. Multi-Pod Spines Back-to-Back supports only Open Shortest Path First (OSPF) connectivity between
the spine switches belonging to different Pods.
The following figure shows a Multi-Pod Spines Back-to-Back topology with back-to-back spines connected
between Pod1 and Pod2.
For detailed information about Multi-Pod Spines Back-to-Back, see the Cisco knowledge base article Cisco
ACI Multi-Pod Spines Back-to-Back.
Error:400
If you receive the following error:
Error:400 - Invalid Configuration Following Intersite Spines are not configured as Mpod
Spines: 1202
You must enable fabric external connectivity for all the existing spines. If you are trying to add new
spines, use the Setup Multipod GUI wizard.
There are two ways to resolve this issue.
• Enable all the spines under the external routed network:
• In the APIC GUI, on the menu bar, click Tenant > infra.
• In the Navigation pane, expand Networking > External Routed Networks, right-click on the
external routed network and choose Enable Fabric External Connectivity.
• In the Navigation pane, expand Quick Start > Node or Pod Setup > Setup Multipod and complete
the Multipod setup.
Note • All inter-VRF traffic (pre-release 4.0(1)) goes to the spine switch before being forwarded.
• For releases prior to Release 4.1(2), before decommissioning a remote leaf switch, you must first delete
the vPC.
• Resolution of unknown L3 endpoints (through ToR glean process) in a remote leaf location when
spine-proxy is not reachable.
In addition, before Release 4.1(2), traffic between the remote leaf switch vPC pairs, either within a remote
location or between remote locations, is forwarded to the spine switches in the ACI main data center pod, as
shown in the following figure.
Starting in Release 4.1(2), support is now available for direct traffic forwarding between remote leaf switches
in different remote locations. This functionality offers a level of redundancy and availability in the connections
between remote locations, as shown in the following figure.
In addition, remote leaf switch behavior also takes on the following characteristics starting in release 4.1(2):
• Starting with Release 4.1(2), with direct traffic forwarding, when a spine switch fails within a single-pod
configuration, the following occurs:
• Local switching will continue to function for existing and new end point traffic between the remote
leaf switch vPC peers, as shown in the "Local Switching Traffic: Prior to Release 4.1(2)" figure
above.
• For traffic between remote leaf switches across remote locations:
• New end point traffic will fail because the remote leaf switch-to-spine switch tunnel would be
down. From the remote leaf switch, new end point details will not get synced to the spine
switch, so the other remote leaf switch pairs in the same or different locations cannot download
the new end point information from COOP.
• For uni-directional traffic, existing remote end points will age out after 300 seconds, so traffic will
fail after that point. Bi-directional traffic within a remote leaf site (between remote leaf VPC
pairs) in a pod will get refreshed and will continue to function. Note that bi-directional traffic
to remote locations (remote leaf switches) will be affected as the remote end points will be
expired by COOP after a timeout of 900 seconds.
• For shared services (inter-VRF), bi-directional traffic between end points belonging to remote
leaf switches attached to two different remote locations in the same pod will fail after the remote
leaf switch COOP end point age-out time (900 sec). This is because the remote leaf
switch-to-spine COOP session would be down in this situation. However, shared services traffic
between end points belonging to remote leaf switches attached to two different pods will fail
after 30 seconds, which is the COOP fast-aging time.
• L3Out-to-L3Out communication would not be able to continue because the BGP session to
the spine switches would be down.
• When there is remote leaf direct uni-directional traffic, where the traffic is sourced from one remote leaf
switch and destined to another remote leaf switch (which is not the vPC peer of the source), there will
be a millisecond traffic loss every time the remote end point (XR EP) timeout of 300 seconds occurs.
• With remote leaf switches in an ACI Multi-Site configuration, all traffic continues from the remote leaf
switch to the other pods and remote locations, even with a spine switch failure, because traffic will flow
through an alternate available pod in this situation.
Figure 15: Remote Leaf Switch Behavior, Release 4.2(4): Remote Leaf Switch Management through IPN
Figure 16: Remote Leaf Switch Behavior, Release 4.2(4): 802.1Q Tunnel Support on Remote Leaf Switches
Create this 802.1Q tunnel between the remote leaf switch and the ACI main datacenter using the instructions
provided in the "802.1Q Tunnels" chapter in the Cisco APIC Layer 2 Networking Configuration Guide, located
in the Cisco APIC documentation landing page.
You can configure remote leaf switches in the APIC GUI, either with and without a wizard, or use the REST
API or the NX-OS style CLI.
• For modular spine switches, only Cisco Nexus 9000 series switches with names that end in EX, and later
(for example, N9K-X9732C-EX) are supported.
• Older generation spine switches, such as the fixed spine switch N9K-C9336PQ or modular spine switches
with the N9K-X9736PQ linecard are supported in the main data center, but only next generation spine
switches are supported to connect to the WAN.
If you do not want the mice flows to have a VLAN CoS priority of 3 when they egress a remote leaf
switch on which you enabled Cisco ACI Multi-Pod DSCP translation, use the CoS preservation feature
instead.
The following sections provide information on what is supported and not supported with remote leaf switches:
• Supported Features, on page 129
• Unsupported Features, on page 129
• Changes For Release 5.0(1), on page 131
• Changes For Release 5.2(3), on page 131
Supported Features
Beginning with Cisco APIC release 6.1(1), fabric ports (uplinks) can now be configured with user tenant
L3Outs and SR-MPLS Infra L3Outs, as a routed sub-interface.
• Only L3Outs with routed sub-interfaces are allowed on the fabric ports of a remote leaf switch.
• Remote leaf fabric ports can only be deployed as an L3Out of a user tenant or an SR-MPLS Infra L3Out.
• You cannot deploy remote leaf fabric ports on an application EPG. Only L3Outs with routed sub-interfaces
are allowed.
• Only the PTP/Sync access policies are supported on a hybrid port. No other access policies are supported.
• Only fabric SPAN is supported on the hybrid port.
• Netflow is not supported on a fabric port that is configured with a user tenant L3Out.
Beginning with Cisco APIC release 6.0(4), stretching of an L3Out SVI across vPC remote leaf switch pairs
is supported.
Beginning with Cisco APIC release 4.2(4), the 802.1Q (Dot1q) tunnels feature is supported.
Beginning with Cisco APIC release 4.1(2), the following features are supported:
• Remote leaf switches with ACI Multi-Site
• Traffic forwarding directly across two remote leaf vPC pairs in the same remote data center or across
data centers, when those remote leaf pairs are associated to the same pod or to pods that are part of the
same multipod fabric
• Transit L3Out across remote locations, which is when the main Cisco ACI data center pod is a transit
between two remote locations (the L3Out in RL location-1 and L3Out in RL location-2 are
advertising prefixes for each other)
Beginning with Cisco APIC release 4.0(1), the following features are supported:
• Q-in-Q Encapsulation Mapping for EPGs
• PBR Tracking on remote leaf switches (with system-level global GIPo enabled)
• PBR Resilient Hashing
• Netflow
• MacSec Encryption
• Troubleshooting Wizard
• Atomic counters
Unsupported Features
Full fabric and tenant policies are supported on remote leaf switches in this release with the exception of the
following features, which are unsupported:
• GOLF
• vPod
• Floating L3Out
• Stretching of L3Out SVI between local leaf switches (ACI main data center switches) and remote leaf
switches or stretching across two different vPC pairs of remote leaf switches
• Copy service is not supported when deployed on local leaf switches and when the source or destination
is on the remote leaf switch. In this situation, the routable TEP IP address is not allocated for the local
leaf switch. For more information, see the section "Copy Services Limitations" in the "Configuring Copy
Services" chapter in the Cisco APIC Layer 4 to Layer 7 Services Deployment Guide, available in the
APIC documentation page.
• Layer 2 Outside Connections (except Static EPGs)
• Copy services with vzAny contract
• FCoE connections on remote leaf switches
• Flood in encapsulation for bridge domains or EPGs
• Fast Link Failover policies are for ACI fabric links between leaf and spine switches, and are not applicable
to remote leaf connections. Alternative methods are introduced in Cisco APIC Release 5.2(1) to achieve
faster convergence for remote leaf connections.
• Managed Service Graph-attached devices at remote locations
• Traffic Storm Control
• Cloud Sec Encryption
• First Hop Security
• Layer 3 Multicast routing on remote leaf switches
• Maintenance mode
• TEP to TEP atomic counters
The following scenarios are not supported when integrating remote leaf switches in a Multi-Site architecture
in conjunction with the intersite L3Out functionality:
• Transit routing between L3Outs deployed on remote leaf switch pairs associated to separate sites
• Endpoints connected to a remote leaf switch pair associated to a site communicating with the L3Out
deployed on the remote leaf switch pair associated to a remote site
• Endpoints connected to the local site communicating with the L3Out deployed on the remote leaf switch
pair associated to a remote site
• Endpoints connected to a remote leaf switch pair associated to a site communicating with the L3Out
deployed on a remote site
Note The limitations above do not apply if the different data center sites are deployed as pods as part of the same
Multi-Pod fabric.
The following deployments and configurations are not supported with the remote leaf switch feature:
• It is not supported to stretch a bridge domain between remote leaf nodes associated to a given site (APIC
domain) and leaf nodes part of a separate site of a Multi-Site deployment (in both scenarios where those
leaf nodes are local or remote) and a fault is generated on APIC to highlight this restriction. This applies
independently from the fact that BUM flooding is enabled or disabled when configuring the stretched
bridge domain on the Multi-Site Orchestrator (MSO). However, a bridge domain can always be stretched
(with BUM flooding enabled or disabled) between remote leaf nodes and local leaf nodes belonging to
the same site (APIC domain).
• Spanning Tree Protocol across remote leaf switch location and main data center.
• APICs directly connected to remote leaf switches.
• Orphan port channel or physical ports on remote leaf switches, with a vPC domain (this restriction applies
for releases 3.1 and earlier).
• With and without service node integration, local traffic forwarding within a remote location is only
supported if the consumer, provider, and services nodes are all connected to remote leaf switches that are
in vPC mode.
• /32 loopbacks advertised from the spine switch to the IPN must not be suppressed/aggregated toward
the remote leaf switch. The /32 loopbacks must be advertised to the remote leaf switch.
• The interfaces on the WAN routers which connect to the VLAN-5 interfaces on the spine switches must
be on different VRFs than the interfaces connecting to a regular multipod network.
• It is recommended, but not required, to connect the pair of remote leaf switches with a vPC. The switches
on both ends of the vPC must be remote leaf switches at the same remote data center.
• Configure the northbound interfaces as Layer 3 sub-interfaces on VLAN-4, with unique IP addresses.
If you connect more than one interface from the remote leaf switch to the router, configure each interface
with a unique IP address.
• Enable OSPF on the interfaces, but do not set the OSPF area type as stub area.
• The IP addresses in the remote leaf switch TEP Pool subnet must not overlap with the pod TEP subnet
pool. The subnet used must be /24 or lower.
• Multipod is supported, but not required, with the Remote Leaf feature.
• When connecting a pod in a single-pod fabric with remote leaf switches, configure an L3Out from a
spine switch to the WAN router and an L3Out from a remote leaf switch to the WAN router, both using
VLAN-4 on the switch interfaces.
• When connecting a pod in a multipod fabric with remote leaf switches, configure an L3Out from a spine
switch to the WAN router and an L3Out from a remote leaf switch to the WAN router, both using VLAN-4
on the switch interfaces. Also configure a multipod-internal L3Out using VLAN-5 to support traffic that
crosses pods destined to a remote leaf switch. The regular multipod and multipod-internal connections
can be configured on the same physical interfaces, as long as they use VLAN-4 and VLAN-5.
• When configuring the Multipod-internal L3Out, use the same router ID as for the regular multipod L3Out,
but deselect the Use Router ID as Loopback Address option for the router-id and configure a different
loopback IP address. This enables ECMP to function.
• Starting with the 6.0(1) release, remote leaf switches support remote pools with a subnet mask of up to
/28. In prior releases, remote leaf switches supported remote pools with a subnet mask of up to /24. You
can remove remote pools only after you have decommissioned and removed them from the fabric including
all the nodes that are using that pool.
The /28 remote TEP pool supports a maximum of four remote leaf switches with two vPC pairs. We
recommend that you keep two IP addresses unused for RMA purposes. These two IP addresses are
sufficient to do an RMA of one switch. The following table shows how the remote leaf switches use
these IP addresses:
When you decommission a remote leaf switch, two IP addresses are freed, but are available for reuse
only after 24 hours have passed.
Configure the Pod and Fabric Membership for Remote Leaf Switches Using a
Wizard
You can configure and enable Cisco APIC to discover and connect the IPN router and remote switches, using
a wizard as in this topic, or in an alternative method using the APIC GUI. See Configure the Pod and Fabric
Membership for Remote Leaf Switches Using the GUI (Without a Wizard), on page 139.
Note Cisco recommends that you configure the connectivity between the physical Pod
and the IPN before launching the wizard. For information on configuring interpod
connectivity, see Preparing the Pod for IPN Connectivity, on page 110.
Procedure
Step 4 In the Add Remote Leaf wizard, review the information in the Overview page.
This panel provides high-level information about the steps that are required for adding a remote leaf switch to a pod in
the fabric. The information that is displayed in the Overview panel, and the areas that you will be configuring in the
subsequent pages, varies depending on your existing configuration:
• If you are adding a new remote leaf switch to a single-pod or multi-pod configuration, you will typically see the
following items in the Overview panel, and you will be configuring these areas in these subsequent pages:
• External TEP
• Pod Selection
• Routing Protocol
• Remote Leafs
In addition, because you are adding a new remote leaf switch, it will automatically be configured with the direct
traffic forwarding feature.
• If you already have remote leaf switches configured and you are using the remote leaf wizard to configure these
existing remote leaf switches, but the existing remote leaf switches were upgraded from a software release prior
to Release 4.1(2), then those remote leaf switches might not be configured with the direct traffic forwarding feature.
You will see a warning at the top of the Overview page in this case, beginning with the statement "Remote Leaf
Direct Communication is not enabled."
You have two options when adding a remote leaf switch using the wizard in this situation:
• Enable the direct traffic forwarding feature on these existing remote leaf switches. This is the
recommended course of action in this situation. You must first manually enable the direct traffic forwarding
feature on the switches using the instructions provided in Upgrade the Remote Leaf Switches and Enable
Direct Traffic Forwarding, on page 144. Once you have manually enabled the direct traffic forwarding feature
using those instructions, return to this remote leaf switch wizard and follow the process in the wizard to add
the remote leaf switches to a pod in the fabric.
• Add the remote leaf switches without enabling the direct traffic forwarding feature. This is an acceptable
option, though not recommended. To add the remote leaf switches without enabling the direct traffic forwarding
feature, continue with the remote leaf switch wizard configuration without manually enabling the direct traffic
forwarding feature.
Step 5 When you have finished reviewing the information in the Overview panel, click Get Started at the bottom right corner
of the page.
• If you are adding a new remote leaf switch that will be running Release 4.1(2) or later and will be automatically
configured with the direct traffic forwarding feature, the External TEP page appears. Go to Step 6, on page 135.
• If you are adding a remote leaf switch without enabling the direct traffic forwarding feature, or if you upgraded
your switches to Release 4.1(2) and you manually enabled the direct traffic forwarding feature on the switches
using the instructions provided in Upgrade the Remote Leaf Switches and Enable Direct Traffic Forwarding, on
page 144, then the Pod Selection page appears. Go to Step 7, on page 136.
Step 8 In the Routing Protocol page, select and configure the necessary parameters for the underlay protocol to peer between
the remote leaf switches and the upstream router. Follow these substeps.
a) Under the L3 Outside Configuration section, in the L3 Outside field, create or select an existing L3Out to represent
the connection between the remote leaf switches and the upstream router. Multiple remote leaf pairs can use the
same L3 Outside to represent their upstream connection.
For the remote leaf switch configuration, we recommend that you use or create an L3Out that is different from the
L3Out used in the multi-pod configuration.
b) In Cisco APIC Release 5.2(3) and later releases, set the Underlay control to either OSPF or BGP.
In releases before Cisco APIC Release 5.2(3), no selection is necessary because OSPF is the only supported underlay
protocol.
Note
When both OSPF and BGP are used in the underlay for Multi-Pod, Multi-Site, or Remote Leaf, do not redistribute
router-ids into BGP from OSPF on IPN routers. Doing so may cause a routing loop and bring down OSPF and
BGP sessions between the spine switch and IPN routers.
c) Choose the appropriate next configuration step.
• For an OSPF underlay, configure the OSPF parameters in Step 9, on page 136, then skip Step 10, on
page 137.
• For a BGP underlay, skip Step 9, on page 136 and configure the BGP parameters in Step 10, on page
137.
Step 9 (For an OSPF underlay only) To configure an OSPF underlay, follow these substeps in the Routing Protocol page.
Configure the OSPF Area ID, an Area Type, and OSPF Interface Policy in this page. The OSPF Interface Policy contains
OSPF-specific settings, such as the OSPF network type, interface cost, and timers. Configure the OSPF Authentication
Key and OSPF Area Cost by unchecking the Use Defaults checkbox.
Note
If you peer a Cisco ACI-mode switch with a standalone Cisco Nexus 9000 switch that has the default OSPF authentication
key ID of 0, the OSPF session will not come up. Cisco ACI only allows an OSPF authentication key ID of 1 to 255.
a) Under the OSPF section, leave the Use Defaults checkbox checked, or uncheck it if necessary.
The checkbox is checked by default. Uncheck it to reveal the optional fields, such as area cost and authentication
settings.
b) Gather the configuration information from the IPN, if necessary.
For example, from the IPN, you might enter the following command to gather certain configuration information:
IPN# show running-config interface ethernet slot/chassis-number
For example:
IPN# show running-config interface ethernet 1/5.11
...
ip router ospf infra area 0.0.0.59
...
Note
You might see Stub area as an option in the Area Type field; however, stub area will not advertise the routes to
the IPN, so stub area is not a supported option for infra L3Outs.
Step 10 (For a BGP underlay only) If the following BGP fields appear in the Routing Protocol page, follow these substeps.
Otherwise, click Next to continue.
a) Under the BGP section, leave the Use Defaults checkbox checked, or uncheck it if necessary.
The checkbox is checked by default. Uncheck it to reveal the optional fields, such as peering type, peer password,
and route reflector nodes.
b) Note the nonconfigurable values in the Spine ID, Interface, and IPv4 Address fields.
c) In the Peer Address field, enter the IP address of the BGP neighbor.
d) In the Remote AS field, enter the Autonomous System (AS) number of the BGP neighbor.
e) When you have entered all of the necessary information in this page, click the Next button at the bottom right
corner of the page.
The Remote Leafs page appears.
Step 12 In the Confirmation page, review the list of policies that the wizard will create and change the names of any of the
policies, if necessary, then click Finish at the bottom right corner of the page.
The Remote Leaf Summary page appears.
Step 13 In the Remote Leaf Summary page, click the appropriate button.
• If you want to view the API for the configuration in a JSON file, click View JSON. You can copy the API and
store it for future use.
• If you are satisfied with the information in this page and you do not want to view the JSON file, click OK.
Step 14 In the Navigation pane, click Fabric Membership, then click the Nodes Pending Registration tab to view the status
of the remote leaf switch configuration.
You should see Undiscovered in the Status column for the remote leaf switch that you just added.
Step 15 Log into the spine switch connected to the IPN and enter the following command:
switch# show nattable
Step 16 On the IPN sub-interfaces connecting the remote leaf switches, configure the DHCP relays for each interface.
For example:
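The following is a hedged sketch that follows the same conventions as the sample IPN configuration earlier in this chapter. The interface number, addressing, and OSPF area are hypothetical, and the relay addresses must point to the Cisco APIC addresses (for example, addresses from the Pod 1 TEP pool, or the routable Cisco APIC addresses if you are using routable subnets as described later in this chapter).
! Hypothetical IPN sub-interface facing a remote leaf switch
interface Ethernet2/10.4
  description to remote-leaf-1
  mtu 9150
  encapsulation dot1q 4
  vrf member fabric-mpod
  ip address 202.1.1.2/30
  ip router ospf a1 area 0.0.0.0
  ip dhcp relay address 10.0.0.1
  ip dhcp relay address 10.0.0.2
  ip dhcp relay address 10.0.0.3
  no shutdown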
Step 17 In the Navigation pane, click Fabric Membership, then click the Registered Nodes tab to view the status of the remote
leaf switch configuration.
After a few moments, you should see Active in the Status column for the remote leaf switch that you just added.
Configure the Pod and Fabric Membership for Remote Leaf Switches Using
the GUI (Without a Wizard)
Although we recommend that you configure remote leaf switches using the Add Remote Leaf wizard (see
Configure the Pod and Fabric Membership for Remote Leaf Switches Using a Wizard, on page 133), you can
use this GUI procedure as an alternative.
Procedure
Step 1 Configure the TEP pool for the remote leaf switches, with the following steps:
a) On the menu bar, click Fabric > Inventory.
b) In the Navigation pane, click Pod Fabric Setup Policy.
c) On the Fabric Setup Policy panel, double-click the pod where you want to add the pair of remote leaf switches.
d) Click the + on the Remote Pools table.
e) Enter the remote ID and a subnet for the remote TEP pool and click Submit.
f) On the Fabric Setup Policy panel, click Submit.
Step 2 Configure the L3Out for the spine switch connected to the IPN router, with the following steps:
a) On the menu bar, click Tenants > infra.
b) In the Navigation pane, expand Networking, right-click L3Outs, and choose Create L3Out.
c) In the Name field, enter a name for the L3Out.
d) From the VRF drop-down list, choose overlay-1.
e) From the L3 Domain drop-down list, choose the external routed domain that you previously created.
f) In the Use for control, select Remote Leaf.
g) To use BGP as the IPN underlay protocol, uncheck the OSPF checkbox.
Beginning with Cisco APIC Release 5.2(3), the IPN underlay protocol can be either OSPF or BGP.
h) To use OSPF as the IPN underlay protocol, in the OSPF area, where OSPF is selected by default, check the box
next to Enable Remote Leaf with Multipod, if the pod where you are adding the remote leaf switches is part of
a multipod fabric.
This option enables a second OSPF instance using VLAN-5 for multipod, which ensures that routes for remote leaf
switches are only advertised within the pod they belong to.
i) Click Next to move to the Nodes and Interfaces window.
Step 3 Configure the details for the spine and the interfaces used in the L3Out, with the following steps:
a) Determine if you want to use the default naming convention.
In the Use Defaults field, check if you want to use the default node profile name and interface profile names:
• The default node profile name is L3Out-name_nodeProfile, where L3Out-name is the name that you
entered in the Name field in the Identity page.
• The default interface profile name is L3Out-name_interfaceProfile, where L3Out-name is the
name that you entered in the Name field in the Identity page.
Step 4 Enter the necessary information in the Protocols window of the Create L3Out wizard.
a) If you chose BGP as the IPN underlay protocol, enter the Peer Address and the Remote AS of the BGP peer.
b) If you chose OSPF as the IPN underlay protocol, select an OSPF policy in the Policy field.
c) Click Next.
The External EPG window appears.
Step 5 Enter the necessary information in the External EPG window of the Create L3Out wizard, then click Finish to
complete the necessary configurations in the Create L3Out wizard.
Step 6 Navigate to Tenants > infra > Networking > L3Outs > L3Out_name > Logical Node Profiles > bLeaf > Logical
Interface Profiles > portIf > OSPF Interface Profile.
Step 7 Enter the name of the interface profile.
Step 8 In the Associated OSPF Interface Policy Name field, choose a previously created policy or click Create OSPF
Interface Policy.
Step 9 a) Under OSPF Profile, click OSPF Policy and choose a previously created policy or click Create OSPF Interface
Policy.
b) Click Next.
c) Click Routed Sub-Interface, click the + on the Routed Sub-Interfaces table, and enter the following details:
• Node—Spine switch where the interface is located.
• Path—Interface connected to the IPN router
• Encap—Enter 4 for the VLAN
• RL TEP Pool—Identifier for the remote leaf TEP pool, that you previously configured
• Node Name—Name of the remote leaf switch
After you configure the Node Identity Policy for each remote leaf switch, it is listed in the Fabric Membership
table with the role remote leaf.
Step 11 Configure the L3Out for the remote leaf location, with the following steps:
a) Navigate to Tenants > infra > Networking.
b) Right-click L3Outs, and choose Create L3Out.
c) Enter a name for the L3Out.
d) Click the OSPF checkbox to enable OSPF, and configure the OSPF details the same as on the IPN and WAN
router.
Note
Do not check the Enable Remote Leaf with Multipod check box if you are deploying new remote leaf switches
running Release 4.1(2) or later and you are enabling direct traffic forwarding on those remote leaf switches. This
option enables an OSPF instance using VLAN-5 for multipod, which is not needed in this case. See About Direct
Traffic Forwarding, on page 143 for more information.
Step 13 Navigate to Tenants > infra > Networking > L3Outs > L3Out_name > Logical Node Profiles > bLeaf > Logical
Interface Profiles > portIf > OSPF Interface Profile.
Step 14 In OSPF Interface Profiles, configure the following details for the routed sub-interface used to connect a remote leaf
switch with the WAN router.
• Identity—Name of the OSPF interface profile
• Protocol Profiles—A previously configured OSPF profile or create one
• Interfaces—On the Routed Sub-Interface tab, the path and IP address for the routed sub-interface leading to the
WAN router
Step 15 Configure the Fabric External Connection Profile, with the following steps:
a) Navigate to Tenants > infra > Policies > Protocol.
b) Right-click Fabric Ext Connection Policies and choose Create Intrasite/Intersite Profile.
c) Enter the mandatory Community value in the format provided in the example.
d) Click the + on Fabric External Routing Profile.
e) Enter the name of the profile and add uplink interface subnets for all of the remote leaf switches.
f) Click Update and click Submit.
Step 16 On the menu bar click System > System Settings.
• If your remote leaf switches are running on Release 5.0(1) or later, where direct traffic forwarding is
enabled by default, and you want to downgrade to any of these previous releases that also supported
direct traffic forwarding:
• Release 4.2(x)
• Release 4.1(2)
Then direct traffic forwarding may or may not continue to be enabled by default, depending on your
configuration:
• If both Routable Subnets and Routable Ucast were enabled for all pods prior to the downgrade, then
direct traffic forwarding continues to be enabled by default after the downgrade.
• If Routable Subnets were enabled for all pods but Routable Ucast was not enabled, then direct traffic
forwarding is not enabled after the downgrade.
Upgrade the Remote Leaf Switches and Enable Direct Traffic Forwarding
If your remote leaf switches are currently running on a release prior to 4.1(2), follow these procedures to
upgrade the switches to Release 4.1(2) or later, then make the necessary configuration changes and enable
direct traffic forwarding on those remote leaf switches.
Note When upgrading to Release 4.1(2) or later, enabling direct traffic forwarding might be optional or mandatory,
depending on the release you are upgrading to:
• If you are upgrading to a release prior to Release 5.0(1), then enabling direct traffic forwarding is optional;
you can upgrade your switches without enabling the direct traffic forwarding feature. You can enable
this feature at some point after you've made the upgrade, if necessary.
• If you are upgrading to Release 5.0(1) or later, then enabling direct traffic forwarding is mandatory.
Direct traffic forwarding is enabled by default starting in Release 5.0(1) and cannot be disabled.
If, at a later date, you have to downgrade the software on the remote leaf switches to a version that doesn’t
support remote leaf switch direct traffic forwarding [to a release prior to Release 4.1(2)], follow the procedures
provided in Disable Direct Traffic Forwarding and Downgrade the Remote Leaf Switches, on page 147 to
disable the direct traffic forwarding feature before downgrading the software on the remote leaf switches.
Procedure
Step 1 Upgrade Cisco APIC and all the nodes in the fabric to Release 4.1(2) or later.
Step 2 Verify that the routes for the Routable Subnet that you wish to configure will be reachable in the Inter-Pod Network
(IPN), and that the subnet is reachable from the remote leaf switches.
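A hedged example of how such a check might look from the IPN router CLI; the subnet 192.168.200.0/24 and the VRF name fabric-mpod are placeholders for your planned routable subnet and your IPN VRF.
IPN# show ip route 192.168.200.0/24 vrf fabric-mpod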
Step 3 Configure Routable Subnets in all the pods in the fabric:
a) On the menu bar, click Fabric > Inventory.
b) In the Navigation pane, click Pod Fabric Setup Policy.
c) On the Fabric Setup Policy panel, double-click the pod where you want to configure routable subnets.
d) Access the information in the subnets or TEP table, depending on the release of your APIC software:
• For releases prior to 4.2(3), click the + on the Routable Subnets table.
• For 4.2(3) only, click the + on the External Subnets table.
• For 4.2(4) and later, click the + on the External TEP table.
e) Enter the IP address and Reserve Address, if necessary, and set the state to active or inactive.
• The IP address is the subnet prefix that you wish to configure as the routable IP space.
• The Reserve Address is a count of addresses within the subnet that must not be allocated dynamically to the
spine switches and remote leaf switches. The count always begins with the first IP in the subnet and increments
sequentially. If you wish to allocate the Unicast TEP (covered later in these procedures) from this pool, then
it must be reserved.
f) Click Update to add the external routable subnet to the subnets or TEP table.
g) On the Fabric Setup Policy panel, click Submit.
Note
If you find that you have to make changes to the information in the subnets or TEP table after you've made these
configurations, follow the procedures provided in "Changing the External Routable Subnet" in the Cisco APIC Getting
Started Guide to make those changes successfully.
You must make the appropriate configuration changes after upgrading to Release 4.2(5) or later in this case to clear
the faults. You must make these configuration changes before attempting any kind of configuration export; otherwise,
a configuration import, configuration rollback, or ID recovery will fail on Release 4.2(5) and later.
• The remote leaf switch and spine switch COOP (council of oracle protocol) session remains with a private IP
address.
• The BGP route reflector switches to Routable CP TEP Interface (rt-cp-etep).
Step 7 Verify that the BGP route reflector session in the remote leaf switch is configured correctly.
remote-leaf# show bgp vpnv4 unicast summary vrf all | grep 14.0.0
14.0.0.227 4 100 1292 1164 395 0 0 19:00:13 52
14.0.0.228 4 100 1296 1164 395 0 0 19:00:10 52
You should see the rldirectMode : yes line in the following output, which verifies that the configuration was set
correctly (full output truncated):
...
podId : 1
remoteNetworkId : 0
remoteNode : no
rldirectMode : yes
rn : sys
role : spine
...
Step 9 Add the routable IP address of the Cisco APIC as a DHCP relay address on the IPN interfaces that connect to the remote
leaf switches.
Each APIC in the cluster is assigned an address from the pool. These addresses must be added as the DHCP relay
addresses on the interfaces facing the remote leaf switches. You can find these addresses by running the following
command from the APIC CLI:
Step 10 Decommission and recommission each remote leaf switch one at a time to get it discovered on the routable IP address
for the Cisco APIC.
The COOP configuration changes to Routable CP TEP Interface (rt-cp-etep). After each remote leaf switch is
decommissioned and recommissioned, the DHCP server ID will have the routable IP address for the Cisco APIC.
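For the IPN-facing interfaces referenced in Step 9, a minimal NX-OS-style sketch of the relay configuration on an IPN router follows. The interface, sub-interface number, and relay addresses are hypothetical values, not taken from this guide; each relay address corresponds to the routable IP address of one APIC in the cluster:

feature dhcp

interface Ethernet1/1.4
  ip dhcp relay address 192.0.2.11
  ip dhcp relay address 192.0.2.12
  ip dhcp relay address 192.0.2.13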
Disable Direct Traffic Forwarding and Downgrade the Remote Leaf Switches
If your remote leaf switches are running on Release 4.1(2) or later and have direct traffic forwarding enabled,
but you want to downgrade to a release prior to 4.1(2), follow these procedures to disable the direct traffic
forwarding feature before downgrading the remote leaf switches.
Procedure
Step 3 Disable remote leaf switch direct traffic forwarding for all remote leaf switches by posting the following policy:
This posts the MO to the Cisco APIC, and the configuration is then pushed from the Cisco APIC to all nodes in the fabric.
At this point, the following areas are configured:
• The Network Address Translation Access Control Lists (NAT ACLs) are deleted on the data center spine switches.
• The rlRoutableMode and rldirectMode attributes are set to no, as illustrated in the verification sketch that follows.
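The following is a minimal verification sketch, not output reproduced from this guide. It assumes that the attributes are visible in the system MO summary on the spine or remote leaf switch, similar to the earlier rldirectMode check, and that the expected values after the policy is pushed are no:

switch# cat /mit/sys/summary | grep -E "rlRoutableMode|rldirectMode"
rlRoutableMode : no
rldirectMode : no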
Step 4 Remove the Routable Subnets and Routable Ucast from the pods in the fabric.
The following areas are configured after removing the Routable Subnets and Routable Ucast from each pod:
• On the spine switch, the Remote Leaf Multicast TEP Interface (rl-mcast-hrep) and Routable CP TEP Interface
(rt-cp-etep) are deleted.
• On the remote leaf switches, the tunnel to the routable Remote Leaf Multicast TEP Interface (rl-mcast-hrep) is
deleted, and a private Remote Leaf Multicast TEP Interface (rl-mcast-hrep) is created. The Remote Leaf Unicast
TEP Interface (rl_ucast) tunnel remains routable at this point.
• The remote leaf switch and spine switch COOP (council of oracle protocol) and route reflector sessions switch to
private.
• The tunnel to the routable Remote Leaf Unicast TEP Interface (rl_ucast) is deleted, and a private Remote Leaf
Unicast TEP Interface (rl_ucast) tunnel is created.
Step 5 Decommission and recommission each remote leaf switch to get it discovered on the non-routable internal IP address of
the Cisco APIC.
Step 6 Downgrade the Cisco APIC and all the nodes in the fabric to a release prior to 4.1(2).
Note Movement of a remote leaf switch from one pod to another could result in traffic disruption of several seconds.
Note If you have a single remote leaf switch in a pod and the switch is clean reloaded, it is attached to the failover
pod (parent configured pod) of the spine switch. If you have multiple remote leaf switches in a pod, make
sure that at least one of the switches is not clean-reloaded. Doing so ensures that the other remote leaf switches
can move to the pod where the remote leaf switch that was not reloaded is present.
Procedure
Step 4 In the Remote Leaf POD Redundancy Policy work pane, check the Enable Remote Leaf Pod Redundancy Policy
check box.
Step 5 (Optional) Check the Enable Remote Leaf Pod Redundancy pre-emption check box.
Checking the check box reassociates the remote leaf switch with the parent pod once that pod is back up. If you leave
the check box unchecked, the remote leaf switch remains associated with the operational pod even when the parent pod
comes back up.
What to do next
When a failover occurs, enter the following commands on the remote leaf switch to verify which pod the remote
leaf switch is operational in:
cat /mit/sys/summary
moquery -c rlpodredRlSwitchoverPod
Note If you have remote leaf switches deployed and you downgrade the APIC software from Release 3.1(1) or later
to an earlier release that does not support the Remote Leaf feature, you must decommission the remote nodes
and remove the remote leaf-related policies (including the TEP Pool) before downgrading. For more information
on decommissioning switches, see Decommissioning and Recommissioning Switches in the Cisco APIC
Troubleshooting Guide.
Before you downgrade remote leaf switches, verify that the following tasks are complete:
• Delete the vPC domain.
• Delete the vTEP - Virtual Network Adapter if using SCVMM.
• Decommission the remote leaf nodes, and wait 10-15 minutes after the decommission for the task to
complete.
• Delete the remote leaf to WAN L3out in the infra tenant.
• Delete the infra-l3out with VLAN 5 if using Multipod.
• Delete the remote TEP pools.
Note Procedures in this document describe how to configure SR-MPLS handoff using the GUI and REST API.
You cannot configure SR-MPLS handoff through the NX-OS style CLI at this time.
In this configuration, the border leaf switch is connected to the DC-PE using VRF-Lite. The interface and
routing protocol session configurations between the border leaf switch and the DC-PE are done using separate
VRFs. Differentiated Services Code Point (DSCP) is configured on the border leaf switch for outgoing traffic.
On the DC-PE, the DSCP is mapped to the segment routing for traffic engineering (SR-TE) policy, which is
used to steer traffic through the transport network.
This configuration becomes cumbersome if you have a large number of sessions between the border leaf
switch and the DC-PE. Therefore, automation and scalability are key challenges when configuring the handoff using
VRF-Lite.
In this scenario, VXLAN is being used in the ACI fabric area, whereas segment routing is being used in the
transport network. Rather than use VXLAN outside of the ACI fabric area, it would be preferable to use the
same SR-based routing, where you would do an SR handoff or an MPLS handoff towards the transport device.
By changing VXLAN to SR at the ACI border, the transport devices only need to run SR or MPLS and do not need to
run VXLAN.
In this scenario, the existing monitoring tools used for the transport network can monitor MPLS traffic, but
cannot monitor VXLAN packets. Using the ACI-to-SR-MPLS handoff allows the transport team to monitor
the DC-to-DC flows using existing monitoring tools.
With SR handoff, a single control plane and data plane session is used instead of per-VRF control plane and
data plane sessions, with a unified SR transport from the Cisco ACI fabric to the SP core. The BGP Label
Unicast (BGP LU) address-family is used for the underlay label exchange. The MP-BGP EVPN address-family
carries the prefix and MPLS label per VRF information.
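On the Cisco ACI border leaf switch, one way to spot-check the overlay EVPN session from the switch CLI is shown in the following minimal sketch. It assumes the session is established in the infra VRF (overlay-1); command availability and output format can vary by release:

border-leaf# show bgp l2vpn evpn summary vrf overlay-1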
Similarly, you can advertise an EVPN type 5 prefix from the ACI border leaf switch and the DC-PE could
create an SR-TE or Flex Algo routing policy based on the destination prefix, as shown in the following figure.
Of the two methods, we recommend using color community to reduce the configurations on the DC-PE.
However, for either of these situations, you must verify that your DC-PE has the capability of supporting this
functionality before utilizing SR-MPLS in this way.
Similarly, using MPLS ingress rules, the ACI border leaf switch can mark the ingress packets coming into
the fabric with COS, DSCP and QoS levels based on EXP values, where the QoS levels define the QoS actions
within fabric.
the DC-PE. An SR-MPLS infra L3Out is scoped to a pod or a remote leaf switch site, and is not extended
across pods or remote leaf switch pairs.
Figure 18: SR-MPLS Infra L3Out
A pod or remote leaf switch site can have one or more SR-MPLS infra L3Outs.
See Configuring an SR-MPLS Infra L3Out Using the GUI, on page 176 for the procedures for configuring the
SR-MPLS infra L3Out.
As part of the configuration process for the SR-MPLS infra L3Out, you will configure the following areas:
• MP-BGP EVPN Session Between the Cisco ACI Border Leaf Switch and the DC-PE, on page 157
• Multi-Hop BFD for BGP EVPN Session, on page 158
• Underlay BGP Sessions (BGP-Labeled Unicast and IPv4 Address-family) On the Cisco ACI Border
Leaf Switch and Next-Hop Router, on page 159
• Single-Hop BFD for BGP-Labeled Unicast Session, on page 160
MP-BGP EVPN Session Between the Cisco ACI Border Leaf Switch and the DC-PE
You will need to provide the necessary information to configure the MP-BGP EVPN sessions between the
EVPN loopbacks of the border leaf switches and the DC-PE routers to advertise the overlay prefixes, as shown
in the following figure.
While you can use different IP addresses for the MP-BGP EVPN loopback and the transport loopback, as shown in the
figure, we recommend that you use the same loopback for the MP-BGP EVPN and the transport loopback on the Cisco
ACI border leaf switch.
Only eBGP sessions are supported at this time.
A multi-hop BFD with a minimum timer of 250 milliseconds and a detect multiplier of 3 is supported for the
BGP EVPN session between the Cisco ACI border leaf switch and the DC-PE. You can modify this timer
value based on your requirements.
Underlay BGP Sessions (BGP-Labeled Unicast and IPv4 Address-family) On the Cisco ACI Border Leaf Switch
and Next-Hop Router
You will also configure the BGP IPv4 and labeled unicast address-family per interface between the Cisco
ACI border leaf switches and the DC-PE, as shown in the following figure.
The BGP IPv4 address family automatically advertises the EVPN loopbacks, and the BGP-labeled unicast
address family will automatically advertise the SR transport loopback with the SR-MPLS label.
Only eBGP sessions are supported at this time.
A single-hop BFD with a minimum timer of 50 milliseconds and a detect multiplier of 3 is supported for the underlay
BGP-labeled unicast session between the Cisco ACI border leaf switch and the DC-PE. You can modify this timer
value based on your requirements.
You can configure the BFD echo function on one or both ends of a BFD-monitored link. The echo function
slows down the required minimum receive interval, based on the configured slow timer. The
RequiredMinEchoRx BFD session parameter is set to zero if the echo function is disabled. The slow timer
becomes the required minimum receive interval if the echo function is enabled.
You can attach one or more SR-MPLS VRF L3Outs to the same SR-MPLS infra L3Out. Through the SR-MPLS
VRF L3Outs, you can configure import and export route maps to do the following things:
• Apply route policies based on prefixes and/or communities
• Advertise prefixes into the SR network
• Filter out prefixes received from the SR network
You will also configure an external EPG with one or more subnets on each SR-MPLS VRF L3Out in the tenant,
which is used for the following:
• Security policies (contract)
• Policy-Based Redirect (PBR) policies
• Route leaking between VRFs
See Configuring an SR-MPLS VRF L3Out Using the GUI, on page 183 for the procedures for configuring
SR-MPLS VRF L3Outs.
When configuring a custom QoS policy, you define the following two rules that are applied on the border leaf
switch:
• Ingress rules: Any traffic coming into the border leaf switch connected to the MPLS network will be
checked for the MPLS experimental bits (EXP) value and if a match is found, the traffic is classified into
an ACI QoS Level and marked with appropriate CoS and differentiated services code point (DSCP)
values.
The values are derived at the border leaf using a custom QoS translation policy. The original DSCP
values for traffic coming from SR-MPLS are retained without any remarking. If a custom policy is not
defined or not matched, the default QoS Level (Level3) is assigned.
• Egress rules: When the traffic is leaving the fabric out of the border leaf's MPLS interface, it will be
matched based on the DSCP value of the packet and if a match is found, the MPLS EXP and CoS values
will be set based on the policy.
If the egress MPLS QoS policy is not configured, the MPLS EXP defaults to zero. If an egress MPLS custom QoS
policy is configured, the EXP is remarked according to that policy.
The following two figures summarize when the ingress and egress rules are applied as well as how the internal
ACI traffic may remark the packets' QoS fields while inside the fabric.
Figure 20: Ingress QoS
You can define multiple custom QoS policies and apply them to each SR-MPLS Infra L3Out you create, as
described in Creating SR-MPLS Custom QoS Policy Using the GUI, on page 186.
• Outbound route map: You must configure the policy for the outbound route map to advertise any
prefix, including bridge domain subnets. By default, the policy for the outbound route map is to not
advertise any prefix.
An explicit outbound route map can be configured to:
• Match prefixes to be advertised to the SR-MPLS network
• Match prefixes and community to advertise prefixes to the SR-MPLS network
• Set community, including color community, based on the prefix and/or community match
Both the inbound route map and the outbound route map are used for the control plane, to set which
prefixes are permitted or denied in and out of the fabric.
Within the SR-MPLS VRF L3Out, you will also configure the external EPG and the subnets within this
external EPG, which is used for the data plane. These subnets will be used to apply ACI security policies.
The external EPG subnet is also used to leak prefixes into another VRF using flags. If you enable the
route-leak and security flag on an external EPG subnet, then that subnet can be leaked to another VRF.
You can also configure the external EPG subnet with the aggregated flag to leak prefixes to another VRF.
In this case, you will need to define a contract for the leaked prefixes to allow communication across
VRFs.
Note The external EPG on the SR-MPLS VRF L3Out is not used for routing policies,
such as applying a route map to advertise or deny prefix advertisement.
In this example, the SR-MPLS VRF-1 L3Out within the user tenant is attached to the SR-MPLS infra L3Out,
and the SR-MPLS VRF-2 L3Out within the user tenant is also attached to the SR-MPLS infra L3Out.
In this scenario, you would make configurations similar to the EPG to SR-MPLS L3Out configuration described
previously, with the differences highlighted below:
• Configure the SR-MPLS infra L3Out on the border leaf switches (BL1 and BL2 in the figure above)
• Configure the SR-MPLS VRF L3Out in the user tenant, along with the IP L3Out and user VRFs
• Configure the route map for exporting and importing on prefixes and apply it to the SR-MPLS VRF
L3Out
• Configure the contract and apply it between the external EPGs associated to the IP L3Out and the
SR-MPLS VRF L3Out for traffic forwarding between the IP L3Out and the SR-MPLS L3Out
• DC-PE routers:
• Network Convergence System (NCS) 5500 Series
• ASR 9000 Series
• NCS 540 or 560 routers
• ASR 1000/IOS-XE platforms
• The Cisco Application Centric Infrastructure (ACI)-to-SR-MPLS handoff solution uses a standards-based
implementation with SR-MPLS, BGP-LU, BGP EVPN, and prefix re-origination between BGP EVPN
and VPNv4/v6. Any DC-PE that supports these technologies should be able to support Cisco ACI to
SR-MPLS handoff.
Note When the Cisco Application Centric Infrastructure (ACI) border leaf switch with the SR-MPLS handoff is
connected to a PE device running IOS-XE software, the IOS-XE device should be configured with "neighbor
<aci-leaf> next-hop-unchanged" under the BGP L2VPN EVPN address-family. With the next-hop-unchanged
configuration, the Cisco ACI border leaf switch must learn the remote PE loopback.
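As an illustration of this note, a minimal IOS-XE sketch follows. The neighbor address (the BGP-EVPN loopback of the ACI border leaf switch) and the autonomous system numbers are hypothetical values, not taken from this guide:

router bgp 65000
 neighbor 203.0.113.1 remote-as 65501
 neighbor 203.0.113.1 ebgp-multihop 10
 neighbor 203.0.113.1 update-source Loopback0
 !
 address-family l2vpn evpn
  neighbor 203.0.113.1 activate
  neighbor 203.0.113.1 next-hop-unchanged
 exit-address-family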
Routing Policy
• Supported: Beginning with Cisco APIC release 6.1(1), fabric ports on a remote leaf switch can be deployed
on SR-MPLS infra L3Outs as routed sub-interfaces.
• Supported: Transit SR-MPLS traffic with the same border leaf pair and different VRFs, as
shown in the following figure.
• Supported: Transit SR-MPLS traffic with different border leaf pairs and different VRFs, as
shown in the following figure.
• Transit SR-MPLS traffic within the same VRF and on the same border leaf pair, as shown in
the following figure:
• Unsupported for releases prior to Release 5.1(1).
• Supported for Release 5.1(1) and later, where re-originated routes are prevented from being
advertised back into the same Infra L3Out peers to avoid transient loops in the system.
• If a leaf switch is configured on multiple SR-MPLS infra L3Outs, the same subnets can be advertised
out of all the L3Outs if the prefixes are configured in a single prefix list (in one match rule), and the route
map with that prefix list is then associated with all the SR-MPLS VRF L3Outs.
For example, consider the following configuration:
• A single prefix list P1, with subnets S1 and S2
• SR-MPLS VRF L3Out 1, which is associated with route map R1, with prefix list P1
• SR-MPLS VRF L3Out 2, which is associated with route map R2, with prefix list P1
Because the prefixes are configured in the same prefix list (P1), even though they are associated with
different SR-MPLS VRF L3Outs, the same subnets within prefix list P1 are advertised out of both L3Outs.
On the other hand, consider the following configuration:
• Two prefix lists:
• Prefix list P1, with subnets S1 and S2
• Prefix list P2, with subnets S1 and S2
• SR-MPLS VRF L3Out 1, which is associated with route map R1, with prefix list P1
• SR-MPLS VRF L3Out 2, which is associated with route map R2, with prefix list P2
Because the prefixes are configured in the two prefix lists (P1 and P2), and they are associated with
different SR-MPLS VRF L3Outs, subnets S1 and S2 are not advertised out of both of the L3Outs.
• SR-MPLS VRF L3Outs do not support multicast.
Security Policy
• You can configure a security policy through the external EPG instance profile, which is defined within
an SR-MPLS VRF L3Out. The external EPG instance profile contains IP prefixes that are reachable
through the SR-MPLS network from one or more SR-MPLS infra L3Outs and need the same security
policy.
• You can configure 0/0 prefix in the external EPG instance profile to classify, as part of the external EPG,
the inbound traffic flows originating from any external IP address.
• You can associate an external EPG in the external EPG instance profile with one or more SR-MPLS
VRF L3Outs. When the external EPG instance profile is external to multiple SR-MPLS infra L3Outs,
multiple SR-MPLS VRF L3Outs point to the same external EPG instance profile.
• You must configure contracts between local EPGs and external EPG instance profiles or between external
EPGs associated to different VRF L3Outs (to enable transit routing).
Following are the guidelines and limitations for configuring MPLS Custom QoS policies:
• Data Plane Policers (DPP) are not supported at the SR-MPLS L3Out.
• Layer 2 DPP works in the ingress direction on the MPLS interface.
• Layer 2 DPP works in the egress direction on the MPLS interface in the absence of an egress custom
MPLS QoS policy.
• VRF level policing is not supported.
You will configure the following pieces when configuring the SR-MPLS infra L3Out:
• Nodes
• Only leaf switches are allowed to be configured as nodes in the SR-MPLS infra L3Out (border leaf
switches and remote leaf switches).
• Each SR-MPLS infra L3Out can have border leaf switches from one pod or remote leaf switches from
the same site.
• Each border leaf switch or remote leaf switch can be configured in multiple SR-MPLS infra L3Outs
if it connects to multiple SR-MPLS domains.
• You will also configure the loopback interface underneath the node, and a node SID policy underneath
the loopback interface.
• Interfaces
• Supported types of interfaces are:
• Routed interface or sub-interface
• Routed port channel or port channel sub-interface
• You will also configure the underlay BGP peer policy underneath the interfaces area in the SR-MPLS
infra L3Out.
• QoS rules
• You can configure the MPLS ingress rule and MPLS egress rule through the MPLS QoS policy in
the SR-MPLS infra L3Out.
• If you do not create an MPLS QoS policy, any ingressing MPLS traffic is assigned the default QoS
level.
You will also configure the underlay and overlay through the SR-MPLS infra L3Out:
• Underlay: BGP peer IP (BGP LU and IPv4 peer) configuration as part of the interface configuration.
• Overlay: MP-BGP EVPN remote IPv4 address (MP-BGP EVPN peer) configuration as part of the logical
node profile configuration.
Procedure
Step 1 Navigate to Tenants > infra > Networking > SR-MPLS Infra L3Outs.
Step 2 Right-click on SR-MPLS Infra L3Outs and choose Create SR-MPLS Infra L3Out.
The Connectivity window appears.
b) In the Layer 3 Domain field, choose an existing Layer 3 domain or choose Create L3 Domain to create a new layer
3 domain.
c) In the Pod field, choose a pod, if you have a Multi-Pod configuration.
If you do not have a Multi-Pod configuration, leave the selection at pod 1.
d) (Optional) In the MPLS Custom QoS Policy field, choose an existing QoS policy or choose Create MPLS Custom
QoS Policy to create a new QoS policy.
For more information on creating a new QoS policy, see Creating SR-MPLS Custom QoS Policy Using the GUI, on
page 186.
If you do not create a custom QoS policy, the following default values are assigned:
• All incoming MPLS traffic on the border leaf switch is classified into QoS Level 3 (the default QoS level).
• The border leaf switch does the following:
• Retains the original DSCP values for traffic coming from SR-MPLS without any remarking.
• Forwards packets to the MPLS network with the original COS value of the tenant traffic if the COS
preservation is enabled.
• Forwards packets with the default MPLS EXP value (0) to the SR-MPLS network.
• In addition, the border leaf switch does not change the original DSCP values of the tenant traffic coming from
the application server while forwarding to the SR network.
Step 4 In the Nodes and Interfaces window, enter the necessary information to configure the border leaf nodes and interfaces.
a) In the Node Profile Name and Interface Profile Name fields, determine if you want to use the default naming
convention for the node profile and interface profile names.
The default node profile name is L3Out-name_nodeProfile, and the default interface profile name is
L3Out-name_interfaceProfile, where L3Out-name is the name that you entered in the Name field in the
Connectivity page. Change the profile names in these fields, if necessary.
b) (Optional) In the BFD Interface Policy field, choose an existing BFD interface policy or choose Create BFD
Interface Policy to create a new BFD interface policy.
c) In the Transport Data Plane field, determine the type of routing that you would like to use for the handoff on the
Cisco ACI border leaf switches.
The options are:
• MPLS: Select this option to use Multiprotocol Label Switching (MPLS) for the handoff towards the transport
device.
• SR-MPLS: Select this option to use segment routing (SR) Multiprotocol Label Switching (MPLS) for the handoff
towards the transport device.
d) In the Interface Types area, make the necessary selections in the Layer 3 and Layer 2 fields.
The options are:
• Layer 3:
• Interface: Choose this option to configure a Layer 3 interface to connect the border leaf switch to the
external router.
When choosing this option, the Layer 3 interface can be either a physical port or a direct port-channel,
depending on the specific option selected in the Layer 2 field in this page.
• Sub-Interface: Choose this option to configure a Layer 3 sub-interface to connect the border leaf switch
to the external router.
When choosing this option, a Layer 3 sub-interface is created for either a physical port or a direct
port-channel, depending on the specific option selected in the Layer 2 field in this page.
• Layer 2:
• Port
• Direct Port Channel
e) From the Node ID field drop-down menu, choose the border leaf switch, or node, for the L3Out.
For multi-pod configurations, only the leaf switches (nodes) that are part of the pod that you selected in the previous
screen are displayed.
You might see a warning message appear on your screen, describing how to configure the router ID.
• If you do not have a router ID already configured for this node, go to 4.f, on page 181 for instructions on
configuring a router ID for this node.
• If you have a router ID already configured for this node (for example, if you had configured MP-BGP route
reflectors previously), you have several options:
• Use the same router ID for the SR-MPLS configuration: This is the recommended option. Make a note
of the router ID displayed in this warning to use in the next step in this case, and go to 4.f, on page 181 for
instructions on configuring a router ID for this node.
• Use a different router ID for the SR-MPLS configuration: In this situation, you must first take the node
out of the active path to avoid traffic disruption to the existing application before entering the router ID in
the next step. To take the node out of the active path:
1. Put the node in maintenance mode.
2. Enter the different router ID for the SR-MPLS configuration, as described in 4.f, on page 181.
3. Reload the node.
f) In the Router ID field, enter a unique router ID (the IPv4 or IPv6 address) for the border leaf switch part of the infra
L3Out.
The router ID must be unique across all border leaf switches and the DC-PE.
As described in 4.e, on page 181, if a router ID has already been configured on this node, you have several options:
• If you want to use the same router ID for the SR-MPLS configuration, enter the router ID that was displayed in
the warning message in 4.e, on page 181.
• If you do not want to use the same router ID for the SR-MPLS configuration, or if you did not have a router ID
already configured, enter an IP address (IPv4 or IPv6) in this field for the border leaf switch part of the infra
L3Out, keeping in mind that it has to be a unique router ID.
Once you have settled on an entry for the Router ID, the entries in the BGP-EVPN Loopback and MPLS Transport
Loopback fields are automatically populated with the entry that you provided in the Router ID field.
g) (Optional) Enter an IP address in the BGP-EVPN Loopback field, if necessary.
For BGP-EVPN sessions, the BGP-EVPN loopback is used for the control plane session. Use this field to configure
the MP-BGP EVPN session between the EVPN loopbacks of the border leaf switch and the DC-PE to advertise the
overlay prefixes. The MP-BGP EVPN sessions are established between the BGP-EVPN loopback and the BGP-EVPN
remote peer address (configured in the BGP-EVPN Remote IPv4 Address field in the Connectivity window).
The BGP-EVPN Loopback field is automatically populated with the same entry that you provide in the Router ID
field. Enter a different IP address for the BGP-EVPN loopback address, if you don't want to use the router ID as the
BGP-EVPN loopback address.
Note the following:
• For BGP-EVPN sessions, we recommend that you use a different IP address in the BGP-EVPN Loopback field
from the IP address that you entered in the Router ID field.
• While you can use a different IP address for the BGP-EVPN loopback and the MPLS transport loopback, we
recommend that you use the same loopback for the BGP-EVPN and the MPLS transport loopback on the ACI
border leaf switch.
h) In the MPLS Transport Loopback field, enter the address for the MPLS transport loopback.
The MPLS transport loopback is used to build the data plane session between the ACI border leaf switch and the
DC-PE, where the MPLS transport loopback becomes the next-hop for the prefixes advertised from the border leaf
switches to the DC-PE routers. See MP-BGP EVPN Session Between the Cisco ACI Border Leaf Switch and the
DC-PE, on page 157 for more information.
Note the following:
• For BGP-EVPN sessions, we recommend that you use a different IP address in the MPLS Transport Loopback
field from the IP address that you entered in the Router ID field.
• While you can use a different IP address for the BGP-EVPN loopback and the MPLS transport loopback, we
recommend that you use the same loopback for the BGP-EVPN and the MPLS transport loopback on the ACI
border leaf switch.
This is the IP address assigned to the Layer 3 interface/sub-interface/port channel that you configured in a previous
step.
o) In the Peer IPv4 Address field, enter the BGP-Label unicast peer IP address.
This is the IP address of the interface on the router that is directly connected to the border leaf switch.
p) In the Remote ASN field, enter the BGP-Label Autonomous System Number of the directly-connected router.
q) Determine if you want to configure additional interfaces for this node for the SR-MPLS infra L3Out.
• If you do not want to configure additional interfaces for this node for this SR-MPLS infra L3Out, skip to 4.s,
on page 183.
• If you want to configure additional interfaces for this node for this SR-MPLS infra L3Out, click + in the Interfaces
area to bring up the same options for another interface for this node.
Note
If you want to delete the information that you entered for an interface for this node, or if you want to delete an
interface row that you added by accident, click the trash can icon for the interface row that you want to delete.
r) Determine if you want to configure additional nodes for this SR-MPLS infra L3Out.
• If you do not want to configure additional nodes for this SR-MPLS infra L3Out, skip to 4.s, on page 183.
• If you want to configure additional nodes for this SR-MPLS infra L3Out, click + in the Nodes area to bring up
the same options for another node.
Note
If you want to delete the information that you entered for a node, or if you want to delete a node row that you
added by accident, click the trash can icon for the node row that you want to delete.
s) When you have entered the remaining additional information in the Nodes and Interfaces window, click Finish to
complete the necessary configurations in the Create SR-MPLS Infra L3Out wizard.
What to do next
Configure an SR-MPLS VRF L3Out using the procedures provided in Configuring an SR-MPLS VRF L3Out
Using the GUI, on page 183.
Procedure
Step 1 Configure the SR-MPLS VRF L3Out by navigating to the Create SR-MPLS VRF L3Out window for the tenant
(Tenants > tenant > Networking > SR-MPLS VRF L3Outs).
Step 2 Right-click on SR-MPLS VRF L3Outs and select Create SR-MPLS VRF L3Out.
The Create SR-MPLS VRF L3Out window appears.
Figure 22: Create SR-MPLS VRF L3Out
Step 3 In the Name field, enter a name for the SR-MPLS VRF L3Out.
This will be the name for the policy controlling connectivity to the outside. The name can be up to 64 alphanumeric
characters.
Note
You cannot change this name after the object has been saved.
Step 4 In the VRF field, select an existing VRF or click Create VRF to create a new VRF.
Step 5 In the SR-MPLS Infra L3Out field, select an existing SR-MPLS infra L3Out or click Create SR-MPLS Infra L3Out
to create a new SR-MPLS infra L3Out.
For more information on creating an SR-MPLS infra L3Out, see Configuring an SR-MPLS Infra L3Out Using the GUI,
on page 176.
Step 6 Navigate to the External EPGs area and, in the External EPG Name area, enter a unique name for the external EPG
to be used for this SR-MPLS VRF L3Out.
Step 7 Navigate to the Subnets and Contracts area and configure individual subnets within this EPG.
Note
If you want to configure the subnet fields but you do not see the following fields, click Show Subnets and Contracts
to display the following fields.
a) In the IP Prefix field, enter an IP address and netmask for the subnet.
b) In the Inter VRF Policy field, determine if you want to configure inter-VRF policies.
• If you do not want to configure inter-VRF policies, skip to 7.c, on page 185.
• If you want to configure inter-VRF policies, select the appropriate inter-VRF policy that you want to use.
The options are:
• Route Leaking.
If you select Route Leaking, the Aggregate field appears. Click the box next to Aggregate if you also
want to enable this option.
• Security.
Note that you can select one of the two options listed above or both options for the Inter VRF Policy field.
c) In the Provided Contract field, select an existing provider contract or click Create Contract to create a provider
contract.
d) In the Consumed Contract field, select an existing consumer contract or click Create Contract to create a consumer
contract.
e) Determine if you want to configure additional subnets for this external EPG.
• If you do not want to configure additional subnets for this external EPG, skip to Step 8, on page 185.
• If you want to configure additional subnets for this external EPG, click + in the Subnet and Contracts area
to bring up the same options for another subnet.
Note
If you want to delete the information that you entered for a subnet, or if you want to delete a subnet row that
you added by accident, click the trash can icon for the subnet row that you want to delete.
Step 8 Determine if you want to create additional external EPGs to be used for this SR-MPLS VRF L3Out.
• If you do not want to configure additional external EPGs to be used for this SR-MPLS VRF L3Out, skip to Step 9,
on page 186.
• If you want to configure additional external EPGs to be used for this SR-MPLS VRF L3Out, click + in the External
EPG Name area to bring up the same options for another external EPG.
Note
If you want to delete the information that you entered for an external EPG, or if you want to delete an external
EPG area that you added by accident, click the trash can icon for the external EPG area that you want to delete.
Step 9 In the Route Maps area, configure the outbound and inbound route maps.
Within each SR-MPLS VRF L3Out:
• Defining the outbound route map (export routing policy) is mandatory. This is needed to be able to advertise
prefixes toward the external DC-PE routers.
• Defining the inbound route map (import routing policy) is optional, because, by default, all the prefixes received
from the DC-PE routers are allowed into the fabric.
a) In the Outbound field, select an existing export route map or click Create Route Maps for Route Control to
create a new export route map.
b) In the Inbound field, select an existing import route map or click Create Route Maps for Route Control to create
a new import route map.
Step 10 When you have completed the configurations in the Create SR-MPLS VRF L3Out window, click Submit.
Procedure
Step 1 From the top menu bar, navigate to Tenants > infra.
Step 2 In the left pane, select infra > Policies > Protocol > MPLS Custom QoS.
Step 3 Right click the MPLS Custom QoS folder and choose Create MPLS Custom QoS Policy.
Step 4 In the Create MPLS Custom QoS Policy window that opens, provide the name and description of the policy you're
creating.
Step 5 In the MPLS Ingress Rule area, click + to add an ingress QoS translation rule.
Any traffic coming into the border leaf (BL) connected to the MPLS network will be checked for the MPLS EXP value
and if a match is found, the traffic is classified into an ACI QoS Level and marked with appropriate CoS and DSCP
values.
a) In the Priority field, select the priority for the ingress rule.
This is the QoS Level you want to assign for the traffic within the ACI fabric, which ACI uses to prioritize the traffic
within the fabric. The options range from Level1 to Level6. The default value is Level3. If you do not make a
selection in this field, the traffic will automatically be assigned a Level3 priority.
b) In the EXP Range From and EXP Range To fields, specify the EXP range of the ingressing MPLS packet you want
to match.
c) In the Target DSCP field, select the DSCP value to assign to the packet when it's inside the ACI fabric.
The DSCP value specified is set in the original traffic received from the external network, so it will be re-exposed
only when the traffic is VXLAN decapsulated on the destination ACI leaf node.
The default is Unspecified, which means that the original DSCP value of the packet will be retained.
d) In the Target CoS field, select the CoS value to assign to the packet when it's inside the ACI fabric.
The CoS value specified is set in the original traffic received from the external network, so it will be re-exposed only
when the traffic is VXLAN decapsulated on the destination ACI leaf node.
The default is Unspecified, which means that the original CoS value of the packet will be retained, but only if the
CoS preservation option is enabled in the fabric.
e) Click Update to save the ingress rule.
f) Repeat this step for any additional ingress QoS policy rules.
Step 6 In the MPLS Egress Rule area, click + to add an egress QoS translation rule.
When the traffic is leaving the fabric out of the border leaf's MPLS interface, it will be matched based on the DSCP value
of the packet and if a match is found, the MPLS EXP and CoS values will be set based on the policy.
a) Using the DSCP Range From and DSCP Range To dropdowns, specify the DSCP range of the ACI fabric packet
you want to match for assigning the egressing MPLS packet's priority.
b) From the Target EXP dropdown, select the EXP value you want to assign to the egressing MPLS packet.
c) From the Target CoS dropdown, select the CoS value you want to assign to the egressing MPLS packet.
d) Click Update to save the egress rule.
e) Repeat this step for any additional egress QoS policy rules.
Step 7 Click OK to complete the creation of the MPLS custom QoS Policy.
To display statistics information for all the interfaces and VRFs in your system, navigate to:
Tenant > infra > Networking > SR-MPLS Infra L3Outs
The SR-MPLS Infra L3Outs panel is displayed, showing all of the SR-MPLS infra L3Outs configured on
your system. Remaining at the upper-level SR-MPLS Infra L3Outs panel, navigate to the appropriate statistics
page, depending on the type of statistics that you want to display:
• Click the Interface Stats tab to display a summary of the statistics for all of the MPLS interfaces on
your system. Each row in this window displays MPLS statistics information for a specific interface on
a specific node.
Note The interface statistics shown in the main SR-MPLS infra L3Outs page are displayed only for the
SR-MPLS-enabled interfaces on border leaf switch models with "FX2" or "GX" at the end of the
switch name.
To see other levels of MPLS interface statistics information, see Displaying SR-MPLS Statistics for
Interfaces, on page 189.
• Click the VRF Stats tab to display a summary of the statistics for all of the MPLS VRFs on your system.
Each row in this window displays MPLS statistics information for a specific VRF configured on a specific
node.
The VRF statistics provided in the SR-MPLS infra L3Out properties page are the individual VRF statistics
on the given border leaf switch or remote leaf switch where the provider label of the SR-MPLS infra
L3Out is consumed.
To see other levels of MPLS VRF statistics information, see Displaying SR-MPLS Statistics for VRFs,
on page 190.
To change the type of statistics that are shown on a statistics page, click the checkbox to bring up the Select
Stats window, then move entries from the left column to the right column to show different statistics, and
from the right column to the left column to remove certain statistics from view.
To change the layout of the statistics in this page to show statistics in a table format, click the icon with three
horizontal bars and select Table View.
• To display detailed aggregate interface statistics for all of the interfaces in the SR-MPLS VRF L3Outs
under an SR-MPLS infra L3Out, navigate to that SR-MPLS infra L3Out:
Tenant > infra > Networking > SR-MPLS Infra L3Outs > SR-MPLS_infra_L3Out_name
Click the Stats tab to display detailed aggregate interface statistics for all of the interfaces in the SR-MPLS
VRF L3Outs under that particular SR-MPLS infra L3Out.
• To display statistics for a specific interface on a specific leaf switch, navigate to that interfaces area on
the leaf switch:
Fabric > Inventory > Pod # > leaf_switch > Interfaces, then click either Routed Interfaces or
Encapsulated Routed Interfaces.
Click on the specific interface that you want statistic information for, then click the Stats tab.
To change the type of statistics that are shown on a statistics page, click the checkbox to bring up the Select
Stats window, then move entries from the left column to the right column to show different statistics, and
from the right column to the left column to remove certain statistics from view.
To change the layout of the statistics in this page to show statistics in a table format, click the icon with three
horizontal bars and select Table View.
• To display detailed aggregate VRF statistics for a specific VRF, navigate to that VRF:
Tenant > tenant_name > Networking > VRFs > VRF_name
Click the Stats tab to display the aggregate VRF statistics for this particular VRF. Note that this VRF is
being used by one of the SR-MPLS L3Outs, and this SR-MPLS L3Out might have multiple leaf switches,
with multiple interfaces for each leaf switch. The statistics shown in this window are an aggregate of all
the interfaces in this SR-MPLS L3Out that are used by this VRF.
• To display VRF statistics for a specific leaf switch, navigate to the VRF contexts for that leaf switch:
Fabric > Inventory > Pod # > leaf_switch > VRF Contexts > VRF_context_name
Click the Stats tab to display the statistics for this VRF for this specific leaf switch.
We recommend that all nodes in an SR domain have the same SR-GB configuration.
Following are important guidelines to consider when configuring SR-MPLS global block:
• The allowed configurable SR-GB range is 16000-471804.
• The default SR-GB range in the ACI fabric is 16000-23999.
• ACI always advertises implicit null for the underlay label (transport loopback).
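Because all nodes in the SR domain should use the same SR-GB, the corresponding label range is typically also configured on the DC-PE. As an illustration only, on an IOS-XR based DC-PE a minimal sketch aligned with the ACI default range might look like the following; verify the exact syntax for your DC-PE platform and release:

segment-routing
 global-block 16000 23999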
Procedure
Step 2 Access the default MPLS Global Configurations screen by double-clicking on default in the main SR-MPLS Global
Configurations screen or by clicking on default in the left nav bar, under Mpls Global Configurations.
The default SR-MPLS Global Configurations window appears.
Step 3 In the SR Global Block Minimum field, enter the minimum value for the SR-GB range.
The lowest allowable value in this field is 16000.
Step 4 In the SR Global Block Maximum field, enter the maximum value for the SR-GB range.
The highest allowable value in this field is 471804.
that, currently, external clients are able to come in through the L3Outs used in the IP handoff configuration,
but once you have completed the procedures in this section, the external clients can then come in through the
L3Outs used in the SR-MPLS handoff configuration.
Note Throughout these procedures, the following terms are used to distinguish between the two types of L3Outs:
• IP-based L3Out: Used for the previously-configured user tenant L3Out that is using a pre-Release 5.0(1)
IP handoff configuration.
• SR-MPLS L3Out: Used for the newly-configured user tenant L3Out that has been configured using the
new SR-MPLS components that have been introduced in Cisco APIC Release 5.0(1).
Following are the overall steps that you will go through as part of this process:
• Configure the external EPGs on the SR-MPLS VRF L3Out to mirror the IP-based L3Out configuration.
This includes the subnets configuration for classification of inbound traffic and the contracts provided
or consumed by the external EPGs.
• Redirect inbound and outbound traffic to ensure that it starts preferring the SR-MPLS L3Out.
• Disconnect the IP-based L3Out.
The following sections provide detailed instructions for each of the overall steps listed above.
Procedure
Step 1 Create a new infra SR-MPLS L3Out, if you have not done so already.
See Configuring an SR-MPLS Infra L3Out Using the GUI, on page 176 for those instructions, then return here.
Step 2 Create a new user tenant SR-MPLS L3Out, if you have not done so already.
See Configuring an SR-MPLS VRF L3Out Using the GUI, on page 183 for those instructions, then return here. Note that
this L3Out should be associated to the same VRF of the previously-configured IP-based L3Out.
As part of the process for creating the new user tenant SR-MPLS L3Out, you will be asked to configure the external EPG
for this SR-MPLS L3Out.
• For the external EPG for the new SR-MPLS L3Out, enter the same IP prefix information that you currently have
for your previously-configured IP-based L3Out.
• If you have more than one external EPG configured for your previously-configured IP-based L3Out, create additional
external EPGs for the new SR-MPLS L3Out and match the same IP prefix information for each EPG.
In the end, the external EPG settings that you configure for the new SR-MPLS L3Out, with the accompanying subnet
settings, should match the external EPG and subnet settings that you had previously configured for the IP-based L3Out.
Once you have completed the procedures for creating the new user tenant SR-MPLS L3Out, you should now have two
L3Outs (two paths in BGP):
• The existing, previously-configured IP-based L3Out that is using a pre-Release 5.0(1) IP handoff configuration, as
mentioned in the Before you begin area in Migrating from IP Handoff Configuration to SR Handoff Configuration,
on page 192.
• The new SR-MPLS L3Out that you created using the new SR-MPLS components that have been introduced in Cisco
APIC Release 5.0(1).
Step 3 Ensure the same security policy is applied to the external EPGs of the SR-MPLS L3Out as you had for the IP-based
L3Out.
In the non-border leaf switches and the border leaf switches, the new security policy in the external EPG that you configured
when you created the new SR-MPLS L3Out will result in a fault for every subnet whose prefix clashes with the subnet
prefix in any EPG of the previously-configured IP-based L3Out. This is a fault that does not impact functionality, as long
as the same security policies are applied to the same external EPGs of both L3Outs.
What to do next
Redirect inbound and outbound traffic to ensure that it starts preferring the SR-MPLS L3Out using the
procedures provided in Redirecting Traffic to SR-MPLS L3Out, on page 194.
Procedure
Step 1 Navigate to the BGP Peer Connectivity Profile for the previously-configured IP-based L3Out.
In the Navigation pane, navigate to Tenants > tenant_name_for_IP_handoff_L3Out > Networking > L3Outs >
L3Out_name > Logical Node Profiles > logical_profile_name > Logical Interface Profiles >
logical_interface_profile_name > BGP_peer_connectivity_profile .
Step 2 Click on the BGP Peer Connectivity Profile in the left nav bar so that the BGP Peer Connectivity Profile page is displayed
in the right main window.
Step 3 Scroll down the page until you see the Route Control Profile area in the BGP Peer Connectivity Profile page.
Step 4 Determine if route control policies were already configured for the existing IP-based L3Out.
You may or may not have had route control policies configured for the existing IP-based L3Out; however, for the new
SR-MPLS L3Out, you will need to have route control policies configured. If you had route control policies configured
for the existing IP-based L3Out, you can use those route control policies for the new SR-MPLS L3Out; otherwise, you
will have to create new route control policies for the SR-MPLS L3Out.
• If you see two route control profiles displayed in the Route Control Profile table:
• An export route control policy, shown with Route Export Policy in the Direction column in the table.
• An import route control policy, shown with Route Import Policy in the Direction column in the table.
then route control policies have already been configured for the IP-based L3Out. Go to Step 5, on page 196.
• If you do not see two route control profiles displayed in the Route Control Profiles table, then create a new route
map that will be used for the SR-MPLS L3Out:
a) In the Navigation pane, expand the Tenants > tenant_name_for_IP_handoff_L3Out > Policies > Protocol.
b) Right-click on Route Maps for Route Control and select Create Route Maps for Route Control.
c) In the Create Route Maps for Route Control dialog box, in the Name field, enter a route profile name.
d) In the Type field, you must choose Match Routing Policy Only.
e) In the Contexts area, click the + sign to open the Create Route Control Context dialog box and perform the following
actions:
1. Populate the Order and the Name fields as desired.
2. In the Match Rule field, click Create Match Rule.
3. In the Create Match Rule dialog box, in the Name field, enter a name for the match rule.
4. Enter the necessary information in the appropriate fields (Match Regex Community Terms, Match Community
Terms and Match Prefix), then click Submit.
5. In the Set Rule field, click Create Set Rules for a Route Map.
6. In the Create Set Rules for a Route Map dialog box, in the Name field, enter a name for the action rule profile.
7. Choose the desired attributes, and related community, criteria, tags, and preferences. Click Finish.
8. In the Create Route Control Context window, click OK.
9. In the Create Route Maps for Route Control dialog box, click Submit.
g) Click on the BGP Peer Connectivity Profile in the left nav bar so that the BGP Peer Connectivity Profile page is
displayed in the right main window.
h) Scroll down to the Route Control Profile field, then click + to configure the following:
• Name: Select the route-map that you just configured for the route import policy.
• Direction: Select Route Import Policy in the Direction field.
Repeat these steps to select the route-map for the route export policy and set the Route Export Policy in the Direction
field.
Step 5 Force the BGP to choose the new SR path by configuring the route policies for all the peers in the border leaf switches
for the VRF that will be undergoing the migration.
• If the previously-configured IP-based L3Out was configured for eBGP, configure both the route import policy and
the route export policy for the IP-based L3Out peer to have an additional AS path entry (for example, the same AS
as the local entry). This is the most typical scenario.
Note
The following procedures assume you do not have set rules configured already for the route map. If you do have set
rules configured already for the route map, edit the existing set rules to add the additional AS path entry (check the
Set AS Path checkbox and select the criterion Prepend AS, then click + to prepend AS numbers).
a. Navigate to Tenant > tenant_name_for_IP_handoff_L3Out > Policies > Protocol > Set Rules and right click
Create Set Rules for a Route Map.
The Create Set Rules For A Route Map window appears.
b. In the Create Set Rules For A Route Map dialog box, perform the following tasks:
1. In the Name field, enter a name for these set rules.
2. Check the Set AS Path checkbox, then click Next.
3. In the AS Path window, click + to open the Create Set AS Path dialog box.
j. First locate the export route control profile that is being used for this existing IP-based L3Out and click on that
route profile.
The properties page for this route control profile appears in the main panel.
k. Locate the route control context entry in the page and double-click the route control context entry.
The properties page for this route control context appears.
l. In the Set Rule area, select the set rule that you created earlier in these procedures with the additional AS path
entry, then click Submit.
m. Now locate the import route control profile that is being used for this existing IP-based L3Out and click on that
route profile, then repeat these steps to use the set rule with the additional AS path entry for the import route
control profile. Doing this will influence inbound traffic, so that an external source starts preferring the SR-MPLS path.
• If the previously-configured IP-based L3Out was configured for iBGP: because SR-MPLS supports only eBGP, you
will need to use the local preference setting to steer traffic to the eBGP-configured SR-MPLS L3Out described in the
previous bullet. Configure both the route import policy and the route export policy for the IP-based L3Out peer to
have a lower local preference value (a generic route-map sketch illustrating both approaches follows this procedure):
a. Navigate to Tenant > tenant_name_for_IP_handoff_L3Out > Policies > Protocol > Set Rules and right click
Create Set Rules for a Route Map.
The Create Set Rules For A Route Map window appears.
b. In the Name field, enter a name.
c. Check the Set Preference checkbox.
The Preference field appears.
d. Enter the BGP local preference path value.
The range is 0-4294967295.
e. Click Finish.
f. Navigate back to the BGP Peer Connectivity Profile screen for this existing IP-based L3Out:
Tenants > tenant_name_for_IP_handoff_L3Out > Networking > L3Outs > L3out-name > Logical Node
Profiles > logical-node-profile-name > Logical Interface Profiles > logical-interface-profile-name >
BGP_peer_connectivity_profile
g. Scroll down to the Route Control Profile area and note the route profile names for both the export route control
policy and the import route control policy that are being used for this existing IP-based L3Out.
h. Navigate to Tenants > tenant_name_for_IP_handoff_L3Out > Policies > Protocol > Route Maps for Route
Control.
i. First locate the export route control profile that is being used for this existing IP-based L3Out and click on that
route profile.
The properties page for this route control profile appears in the main panel.
j. Locate the route control context entry in the page and double-click the route control context entry.
The properties page for this route control context appears.
k. In the Set Rule area, select the set rule that you created earlier in these procedures with the BGP local preference
path, then click Submit.
l. Now locate the import route control profile that is being used for this existing IP-based L3Out and click on that
route profile, then repeat these steps to use the set rule with the BGP local preference path entry for the import
route control profile.
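The GUI set rules used in Step 5 correspond conceptually to standard BGP route-map actions. The following generic sketch is for illustration only; the route-map names, autonomous system number, and preference value are hypothetical and are not taken from this guide:

route-map IP-L3OUT-EBGP-CASE permit 10
  set as-path prepend 65001
!
route-map IP-L3OUT-IBGP-CASE permit 10
  set local-preference 50

In the eBGP case, prepending the local AS makes the paths through the IP-based L3Out longer and therefore less preferred; in the iBGP case, a lower local preference has the same effect, so traffic shifts to the SR-MPLS L3Out.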
What to do next
Disconnect the IP-based L3Out using the procedures provided in Disconnecting the IP-Based L3Out, on page
198.
Procedure
Either of the methods above will result in the fault being cleared, and the external EPG in the SR-MPLS L3Out will now
be deployed.
As part of the process of changing the security policy from the IP-based L3Out to the SR-MPLS L3Out, there might be
up to a 15-second drop. After that period, the outbound traffic from ACI to outside will take the SR-MPLS path.
If you see that the previously-configured IP-based L3Out was migrated successfully to the new SR-MPLS L3Out, you
can then delete the previously-configured IP-based L3Out.
Step 2 Determine if you have additional L3Outs/VRFs that you want to migrate to SR-MPLS.
Repeat the procedures in Migrating from IP Handoff Configuration to SR Handoff Configuration, on page 192 to migrate
other user L3Outs and VRFs to SR-MPLS.
The same procedures in Migrating from IP Handoff Configuration to SR Handoff Configuration, on page 192 can also
be used to migrate between a tenant GOLF L3Out and a tenant SR-MPLS L3Out.
2. As a transit case, this prefix can be advertised out externally through an SR-MPLS infra L3Out.
3. This prefix could then be imported back into the ACI fabric from the core, either in the same VRF or in
a different VRF.
4. A BGP routing loop would occur when this imported prefix is then advertised back to the originating
switch, either from the same VRF or through a leak from a different VRF.
Beginning with Release 5.1(3), the new BGP Domain-Path feature is available, which helps with BGP routing
loops in the following ways:
• Keeps track of the distinct routing domains traversed by a route within the same VPN or extended VRFs,
as well as across different VPNs or VRFs
• Detects when a route loops back to a VRF in a domain where it has already traversed (typically at a
border leaf switch that is the stitching point between domains, but also at an internal switch, in some
cases)
• Prevents the route from getting imported or accepted when it would lead to a loop
Within an ACI fabric, the VRF scope is global and is extended to all switches where it is configured. Therefore,
a route that is exported out of a domain in a VRF is blocked from being received back into the VRF on any
other switch.
The following components are used with the BGP Domain-Path feature for loop prevention (a conceptual sketch follows this list):
• Routing domain ID: Every tenant VRF in an ACI site is associated with one internal fabric domain,
one domain for each VRF in each SR-MPLS infra L3Out, and one domain for each IP L3Out. When the
BGP Domain-Path feature is enabled, each of these domains is assigned a unique routing domain ID, in
the format Base:<variable>, where:
• Base is the non-zero value that was entered in the Domain ID Base field in the BGP Route Reflector
Policy page
• <variable> is a randomly-generated value specifically for that domain
• Domain path: The domain segments traversed by a route are tracked using a BGP domain path attribute:
• The domain ID of the VRF for the source domain where the route is received is prepended to the
domain path
• The source domain ID is prepended to the domain path while re-originating a route across domains
on the border leaf switches
• An external route is not accepted if any of the local domain IDs for the VRFs is in the domain path
• The domain path is carried as an optional and transitive BGP path attribute with each domain
segment, represented as <Domain-ID:SAFI>
• The ACI border leaf switches prepend the VRF internal domain ID for both locally originated and
external routes to track leaks within the domain
• A route from the internal domain can be imported and installed in a VRF on a node with a conflicting
external domain ID to provide an internal backup or transit path
• For infra L3Out peers, the advertisement of a route to a peer is skipped if the domain ID of the peer
domain is present in the domain path of the route (outbound check is not applicable for IP L3Out
peers)
• The border leaf switches and non-border leaf switches will both process the domain path attribute
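The following Python sketch is a conceptual illustration, not product code, of the prepend-and-check behavior
described above: the source domain ID is prepended when a route is re-originated across domains, and an external
route is rejected when any local domain ID already appears in its domain path. The domain IDs are hypothetical
and the <Domain-ID:SAFI> encoding is simplified.

    # Conceptual sketch of the Domain-Path loop check; all IDs are hypothetical.
    local_domain_ids = {"100:1111", "100:2222"}   # e.g. VRF internal domain and SR-MPLS infra L3Out domain

    def reoriginate(domain_path, source_domain_id):
        # Prepend the source domain ID when re-originating a route across domains.
        return [source_domain_id] + domain_path

    def accept_route(domain_path):
        # Reject an external route whose domain path already contains a local domain ID.
        return not any(domain_id in local_domain_ids for domain_id in domain_path)

    outgoing = reoriginate(["100:3333"], "100:2222")   # route re-originated toward another domain
    incoming = ["100:3333", "100:1111"]                # received route that already traversed the local VRF domain
    print(outgoing)                # ['100:2222', '100:3333']
    print(accept_route(incoming))  # False -> the route is not imported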
Note You can configure the BGP Domain-Path feature for loop prevention, or simply enable the configuration to
send a received domain path, through the GUI or REST API. You cannot configure the BGP Domain-Path
feature for loop prevention or enable the configuration to send a received domain path through the NX-OS
style CLI.
Note When upgrading to Release 5.1(3) from a previous release, if you have contracts configured for inter-VRF
shared services, those contracts might not work as expected with the BGP Domain-Path feature for loop
prevention because the BGP domain ID would not have been set in those contracts that were configured before
you upgraded to Release 5.1(3). In those situations, delete the contract and then add the contract back, which
will allow the BGP domain update to occur. This is only an issue when you have contracts that were configured
prior to your upgrade to Release 5.1(3); this is not an issue when you create new contracts after you've
completed the upgrade to Release 5.1(3).
Configuring the BGP Domain-Path Feature for Loop Prevention Using the GUI
Before you begin
Become familiar with the BGP Domain-Path feature using the information provided in About the BGP
Domain-Path Feature for Loop Prevention, on page 199.
Procedure
Step 1 If you want to use the BGP Domain-Path feature for loop prevention, set the BGP Domain-Path attribute on the BGP
route reflector.
Note
If you do not want to use the BGP Domain-Path feature for loop prevention but you still want to send a received domain
path, do not enable the BGP Domain-Path feature on the BGP route reflector in this step. Instead, go directly to Step 2,
on page 204 to only enable the Send Domain Path field in the appropriate BGP connectivity window.
When the BGP Domain-Path feature for loop prevention is enabled, an implicit routing domain ID of the format
Base:<variable> will be allocated, where:
• Base is the non-zero value that you entered in this Domain ID Base field
• <variable> is a randomly-generated value specifically for the VRF or L3Out that will be used for the BGP
Domain-Path feature for loop prevention
The Domain-Path attribute is processed on the inbound directions to check for loops based on the routing domain
IDs in the path. The Domain-Path attribute is sent to a peer, which is controlled separately through the BGP peer-level
Send Domain Path field in the IP L3Out or in the SR-MPLS infraL3Out, as described in the next step.
Step 2 To send the BGP domain path attribute to a peer, enable the Send Domain Path field in the appropriate BGP connectivity
window.
If you want to use the BGP Domain-Path feature for loop prevention, first set the Domain Base ID in Step 1, on page
204, then enable the Send Domain Path field here. If you do not want to use the BGP Domain-Path feature for loop
prevention but you still want to send a received domain path, only enable the Send Domain Path field here (do not set
the Domain Base ID in Step 1, on page 204 in that case).
• To enable the Send Domain Path field for a SR-MPLS infra L3Out peer:
a. Navigate to Tenant > infra > Networking > SR-MPLS Infra L3Outs > SR-MPLS-infra-L3Out_name >
Logical Node Profiles > log_node_prof_name.
The Logical Node Profile window for this configured SR-MPLS infra L3Out appears.
b. Locate the BGP-EVPN Connectivity Profile area, then determine if you want to create a new BGP-EVPN
connectivity policy or if you want to enable the Send Domain Path field in an existing BGP-EVPN connectivity
policy.
• If you want to create a new BGP-EVPN connectivity policy, click + above the table in the
BGP-EVPN Connectivity Profile area. The Create BGP-EVPN Connectivity Policy window appears.
• If you want to enable the Send Domain Path field in an existing BGP-EVPN connectivity policy,
double-click on that policy in the table in the BGP-EVPN Connectivity Profile area. The BGP-EVPN
Connectivity Policy window appears.
Step 3 Navigate to the appropriate areas to see the routing IDs assigned to the various domains.
• To see the routing ID assigned to the VRF domain, navigate to:
Tenants > tenant_name > Networking > VRFs > VRF_name, then click on the Policy tab for that VRF and locate
the entry in the Routing Domain ID field in the VRF window.
• To see the routing ID assigned to the IP L3Out domain, navigate to:
Tenants > tenant_name > Networking > L3Outs > L3Out_name > Logical Node Profiles > log_node_prof_name >
BGP Peer, then locate the entry in the Routing Domain ID field in the BGP Peer Connectivity Profile window.
• To see the routing ID assigned to the SR-MPLS infra L3Out domain, navigate to:
Tenants > tenant_name > Networking > SR-MPLS VRF L3Outs > SR-MPLS_VRF_L3Out_name, then locate
the entry in the Routing Domain ID column in the SR-MPLS Infra L3Outs table in the window for that SR-MPLS
VRF L3Out.
Note Procedures in this document describe how to configure ACI Border Gateways by using the GUI and REST
API. You cannot configure ACI Border Gateways through the NX-OS style CLI at this time.
compartmentalization, and using DCI. The BGWs provide the network control boundary that is necessary for
traffic enforcement and failure containment functionality.
A site-local EVPN domain consists of EVPN nodes with the same site identifier. BGWs are part of both the
site-specific EVPN domain and a common EVPN domain that interconnects them with the BGWs from other sites.
For a given site, the BGWs make all other sites appear to be reachable only through them. This means:
• Site-local bridging domains are interconnected only via BGWs with bridging domains from other sites.
• Site-local routing domains are interconnected only via BGWs with routing domains from other sites.
VXLAN Site ID
Starting from Cisco APIC 6.1(2), you must configure a site ID. You cannot configure the border gateway
set policy without this site ID.
Refer to VXLAN Site ID, on page 228 to know more about creating a site ID for your VXLAN site.
Note If you have already configured the ACI Border Gateway feature on Cisco APIC 6.1(1) and upgrade to Cisco
APIC 6.1(2) without creating a VXLAN site ID, a fault is generated for all the stretched VRFs and bridge
domains.
• Egress rules: As part of the egress VXLAN policy, you can control the values that are marked
in the outer DSCP and CoS fields. These values are matched against the inner DSCP values, and the outer
DSCP and CoS values are set accordingly. If you do not specify any values, the outer DSCP and CoS values
are set to the default value of zero.
• Forwards Packets: Forwards packets to the VXLAN network, replacing the default VXLAN value (0) with the
ol_dscp value.
If CoS preservation is enabled, the ol_dscp value is the encoded value corresponding to a combination
of the QoS level and the CoS value of the packet when it entered the fabric. This is sent out along with
the preserved CoS value when the packet exits the fabric. If you want to enable CoS preservation, it is
advisable to enable explicit ol_dscp remarking through the egress VXLAN QoS rules (see the sketch below).
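The following Python sketch is purely illustrative (not an APIC API) of the egress remarking lookup described
above: the inner DSCP value is matched against configured egress rules, and when no rule matches, the outer DSCP
and CoS default to zero. The DSCP and CoS values are hypothetical.

    # Illustrative egress remarking lookup; DSCP/CoS values are hypothetical.
    egress_rules = {            # inner DSCP -> (outer DSCP, outer CoS)
        46: (46, 5),
        26: (26, 3),
    }

    def egress_mark(inner_dscp):
        # When no rule matches, the outer DSCP and CoS are set to the default value of zero.
        return egress_rules.get(inner_dscp, (0, 0))

    print(egress_mark(46))   # (46, 5)
    print(egress_mark(10))   # (0, 0)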
ESGs can only communicate with other ESGs according to the contract rules. The administrator uses a contract
to select the types of traffic that can pass between ESGs, including the protocols and ports that are allowed.
You must classify endpoints connected inside the ACI fabric and external network prefixes learned through
L3Out connections by using one of the existing selectors, such as EPG selector, IP subnet selector, and so on.
Contracts
Contracts are the Cisco ACI equivalent of access control lists (ACLs). Endpoint Security Groups (ESG)s can
only communicate with other ESGs according to the contract rules. You can use a contract to select the types
of traffic that can pass between ESGs, including the protocols and ports allowed. An ESG can be a provider,
consumer, or both provider and consumer of a contract, and can consume multiple contracts simultaneously.
ESGs can also be part of a preferred group so that multiple ESGs can talk freely with other ESGs that are part
of the preferred group.
EVPN VXLAN Selectors
Selectors are configured under each ESG with a variety of matching criteria to classify endpoints to the ESG.
Starting with Cisco APIC 6.1(2), two new selectors have been added to classify endpoints and external
destinations learned from remote VXLAN EVPN fabrics.
For more information on the existing selectors that are available on Cisco APIC, refer to Endpoint Security
Groups section of the Cisco APIC Security Configuration Guide, Release 6.1(x).
These newly supported selectors are only applicable for remote VXLAN endpoints:
• VXLAN Stretched Bridge Domain Selectors
Use this selector to classify all the L2 MAC addresses learned from the remote VXLAN fabrics into a
corresponding ESG. This selector can be configured only for bridge domains that are VXLAN stretched.
The endpoints from all the remote fabrics belonging to this bridge domain are classified as part of the
same ESG.
See VXLAN Stretched Bridge Domain Selector, on page 233 for more information.
• VXLAN External Subnet Selector
Use this selector to classify EVPN Type-5 prefixes received from a remote VXLAN fabric into a
corresponding ESG. You cannot have the same prefix configured under an external subnet selector and
an external EPG selector under a local L3Out. If you have an overlap, the longest prefix match determines
the classification of the prefix. You cannot configure the default (0.0.0.0/0) prefix as VXLAN external
subnet selector. A specific prefix configuration is the preferred approach. As a workaround, 0.0.0.0/1 or
128.0.0.0/1 can be used if the Catch All entry is required.
See VXLAN External Subnet Selectors, on page 233 for more information.
Note Unlike VXLAN external subnet selectors, only exactly matching prefixes can be used to classify remote DC
subnets; a prefix covering a supernet cannot be used for that purpose.
You can classify specific L2 MAC addresses and specific L3 IP addresses from the remote VXLAN fabric
into an ESG by using the existing MAC tag selector or the IP tag selector.
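The following Python sketch conceptually illustrates the longest-prefix-match classification described for the
VXLAN external subnet selector, including the 0.0.0.0/1 plus 128.0.0.0/1 workaround for a catch-all entry. The
selector prefixes and ESG names are hypothetical and this is not APIC code.

    import ipaddress

    # Hypothetical external subnet selectors; 0.0.0.0/0 is not allowed, so two /1 prefixes
    # together act as a catch-all.
    selectors = {
        "10.10.0.0/16": "esg-remote-dc",
        "10.10.1.0/24": "esg-remote-dc-critical",
        "0.0.0.0/1":    "esg-catch-all",
        "128.0.0.0/1":  "esg-catch-all",
    }

    def classify(prefix):
        net = ipaddress.ip_network(prefix)
        matches = [ipaddress.ip_network(s) for s in selectors
                   if net.subnet_of(ipaddress.ip_network(s))]
        best = max(matches, key=lambda m: m.prefixlen)    # longest prefix match wins
        return selectors[str(best)]

    print(classify("10.10.1.0/24"))   # esg-remote-dc-critical
    print(classify("192.0.2.0/24"))   # esg-catch-all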
VXLAN EVPN Route-Maps
Starting from Cisco APIC 6.1(2), the ACI Border Gateway feature also supports VRF level route-maps that
can be configured on the stretched VRFs. These Route-maps are applicable for all the remote fabrics that are
associated to the border gateway set. The route-map set rules are configured with the route control profile
policies and the action rule profiles.
Use the Configuring a VXLAN VRF Stretch Using the GUI, on page 231 to specify the outbound and inbound
route-maps.
For more information on how to configure a route-map, see Configuring Route Control Policy in VRF Using
the GUI, on page 341.
Note This is an optional configuration. If you do not configure import route-maps, all the routes received from
remote VXLAN EVPN fabrics are accepted. If you do not configure export route-maps, all the local bridge
domain subnets and external routes are advertised to the remote VXLAN EVPN fabrics that are associated
to the border gateway set.
The following match and set clauses are supported by both the inbound route-map and the
outbound route-map (a conceptual filtering sketch follows the list):
• Supported Match Clauses
• IP Prefix List
• AS-Path
• Community
• Extended Community (match on color extended community is not supported)
• Regex Community
• Regex Extended Community
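The following Python sketch is a conceptual illustration, not APIC code, of how match clauses of the kinds listed
above might combine to filter a route received from a remote fabric. The prefix list, community, and AS-path
regular expression are hypothetical.

    import re
    import ipaddress

    # Hypothetical inbound filter: an IP prefix list, a required community, and an AS-path regex.
    prefix_list = [ipaddress.ip_network("10.20.0.0/16")]
    required_community = "65001:100"
    as_path_regex = re.compile(r"^65001(_\d+)*$")    # AS paths that begin with AS 65001

    def route_permitted(prefix, communities, as_path):
        in_prefix_list = any(ipaddress.ip_network(prefix).subnet_of(p) for p in prefix_list)
        has_community = required_community in communities
        as_path_ok = bool(as_path_regex.match("_".join(str(asn) for asn in as_path)))
        return in_prefix_list and has_community and as_path_ok

    print(route_permitted("10.20.5.0/24", {"65001:100"}, [65001, 65010]))   # True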
Service Graphs
A service graph is a sequence of Layer 4 to Layer 7 services functions, Layer 4 to Layer 7 services devices,
or copy devices and their associated configuration. The service graph must be associated with a contract to
be "rendered"—or configured—on the Layer 4 to Layer 7 services device or copy device, and on the fabric.
The Layer 4 to Layer 7 service graphs can be enabled in the contract between:
• The ESGs classifying ACI Endpoints and the ESGs classifying EVPN endpoints or prefixes by using
the previously described selectors.
• The external EPGs classifying ACI L3Out prefixes and the ESGs classifying EVPN endpoints or prefixes
by using the previously described selectors.
The Layer 4 to Layer 7 service graphs can be deployed by using explicit contracts between ESGs or using a
vzAny contract. The Layer 4 to Layer 7 service graphs can be deployed with single or multiple service devices.
The Layer 4 to Layer 7 service graph supports optional features such as PBR node tracking, health groups, thresholds,
resilient hashing, and backup PBR policies. Layer 4 to Layer 7 service graphs with L3 service devices are supported
for stretched VRFs. These service devices must reside in the ACI fabric.
• VXLAN is used as the overlay technology to encapsulate the data packets and tunnel the traffic over the
Layer 3 network.
• VXLAN handoff is through a node role called border gateways via the VXLAN tunnels.
• L2/L3 VXLAN connectivity between Cisco ACI pods that are part of the same fabric is achieved via the
spine-to-spine data path, through the IPN.
Inbound route-maps are applied on Type-5 and the IP portion of the Type-2 routes. Type 2 MAC
routes are not impacted and are imported irrespective of the IP import status.
Outbound route-maps are only applied to Type-5 routes.
• IGMP snooping and L3 multicast traffic are not supported across domains.
• Inter-VRFs traffic flows (shared services) across domains are not supported.
• Service redirection (PBR) for traffic flows between domains is only supported to service devices
deployed in L3 (Go-To) mode and connected to the Cisco ACI fabric.
• When an ACI fabric interconnects with a policy-aware remote VXLAN fabric, any policy or class
details received from the remote VXLAN fabric are ignored by the ACI border gateway nodes.
Similarly, the ACI fabric also doesn't advertise its policy or class information to the remote VXLAN
fabric.
Note You cannot register a spine with the node type border-gateway. The discovery will be blocked.
Procedure
Step 1 To pre-configure the node registration policy, if you are already aware of the serial number:
a) Navigate to Fabric > Inventory > Fabric Membership > Registered Node tab.
b) In the Work pane, click Actions > Create Fabric Node Member and complete the following steps.
a) In the Pod ID field, choose the pod ID from the drop down menu.
b) In the Serial Number field, enter the serial number for the leaf switch.
c) In the Node ID field, assign a node ID to the leaf switch.
d) In the Switch Name field, assign a name to the leaf switch.
e) In the Node Type field, select Leaf as the node type.
f) Put a check in the Is Border Gateway check box to register the leaf as a border gateway node.
g) Click Submit.
Step 2 To configure the node based on the DHCP discovery:
a) Navigate to Fabric > Inventory > Fabric Membership > Nodes Pending Registration tab.
b) In the Work pane, right click the serial number of the newly discovered leaf, click Register and complete the following
steps.
a) In the Pod ID field, choose the pod ID from the drop down menu.
b) In the Node ID field, assign a node ID to the leaf switch.
c) In the Node Name field, assign a name to the leaf switch.
d) In the Role field, select Border Gateway as the role type.
e) (Optional) In the Rack Name, specify the rack name.
f) Click Register.
What to do next
Create Border Gateway Sets by using the procedures provided in Creating Border Gateway Sets Using the
GUI, on page 229.
• Interfaces
• Supported types of interfaces are:
• Routed interface or sub-interface
• You will also configure the underlay BGP peer policy on the Interfaces tab in the VXLAN infra
L3Out. This is the basic underlay configuration that is needed to bring up the BGP underlay and exchange
the loopback address with a connected device.
• QoS rules
• You can configure the VXLAN ingress rule and VXLAN egress rule through the VXLAN QoS
policy in the VXLAN Infra L3Out. Refer to Creating VXLAN Custom QoS Policy Using the GUI,
on page 234 for more information.
• If you do not create a VXLAN QoS policy, any ingressing VXLAN traffic is assigned the default
QoS level.
You will also configure the underlay and overlay through the VXLAN Infra L3Out:
• Underlay: BGP peer IP configuration as part of the interface configuration.
• Overlay: BGP EVPN remote configuration is part of the remote fabric configuration.
Procedure
Step 1 Navigate to Tenants > infra > Networking > VXLAN L3Outs.
Step 2 Right-click on VXLAN L3Outs and choose Create VXLAN L3Out.
The Connectivity window appears.
This will be the name for the policy controlling connectivity to the outside. The name can be up to 64 alphanumeric
characters.
Note
You cannot change this name after the object has been saved.
b) (Optional) In the VXLAN Custom QoS Policy field, choose an existing QoS policy or choose Create VXLAN
Custom QoS Policy to create a new QoS policy.
For more information on creating a new QoS policy, see Creating VXLAN Custom QoS Policy Using the GUI, on
page 234.
c) Click Next.
The Nodes and Interfaces window appears.
Step 4 In the Nodes and Interfaces window, enter the necessary information to configure the border gateway nodes and interfaces.
a) In the Node Profile Name and Interface Profile Name fields, determine if you want to use the default naming
convention for the node profile and interface profile names.
The default node profile name is L3Out-name_nodeProfile, and the default interface profile name is
L3Out-name_interfaceProfile, where L3Out-name is the name that you entered in the Name field in the
Connectivity page. Change the profile names in these fields, if necessary.
b) (Optional) In the BFD Interface Policy field, choose an existing BFD interface policy or choose Create BFD
Interface Policy to create a new BFD interface policy.
c) In the Interface Types area, make the necessary selections in the Layer 3 and Layer 2 fields.
The options are:
• Layer 3:
• Interface: Choose this option to configure a Layer 3 interface to connect the border leaf switch to the
external router.
• Sub-Interface: Choose this option to configure a Layer 3 sub-interface to connect the border leaf switch
to the external router.
• Layer 2:
• Port: Layer 2 can be either a port or a port channel. Cisco APIC 6.1(1) supports only Port.
d) From the Node ID field drop-down menu, choose the border gateway node for the VXLAN infra L3Out.
You might see the following warning message appear on your screen, describing how to configure the router ID.
The leaf switch 103 has a Operational Router ID 3.3.3.3 which is used for MP-BGP sessions running between
this leaf and spines. User can still configure a different Route ID than 3.3.3.3 but will flap the MP-BGP sessions
which are already running on this leaf.
• If you do not have a router ID already configured for this node, go to 4.e, on page 225 for instructions on
configuring a router ID for this node.
• If you have a router ID already configured for this node (for example, if you had configured MP-BGP route
reflectors previously), use the same router ID for the VXLAN configuration. The same router ID must be used
across the VXLAN infra L3Out configuration; this is the recommended option. Make a note of the router ID
displayed in this warning to use in the next step, 4.e, on page 225.
e) In the Router ID field, enter a unique router ID (the IPv4 address) for the border leaf switch part of the infra L3Out.
The router ID must be unique across all border leaf switches and the non-ACI fabric BGWs.
As described in 4.d, on page 225, if a router ID has already been configured on this node, you have several options:
• If you want to use the same router ID for the VXLAN configuration, enter the router ID that was displayed in
the warning message in 4.d, on page 225.
• You must configure the same router ID across all infra L3Outs for a given node.
f) Enter an IP address in the Loopback field. This is the routable control plane TEP address that is used for EVPN
peering; it is advertised through the underlay protocol.
g) In the Interface field, choose a port from the drop-down list.
h) If you selected Sub-Interface in the Layer 3 area above, the VLAN Encap field appears. Enter the encapsulation
used for the layer 3 outside profile.
i) In the MTU (bytes) field, enter the maximum transmission unit (MTU) of the external network.
Acceptable entries in this field are from 576 to 9216. To inherit the value, enter inherit in this field.
j) In the IPv4 Address field, enter an IP address for the eBGP underlay configuration.
This is the IP address assigned to the Layer 3 interface/sub-interface that you configured in the previous step.
k) In the Peer IPv4 Address field, enter the eBGP underlay unicast peer IP address.
This is the interface's IP address of the router directly connected to the border leaf switch.
l) In the Remote ASN field, enter the BGP Autonomous System Number of the directly-connected router.
m) Determine if you want to configure additional interfaces for this node for the VXLAN infra L3Out.
• If you do not want to configure additional interfaces for this node for this VXLAN infra L3Out, skip to 4.o, on
page 226.
• If you want to configure additional interfaces for this node for this VXLAN infra L3Out, click + in the Interfaces
area to bring up the same options for another interface for this node.
Note
If you want to delete the information that you entered for an interface for this node, or if you want to delete an
interface row that you added by accident, click the trash can icon for the interface row that you want to delete.
n) Determine if you want to configure additional border gateways for this VXLAN infra L3Out.
• If you do not want to configure additional border gateways for this VXLAN infra L3Out, skip to 4.o, on page
226.
• If you want to configure additional border gateways for this VXLAN infra L3Out, click + in the Nodes area to
bring up the same options for another node.
Note
If you want to delete the information that you entered for a node, or if you want to delete a node row that you
added by accident, click the trash can icon for the node row that you want to delete.
o) Click Next.
The Policy Configuration window appears.
Step 5 In the Policy Configuration window, enter the necessary information to configure the border gateway nodes and interfaces.
a) In the Border Gateway Set field, determine if you want to use an existing border gateway set or create a new border
gateway set.
b) Check the Configure VXLAN Remote Fabrics and configure the following fields:
1. In the Remote VXLAN Fabric field, specify an existing remote VXLAN fabric or click + to create a new remote
VXLAN fabric.
2. In the Remote EVPN Peer Address field, specify the remote EVPN address.
3. In the Remote AS field, enter the BGP autonomous system number (ASN) of the remote NX-OS BGW node
to configure the remote AS for each remote fabric peer.
4. In the TTL field, enter the connection time to live (TTL). The value must be greater than 1.
Step 6 Click Finish to complete the necessary configurations in the Create VXLAN Infra L3Out wizard.
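A few of the value ranges called out in this procedure (an MTU of 576-9216 or inherit, a plain-format autonomous
system number, and a TTL greater than 1) can be illustrated with the following minimal Python validation sketch.
It is purely illustrative and is not part of the APIC.

    # Minimal validation of the value ranges mentioned in this procedure; illustrative only.
    def validate(mtu, remote_asn, ttl):
        errors = []
        if mtu != "inherit" and not 576 <= int(mtu) <= 9216:
            errors.append("MTU must be 576-9216 or 'inherit'")
        if not 1 <= remote_asn <= 4294967295:
            errors.append("Remote AS must be a plain-format number from 1 to 4294967295")
        if ttl <= 1:
            errors.append("TTL must be greater than 1")
        return errors

    print(validate("inherit", 65001, 64))   # []
    print(validate(1500, 0, 1))             # two errors reported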
What to do next
Configure a VXLAN VRF stretch by using the procedures provided in Configuring a VXLAN VRF Stretch
Using the GUI, on page 231.
VXLAN Site ID
Starting from Cisco APIC 6.1(2), you must configure a site ID. You cannot configure the border gateway
set policy without this site ID.
Note If you have already configured the ACI Border Gateway feature on Cisco APIC 6.1(1) and upgrade to Cisco
APIC 6.1(2) without creating a VXLAN site ID, a fault is generated for all the stretched VRFs and bridge
domains.
Procedure
Step 1 From the top menu bar, navigate to Tenants > infra > Policies > VXLAN Gateway > VXLAN Site.
Step 2 Right-click on VXLAN Site and select Create VXLAN Site.
The Create VXLAN Site window appears.
Step 3 In the Name field, enter a name for your VXLAN site.
Step 4 In the ID field, enter a unique site ID for the VXLAN Site.
Step 5 (Optional) In the Description field, enter a description for the VXLAN Site.
Step 6 Click Submit.
Procedure
Step 1 From the top menu bar, navigate to Tenants > infra > Policies > VXLAN Gateway > Border Gateway Sets.
Step 2 On the Border Gateway Set work pane, click Actions > Create Border Gateway Set Policy.
Step 3 In the VXLAN Site ID field, enter a unique site ID for the Border Gateway Set Policy.
Step 4 In the Name field, assign a name to the Border Gateway Set Policy.
Step 5 In the External Data Plane IP field, enter the address for each POD. Click + to enter the POD ID and the Address.
Step 6 Click Submit.
What to do next
Create Remote VXLAN fabrics by using the procedures provided in Creating Remote VXLAN Fabrics Using
the GUI, on page 229.
Procedure
Step 1 From the top menu bar, navigate to Tenants > infra > Policies > VXLAN Gateway > Remote VXLAN Fabrics.
Step 2 On the Remote VXLAN Fabrics work pane, click Actions > Create Remote VXLAN Fabric.
Step 3 In the Name field, assign a name to the remote VXLAN fabric.
Step 4 To enter the Peer IP Address and its associated TTL, click + in the Remote EVPN Peers section, and complete the
following steps in the Create Remote EVPN Peer dialog box:
Note
For an infra peer TTL, you must specify a value greater than 1.
a) Peer Address: Enter the peer IP address. This is the loopback IP address of the remote NX-OS BGW device, which
is used to establish the EVPN control-plane adjacency.
b) (Optional) In the Description field, enter descriptive information about the remote EVPN policy.
c) Remote ASN: Enter a number that uniquely identifies the neighbor autonomous system. The autonomous system
number can be a 4-byte value in plain format, from 1 to 4294967295 (a conversion sketch from asdot notation
follows this step list).
Note
ACI does not support asdot or asdot+ format AS numbers.
d) In the Admin State field, select Enabled or Disabled to enable or disable Remote EVPN Peer for this particular
policy.
e) In the BGP Controls field, check the desired controls.
The peer controls specify which Border Gateway Protocol (BGP) attributes are sent to a peer. The peer control options
are:
• Allow Self AS: Enables the autonomous system number check on itself. This allows the BGP peer to inject updates
if the same AS number is being used.
• Disable Peer AS Check: Disables the peer autonomous system number check. When the check box is checked, if the
advertising router finds the AS number of the receiver in the AS path, it will not send the route to the receiver.
f) In the Peer Type field, VXLAN BGW Connectivity is already selected.
g) (Optional) In the Password and Confirm Password fields, enter the administrative password.
h) In the TTL field, enter the connection time to live (TTL).
The range is from 2 to 255 hops.
i) In the BGP Peer Prefix Policy field, select an existing peer prefix policy or create a new one.
The peer prefix policy defines how many prefixes can be received from a neighbor and the action to take when the
number of allowed prefixes is exceeded. This feature is commonly used for external BGP peers, but can also be
applied to internal BGP peers.
j) In the Local-AS Number Config field, choose the local Autonomous System Number (ASN) configuration.
When you configure the local ASN in the Cisco ACI fabric, the Cisco ACI BGWs still derive the bridge domain and
VRF route targets by using the fabric ASN value. If the peer-ASN value differs from the ASN value in the received
route targets with EVPN routes, the EVPN route targets rewrite will not work on the remote VXLAN fabric BGWs.
To resolve this, you must manually configure the route targets to match the Cisco ACI derived route targets based
on the fabric ASN value for both the bridge domain and the VRF.
Using a local AS number rather than the Global AS permits the routing devices in the associated network to appear
to belong to the former AS. The configuration can be:
• no-prepend+replace-as+dual-as—Does not allow prepending on the local AS and is replaced with both AS
numbers.
Note
You can prepend one or more autonomous system (AS) numbers at the beginning of an AS path. The AS numbers
are added at the beginning of the path after the actual AS number from which the route originates has been added
to the path. Prepending an AS path makes a shorter AS path look longer and therefore less preferable to BGP.
• no-prepend—Does not allow prepending on local AS.
l) Click OK.
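Because ACI accepts only plain-format AS numbers (see the Remote ASN step above), an AS number documented in
asdot notation for the remote fabric must be converted before it is entered. The following Python sketch shows
the standard conversion (the high part multiplied by 65536, plus the low part); the example values are hypothetical.

    # Convert an asdot-notation AS number (for example "1.10") to the plain format that ACI expects.
    def asdot_to_plain(asdot):
        high, low = (int(part) for part in asdot.split("."))
        return high * 65536 + low

    print(asdot_to_plain("1.10"))     # 65546
    print(asdot_to_plain("0.65001"))  # 65001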
Step 5 To specify the associated border gateway set, select an existing border gateway set from the drop-down list, or
click + in the Associated Border Gateway Set box and select an existing border gateway set.
Step 6 Click Submit.
What to do next
Configure a VXLAN infra L3Out by using the procedures provided in the Configuring a VXLAN Infra
L3Out Using the GUI, on page 220 section.
Procedure
Step 3 In the VRF field, select an existing VRF or click Create VRF to create a new VRF with the following steps:
a) In the Name field, enter a name for the VRF.
b) In the Alias field, enter an alias name for the VRF.
c) (Optional) In the Description field, enter a description of the VRF.
d) In the Policy Control Enforcement Preference field, choose Unenforced.
e) In the Policy Control Enforcement Direction field, choose Ingress.
f) In the OSPF Timers field, from the drop-down list, choose the OSPF timer policy that you want to associate with
this specific VRF (default or Create OSPF Timers Policy).
g) In the Monitoring Policy field, from the drop-down list, choose the monitoring policy that you want to associate
with this specific VRF.
h) Click Submit.
Step 4 In the Border Gateway Set field, select an existing border gateway set or click Create Border Gateway Set to create
a new border gateway set.
Step 5 Starting from Cisco APIC 6.1(2), you do not have to specify the Remote Fabric Name or the Remote VNI as these
options have been preselected.
Step 6 In the Outbound field, specify an outbound route map to control the routes that are advertised to the NX-OS site.
For more information on how to create Route Maps, refer to Route Control Profile Policies, on page 343.
Step 7 In the Inbound field, specify the inbound route map to control the routes that are imported from the NX-OS fabric.
For more information on how to create Route Maps, refer to Route Control Profile Policies, on page 343.
What to do next
Configure a VXLAN bridge domain stretch using the procedures provided in Configuring a VXLAN Bridge
Domain Stretch Using the GUI, on page 232.
Procedure
Step 3 In the Bridge Domain field, select an existing bridge domain or click Create Bridge Domain to create a new bridge
domain.
Step 4 In the Border Gateway Set field, select an existing border gateway set. As mentioned in the text box in the GUI,
ensure that L2 Unknown Unicast is set to Flood for the bridge domain that is stretched.
Step 5 Starting from Cisco APIC 6.1(2), you do not have to specify the Remote Fabric Name or the Remote VNI as these
options have been preselected.
Step 6 Click Submit.
Procedure
Step 1 On the menu bar, choose Tenants and select the applicable Tenant.
Step 2 In the Navigation pane, expand tenant_name > Application Profiles > application_profile_name > Endpoint Security
Groups > esg_name > Selectors.
Step 3 Right click VXLAN BD Selector and select Create a VXLAN BD Selector.
Step 4 In the Create a VXLAN BD Selector dialog box, enter the following information:
a) Bridge Domain: From the drop-down list, select the stretched bridge domain to be mapped.
b) Description: (Optional) A description of the selector.
c) Click Submit.
Procedure
Step 1 On the menu bar, choose Tenants and select the applicable Tenant.
Step 2 In the Navigation pane, expand tenant_name > Application Profiles > application_profile_name > Endpoint Security
Groups > esg_name > Selectors.
Step 3 Right click VXLAN External Subnet Selectors and select Create a VXLAN External Subnet Selector.
Step 4 In the Create a VXLAN External Subnet Selector dialog box, enter the following information:
a) IP: Specify the IP prefix to be matched.
Procedure
Step 1 From the top menu bar, navigate to Tenants > infra > Networking > VXLAN L3Outs.
Step 2 Right-click on VXLAN L3Outs and choose Create VXLAN L3Out.
Step 3 In the Connectivity window, enter the necessary information.
Step 4 In the VXLAN Custom QoS Policy field, choose an existing QoS policy or choose Create VXLAN Custom QoS Policy
to create a new QoS policy.
Step 5 In the Create VXLAN Custom QoS Policy window that opens, provide the name and description of the policy you're
creating.
Step 6 In the VXLAN Ingress Rule area, click + to add an ingress QoS translation rule.
Data traffic coming into the border gateway connected to the ACI fabric is checked for the inner DSCP value and,
if a match is found, the traffic is classified into an ACI QoS level and marked with the appropriate CoS and DSCP values.
a) In the Priority field, select the priority for the ingress rule.
This is the QoS level you want to assign for the traffic within ACI fabric, which ACI uses to prioritize the traffic
within the fabric. The options range from Level 1 to Level 6. The default value is Level 3. If you do not make a
selection in this field, the traffic will automatically be assigned a Level 3 priority.
b) Using the DSCP Range From and DSCP Range To dropdowns, specify the DSCP range of the ingressing VXLAN
packet that you want to match.
c) Use the Target DSCP to select the inner DSCP value to assign to the packet when it's inside the ACI fabric.
d) In the Target COS field, select the COS value to assign to the packet when it's inside the ACI fabric.
The COS value specified is set in the original traffic received from the external network, so it will be re-exposed only
when the traffic is VXLAN decapsulated on the destination ACI leaf node.
The default is Unspecified, which means that the original COS value of the packet will be retained, but only if the
COS preservation option is enabled in the fabric.
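The following Python sketch conceptually mirrors the ingress rule behavior described in these steps: the inner
DSCP of an ingressing VXLAN packet is matched against a configured range, then assigned an ACI QoS level along
with target DSCP and CoS values, with unmatched traffic falling back to Level 3. The ranges and target values
are hypothetical and this is not APIC code.

    # Illustrative ingress QoS rule lookup; ranges and targets are hypothetical.
    ingress_rules = [
        # (dscp_from, dscp_to, priority_level, target_dscp, target_cos)
        (40, 47, "Level 1", 46, 5),
        (24, 31, "Level 2", 26, 3),
    ]

    def classify_ingress(inner_dscp):
        for dscp_from, dscp_to, level, target_dscp, target_cos in ingress_rules:
            if dscp_from <= inner_dscp <= dscp_to:
                return level, target_dscp, target_cos
        return "Level 3", None, None        # unmatched traffic gets the default Level 3 priority

    print(classify_ingress(46))   # ('Level 1', 46, 5)
    print(classify_ingress(0))    # ('Level 3', None, None)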
Networking Domains
A fabric administrator creates domain policies that configure ports, protocols, VLAN pools, and encapsulation.
These policies can be used exclusively by a single tenant, or shared. Once a fabric administrator configures
domains in the ACI fabric, tenant administrators can associate tenant endpoint groups (EPGs) to domains.
The following networking domain profiles can be configured:
• VMM domain profiles (vmmDomP) are required for virtual machine hypervisor integration.
• Physical domain profiles (physDomP) are typically used for bare metal server attachment and management
access.
• Bridged outside network domain profiles (l2extDomP) are typically used to connect a bridged external
network trunk switch to a leaf switch in the ACI fabric.
• Routed outside network domain profiles (l3extDomP) are used to connect a router to a leaf switch in the
ACI fabric.
• Fibre Channel domain profiles (fcDomP) are used to connect Fibre Channel VLANs and VSANs.
A domain is configured to be associated with a VLAN pool. EPGs are then configured to use the VLANs
associated with a domain.
Note EPG port and VLAN configurations must match those specified in the domain infrastructure configuration
with which the EPG associates. If not, the APIC will raise a fault. When such a fault occurs, verify that the
domain infrastructure configuration matches the EPG port and VLAN configurations.
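As a conceptual illustration of the note above, the following Python sketch checks whether an EPG's encapsulation
VLAN falls inside the VLAN pool of the domain it is associated with; a mismatch is the condition for which the
APIC raises a fault. The domain names and VLAN ranges are hypothetical.

    # Hypothetical consistency check between an EPG VLAN and its domain's VLAN pool.
    domain_vlan_pools = {
        "phys-dom-1": range(100, 200),    # VLANs 100-199
        "l3-dom-1":   range(300, 310),    # VLANs 300-309
    }

    def epg_binding_ok(domain, encap_vlan):
        return encap_vlan in domain_vlan_pools.get(domain, range(0))

    print(epg_binding_ok("phys-dom-1", 150))   # True
    print(epg_binding_ok("phys-dom-1", 250))   # False -> the APIC would raise a fault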
The routes that are learned through peering are sent to the spine switches. The spine switches act as route
reflectors and distribute the external routes to all of the leaf switches that have interfaces that belong to the
same tenant. These routes are longest prefix match (LPM) summarized addresses and are placed in the leaf
switch's forwarding table with the VTEP IP address of the remote leaf switch where the external router is
connected. WAN routes have no forwarding proxy. If the WAN routes do not fit in the leaf switch's forwarding
table, the traffic is dropped. Because the external router is not the default gateway, packets from the tenant
endpoints (EPs) are sent to the default gateway in the ACI fabric.
Note Import route control is supported for BGP and OSPF, but not EIGRP.
• External Subnets for the External EPG (Security Import Subnet): Specifies which external subnets have
contracts applied as part of a specific external L3Out EPG (l3extInstP). For a subnet under the
l3extInstP to be classified as an external EPG, the scope on the subnet should be set to "import-security".
Subnets of this scope determine which IP addresses are associated with the l3extInstP. Once this is
determined, contracts determine with which other EPGs that external subnet is allowed to communicate.
For example, when traffic enters the ACI switch on the Layer 3 external outside network (L3extOut), a
lookup occurs to determine which source IP addresses are associated with the l3extInstP. This action
is performed based on Longest Prefix Match (LPM) so that more specific subnets take precedence over
more general subnets.
• Shared Route Control Subnet: In a shared service configuration, only subnets that have this property
enabled will be imported into the consumer EPG Virtual Routing and Forwarding (VRF). It controls the
route direction for shared services between VRFs.
• Shared Security Import Subnet: Applies shared contracts to imported subnets. The default specification
is External Subnets for the external EPG.
Routed subnets can be aggregated. When aggregation is not set, the subnets are matched exactly. For example,
if 11.1.0.0/16 is the subnet, the policy does not apply to an 11.1.1.0/24 route; it applies only if the
route is 11.1.0.0/16. However, to avoid the tedious and error-prone task of defining all the subnets one by one,
a set of subnets can be aggregated into one export, import, or shared routes policy. At this time, only 0/0
subnets can be aggregated. When 0/0 is specified with aggregation, all the routes are imported, exported, or
shared with a different VRF, based on the selection option below:
• Aggregate Export: Exports all transit routes of a VRF (0/0 subnets).
• Aggregate Import: Imports all incoming routes of given L3 peers (0/0 subnets).
Note Aggregate import route control is supported for BGP and OSPF, but not for
EIGRP.
• Aggregate Shared Routes: If a route is learned in one VRF but needs to be advertised to another VRF,
the routes can be shared by matching the subnet exactly, or can be shared in an aggregate way according
to a subnet mask. For aggregate shared routes, multiple subnet masks can be used to determine which
specific route groups are shared between VRFs. For example, 10.1.0.0/16 and 12.1.0.0/16 can be specified
to aggregate these subnets. Or, 0/0 can be used to share all subnet routes across multiple VRFs.
Note Routes shared between VRFs function correctly on Generation 2 switches (Cisco Nexus N9K switches with
"EX" or "FX" on the end of the switch model name, or later; for example, N9K-93108TC-EX). On Generation
1 switches, however, there may be dropped packets with this configuration, because the physical ternary
content-addressable memory (TCAM) tables that store routes do not have enough capacity to fully support
route parsing.
Route summarization simplifies route tables by replacing many specific addresses with a single address. For
example, 10.1.1.0/24, 10.1.2.0/24, and 10.1.3.0/24 are replaced with 10.1.0.0/16. Route summarization policies
enable routes to be shared efficiently among border leaf switches and their neighbor leaf switches. BGP,
OSPF, or EIGRP route summarization policies are applied to a bridge domain or transit subnet. For OSPF,
inter-area and external route summarization are supported. Summary routes are exported; they are not advertised
within the fabric. In the example above, when a route summarization policy is applied, and an EPG uses the
10.1.0.0/16 subnet, the entire range of 10.1.0.0/16 is shared with all the neighboring leaf switches.
Note When two L3extOut policies are configured with OSPF on the same leaf switch, one regular and another for
the backbone, a route summarization policy configured on one L3extOut is applied to both L3extOut policies
because summarization applies to all areas in the VRF.
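The following Python sketch illustrates the effect described above: specific routes covered by a summary prefix
are advertised as the single summary, while routes outside it are advertised unchanged. The prefixes follow the
example in the text, plus one hypothetical route (10.2.1.0/24) outside the summary.

    import ipaddress

    # Illustrative effect of a route summarization policy.
    summary = ipaddress.ip_network("10.1.0.0/16")
    routes = [ipaddress.ip_network(r) for r in
              ("10.1.1.0/24", "10.1.2.0/24", "10.1.3.0/24", "10.2.1.0/24")]

    advertised = {str(summary) if r.subnet_of(summary) else str(r) for r in routes}
    print(sorted(advertised))    # ['10.1.0.0/16', '10.2.1.0/24']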
As illustrated in the figure below, route control profiles derive route maps according to prefix-based and
community-based matching.
Figure 26: Route Community Matching
The route control profile (rtctrtlProfile) specifies what is allowed. The Route Control Context specifies
what to match, and the scope specifies what to set. The subject profile contains the community match
specifications, which can be used by multiple l3extOut instances. The subject profile (SubjP) can contain
multiple community terms each of which contains one or more community factors (communities). This
arrangement enables specifying the following boolean operations:
• Logical OR among multiple community terms
• Logical AND among multiple community factors
For example, a community term called northeast could have multiple communities that each include many
routes. Another community term called southeast could also include many different routes. The administrator
could choose to match one, or the other, or both. A community factor type can be regular or extended. Care
should be taken when using extended type community factors, to ensure there are no overlaps among the
specifications.
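The Boolean behavior described above can be sketched conceptually in Python as follows: a route matches the
subject profile if any community term matches (logical OR), and a community term matches only if all of its
community factors are present on the route (logical AND). The term names and community values are hypothetical.

    # Conceptual sketch of community-term matching; names and values are hypothetical.
    subject_profile = {
        "northeast": {"65001:100", "65001:110"},
        "southeast": {"65001:200"},
    }

    def route_matches(route_communities):
        # OR across terms; AND (subset test) across the factors within each term.
        return any(factors <= route_communities for factors in subject_profile.values())

    print(route_matches({"65001:100", "65001:110", "65001:999"}))   # True (northeast matches)
    print(route_matches({"65001:100"}))                             # False (no term fully matches)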
The scope portion of the route control profile references the attribute profile (rtctrlAttrP) to specify what
set-action to apply, such as preference, next hop, community, and so forth. When routes are learned from an
l3extOut, route attributes can be modified.
The figure above illustrates the case where an l3extOut contains a rtctrtlProfile. A rtctrtlProfile can
also exist under the tenant. In this case, the l3extOut has an interleak relation policy (L3extRsInterleakPol)
that associates it with the rtctrtlProfile under the tenant. This configuration enables reusing the
rtctrtlProfile for multiple l3extOut connections. It also enables keeping track of the routes the fabric
learns from OSPF to which it gives BGP attributes (BGP is used within the fabric). A rtctrtlProfile defined
under an L3extOut has a higher priority than one defined under the tenant.
The rtctrtlProfile has two modes: combinable, and global. The default combinable mode combines
pervasive subnets (fvSubnet) and external subnets (l3extSubnet) with the match/set mechanism to render
the route map. The global mode applies to all subnets within the tenant, and overrides other policy attribute
settings. A global rtctrtlProfile provides permit-all behavior without defining explicit (0/0) subnets. A
global rtctrtlProfile is used with non-prefix based match rules where matching is done using different
subnet attributes such as community, next hop, and so on. Multiple rtctrtlProfile policies can be configured
under a tenant.
rtctrtlProfile policies enable enhanced default import and default export route control. Layer 3 Outside
networks with aggregated import or export routes can have import/export policies that specify supported
default-export and default-import, and supported 0/0 aggregation policies. To apply a rtctrtlProfile policy
on all routes (inbound or outbound), define a global default rtctrtlProfile that has no match rules.
Note While multiple l3extOut connections can be configured on one switch, all Layer 3 outside networks configured
on a switch must use the same rtctrtlProfile because a switch can have only one route map.
The protocol interleak and redistribute policy controls externally learned route sharing with ACI fabric BGP
routes. Set attributes are supported. Such policies are supported per L3extOut, per node, or per VRF. An
interleak policy applies to routes learned by the routing protocol in the L3extOut. Currently, interleak and
redistribute policies are supported for OSPF v2 and v3. A route control policy rtctrtlProfile has to be
defined as global when it is consumed by an interleak policy.
• The routes that are learned from the OSPF process on the border leaf are redistributed into BGP for the
tenant VRF and they are imported into MP-BGP on the border leaf.
• Import route control is supported for BGP and OSPF, but not for EIGRP.
• Export route control is supported for OSPF, BGP, and EIGRP.
• The routes are learned on the border leaf where the VRF is deployed. The routes are not advertised to
the External Layer 3 Outside connection unless it is permitted by the export route control.
Note When a subnet for a bridge domain/EPG is set to Advertise Externally, the subnet is programmed as a static
route on a border leaf. When the static route is advertised, it is redistributed into the EPG's Layer 3 outside
network routing protocol as an external network, not injected directly into the routing protocol.
• iBGP
• eBGP (IPv4 and IPv6)
• EIGRP (IPv4 and IPv6) protocols
ACI supports the VRF-lite implementation when connecting to the external routers. Using sub-interfaces, the
border leaf can provide Layer 3 outside connections for the multiple tenants with one physical interface. The
VRF-lite implementation requires one protocol session per tenant.
Within the ACI fabric, Multiprotocol BGP (MP-BGP) is implemented between the leaf and the spine switches
to propagate the external routes within the ACI fabric. The BGP route reflector technology is deployed in
order to support a large number of leaf switches within a single fabric. All of the leaf and spine switches are
in one single BGP Autonomous System (AS). Once the border leaf learns the external routes, it can then
redistribute the external routes of a given VRF to an MP-BGP address family VPN version 4 or VPN version
6. With address family VPN version 4, MP-BGP maintains a separate BGP routing table for each VRF. Within
MP-BGP, the border leaf advertises routes to a spine switch, that is a BGP route reflector. The routes are then
propagated to all the leaves where the VRFs (or private network in the APIC GUI’s terminology) are
instantiated.
The external Layer 3 Outside connections are supported on the following interfaces:
• Layer 3 Routed Interface
• Subinterface with 802.1Q tagging - With subinterface, you can use the same physical interface to provide
a Layer 2 outside connection for multiple private networks.
• Switched Virtual Interface (SVI) - With an SVI, the same physical interface that supports Layer 2 and
Layer 3 can be used for both a Layer 2 outside connection and a Layer 3 outside connection.
The managed objects that are used for the L3Outside connections are:
• External Layer 3 Outside (L3ext): Routing protocol options (OSPF area type, area, EIGRP autonomous
system, BGP), private network, External Physical domain.
• Logical Node Profile: Profile where one or more nodes are defined for the external Layer 3 Outside
connections. The configurations of the router-IDs and the loopback interface are defined in the profile.
Note Use the same router-ID for the same node across multiple external Layer 3 Outside
connections.
Note Within a single L3Out, a node can only be part of one Logical Node Profile.
Configuring the node to be a part of multiple Logical Node Profiles in a single
L3Out might result in unpredictable behavior, such as having a loopback address
pushed from one Logical Node Profile but not from the other. Use more path
bindings under the existing Logical Interface Profiles or create a new Logical
Interface Profile under the existing Logical Node Profile instead.
• Logical Interface Profile: IP interface configuration for IPv4 and IPv6 interfaces. It is supported on
routed interfaces, routed subinterfaces, and SVIs. The SVIs can be configured on physical ports,
port-channels, or vPCs.
• OSPF Interface Policy: Includes details such as OSPF Network Type and priority.
• EIGRP Interface Policy: Includes details such as Timers and split horizon.
• BGP Peer Connectivity Profile: The profile where most BGP peer settings, remote-as, local-as, and BGP
peer connection options are configured. You can associate the BGP peer connectivity profile with the
logical interface profile or the loopback interface under the node profile. This determines the update-source
configuration for the BGP peering session.
• External Layer 3 Outside EPG (l3extInstP): The external EPG is also referred to as the prefix-based EPG
or InstP. The import and export route control policies, security import policies, and contract associations
are defined in this profile. You can configure multiple external EPGs under a single L3Out. You may
use multiple external EPGs when a different route or a security policy is defined on a single external
Layer 3 Outside connection. An external EPG or multiple external EPGs combine into a route-map.
The import/export subnets defined under the external EPG associate to the IP prefix-list match clauses
in the route-map. The external EPG is also where the import security subnets and contracts are associated.
This is used to permit or drop traffic for this L3out.
• Action Rules Profile: The action rules profile is used to define the route-map set clauses for the L3Out.
The supported set clauses are the BGP communities (standard and extended), Tags, Preference, Metric,
and Metric type.
• Route Control Profile: The route-control profile is used to reference the action rules profiles. This can
be an ordered list of action rules profiles. The Route Control Profile can be referenced by a tenant BD,
BD subnet, external EPG, or external EPG subnet.
There are more protocol settings for BGP, OSPF, and EIGRP L3Outs. These settings are configured per tenant
in the ACI Protocol Policies section in the GUI.
Note When configuring policy enforcement between external EPGs (transit routing case), you must configure the
second external EPG (InstP) with the default prefix 0/0 for export route control, aggregate export, and external
security. In addition, you must exclude the preferred group, and you must use an any contract (or desired
contract) between the transit InstPs.
In either case, the configuration should be considered read-only in the incompatible UI.
Note Except for the procedures in the Configuring Layer 3 External Connectivity Using the Named Mode section,
this guide describes Implicit mode procedures.
• Layer 3 external network objects (l3extOut) created using the Implicit mode CLI procedures are identified
by names starting with “__ui_” and are marked as read-only in the GUI. The CLI partitions these
external-l3 networks by function, such as interfaces, protocols, route-map, and EPG. Configuration
modifications performed through the REST API can break this structure, preventing further modification
through the CLI.
For the steps to remove such objects, see Troubleshooting Unwanted _ui_ Objects in the APIC Troubleshooting
Guide.
• Export Route Control: Controls which external networks are advertised out of the fabric using route-maps
and IP prefix lists. An IP prefix list is created on the BL switch for each subnet that is defined. The export
control policy is enabled by default and is supported for BGP, EIGRP, and OSPF. Match type: specific match
(prefix and prefix length).
• Import Route Control: Controls the subnets that are allowed into the fabric. Can include set and match rules
to filter routes. Supported for BGP and OSPF, but not for EIGRP. If you enable the import control policy for
an unsupported protocol, it is automatically ignored. The import control policy is not enabled by default, but
you can enable it on the Create L3Out panel: on the Identity tab, enable Route Control Enforcement: Import.
Match type: specific match (prefix and prefix length).
• Security Import Subnet: Used to permit the packets to flow between two prefix-based EPGs. Implemented with
ACLs. Match type: uses the ACL match prefix or wildcard match rules.
• Aggregate Export: Used to allow all prefixes to be advertised to the external peers. Implemented with the
0.0.0.0/0 le 32 IP prefix-list. Match type: only supported for the 0.0.0.0/0 subnet (all prefixes).
• Aggregate Import: Used to allow all prefixes that are inbound from an external BGP peer. Implemented with
the 0.0.0.0/0 le 32 IP prefix-list. Match type: only supported for the 0.0.0.0/0 subnet (all prefixes).
You may prefer to advertise all the transit routes out of an L3Out connection. In this case, use the aggregate
export option with the prefix 0.0.0.0/0. Using this aggregate export option creates an IP prefix-list entry (permit
0.0.0.0/0 le 32) that the APIC system uses as a match clause in the export route-map. Use the show route-map
<outbound route-map> and show ip prefix-list <match-clause> commands to view the output.
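The following Python sketch illustrates, for reference only, the semantics of the permit 0.0.0.0/0 le 32
prefix-list entry that the aggregate export option creates: every IPv4 prefix with a mask length from 0 through
32 matches, in contrast to an exact-match entry for a single configured subnet. The configured subnet shown is
hypothetical.

    import ipaddress

    def matches_aggregate(prefix):
        # "permit 0.0.0.0/0 le 32": any IPv4 prefix with mask length 0-32 matches.
        net = ipaddress.ip_network(prefix)
        return net.version == 4 and net.prefixlen <= 32

    def matches_exact(prefix, configured="10.1.0.0/16"):
        # Exact match: only the configured prefix itself matches.
        return ipaddress.ip_network(prefix) == ipaddress.ip_network(configured)

    print(matches_aggregate("203.0.113.0/24"))   # True  -> advertised by aggregate export
    print(matches_exact("10.1.1.0/24"))          # False -> only 10.1.0.0/16 itself would match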
If you enable aggregate shared routes and a route learned in one VRF must be advertised to another VRF, the
routes can be shared by matching the subnet exactly, or they can be shared by using an aggregate subnet mask.
Multiple subnet masks can be used to determine which specific route groups are shared between VRFs. For
example, 10.1.0.0/16 and 12.1.0.0/16 can be specified to aggregate these subnets. Or, 0/0 can be used to share
all subnet routes across multiple VRFs.
Note Routes shared between VRFs function correctly on Generation 2 switches (Cisco Nexus N9K switches with
"EX" or "FX" on the end of the switch model name, or later; for example, N9K-93108TC-EX). On Generation
1 switches, however, there may be dropped packets with this configuration, because the physical ternary
content-addressable memory (TCAM) tables that store routes do not have enough capacity to fully support
route parsing.
1. Prerequisites
• Ensure that you have read/write access privileges to the infra security domain.
• Ensure that the target leaf switches with the necessary interfaces are available.
Note For guidelines and cautions for configuring and maintaining Layer 3 outside connections, see Guidelines for
Layer 3 Networking, on page 441.
For information about the types of L3Outs, see External Layer 3 Outside Connection Types, on page 245.
A Layer 3 external outside network (l3extOut object) includes the routing protocol options (BGP, OSPF, or
EIGRP or supported combinations) and the switch-specific and interface-specific configurations. While the
l3extOut contains the routing protocol (for example, OSPF with its related Virtual Routing and Forwarding
(VRF) and area ID), the Layer 3 external interface profile contains the necessary OSPF interface details. Both
are needed to enable OSPF.
The l3extInstP EPG exposes the external network to tenant EPGs through a contract. For example, a tenant
EPG that contains a group of web servers could communicate through a contract with the l3extInstP EPG
according to the network configuration contained in the l3extOut. The outside network configuration can
easily be reused for multiple nodes by associating the nodes with the L3 external node profile. Multiple nodes
that use the same profile can be configured for fail-over or load balancing. Also, a node can be added to
multiple l3extOuts resulting in VRFs that are associated with the l3extOuts also being deployed on that node.
For scalability information, refer to the current Verified Scalability Guide for Cisco ACI.
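To illustrate how these pieces fit together, the following is a minimal, hedged REST API sketch of an l3extOut that contains an OSPF-enabled node profile, an interface profile, and an l3extInstP external EPG. The object names, node ID, and addresses mirror the GUI example that follows, but the sketch itself is illustrative rather than a complete configuration.
<fvTenant name="Example">
  <l3extOut name="EXAMPLE_L3Out1">
    <l3extRsEctx tnFvCtxName="VRF1"/>
    <!-- Layer 3 domain name is an assumption for this sketch -->
    <l3extRsL3DomAtt tDn="uni/l3dom-ExampleL3Dom"/>
    <ospfExtP areaId="0.0.0.0" areaType="regular"/>
    <l3extLNodeP name="EXAMPLE_L3Out1_nodeProfile">
      <l3extRsNodeL3OutAtt tDn="topology/pod-1/node-102" rtrId="2.2.2.2" rtrIdLoopBack="no"/>
      <l3extLIfP name="EXAMPLE_L3Out1_interfaceProfile">
        <ospfIfP/>
        <l3extRsPathL3OutAtt tDn="topology/pod-1/paths-102/pathep-[eth1/11]"
          ifInstT="l3-port" addr="172.16.1.1/30" mtu="inherit"/>
      </l3extLIfP>
    </l3extLNodeP>
    <l3extInstP name="L3Out_EPG1">
      <!-- "External Subnets for the External EPG" is assumed to map to scope="import-security" -->
      <l3extSubnet ip="10.0.0.0/8" scope="import-security"/>
    </l3extInstP>
  </l3extOut>
</fvTenant>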
Note This example uses Cisco APIC release 4.2(x) and the associated GUI screens.
Example Topology
Figure 31: Example Topology for an OSPF L3Out with Two External Routers
• Allow communication with a contract between EPG1 and external route (10.0.0.0/8)
The preceding diagram illustrates the configuration for the example topology in Figure 31: Example Topology
for an OSPF L3Out with Two External Routers, on page 256. The configuration flow for this example is as
follows:
1. L3Out: This creates
• L3Out itself (OSPF parameters)
• Node, Interface, OSPF I/F Profiles
• L3Out EPG with External Subnets for the External EPG scope
2. Advertise a BD subnet: This uses the default-export route map with the BD subnet (192.168.1.0/24)
3. Allow EPG - L3Out communication: This uses a contract between EPG1 and L3Out_EPG1
Prerequisites
Figure 33: Example Screen of Objects Created as Prerequisites
• This configuration example focuses only on the L3Out configuration part. The other configurations such
as for VRF, BD, EPG, Application Profiles, and Access Policies (Layer 3 Domain etc.) are not covered.
The preceding screenshot displays the prerequisite tenant configurations that are as follows:
• VRF1
• BD1 with the subnet 192.168.1.254/24
• EPG1 with a static port towards endpoints
Procedure
Step 1 In the GUI Navigation pane, under the Tenant Example, navigate to Networking > L3Outs.
Step 2 Right-click and choose Create L3Out.
Step 3 In the Create L3Out screen, Identity tab, perform the following actions:
Step 4 Click Next to display the Nodes and Interfaces screen, and perform the following actions:
a) In the Interface Types area, in the Layer 3 field and in the Layer 2 field, ensure that your selections match the
choices in the preceding screenshot (Routed and Port).
b) In the Nodes area, in the Node ID field, from the drop-down list, choose the appropriate node ID. (leaf2 (Node
102))
c) In the Router ID field, enter the appropriate router ID. (2.2.2.2)
The Loopback Address field auto populates based on the router ID value you enter. You do not require the loopback
address, so delete the value and leave the field blank.
d) In the Interface field, choose the interface ID. (eth1/11)
e) In the IP Address field, enter the associated IP address. (172.16.1.1/30)
f) In the MTU field, keep the default value. (inherit)
g) Click the + icon next to the MTU field to add an additional interface for node leaf2. (Node-102)
h) In the Interface field, choose the interface ID. (eth1/12)
i) In the IP Address field, enter the associated IP address. (172.16.2.1/30)
j) In the MTU field, keep the default value. (inherit)
Step 5 To add another node, click the + icon next to the Loopback Address field, and perform the following actions:
Note
When you click the + icon, the new Nodes area is displayed below the area that you had populated earlier.
a) In the Nodes area, in the Node ID field, from the drop-down list, choose the node ID. (leaf3 (Node-103))
b) In the Router ID field, enter the router ID. (3.3.3.3)
The Loopback Address field auto populates based on the router ID value you enter. You do not require the loopback
address, so delete the value and leave the field blank.
c) In the Interface field, choose the interface ID. (eth1/11)
d) In the IP Address field, enter the IP address. (172.16.3.1/30)
e) In the MTU field, keep the default value. (inherit)
f) Click the + icon next to the MTU field to add an additional interface for node leaf3. (Node-103)
g) In the Interface field, choose the interface ID. (eth1/12)
h) In the IP Address field, enter the associated IP address. (172.16.4.1/30)
i) In the MTU field, keep the default value. (inherit), and click Next.
We have specified the node, interface, and IP address for each interface.
Step 6 Click Next to view the Protocols screen.
This screen allows you to specify the OSPF interface level policy to configure hello-interval, network-type, etc.
In this example, nothing is selected. Therefore, the default policy is used. The default OSPF interface profile uses
Unspecified as network-type which defaults to broadcast network type. To optimize this with point-to-point network-type
for sub-interface, see Change the OSPF Interface Level Parameters (Optional).
a) In the External EPG area, Name field, enter a name for the external EPG. (L3Out_EPG1)
b) In the Provided Contract field, do not choose a value.
In this example, there is no provided contract for L3Out_EPG1 because a normal EPG (EPG1) is the provider.
c) In the Consumed Contract field, choose default from the drop-down list.
Step 9 In the Default EPG for all external networks field, uncheck the checkbox, and perform the following actions:
a) Click the + icon in the Subnets area, to display the Create Subnet dialog box.
b) In the IP Address field, enter the subnet. (10.0.0.0/8)
c) In the External EPG Classification field, check the checkbox for External Subnets for the External EPG. Click
OK.
Step 10 Click the + icon in the Subnets area once more to display the Create Subnet dialog box, and perform the following
actions:
Note
Although this is an optional configuration, it is a best practice to specify the L3Out interface subnets in case endpoints
have to communicate with those IPs.
Procedure
Step 1 Navigate to your Tenant_name > Networking > L3Outs > EXAMPLE_L3Out1, in the Work pane, scroll to view the
details as follows:
At this location in the GUI, verify the main L3Out parameters such as VRF, domain, and OSPF parameters that are
configured in the Identity screen in the Create L3Out wizard.
Step 2 Verify that OSPF is enabled with the specified parameters such as Area ID and Area Type.
Step 3 Under Logical Node Profiles, EXAMPLE_L3Out1_nodeProfile is created to specify border leaf switches with their router
IDs.
Step 4 Under Logical Interface Profile, EXAMPLE_L3Out1_interfaceProfile is created.
Verify the interface parameters, such as the interface ID and IP addresses (configured as routed interfaces in this example). The default MAC
address gets auto-populated. An OSPF interface profile is also created under this profile for the OSPF interface-level parameters.
Note This default-export route map will be applied to the L3Out (EXAMPLE_L3Out1) without being associated
to anything specific.
Procedure
Step 1 To enable a BD subnet to be advertised, navigate to Tenant > Networking > Bridge Domains > BD1 > Subnets >
192.168.1.254/24, and select the Advertised Externally scope.
Step 2 To create a route map under your L3Out (EXAMPLE_L3Out1), navigate to Route map for import and export route
control.
Step 3 Right-click and choose Create Route map for import and export route control.
Step 4 In the Create Route map for import and export route control dialog box, in the Name field, choose default-export.
Step 5 In the Type field, choose Match Routing Policy Only.
Note
Match Routing Policy Only: By choosing this Type with default-export route map, all route advertisement configuration
is performed by this route map. BD associations and export route control subnets configured under the external EPG will
not apply. You should configure all match rules within this route-map for all routes that will be advertised from this
L3Out.
Match Prefix and Routing Policy: By choosing this Type with default-export route map, route advertisement is matched
by any match rules configured in this route map in addition to any BD to L3Out associations and export route control
subnets defined under the External EPG.
When using a route profile, it is recommended to use Match Routing Policy Only for a simpler configuration that is
easier to maintain.
Step 6 In the Contexts area, click the + icon, to display the Create Route Control Context dialog box, and perform the following
actions:
a) In the Order field, configure the order. (0)
In this example, we have only one order.
b) In the Name field, enter a name for the context. (BD_Subnets)
c) In the Action field, choose Permit.
This enables the route map to permit the prefix we will configure.
In this example, we require the match rule that requires the IP prefix list, BD1_prefix. This IP prefix list points to the
BD subnet advertised.
Step 7 In the Match Rule field, create the IP prefix-list by performing the following actions;
a) Choose Create Match Rule for a Route-Map.
b) In the Name field, enter a name BD1_prefix.
c) In the Match Prefix area, click the + icon, and enter the BD subnet (192.168.1.0/24).
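As a hedged REST API sketch of the configuration created in the preceding steps (the route-map, context, and match-rule names mirror the example; the type="global" value is an assumption about how Match Routing Policy Only is represented in the API):
<fvTenant name="Example">
  <l3extOut name="EXAMPLE_L3Out1">
    <rtctrlProfile name="default-export" type="global">
      <rtctrlCtxP name="BD_Subnets" order="0" action="permit">
        <!-- references the match rule (IP prefix list) defined under the tenant -->
        <rtctrlRsCtxPToSubjP tnRtctrlSubjPName="BD1_prefix"/>
      </rtctrlCtxP>
    </rtctrlProfile>
  </l3extOut>
  <rtctrlSubjP name="BD1_prefix">
    <rtctrlMatchRtDest ip="192.168.1.0/24"/>
  </rtctrlSubjP>
</fvTenant>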
Procedure
Step 2 In the Work pane, in the External EPG Instance Profile area, under Policy > General sub-tab, look at the Properties
and verify that the two subnets are displayed with External Subnets for the External EPG.
Step 3 Next, click the Contracts sub-tab and verify the contract you specified earlier is consumed correctly. In case you want
to add more contracts, you can perform the actions from this location in GUI.
Step 4 Navigate to Application Profile > Application EPGs > EPG1 > Contracts, and verify that EPG1 is providing the
appropriate contract.
Procedure
Step 1 Under your L3Out, navigate to Logical Interface Profile > EXAMPLE_L3Out1_interfaceProfile > OSPF Interface
Profile.
Step 2 In the Work pane, in the Properties area, choose the OSPF Interface Policy you wish to use.
• EPs/Host routes in SITE-1 will not be advertised out through Border Leaf switches in other SITEs.
• When an EP is aged out or removed from the database, its Host route is withdrawn from the Border Leaf.
• When an EP is moved across SITEs or PODs, the Host route is withdrawn from the first SITE/POD and
advertised in the new POD/SITE.
• EPs learned on a specific BD, under any of the BD subnets are advertised from the L3out on the border
leaf in the same POD.
• EPs are advertised out as Host Routes only in the local POD through the Border Leaf.
• Host routes are not advertised out from one POD to another POD.
• In the case of Remote Leaf, if EPs are locally learned in the Remote Leaf, they are then advertised only
through a L3out deployed in Remote Leaf switches in same POD.
• EPs/Host routes in a Remote Leaf are not advertised out through Border Leaf switches in main POD or
another POD.
• EPs/Host routes in the main POD are not advertised through L3out in Remote Leaf switches of same
POD or another POD.
• The BD subnet must have the Advertise Externally option enabled.
• The BD must be associated to an L3out or the L3out must have explicit route-map configured matching
BD subnets.
• There must be a contract between the EPG in the specified BD and the External EPG for the L3out.
Note If there is no contract between the BD/EPG and the External EPG, the BD subnet
and host routes will not be installed on the border leaf.
• Advertise Host Route is supported for shared services. For example, EPG1/BD1 is deployed in VRF-1
and the L3Out is in another VRF (VRF-2). By providing a shared contract between the EPG and the L3Out, host routes
are leaked from VRF-1 to VRF-2.
• When Advertise Host Route is enabled on a BD, a custom tag cannot be set on the BD subnet using a route map.
• When Advertise Host Route is enabled on a BD and the BD is associated with an L3Out, the BD subnet is
marked as public. If a rogue EP is present under the BD, that EP is advertised out of the L3Out. (A configuration
sketch for these requirements follows this list.)
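The following is a minimal, hedged REST API sketch of the requirements above. The object names are illustrative, and the hostBasedRouting and scope="public" attributes are assumed to be the API equivalents of the Advertise Host Route and Advertised Externally GUI options.
<fvTenant name="t1">
  <fvBD name="BD1" hostBasedRouting="yes">
    <fvRsCtx tnFvCtxName="VRF1"/>
    <!-- BD subnet must be advertised externally -->
    <fvSubnet ip="192.168.1.254/24" scope="public"/>
    <!-- BD must be associated to the L3Out (or the L3Out must match the subnet via route map) -->
    <fvRsBDToOut tnL3extOutName="L3Out1"/>
  </fvBD>
</fvTenant>
A contract between the EPG in this BD and the external EPG of the L3Out is still required, as noted above.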
After both L3Outs and spine route reflectors are deployed, border leaf nodes learn external routes via L3Outs,
and those external routes are distributed to all leaf nodes in the fabric via spine MP-BGP route reflectors.
Check the Verified Scalability Guide for Cisco APIC for your release to find the maximum number of routes
supported by a leaf.
Procedure
Step 5 On the menu bar, choose Fabric > Fabric Policies > Pods > Policy Groups.
Step 6 In the Navigation pane, expand and right-click Policy Groups, and click Create Pod Policy Group.
Step 7 In the Create Pod Policy Group dialog box, in the Name field, enter the name of a pod policy group.
Step 8 In the BGP Route Reflector Policy drop-down list, choose the appropriate policy (default). Click Submit.
The BGP route reflector policy is associated with the route reflector pod policy group, and the BGP process is enabled
on the leaf switches.
Step 9 On the menu bar, choose Fabric > Fabric Policies > Profiles > Pod Profile default > default.
Step 10 In the Work pane, from the Fabric Policy Group drop-down list, choose the pod policy that was created earlier. Click
Submit.
The pod policy group is now applied to the fabric policy group.
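For reference, a hedged REST API sketch of the fabric BGP route reflector policy that these GUI steps rely on is shown below; the AS number and the spine node IDs are illustrative only.
<fabricInst>
  <bgpInstPol name="default">
    <!-- fabric autonomous system number -->
    <bgpAsP asn="65001"/>
    <bgpRRP>
      <!-- spine switches acting as MP-BGP route reflectors -->
      <bgpRRNodePEp id="201"/>
      <bgpRRNodePEp id="202"/>
    </bgpRRP>
  </bgpInstPol>
</fabricInst>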
Procedure
a) Use secure shell (SSH) to log in as an administrator to each leaf switch as required.
b) Enter the show processes | grep bgp command to verify the state is S.
If the state is NR (not running), the configuration was not successful.
Step 2 Verify that the autonomous system number is configured in the spine switches by performing the following actions:
a) Use the SSH to log in as an administrator to each spine switch as required.
b) Execute the following commands from the shell window
Example:
cd /mit/sys/bgp/inst
Example:
grep asn summary
The configured autonomous system number must be displayed. If the autonomous system number value displays as 0,
the configuration was not successful.
Note The steps for filling out the fields are not necessarily listed in the same order that you see them in the GUI.
Procedure
Step 4 Choose an interface type tab: Routed Sub-Interfaces, Routed Interfaces, SVI, or Floating SVI.
Step 5 Double click an existing interface to modify it, or click the Create (+) button to add a new interface to the logical
interface profile.
Step 6 For interface types other than floating SVI, perform the following substeps:
a) To add a new interface, in the Path Type field, choose the appropriate path type.
For the routed sub-interface and routed interface interface types, choose Port or Direct Port Channel. For the SVI
interface type, choose Port, Direct Port Channel, or Virtual Port Channel.
b) In the Node drop-down list, choose a node.
Note
This is applicable only for the non-port channel path types. If you selected Path Type as Port, then perform this
step. Otherwise, proceed to the next step.
c) In the Path drop-down list, choose the interface ID or the port channel name.
An example of an interface ID is eth1/1. The port channel name is the interface policy group name for each direct
or virtual port channel.
Step 7 For the floating SVI interface type, in the Anchor Node drop-down list, choose a node.
Step 8 (Optional) In the Description field, enter a description of the L3Out interface.
Step 9 For the routed sub-interfaces, SVI, and floating SVI interface types, in the Encap drop-down list, choose VLAN and
enter an integer value for this entry.
Step 10 For the SVI and floating SVI interface types, perform the following substeps:
a) For the Encap Scope buttons, choose the scope of the encapsulation used for the Layer 3 Outside profile.
• VRF: Use the same transit VLAN in all Layer 3 Outsides in the same VRF instance for a given VLAN
encapsulation. This is a global value.
• Local: Use a unique transit VLAN per Layer 3 Outside.
b) For the Auto State buttons, choose whether to enable or disable this feature.
• disabled: The SVI or floating SVI remains active even if no interfaces are operational in the corresponding
VLANs.
• enabled: When a VLAN interface has multiple ports in the VLAN, the SVI or floating SVI goes to the down
state when all the ports in the VLAN go down.
Step 16 In the Target DSCP drop-down list, choose the target differentiated services code point (DSCP) of the path attached
to the Layer 3 outside profile.
Step 17 Click Submit.
No fault is raised if the OSPF session goes down because the send/accept lifetime of the key has expired
and there is no active key. The keychain state under the OSPF interface will be in the "not-ready" state.
Procedure
Step 5 [Optional] In the Description field, enter a description for the OSPF interface profile. The description can be 0 to 128
alphanumeric characters.
Step 6 Enter a value for the target interface policy name. This name can be between 1 and 64 alphanumeric characters. You
cannot change this name after the object has been saved.
Step 7 To configure the OSPF interface profile by using the MD5 or the simple authentication, complete the following steps:
a) In the OSPFv2 Authentication Key field, enter the authentication key. The authentication key is a password (up to
8 characters) that can be assigned on an interface basis. The authentication key must match for each router on the
interface.
Note
To use authentication, the OSPF authentication type for this interface's area should be set to Simple (the default is
None).
b) In the Confirm OSPFv2 Authentication Key field, re-enter the authentication key.
c) In the OSPFv2 Authentication Key ID field, enter the authentication key identifier.
d) In the OSPFv2 Authentication Type field, select the appropriate option.
The OSPF authentication type. Authentication enables the flexibility to authenticate OSPF neighbors. You can enable
authentication in OSPF to exchange routing update information in a secure manner.
Note
When you configure authentication, you must configure an entire area with the same type of authentication.
Step 8 To configure the OSPF interface profile by using the KeyChain authentication, complete the following steps:
a) In the OSPFv2 KeyChain Policy field, select OSPFv2 KeyChain policy.
The OSPFv2 KeyChain policy supports HMAC-SHA authentication along with Simple and MD5 authentication.
When you select this option, you can have multiple keys under the same key chain.
For enhanced security, you can use rotating keys by specifying a lifetime for each key. When the lifetime of a key expires,
OSPF automatically rotates to the next key. If you do not specify an algorithm, OSPF uses MD5, which is the
default cryptographic authentication algorithm.
Note
The new key is the preferred key and takes precedence over the existing keys.
Note
You can configure authentication either in the legacy way, by specifying the OSPFv2 authentication type (MD5
authentication or simple authentication), or by specifying an OSPFv2 keychain policy.
Configuring a keychain policy overrides the selected authentication type.
Step 9 (Applicable only for OSPFv3) OSPFv3 IPsec Policy: To associate an OSPFv3 IPsec policy with an L3Out interface, select
an IPsec policy from the drop-down list. For creating an OSPFv3 IPsec policy, see the Create an OSPF IPsec Policy
procedure.
What to do next
To specify the rotating keys for the OSPFv2 KeyChain, refer to the Create Key Policy, on page 277.
Procedure
h) In the Key accept lifetime end time field, specify the end time in the YYYY-MM-DD HH:MM:SS format.
This field is not applicable for OSPFv3 IPSec policy.
Note OSPFv3 is not supported on infra tenant; the OSPF IPSec policy support is for user tenant only.
Procedure
• Security Parameter Index: unique value for creating the IPSec protocol. Select a value from the drop-down list. The
supported range is from 256 to 4294967295.
• OSPFv3 Authentication Keychain: select a keychain value from the drop-down list. If you have selected the AH
option for the IP Security Protocol field, this field is mandatory. If you leave the field blank, a fault is generated.
To check for faults, navigate to the OSPF Interface Profile screen and click the Faults tab.
• OSPFv3 Encryption Keychain: select a keychain value from the drop-down list. This field is not applicable if you
have selected the AH option for the IP Security Protocol field. If you have selected the ESP option for the IP
Security Protocol field, it is mandatory to enter a value for the Authentication Keychain field or the Encryption
Keychain field.
You can use the show ipv6 ospfv3 interface interface-id command on the switch over a SSH or console session to check
the created IPSec policy.
Step 5 To associate the created OSPFv3 IPSec policy to an L3Out interface, see Step 9 of the Create OSPF Interface
Profile procedure.
Starting with Cisco APIC release 2.3, it is now possible to choose the behavior when deploying two (or more)
Layer 3 Outs using the same external encapsulation (SVI).
The encapsulation scope can now be configured as Local or VRF:
• Local scope (default): The example behavior is displayed in the figure titled Local Scope Encapsulation
and Two Layer 3 Outs.
• VRF scope: The ACI fabric configures the same bridge domain (VXLAN VNI) across all the nodes and
Layer 3 Out where the same external encapsulation (SVI) is deployed. See the example in the figure
titled VRF Scope Encapsulation and Two Layer 3 Outs.
The mapping among the CLI, API, and GUI syntax is as follows:
Note The CLI commands to configure encapsulation scope are only supported when the VRF is configured through
a named Layer 3 Out configuration.
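The following is a hedged REST API sketch of the encapsulation scope setting on an SVI path attachment; the encapScope attribute and its values are assumptions about how the GUI options map to the API, where "ctx" is assumed to correspond to the VRF scope and "local" to the Local scope. The path, VLAN, and address are illustrative.
<l3extLIfP name="i1">
  <l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/10]"
    ifInstT="ext-svi" encap="vlan-100" encapScope="ctx" addr="10.1.1.1/24"/>
</l3extLIfP>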
Procedure
b) In the remaining fields, choose the desired options, and click Next.
c) In the Step 2 Protocol Profiles screen, choose the desired protocol profile details, and click Next.
d) In the Step 3 Interfaces screen, click the SVI tab, and click the + icon to open the Select SVI dialog box.
e) In the Specify Interface area, choose the desired values for the various fields.
f) In the Encap Scope field, choose the desired encapsulation scope value. Click OK.
The default value is Local.
Release 5.2(3) added support for configuring a single external bridge that can be configured with different
encapsulation VLANs on different leaf switches. The multiple encapsulation support feature uses the floating
SVI object to define the external bridge domain for floating L3Outs or an external bridge group profile for
defining the external bridge domain for regular L3Outs. The use case for this feature may be where the same
VLAN cannot be used on different leaf switches because it may already be in use.
Figure 38: Single VNID Associated to External Bridge Domains with Different Encapsulation (post-ACI 5.2(3) Releases).
As of ACI release 6.0(1), this feature is supported for physical domain L3Outs only, not for VMM domain
L3Outs.
To configure the use case shown above, where you are grouping multiple SVIs into a Layer 2 bridge group:
1. Create three regular SVIs for each VPC pair:
• Create the regular SVI svi-100 on leaf switches node101 and node102
• Create the regular SVI svi-101 on leaf switches node103 and node104
• Create the regular SVI svi-102 on leaf switches node105 and node106
3. Group the regular SVIs svi-100, svi-101, and svi-102 together to behave as part of a single Layer 2
broadcast domain:
a. Create a bridge domain profile.
The bridge domain profile is represented by the new MO l3extBdProfile.
b. Provide a unique name string for the bridge domain profile.
c. Associate each of the regular SVIs that need to be grouped together to the same bridge domain profile.
Two new MOs are available for this association: l3extBdProfileCont and l3extRsBdProfile.
Configuring Multiple Encapsulation for L3Outs With SVI Using the GUI
Procedure
Step 1 Create the regular SVIs and configure the leaf switches with access encapsulations.
See Configuring SVI External Encapsulation Scope Using the GUI, on page 281 for those procedures.
Step 2 Create an external bridge group profile that will be used for SVI grouping.
a) Navigate to Tenants > tenant-name > Policies > Protocol > External Bridge Group Profiles.
A page showing the already-configured external bridge group profiles is displayed.
b) Right-click on External Bridge Group Profiles and choose Create External Bridge Group Profile.
Configuring Multiple Encapsulation for L3Outs With SVI Using the CLI
Procedure
Step 1 Create the regular SVIs and configure the leaf switches with access encapsulations.
See Configuring SVI Interface Encapsulation Scope Using NX-OS Style CLI, on page 481 for those procedures.
Step 2 Log into your APIC through the CLI, then go into configuration mode and tenant configuration mode.
apic1#
apic1# configure
apic1(config)# tenant <tenant-name>
apic1(config-tenant)#
Step 3 Enter the following commands to create an external bridge profile that will be used for SVI grouping.
Step 4 Enter the following commands to associate a regular SVI with the bridge domain profile.
Configuring Multiple Encapsulation for L3Outs With SVI Using the REST API
Procedure
Step 1 Create the regular SVIs and configure the leaf switches with access encapsulations.
See Configuring SVI Interface Encapsulation Scope Using the REST API, on page 562 for those procedures.
Step 2 Enter a post such as the following example to create an external bridge profile that will be used for SVI grouping.
Step 3 Enter a post such as the following example to associate a regular SVI with the bridge domain profile.
<fvTenant name="t1">
<l3extOut name="l1">
<l3extLNodeP name="n1">
<l3extLIfP name="i1">
<l3extRsPathL3OutAtt encap="vlan-108"
tDn="topology/pod-1/paths-108/pathep-[eth1/10]"
ifInstT="ext-svi">
<l3extBdProfileCont>
<l3extRsBdProfile tDn="uni/tn-t1/bdprofile-bd100" status=""/>
</l3extBdProfileCont>
</l3extRsPathL3OutAtt>
</l3extLIfP>
</l3extLNodeP>
</l3extOut>
</fvTenant>
Step 4 Enter a post such as the following example to specify the separate encapsulation for floating nodes.
<fvTenant name="t1">
<l3extOut name="l1">
<l3extLNodeP name="n1">
<l3extLIfP name="i1">
<l3extVirtualLIfP addr="10.1.0.1/24"
encap="vlan-100"
nodeDn="topology/pod-1/node-101"
ifInstT="ext-svi">
<l3extRsDynPathAtt floatingAddr="10.1.0.100/24"
encap="vlan-104"
tDn="uni/phys-phyDom"/>
</l3extVirtualLIfP>
</l3extLIfP>
</l3extLNodeP>
</l3extOut>
</fvTenant>
Note This feature is available in the APIC Release 2.2(3x) release and going forward with APIC Release 3.1(1). It
is not supported in APIC Release 3.0(x).
The Switch Virtual Interface (SVI) represents a logical interface between the bridging function and the routing
function of a VLAN in the device. SVI can have members that are physical ports, direct port channels, or
virtual port channels. The SVI logical interface is associated with VLANs, and the VLANs have port
membership.
The SVI state does not depend on the members. The default auto state behavior for SVI in Cisco APIC is that
it remains in the up state when the auto state value is disabled. This means that the SVI remains active even
if no interfaces are operational in the corresponding VLAN/s.
If the SVI auto state value is changed to enabled, then it depends on the port members in the associated VLANs.
When a VLAN interface has multiple ports in the VLAN, the SVI goes to the down state when all the ports
in the VLAN go down.
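As a hedged REST API sketch of this setting (the autostate attribute shown here is assumed to be the API equivalent of the GUI Auto State option; the path, VLAN, and address are illustrative), the value is set on the SVI path attachment under the logical interface profile:
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/10]"
  ifInstT="ext-svi" encap="vlan-100" addr="10.1.1.1/24" autostate="enabled"/>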
Procedure
Note The percentage of traffic hashing on each next-hop is an approximation. The actual percentage varies.
This route entry on a non-border leaf switch results in two ECMP paths from the non-border leaf to each
border leaf switch. This can result in disproportionate load balancing to the border leaf switches if the next-hops
are not evenly distributed across the border leaf switches that advertise the routes.
Beginning with Cisco ACI release 6.0(2), you can use the next-hop propagate and redistribute attached host
features to avoid sub-optimal routing in the Cisco ACI fabric. When these features are enabled, packet flows
from a non-border leaf switch are forwarded directly to the leaf switch connected to the next-hop address. All
next-hops are now used for ECMP forwarding from the hardware. In addition, Cisco ACI now redistributes
ECMP paths into BGP for both directly connected next-hops and recursive next-hops.
In the following example, leaf switches 1 and 2 advertise the 10.1.1.0/24 route with the next-hop propagate
and redistribute attached host features:
10.1.1.0/24
through 192.168.1.1 (border leaf switch 1) -> ECMP path 1
through 192.168.1.2 (border leaf switch 1) -> ECMP path 2
through 192.168.1.3 (border leaf switch 1) -> ECMP path 3
through 192.168.1.4 (border leaf switch 2) -> ECMP path 4
must be different from the real AS number of the Cisco ACI fabric. When this feature is configured,
Cisco ACI border leaf switches prepend the local AS number to the AS_PATH of the incoming updates
and append the same to the AS_PATH of the outgoing updates. Prepending of the local AS number to
the incoming updates can be disabled by the no-prepend setting in the Local-AS Number Config. The
no-prepend + replace-as setting can be used to prevent the local AS number from being appended to
the outgoing updates, in addition to not prepending it to the incoming updates.
• A router ID for an L3Out for any routing protocols cannot be the same IP address or the same subnet as
the L3Out interfaces such as routed interface, sub-interface or SVI. However, if needed, a router ID can
be the same as one of the L3Out loopback IP addresses. Starting from Cisco APIC Release 6.0(4), this
restriction has been removed when the “Use Router ID for Loopback Address” option is not enabled.
• If you have multiple L3Outs of the same routing protocol on the same leaf switch in the same VRF
instance, the router ID for those must be the same. If you need a loopback with the same IP address as
the router ID, you can configure the loopback in only one of those L3Outs.
• There are two ways to define the BGP peer for an L3Out:
• Through the BGP peer connectivity profile (bgpPeerP) at the logical node profile level
(l3extLNodeP), which associates the BGP peer to the loopback IP address. When the BGP peer is
configured at this level, a loopback address is expected for BGP connectivity, so a fault is raised if
the loopback address configuration is missing.
• Through the BGP peer connectivity profile (bgpPeerP) at the logical interface profile level
(l3extRsPathL3OutAtt), which associates the BGP peer to the respective interface or sub-interface.
• You must configure an IPv6 address to enable peering over loopback using IPv6.
• Tenant networking protocol policies for BGP l3extOut connections can be configured with a maximum
prefix limit that enables monitoring and restricting the number of route prefixes received from a peer.
After the maximum prefix limit is exceeded, a log entry can be recorded, further prefixes can be rejected,
the connection can be restarted if the count drops below the threshold in a fixed interval, or the connection
is shut down. You can use only one option at a time. The default setting is a limit of 20,000 prefixes,
after which new prefixes are rejected. When the reject option is deployed, BGP accepts one more prefix
beyond the configured limit and the Cisco Application Policy Infrastructure Controller (APIC) raises a
fault.
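A hedged REST API sketch of a BGP peer prefix policy implementing the maximum prefix limit described above is shown below; the policy name and values are illustrative, and the attribute names are assumptions based on the bgpPeerPfxPol class.
<fvTenant name="t1">
  <!-- reject new prefixes once 20,000 prefixes have been received from the peer -->
  <bgpPeerPfxPol name="prefix-limit-20k" maxPfx="20000" action="reject"/>
</fvTenant>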
Note Cisco ACI does not support IP fragmentation. Therefore, when you configure
Layer 3 Outside (L3Out) connections to external routers, or Multi-Pod connections
through an Inter-Pod Network (IPN), it is recommended that the interface MTU
is set appropriately on both ends of a link. On some platforms, such as Cisco
ACI, Cisco NX-OS, and Cisco IOS, the configurable MTU value does not take
into account the Ethernet headers (matching IP MTU, and excluding the 14-18
Ethernet header size), while other platforms, such as IOS-XR, include the Ethernet
header in the configured MTU value. A configured value of 9000 results in a
max IP packet size of 9000 bytes in Cisco ACI, Cisco NX-OS, and Cisco IOS,
but results in a max IP packet size of 8986 bytes for an IOS-XR untagged interface.
For the appropriate MTU values for each platform, see the relevant configuration
guides.
We highly recommend that you test the MTU using CLI-based commands. For
example, on the Cisco NX-OS CLI, use a command such as ping 1.1.1.1 df-bit
packet-size 9000 source-interface ethernet 1/1.
• When you stretch the L3Out across the parent leaf and the tier 2 leaf, ECMP is not supported.
Note When OSPF is used with BGP peering, OSPF is only used to learn and advertise the routes to the BGP peering
addresses. All route control applied to the Layer 3 Outside Network (EPG) is applied at the BGP protocol
level.
ACI supports a number of features for iBGP and eBGP connectivity to external peers. The BGP features are
configured on the BGP Peer Connectivity Profile.
The BGP peer connectivity profile features are described in the following table.
Note ACI supports the following BGP features. NX-OS BGP features not listed below are not currently supported
in ACI.
Local Autonomous System Number: The local AS feature is used to advertise a different AS number than the AS assigned
to the fabric MP-BGP Route Reflector Profile. It is supported only for eBGP neighbors, and the local AS number must be
different than the route reflector policy AS. Equivalent BGP configuration: local-as xxx <no-prepend> <replace-as> <dual-as>.
BGP Additional-Paths
Beginning with the Cisco Application Policy Infrastructure Controller (APIC) 6.0(2) release, BGP supports
the additional-paths feature, which allows the BGP speaker to propagate and receive multiple paths for the
same prefix without the new paths replacing any previous paths. This feature allows BGP speaker peers to
negotiate whether the peers support advertising and receiving multiple paths per prefix and advertising such
paths. A special 4-byte path ID is added to the network layer reachability information (NLRI) to differentiate
multiple paths for the same prefix sent across a peer session.
The following figure illustrates the BGP additional-paths receive capability:
Figure 39: BGP Route Advertisement with the Additional Paths Capability
Prior to the additional-paths receive feature, BGP advertised only one best path, and the BGP speaker accepted
only one path for a given prefix from a given peer. If a BGP speaker received multiple paths for the same
prefix within the same session, BGP used the most recent advertisement.
Procedure
Step 5 Enter the necessary information in the Identity page of the Create L3Out wizard.
a) Enter the necessary information in the Name, VRF and L3 Domain fields.
b) In the area with the routing protocol check boxes, choose BGP.
c) Click Next to move to the Nodes and Interfaces window.
Step 6 Enter the necessary information in the Nodes and Interfaces page of the Create L3Out wizard.
a) In the Layer 3 area, select Routed.
b) From the Node ID field drop-down menu, choose the node for the L3Out.
For the topology in these examples, use node 103.
c) In the Router ID field, enter the router ID.
d) (Optional) You can configure another IP address for a loopback address, if necessary.
The Loopback Address field is automatically populated with the same entry that you provide in the Router ID
field. This is the equivalent of the Use Router ID for Loopback Address option in previous builds. Enter a different
IP address for a loopback address, if you don't want to use the router ID for the loopback address, or leave this field
empty if you do not want to use the router ID for the loopback address.
e) Enter necessary additional information in the Nodes and Interfaces page.
The fields shown in this page vary, depending on the options that you select in the Layer 3 and Layer 2 areas.
f) When you have entered the remaining additional information in the Nodes and Interfaces page, click Next.
The Protocols page appears.
Step 7 Enter the necessary information in the Protocols page of the Create L3Out wizard.
a) In the BGP Loopback Policies and BGP Interface Policies areas, enter the following information:
• Peer Address: Enter the peer IP address
• EBGP Multihop TTL: Enter the connection time to live (TTL). The range is from 1 to 255 hops; if zero, no
TTL is specified. The default is 1.
• Remote ASN: Enter a number that uniquely identifies the neighbor autonomous system. The Autonomous
System Number can be in 4-byte asplain format from 1 to 4294967295.
Note
ACI does not support asdot or asdot+ format AS numbers.
b) Click Next.
The External EPG page appears.
Step 8 Enter the necessary information in the External EPG page of the Create L3Out wizard.
a) In the Name field, enter a name for the external network.
b) In the Provided Contract field, enter the name of a provided contract.
c) In the Consumed Contract field, enter the name of a consumed contract.
d) In the Default EPG for all external networks field, uncheck if you don’t want to advertise all the transit routes
out of this L3Out connection.
The Subnets area appears if you uncheck this box. Specify the desired subnets and controls as described in the
following steps.
e) Click the + icon to expand Subnet, then perform the following actions in the Create Subnet dialog box.
f) In the IP address field, enter the IP address and network mask for the external network.
Note
Enter an IPv4 or IPv6 address depending upon what you have entered in earlier steps.
When creating the external subnet, you must configure either both the BGP loopbacks in the prefix EPG or neither
of them. If you configure only one BGP loopback, then BGP neighborship is not established.
b) Put a check in the Receive Additional Paths check box to enable this eBGP L3Out peer to receive additional paths
per prefix from other eBGP peers.
Without the Receive Additional Paths feature, eBGP allows a leaf switch to receive only one next hop from peers
for a prefix.
Alternatively, you can configure all eBGP peers within the tenant's VRF instance to receive additional paths per
prefix from other eBGP peers. For more information, see Configuring BGP Max Path Using the GUI, on page 303.
c) In the Password and Confirm Password field, enter the administrative password.
d) In the Allow Self AS Number Count field, choose the allowed number of occurrences of a local Autonomous
System Number (ASN).
The range is from 1 to 10. The default is 3.
e) In the Peer Controls field, enter the neighbor check parameters.
The options are:
• Bidirectional Forwarding Detection: Enables BFD on the peer.
• Disable Connected Check: Disables the check for peer connection.
f) In the Address Type Controls field, configure the BGP IPv4/IPv6 address-family feature, if desired.
• AF Mcast: Check to enable the multicast address-family feature.
• AF Ucast: Check to enable the unicast address-family feature.
• Remove all private AS: In outgoing eBGP route updates to this neighbor, this option removes all private AS
numbers from the AS_PATH. Use this option if you have private and public AS numbers in the eBGP route.
The public AS number is retained.
If the neighbor remote AS is in the AS_PATH, this option is not applied.
To enable this option, Remove private AS must be enabled.
• Remove private AS: In outgoing eBGP route updates to this neighbor, this option removes all private AS
numbers from the AS_PATH when the AS_PATH has only private AS numbers. Use this option, if you have
only private AS numbers in the eBGP route.
If the neighbor remote AS is in the AS_PATH, this option is not applied.
• Replace private AS with local AS: In outgoing eBGP route updates to this neighbor, this option replaces all
private AS numbers in the AS_PATH with ACI local AS, regardless of whether a public AS or the neighbor
remote AS is included in the AS_PATH.
To enable this option, Remove all private AS must be enabled.
k) In the BGP Peer Prefix Policy field, select an existing peer prefix policy or create a new one.
The peer prefix policy defines how many prefixes can be received from a neighbor and the action to take when the
number of allowed prefixes is exceeded. This feature is commonly used for external BGP peers, but can also be
applied to internal BGP peers.
l) In the Site of Origin field, enter an extended community value to identify this peer.
The site-of-origin (SoO) extended community is a BGP extended community attribute that is used to identify routes
that have originated from a site so that the readvertisement of that prefix back to the source site can be prevented.
The SoO extended community uniquely identifies the site from which a router has learned a route. BGP can use
the SoO value associated with a route to prevent routing loops.
Valid formats are:
• extended:as2-nn2:<2-byte number>:<2-byte number>
For example: extended:as2-nn2:1000:65534
• extended:as2-nn4:<2-byte number>:<4-byte number>
For example: extended:as2-nn4:1000:6554387
• extended:as4-nn2:<4-byte number>:<2-byte number>
For example: extended:as4-nn2:1000:65504
• extended:ipv4-nn2:<IPv4 address>:<2-byte number>
For example: extended:ipv4-nn2:1.2.3.4:65515
Note
When configuring the SoO for the User Tenant L3Outs, make sure not to configure the same SoO value as that of
the global Fabric, Pod, or Multi-Site SoO configured within the ACI fabric. You can view the Fabric, Pod, and
Multi-Site SoO values configured within the fabric by executing the following command on the switch:
show bgp process vrf overlay-1 | grep SOO
m) In the Remote Autonomous System Number field, choose a number that uniquely identifies the neighbor
autonomous system.
The Autonomous System Number can be in 4-byte asplain format from 1 to 4294967295.
Note
ACI does not support asdot or asdot+ format AS numbers.
n) In the Local-AS Number Config field, choose the local Autonomous System Number (ASN) configuration.
Using a local AS number rather than the Global AS permits the routing devices in the associated network to appear
to belong to the former AS. The configuration can be:
• no-Prepend+replace-as+dual-as: Does not allow prepending on local AS and is replaced with both AS
numbers.
You can prepend one or more autonomous system (AS) numbers at the beginning of an AS path. The AS
numbers are added at the beginning of the path after the actual AS number from which the route originates
has been added to the path. Prepending an AS path makes a shorter AS path look longer and therefore less
preferable to BGP.
• no-prepend: Does not allow prepending on local AS.
• no options: Does not allow alteration of local AS.
• no-Prepend+replace-as: Does not allow prepending on the local AS and replaces the AS number. (A hedged
configuration sketch for the Local-AS options appears after these substeps.)
q) In the Route Control Profile field, configure route control policies per BGP peer.
Click + to configure the following:
• Name: The route control profile name.
• Direction: Choose one of the following options:
• Route Import Policy
• Route Export Policy
r) Click Submit.
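The following is a hedged REST API sketch of a BGP peer connectivity profile with a Local-AS configuration, configured as a child of the logical node or interface profile as described earlier. The peer address and AS numbers are illustrative, and the bgpLocalAsnP object with its asnPropagate values is an assumption about how the Local-AS Number Config options map to the API.
<bgpPeerP addr="172.16.1.2" ttl="1">
  <!-- remote (neighbor) autonomous system number -->
  <bgpAsP asn="65100"/>
  <!-- local AS advertised to this peer; "no-prepend" is assumed to map to the no-prepend GUI option -->
  <bgpLocalAsnP localAsn="65002" asnPropagate="no-prepend"/>
</bgpPeerP>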
Step 10 Navigate to Tenants > tenant_name > Networking > L3Outs > L3Out_name .
Step 11 Click the Policy/Main tab and perform the following actions:
a) (Optional) In the Route Control Enforcement field, check the Import check box.
Note
Check this check box if you wish to enforce import control with BGP.
b) Expand the Route Control for Dampening field, and choose the desired address family type and route dampening
policy. Click Update.
In this step, you can use a route dampening policy that was created earlier, or you can use the Create Route Profile option in the
drop-down list where the policy name is selected.
Step 12 Navigate to Tenants > tenant_name > Networking > L3Outs > L3Out_name .
Step 13 Right-click Route map for import and export route control and choose Create Route map for import and export
route control.
Step 14 Enter the necessary information in this window, then click + in the Context area to bring up the Create Route Control
Context window.
a) In the Name field, enter a name for the route control VRF instance.
b) From the Set Attribute drop-down list, choose Create Action Rule Profile.
When creating an action rule, set the route dampening attributes as desired.
Procedure
h) Put a check in the BGP Add-Path Capability: Receive checkbox if you want all eBGP peers within the tenant's
VRF instance to receive additional paths per prefix from other eBGP peers.
Without the BGP Add-Path Capability: Receive feature, eBGP allows a leaf switch to receive only one next hop
from peers per prefix.
i) Click Submit after you have updated your entries.
Step 6 Click Tenants > tenant_name > Networking > VRFs > vrf_name
Step 7 Review the configuration details of the subject VRF.
Step 8 Locate the BGP Context Per Address Family field and, in the BGP Address Family Type area, select either IPv4
unicast address family or IPv6 unicast address family.
Step 9 Access the BGP Address Family Context you created in the BGP Address Family Context drop-down list and associate
it with the subject VRF.
Step 10 Click Submit.
The following table describes the selection criteria for implementation of AS Path Prepend:
Prepend: Prepends the specified AS number to the AS path of the route matched by the route map.
Note
• You can configure more than one AS number.
• 4-byte AS numbers are supported.
• You can prepend a total of 32 AS numbers. You must specify the order in which the AS number is inserted
into the AS Path attribute.
Prepend-last-as: Prepends the last AS number to the AS path a specified number of times; the count range is from 1 to 10.
SUMMARY STEPS
1. Log in to the APIC GUI, and on the menu bar, click Tenants > tenant_name > Policies > Protocol >
Set Rules and right click Create Set Rules for a Route Map.
2. In the Create Set Rules For A Route Map dialog box, perform the following tasks:
3. Select the criterion Prepend AS, then click + to prepend AS numbers.
4. Enter the AS number and its order and then click Update. Repeat by clicking + again if multiple AS
numbers must be prepended.
5. When you have completed the prepend AS number configurations, select the criterion Prepend Last-AS
to prepend the last AS number a specified number of times.
6. Enter Count (1-10).
7. Click OK.
8. In the Create Set Rules For A Route Map window, confirm the listed criteria for the set rule based
on AS Path and click Finish.
9. On the APIC GUI menu bar, click Tenants > tenant_name > Policies > Protocol > Set Rules and
right click your profile.
10. Confirm the Set AS Path values at the bottom of the screen.
DETAILED STEPS
Procedure
Step 1 Log in to the APIC GUI, and on the menu bar, click Tenants > tenant_name > Policies > Protocol > Set Rules and
right click Create Set Rules for a Route Map.
The Create Set Rules For A Route Map window appears.
Step 2 In the Create Set Rules For A Route Map dialog box, perform the following tasks:
a) In the Name field, enter a name.
b) Check the Set AS Path checkbox, then click Next.
c) In the AS Path window, click + to open the Create Set AS Path dialog box.
Step 3 Select the criterion Prepend AS, then click + to prepend AS numbers.
Step 4 Enter the AS number and its order and then click Update. Repeat by clicking + again if multiple AS numbers must be
prepended.
Step 5 When you have completed the prepend AS number configurations, select the criterion Prepend Last-AS to prepend
the last AS number a specified number of times.
Step 6 Enter Count (1-10).
Step 7 Click OK.
Step 8 In the Create Set Rules For A Route Map window, confirm the listed criteria for the set rule based on AS Path and
click Finish.
Step 9 On the APIC GUI menu bar, click Tenants > tenant_name > Policies > Protocol > Set Rules and right click your
profile.
Step 10 Confirm the Set AS Path values at the bottom of the screen.
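For reference, a hedged REST API sketch of the resulting set rule is shown below. The rtctrlAttrP, rtctrlSetASPath, and rtctrlSetASPathASN objects are the assumed API counterparts of the GUI fields above; the profile name and AS numbers are illustrative.
<fvTenant name="t1">
  <rtctrlAttrP name="set-as-path">
    <!-- Prepend AS: prepend the listed AS numbers in the specified order -->
    <rtctrlSetASPath criteria="prepend">
      <rtctrlSetASPathASN asn="65001" order="0"/>
      <rtctrlSetASPathASN asn="65002" order="1"/>
    </rtctrlSetASPath>
    <!-- Prepend Last-AS: prepend the last AS number three times (count 1-10) -->
    <rtctrlSetASPath criteria="prepend-last-as" lastnum="3"/>
  </rtctrlAttrP>
</fvTenant>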
Router 1 and Router 2 are the two customers with multiple sites (Site-A and Site-B). Customer Router 1
operates under AS 100 and customer Router 2 operates under AS 200.
The above diagram illustrates the Autonomous System (AS) override process as follows:
1. Router 1-Site-A advertises route 10.3.3.3 with AS100.
2. Router PE-1 propagates this as an internal route to PE2 as AS100.
3. Router PE-2 prepends 10.3.3.3 with AS121 (replaces 100 in the AS path with 121), and propagates the
prefix.
4. Router 2-Site-B accepts the 10.3.3.3 update.
Configuring BGP External Routed Network with Autonomous System Override Enabled Using the
GUI
Procedure
Step 1 On the menu bar, choose Tenants > Tenant_name > Networking > L3Outs > Non-GOLF Layer 3 Out_name > Logical
Node Profiles.
Step 2 In the Navigation pane, choose the appropriate BGP Peer Connectivity Profile.
Step 3 In the Work pane, under Properties for the BGP Peer Connectivity Profile, in the BGP Controls field, perform the
following actions:
a) Check the check box for the AS override field to enable the Autonomous System override function.
b) Check the check box for the Disable Peer AS Check field.
Note
You must check the check boxes for AS override and Disable Peer AS Check for the AS override feature to take
effect.
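A hedged REST API sketch of the equivalent BGP peer controls is shown below; the peer address is illustrative, and the ctrl flag names are assumed to map to the AS override and Disable Peer AS Check check boxes.
<bgpPeerP addr="172.16.1.2" ctrl="as-override,dis-peer-as-check">
  <!-- remote AS of the external peer (illustrative) -->
  <bgpAsP asn="65100"/>
</bgpPeerP>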
Procedure
Step 1 Create the L3Out and configure the BGP for the L3Out:
a) On the Navigation pane, expand Tenant and Networking.
b) Right-click L3Outs and choose Create L3Out.
c) Enter the necessary information to configure BGP for the L3Out.
You will select BGP in the Identity page in the L3Out creation wizard to configure the BGP protocol for this L3Out.
d) Continue through the remaining pages (Nodes and Interfaces, Protocols, and External EPG) to complete the
configuration for the L3Out.
Step 2 After you have completed the L3Out configuration, configure the BGP neighbor shutdown:
a) Navigate to the BGP Peer Connectivity Profile screen:
Tenants > tenant > Networking > L3Outs > L3out-name > Logical Node Profiles > logical-node-profile-name >
Logical Interface Profiles > logical-interface-profile-name > BGP Peer Connectivity Profile IP-address
b) Scroll down to the Admin State field and make the appropriate selection in this field.
• Disabled: Disables the BGP neighbor's admin state.
• Enabled: Enables the BGP neighbor's admin state.
Procedure
Step 1 Create the L3Out and configure the BGP for the L3Out:
a) On the Navigation pane, expand Tenant and Networking.
b) Right-click L3Outs and choose Create L3Out.
c) Enter the necessary information to configure BGP for the L3Out.
You will select BGP in the Identity page in the L3Out creation wizard to configure the BGP protocol for this L3Out.
d) Continue through the remaining pages (Nodes and Interfaces, Protocols, and External EPG) to complete the
configuration for the L3Out.
Step 2 After you have completed the L3Out configuration, configure the BGP neighbor soft reset:
a) Navigate to the BGP Peer Entry screen:
Tenants > tenant > Networking > L3Outs > L3out-name > Logical Node Profiles > logical-node-profile-name >
Configured Nodes > node > BGP for VRF-vrf-name > Neighbors
b) Right-click on the appropriate neighbor entry and select Clear BGP Peer.
The Clear BGP page appears.
c) In the Mode field, select Soft.
The Direction fields appear.
d) Select the appropriate value in the Direction field:
• Incoming: Enables the soft dynamic inbound reset.
On a node, the BGP timer policy is chosen based on the following algorithm:
• If bgpProtP is specified, then use bgpCtxPol referred to under bgpProtP.
• Else, if specified, use bgpCtxPol referred to under corresponding fvCtx.
• Else, if specified, use the default policy under the tenant, for example,
uni/tn-<tenant>/bgpCtxP-default.
• Else, use the default policy under tenant common, for example, uni/tn-common/bgpCtxP-default. This
one is pre-programmed.
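For reference, the following is a hedged REST API sketch of a BGP timer policy and the two association points described in the selection algorithm above (per-VRF and per-node). The relation class names and attribute names are assumptions, and the policy name, VRF, and timer values are illustrative.
<fvTenant name="t1">
  <!-- keepalive/hold timers in seconds (illustrative) -->
  <bgpCtxPol name="timers-3-9" kaIntvl="3" holdIntvl="9"/>
  <fvCtx name="ctx1">
    <!-- per-VRF association (used unless a node-level bgpProtP is specified) -->
    <fvRsBgpCtxPol tnBgpCtxPolName="timers-3-9"/>
  </fvCtx>
  <l3extOut name="out1">
    <l3extLNodeP name="node1">
      <!-- per-node association, which takes precedence over the VRF-level policy -->
      <bgpProtP name="protp1">
        <bgpRsBgpNodeCtxPol tnBgpCtxPolName="timers-3-9"/>
      </bgpProtP>
    </l3extLNodeP>
  </l3extOut>
</fvTenant>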
Configuring a Per VRF Per Node BGP Timer Using the Advanced GUI
When a BGP timer is configured on a specific node, then the BGP timer policy on the node is used and the
BGP policy timer associated with the VRF is ignored.
Procedure
Step 1 On the menu bar, choose Tenant > Tenant_name > Policies > Protocol > BGP > BGP Timers, then right click Create
BGP Timers Policy.
Step 2 In the Create BGP Timers Policy dialog box, perform the following actions:
a) In the Name field, enter the BGP Timers policy name.
b) In the available fields, choose the appropriate values as desired. Click Submit.
A BGP timer policy is created.
Step 3 Navigate to Tenant > Tenant_name > Networking > L3Outs, and right-click Create L3Out.
The Create L3Out wizard appears. Create an L3Out with BGP enabled by performing the following actions.
Step 4 Enter the necessary information in the Identity window of the Create L3Out wizard.
a) In the Name field, enter a name for the L3Out.
b) From the VRF drop-down list, choose the VRF.
c) From the L3 Domain drop-down list, choose an external routed domain.
d) In the area with the routing protocol check boxes, check the BGP box.
e) Click Next to move to the Nodes and Interfaces window.
f) Continue through the remaining windows in the Create L3Out wizard to complete the L3Out creation process.
Step 5 After you have created the L3Out, navigate to the logical node profile in the L3Out that you just created: Tenant >
Tenant_name > Networking > L3Outs > L3Out_name > Logical Node Profiles > LogicalNodeProfile-name .
Step 6 In the Logical Node Profile window, check the box next to Create BGP Protocol Profile.
The Create Node Specific BGP Protocol Profile window appears.
Step 7 In the BGP Timers field, from the drop-down list, choose the BGP timer policy that you want to associate with this
specific node. Click Submit.
A specific BGP timer policy is now applied to the node.
Note
To associate an existing node profile with a BGP timer policy, right-click the node profile, and associate the timer policy.
If a timer policy is not chosen specifically in the BGP Timers field for the node, then the BGP timer policy that is
associated with the VRF under which the node profile resides automatically gets applied to this node.
Step 8 To verify the configuration, in the Navigation pane, perform the following steps:
a) Expand Tenant > Tenant_name > Networking > L3Outs > L3Out_name > Logical Node Profiles >
LogicalNodeProfile-name > BGP Protocol Profile.
b) In the Work pane, the BGP protocol profile that is associated with the node profile is displayed.
The following example shows a conflicting configuration in which the same node (node1) in VRF ctx1 is used by two
L3Outs (out1 and out2) that reference two different node-level BGP protocol profiles (pol1 and pol2); such a
configuration raises a fault:
tn1
  ctx1
  out1
    ctx1
    node1
      protp pol1
  out2
    ctx1
    node1
      protp pol2
If such a fault is raised, change the configuration to remove the conflict between the BGP timer policies.
Note Modifying the secondary address that is being used for sourcing the session is
allowed by adding a new address in the same subnet and later removing the
previous one.
• BFD is supported on modular spine switches that have -EX and -FX line cards (or newer versions), and
BFD is also supported on the Nexus 9364C non-modular spine switch (or newer versions).
• BFD between vPC peers is not supported.
• Beginning with Cisco APIC release 5.0(1), BFD multihop is supported on leaf switches. The maximum
number of BFD sessions is unchanged, as BFD multihop sessions are now included in the total.
• Beginning with Cisco APIC release 5.0(1), Cisco ACI supports C-bit-aware BFD. The C-bit on incoming
BFD packets determines whether BFD is dependent or independent of the control plane.
• BFD over iBGP is not supported for loopback address peers.
• BFD sub interface optimization can be enabled in an interface policy. One sub-interface having this flag
will enable optimization for all the sub-interfaces on that physical interface.
• BFD for BGP prefix peers is not supported.
Note Cisco ACI does not support IP fragmentation. Therefore, when you configure Layer 3 Outside (L3Out)
connections to external routers, or Multi-Pod connections through an Inter-Pod Network (IPN), it is
recommended that the interface MTU is set appropriately on both ends of a link. On some platforms, such as
Cisco ACI, Cisco NX-OS, and Cisco IOS, the configurable MTU value does not take into account the Ethernet
headers (matching IP MTU, and excluding the 14-18 Ethernet header size), while other platforms, such as
IOS-XR, include the Ethernet header in the configured MTU value. A configured value of 9000 results in a
max IP packet size of 9000 bytes in Cisco ACI, Cisco NX-OS, and Cisco IOS, but results in a max IP packet
size of 8986 bytes for an IOS-XR untagged interface.
For the appropriate MTU values for each platform, see the relevant configuration guides.
We highly recommend that you test the MTU using CLI-based commands. For example, on the Cisco NX-OS
CLI, use a command such as ping 1.1.1.1 df-bit packet-size 9000 source-interface ethernet 1/1.
Note If one of the subinterfaces flaps, the subinterfaces on that physical interface are impacted and go down for a
second.
Procedure
Procedure
For each of these BFD configurations, you can choose to use the default policy or create a new one for a specific switch
(or set of switches).
Note
By default, the APIC controller creates default policies when the system comes up. These default policies are global,
bi-directional forwarding detection (BFD) configuration policies. You can set attributes within that default global policy
in the Work pane, or you can modify these default policy values. However, once you modify a default global policy,
note that your changes affect the entire system (all switches). If you want to use a specific configuration for a particular
switch (or set of switches) that is not the default, create a switch profile as described in the next step.
Step 3 To create a switch profile for a specific global BFD policy (which is not the default), in the Navigation pane, expand
Switches > Leaf Switches > Profiles.
The Leaf Switches - Profiles screen appears in the Work pane.
Step 4 On the right side of the Work pane, under the actions icon, select Create Leaf Profile.
The Create Leaf Profile dialog box appears.
Step 5 In the Create Leaf Profile dialog box, perform the following actions:
a) In the Name field, enter a name for the leaf switch profile.
b) (Optional) In the Description field, enter a description of the profile.
c) (Optional) In the Leaf Selectors toolbar, click +
d) Enter the appropriate values for Name (name the switch), Blocks (select the switch), and Policy Group (select Create
Access Switch Policy Group).
The Create Access Switch Policy Group dialog box appears where you can specify the Policy Group identity
properties.
Step 6 (If configuring a leaf selector) In the Create Access Switch Policy Group dialog box, perform the following actions:
a) In the Name field, enter a name for the policy group.
b) (Optional) In the Description field, enter a description of the policy group.
c) Choose a BFD policy type (BFD IPV4 Policy or BFD IPV6 Policy), then select a value (default or Create BFD
Global Ipv4 Policy for a specific switch or set of switches).
d) Click Update.
Step 7 Click Next to advance to Associations.
(Optional) In the Associations menu, you can associate the leaf profile with leaf interface profiles and access module
profiles.
Step 9 To view the BFD global configuration you created, in the Navigation pane, expand Policies > Switch > BFD.
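For reference, the GUI steps above roughly correspond to a configuration post similar to the following hedged sketch. The class and relation names (bfdIpv4InstPol, infraAccNodePGrp, infraRsBfdIpv4InstPol, infraNodeP, infraLeafS, infraNodeBlk, infraRsAccNodePGrp) are assumptions based on the standard access policy model, and all names and node IDs are placeholders.
<infraInfra>
  <!-- Non-default global BFD IPv4 policy with example timer values -->
  <bfdIpv4InstPol name="bfd-v4-leaf101" detectMult="3" minTxIntvl="50" minRxIntvl="50"/>
  <infraFuncP>
    <!-- Access switch policy group that points to the BFD policy (relation class name assumed) -->
    <infraAccNodePGrp name="leaf101-pgrp">
      <infraRsBfdIpv4InstPol tnBfdIpv4InstPolName="bfd-v4-leaf101"/>
    </infraAccNodePGrp>
  </infraFuncP>
  <!-- Leaf profile selecting node 101 and attaching the policy group -->
  <infraNodeP name="leaf101-profile">
    <infraLeafS name="leaf101-sel" type="range">
      <infraNodeBlk name="blk1" from_="101" to_="101"/>
      <infraRsAccNodePGrp tDn="uni/infra/funcprof/accnodepgrp-leaf101-pgrp"/>
    </infraLeafS>
  </infraNodeP>
</infraInfra>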
Procedure
For each of these BFD configurations, you can choose to use the default policy or create a new one for a specific switch
(or set of switches).
Note
By default, the APIC controller creates default policies when the system comes up. These default policies are global,
bi-directional forwarding detection (BFD) configuration policies. You can set attributes within that default global policy
in the Work pane, or you can modify these default policy values. However, once you modify a default global policy,
note that your changes affect the entire system (all switches). If you want to use a specific configuration for a particular
switch (or set of switches) that is not the default, create a switch profile as described in the next step.
Step 3 To create a spine switch profile for a specific global BFD policy (which is not the default), in the Navigation pane, expand
Switches > Spine Switches > Profiles.
The Spine Switches - Profiles screen appears in the Work pane.
Step 4 On the right side of the Work pane, under the actions icon, select Create Spine Profile.
The Create Spine Profile dialog box appears.
Step 5 In the Create Spine Profile dialog box, perform the following actions:
a) In the Name field, enter a name for the switch profile.
b) (Optional) In the Description field, enter a description of the profile.
c) (Optional) In the Spine Selectors toolbar, click +
d) Enter the appropriate values for Name (name the switch), Blocks (select the switch), and Policy Group (select Create
Spine Switch Policy Group).
The Create Spine Switch Policy Group dialog box appears where you can specify the Policy Group identity
properties.
Step 6 (If configuring a spine selector) In the Create Spine Switch Policy Group dialog box, perform the following actions:
a) In the Name field, enter a name for the policy group.
b) (Optional) In the Description field, enter a description of the policy group.
c) Choose a BFD policy type (BFD IPV4 Policy or BFD IPV6 Policy), then select a value (default or Create BFD
Global Ipv4 Policy for a specific switch or set of switches).
d) Click Update.
Step 7 Click Next to advance to Associations.
(Optional) In the Associations menu, you can associate the spine profile with spine interface profiles.
Step 9 To view the BFD global configuration you created, in the Navigation pane, expand Policies > Switch > BFD.
Note When a BFD interface policy is configured over a parent routed interface, by default all of its routed
sub-interfaces with the same address family as that of the parent interface will inherit this policy. If any of
the inherited configuration needs to be overridden, configure an explicit BFD interface policy on the
sub-interfaces. However, if Admin State or Echo Admin State is disabled on the parent interface, the property
cannot be overridden on the sub-interfaces.
Procedure
Step 6 Enter the necessary information in the Protocols window of the Create L3Out wizard.
a) In the BGP Loopback Policies and BGP Interface Policies areas, enter the following information:
• Peer Address: Enter the peer IP address
• EBGP Multihop TTL: Enter the connection time to live (TTL). The range is from 1 to 255 hops; if zero, no
TTL is specified. The default is zero.
• Remote ASN: Enter a number that uniquely identifies the neighbor autonomous system. The autonomous
system number can be a 4-byte value in plain format, from 1 to 4294967295.
Note
ACI does not support asdot or asdot+ format AS numbers.
b) In the OSPF area, choose the default OSPF policy, a previously created OSPF policy, or Create OSPF Interface
Policy.
c) Click Next.
The External EPG window appears.
Step 7 Enter the necessary information in the External EPG window of the Create L3Out wizard.
a) In the Name field, enter a name for the external network.
b) In the Provided Contract field, enter the name of a provided contract.
c) In the Consumed Contract field, enter the name of a consumed contract.
d) In the Default EPG for all external networks field, uncheck if you don’t want to advertise all the transit routes
out of this L3Out connection.
The Subnets area appears if you uncheck this box. Specify the desired subnets and controls as described in the
following steps.
e) Click Finish to complete the necessary configurations in the Create L3Out wizard.
Step 8 Navigate to Tenants > tenant_name > Networking > L3Outs > L3Out_name > Logical Node Profiles >
logical_node_profile_name > Logical Interface Profiles > logical_interface_profile_name
Step 9 In the Logical Interface Profile window, scroll down to the Create BFD Interface Profile field, then check the box
next to this field.
Step 10 In the Create BFD Interface Profile window, enter BFD details.
• In the Authentication Type field, choose No authentication or Keyed SHA1.
If you choose to authenticate (by selecting Keyed SHA1), enter the Authentication Key ID, enter the
Authentication Key (password), then confirm the password by re-entering it next to Confirm Key.
• In the BFD Interface Policy field, select either the common/default configuration (the default BFD policy), or
create your own BFD policy by selecting Create BFD Interface Policy.
If you select Create BFD Interface Policy, the Create BFD Interface Policy dialog box appears where you can
define the BFD interface policy values.
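For reference, the BFD interface profile created in this step corresponds to a bfdIfP object under the logical interface profile. The following is a minimal, hedged sketch of a configuration post, assuming keyed SHA1 authentication and an existing tenant-level BFD interface policy; all names and the key value are placeholders.
<l3extOut name="l3out1">
  <l3extLNodeP name="nodep1">
    <l3extLIfP name="ifp1">
      <!-- BFD interface profile with keyed SHA1 authentication -->
      <bfdIfP type="sha1" keyId="1" key="myBfdKey">
        <!-- Points to a tenant-level BFD interface policy -->
        <bfdRsIfPol tnBfdIfPolName="my-bfd-pol"/>
      </bfdIfP>
    </l3extLIfP>
  </l3extLNodeP>
</l3extOut>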
Note These four consumer protocols are located in the left navigation pane under Tenant > Policies > Protocol.
Procedure
Step 5 Enter a name in the Name field and provide values in the remaining fields to define the BGP peer prefix policy.
Step 6 Click Submit.
The BGP peer prefix policy you created now appears under BGP Peer Prefix in the left navigation pane.
Step 7 Navigate to Tenants > tenant_name > Networking > L3Outs > L3Out_name > Logical Node Profiles >
logical_node_profile_name > Logical Interface Profiles > logical_interface_profile_name > BGP Peer Connectivity
Profile.
Step 8 In the BGP Peer Connectivity Profile window, scroll down to the BGP Peer Prefix Policy field and select the BGP
peer prefix policy that you just created.
Step 9 In the Peer Controls field, select Bidirectional Forwarding Detection to enable BFD on the BGP consumer protocol
(or uncheck the box to disable BFD).
Step 10 To configure BFD in the OSPF protocol, in the Navigation pane, go to Policies > Protocol > OSPF > OSPF Interface.
Step 11 On the right side of the Work pane, under ACTIONS, select Create OSPF Interface Policy.
The Create OSPF Interface Policy dialog box appears.
Note
You can also right-click on OSPF Interface from the left navigation pane and select Create OSPF Interface Policy
to create the policy.
Step 12 Enter a name in the Name field and provide values in the remaining fields to define the OSPF interface policy.
Step 13 In the Interface Controls section of this dialog box, you can enable or disable BFD. To enable it, check the box next
to BFD, which adds a flag to the OSPF consumer protocol, shown as follows (or uncheck the box to disable BFD).
Step 14 Click Submit.
Step 15 To configure BFD in the EIGRP protocol, in the Navigation pane, go back to tenant_name > Policies > Protocol >
EIGRP > EIGRP Interface.
Step 16 On the right side of the Work pane, under ACTIONS, select Create EIGRP Interface Policy.
Step 17 Enter a name in the Name field and provide values in the remaining fields to define the EIGRP interface policy.
Step 18 In the Control State section of this dialog box, you can enable or disable BFD. To enable it, check the box next to
BFD, which adds a flag to the EIGRP consumer protocol (or uncheck the box to disable BFD).
Step 19 Click Submit.
Step 20 To configure BFD in the Static Routes protocol, in the Navigation pane, go back to Networking > L3Outs >
L3Out_name > Configured Nodes, then click on the configured node to bring up the Node Association window.
Step 21 In the Static Routes section, click the "+" (expand) button.
The Create Static Route dialog box appears. Enter values for the required fields in this section.
Step 22 Next to Route Control, check the box next to BFD to enable (or uncheck the box to disable) BFD on the specified
Static Route.
Step 23 Click Submit.
Step 24 To configure BFD in the IS-IS protocol, in the Navigation pane go to Fabric > Fabric Policies > Policies > Interface >
L3 Interface.
Step 25 On the right side of the Work pane, under ACTIONS, select Create L3 Interface Policy.
The Create L3 Interface Policy dialog box appears.
Note
You can also right-click on L3 Interface from the left navigation pane and select Create L3 Interface Policy to create
the policy.
Step 26 Enter a name in the Name field and provide values in the remaining fields to define the L3 interface policy.
Step 27 To enable BFD ISIS Policy, in the BFD ISIS Policy Configuration field, click enabled.
Step 28 Click Submit.
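For reference, the per-protocol BFD flags that are enabled in the preceding steps map to attributes in the APIC object model. The following hedged sketch shows typical attributes for OSPF, EIGRP, static routes, and a BGP peer; all names, addresses, and paths are placeholders, and the attribute values should be verified against your APIC release.
<fvTenant name="t1">
  <!-- OSPF interface policy with the BFD control flag; reference it from the L3Out OSPF interface profile -->
  <ospfIfPol name="ospf-bfd" ctrl="bfd"/>
  <!-- EIGRP interface policy with the BFD control flag; reference it from the L3Out EIGRP interface profile -->
  <eigrpIfPol name="eigrp-bfd" ctrl="bfd"/>
  <l3extOut name="l3out1">
    <l3extLNodeP name="nodep1">
      <!-- Static route with BFD enabled through rtCtrl="bfd" -->
      <l3extRsNodeL3OutAtt tDn="topology/pod-1/node-101" rtrId="1.1.1.1">
        <ipRouteP ip="10.10.10.0/24" rtCtrl="bfd">
          <ipNexthopP nhAddr="192.168.1.1"/>
        </ipRouteP>
      </l3extRsNodeL3OutAtt>
      <l3extLIfP name="ifp1">
        <!-- BGP peer with BFD enabled in the peer controls -->
        <bgpPeerP addr="192.168.1.2" peerCtrl="bfd"/>
      </l3extLIfP>
    </l3extLNodeP>
  </l3extOut>
</fvTenant>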
BFD Multihop
BFD multihop provides subsecond forwarding failure detection for a destination with more than one hop and
up to 255 hops. Beginning with Release 5.0(1), APIC supports BFD multihop for IPv4 and BFD multihop
for IPv6 in compliance with RFC5883. BFD multihop sessions are set up between a unique source and
destination address pair. A BFD multihop session is created between a source and destination rather than with
an interface, as with single-hop BFD sessions.
BFD multihop sets the TTL field to the maximum limit supported by BGP, and does not check the value on
reception. The ACI leaf has no impact on the number of hops a BFD multihop packet can traverse, but the
number of hops is limited to 255.
• Node Policies: A BFD Multihop node policy applies to interfaces under a node profile.
You can create or modify BFD multihop node policies in this GUI location:
• Tenants > tenant > Policies > Protocol > BFD Multihop > Node Policies: right-click and select
Create BFD Multihop Node Policy.
• Interface Policies: A BFD Multihop interface policy applies to interfaces under an interface profile.
You can create or modify BFD multihop interface policies in this GUI location:
• Tenants > tenant > Policies > Protocol > BFD Multihop > Interface Policies: right-click and
select Create BFD Multihop Interface Policy.
• Overriding Global Policies: If you don't want to use the default global configuration, but you want to
have an explicit configuration on a given interface, you can create your own global configuration. This
configuration is then applied to all the interfaces on a specific switch or set of switches. You can use this
interface override configuration when you want more granularity on a specific switch on a specific
interface.
You can create or modify BFD multihop override policies for a node profile or interface profile in these
GUI locations:
• Tenants > tenant > Networking > L3Outs > l3out > Logical Node Profiles > logical_node_profile:
right-click, select Create BFD Interface Protocol Profile, specify BFD Multihop node policy.
• Tenants > tenant > Networking > L3Outs > l3out > Logical Node Profiles > logical_node_profile
> Logical Interface Profiles > logical_interface_profile: right-click, select Create MH-BFD
Interface Protocol Profile, specify BFD Multihop interface policy.
• Tenants > infra > Networking > SR-MPLS Infra L3Outs > l3out > Logical Node Profiles >
logical_node_profile > Logical Interface Profiles > logical_interface_profile: right-click, select
Create MH-BFD Interface Profile, specify BFD Multihop interface policy.
Procedure
Step 1 Navigate to the GUI location where you will create or configure the BFD multihop policy.
Step 2 Edit an existing profile or policy or launch the dialog box to create a new profile.
Step 3 In the profile, choose an Authentication Type for BFD multihop sessions.
You can choose to require no authentication or SHA-1 authentication.
Step 4 If you are creating a new policy, configure the settings in the dialog box:
a) Enter a Name for the policy.
b) Set the Admin State to Enabled.
c) Set the Detection Multiplier value.
Specifies the minimum number of consecutive packets that can be missed before BFD declares a session to be down.
The range is from 1 to 50 packets. The default is 3.
d) Set the Minimum Transmit Interval value.
The minimum interval time for packets being transmitted. The range is from 250 to 999 milliseconds. The default is
250.
e) Set the Maximum Receive Interval value.
The maximum interval time for packets being received. The range is from 250 to 999 milliseconds. The default is
250.
f) Click Submit.
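For reference, the following is a hedged sketch of tenant-level BFD multihop node and interface policies as a configuration post. The class names (bfdMhNodePol, bfdMhIfPol) and attribute names are assumptions based on the standard object model; all names and values are placeholders.
<fvTenant name="t1">
  <!-- BFD multihop node policy: detection multiplier and timers (milliseconds) -->
  <bfdMhNodePol name="mh-node-pol" adminSt="enabled" detectMult="3"
                minTxIntvl="250" minRxIntvl="250"/>
  <!-- BFD multihop interface policy with the same example values -->
  <bfdMhIfPol name="mh-if-pol" adminSt="enabled" detectMult="3"
              minTxIntvl="250" minRxIntvl="250"/>
</fvTenant>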
Micro BFD
Beginning with Cisco APIC Release 5.2(3), APIC supports Micro BFD, as defined in IETF RFC 7130. When
Bidirectional Forwarding Detection (BFD) is configured on a port channel, keep-alive packets are sent on
any available member link. The failure of a single member link might not be detected, because the keep-alive
packets can merely traverse a remaining link. Micro BFD is an enhancement to BFD that establishes individual
BFD sessions on each member link of a port channel, as shown in the following figure.
Figure 41: Micro BFD Sessions on a Port Channel
When a per-link BFD session senses a failure on its member link, the failed link is removed from the forwarding
table. This mechanism delivers faster failure detection and assists in identifying which link has failed on the
port channel.
Procedure
Step 1 Navigate to Tenants > tenant_name > Networking > L3Outs > L3Out_name > Logical Node Profiles >
logical_node_profile_name > Logical Interface Profiles
Step 2 Select the Logical Interface Profile that you want to modify.
Step 3 Select the Routed Interfaces tab.
Micro BFD is supported only on the routed interface over a port channel.
Step 4 In the Routed Interfaces section, double click the existing interface to modify it, or click the + icon to add a new interface
to the Logical Interface Profile.
The remaining steps of this procedure describe only the enabling of Micro BFD on an existing logical interface. If you
are adding a new interface to the Logical Interface Profile, refer to Modifying Interfaces for L3Out Using the GUI, on
page 273.
Step 5 In the configured properties of the selected interface, verify that the selected Path Type is Direct Port Channel.
Micro BFD is applicable only on a port channel.
What to do next
You can verify the Micro BFD sessions using the CLI, as shown in the following example:
OurAddr NeighAddr
LD/RD RH/RS Holdown(mult) State Int Vrf Type
2003:190:190:1::1 2003:190:190:1::2
1090519041/0 Up 6000(3) Up Po3 tenant1:vrf1 singlehop
2003:190:190:1::1 2003:190:190:1::2
1090519042/2148074790 Up 180(3) Up Eth1/44 tenant1:vrf1 singlehop
2003:190:190:1::1 2003:190:190:1::2
1090519043/2148074787 Up 180(3) Up Eth1/41 tenant1:vrf1 singlehop
2003:190:190:1::1 2003:190:190:1::2
1090519044/2148074789 Up 180(3) Up Eth1/43 tenant1:vrf1 singlehop
2003:190:190:1::1 2003:190:190:1::2
1090519045/2148074788 Up 180(3) Up Eth1/42 tenant1:vrf1 singlehop
When an SVI is used for a Layer 3 Outside connection, an external bridge domain is created on the border
leaf switches. The external bridge domain allows connectivity between the two vPC switches across the Cisco
ACI fabric. This allows both the vPC switches to establish the OSPF adjacencies with each other and the
external OSPF device.
When running OSPF over a broadcast network, the time to detect a failed neighbor is the dead time interval
(default 40 seconds). Reestablishing the neighbor adjacencies after a failure may also take longer due to
designated router (DR) election.
Note • A link or port channel failure to one vPC node does not cause OSPF adjacency to go down. OSPF
adjacency can stay up using the external bridge domain that is accessible through the other vPC node.
• When an OSPF time policy or a OSPF or EIGRP address family policy is applied to an L3Out, you can
observe the following behaviors:
• If the L3Out and the policy are defined in the same tenant, then there is no change in behavior.
• If the L3Out is configured in a user tenant other than the common tenant, the L3Out VRF instance
is resolved to the common tenant, and the policy is defined in the common tenant, then only the
default values are applied. Any change in the policy will not take effect.
• If a border leaf switch forms OSPF adjacency with two external switches and one of the two switches
experiences a route loss while the adjacent switch does not, the Cisco ACI border leaf switch reconverges
the route for both neighbors.
• OSPF supports aggressive timers. However, these timers quickly bring down the adjacency and cause
CPU churn. Therefore, we recommend that you use the default timers and use bidirectional forwarding
detection (BFD) to get sub-second failure detection.
Procedure
Step 4 In the Identity window in the Create L3Out wizard, perform the following actions:
a) In the Name field, enter a name (RtdOut).
b) In the VRF field, from the drop-down list, choose the VRF (inb).
Note
This step associates the routed outside with the in-band VRF.
Step 5 In the Nodes and Interfaces window, perform the following actions:
a) Uncheck the Use Defaults box.
This allows you to edit the Node Profile Name field.
b) In the Node Profile Name field, enter a name for the node profile. (borderLeaf).
c) In the Node ID field, from the drop-down list, choose the first node. (leaf1).
d) In the Router ID field, enter a unique router ID.
e) In the Loopback Address field, use a different IP address or leave this field empty if you do not want to use the
router ID for the loopback address.
Note
The Loopback Address field is automatically populated with the same entry that you provide in the Router ID field.
This is the equivalent of the Use Router ID for Loopback Address option in previous builds. Use a different IP
address or leave this field empty if you do not want to use the router ID for the loopback address.
f) Enter the appropriate information in the Interface, IP Address, Interface Profile Name and MTU fields for this
node, if necessary.
g) In the Nodes field, click + icon to add a second set of fields for another node.
Note
You are adding a second node ID.
h) In the Node ID field, from the drop-down list, choose the second node. (leaf2).
i) In the Router ID field, enter a unique router ID.
j) In the Loopback Address field, use a different IP address or leave this field empty if you do not want to use the
router ID for the loopback address.
Note
The Loopback Address field is automatically populated with the same entry that you provide in the Router ID field.
This is the equivalent of the Use Router ID for Loopback Address option in previous builds. Use a different IP
address or leave this field empty if you do not want to use the router ID for the loopback address.
k) Enter the appropriate information in the Interface, IP Address, Interface Profile Name and MTU fields for this
node, if necessary.
l) Click Next.
The Protocols window appears.
Step 6 In the Protocols window, in the Policy area, click default, then click Next.
The External EPG window appears.
You can configure EIGRP to perform automatic summarization of subnet routes (route summarization) into
network-level routes. For example, you can configure subnet 131.108.1.0 to be advertised as 131.108.0.0 over
interfaces that have subnets of 192.31.7.0 configured. Automatic summarization is performed when there are
two or more network router configuration commands configured for the EIGRP process. By default, this
feature is enabled. For more information, see Route Summarization.
Supported Features
The following features are supported:
• IPv4 and IPv6 routing
• Virtual routing and forwarding (VRF) and interface controls for each address family
• Redistribution with OSPF across nodes
• Default route leak policy per VRF
• Passive interface and split horizon support
• Route map control for setting tag for exported routes
• Bandwidth and delay configuration options in an EIGRP interface policy
• Authentication support
Unsupported Features
The following features are not supported:
• Stub routing
• EIGRP used for BGP connectivity
• Multiple EIGRP L3extOuts on the same node
• Per-interface summarization (an EIGRP summary policy will apply to all interfaces configured under
an L3Out)
• Per interface distribute lists for import and export
• EIGRP Address Family Context Policy (eigrpCtxAfPol)—contains the configuration for a given
address family in a given VRF. An eigrpCtxAfPol is configured under tenant protocol policies and can
be applied to one or more VRFs under the tenant. An eigrpCtxAfPol can be enabled on a VRF through
a relation in the VRF-per-address family. If there is no relation to a given address family, or the specified
eigrpCtxAfPol in the relation does not exist, then the default VRF policy created under the common tenant
is used for that address family.
The following configurations are allowed in the eigrpCtxAfPol (a configuration sketch follows this list):
• Administrative distance for internal route
• Administrative distance for external route
• Maximum ECMP paths allowed
• Active timer interval
• Metric version (32-bit / 64-bit metrics)
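For reference, the following is a hedged sketch of an EIGRP address family context policy and its relation from a VRF. The attribute names (intDist, extDist, maxPaths, actIntvl, metricStyle) and the relation class fvRsCtxToEigrpCtxAfPol are assumptions based on the standard object model; all names and values are placeholders.
<fvTenant name="t1">
  <!-- EIGRP address family context policy: distances, ECMP paths, active timer, and metric width (narrow = 32-bit, wide = 64-bit) -->
  <eigrpCtxAfPol name="eigrp-af-v4" intDist="90" extDist="170" maxPaths="8"
                 actIntvl="3" metricStyle="narrow"/>
  <fvCtx name="vrf1">
    <!-- Apply the policy to the VRF for the IPv4 unicast address family -->
    <fvRsCtxToEigrpCtxAfPol tnEigrpCtxAfPolName="eigrp-af-v4" af="ipv4-ucast"/>
  </fvCtx>
</fvTenant>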
Procedure
Step 12 To create an EIGRP interface policy, in the Navigation pane, click Tenant_name > Policies > Protocol > EIGRP and
perform the following actions:
a) Right-click EIGRP Interface, and click Create EIGRP Interface Policy.
b) In the Create EIGRP Interface Policy dialog box, in the Name field, enter a name for the policy.
c) In the Control State field, check the desired checkboxes to enable one or multiple controls.
d) In the Hello Interval (sec) field, choose the desired interval.
e) In the Hold Interval (sec) field, choose the desired interval.
f) In the Bandwidth field, choose the desired bandwidth.
g) In the Delay field, choose the desired delay, in tens of microseconds or picoseconds, and then click Submit.
In the Work pane, the details for the EIGRP interface policy are displayed.
Step 13 In the Navigation pane, click the appropriate external routed network where EIGRP was enabled, expand Logical
Node Profiles and perform the following actions:
a) Expand an appropriate node and an interface under that node.
b) Right-click the interface and click Create EIGRP Interface Profile.
c) In the Create EIGRP Interface Profile dialog box, in the EIGRP Policy field, choose the desired EIGRP interface
policy. Click Submit.
Note
The EIGRP VRF policy and EIGRP interface policies define the properties that are used when EIGRP is enabled.
EIGRP VRF policy and EIGRP interface policies are also available as default policies if you do not want to create new
policies. So, if you do not explicitly choose either one of the policies, the default policy is automatically utilized when
EIGRP is enabled.
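For reference, the EIGRP interface profile created in this step corresponds to an eigrpIfP object under the logical interface profile, with a relation to the tenant-level EIGRP interface policy. The following is a minimal, hedged sketch; all names are placeholders.
<l3extOut name="l3out1">
  <l3extLNodeP name="nodep1">
    <l3extLIfP name="ifp1">
      <!-- EIGRP interface profile pointing to the EIGRP interface policy -->
      <eigrpIfP name="eigrp-ifp">
        <eigrpRsIfPol tnEigrpIfPolName="eigrp-if-pol"/>
      </eigrpIfP>
    </l3extLIfP>
  </l3extLNodeP>
</l3extOut>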
Introduction
Route summarization simplifies route tables by replacing many specific addresses with a single address. For
example, 10.1.1.0/24, 10.1.2.0/24, and 10.1.3.0/24 can be replaced with 10.1.0.0/16. Route summarization
policies enable routes to be shared efficiently among border leaf switches and their neighboring switches.
Two forms of route summarization are supported in ACI beginning in the Cisco APIC 5.2(4) release:
• Route Summarization at the L3Out External EPG Level:
This configuration of route summarization at the L3Out External EPG Level allows route summarization
towards external L3Out peers only.
• Route Filtering and Aggregation at the VRF Level:
Beginning in the Cisco APIC 5.2(4) release, Cisco APIC also provides an option to perform route filtering
and aggregation of routes that are advertised in a fabric to reduce the scale requirements of the fabric.
This feature is configured at the VRF level. Enabling route summarization at the VRF level helps to
achieve summarization of routes into the ACI fabric, as well as towards external BGP L3Out peers.
Both forms of summarization are described in the sections that follow.
Enabling route summarization at the L3Out External EPG helps to achieve route summarization towards
L3Out peers only and not within the ACI fabric. To achieve summarization of routes into the ACI fabric as
well as towards external L3Out peers, see Route Filtering and Aggregation at the VRF Level, on page 336.
Also, with this route summarization configured, the aggregate prefix is advertised to external L3Out
peers and the more-specific prefixes are not advertised to the L3Out peers.
• Subnets: A list of subnets with each subnet pointing to the BGP route summarization policy configured
under tenant.
• You must configure at least one subnet to deploy the policy.
• There should be no overlap between subnets that are associated with different route summarization
policies for the same node.
• BGP route summarization policy: Using the route summarization policy control state options, you can
either enable advertisement of only the aggregate prefixes to peers, or you can allow advertisement of
both the aggregate as well as specific prefixes to peers.
• When you configure route summarization for the same subnet on a VRF instance as well as
l3extSubnet, the Cisco APIC raises a fault. Clear this fault before performing a fabric upgrade or
switch reload.
• To configure a BGP route summarization policy, see Configuring Route Control Policy in VRF
Using the GUI, on page 341.
• Route Map: Configure the route map the same way that you configure an existing route profile for a
tenant. The following route-map match and set clauses are applicable in the Import Route Control
configuration:
Match Clauses:
• IP Prefix List
• Community
• Extended Community (match on color extended community is not supported)
• Regex Community
• Regex Extended Community
• Regex AS-Path
Set Clauses:
• Community
• Extended Community
• Tag
• Weight
• Preference
• Metric
If route summarization or fabric export control is configured to suppress a prefix in MP-BGP, then this policy
will not be updated in the receiver leaf’s routing table even if it is allowed by the import route control policy
for the leaf.
The procedure for configuring an inter-VRF import route control policy is provided in Configuring Route
Control Policy in VRF Using the GUI, on page 341.
• Route Map: Configure the route map the same way that you configure an existing route profile for a
tenant. The following route-map match and set clauses are applicable to the Export Route Control
configuration:
Match Clauses:
• IP Prefix List
• Community
• Extended Community (match on color extended community is not supported)
• Regex Community
• Regex Extended Community
Set Clauses:
• Community
• Extended Community (except setting extended community to None)
• Weight
• Preference
• Metric
The procedure for configuring a VRF export route control policy is provided in Configuring Route Control
Policy in VRF Using the GUI, on page 341.
Procedure
Step 2 Configure OSPF inter-area and external summarization using the GUI as follows:
a) On the menu bar, choose Tenants > common.
b) In the Navigation pane, expand Networking > L3Outs > External EPGs, then click on the configured external EPG.
The overview information for that configured external EPG appears.
c) In the work pane, click the + sign above Route Summarization Policy.
The Create Subnet dialog box appears.
d) In the Specify the Subnet dialog box, you can associate a route summarization policy to the subnet as follows:
Example:
• Enter an IP address in the IP Address field.
• Check the check box next to Export Route Control Subnet.
• Check the check box next to External Subnets for the External EPG.
• From the OSPF Route Summarization Policy drop-down menu, choose either default for an existing (default)
policy or Create OSPF route summarization policy to create a new policy.
• If you chose Create OSPF route summarization policy, the Create OSPF Route Summarization Policy dialog
box appears. Enter a name for it in the Name field, check the check box next to Inter-Area Enabled, enter a value
next to Cost, and then click Submit.
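For reference, the following is a hedged sketch of the equivalent configuration post. The class ospfRtSummPol, the relation l3extRsSubnetToRtSumm, and the target DN format are assumptions based on the standard object model; the tenant, subnet, and policy names are placeholders.
<fvTenant name="common">
  <!-- OSPF route summarization policy: inter-area summarization with an example cost -->
  <ospfRtSummPol name="ospf-summ" interAreaEnabled="yes" cost="20"/>
  <l3extOut name="l3out1">
    <l3extInstP name="extepg1">
      <!-- Summary subnet with export route control, pointing to the summarization policy (DN format assumed) -->
      <l3extSubnet ip="10.1.0.0/16" scope="export-rtctrl,import-security">
        <l3extRsSubnetToRtSumm tDn="uni/tn-common/ospfrtsumm-ospf-summ"/>
      </l3extSubnet>
    </l3extInstP>
  </l3extOut>
</fvTenant>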
Procedure
Step 1 Navigate to Tenants > tenant_name > Networking > VRFs > vrf_name.
Step 2 On the VRF - vrf_name work pane, click the Route Control tab.
Step 3 Determine how you want to configure route filtering and aggregation at the VRF level.
• To configure a BGP route summarization policy, go to Step 4, on page 341.
• To configure an inter-VRF import route control policy, go to Step 5, on page 342.
• To configure a VRF export route control policy, go to Step 6, on page 342.
Step 4 Configure the Route Summarization Policy using the GUI as follows:
a) Click + next to the Route Summarization Policy.
The Create Route Summarization Policy dialog box appears.
b) Enter a name in the Name field and select switches from the Nodes list.
c) Click + next to Subnets.
The Create Association of Subnet to Summarization Policy dialog box appears.
d) Enter an IP address in the Subnet field.
e) Select a policy from the BGP Route Summary Policy list.
The Create BGP Route Summarization Policy dialog box appears.
f) Enter a name in the Name field and select the appropriate Control State and Address Type Controls options.
If the Do not advertise more specifics control state option in the BGP route summarization policy configuration is
enabled, aggregate prefix would be advertised and more-specific prefixes would not be advertised to peers. If the Do
not advertise more specifics option is not enabled, both the aggregate and the more-specific prefixes would be
advertised to the peers.
g) Click Submit and confirm the previous configurations.
Note
If the aggregate prefix is both learned from L3Out peers and locally originated at the border leaf switches by the route
summarization policy configuration, then to prefer the externally learned aggregate prefix as the BGP best path over the
locally originated aggregate prefix, the incoming weight of all routes originating from the L3Out peers must be set to
32769 or greater.
Step 5 Configure the Intra-VRF Import Route Configuration Policy using the GUI as follows:
a) Click + next to the Intra-VRF Import Route Configuration Policy.
The Create VRF Import Route Control Policy dialog box appears.
b) Enter a name in the Name field and select switches from the Nodes list.
c) Select a policy from the Route Profile for Import list.
d) Click Submit.
Step 6 Configure the VRF Export Route Configuration Policy using the GUI as follows:
a) Click + next to the VRF Export Route Configuration Policy.
The Create VRF Export Route Control Policy dialog box appears.
b) Enter a name in the Name field and select switches from the Nodes list.
c) Select a policy from the Route Profile for Export list.
d) Click Submit.
Note The ACI fabric only supports the use of route map related policies, such as match and set rules, within the
tenant they were created in. If a route map related policy is created in the common tenant then it is only
supported for use in the common tenant.
The Route Profile Polices are created under the Layer 3 Outside connection. A Route Control Policy can be
referenced by the following objects:
• Tenant BD Subnet
• Tenant BD
• External EPG
• External EPG import/export subnet
Here is an example of using Import Route Control for BGP and setting the local preference for an external
route learned from two different Layer 3 Outsides. The Layer 3 Outside connection for the external connection
to AS300 is configured with the Import Route Control enforcement. An action rule profile is configured to
set the local preference to 200 in the Action Rule Profile for Local Preference window.
The Layer 3 Outside connection External EPG is configured with a 0.0.0.0/0 import aggregate policy to allow
all the routes. This is necessary because the import route control is enforced, but no prefixes should be
blocked. The import route control is enforced to allow setting the local preference. Another import subnet
151.0.1.0/24 is added with a Route Profile that references the Action Rule Profile in the External EPG settings
for Route Control Profile window.
Use the show ip bgp vrf overlay-1 command to display the MP-BGP table. The MP-BGP table on the spine
displays the prefix 151.0.1.0/24 with local preference 200 and a next hop of the border leaf for the BGP 300
Layer 3 Outside connection.
There are two special route control profiles: default-import and default-export. If you configure route control
profiles with the names default-import and default-export, they are automatically applied at the Layer 3
Outside level for import and export, respectively. The default-import and default-export route control profiles
cannot be configured using the 0.0.0.0/0 aggregate.
A route control profile is applied in the following sequential order for fabric routes:
1. Tenant BD subnet
2. Tenant BD
3. Layer3 outside
The route control profile is applied in the following sequential order for transit routes:
1. External EPG prefix
2. External EPG
3. Layer3 outside
• If you specify a private BD subnet in the match prefix list, then it will be included. You do not have to
go through additional configurations to exclude private BD subnets.
• If you configure 0.0.0.0/0 in the match prefix list, then it will match all prefixes, including BD subnets.
• Cisco APIC creates and deploys the route-map on border leaf switches with the name <tenant name>_<route profile
name>_<L3Out name>-<direction>. For example, a route map with these settings:
• Tenant name: t1
• Route profile name: rp1
• L3Out name: l3out1
• Direction: import
results in a route map named t1_rp1_l3out1-import on the border leaf switch.
• The behavior of the permit and deny entries in a route control profile is not deterministic when the order
is the same. When you map a route control profile to instp or BGP per peer, the order of entries determines
their behavior. To ensure predictable behavior, specify a lower order for the entry that
needs to be installed first and a higher order for the one that needs to be installed later.
Procedure
j) Click OK.
k) Click Next and click Finish.
Step 3 Create an application EPG:
a) Right-click Application Profiles and choose Create Application Profile.
b) Enter a name for the application.
c) Click the + icon for EPGs.
d) Enter a name for the EPG.
e) From the BD drop-down list, choose the bridge domain you previously created.
f) Click Update.
g) Click Submit.
Step 4 Create a tenant level route-map that will be used as the BGP Per Peer Route-Map:
a) In the Navigation pane, expand the Tenants > Tenant_name > Policies > Protocol.
b) Right-click on Route Maps for Route Control and select Create Route Maps for Route Control.
c) In the Create Route Maps for Route Control dialog box, in the Name field, enter a route profile name.
d) In the Type field, you must choose Match Routing Policy Only.
e) In the Contexts area, click the + sign to open the Create Route Control Context dialog box and perform the following
actions:
1. Populate the Order and the Name fields as desired.
2. In the Match Rule field, click Create Match Rule.
3. In the Create Match Rule dialog box, in the Name field, enter a name for the match rule.
4. Enter the necessary information in the appropriate fields (Match Regex Community Terms, Match Community
Terms, Match AS Path Regex Terms , and Match Prefix), then click Submit.
5. In the Set Rule field, click Create Set Rules for a Route Map
6. In the Create Set Rules for a Route Map dialog box, in the Name field, enter a name for the action rule profile.
7. Choose the desired attributes, and related community, criteria, tags, and preferences. Click Finish.
8. In the Create Route Control Context window, click OK.
9. In the Create Route Maps for BGP Dampening, Inter-leak dialog box, click Submit.
Step 5 Create the L3Out and configure the BGP for the L3Out:
a) On the Navigation pane, expand Tenant and Networking.
b) Right-click L3Outs and choose Create L3Out.
c) Enter the necessary information to configure BGP for the L3Out.
You will select BGP in the Identity page in the L3Out creation wizard to configure the BGP protocol for this L3Out.
d) Continue through the remaining pages (Nodes and Interfaces, Protocols, and External EPG) to complete the
configuration for the L3Out.
Step 6 After you have completed the L3Out configuration, configure the route control per BGP peer feature:
a) Navigate to the BGP Peer Connectivity Profile screen:
Tenants > tenant > Networking > L3Outs > L3out-name > Logical Node Profiles > logical-node-profile-name >
Logical Interface Profiles > logical-interface-profile-name > BGP Peer Connectivity Profile IP-address
b) Scroll down to the Route Control Profile field, then click + to configure the following:
• Name: Select the route-map that you configured in Step 4, on page 347.
• Direction: Choose one of the following options:
• Route Import Policy
Note When an explicit prefix list is used, the type of the route profile should be set to "match routing policy only".
After the match and set profiles are defined, the route map must be created in the Layer 3 Out. Route maps
can be created using one of the following methods:
• Create a "default-export" route map for export route control, and a "default-import" route map for import
route control.
• Create other route maps (not named default-export or default-import) and setup the relation from one or
more l3extInstPs or subnets under the l3extInstP.
• In either case, match the route map on explicit prefix list by pointing to the rtctrlSubjP within the route
map.
In the export and import route map, the set and match rules are grouped together along with the relative
sequence across the groups (rtctrlCtxP). Additionally, under each group of match and set statements (rtctrlCtxP)
the relation to one or more match profiles are available (rtctrlSubjP).
Any protocol enabled on Layer 3 Out (for example BGP protocol), will use the export and import route map
for route filtering.
The subnets in the prefix list can represent the bridge domain public subnets or external networks. Explicit
prefix list presents an alternate method and can be used instead of the following:
• Advertising BD subnets through BD to Layer 3 Out relation.
Note The subnet in the BD must be marked public for the subnet to be advertised out.
• Specifying a subnet in the l3extInstP with export/import route control for advertising transit and external
networks.
Explicit prefix list is defined through a new match type that is called match route destination
(rtctrlMatchRtDest). An example usage is provided in the API example that follows.
Figure 43: External Policy Model of API
Additional information about match rules, set rules when using explicit prefix list are as follows:
Match Rules
• Under the tenant (fvTenant), you can create match profiles (rtctrlSubjP) for route map filtering. Each
match profile can contain one or more match rules. Match rule supports multiple match types. Prior to
Cisco APIC release 2.1, match types supported were explicit prefix list and community list.
Beginning with Cisco APIC release 2.1, explicit prefix match or match route destination
(rtctrlMatchRtDest) is supported.
Match prefix list (rtctrlMatchRtDest) supports one or more subnets with an optional aggregate flag.
The aggregate flag allows prefix matches with multiple masks, starting with the mask specified in the
configuration up to the maximum mask allowed for the address family of the prefix. This is the
equivalent of the "le" option in a prefix list in NX-OS software (for example, 10.0.0.0/8 le 32).
The prefix list can be used for covering the following cases:
• Allow all (0.0.0.0/0 with the aggregate flag, the equivalent of 0.0.0.0/0 le 32)
• One or more specific prefixes (for example, 10.1.1.0/24)
• One or more prefixes with the aggregate flag (for example, the equivalent of 10.1.1.0/24 le 32)
Note When a route map with a match prefix “0.0.0.0/0 with aggregate flag” is used
under an L3Out EPG in the export direction, the rule is applied only for
redistribution from dynamic routing protocols. Therefore, the rule is not applied
to the following (in routing protocol such as OSPF or EIGRP):
• Bridge domain (BD) subnets
• Directly connected subnets on the border leaf switch
• Static routes defined on the L3Out
• The explicit prefix match rules can contain one or more subnets, and these subnets can be bridge domain
public subnets or external networks. Subnets can also be aggregated up to the maximum subnet mask
(/32 for IPv4 and /128 for IPv6).
• When multiple match rules of different types are present (such as match community and explicit prefix
match), the match rule is allowed only when the match statements of all individual match types match.
This is the equivalent of the AND filter. The explicit prefix match is contained by the subject profile
(rtctrlSubjP) and will form a logical AND if other match rules are present under the subject profile.
• Within a given match type (such as match prefix list), at least one of the match rules statement must
match. Multiple explicit prefix match (rtctrlMatchRtDest) can be defined under the same subject profile
(rtctrlSubjP) which will form a logical OR.
• When a per-peer route-map is configured with a permit-all rule followed by an exact match rule, then
any specific properties that were set in the exact match rule may not be processed.
• If an empty route in a route map is matched with an action of permit or deny, without a match clause, all of the
routes will be either permitted or denied. A regular route map for import or export route control does not
permit an empty route. Beginning with Cisco APIC release 5.2(4), static and direct routes are not permitted
without an explicit route match.
From Prefix and To Prefix fields (Cisco APIC release 4.2(3) and later)
Use these fields to specify the mask range when you create a prefix match rule and enable aggregation.
Following are example situations where you might use these fields:
• Allow all (0.0.0.0/0 with a mask length between 24 and 30, the equivalent of 0.0.0.0/0 ge 24 le 30)
• Prefixes with a specific IP address and a netmask greater than 28 (for example, the equivalent of
10.1.1.0/24 ge 28)
The following table provides more information on the various scenarios where you might use these two new
fields and the result for each scenario. Note the following:
• The Greater Equal Mask and Less Equal Mask fields are available only if you select the Aggregate
option in the Create Match Route Destination Rule window.
• A value of 0 in the Greater Equal Mask and Less Equal Mask fields is considered unspecified and
assumes the following default values:
• Greater Equal Mask=0
• Less Equal Mask=32 or 128, depending on whether the IP address family is IPv4 or IPv6.
This situation assumes legacy behavior and provides support for importing old configurations where
these properties are missing. Refer to the second row in the following table for more information.
Set Rules
Set policies must be created to define set rules that are carried with the explicit prefixes such as set community
and set tag.
When used with the “Export Route Control Subnet” scope under the L3Out subnet, the route map will only
match routes learned from dynamic routing protocols. It will not match BD subnets or directly-connected
networks.
When used with the explicit route map configuration, the route map will match all routes, including BD
subnets and directly-connected networks.
Consider the following examples to get a better understanding of the expected and unexpected (inconsistent)
behavior in the two situations described above.
Scenario 1
For the first scenario, we configure a route map (with a name of rpm_with_catch_all) using a
configuration post similar to the following:
</l3extLNodeP>
<l3extInstP annotation="" descr="" exceptionTag="" floodOnEncap="disabled"
matchT="AtleastOne" name="epg" nameAlias="" prefGrMemb="exclude" prio="unspecified"
targetDscp="unspecified">
<l3extRsInstPToProfile annotation="" direction="export"
tnRtctrlProfileName="rpm_with_catch_all"/>
<l3extSubnet aggregate="" annotation="" descr="" ip="0.0.0.0/0" name="" nameAlias=""
scope="import-security"/>
<fvRsCustQosPol annotation="" tnQosCustomPolName=""/>
</l3extInstP>
</l3extOut>
With this route map, what we would expect with 0.0.0.0/0 is that all of the routes would be advertised with the property
metricType="ospf-type1", but only for the OSPF routes.
In addition, we also have a subnet configured under a bridge domain (for example, 209.165.201.0/27), with
a bridge domain to L3Out relation, using a route map with a pervasive subnet (fvSubnet) for a static route.
However, even though the route map shown above is combinable, we do not want it applied for the subnet
configured under the bridge domain, because we want 0.0.0.0/0 in the route map above to apply only for the
transit route, not on the static route.
Following is the output for the show route-map and show ip prefix-list commands, where
exp-ctx-st-2555939 is the name of the outbound route map for the subnet configured under the bridge
domain, and the name of the prefix list is provided within the output from the show route-map command:
leaf4#
In this situation, everything behaves as expected, because when the bridge domain subnet goes out, it is not
applying the rpm_with_catch_all route map policies.
Scenario 2
For the second scenario, we configure a "default-export" route map for export route control, where an explicit
prefix-list (Match Prefix rule) is assigned to the "default-export" route map, using a configuration post similar
to the following:
Notice that this default-export route map has similar information as the rpm_with_catch_all
route map, where the IP is set to 0.0.0.0/0 (ip=0.0.0.0/0), and the set rule in the default-export route
map is configured only with the Set Metric Type (tnRtctrlAttrPName=set_metric_type).
Similar to the situation in the previous example, we also have the same subnet configured under the bridge
domain, with a bridge domain to L3Out relation, as we did in the previous example.
However, following is the output in this scenario for the show route-map and show ip prefix-list commands:
leaf4#
Notice that in this situation, when the bridge domain subnet goes out, it is applying the default-export
route map policies. In this situation, that route map matches all routes, including BD subnets and
directly-connected networks. This is inconsistent behavior.
• Beginning with release 2.3(x), the deny-static implicit entry has been removed from the export route map. You must
explicitly configure the permit and deny entries that are required to control the export of static routes.
• Route-map per peer in an L3Out is not supported for OSPF and EIGRP. The route-map can only be applied
on the L3Out as a whole. Beginning with release 4.2(x), route-map per peer in an L3Out is supported for BGP.
Following are possible workarounds to this issue:
• Block the prefix from being advertised from the other side of the neighbor.
• Block the prefix on the route-map on the existing L3Out where you don't want to learn the prefix,
and move the neighbor to another L3Out where you want to learn the prefix and create a separate
route-map.
• Creating route-maps using a mixture of GUI and API commands is not supported. As a possible
workaround, you can create a route-map different from the default route-map using the GUI, but the
route-map created through the GUI on an L3Out cannot be applied per peer.
Configuring a Route Map/Profile with Explicit Prefix List Using the GUI
Before you begin
• Tenant and VRF must be configured.
• The VRF must be enabled on the leaf switch.
Procedure
Step 1 On the menu bar, click Tenant, and in the Navigation pane, expand Tenant_name > Policies > Protocol > Match
Rules.
Step 2 Right click Match Rules, and click Create Match Rule for a Route Map.
Step 3 In the Create Match Rule window, enter a name for the rule and choose the desired community terms.
Step 4 Enter the necessary information for the match prefix.
The method that you use to enter information for the match prefix varies, depending on the APIC release.
• For APIC releases prior to 4.2(3), in the Create Match Rule window, expand Match Prefix and perform the
following actions:
a. In the IP field, enter the explicit prefix list.
The explicit prefix can denote a BD subnet or an external network.
b. (Optional) In the Description field, enter descriptive information about the route destination policy.
c. Check the Aggregate check box only if you desire an aggregate prefix.
d. Click Update.
• For APIC releases 4.2(3) and later, in the Create Match Rule window, click + in the Match Prefix area.
The Create Match Route Destination Rule window appears. Perform the following actions in this window:
a. In the IP field, enter the explicit prefix list.
The explicit prefix can denote a BD subnet or an external network.
b. (Optional) In the Description field, enter descriptive information about the route destination policy.
c. Determine if you want an aggregate prefix or not.
• If you do not want an aggregate prefix, leave the Aggregate unchecked and click Submit, then go to
Step 5, on page 359.
• If you want an aggregate prefix, check the Aggregate check box.
The From Prefix and To Prefix fields become available.
1. In the From Prefix field, specify the prefix length to match.
The range is from 0 to 128. A value of 0 is considered unspecified.
2. In the To Prefix field, specify the prefix length to match.
The range is from 0 to 128. A value of 0 is considered unspecified.
See Enhancements for Match Prefix, on page 352 for more information on the From Prefix and To Prefix
fields for APIC releases 4.2(3) and later.
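For reference, the match rule and match prefix created in this procedure correspond to a rtctrlSubjP object with a rtctrlMatchRtDest child. The following is a minimal, hedged sketch; the fromPfxLen and toPfxLen attribute names for the From Prefix and To Prefix fields are assumptions, and all names and values are placeholders.
<fvTenant name="t1">
  <!-- Match profile (match rule) for a route map -->
  <rtctrlSubjP name="match-rule-1">
    <!-- Explicit prefix match with aggregation; matches 10.1.0.0/16 with a mask length from 24 to 30 (attribute names assumed) -->
    <rtctrlMatchRtDest ip="10.1.0.0/16" aggregate="yes" fromPfxLen="24" toPfxLen="30"/>
  </rtctrlSubjP>
</fvTenant>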
Step 6 Under L3Outs, choose the available default L3Out.
If you want to use a different L3Out, you can choose that one instead.
Step 7 Right-click Route map for import and export route control, and click Create Route map for import and export
route control.
Step 8 In the Create Route map for import and export route control dialog box, use a default route map, or enter a name
for the desired route map.
For the purpose of this example, we use default_export route map.
Step 10 In the Contexts area, expand the + icon to display the Create Route Control Context dialog box.
Step 11 Enter a name for route control context, and choose the desired options for each field. To deny routes that match criteria
that are defined in the match rule (which you will be choosing in the next step), select the action deny. The default
action is permit.
Step 12 In the Match Rule field, choose the rule that was created earlier.
Step 13 In the Set Rule field, choose Create Set Rules for a Route Map.
Typically, the route map/profile contains a match rule so that the prefix list is permitted in and out; in addition, some
attributes are set for these routes so that the routes carrying those attributes can be matched further.
Step 14 In the Create Set Rules for a Route Map dialog box, enter a name for the action rule and check the desired check
boxes. Click Finish.
Step 15 In the Create Route Control Context dialog box, click OK. And in the Create Route map for import and export
route control dialog box, click Submit.
This completes the creation of the route map/profile. The route map is a combination of match action rules and set
action rules. The route map is associated with export profile or import profile or redistribute profile as desired by the
user. You can enable a protocol with the route map.
Configuring a Route Control Protocol to Use Import and Export Controls, With
the GUI
This example assumes that you have configured the Layer 3 outside network connections using BGP. It is
also possible to perform these tasks for a network configured using OSPF.
This task lists steps to create import and export policies. By default, import controls are not enforced, so the
import control must be manually assigned.
Procedure
Step 1 On the menu bar, click TENANTS > Tenant_name > Networking > L3Outs > Layer3_Outside_name .
Step 2 Right click Layer3_Outside_name and click Create Route map for import and export route control.
Step 3 In the Create Route map for import and export route control dialog box, perform the following actions:
a) From the Name field drop-down list, choose the appropriate route profile.
Depending on your selection, whatever is advertised on the specific outside is automatically used.
b) In the Type field, choose Match Prefix AND Routing Policy.
c) In the Contexts area, click + to bring up the Create Route Control Context window.
Step 4 In the Create Route Control Context dialog box, perform the following actions:
a) In the Order field, choose the desired order number.
b) In the Name field, enter a name for the route control private network.
c) From the Match Rule field drop-down list, click Create Match Rule For a Route Map.
d) In the Create Match Rule dialog box, in the Name field, enter a route match rule name. Click Submit.
Specify the match community regular expression term and match community terms as desired. Match community
factors will require you to specify the name, community and scope.
e) From the Set Rule drop-down list, choose Create Set Rules For a Route Map.
f) In the Create Set Rules For a Route Map dialog box, in the Name field, enter a name for the rule.
g) Check the check boxes for the desired rules you want to set, and choose the appropriate values that are displayed for
the choices. Click Finish.
The policy is created and associated with the action rule.
h) In the Create Route Control Context window, click OK.
i) In the Create Route map for import and export route control dialog box, click Submit.
Step 5 In the Navigation pane, choose Route Profile > route_profile_name > route_control_private_network_name .
In the Work pane, under Properties the route profile policy and the associated action rule name are displayed.
Step 6 In the Navigation pane, click the Layer3_Outside_name , then click the Policy/Main tabs.
In the Work pane, the Properties are displayed.
Step 7 (Optional) Next to the Route Control Enforcement field, check the Import check box to enable the import policy.
The import control policy is not enabled by default but can be enabled by the user. The import control policy is supported
for BGP and OSPF, but not for EIGRP. If the user enables the import control policy for an unsupported protocol, it will
be automatically ignored. The export control policy is supported for BGP, EIGRP, and OSPF.
Note
If BGP is established over OSPF, then the import control policy is applied only for BGP and ignored for OSPF.
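For reference, enabling route control enforcement corresponds to the enforceRtctrl attribute on the l3extOut object. A minimal, hedged sketch follows; the L3Out name is a placeholder.
<!-- Enforce both export and import route control on the L3Out -->
<l3extOut name="l3out1" enforceRtctrl="export,import"/>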
Step 8 To create a customized export policy, right-click Route map for import and export route control, click Create Route
map for import and export route control, and perform the following actions:
a) In the Create Route map for import and export route control dialog box, from the drop-down list in the Name
field, choose or enter a name for the export policy.
b) In the Contexts area, click + to bring up the Create Route Control Context window.
c) In the Create Route Control Context dialog box, in the Order field, choose a value.
d) In the Name field, enter a name for the route control private network.
e) (Optional) From the Match Rule field drop-down list, choose Create Match Rule For a Route Map, and create
and attach a match rule policy if desired.
f) From the Set Rule field drop-down list, choose Create Set Rules For a Route Map and click OK.
Alternatively, if desired, you can choose an existing set action, and click OK.
g) In the Create Set Rules For A Route Map dialog box, in the Name field, enter a name.
h) Check the check boxes for the desired rules you want to set, and choose the appropriate values that are displayed for
the choices. Click Finish.
In the Create Route Control Context dialog box, the policy is created and associated with the action rule.
i) Click OK.
j) In the Create Route map for import and export route control dialog box, click Submit.
In the Work pane, the export policy is displayed.
Note
To enable the export policy, it must first be applied. For the purpose of this example, it is applied to all the subnets under
the network.
Step 9 In the Navigation pane, expand L3Outs > L3Out_name > External EPGs > externalEPG_name , and perform the
following actions:
a) Expand Route Control Profile.
b) In the Name field drop-down list, choose the policy created earlier.
c) In the Direction field drop-down list, choose Route Export Policy. Click Update.
Procedure
Step 3 In the Navigation pane, expand tenant_name > Policies > Protocol > Route Maps for Route Control.
Step 4 Right-click Route Maps for Route Control and click Create Route Maps for Route Control. The Create Route Maps
for Route Control dialog box appears.
Step 5 In the Name field, enter a name for the route map to control interleak (redistribution to BGP).
Step 6 In the Contexts area, click the + sign to open the Create Route Control Context dialog box, and perform the following
actions:
a) Populate the Order and the Name fields as desired.
b) In the Action field, choose Permit.
c) In the Match Rule field, choose your desired match rule or create a new one.
d) In the Set Rule field, choose your desired set rule or create a new one.
e) Click OK.
Repeat this step for each route control context that you need to create.
Step 7 In the Create Route Maps for Route Control dialog box, click Submit.
Procedure
Aggregate Export, Aggregate Import, and Aggregate Shared Routes—This option adds 32 in front of the
0.0.0.0/0 prefix. Currently, you can only aggregate the 0.0.0.0/0 prefix for the import/export route control
subnet. If the 0.0.0.0/0 prefix is aggregated, no route control profile can be applied to the 0.0.0.0/0 network.
Aggregate Shared Route—This option is available for any prefix that is marked as Shared Route Control
Subnet.
Route Control Profile—The ACI fabric also supports the route-map set clauses for the routes that are advertised
into and out of the fabric. The route-map set rules are configured with the Route Control Profile policies and
the Action Rule Profiles.
• Supported for tenant EPGs ←→ EPGs and tenant EPGs ←→ External EPGs.
If there are no contracts between the external prefix-based EPGs, the traffic is dropped. To allow traffic
between two external EPGs, you must configure a contract and a security prefix. As only prefix filtering is
supported, the default filter can be used in the contract.
External L3Out Connection Contracts
The union of prefixes for L3Out connections is programmed on all the leaf nodes where the L3Out connections
are deployed. When more than two L3Out connections are deployed, the use of the aggregate rule 0.0.0.0/0
can allow traffic to flow between L3Out connections that do not have a contract.
You configure the provider and consumer contract associations and the security import subnets in the L3Out
Instance Profile (instP).
When security import subnets are configured and the aggregate rule, 0.0.0.0/0, is supported, the security
import subnets follow the ACL type rules. The security import subnet rule 10.0.0.0/8 matches all the addresses
from 10.0.0.0 to 10.255.255.255. It is not required to configure an exact prefix match for the prefixes to be
permitted by the route control subnets.
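As a quick illustration of this containment behavior, the following standard-library sketch (with arbitrary example addresses) shows how a single 10.0.0.0/8 security import subnet covers the whole range without exact prefix matches:

# Containment check mirroring the ACL-style matching described above:
# the security import subnet 10.0.0.0/8 covers 10.0.0.0-10.255.255.255.
import ipaddress

security_import_subnet = ipaddress.ip_network("10.0.0.0/8")

for addr in ("10.0.0.1", "10.255.255.254", "11.0.0.1"):
    matched = ipaddress.ip_address(addr) in security_import_subnet
    print(f"{addr} matched by {security_import_subnet}: {matched}")
# 10.0.0.1 and 10.255.255.254 match; 11.0.0.1 does not.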
Be careful when configuring the security import subnets if more than two L3Out connections are configured
in the same VRF, due to the union of the rules.
Transit traffic flowing into and out of the same L3Out is dropped by policies when configured with the 0.0.0.0/0
security import subnet. This behavior is true for dynamic or static routing. To prevent this behavior, define
more specific subnets.
Note External EPG selection made through route-map has precedence over the external EPG subnets configured
on L3Out. For example, if route-map configuration associates 10.1.1.0/24 to external EPG1, and subnet
10.1.1.0/24 is configured on external EPG2, then external EPG1 will be programmed in hardware for
10.1.1.0/24 because external EPG determination through route-map is preferred.
Procedure
l) In the Create Route map for import and export route control dialog box, click Submit.
Step 7 In the Work pane, choose the Policy > Main tabs.
In the Work pane, the Properties are displayed.
Step 8 (Optional) Next to Route Control Enforcement, put a check in the Import check box to enable the import policy,
then click Submit.
The import control policy is disabled by default. The import control policy is supported for BGP and OSPF, but not
for EIGRP. If you enable the import control policy for an unsupported protocol, the policy is automatically ignored for that protocol. The export control policy is supported for BGP, EIGRP, and OSPF. Also, you need not put a check in the
Import check box for the import policy when you configure BGP per neighbor import route-map.
Note
If BGP is established over OSPF, then the import control policy is applied only for BGP and ignored for OSPF.
Step 9 To create a customized export policy, in the Navigation pane, right-click Route map for import and export route
control, choose Create Route map for import and export route control, and perform the following actions:
a) In the Create Route map for import and export route control dialog box, from the Name drop-down list, choose
or enter a name for the export policy.
b) In the Contexts table, click + to open the Create Route Control Context dialog.
c) In the Create Route Control Context dialog box, in the Order field, enter a value.
d) In the Name field, enter a name for the route control private network.
e) (Optional) From the Associated Match Rules table, click +, choose Create Match Rule For a Route Map from
the Rule Name drop-down list, fill out the fields as desired, and click Submit.
f) From the Set Rule drop-down list, choose Create Set Rules For a Route Map.
Alternatively, you can choose an existing set rule.
g) If you chose Create Set Rules For a Route Map, in the Create Set Rules For A Route Map dialog box, enter a
name for the set rules in the Name field, put a check in the box for the rules you want to set, enter the appropriate
values for the rules, then click Finish.
In the Create Route Control Context dialog box, the policy is created and associated with the action rule.
h) Click OK.
i) In the Create Route map for import and export route control dialog box, click Submit.
In the Work pane, the export policy is displayed.
Note
To enable the export policy, it must first be applied. For the purpose of this example, the policy is applied to all the
subnets under the network.
Step 10 In the Navigation pane, expand tenant_name > Networking > L3Outs > L3Out_name > External EPGs >
external_EPG_name , and perform the following actions:
a) In the Route Control Profile table, click +.
b) In the Name drop-down list, choose the policy that you created earlier.
c) In the Direction drop-down list, choose Route Export Policy.
d) Click Update.
e) Click Submit.
In transit routing, multiple L3Out connections within a single tenant and VRF are supported and the APIC
advertises the routes that are learned from one L3Out connection to another L3Out connection. The external
Layer 3 domains peer with the fabric on the border leaf switches. The fabric is a transit Multiprotocol-Border
Gateway Protocol (MP-BGP) domain between the peers.
The configuration for external L3Out connections is done at the tenant and VRF level. The routes that are
learned from the external peers are imported into MP-BGP at the ingress leaf per VRF. The prefixes that are
learned from the L3Out connections are exported to the leaf switches only where the tenant VRF is present.
Note For cautions and guidelines for configuring transit routing, see Guidelines for Transit Routing, on page 379
In this topology, mainframes require the ACI fabric to be a transit domain for external connectivity through
a WAN router and for east-west traffic within the fabric. They push host routes to the fabric to be redistributed
within the fabric and out to external interfaces.
The VIP is the external facing IP address for a particular site or service. A VIP is tied to one or more servers
or nodes behind a service node.
In such scenarios, the policies are administered at the demarcation points and ACI policies need not be imposed.
Layer 4 to Layer 7 route peering is a special use case of the fabric as a transit where the fabric serves as a
transit OSPF or BGP domain for multiple pods. You configure route peering to enable OSPF or BGP peering
on the Layer 4 to Layer 7 service device so that it can exchange routes with the leaf node to which it is
connected. A common use case for route peering is Route Health Injection where the SLB VIP is advertised
over OSPF or iBGP to clients outside the fabric. See L4-L7 Route Peering with Transit Fabric - Configuration
Walkthrough for a configuration walk-through of this scenario.
Figure 49: GOLF L3Outs and a Border Leaf L3Out in a Transit-Routed Configuration
• OSPF: Yes, Yes*, Yes, Yes*, Yes, Yes, Yes, Yes, Yes*, Yes (some combinations tested in APIC releases 1.3c and 1.2g)
• eBGP over OSPF: Yes, Yes*, Yes*, Yes*, Yes, Yes*, Yes*, Yes, X, Yes* (some combinations tested in APIC release 1.3c)
• eBGP over Direct Connection: Yes, Yes, Yes, Yes*, Yes*, Yes*, Yes, Yes, X, Yes (some combinations tested in APIC release 1.3c)
• EIGRPv4: Yes, Yes, Yes, Yes, Yes, Yes, Yes, Yes, X, Yes (tested in APIC release 1.3c)
• Static Route: Yes, Yes, Yes, Yes, Yes, Yes, Yes, Yes, Yes, Yes (some combinations tested in APIC releases 1.3c and 1.2g)
• connec. = connection
• * = Not supported on the same leaf switch
• X = Unsupported/Untested combinations
OSPF/EIGRP Redistribution into ACI Fabric iBGP when Transit Routing across Multiple VRFs - Route Tags
In a transit routing scenario where external routers are used to route between multiple VRFs, and when an entry other than the default route tag (4294967295) is used to identify the policy in different VRFs, there is a risk of routing loops when one or more routes are withdrawn from a tenant L3Out in OSPF or EIGRP.
This is expected behavior. Upon the EIGRP/OSPF redistribution of routes into
the ACI fabric, the default iBGP anti-routing loop mechanisms on the border
leaf switches either use the specific default route tag 4294967295 or they use
the same tag that is assigned in the Transit Route Tag Policy field in the
VRF/Policy page.
If you configure a different, specific transit route tag for each VRF, the default
anti-routing loop mechanism does not work. In order to avoid this situation, use
the same value for the Transit Route Tag Policy field across all VRFs. For
additional details regarding route-maps and tags usage, see the row for "OSPF
or EIGRP in Back to Back Configuration" and other information on route control
profile policies in this table.
Note
The route tag policy is configured in the Create Route Tag Policy page, which
is accessed through the Transit Route Tag Policy field in the VRF/Policy
page:
Tenants > tenant_name > Networking > VRFs > VRF_name
Transit Routing with a Single L3Out Profile
Before Cisco APIC release 2.3(1f), transit routing was not supported within a single L3Out profile. In Cisco APIC release 2.3(1f) and later, you can configure transit routing with a single L3Out profile, with the following limitations:
• If the VRF instance is unenforced, you can use an external subnet
(l3extSubnet) of 0.0.0.0/0 to allow traffic between the routers sharing the
same Layer 3 EPG.
• If the VRF instance is enforced, you cannot use an external default subnet
(0.0.0.0/0) to match both source and destination prefixes for traffic within
the same Layer 3 EPG. To match all traffic within the same Layer 3 EPG,
Cisco APIC supports the following prefixes (see the sketch after this list):
• IPv4
• 0.0.0.0/1—with external subnets for the external EPG
• 128.0.0.0/1—with external subnets for the external EPG
• 0.0.0.0/0—with import route control subnet, aggregate import
• IPv6
• 0::0/1—with external subnets for the external EPG
• 8000::0/1—with external subnets for the external EPG
• 0::0/0—with import route control subnet, aggregate import
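The /1 prefix pairs above together cover the entire IPv4 or IPv6 address space, which is why they can stand in for the default prefix in the enforced case. The sketch below verifies this with the Python standard library; it is illustrative only:

# 0.0.0.0/1 + 128.0.0.0/1 collapse to the full IPv4 space (0.0.0.0/0);
# 0::0/1 + 8000::0/1 collapse to the full IPv6 space (::/0).
import ipaddress

v4_halves = [ipaddress.ip_network("0.0.0.0/1"), ipaddress.ip_network("128.0.0.0/1")]
v6_halves = [ipaddress.ip_network("0::0/1"), ipaddress.ip_network("8000::0/1")]

print(list(ipaddress.collapse_addresses(v4_halves)))  # [IPv4Network('0.0.0.0/0')]
print(list(ipaddress.collapse_addresses(v6_halves)))  # [IPv6Network('::/0')]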
Shared Routes: Differences in Hardware Support
Routes shared between VRFs function correctly on generation 2 switches (Cisco Nexus N9K switches with "EX" or "FX" on the end of the switch model name, or later; for example, N9K-93108TC-EX). On generation 1 switches, however,
there may be dropped packets with this configuration, because the physical
ternary content-addressable memory (TCAM) tables that store routes do not
have enough capacity to fully support route parsing.
OSPF or EIGRP in Back to Back Configuration
Cisco APIC supports transit routing in export route control policies that are configured on the L3Out. These policies control which transit routes (prefixes)
are redistributed into the routing protocols in the L3Out. When these transit
routes are redistributed into OSPF or EIGRP, they are tagged 4294967295 to
prevent routing loops. The Cisco ACI fabric does not accept routes matching
this tag when learned on an OSPF or EIGRP L3Out. However, in the following
cases, it is necessary to override this behavior:
• When connecting two Cisco ACI fabrics using OSPF or EIGRP.
• When connecting two different VRFs in the same Cisco ACI fabric using
OSPF or EIGRP.
Where an override is required, you must configure the VRF with a different tag
policy at the following APIC GUI location: Tenant > Tenant_name >
Policies > Protocol > Route Tag. Apply a different tag.
In addition to creating the new route-tag policy, update the VRF to use this
policy at the following APIC GUI location: Tenant > Tenant_name >
Networking > VRFs > Tenant_VRF . Apply the route tag policy that you
created to the VRF.
Note
When multiple L3Outs or multiple interfaces in the same L3Out are deployed
on the same leaf switch and used for transit routing, the routes are advertised
within the IGP (not redistributed into the IGP). In this case the route-tag policy
does not apply.
Advertising BD Subnets Outside the Fabric
The import and export route control policies only apply to the transit routes (the routes that are learned from other external peers) and the static routes. The
subnets internal to the fabric that are configured on the tenant BD subnets are
not advertised out using the export policy subnets. The tenant subnets are still
permitted using the IP prefix-lists and the route-maps but they are implemented
using different configuration steps. See the following configuration steps to
advertise the tenant subnets outside the fabric:
1. Configure the tenant subnet scope as Public Subnet in the subnet properties
window.
2. Optional. Set the Subnet Control as ND RA Prefix in the subnet properties
window.
3. Associate the tenant bridge domain (BD) with the external Layer 3 Outside
(L3Out).
4. Create contract (provider or consumer) association between the tenant EPG
and the external EPG.
Setting the BD subnet to Public scope and associating the BD to the L3Out
creates an IP prefix-list and the route-map sequence entry on the border leaf
for the BD subnet prefix.
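Steps 1 and 3 above can also be expressed as a small object tree pushed through the APIC REST API. The following is a minimal, hedged sketch, not a documented procedure: the tenant, bridge domain, subnet, and L3Out names are placeholders, and the class and attribute names (fvSubnet scope, fvRsBDToOut tnL3extOutName) should be verified with the API Inspector on your APIC before use.

# Hedged sketch: mark a BD subnet public and associate the BD with an L3Out
# (steps 1 and 3 above) over the APIC REST API. Names and attribute values
# are assumptions; verify against your APIC before use.
import requests

APIC = "https://apic.example.com"   # placeholder APIC address
session = requests.Session()
session.verify = False               # lab only; use valid certificates in production

# aaaLogin returns a session cookie that requests stores on the Session object.
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

payload = """
<fvTenant name="t1">
  <fvBD name="bd1">
    <fvSubnet ip="192.168.100.1/24" scope="public"/>
    <fvRsBDToOut tnL3extOutName="l3out1"/>
  </fvBD>
</fvTenant>
"""
resp = session.post(f"{APIC}/api/mo/uni.xml", data=payload)
print(resp.status_code, resp.text[:200])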
Advertising a Default Route
For external connections to the fabric that only require a default route, there is support for originating a default route for OSPF, EIGRP, and BGP L3Out
connections. If a default route is received from an external peer, this route can
be redistributed out to another peer following the transit export route control as
described earlier in this article.
A default route can also be advertised out using a Default Route Leak policy.
This policy supports advertising a default route if it is present in the routing
table or it always supports advertising a default route. The Default Route Leak
policy is configured in the L3Out connection.
When creating a Default Route Leak policy, follow these guidelines:
• For BGP, the Always property is not applicable.
• For BGP, when configuring the Scope property, choose Outside.
• For OSPF, the scope value Context creates a type-5 LSA while the Scope
value Outside creates type-7 LSA. Your choice depends on the area type
configured in the L3Out. If the area type is Regular, set the scope to
Context. If the area type is NSSA, set the scope to Outside.
• For EIGRP, when choosing the Scope property, you must choose Context.
MTU
Cisco ACI does not support IP fragmentation. Therefore, when you configure Layer 3 Outside (L3Out) connections to external routers, or multipod connections through an Inter-Pod Network (IPN), it is critical that the MTU is set
appropriately on both sides. On some platforms, such as ACI, Cisco NX-OS,
and Cisco IOS, the configurable MTU value takes into account the IP headers (resulting in a maximum configurable packet size of 9216 bytes for ACI and 9000 bytes for
NX-OS and IOS). However, other platforms such as IOS-XR configure the MTU
value exclusive of packet headers (resulting in a max packet size of 8986 bytes).
For the appropriate MTU values for each platform, see the relevant configuration
guides.
Cisco highly recommends you test the MTU using CLI-based commands. For
example, on the Cisco NX-OS CLI, use a command such as ping 1.1.1.1
df-bit packet-size 9000 source-interface ethernet 1/1.
Export route-maps are made up of prefix-list matches. Each prefix-list consists of bridge domain (BD) public
subnet prefixes in the VRF and the export prefixes that need to be advertised outside.
Route control policies are defined in an l3extOut policy and controlled by properties and relations associated
with the l3extOut. APIC uses the enforceRtctrl property of the l3extOut to enforce route control directions.
The default is to enforce control on export and allow all on import. Imported and exported routes
(l3extSubnets) are defined in the l3extInstP. The default scope for every route is import. These are the
routes and prefixes which form a prefix-based EPG.
All the import routes form the import route map and are used by BGP and OSPF to control import. All the
export routes form the export route map used by OSPF and BGP to control export.
Import and export route control policies are defined at different levels. All IPv4 policy levels are supported
for IPv6. Extra relations that are defined in the l3extInstP and l3extSubnet MOs control import.
Default route leak is enabled by defining the l3extDefaultRouteLeakP MO under the l3extOut.
l3extDefaultRouteLeakP can have Virtual Routing and Forwarding (VRF) scope or L3extOut scope per area
for OSPF and per peer for BGP.
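As a rough sketch of the object model described here (not a documented payload), a Default Route Leak policy could be posted under an existing L3Out as follows; the attribute names and values are assumptions mapped from the GUI fields and should be confirmed with the API Inspector.

# Hedged sketch: l3extDefaultRouteLeakP under an L3Out. scope="l3-out" is
# assumed to correspond to the GUI Scope value "Outside"; always="yes" to the
# Always option (not applicable to BGP). Tenant/L3Out names are placeholders.
payload = '<l3extDefaultRouteLeakP scope="l3-out" always="yes"/>'
# POST to https://{apic}/api/mo/uni/tn-t1/out-l3out1.xml with an authenticated
# session, as in the earlier requests-based sketch.
print(payload)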
The following set rules provide route control:
• rtctrlSetPref
• rtctrlSetRtMetric
• rtctrlSetRtMetricType
BGP
The ACI fabric supports BGP peering with external routers. BGP peers are associated with an l3extOut policy
and multiple BGP peers can be configured per l3extOut. BGP can be enabled at the l3extOut level by defining
the bgpExtP MO under an l3extOut.
Note Although the l3extOut policy contains the routing protocol (for example, BGP with its related VRF), the
L3Out interface profile contains the necessary BGP interface configuration details. Both are needed to enable
BGP.
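A rough sketch of this split follows: bgpExtP under the l3extOut turns BGP on, while the peer definition sits under the node or interface profile. The class and attribute names for the peer (bgpPeerP addr, bgpAsP asn), the profile names, and the addresses are assumptions for illustration only.

# Hedged sketch of the BGP object hierarchy described in the note above.
# All names and attributes are placeholders/assumptions to verify on your APIC.
payload = """
<l3extOut name="l3out1">
  <bgpExtP/>
  <l3extLNodeP name="l3out1_nodeProfile">
    <l3extLIfP name="l3out1_interfaceProfile">
      <bgpPeerP addr="192.168.1.2">
        <bgpAsP asn="65001"/>
      </bgpPeerP>
    </l3extLIfP>
  </l3extLNodeP>
</l3extOut>
"""
# POST to https://{apic}/api/mo/uni/tn-t1.xml with an authenticated session,
# as in the earlier requests-based sketch.
print(payload)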
BGP peer reachability can be through OSPF, EIGRP, a connected interface, static routes, or a loopback. iBGP
or eBGP can be used for peering with external routers. The BGP route attributes from the external router are
preserved since MP-BGP is used for distributing the external routes in the fabric. BGP enables IPv4 and/or
IPv6 address families for the VRF associated with an l3extOut. The address family to enable on a switch is
determined by the IP address type defined in bgpPeerP policies for the l3extOut. The policy is optional; if
not defined, the default will be used. Policies can be defined for a tenant and used by a VRF that is referenced
by name.
You must define at least one peer policy to enable the protocol on each border leaf (BL) switch. A peer policy
can be defined in two places:
OSPF
Various host types require OSPF to enable connectivity and provide redundancy. These include mainframe
devices, external pods and service nodes that use the ACI fabric as a Layer 3 transit within the fabric and to
the WAN. Such external devices peer with the fabric through a nonborder leaf switch running OSPF. Configure
the OSPF area as an NSSA (stub) area to enable it to receive a default route and not participate in full-area
routing. Typically, existing routing deployments avoid configuration changes, so a stub area configuration is
not mandated.
You enable OSPF by configuring an ospfExtP managed object under an l3extOut. OSPF IP address family
versions configured on the BL switch are determined by the address family that is configured in the OSPF
interface IP address.
Note Although the l3extOut policy contains the routing protocol (for example, OSPF with its related VRF and
area ID), the Layer 3 external interface profile contains the necessary OSPF interface details. Both are needed
to enable OSPF.
You configure OSPF policies at the VRF level by using the fvRsCtxToOspfCtxPol relation, which you can
configure per address family. If you do not configure it, default parameters are used.
You configure the OSPF area in the ospfExtP managed object, which also exposes the required area properties for IPv6.
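A similarly hedged sketch for OSPF follows, using OSPF area 0 configured as a regular area purely as an illustration; the attribute names and values are assumptions mapped from the GUI fields.

# Hedged sketch: ospfExtP under an L3Out with area 0 as a regular area.
# Attribute names/values are assumptions; interface-level OSPF settings still
# belong under the L3Out interface profile, as the note above explains.
payload = '<ospfExtP areaId="0.0.0.0" areaType="regular"/>'
# POST to https://{apic}/api/mo/uni/tn-t1/out-l3out1.xml with an authenticated session.
print(payload)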
Shared Security Import Subnet—This control is the same as External Subnets for the External EPG for Shared
L3Out learned routes. If you want traffic to flow from one external EPG to another external EPG or to another
internal EPG, the subnet must be marked with this control. If you do not mark the subnet with this control,
then routes learned from one EPG are advertised to the other external EPG, but packets are dropped in the
fabric. When using security policies that have this option configured, you must configure a contract and a
security prefix.
Aggregate Export, Aggregate Import, and Aggregate Shared Routes—This option adds 32 in front of the
0.0.0.0/0 prefix. Currently, you can only aggregate the 0.0.0.0/0 prefix for the import/export route control
subnet. If the 0.0.0.0/0 prefix is aggregated, no route control profile can be applied to the 0.0.0.0/0 network.
Aggregate Shared Route—This option is available for any prefix that is marked as Shared Route Control
Subnet.
Route Control Profile—The ACI fabric also supports the route-map set clauses for the routes that are advertised
into and out of the fabric. The route-map set rules are configured with the Route Control Profile policies and
the Action Rule Profiles.
In the examples in this chapter, the Cisco ACI fabric has two leaf switches and two spine switches that are
controlled by an APIC cluster. The border leaf switches 101 and 102 have L3Outs on them providing
connections to two routers and thus to the Internet. The goal of this example is to enable traffic to flow from
EP 1 to EP 2 on the Internet into and out of the fabric through the two L3Outs.
In this example, the tenant that is associated with both L3Outs is t1, with VRF v1.
Before configuring the L3Outs, configure the nodes, ports, functional profiles, AEPs, and a Layer 3 domain.
You must also configure the spine switches 104 and 105 as BGP route reflectors.
Configuring transit routing includes defining the following components:
1. Tenant and VRF
2. Node and interface on leaf 101 and leaf 102
3. Primary routing protocol on each L3Out (used to exchange routes between border leaf switch and external
routers; in this example, BGP)
4. Connectivity routing protocol on each L3Out (provides reachability information for the primary protocol;
in this example, OSPF)
5. Two external EPGs
6. One route map
7. At least one filter and one contract
8. Associate the contract with the external EPGs
Note For transit routing cautions and guidelines, see Guidelines for Transit Routing, on page 379.
The following table lists the names that are used in the examples in this chapter:
Property Names for L3Out1 on Node 101 Names for L3Out2 on Node 102
Tenant t1 t1
VRF v1 v1
Route map rp1 with ctx1 and route destination rp2 with ctx2 and route destination
192.168.1.0/24 192.168.2.0/24
Procedure
Step 1 To create the tenant and VRF, on the menu bar, choose Tenants > Add Tenant and in the Create Tenant dialog box,
perform the following tasks:
a) In the Name field, enter the tenant name.
b) In the VRF Name field, enter the VRF name.
c) Click Submit.
Note
After this step, perform the remaining steps twice to create two L3Outs in the same tenant and VRF for transit routing.
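For reference only, the tenant and VRF from the table above (t1 and v1) map to a small object tree on the REST side; this is a hedged sketch, not part of the documented GUI procedure.

# Hedged sketch: REST equivalent of Step 1 (tenant t1 with VRF v1).
# fvTenant and fvCtx are the tenant and VRF classes; the target URL is an
# assumption to confirm on your APIC.
payload = """
<fvTenant name="t1">
  <fvCtx name="v1"/>
</fvTenant>
"""
# POST to https://{apic}/api/mo/uni.xml with an authenticated session,
# as in the earlier requests-based sketch.
print(payload)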
Step 2 To start creating the L3Out, on the Navigation pane, expand Tenant and Networking, then right-click L3Outs and
choose Create L3Out.
The Create L3Out wizard appears. The following steps provide the steps for an example L3Out configuration using
the Create L3Out wizard.
Step 3 Enter the necessary information in the Identity window of the Create L3Out wizard.
a) In the Name field, enter a name for the L3Out.
b) From the VRF drop-down list, choose the VRF.
c) From the L3 Domain drop-down list, choose the external routed domain that you previously created.
d) In the area with the routing protocol check boxes, check the desired protocols (BGP, OSPF, or EIGRP).
For the example in this chapter, choose BGP and OSPF.
Depending on the protocols you choose, enter the properties that must be set.
e) Enter the OSPF details, if you enabled OSPF.
For the example in this chapter, use the OSPF area 0 and type Regular area.
f) Click Next to move to the Nodes and Interfaces window.
Step 4 Enter the necessary information in the Nodes and Interfaces window of the Create L3Out wizard.
a) Determine if you want to use the default naming convention.
In the Use Defaults field, check if you want to use the default node profile name and interface profile names:
• The default node profile name is L3Out-name_nodeProfile, where L3Out-name is the name that you
entered in the Name field in the Identity page.
• The default interface profile name is L3Out-name_interfaceProfile, where L3Out-name is the name
that you entered in the Name field in the Identity page.
b) In the Interface Types area, make the necessary selections in the Layer 3 and Layer 2 fields.
The options are:
• Layer 3:
• Routed: Select this option to configure a Layer 3 route to the port channels.
When selecting this option, the Layer 3 route can be to either physical ports or direct port channels, which
are selected in the Layer 2 field in this page.
• Routed Sub: Select this option to configure a Layer 3 sub-interface route to the port channels.
When selecting this option, the Layer 3 sub-interface route can be to either physical ports or direct port
channels, which are selected in the Layer 2 field in this page.
• SVI: Select this option to configure a Switch Virtual Interface (SVI), which is used to provide connectivity
between the ACI leaf switch and a router.
SVI can have members that are physical ports, direct port channels, or virtual port channels, which are
selected in the Layer 2 field in this page.
• Floating SVI: Select this option to configure floating L3Out.
Floating L3Out enables you to configure an L3Out that allows a virtual router to move from under one
leaf switch to another. The feature saves you from having to configure multiple L3Out interfaces to
maintain routing when VMs move from one host to another.
• Layer 2: (not available if you select Floating SVI in the Layer 3 area)
• Port
• Virtual Port Channel (available if you select SVI in the Layer 3 area)
• Direct Port Channel
c) From the Node ID field drop-down menu, choose the node for the L3Out.
For the topology in these examples, use node 103.
d) In the Router ID field, enter the router ID (IPv4 or IPv6 address for the router that is connected to the L3Out).
e) (Optional) You can configure another IP address for a loopback address, if necessary.
The Loopback Address field is automatically populated with the same entry that you provide in the Router ID
field. This is the equivalent of the Use Router ID for Loopback Address option in previous builds. Enter a different IP address if you do not want to use the router ID for the loopback address, or leave this field empty if you do not want to create a loopback address.
f) Enter necessary additional information in the Nodes and Interfaces window.
The fields shown in this window vary, depending on the options that you select in the Layer 3 and Layer 2 areas.
g) When you have entered the remaining additional information in the Nodes and Interfaces window, click Next.
The Protocols window appears.
Step 5 Enter the necessary information in the Protocols window of the Create L3Out wizard.
Because you chose BGP and OSPF as the protocols for this example, the following steps provide information for those fields.
a) In the BGP Loopback Policies and BGP Interface Policies areas, enter the following information:
• Peer Address: Enter the peer IP address
• EBGP Multihop TTL: Enter the connection time to live (TTL). The range is from 1 to 255 hops; if zero, no
TTL is specified. The default is zero.
• Remote ASN: Enter a number that uniquely identifies the neighbor autonomous system. The Autonomous
System Number can be in 4-byte asplain format, from 1 to 4294967295.
Note
ACI does not support asdot or asdot+ format AS numbers.
b) In the OSPF area, choose the default OSPF policy, a previously created OSPF policy, or Create OSPF Interface
Policy.
c) Click Next.
The External EPG window appears.
Step 6 Enter the necessary information in the External EPG window of the Create L3Out wizard.
a) In the Name field, enter a name for the external network.
b) In the Provided Contract field, enter the name of a provided contract.
c) In the Consumed Contract field, enter the name of a consumed contract.
d) In the Default EPG for all external networks field, uncheck the box if you don't want to advertise all the transit routes
out of this L3Out connection.
The Subnets area appears if you uncheck this box. Specify the desired subnets and controls as described in the
following steps.
e) Click the + icon to expand Subnet, then perform the following actions in the Create Subnet dialog box.
f) In the IP address field, enter the IP address and network mask for the external network.
g) In the Name field, enter the name of the subnet.
h) In the Scope field, check the appropriate check boxes to control the import and export of prefixes for the L3Out.
Note
For more information about the scope options, see the online help for this Create Subnet panel.
i) (Optional) Click the check box for Export Route Control Subnet.
The BGP Route Summarization Policy field now becomes available.
j) In the BGP Route Summarization Policy field, from the drop-down list, choose an existing route summarization
policy or create a new one as desired.
The type of route summarization policy depends on the routing protocols that are enabled for the L3Out.
k) Click OK when you have completed the necessary configurations in the Create Subnet window.
l) (Optional) Repeat to add more subnets.
m) Click Finish to complete the necessary configurations in the Create L3Out wizard.
Step 7 Navigate to the L3Out that you just created, then right-click on the L3Out and select Create Route map for import
and export route control.
Step 8 In the Create Route map for import and export route control window, perform the following actions:
a) In the Name field, enter the route map name.
b) Choose the Type.
For this example, leave the default, Match Prefix AND Routing Policy.
c) Click the + icon to expand Contexts and create a route context for the route map.
d) Enter the order and name of the profile context.
e) Choose Deny or Permit for the action to be performed in this context.
f) (Optional) In the Set Rule field, choose Create Set Rules for a Route Map.
Enter the name for the set rules, click the objects to be used in the rules, and click Finish.
g) In the Match Rule field, choose Create Match Rule for a Route Map.
h) Enter the name for the match rule and enter the Match Regex Community Terms, Match Community Terms,
or Match Prefix to match in the rule.
i) When you have finished filling in the fields in the Create Match Rule window, click Submit.
j) In the Create Route Control Context dialog box, click OK.
k) In the Create Route map for import and export route control dialog box, click Submit.
Step 9 In the Navigation pane, expand L3Outs > L3Out_name > External EPGs > externalEPG_name, and perform the
following actions:
a) Click the + icon to expand Route Control Profile.
b) In the Name field, choose the route control profile that you previously created from the drop-down list.
c) In the Direction field, choose Route Export Policy.
d) Click Update.
Step 10 Navigate to the L3Out that you just created, then right-click on the L3Out and select Create Route map for import
and export route control.
Step 11 In the Create Route map for import and export route control window, perform the following actions.
Note
To set attributes for BGP, OSPF, or EIGRP for received routes, create a default-import route control profile, with the
appropriate set actions and no match actions.
For example, for a simple web filter, set criteria such as the following:
• EtherType—IP
• IP Protocol—tcp
• Destination Port Range From—Unspecified
• Destination Port Range To—https
f) Click Update.
g) In the Create Filter dialog box, click Submit.
Step 13 To add a contract, use the following steps:
a) Under Contracts, right-click Standard and choose Create Contract.
b) Enter the name of the contract.
c) Click the + icon to expand Subjects to add a subject to the contract.
d) Enter a name for the subject.
e) Click the + icon to expand Filters and choose the filter that you previously created from the drop-down list.
f) Click Update.
g) In the Create Contract Subject dialog box, click OK.
h) In the Create Contract dialog box, click Submit.
Step 14 Associate the EPGs for the L3Out with the contract, with the following steps:
The first L3 external EPG (extnw1) is the provider of the contract and the second L3 external EPG (extnw2) is the
consumer.
a) To associate the contract to the L3 external EPG, as the provider, under the tenant, click Networking, expand
L3Outs, and expand the L3Out.
b) Expand External EPGs, click the L3 external EPG, and click the Contracts tab.
c) Click the + icon to expand Provided Contracts.
For the second L3 external EPG, click the + icon to expand Consumed Contracts.
d) In the Name field, choose the contract that you previously created from the list.
e) Click Update.
f) Click Submit.
Take note of the following guidelines and limitations for shared L3Out network configurations:
• No tenant limitations: Tenants A and B can be any kind of tenant (user, common, infra, mgmt). The shared
external EPG does not have to be in the common tenant.
• Flexible placement of EPGs: EPG A and EPG B in the illustration above are in different tenants. EPG
A and EPG B could use the same bridge domain and VRF instance, but they are not required to do so.
EPG A and EPG B are in different bridge domains and different VRF instances but still share the same
external EPG.
• A subnet can be private, public, or shared. A subnet that is to be advertised into a consumer or provider
EPG of an L3Out must be set to shared. A subnet that is to be exported to an L3Out must be set to public.
• The shared service contract is exported from the tenant that contains the external EPG that provides
shared L3Out network service. The shared service contract is imported into the tenants that contain the
EPGs that consume the shared service.
• Do not use taboo contracts with a shared L3Out; this configuration is not supported.
• The external EPG as a shared service provider is supported, but only with non-external EPG consumers
(where the L3Out EPG is the same as the external EPG).
• Traffic Disruption (Flap): When an external EPG is configured with an external subnet of 0.0.0.0/0 with
the scope property of the external EPG subnet set to shared route control (shared-rctrl), or shared security
(shared-security), the VRF instance is redeployed with a global pcTag. This will disrupt all the external
traffic in that VRF instance (because the VRF instance is redeployed with a global pcTag).
• Prefixes for a shared L3Out must be unique. Multiple shared L3Out configurations with the same
prefix in the same VRF instance will not work. Be sure that the external subnets (external prefixes) that
are advertised into a VRF instance are unique (the same external subnet cannot belong to multiple external
EPGs). An L3Out configuration (for example, named L3Out1) with prefix1 and a second L3Out
configuration (for example, named L3Out2) also with prefix1 belonging to the same VRF instance will
not work (because only 1 pcTag is deployed).
• Different behaviors of L3Out are possible when configured on the same leaf switch under the same VRF
instance. The two possible scenarios are as follows:
• Scenario 1 has an L3Out with an SVI interface and two subnets (10.10.10.0/24 and 0.0.0.0/0) defined.
If ingress traffic on the L3Out network has the matching prefix 10.10.10.0/24, then the ingress traffic
uses the external EPG pcTag. If ingress traffic on the L3Out network has the matching default prefix
0.0.0.0/0, then the ingress traffic uses the external bridge pcTag.
• Scenario 2 has an L3Out using a routed or routed-sub-interface with two subnets (10.10.10.0/24
and 0.0.0.0/0) defined. If ingress traffic on the L3Out network has the matching prefix 10.10.10.0/24,
then the ingress traffic uses the external EPG pcTag. If ingress traffic on the L3Out network has
the matching default prefix 0.0.0.0/0, then the ingress traffic uses the VRF instance pcTag.
• As a result of these described behaviors, the following use cases are possible if the same VRF
instance and same leaf switch are configured with L3Out-A and L3Out-B using an SVI interface:
Case 1 is for L3Out-A: This external EPG has two subnets defined: 10.10.10.0/24 and 0.0.0.0/1. If
ingress traffic on L3Out-A has the matching prefix 10.10.10.0/24, it uses the external EPG pcTag
and contract-A, which is associated with L3Out-A. When egress traffic on L3Out-A has no specific
match found, but there is a maximum prefix match with 0.0.0.0/1, it uses the external bridge domain
pcTag and contract-A.
Case 2 is for L3Out-B: This external EPG has one subnet defined: 0.0.0.0/0. When ingress traffic
on L3Out-B has the matching prefix 10.10.10.0/24 (which is defined under L3Out-A), it uses the
external EPG pcTag of L3Out-A and the contract-A, which is tied with L3Out-A. It does not use
contract-B, which is tied with L3Out-B.
• Traffic not permitted: Traffic is not permitted when an invalid configuration sets the scope of the external
subnet to shared route control (shared-rtctrl) as a subset of a subnet that is set to shared security
(shared-security). For example, the following configuration is invalid:
• shared rtctrl: 10.1.1.0/24, 10.1.2.0/24
• shared security: 10.1.0.0/16
In this case, ingress traffic on a non-border leaf with a destination IP of 10.1.1.1 is dropped, since prefixes
10.1.1.0/24 and 10.1.2.0/24 are installed with a drop rule. Traffic is not permitted. Such traffic can be
enabled by revising the configuration to use the shared-rtctrl prefixes as shared-security prefixes as
well.
• Inadvertent traffic flow: Prevent inadvertent traffic flow by avoiding the following configuration scenarios:
• Case 1 configuration details:
• A L3Out network configuration (for example, named L3Out-1) with VRF1 is called provider1.
• A second L3Out network configuration (for example, named L3Out-2) with VRF2 is called
provider2.
• L3Out-1 VRF1 advertises a default route to the Internet, 0.0.0.0/0, which enables both
shared-rtctrl and shared-security.
• L3Out-2 VRF2 advertises specific subnets to DNS and NTP, 192.0.0.0/8, which enables
shared-rtctrl.
• Variation B: An EPG conforms to the allow_all contract of a second shared L3Out network.
• Communications between EPG1 and L3Out-1 is regulated by an allow_all contract.
• Communications between EPG1 and L3Out-2 is regulated by an allow_icmp contract.
Result: Traffic from EPG1 to L3Out-2 to 192.2.x.x conforms to the allow_all contract.
In the following figure, there are two Layer 3 Outs with a shared subnet. There is a contract between the Layer
3 external instance profile (l3extInstP) in both the VRFs. In this case, the Shared Layer 3 Out for VRF1 can
communicate with the Shared Layer 3 Out for VRF2.
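As a hedged illustration of the subnet flags involved, a shared external EPG subnet might look like the following. The scope keywords come from the guidelines above (shared-rtctrl, shared-security, plus import-security for the external EPG classification), while the object names and DN path are placeholders to verify on your APIC.

# Hedged sketch: an l3extInstP subnet carrying shared route-control and shared
# security scopes so that another VRF can import the route and permit traffic.
payload = """
<l3extInstP name="sharedExtEpg">
  <l3extSubnet ip="10.10.10.0/24"
               scope="import-security,shared-rtctrl,shared-security"/>
</l3extInstP>
"""
# POST to https://{apic}/api/mo/uni/tn-t1/out-l3out1.xml with an authenticated session.
print(payload)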
Figure 52: Shared Layer 3 Outs Communicating Between Two VRFs
Configuring Shared Layer 3 Out Inter-VRF Leaking Using the Advanced GUI
Before you begin
The contract label to be used by the consumer and provider is already created.
Procedure
Step 6 In the Create L3Out dialog box, perform the following actions:
a) In the Name field, enter a name for the L3Out.
b) In the VRF field, select the VRF that you created earlier.
c) In the L3 Domain field, select an L3 domain.
d) Make the appropriate selections for the protocols, then click Next.
Step 7 Make the necessary selections in the next windows, until you get to the External EPG window.
You might see the Nodes and Interfaces window and the Protocols window, depending on the protocol that you
selected in the Identity window. The last window in the Create L3Out wizard is the External EPG window.
Step 20 In the Create L3Out dialog box, perform the following actions:
a) In the Name field, enter a name for the L3Out.
b) In the VRF field, from the drop-down menu, choose the VRF that was created for the consumer.
c) In the Consumer Label field, enter the name for the consumer label.
d) In the L3 Domain field, select an L3 domain.
e) Make the appropriate selections for the protocols, then click Next.
Step 21 Make the necessary selections in the next windows, until you get to the External EPG window.
You might see the Nodes and Interfaces window and the Protocols window, depending on the protocol that you
selected in the Identity window. The last window in the Create L3Out wizard is the External EPG window.
L3Outs QoS
L3Out QoS can be configured using Contracts applied at the external EPG level. Starting with Release 4.0(1),
L3Out QoS can also be configured directly on the L3Out interfaces.
Note If you are running Cisco APIC Release 4.0(1) or later, we recommend using the custom QoS policies applied
directly to the L3Out to configure QoS for L3Outs.
Packets are classified using the ingress DSCP or CoS value so it is possible to use custom QoS policies to
classify the incoming traffic into Cisco ACI QoS queues. A custom QoS policy contains a table mapping the
DSCP/CoS values to the user queue and to the new DSCP/CoS value (in case of marking). If there is no
mapping for a specific DSCP/CoS value, the user queue is selected by the QoS priority setting of the ingress
L3Out interface if configured.
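The classification logic can be summarized with a small sketch; the mapping table below is illustrative, not a default shipped with the APIC.

# Sketch of the custom QoS classification described above: map ingress DSCP to
# a queue (and optionally a new DSCP for marking); unmapped values fall back to
# the QoS priority configured on the ingress L3Out interface.
from typing import Dict, Optional, Tuple

# dscp -> (queue level, rewritten dscp or None when not remarking)
CUSTOM_QOS_MAP: Dict[int, Tuple[str, Optional[int]]] = {
    46: ("level1", None),   # EF, highest-priority queue, no remarking
    26: ("level2", 24),     # AF31 remarked to CS3
}

def classify(ingress_dscp: int, interface_qos_priority: str = "level3") -> Tuple[str, int]:
    """Return (queue, egress_dscp) for traffic entering the L3Out."""
    if ingress_dscp in CUSTOM_QOS_MAP:
        queue, new_dscp = CUSTOM_QOS_MAP[ingress_dscp]
        return queue, ingress_dscp if new_dscp is None else new_dscp
    # No mapping for this DSCP: use the ingress interface's QoS priority.
    return interface_qos_priority, ingress_dscp

print(classify(46))  # ('level1', 46)
print(classify(0))   # ('level3', 0)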
• To enable the QoS policy to be enforced, the VRF Policy Control Enforcement Preference must be
"Enforced."
• When configuring the contract that controls communication between the L3Out and other EPGs, include
the QoS class or target DSCP in the contract or subject.
Note Only configure a QoS class or target DSCP in the contract, not in the external
EPG (l3extInstP).
• When creating a contract subject, you must choose a QoS priority level. You cannot choose Unspecified.
Note The exception is with custom QoS policies, as a custom QoS policy will set the
DSCP/CoS value even if the QoS class is set to Unspecified. When the QoS level
is unspecified, the level is treated as 3 by default.
• On generation 2 switches, QoS supports levels 4, 5, and 6 configured under Global policies, EPG, L3Out,
custom QoS, and Contracts. The following limitations apply:
• The number of classes that can be configured with the strict priority is increased to 5.
• The 3 new classes are supported only with generation 2 switches.
• If traffic flows between generation 1 switches and generation 2 switches, the traffic will use QoS
level 3.
• For communicating with FEX for new classes, the traffic carries a Layer 2 CoS value of 0.
Generation 1 switches can be identified by the lack of an "EX," "FX," "FX2," "GX," or later suffix at the end of the name; for example, N9K-9312TX. Generation 2 and later switches can be identified by the "EX," "FX," "FX2," "GX," or later suffix at the end of the name; for example, N9K-93108TC-EX or N9K-9348GC-FXP. A classification sketch follows this list.
• You can configure QoS class or create a custom QoS policy to apply on an L3Out interface.
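The naming rule above can be captured in a short heuristic sketch; the suffix list is inferred from the examples given and is not an exhaustive Cisco model list.

# Heuristic sketch: classify a Nexus 9000 model name as generation 1 or
# generation 2+ based on its trailing suffix (EX/FX/FX2/GX or later).
def is_generation2_or_later(model: str) -> bool:
    suffix = model.upper().rsplit("-", 1)[-1]
    return suffix.startswith(("EX", "FX", "GX"))

for model in ("N9K-9312TX", "N9K-93108TC-EX", "N9K-9348GC-FXP"):
    print(model, "generation 2 or later" if is_generation2_or_later(model) else "generation 1")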
Procedure
Step 1 From the main menu bar, select Tenants > <tenant-name> .
Step 2 In the left-hand navigation pane, expand Tenant <tenant-name> > Networking > L3Outs > <routed-network-name>
> Logical Node Profiles > <node-profile-name> > Logical Interface Profiles > <interface-profile-name>.
You may need to create new network, node profile, and interface profile if none exists.
Step 3 In the main window pane, configure custom QoS for your L3Out.
You can choose to configure a standard QoS priority level using the QoS Priority drop-down list. Alternatively, you can select an existing custom QoS policy or create a new one from the Custom QoS Policy drop-down list.
Note Starting with Release 4.0(1), we recommend using custom QoS policies for L3Out QoS as described in
Configuring QoS Directly on L3Out Using GUI, on page 402 instead.
Configuring QoS classification using a contract as described in this section will take priority over any QoS
policies configured directly on the L3Out.
Procedure
Step 1 Configure the VRF instance for the tenant consuming the L3Out to support QoS to be enforced on the border leaf switch
that is used by the L3Out.
a) From the main menu bar, choose Tenants > <tenant-name> .
b) In the Navigation pane, expand Networking, right-click VRFs, and choose Create VRF.
c) Enter the name of the VRF.
d) In the Policy Control Enforcement Preference field, choose Enforced.
e) In the Policy Control Enforcement Direction field, choose Egress.
VRF enforcement must be set to Egress when the QoS classification is done in the contract.
f) Complete the VRF configuration according to the requirements for the L3Out.
Step 2 When configuring filters for contracts to enable communication between the EPGs consuming the L3Out, include a QoS
class or target DSCP to enforce the QoS priority in traffic ingressing through the L3Out.
a) On the Navigation pane, under the tenant that will consume the L3Out, expand Contracts, right-click Filters
and choose Create Filter.
b) In the Name field, enter a filter name.
c) In the Entries field, click + to add a filter entry.
d) Add the Entry details, click Update and Submit.
e) Expand the previously created filter and click on a filter entry.
f) Set the Match DSCP field to the desired DSCP level for the entry, for example, EF.
Step 3 Add a contract.
a) Under Contracts, right-click Standard and choose Create Contract.
b) Enter the name of the contract.
c) In the QoS Class field, choose the QoS priority for the traffic governed by this contract. Alternatively, you can choose
a Target DSCP value.
Configuring QoS classification using a contract as described in this section will take priority over any QoS policies
configured directly on the L3Out.
d) Click the + icon on Subjects to add a subject to the contract.
e) Enter a name for the subject.
f) In the QoS Priority field, choose the desired priority level. You cannot choose Unspecified.
g) Under Filter Chain, click the + icon on Filters and choose the filter that you previously created from the drop-down
list.
h) Click Update.
i) On the Create Contract Subject dialog box, click OK.
For more information about PBR tracking, see Configuring Policy-Based Redirect in the Cisco APIC Layer
4 to Layer 7 Services Deployment Guide.
Note For either feature, you can perform a network action based on the results of the probes, including configuration,
using APIs, or running scripts.
• Remote Leaf
• You can define single object tracking policies across ACI main data center and the remote leaf
switch.
• IP SLA probes on remote leaf switches track IP addresses locally without using the IP network.
• A workload can move from one local leaf to a remote leaf. The IP SLA policy continues to check
accessibility information and detects if an endpoint has moved.
• IP SLA policies move to the remote leaf switches or ACI main data center, based on the endpoint
location, for local tracking, so that tracking traffic is not passed through the IP network.
Note Currently, ACI does not support IP SLA for static route in vPC topology.
The following figure shows the network topology and the operation for tracking the static route availability
of a router.
Figure 53: Static Route Availability by Tracking the Next-Hop
Figure 56: Static Route Availability by Tracking an IP Address in the ACI Fabric
An IP SLA monitoring policy identifies the probe frequency and the type of probe.
ACI IP SLA Monitoring Operation Probe Types
Using ACI IP SLAs, you can monitor the performance between any area in the network: core, distribution,
and edge. Monitoring can be done anytime, anywhere, without deploying a physical probe. ACI IP SLAs use
generated traffic to measure network performance between two networking devices such as switches. The
types of IP SLA operations include:
• ICMP: Echo Probes
• TCP: Connect Probes
The connection response time is computed by measuring the time that is taken between sending a TCP request
message from Switch B to IP Host 1 and receiving a reply from IP Host 1.
The IP SLA ICMP Echo operation conforms to the same IETF specifications for ICMP ping testing and the
two methods result in the same response times.
In this track list, each of the four track members is assigned 25%. For the track list to become unreachable
(down), two of the four track members must be unreachable (50%). For the track list to return to reachable
(up), all four track members must be reachable (100%).
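The threshold behavior in this example can be expressed as a small sketch; the 25% weight and the 50%/100% thresholds are the values from this example, not general defaults.

# Sketch of the track-list threshold logic described above: four members at
# 25% each, the list goes down once 50% of the weight is unreachable, and it
# returns to up only when 100% of the weight is reachable again.
def track_list_state(member_up_flags, currently_up,
                     weight_pct=25, pct_down_threshold=50, pct_up_threshold=100):
    """Return True (reachable/up) or False (unreachable/down)."""
    up_pct = sum(weight_pct for up in member_up_flags if up)
    down_pct = sum(weight_pct for up in member_up_flags if not up)
    if currently_up and down_pct >= pct_down_threshold:
        return False            # e.g. two of four members unreachable
    if not currently_up and up_pct >= pct_up_threshold:
        return True             # all four members reachable again
    return currently_up         # otherwise keep the previous state

print(track_list_state([True, True, False, False], currently_up=True))   # False
print(track_list_state([True, True, True, False], currently_up=False))   # False
print(track_list_state([True, True, True, True], currently_up=False))    # True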
Note When a track list is associated with a static route and the track list becomes unreachable (down), the static
route is removed from the routing table until the track list becomes reachable again.
The following image shows a static route for the endpoint prefix of 192.168.13.1/24. It also shows a pair of
routers in a static route between an L3Out leaf switch and a consumer endpoint.
To configure an ACI IP SLA based on the figure above, the router must be monitored to ensure connectivity
to the consumer endpoint. This is accomplished by creating a static route, track members, and track lists:
• Static route for 192.168.13.1/24 with next hops of 10.10.10.1 and 11.11.11.1
• Track Member 1 (TM-1) includes the router IP address 10.10.10.1 (this is the next hop probe)
• Track Member 2 (TM-2) includes the router IP address 11.11.11.1 (this is the next hop probe)
• Track List 1 (TL-1) with TM-1 and TM-2 included (track list associated with a static route. The track
list contains list of next hops through which configured prefix end points can be reached. Thresholds
determining if the track list is reachable or unreachable are also configured.)
• Track List 2 (TL-2) with TM-1 included (associated with a next hop entry included in a static route)
• Track List 3 (TL-3) with TM-2 included (associated with a next hop entry included in a static route)
For a generic static route, you can associate TL-1 with the static route, associate TL-2 with the 10.10.10.1
next hop, and associate TL-3 with the 11.11.11.1 next hop. For a pair of specific static routes (both
192.168.13.1/24), you can associate TL-2 on one and TL-3 on the other. Both should also have TL-2 and
TL-3 associated with the router next hops.
These options allow for one router to fail while providing a back-up route in case of the failure. See the
following sections to learn more about track members and track lists.
The workaround is to configure a non-zero IP SLA port value before upgrading the Cisco APIC, and use
the snapshot and configuration export that was taken after the IP SLA port change.
• You must enable global GIPo if you are supporting remote leaf switches in an IP SLA:
1. On the menu bar, click System > System Settings.
2. In the System Settings navigation pane, click System Global GIPo.
3. In the System Global GIPo Policy work pane, click Enabled.
4. In the Policy Usage Warning dialog, review the nodes and policies that may be using the GIPo policy
and, if appropriate, click Submit Changes.
• Statistics viewed through Fabric > Inventory > Pod number > Leaf Node name > Protocols > IP SLA >
ICMP Echo Operations or TCP Connect Operations can only be gathered in five-minute intervals. The interval default is 15 Minute, but it must be set to 5 Minute.
• IP SLA policy is not supported for endpoints connected through vPod.
• IP SLA is supported for single pods, Cisco ACI Multi-Pod, and remote leaf switches.
• IP SLA is not supported when the destination IP address to be tracked is connected across Cisco ACI
Multi-Site.
For information on verified IP SLA numbers, refer to the appropriate Verified Scalability Guide for Cisco
APIC on the Cisco APIC documentation page.
The previous components are applied to either static routes or next hop profiles.
Procedure
Step 1 On the menu bar, click Tenant > tenant_name. In the navigation pane, click Policies > Protocol > IP SLA.
Step 2 Right-click IP SLA Monitoring Policies, and click Create IP SLA Monitoring Policy.
Step 3 In the Create IP SLA Monitoring Policy dialog box, perform the following actions:
a) In the Name field, enter a unique name for the IP SLA Monitoring policy.
b) In the SLA Type field, choose the SLA type.
The SLA type can be TCP, ICMP, L2Ping, or HTTP. ICMP is the default value.
Note
L2Ping is supported only for Layer 1/Layer 2 policy-based redirect (PBR) tracking.
c) If you chose HTTP for the SLA type, for the HTTP Version buttons, choose a version.
d) If you chose HTTP for the SLA type, for the HTTP URI field, enter the HTTP URI to use for service node tracking.
The URI must begin with "/", such as "/index.html".
e) If you chose TCP for the SLA type, enter a port number in the Destination Port field.
f) In the SLA Frequency field, enter a value, in seconds, to determine the configured frequency to track a packet.
The range is from 1 to 300. The default value is 60. The minimum frequency for HTTP tracking should be 5 seconds.
g) In the Detect Multiplier field, enter a value for the number of missed probes in a row that shows that a failure is
detected or a track is down.
By default, failures are detected when three probes are missed in a row. Changing the value in the Detect Multiplier
field changes the number of missed probes in a row that will determine when failures are detected or when a track is
considered to be down.
Used in conjunction with the entry in the SLA Frequency, you can determine when a failure will be detected. For
example, assume you have the following entries in these fields:
• SLA Frequency (sec): 5
• Detect Multiplier: 30
A failure would be detected in roughly 150 seconds in this example scenario (5 seconds x 30).
h) If you chose any SLA type except TCP, for the Request Data Size (bytes) field, enter the size of the protocol data
in the payload of the request packet of the IP SLA operation, in bytes.
i) For the Type of Service field, enter the type of service (ToS) byte for the IPv4 header of the IP SLA operation.
j) For the Operation Timeout (milliseconds) field, enter the amount of time in milliseconds that the IP SLA operation
waits for a response from its request packet.
k) For the Threshold (milliseconds) field, enter the upper threshold value for calculating network monitoring statistics
created by the IP SLA operation.
l) For the Traffic Class Value field, enter the traffic class byte for the IPv6 header of an IP SLA operation in an IPv6
network.
m) Click Submit.
The IP SLA monitoring policy is configured.
Procedure
What to do next
Repeat the preceding steps to create the required number of track members for the static route to be monitored.
Once all track members are configured, create a track list and add them to it.
Procedure
What to do next
Associate the track list with a static route or next hop IP address.
Note The following task assumes that a next hop configuration already exists for the static route.
Procedure
Step 5 In the Static Routes table, double-click the route entry to which you want to add the track list.
The Static Route dialog appears.
Step 6 In the Track Policy drop-down list, choose or create an IP SLA track list to associate with this static route.
Step 7 Click Submit.
Step 8 The Policy Usage Warning dialog appears.
Step 9 Verify that this change will not impact other nodes or policies using this static route and click Submit Changes.
Associating a Track List with a Next Hop Profile Using the GUI
Use this task to associate a track list with a configured next hop profile in a static route allowing the system
to monitor the next hop performance.
Procedure
Step 5 In the Static Routes table, double-click the route entry to which you want to add the track list.
The Static Route dialog appears.
Step 6 In the Next Hop Addresses table, double-click the next hop entry to which you want to add the track list.
The Next Hop Profile dialog appears.
Step 7 In the Track Policy drop-down list, choose or create an IP SLA track list to associate with this static route.
Note
If you add an IP SLA Policy to the next hop profile, a track member and a track list are automatically created and associated with the profile.
• Viewing Track List and Track Member Status Using the CLI
TCP probe statistics:
• Number of Failed TCP Connect Probes (packets)
• Number of Successful TCP Connect Probes (packets)
• Number of Transmitted TCP Connect Probes (packets)
• TCP Connect Round Trip Time (milliseconds)
Use this task to view statistics for an IP SLA track list or member currently monitoring a static route or next
hop.
Procedure
What to do next
The statistics chosen in this task are labeled in the legend above the graph. Lines representing the selected
probe statistic types should begin to appear on the graph as the counters begin to accumulate.
About HSRP
HSRP is a first-hop redundancy protocol (FHRP) that allows a transparent failover of the first-hop IP router.
HSRP provides first-hop routing redundancy for IP hosts on Ethernet networks configured with a default
router IP address. You use HSRP in a group of routers for selecting an active router and a standby router. In
a group of routers, the active router is the router that routes packets, and the standby router is the router that
takes over when the active router fails or when preset conditions are met.
Many host implementations do not support any dynamic router discovery mechanisms but can be configured
with a default router. Running a dynamic router discovery mechanism on every host is not practical for many
reasons, including administrative overhead, processing overhead, and security issues. HSRP provides failover
services to such hosts.
When you use HSRP, you configure the HSRP virtual IP address as the default router of the host (instead of
the IP address of the actual router). The virtual IP address is an IPv4 or IPv6 address that is shared among a
group of routers that run HSRP.
When you configure HSRP on a network segment, you provide a virtual MAC address and a virtual IP address
for the HSRP group. You configure the same virtual address on each HSRP-enabled interface in the group.
You also configure a unique IP address and MAC address on each interface that acts as the real address. HSRP
selects one of these interfaces to be the active router. The active router receives and routes packets destined
for the virtual MAC address of the group.
HSRP detects when the designated active router fails. At that point, a selected standby router assumes control
of the virtual MAC and IP addresses of the HSRP group. HSRP also selects a new standby router at that time.
HSRP uses a priority designator to determine which HSRP-configured interface becomes the default active
router. To configure an interface as the active router, you assign it with a priority that is higher than the priority
of all the other HSRP-configured interfaces in the group. The default priority is 100, so if you configure just
one interface with a higher priority, that interface becomes the default active router.
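To make the election behavior concrete, the following is a minimal, generic NX-OS-style sketch of one HSRP group, assuming standalone NX-OS interface syntax. This is not the Cisco APIC procedure in this guide (in Cisco ACI, HSRP is configured through the interface profiles and policies described later in this chapter), and the interface, addresses, and group number shown here are hypothetical.
feature hsrp
interface ethernet 1/1
  no switchport
  ! Unique real address for this router's interface
  ip address 192.0.2.2/24
  hsrp version 2
  hsrp 10
    ! Shared virtual IP that hosts use as their default gateway
    ip 192.0.2.1
    ! Higher than the default priority of 100, so this interface is preferred as the active router
    priority 110
    preempt
A peer that is configured with the same group and virtual IP address, but left at the default priority of 100, becomes the standby router and takes over if hello messages from the active router stop arriving within the configured hold time.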
Interfaces that run HSRP send and receive multicast User Datagram Protocol (UDP)-based hello messages
to detect a failure and to designate active and standby routers. When the active router fails to send a hello
message within a configurable period of time, the standby router with the highest priority becomes the active
router. The transition of packet forwarding functions between the active and standby router is completely
transparent to all hosts on the network.
You can configure multiple HSRP groups on an interface. The virtual router does not physically exist but
represents the common default router for interfaces that are configured to provide backup to each other. You
do not need to configure the hosts on the LAN with the IP address of the active router. Instead, you configure
them with the IP address of the virtual router (virtual IP address) as their default router. If the active router
fails to send a hello message within the configurable period of time, the standby router takes over, responds
to the virtual addresses, and becomes the active router, assuming the active router duties. From the host
perspective, the virtual router remains the same.
Note Packets received on a routed port destined for the HSRP virtual IP address terminate on the local router,
regardless of whether that router is the active HSRP router or the standby HSRP router. This process includes
ping and Telnet traffic. Packets received on a Layer 2 (VLAN) interface destined for the HSRP virtual IP
address terminate on the active router.
HSRP Versions
Cisco APIC supports HSRP version 1 by default. You can configure an interface to use HSRP version 2.
HSRP version 2 has the following enhancements to HSRP version 1:
• Expands the group number range. HSRP version 1 supports group numbers from 0 to 255. HSRP version
2 supports group numbers from 0 to 4095.
• Uses the IPv4 multicast address 224.0.0.102 or the IPv6 multicast address FF02::66 to send hello packets, instead of the multicast address 224.0.0.2 that is used by HSRP version 1.
• Uses the MAC address range from 0000.0C9F.F000 to 0000.0C9F.FFFF for IPv4 and 0005.73A0.0000
through 0005.73A0.0FFF for IPv6 addresses. HSRP version 1 uses the MAC address range
0000.0C07.AC00 to 0000.0C07.ACFF.
• Currently, only one IPv4 and one IPv6 group is supported on the same sub-interface in Cisco ACI. Even
when dual stack is configured, Virtual MAC must be the same in IPv4 and IPv6 HSRP configurations.
• BFD IPv4 and IPv6 is supported when the network connecting the HSRP peers is a pure layer 2 network.
You must configure a different router MAC address on the leaf switches. The BFD sessions become
active only if you configure different MAC addresses in the leaf interfaces.
• Users must configure the same MAC address for IPv4 and IPv6 HSRP groups for dual stack configurations.
• HSRP VIP must be in the same subnet as the interface IP.
• It is recommended that you configure interface delay for HSRP configurations.
• HSRP is supported only on routed interfaces and sub-interfaces. HSRP is not supported on VLAN interfaces and switched virtual interfaces (SVIs). Therefore, vPC support for HSRP is not available.
• Object tracking on HSRP is not supported.
• HSRP Management Information Base (MIB) for SNMP is not supported.
• Multiple group optimization (MGO) is not supported with HSRP.
• ICMP IPv4 and IPv6 redirects are not supported.
• Cold Standby and Non-Stop Forwarding (NSF) are not supported because HSRP cannot be restarted in
the Cisco ACI environment.
• There is no extended hold-down timer support as HSRP is supported only on leaf switches. HSRP is not
supported on spine switches.
• HSRP version change is not supported in APIC. You must remove the configuration and reconfigure
with the new version.
• HSRP version 2 does not inter-operate with HSRP version 1. An interface cannot operate both version
1 and version 2 because both versions are mutually exclusive. However, the different versions can be
run on different physical interfaces of the same router.
• Route Segmentation is programmed in Cisco Nexus 93128TX, Cisco Nexus 9396PX, and Cisco Nexus
9396TX leaf switches when HSRP is active on the interface. Therefore, there is no DMAC=router MAC
check conducted for route packets on the interface. This limitation does not apply for Cisco Nexus
93180LC-EX, Cisco Nexus 93180YC-EX, and Cisco Nexus 93108TC-EX leaf switches.
• HSRP configurations are not supported in the Basic GUI mode. The Basic GUI mode has been deprecated
starting with APIC release 3.0(1).
• Fabric to Layer 3 Out traffic will always load balance across all the HSRP leaf switches, irrespective of
their state. If HSRP leaf switches span multiple pods, the fabric to out traffic will always use leaf switches
in the same pod.
• This limitation applies to some of the earlier Cisco Nexus 93128TX, Cisco Nexus 9396PX, and Cisco
Nexus 9396TX switches. When using HSRP, the MAC address for one of the routed interfaces or routed
sub-interfaces must be modified to prevent MAC address flapping on the Layer 2 external device. This
is because Cisco APIC assigns the same MAC address (00:22:BD:F8:19:FF) to every logical interface
under the interface logical profiles.
The default HSRP settings are as follows:
• Version: 1
• Delay: 0
• Reload Delay: 0
• Group ID: 0
• Group Af: IPv4
• Priority: 100
• Preempt Delay: 0
Procedure
Step 1 On the menu bar, click Tenants > Tenant_name. In the Navigation pane, click Networking > L3Outs > L3Out_name
> Logical Node Profiles > Logical Interface Profile.
An HSRP interface profile will be created here.
Step 2 Choose a logical interface profile, and click Create HSRP Interface Profile.
Step 3 In the Create HSRP Interface Profile dialog box, perform the following actions:
a) In the Version field, choose the desired version.
b) In the HSRP Interface Policy field, from the drop-down, choose Create HSRP Interface Policy.
c) In the Create HSRP Interface Policy dialog box, in the Name field, enter a name for the policy.
d) In the Control field, choose the desired control.
e) In the Delay field and the Reload Delay field, set the desired values. Click Submit.
The HSRP interface policy is created and associated with the interface profile.
Step 4 In the Create HSRP Interface Profile dialog box, expand HSRP Interface Groups.
Step 5 In the Create HSRP Group Profile dialog box, perform the following actions:
a) In the Name field, enter an HSRP interface group name.
b) In the Group ID field, choose the appropriate ID.
The values available depend upon whether HSRP version 1 or version 2 was chosen in the interface profile.
c) In the IP field, enter an IP address.
The IP address must be in the same subnet as the interface.
d) In the MAC address field, enter a MAC address.
Note
If you leave this field blank, the HSRP virtual MAC address is automatically computed based on the group ID (see the example that follows this procedure).
This can be used to enable HSRP on each sub-interface with secondary virtual IPs. The IP address that you provide
here also must be in the subnet of the interface.
g) Click OK.
Step 7 In the Create HSRP Interface Profile dialog box, click Submit.
This completes the HSRP configuration.
Step 8 To verify the HSRP interface and group policies created, in the Navigation pane, click Networking > Protocol Policies >
HSRP.
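As a point of reference for the automatically computed virtual MAC address mentioned in Step 5d, the computation follows the version-specific MAC ranges listed in the HSRP Versions section. For example (group IDs chosen only for illustration), an HSRP version 1 group with group ID 10 maps to the virtual MAC address 0000.0C07.AC0A (the version 1 base 0000.0C07.AC00 plus the group ID, 0x0A, in hexadecimal), and an HSRP version 2 IPv4 group with the same group ID maps to 0000.0C9F.F00A.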
All tenant WAN connections use a single session on the spine switches where the WAN routers are connected.
This aggregation of tenant BGP sessions towards the Data Center Interconnect Gateway (DCIG) improves
control plane scale by reducing the number of tenant BGP sessions and the amount of configuration required
for all of them. The network is extended out using Layer 3 subinterfaces configured on spine fabric ports.
Transit routing with shared services using GOLF is not supported.
A Layer 3 external outside network (L3extOut) for GOLF physical connectivity for a spine switch is specified
under the infra tenant, and includes the following:
• LNodeP (l3extInstP is not required within the L3Out in the infra tenant. )
• A provider label for the L3extOut for GOLF in the infra tenant.
• OSPF protocol policies
• BGP protocol policies
All regular tenants use the above-defined physical connectivity. The L3extOut defined in regular tenants
requires the following:
• An l3extInstP (EPG) with subnets and contracts. The scope of the subnet is used to control import/export
route control and security policies. The bridge domain subnet must be set to advertise externally and it
must be in the same VRF as the application EPG and the GOLF L3Out EPG.
• Communication between the application EPG and the GOLF L3Out EPG is governed by explicit contracts
(not Contract Preferred Groups).
• An l3extConsLbl consumer label that must be matched with the same provider label of an L3Out for
GOLF in the infra tenant. Label matching enables application EPGs in other tenants to consume the
LNodeP external L3Out EPG.
• The BGP EVPN session in the matching provider L3extOut in the infra tenant advertises the tenant
routes defined in this L3Out.
• The default bgpPeerPfxPol policy restricts routes to 20,000. For Cisco ACI WAN Interconnect peers,
increase this as needed.
• Consider a deployment scenario in which there are two L3extOuts on one spine switch, where one has the provider label prov1 and peers with DCI 1, and the second L3extOut peers with DCI 2 with the provider label prov2. If the tenant VRF instance has a consumer label pointing to either one of the provider labels (prov1 or prov2), the tenant route is sent out through both DCI 1 and DCI 2.
• When aggregating GOLF OpFlex VRF instances, the leaking of routes cannot occur in the Cisco ACI
fabric or on the GOLF device between the GOLF OpFlex VRF instance and any other VRF instance in
the system. An external device (not the GOLF router) must be used for the VRF leaking.
Note Cisco ACI does not support IP fragmentation. Therefore, when you configure Layer 3 Outside (L3Out)
connections to external routers, or Multi-Pod connections through an Inter-Pod Network (IPN), it is
recommended that the interface MTU is set appropriately on both ends of a link. On some platforms, such as
Cisco ACI, Cisco NX-OS, and Cisco IOS, the configurable MTU value does not take into account the Ethernet
headers (matching IP MTU, and excluding the 14-18 Ethernet header size), while other platforms, such as
IOS-XR, include the Ethernet header in the configured MTU value. A configured value of 9000 results in a
max IP packet size of 9000 bytes in Cisco ACI, Cisco NX-OS, and Cisco IOS, but results in a max IP packet
size of 8986 bytes for an IOS-XR untagged interface.
For the appropriate MTU values for each platform, see the relevant configuration guides.
We highly recommend that you test the MTU using CLI-based commands. For example, on the Cisco NX-OS
CLI, use a command such as ping 1.1.1.1 df-bit packet-size 9000 source-interface ethernet 1/1.
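As a hedged verification sketch that builds on that command (the address and interface are the same placeholders used above, not values from your fabric), running the same df-bit ping at both the ACI-side IP MTU and at that value minus the 14-byte Ethernet header can indicate which side is counting the header in its configured MTU:
ping 1.1.1.1 df-bit packet-size 9000 source-interface ethernet 1/1
ping 1.1.1.1 df-bit packet-size 8986 source-interface ethernet 1/1
If only the 8986-byte ping succeeds, the far-end platform (for example, an IOS-XR untagged interface configured with an MTU of 9000) is likely including the Ethernet header in its configured MTU, as described above.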
Route Target Configuration between the Spine Switches and the DCI
There are two ways to configure EVPN route targets (RTs) for the GOLF VRFs: Manual RT and Auto RT.
The route target is synchronized between ACI spines and DCIs through OpFlex. Auto RT for GOLF VRFs
has the Fabric ID embedded in the following format: ASN: [FabricID] VNID.
If two sites have VRFs deployed as in the following diagram, traffic between the VRFs can be mixed.
(Figure: VRFs deployed at Site 1 and Site 2)
Procedure
Step 1 On the menu bar, click Tenants, then click infra to select the infra tenant.
Step 2 In the Navigation pane, expand the Networking option and perform the following actions:
a) Right-click L3Outs and click Create L3Out to open the Create L3Out wizard.
b) Enter the necessary information in the Name, VRF and L3 Domain fields.
c) In the Use For: field, select Golf.
The Provider Label and Route Target fields appear.
d) In the Provider Label field, enter a provider label (for example, golf).
e) In the Route Target field, choose whether to use automatic or explicit policy-governed BGP route target filtering
policy:
• Automatic - Implements automatic BGP route-target filtering on VRFs associated with this routed outside
configuration.
• Explicit - Implements route-target filtering through use of explicitly configured BGP route-target policies on
VRFs associated with this routed outside configuration.
Note
Explicit route target policies are configured in the BGP Route Target Profiles table on the BGP Page of the
Create VRF Wizard. If you select the Automatic option in the Route Target field, configuring explicit route
target policies in the Create VRF Wizard might cause BGP routing disruptions.
f) Leave the remaining fields as-is (BGP selected, and so on), and click Next.
The Nodes and Interfaces window appears.
Step 3 Enter the necessary information in the Nodes and Interfaces window of the Create L3Out wizard.
a) In the Node ID drop-down list, choose a spine switch node ID.
b) In the Router ID field, enter the router ID.
c) (Optional) You can configure another IP address for a loopback address, if necessary.
The Loopback Address field is automatically populated with the same entry that you provide in the Router ID field.
This is the equivalent of the Use Router ID for Loopback Address option in previous builds. Enter a different IP
address for a loopback address if you don't want to use the router ID for the loopback address. Leave this field empty if
you do not want to use the router ID for the loopback address.
d) Leave the External Control Peering field checked.
e) Enter necessary additional information in the Nodes and Interfaces window.
The fields that are shown in this window vary, depending on the options that you select in the Layer 3 and Layer 2
areas.
f) When you have entered the remaining additional information in the Nodes and Interfaces window, click Next.
The Protocols window appears.
Step 4 Enter the necessary information in the Protocols window of the Create L3Out wizard.
a) In the BGP Loopback Policies and BGP Interface Policies areas, enter the following information:
• Peer Address: Enter the peer IP address
• EBGP Multihop TTL: Enter the connection Time To Live (TTL). The range is 1–255 hops; if zero, no TTL is
specified. The default is zero.
• Remote ASN: Enter a number that uniquely identifies the neighbor autonomous system. The autonomous
system number can be a 4-byte value in plain format, in the range 1–4294967295.
Note
ACI does not support asdot or asdot+ format autonomous system numbers.
b) In the OSPF area, choose the default OSPF policy, a previously created OSPF policy, or Create OSPF Interface
Policy.
c) Click Next.
The External EPG window appears.
Step 5 Enter the necessary information in the External EPG window of the Create L3Out wizard.
a) In the Name field, enter a name for the external network.
b) In the Provided Contract field, enter the name of a provided contract.
c) In the Consumed Contract field, enter the name of a consumed contract.
d) Uncheck the Allow All Subnet check box if you don't want to advertise all the transit routes out of this L3Out connection.
The Subnets area appears if you uncheck this box. Specify the desired subnets and controls as described in the
following steps.
e) Click Finish to complete the necessary configurations in the Create L3Out wizard.
Step 6 In the Navigation pane for any tenant, expand the tenant_name > Networking > L3Outs and perform the following
actions:
a) Right-click L3Outs and click Create L3Out to open the wizard.
b) Enter the necessary information in the Name, VRF and L3 Domain fields.
Distributing BGP EVPN Type-2 Host Routes to a DCIG Using the GUI
Enable distributing BGP EVPN type-2 host routes with the following steps:
Procedure
SUMMARY STEPS
1. Verify that HostLeak object is enabled under the VRF-AF in question, by entering a command such as
the following in the spine-switch CLI:
2. Verify that the config-MO has been successfully processed by BGP, by entering a command such as the
following in the spine-switch CLI:
3. Verify that the public BD-subnet has been advertised to DCIG as an EVPN type-5 route:
4. Verify whether the host route advertised to the EVPN peer was an EVPN type-2 MAC-IP route:
5. Verify that the EVPN peer (a DCIG) received the correct type-2 MAC-IP route and the host route was
successfully imported into the given VRF, by entering a command such as the following on the DCIG
device (assuming that the DCIG is a Cisco ASR 9000 switch in the example below):
DETAILED STEPS
Procedure
Step 1 Verify that HostLeak object is enabled under the VRF-AF in question, by entering a command such as the following in
the spine-switch CLI:
Example:
spine1# ls /mit/sys/bgp/inst/dom-apple/af-ipv4-ucast/
ctrl-l2vpn-evpn ctrl-vpnv4-ucast hostleak summary
Step 2 Verify that the config-MO has been successfully processed by BGP, by entering a command such as the following in the
spine-switch CLI:
Example:
spine1# show bgp process vrf apple
Look for output similar to the following:
Information for address family IPv4 Unicast in VRF apple
Table Id : 0
Table state : UP
Table refcount : 3
Peers Active-peers Routes Paths Networks Aggregates
0 0 0 0 0 0
Redistribution
None
Step 3 Verify that the public BD-subnet has been advertised to DCIG as an EVPN type-5 route:
Example:
spine1# show bgp l2vpn evpn 10.6.0.0 vrf overlay-1
Route Distinguisher: 192.41.1.5:4123 (L3VNI 2097154)
BGP routing table entry for [5]:[0]:[0]:[16]:[10.6.0.0]:[0.0.0.0]/224, version 2088
Paths: (1 available, best #1)
Flags: (0x000002 00000000) on xmit-list, is not in rib/evpn
Multipath: eBGP iBGP
Advertised path-id 1
Path type: local 0x4000008c 0x0 ref 1, path is valid, is best path
AS-Path: NONE, path locally originated
192.41.1.1 (metric 0) from 0.0.0.0 (192.41.1.5)
Origin IGP, MED not set, localpref 100, weight 32768
Received label 2097154
Community: 1234:444
Extcommunity:
RT:1234:5101
4BYTEAS-GENERIC:T:1234:444
In the Path type entry, ref 1 indicates that one route was sent.
Step 4 Verify whether the host route advertised to the EVPN peer was an EVPN type-2 MAC-IP route:
Example:
spine1# show bgp l2vpn evpn 10.6.41.1 vrf overlay-1
Route Distinguisher: 10.10.41.2:100 (L2VNI 100)
BGP routing table entry for [2]:[0]:[2097154]:[48]:[0200.0000.0002]:[32]:[10.6.41.1]/272, version 1146
Shared RD: 192.41.1.5:4123 (L3VNI 2097154)
Paths: (1 available, best #1)
Flags: (0x00010a 00000000) on xmit-list, is not in rib/evpn
Multipath: eBGP iBGP
Advertised path-id 1
Path type: local 0x4000008c 0x0 ref 0, path is valid, is best path
AS-Path: NONE, path locally originated
EVPN network: [5]:[0]:[0]:[16]:[10.6.0.0]:[0.0.0.0] (VRF apple)
10.10.41.2 (metric 0) from 0.0.0.0 (192.41.1.5)
Origin IGP, MED not set, localpref 100, weight 32768
Received label 2097154 2097154
Extcommunity:
RT:1234:16777216
The Shared RD line indicates the RD/VNI shared by the EVPN type-2 route and the BD subnet.
The EVPN Network line shows the EVPN type-5 route of the BD-Subnet.
The Path-id advertised to peers indicates the path advertised to EVPN peers.
Step 5 Verify that the EVPN peer (a DCIG) received the correct type-2 MAC-IP route and the host route was successfully
imported into the given VRF, by entering a command such as the following on the DCIG device (assuming that the DCIG
is a Cisco ASR 9000 switch in the example below):
Example:
RP/0/RSP0/CPU0:asr9k#show bgp vrf apple-2887482362-8-1 10.6.41.1
Tue Sep 6 23:38:50.034 UTC
BGP routing table entry for 10.6.41.1/32, Route Distinguisher: 44.55.66.77:51
Versions:
Process bRIB/RIB SendTblVer
Speaker 2088 2088
Last Modified: Feb 21 08:30:36.850 for 28w2d
Paths: (1 available, best #1)
Not advertised to any peer
Path #1: Received by speaker 0
Not advertised to any peer
Local
192.41.1.1 (metric 42) from 10.10.41.1 (192.41.1.5)
Received Label 2097154
Origin IGP, localpref 100, valid, internal, best, group-best, import-candidate, imported
Received Path ID 0, Local Path ID 1, version 2088
Community: 1234:444
Extended community: 0x0204:1234:444 Encapsulation Type:8 Router
MAC:0200.c029.0101 RT:1234:5101
RIB RNH: table_id 0xe0000190, Encap 8, VNI 2097154, MAC Address: 0200.c029.0101,
IP Address: 192.41.1.1, IP table_id 0x00000000
Source AFI: L2VPN EVPN, Source VRF: default,
Source Route Distinguisher: 192.41.1.5:4123
In this output, the received RD, next hop, and attributes are the same for the type-2 route and the BD subnet.
Traffic gets dropped if you have IPv6 endpoints and the corresponding bridge domain is deployed on a border leaf switch: If the border leaf switch has high utilization of Layer 3 next hops, then the border leaf switch might drop traffic if you have bridge domains with high numbers of IPv6 endpoints deployed on the same border leaf switch.
All leaf switches must be configured with unique IP addresses: The same SVI IPv4 or IPv6 primary or preferred address can be configured across multiple leaf switches within the same L3Out within the management tenant. This can cause network disruptions. To avoid this, ensure that each leaf switch is configured with a unique SVI primary or preferred IP address.
Issue where a border leaf switch in a vPC pair forwards a BGP packet with an incorrect VNID to an on-peer learned endpoint: If the following conditions exist in your configuration:
• Two leaf switches are part of a vPC pair
• For the two leaf switches connected behind the L3Out, the destination endpoint is connected to the second (peer) border leaf switch, and the endpoint is on-peer learned on that leaf switch
If the endpoint is on-peer learned on the ingress leaf switch that receives a BGP packet that is destined to the on-peer learned endpoint, an issue might arise where the transit BGP connection fails to establish between the first layer 3 switch behind the L3Out and the on-peer learned endpoint on the second leaf switch in the vPC pair. This might happen in this situation because the transit BGP packet with port 179 is forwarded incorrectly using the bridge domain VNID instead of the VRF VNID.
To resolve this issue, move the endpoint to any other non-peer leaf switch in the fabric so that it is not learned on the leaf switch.
Border leaf switches and GIR (maintenance) mode: If a border leaf switch has a static route and is placed in Graceful Insertion and Removal (GIR) mode, or maintenance mode, the route from the border leaf switch might not be removed from the routing table of switches in the ACI fabric, which causes routing issues.
To work around this issue, either:
• Configure the same static route with the same administrative distance on the other border leaf switch, or
• Use IP SLA or BFD to track reachability to the next hop of the static route
L3Out aggregate stats do not support egress drop counters: When accessing the Select Stats window through Tenants > tenant_name > Networking > L3Outs > L3Out_name > Stats, you will see that L3Out aggregate stats do not support egress drop counters. This is because there is currently no hardware table in the ASICs that records egress drops from the EPG VLAN, so stats do not populate these counters. There are only ingress drops for the EPG VLAN.
Updates through CLI: For Layer 3 external networks created through the API or GUI and updated through the CLI, protocols need to be enabled globally on the external network through the API or GUI, and the node profile for all the participating nodes needs to be added through the API or GUI before doing any further updates through the CLI.
Loopbacks for Layer 3 networks on the same node: When configuring two Layer 3 external networks on the same node, the loopbacks need to be configured separately for both Layer 3 networks.
Ingress-based policy enforcement: Starting with Cisco APIC release 1.2(1), ingress-based policy enforcement enables defining policy enforcement for Layer 3 Outside (L3Out) traffic for both egress and ingress directions. The default is ingress. During an upgrade to release 1.2(1) or higher, existing L3Out configurations are set to egress so that the behavior is consistent with the existing configuration. You do not need any special upgrade sequence. After the upgrade, you change the global property value to ingress. When it has been changed, the system reprograms the rules and prefix entries. Rules are removed from the egress leaf and installed on the ingress leaf, if not already present. If not already configured, an Actrl prefix entry is installed on the ingress leaf.
Direct server return (DSR) and attribute EPGs require ingress-based policy enforcement. vzAny and taboo contracts ignore ingress-based policy enforcement. Transit rules are applied at ingress.
Bridge Domains with L3Outs: A bridge domain in a tenant can contain a public subnet that is advertised through an l3extOut provisioned in the common tenant.
Bridge domain route advertisement for OSPF and EIGRP: When both OSPF and EIGRP are enabled on the same VRF on a node and the bridge domain subnets are advertised out of one of the L3Outs, they will also get advertised out of the protocol enabled on the other L3Out.
For OSPF and EIGRP, the bridge domain route advertisement is per VRF and not per L3Out. The same behavior is expected when multiple OSPF L3Outs (for multiple areas) are enabled on the same VRF and node. In this case, the bridge domain route will be advertised out of all the areas, if it is enabled on one of them.
BGP Maximum Prefix Limit: Starting with Cisco APIC release 1.2(1x), tenant policies for BGP l3extOut connections can be configured with a maximum prefix limit that enables monitoring and restricting the number of route prefixes received from a peer. Once the maximum prefix limit has been exceeded, a log entry is recorded, and further prefixes are rejected. The connection can be restarted if the count drops below the threshold in a fixed interval, or the connection is shut down. Only one option can be used at a time. The default setting is a limit of 20,000 prefixes, after which new prefixes are rejected. When the reject option is deployed, BGP accepts one more prefix beyond the configured limit before the APIC raises a fault.
MTU:
• Cisco ACI does not support IP fragmentation. Therefore, when you configure Layer 3 Outside (L3Out) connections to external routers, or Multi-Pod connections through an Inter-Pod Network (IPN), it is recommended that the interface MTU is set appropriately on both ends of a link. On some platforms, such as Cisco ACI, Cisco NX-OS, and Cisco IOS, the configurable MTU value does not take into account the Ethernet headers (matching IP MTU, and excluding the 14-18 Ethernet header size), while other platforms, such as IOS-XR, include the Ethernet header in the configured MTU value. A configured value of 9000 results in a max IP packet size of 9000 bytes in Cisco ACI, Cisco NX-OS, and Cisco IOS, but results in a max IP packet size of 8986 bytes for an IOS-XR untagged interface.
• The MTU settings for the Cisco ACI physical interfaces vary:
• For sub-interfaces, the physical interface MTU is fixed and is set to 9216 for the front panel ports on the leaf switches.
• For SVI, the physical interface MTU is set based on the fabric MTU policy. For example, if the fabric MTU policy is set to 9000, then the physical interface for the SVI is set to 9000.
QoS for L3Outs: To configure QoS policies for an L3Out and enable the policies to be enforced on the BL switch where the L3Out is located, use the following guidelines:
• The VRF Policy Control Enforcement Direction must be set to Egress.
• The VRF Policy Control Enforcement Preference must be set to Enabled.
• When configuring the contract that controls communication between the EPGs using the L3Out, include the QoS class or Target DSCP in the contract or subject of the contract.
ICMP settings: ICMP redirect and ICMP unreachable are disabled by default in Cisco ACI to protect the switch CPU from generating these packets.
Procedure
Procedure
What to do next
To specify the interval used for tracking IP addresses on endpoints, create an Endpoint Retention policy.
Configuring a Static Route on a Bridge Domain Using the NX-OS Style CLI
Configuring a Static Route on a Bridge Domain Using the NX-OS Style CLI
To configure a static route in a pervasive bridge domain (BD), use the following NX-OS style CLI commands:
SUMMARY STEPS
1. configure
2. tenant tenant-name
3. application ap-name
4. epg epg-name
5. endpoint ip A.B.C.D/LEN next-hop A.B.C.D [scope scope]
DETAILED STEPS
Procedure
Step 5 endpoint ip A.B.C.D/LEN next-hop A.B.C.D [scope scope]
Creates an endpoint behind the EPG. The subnet mask must be /32 (/128 for IPv6), pointing to one IP address or one endpoint.
Example:
apic1(config-tenant-app-epg)# endpoint ip 125.12.1.1/32 next-hop 26.0.14.101
Example
The following example shows the commands to configure an endpoint behind an EPG.
apic1# config
apic1(config)# tenant t1
apic1(config-tenant)# application ap1
apic1(config-tenant-app)# epg ep1
apic1(config-tenant-app-epg)# endpoint ip 125.12.1.1/32 next-hop 26.0.14.101
Configuring Dataplane IP Learning per VRF Using the NX-OS Style CLI
Configuring Dataplane IP Learning Using the NX-OS-Style CLI
This section explains how to disable dataplane IP learning using the NX-OS-style CLI.
To disable dataplane IP learning for a specific VRF:
Procedure
Procedure
Step 1 Configure an IPv6 neighbor discovery interface policy and assign it to a bridge domain:
a) Create an IPv6 neighbor discovery interface policy:
Example:
Step 2 Configure an IPV6 bridge domain subnet and neighbor discovery prefix policy on the subnet:
Example:
Configuring an IPv6 Neighbor Discovery Interface Policy with RA on a Layer 3 Interface Using the
NX-OS Style CLI
This example configures an IPv6 neighbor discovery interface policy, and assigns it to a Layer 3 interface.
Next, it configures an IPv6 Layer 3 Out interface, neighbor discovery prefix policy, and associates the neighbor
discovery policy to the interface.
Procedure
Step 2 tenant tenant_name Creates a tenant and enters the tenant mode.
Example:
Step 4 ipv6 nd mtu mtu-value Assigns an MTU value to the IPv6 ND policy.
Example:
apic1(config-tenant-template-ipv6-nd)# ipv6 nd mtu 1500
apic1(config-tenant-template-ipv6)# exit
apic1(config-tenant-template)# exit
apic1(config-tenant)#
Step 7 vrf member VRF_name Associates the VRF with the Layer 3 Out.
Example:
Step 8 external-l3 epg instp l3out l3extOut001 Assigns the Layer 3 Out and the VRF to a Layer 3
interface.
Example:
Step 10 vrf context tenant ExampleCorp vrf pvn1 l3out l3extOut001 Associates the VRF to the leaf switch.
Example:
apic1(config-leaf-vrf)# exit
Step 12 vrf member tenant ExampleCorp vrf pvn1 l3out l3extOut001 Specifies the associated tenant, VRF, and Layer 3 Out in the interface.
Example:
Step 13 ipv6 address 2001:20:21:22::2/64 preferred Specifies the primary or preferred IPv6 address.
Example:
Step 14 ipv6 nd prefix 2001:20:21:22::2/64 1000 1000 Configures the IPv6 ND prefix policy under the Layer 3
interface.
Example:
Step 15 inherit ipv6 nd NDPol001 Configures the ND policy under the Layer 3 interface.
Example:
SUMMARY STEPS
1. configure
2. tenant tenant-name
3. application app-profile-name
4. epg epg-name
5. [no] endpoint {ip | ipv6} ip-address epnlb mode mode-uc mac mac-address
DETAILED STEPS
Procedure
Step 2 tenant tenant-name Creates a tenant if it does not exist or enters tenant
configuration mode.
Example:
apic1 (config)# tenant tenant1
Step 5 [no] endpoint {ip | ipv6} ip-address epnlb mode mode-uc mac mac-address
Configures Microsoft NLB in unicast mode, where:
• ip-address is the Microsoft NLB cluster VIP.
• mac-address is the Microsoft NLB cluster MAC address.
Example:
apic1(config-tenant-app-epg)# endpoint ip 192.0.2.2/32 epnlb mode mode-uc mac 03:BF:01:02:03:04
Configuring Microsoft NLB in Multicast Mode Using the NX-OS Style CLI
This task configures Microsoft NLB to flood only on certain ports in the bridge domain.
SUMMARY STEPS
1. configure
2. tenant tenant-name
3. application app-profile-name
4. epg epg-name
5. [no] endpoint {ip | ipv6} ip-address epnlb mode mode-mcast--static mac mac-address
6. [no] nlb static-group mac-address leaf leaf-num interface {ethernet slot/port | port-channel
port-channel-name} vlan portEncapVlan
DETAILED STEPS
Procedure
Step 2 tenant tenant-name Creates a tenant if it does not exist or enters tenant
configuration mode.
Example:
apic1 (config)# tenant tenant1
Step 4 epg epg-name Creates an EPG if it does not exist or enters EPG
configuration mode.
Example:
apic1 (config-tenant-app)# epg epg1
Step 5 [no] endpoint {ip | ipv6} ip-address epnlb mode mode-mcast--static mac mac-address
Configures Microsoft NLB in static multicast mode, where:
• ip-address is the Microsoft NLB cluster VIP.
• mac-address is the Microsoft NLB cluster MAC address.
Example:
apic1(config-tenant-app-epg)# endpoint ip 192.0.2.2/32 epnlb mode mode-mcast--static mac 03:BF:01:02:03:04
Step 6 [no] nlb static-group mac-address leaf leaf-num interface {ethernet slot/port | port-channel port-channel-name} vlan portEncapVlan
Adds the Microsoft NLB multicast VMAC to the EPG ports where the Microsoft NLB servers are connected, where:
• mac-address is the Microsoft NLB cluster MAC address that you entered in Step 5, on page 453.
Example:
Configuring Microsoft NLB in IGMP Mode Using the NX-OS Style CLI
This task configures Microsoft NLB to flood only on certain ports in the bridge domain.
SUMMARY STEPS
1. configure
2. tenant tenant-name
3. application app-profile-name
4. epg epg-name
5. [no] endpoint {ip | ipv6} ip-address epnlb mode mode-mcast-igmp group multicast-IP-address
DETAILED STEPS
Procedure
Step 2 tenant tenant-name Creates a tenant if it does not exist or enters tenant
configuration mode.
Example:
apic1 (config)# tenant tenant1
Step 5 [no] endpoint {ip | ipv6} ip-address epnlb mode mode-mcast-igmp group multicast-IP-address
Configures Microsoft NLB in IGMP mode, where:
• ip-address is the Microsoft NLB cluster VIP.
• multicast-IP-address is the multicast IP address for the NLB endpoint group.
Example:
apic1(config-tenant-app-epg)# endpoint ip 192.0.2.2/32 epnlb mode mode-mcast-igmp group 1.3.5.7
Procedure
Step 2 Modify the snooping policy as necessary. The example NX-OS style CLI sequence:
• Specifies a custom value for the query-interval value in the IGMP Snooping policy named cookieCut1.
• Confirms the modified IGMP Snooping value for the policy cookieCut1.
Example:
apic1(config-tenant-template-ip-igmp-snooping)# ip igmp snooping query-interval 300
apic1(config-tenant-template-ip-igmp-snooping)# show run all
Step 3 Modify the snooping policy as necessary. The example NX-OS style CLI sequence:
• Specifies a custom value for the query version of the IGMP Snooping policy.
• Confirms the modified IGMP Snooping version for the policy.
Example:
apic1(config-tenant-template-ip-igmp-snooping)# ip igmp snooping ?
 <CR>
 fast-leave                   Enable IP IGMP Snooping fast leave processing
 last-member-query-interval   Change the IP IGMP snooping last member query interval param
 querier                      Enable IP IGMP Snooping querier processing
 query-interval               Change the IP IGMP snooping query interval param
 query-max-response-time      Change the IP IGMP snooping max query response time
 startup-query-count          Change the IP IGMP snooping number of initial queries to send
 startup-query-interval       Change the IP IGMP snooping time for sending initial queries
 version                      Change the IP IGMP snooping version param
apic1(config-tenant-template-ip-igmp-snooping)# ip igmp snooping version ?
Step 4 Assign the policy to a bridge domain. The example NX-OS style CLI sequence:
• Navigates to bridge domain bd3.
• Assigns to the bridge domain the IGMP Snooping policy cookieCut1, with its modified query-interval value.
Example:
apic1(config-tenant)# int bridge-domain bd3
apic1(config-tenant-interface)# ip igmp snooping policy cookieCut1
What to do next
You can assign the IGMP Snooping policy to multiple bridge domains.
Enabling IGMP Snooping and Multicast on Static Ports in the NX-OS Style CLI
You can enable IGMP snooping and multicast on ports that have been statically assigned to an EPG. Then
you can create and assign access groups of users that are permitted or denied access to the IGMP snooping
and multicast traffic enabled on those ports.
The steps described in this task assume the pre-configuration of the following entities:
• Tenant: tenant_A
• Application: application_A
• EPG: epg_A
• Bridge Domain: bridge_domain_A
• vrf: vrf_A -- a member of bridge_domain_A
• VLAN Domain: vd_A (configured with a range of 300-310)
Note For details on static port assignment, see Deploying an EPG on a Specific Port
with APIC Using the NX-OS Style CLI in the Cisco APIC Layer 2 Networking
Configuration Guide.
• Identify the IP addresses that you want to be recipients of IGMP snooping multicast traffic.
Procedure
apic1# conf t
apic1(config)# tenant tenant_A; application
application_A; epg epg_A
apic1(config-tenant-app-epg)# ip igmp snooping
static-group 227.1.1.1 leaf 101 interface ethernet
1/11 vlan 309
apic1(config-tenant-app-epg)# exit
apic1(config-tenant-app)# exit
Enabling Group Access to IGMP Snooping and Multicast using the NX-OS Style CLI
After you have enabled IGMP snooping and multicast on ports that have been statically assigned to an EPG,
you can then create and assign access groups of users that are permitted or denied access to the IGMP snooping
and multicast traffic enabled on those ports.
The steps described in this task assume the pre-configuration of the following entities:
• Tenant: tenant_A
• Application: application_A
• EPG: epg_A
• Bridge Domain: bridge_domain_A
• vrf: vrf_A -- a member of bridge_domain_A
• VLAN Domain: vd_A (configured with a range of 300-310)
• Leaf switch: 101 and interface 1/10
The target interface 1/10 on switch 101 is associated with VLAN 305 and statically linked with tenant_A,
application_A, epg_A
• Leaf switch: 101 and interface 1/11
The target interface 1/11 on switch 101 is associated with VLAN 309 and statically linked with tenant_A,
application_A, epg_A
Note For details on static port assignment, see Deploying an EPG on a Specific Port with APIC Using the NX-OS
Style CLI in the Cisco APIC Layer 2 Networking Configuration Guide.
Procedure
Step 3 Specify the access group connection path. The example sequences configure:
• Route-map access group fooBroker, connected through leaf switch 101, interface 1/10, and VLAN 305.
• Route-map access group newBroker, connected through leaf switch 101, interface 1/10, and VLAN 305.
Example:
apic1(config-tenant)# application application_A
apic1(config-tenant-app)# epg epg_A
apic1(config-tenant-app-epg)# ip igmp snooping access-group route-map fooBroker leaf 101 interface ethernet 1/10 vlan 305
apic1(config-tenant-app-epg)# ip igmp snooping access-group route-map newBroker leaf 101 interface ethernet 1/10 vlan 305
• Create the bridge domain for the tenant, where you will attach the MLD Snooping policy.
SUMMARY STEPS
1. configure terminal
2. tenant tenant-name
3. template ipv6 mld snooping policy policy-name
4. [no] ipv6 mld snooping
5. [no] ipv6 mld snooping fast-leave
6. [no] ipv6 mld snooping querier
7. ipv6 mld snooping last-member-query-interval parameter
8. ipv6 mld snooping query-interval parameter
9. ipv6 mld snooping query-max-response-time parameter
10. ipv6 mld snooping startup-query-count parameter
11. ipv6 mld snooping startup-query-interval parameter
12. exit
13. interface bridge-domain bridge-domain-name
14. ipv6 address sub-bits/prefix-length snooping-querier
15. ipv6 mld snooping policy policy-name
16. exit
DETAILED STEPS
Procedure
Step 3 template ipv6 mld snooping policy policy-name
Creates an MLD snooping policy. The example NX-OS style CLI sequence creates an MLD snooping policy named mldPolicy1.
Example:
apic1(config-tenant)# template ipv6 mld snooping policy mldPolicy1
apic1(config-tenant-template-ip-mld-snooping)#
Step 4 [no] ipv6 mld snooping
Enables or disables the admin state of the MLD snooping policy. The default state is disabled.
Example:
apic1(config-tenant-template-ip-mld-snooping)# ipv6 mld snooping
apic1(config-tenant-template-ip-mld-snooping)# no ipv6 mld snooping
Step 5 [no] ipv6 mld snooping fast-leave
Enables or disables IPv6 MLD snooping fast-leave processing.
Example:
apic1(config-tenant-template-ip-mld-snooping)# ipv6 mld snooping fast-leave
apic1(config-tenant-template-ip-mld-snooping)# no ipv6 mld snooping fast-leave
Step 6 [no] ipv6 mld snooping querier
Enables or disables IPv6 MLD snooping querier processing. For the querier option to be effectively enabled on the assigned policy, you must also enable the querier option in the subnets assigned to the bridge domains to which the policy is applied, as described in Step 14, on page 463.
Example:
apic1(config-tenant-template-ip-mld-snooping)# ipv6 mld snooping querier
apic1(config-tenant-template-ip-mld-snooping)# no ipv6 mld snooping querier
Step 7 ipv6 mld snooping last-member-query-interval parameter
Changes the IPv6 MLD snooping last member query interval parameter. The example NX-OS style CLI sequence changes the IPv6 MLD snooping last member query interval parameter to 25 seconds. Valid options are 1-25. The default is 1 second.
Example:
apic1(config-tenant-template-ip-mld-snooping)# ipv6 mld snooping last-member-query-interval 25
Step 8 ipv6 mld snooping query-interval parameter
Changes the IPv6 MLD snooping query interval parameter. The example NX-OS style CLI sequence changes the IPv6 MLD snooping query interval parameter to 300 seconds. Valid options are 1-18000. The default is 125 seconds.
Example:
apic1(config-tenant-template-ip-mld-snooping)# ipv6 mld snooping query-interval 300
Step 9 ipv6 mld snooping query-max-response-time parameter
Changes the IPv6 MLD snooping max query response time. The example NX-OS style CLI sequence changes the IPv6 MLD snooping max query response time to 25 seconds. Valid options are 1-25. The default is 10 seconds.
Example:
apic1(config-tenant-template-ip-mld-snooping)# ipv6 mld snooping query-max-response-time 25
Step 10 ipv6 mld snooping startup-query-count parameter
Changes the IPv6 MLD snooping number of initial queries to send. The example NX-OS style CLI sequence changes the IPv6 MLD snooping number of initial queries to send to 10. Valid options are 1-10. The default is 2.
Example:
apic1(config-tenant-template-ip-mld-snooping)# ipv6 mld snooping startup-query-count 10
Step 11 ipv6 mld snooping startup-query-interval parameter
Changes the IPv6 MLD snooping time for sending initial queries. The example NX-OS style CLI sequence changes the IPv6 MLD snooping time for sending initial queries.
Step 12 exit Returns to the tenant configuration mode.
Example:
apic1(config-tenant-template-ip-mld-snooping)# exit
apic1(config-tenant)#
Step 13 interface bridge-domain bridge-domain-name
Configures the interface bridge-domain. The example NX-OS style CLI sequence configures the interface bridge-domain named bd1.
Example:
apic1(config-tenant)# interface bridge-domain bd1
apic1(config-tenant-interface)#
Step 14 ipv6 address sub-bits/prefix-length snooping-querier
Configures the bridge domain as switch-querier. This will enable the querier option in the subnet assigned to the bridge domain where the policy is applied.
Example:
apic1(config-tenant-interface)# ipv6 address 2000::5/64 snooping-querier
Step 15 ipv6 mld snooping policy policy-name
Associates the bridge domain with an MLD snooping policy. The example NX-OS style CLI sequence associates the bridge domain with an MLD snooping policy named mldPolicy1.
Example:
apic1(config-tenant-interface)# ipv6 mld snooping policy mldPolicy1
apic1(config-tenant-interface)# exit
apic1(config-tenant)#
Procedure
Step 2 Enter the configure mode for a tenant, the configure mode for the VRF, and configure PIM options.
Example:
apic1(config)# tenant tenant1
apic1(config-tenant)# vrf context tenant1_vrf
apic1(config-tenant-vrf)# ip pim
apic1(config-tenant-vrf)# ip pim fast-convergence
apic1(config-tenant-vrf)# ip pim bsr forward
Step 3 Configure IGMP and the desired IGMP options for the VRF.
Example:
apic1(config-tenant-vrf)# ip igmp
apic1(config-tenant-vrf)# exit
apic1(config-tenant)# interface bridge-domain tenant1_bd
apic1(config-tenant-interface)# ip multicast
apic1(config-tenant-interface)# ip igmp allow-v3-asm
apic1(config-tenant-interface)# ip igmp fast-leave
apic1(config-tenant-interface)# ip igmp inherit interface-policy igmp_intpol1
apic1(config-tenant-interface)# exit
Step 4 Enter the L3 Out mode for the tenant, enable PIM, and enter the leaf interface mode. Then configure PIM for this interface.
Example:
apic1(config-tenant)# l3out tenant1_l3out
apic1(config-tenant-l3out)# ip pim
apic1(config-tenant-l3out)# exit
apic1(config-tenant)# exit
apic1(config)#
apic1(config)# leaf 101
apic1(config-leaf)# interface ethernet 1/125
apic1(config-leaf-if)# ip pim inherit interface-policy pim_intpol1
Step 5 Configure IGMP for the interface using the IGMP commands.
Example:
Procedure
Step 1 Enable PIM6 on the VRF and configure the Rendezvous Point (RP).
Example:
Step 2 Configure a PIM6 interface policy and apply it on the Layer 3 Out.
Example:
Procedure
apic1# configure
apic1(config)#
Example:
apic1(config)# tenant t1
apic1(config-tenant)# vrf context v1
apic1(config-tenant-vrf)# ip pim
apic1(config-tenant-vrf)# exit
apic1(config-tenant)#
Step 3 Access the bridge domain where you want to configure multicast filtering.
Example:
Step 4 Determine whether you want to enable multicast source or receiver filtering on this bridge domain.
Note
You can also enable both source and receiver filtering on the same bridge domain.
• If you want to enable multicast source filtering on this bridge domain, enter the following:
For example:
• If you want to enable multicast receiver filtering on this bridge domain, enter the following:
For example:
apic1(config-tenant-bd)# mcast-allow
apic1(config-tenant-bd)#
Example:
Example:
Example:
Procedure
Example:
ifav4-ifc1# show running-config vlan-domain l3Dom
# Command: show running-config vlan-domain l3Dom
# Time: Mon Aug 1 21:32:31 2016
vlan-domain l3Dom
vlan 4
exit
ifav4-ifc1#
Step 4 Configure the spine switch interface and OSPF configuration as in the following example:
Example:
# Command: show running-config spine
# Time: Mon Aug 1 21:34:41 2016
spine 201
vrf context tenant infra vrf overlay-1
router-id 201.201.201.201
exit
interface ethernet 1/1
vlan-domain member l3Dom
exit
interface ethernet 1/1.4
vrf member tenant infra vrf overlay-1
ip address 201.1.1.1/30
ip router ospf default area 1.1.1.1
ip ospf cost 1
exit
interface ethernet 1/2
vlan-domain member l3Dom
exit
interface ethernet 1/2.4
vrf member tenant infra vrf overlay-1
ip address 201.2.1.1/30
ip router ospf default area 1.1.1.1
ip ospf cost 1
exit
router ospf default
vrf member tenant infra vrf overlay-1
area 1.1.1.1 loopback 201.201.201.201
exit
router ospf default
vrf member tenant infra vrf overlay-1
area 0.0.0.0 loopback 204.204.204.204
area 0.0.0.0 interpod peering
exit
exit
exit
ifav4-ifc1#
Procedure
Step 4 Configure two L3Outs for the infra tenant, one for the remote leaf connections and one for the multipod IPN.
Example:
Step 5 Configure the spine switch interfaces and sub-interfaces to be used by the L3Outs.
Example:
Step 6 Configure the remote leaf switch interface and sub-interface used for communicating with the main fabric pod.
Example:
apic1(config)# leaf 101
apic1(config-leaf)# vrf context tenant infra vrf overlay-1 l3out rl-wan-test
apic1(config-leaf-vrf)# exit
apic1(config-leaf)#
apic1(config-leaf)# interface ethernet 1/49
apic1(config-leaf-if)# vlan-domain member ospfDom
apic1(config-leaf-if)# exit
apic1(config-leaf)# router ospf default
apic1(config-leaf-ospf)# vrf member tenant infra vrf overlay-1
apic1(config-leaf-ospf-vrf)# area 5 l3out rl-wan-test
apic1(config-leaf-ospf-vrf)# exit
apic1(config-leaf-ospf)# exit
apic1(config-leaf)#
apic1(config-leaf)# interface ethernet 1/49.4
apic1(config-leaf-if)# vrf member tenant infra vrf overlay-1 l3out rl-wan-test
apic1(config-leaf-if)# ip router ospf default area 5
apic1(config-leaf-if)# exit
Example
The following example provides a downloadable configuration:
apic1# configure
apic1(config)# system remote-leaf-site 5 pod 2 tep-pool 192.0.0.0/16
apic1(config)# system switch-id FDO210805SKD 109 ifav4-leaf9 pod 2
remote-leaf-site 5 node-type remote-leaf-wan
apic1(config)# vlan-domain ospfDom
apic1(config-vlan)# vlan 4-5
apic1(config-vlan)# exit
apic1(config)# tenant infra
apic1(config-tenant)# l3out rl-wan-test
apic1(config-tenant-l3out)# vrf member overlay-1
apic1(config-tenant-l3out)# exit
apic1(config-tenant)# l3out ipn-multipodInternal
apic1(config-tenant-l3out)# vrf member overlay-1
apic1(config-tenant-l3out)# exit
apic1(config-tenant)# exit
apic1(config)#
apic1(config)# spine 201
apic1(config-spine)# vrf context tenant infra vrf overlay-1 l3out rl-wan-test
apic1(config-spine-vrf)# exit
apic1(config-spine)# vrf context tenant infra vrf overlay-1 l3out ipn-multipodInternal
apic1(config-spine-vrf)# exit
apic1(config-spine)#
apic1(config-spine)# interface ethernet 8/36
apic1(config-spine-if)# vlan-domain member ospfDom
apic1(config-spine-if)# exit
apic1(config-spine)# router ospf default
apic1(config-spine-ospf)# vrf member tenant infra vrf overlay-1
apic1(config-spine-ospf-vrf)# area 5 l3out rl-wan-test
apic1(config-spine-ospf-vrf)# exit
apic1(config-spine-ospf)# exit
apic1(config-spine)#
apic1(config-spine)# interface ethernet 8/36.4
apic1(config-spine-if)# vrf member tenant infra vrf overlay-1 l3out rl-wan-test
apic1(config-spine-if)# ip router ospf default area 5
apic1(config-spine-if)# exit
apic1(config-spine)# router ospf multipod-internal
apic1(config-spine-ospf)# vrf member tenant infra vrf overlay-1
apic1(config-spine-ospf-vrf)# area 5 l3out ipn-multipodInternal
apic1(config-spine-ospf-vrf)# exit
apic1(config-spine-ospf)# exit
apic1(config-spine)#
apic1(config-spine)# interface ethernet 8/36.5
apic1(config-spine-if)# vrf member tenant infra vrf overlay-1 l3out ipn-multipodInternal
apic1(config-spine-if)# ip router ospf multipod-internal area 5
apic1(config-spine-if)# exit
apic1(config-spine)# exit
apic1(config)#
apic1(config)# leaf 101
apic1(config-leaf)# vrf context tenant infra vrf overlay-1 l3out rl-wan-test
apic1(config-leaf-vrf)# exit
apic1(config-leaf)#
apic1(config-leaf)# interface ethernet 1/49
apic1(config-leaf-if)# vlan-domain member ospfDom
apic1(config-leaf-if)# exit
apic1(config-leaf)# router ospf default
apic1(config-leaf-ospf)# vrf member tenant infra vrf overlay-1
apic1(config-leaf-ospf-vrf)# area 5 l3out rl-wan-test
apic1(config-leaf-ospf-vrf)# exit
apic1(config-leaf-ospf)# exit
apic1(config-leaf)#
apic1(config-leaf)# interface ethernet 1/49.4
apic1(config-leaf-if)# vrf member tenant infra vrf overlay-1 l3out rl-wan-test
apic1(config-leaf-if)# ip router ospf default area 5
apic1(config-leaf-if)# exit
Note In this example, the BGP fabric ASN is 100. Spine switches 104 and 105 are chosen as MP-BGP
route-reflectors.
apic1(config)# bgp-fabric
apic1(config-bgp-fabric)# asn 100
apic1(config-bgp-fabric)# route-reflector spine 104,105
SUMMARY STEPS
1. configure
2. leaf node-id
DETAILED STEPS
Procedure
Step 2 leaf node-id
Specifies the leaf switch or leaf switches to be configured. The node-id can be a single node ID or a range of IDs, in the form node-id1-node-id2, to which the configuration will be applied.
Example:
apic1(config)# leaf 101
Step 3 interface port-channel channel-name Enters the interface configuration mode for the specified
port channel.
Example:
apic1(config-leaf)# interface port-channel po1
Step 5 vrf member vrf-name tenant tenant-name
Associates this port channel to this virtual routing and forwarding (VRF) instance and L3 outside policy, where:
• vrf-name is the VRF name. The name can be any case-sensitive, alphanumeric string up to 32 characters.
• tenant-name is the tenant name. The name can be any case-sensitive, alphanumeric string up to 32 characters.
Example:
apic1(config-leaf-if)# vrf member v1 tenant t1
Step 6 vlan-domain member vlan-domain-name Associates the port channel template with the previously
configured VLAN domain.
Example:
apic1(config-leaf-if)# vlan-domain member dom1
Step 8 ipv6 address sub-bits/prefix-length preferred
Configures an IPv6 address based on an IPv6 general prefix and enables IPv6 processing on an interface, where:
• sub-bits is the subprefix bits and host bits of the address to be concatenated with the prefixes provided by the general prefix specified with the prefix-name argument. The sub-bits argument must be in the form documented in RFC 2373, where the address is specified in hexadecimal using 16-bit values between colons.
• prefix-length is the length of the IPv6 prefix. A decimal value that indicates how many of the high-order contiguous bits of the address comprise the prefix (the network portion of the address). A slash mark must precede the decimal value.
Example:
apic1(config-leaf-if)# ipv6 address 2001::1/64 preferred
Step 9 ipv6 link-local ipv6-link-local-address Configures an IPv6 link-local address for an interface.
Example:
apic1(config-leaf-if)# ipv6 link-local fe80::1
Step 11 mtu mtu-value Sets the MTU for this class of service.
Example:
apic1(config-leaf-if)# mtu 1500
Example
This example shows how to configure a basic Layer 3 port channel.
apic1# configure
apic1(config)# leaf 101
apic1(config-leaf)# interface port-channel po1
apic1(config-leaf-if)# no switchport
apic1(config-leaf-if)# vrf member v1 tenant t1
apic1(config-leaf-if)# vlan-domain member dom1
apic1(config-leaf-if)# ip address 10.1.1.1/24
apic1(config-leaf-if)# ipv6 address 2001::1/64 preferred
apic1(config-leaf-if)# ipv6 link-local fe80::1
apic1(config-leaf-if)# mac-address 00:44:55:66:55:01
apic1(config-leaf-if)# mtu 1500
SUMMARY STEPS
1. configure
2. leaf node-id
3. vrf member vrf-name tenant tenant-name
4. vlan-domain member vlan-domain-name
5. ip address ip-address / subnet-mask
6. ipv6 address sub-bits / prefix-length preferred
7. ipv6 link-local ipv6-link-local-address
8. mac-address mac-address
9. mtu mtu-value
10. exit
11. interface port-channel channel-name
12. vlan-domain member vlan-domain-name
13. exit
14. interface port-channel channel-name.number
15. vrf member vrf-name tenant tenant-name
16. exit
DETAILED STEPS
Procedure
Step 2 leaf node-id
Specifies the leaf switch or leaf switches to be configured. The node-id can be a single node ID or a range of IDs, in the form node-id1-node-id2, to which the configuration will be applied.
Example:
apic1(config)# leaf 101
Step 3 vrf member vrf-name tenant tenant-name
Associates this port channel to this virtual routing and forwarding (VRF) instance and L3 outside policy, where:
• vrf-name is the VRF name. The name can be any case-sensitive, alphanumeric string up to 32 characters.
Example:
apic1(config-leaf-if)# vrf member v1 tenant t1
Step 4 vlan-domain member vlan-domain-name Associates the port channel template with the previously
configured VLAN domain.
Example:
apic1(config-leaf-if)# vlan-domain member dom1
Step 5 ip address ip-address / subnet-mask Sets the IP address and subnet mask for the specified
interface.
Example:
apic1(config-leaf-if)# ip address 10.1.1.1/24
Step 6 ipv6 address sub-bits / prefix-length preferred
Configures an IPv6 address based on an IPv6 general prefix and enables IPv6 processing on an interface, where:
• sub-bits is the subprefix bits and host bits of the address to be concatenated with the prefixes provided by the general prefix specified with the prefix-name argument. The sub-bits argument must be in the form documented in RFC 2373, where the address is specified in hexadecimal using 16-bit values between colons.
• prefix-length is the length of the IPv6 prefix. A decimal value that indicates how many of the high-order contiguous bits of the address comprise the prefix (the network portion of the address). A slash mark must precede the decimal value.
Example:
apic1(config-leaf-if)# ipv6 address 2001::1/64 preferred
Step 7 ipv6 link-local ipv6-link-local-address Configures an IPv6 link-local address for an interface.
Example:
apic1(config-leaf-if)# ipv6 link-local fe80::1
Step 9 mtu mtu-value Sets the MTU for this class of service.
Example:
apic1(config-leaf-if)# mtu 1500
Step 12 vlan-domain member vlan-domain-name Associates the port channel template with the previously
configured VLAN domain.
Example:
apic1(config-leaf-if)# vlan-domain member dom1
Step 14 interface port-channel channel-name.number
Enters the interface configuration mode for the specified sub-interface port channel.
Example:
apic1(config-leaf)# interface port-channel po1.2001
Step 15 vrf member vrf-name tenant tenant-name
Associates this port channel to this virtual routing and forwarding (VRF) instance and L3 outside policy, where:
• vrf-name is the VRF name. The name can be any case-sensitive, alphanumeric string up to 32 characters.
• tenant-name is the tenant name. The name can be any case-sensitive, alphanumeric string up to 32 characters.
Example:
apic1(config-leaf-if)# vrf member v1 tenant t1
Example
This example shows how to configure a basic Layer 3 sub-interface port-channel.
apic1# configure
apic1(config)# leaf 101
apic1(config-leaf)# interface vlan 2001
apic1(config-leaf-if)# no switchport
apic1(config-leaf-if)# vrf member v1 tenant t1
apic1(config-leaf-if)# vlan-domain member dom1
apic1(config-leaf-if)# ip address 10.1.1.1/24
apic1(config-leaf-if)# ipv6 address 2001::1/64 preferred
apic1(config-leaf-if)# ipv6 link-local fe80::1
apic1(config-leaf-if)# mac-address 00:44:55:66:55:01
apic1(config-leaf-if)# mtu 1500
apic1(config-leaf-if)# exit
apic1(config-leaf)# interface port-channel po1
apic1(config-leaf-if)# vlan-domain member dom1
apic1(config-leaf-if)# exit
apic1(config-leaf)# interface port-channel po1.2001
apic1(config-leaf-if)# vrf member v1 tenant t1
apic1(config-leaf-if)# exit
SUMMARY STEPS
1. configure
2. leaf node-id
3. interface Ethernet slot/port
4. channel-group channel-name
DETAILED STEPS
Procedure
Step 2 leaf node-id
Specifies the leaf switch or leaf switches to be configured. The node-id can be a single node ID or a range of IDs, in the form node-id1-node-id2, to which the configuration will be applied.
Example:
apic1(config)# leaf 101
Step 3 interface Ethernet slot/port Enters interface configuration mode for the interface you
want to configure.
Example:
apic1(config-leaf)# interface Ethernet 1/1-2
Example
This example shows how to add ports to a Layer 3 port-channel.
apic1# configure
apic1(config)# leaf 101
SUMMARY STEPS
1. Enter the configure mode.
2. Enter the switch mode.
3. Create the VLAN interface.
4. Specify the encapsulation scope.
5. Exit the interface mode.
DETAILED STEPS
Procedure
Step 3 Create the VLAN interface. Creates the VLAN interface. The VLAN range is 1-4094.
Example:
apic1(config-leaf)# interface vlan 2001
SUMMARY STEPS
1. Enter the configure mode.
2. Enter the switch mode.
3. Create the VLAN interface.
4. Enable SVI auto state.
5. Exit the interface mode.
DETAILED STEPS
Procedure
Step 3 Create the VLAN interface. Creates the VLAN interface. The VLAN range is 1-4094.
Example:
apic1(config-leaf)# interface vlan 2001
Procedure
The following shows how to configure the BGP external routed network using the NX-OS CLI:
Example:
apic1(config)# leaf 101
apic1(config-leaf)# template bgp address-family newAf tenant t1
This template will be available on all nodes where tenant t1 has a VRF deployment
apic1(config-bgp-af)# maximum-paths ?
<1-64> Number of parallel paths
ibgp Configure multipath for IBGP paths
apic1(config-bgp-af)# maximum-paths 10
apic1(config-bgp-af)# maximum-paths ibgp 8
apic1(config-bgp-af)# end
apic1#
SUMMARY STEPS
1. To modify the autonomous system path (AS Path) for Border Gateway Protocol (BGP) routes, you can
use the set as-path command. The set as-path command takes the form of
apic1(config-leaf-vrf-template-route-profile)# set as-path { prepend as-num [ ,... as-num ] | prepend-last-as num }
DETAILED STEPS
Procedure
To modify the autonomous system path (AS Path) for Border Gateway Protocol (BGP) routes, you can use the set
as-path command. The set as-path command takes the form of apic1(config-leaf-vrf-template-route-profile)#
set as-path { prepend as-num [ ,... as-num ] | prepend-last-as num }
Example:
apic1(config)# leaf 103
apic1(config-leaf)# vrf context tenant t1 vrf v1
apic1(config-leaf-vrf)# template route-profile rp1
apic1(config-leaf-vrf-template-route-profile)# set as-path ?
prepend Prepend to the AS-Path
prepend-last-as Prepend last AS to the as-path
apic1(config-leaf-vrf-template-route-profile)# set as-path prepend 100, 101, 102, 103
apic1(config-leaf-vrf-template-route-profile)# set as-path prepend-last-as 8
apic1(config-leaf-vrf-template-route-profile)# exit
apic1(config-leaf-vrf)# exit
apic1(config-leaf)# exit
What to do next
To disable AS Path prepend, use the no form of the command:
apic1(config-leaf-vrf-template-route-profile)# [no] set
as-path { prepend as-num [ ,... as-num ] | prepend-last-as num}
Procedure
apic1(config-leaf-bgp-vrf-neighbor)# shutdown
apic1(config-leaf-bgp-vrf)# exit
apic1(config-leaf-bgp)# exit
apic1(config-leaf)# exit
Configuring a Per VRF Per Node BGP Timer Policy Using the NX-OS Style CLI
SUMMARY STEPS
1. Configure BGP ASN and the route reflector before creating a timer policy.
2. Create a timer policy.
3. Display the configured BGP policy.
4. Refer to a specific policy at a node.
5. Display the node specific BGP timer policy.
DETAILED STEPS
Procedure
Step 2 Create a timer policy. The specific values are provided as examples only.
Example:
apic1# config
apic1(config)# leaf 101
apic1(config-leaf)# template bgp timers pol7 tenant tn1
This template will be available on all nodes where tenant tn1 has a VRF deployment
apic1(config-bgp-timers)# timers bgp 120 240
apic1(config-bgp-timers)# graceful-restart stalepath-time 500
apic1(config-bgp-timers)# maxas-limit 300
apic1(config-bgp-timers)# exit
apic1(config-leaf)# exit
apic1(config)# exit
apic1#
Configuring Bidirectional Forwarding Detection on a Secondary IP Address Using the NX-OS-Style CLI
This procedure configures bidirectional forwarding detection (BFD) on a secondary IP address using the
NX-OS-style CLI. This example configures VRF v1 on node 103 (the border leaf switch), with router ID
1.1.24.24. It also configures interface eth1/3 as a routed interface (Layer 3 port), with IP address 12.12.12.3/24
as primary and 6.11.1.224/24 as secondary address in Layer 3 domain dom1. BFD is enabled on 99.99.99.14/32,
which is reachable using the secondary subnet 6.11.1.0/24.
Procedure
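The following is a minimal NX-OS-style sketch of that configuration. The tenant name t1, the static-route next hop 6.11.1.100, and the secondary-address and static-route-with-BFD command forms are assumptions based on the other examples in this guide.
apic1# configure
apic1(config)# leaf 103
apic1(config-leaf)# vrf context tenant t1 vrf v1
apic1(config-leaf-vrf)# router-id 1.1.24.24
apic1(config-leaf-vrf)# ip route 99.99.99.14/32 6.11.1.100 bfd
apic1(config-leaf-vrf)# exit
apic1(config-leaf)# interface ethernet 1/3
apic1(config-leaf-if)# vlan-domain member dom1
apic1(config-leaf-if)# no switchport
apic1(config-leaf-if)# vrf member tenant t1 vrf v1
apic1(config-leaf-if)# ip address 12.12.12.3/24
apic1(config-leaf-if)# ip address 6.11.1.224/24 secondary
apic1(config-leaf-if)# exit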
Configuring BFD Globally on Leaf Switch Using the NX-OS Style CLI
Procedure
Step 1 To configure the BFD IPV4 global configuration (bfdIpv4InstPol) using the NX-OS CLI:
Example:
apic1# configure
apic1(config)# template bfd ip bfd_ipv4_global_policy
apic1(config-bfd)# [no] echo-address 1.2.3.4
apic1(config-bfd)# [no] slow-timer 2500
apic1(config-bfd)# [no] min-tx 100
Step 2 To configure the BFD IPV6 global configuration (bfdIpv6InstPol) using the NX-OS CLI:
Example:
apic1# configure
apic1(config)# template bfd ipv6 bfd_ipv6_global_policy
apic1(config-bfd)# [no] echo-address 34::1/64
apic1(config-bfd)# [no] slow-timer 2500
apic1(config-bfd)# [no] min-tx 100
apic1(config-bfd)# [no] min-rx 70
apic1(config-bfd)# [no] multiplier 3
apic1(config-bfd)# [no] echo-rx-interval 500
apic1(config-bfd)# exit
Step 3 To configure access leaf policy group (infraAccNodePGrp) and inherit the previously created BFD global policies using
the NX-OS CLI:
Example:
apic1# configure
apic1(config)# template leaf-policy-group test_leaf_policy_group
apic1(config-leaf-policy-group)# [no] inherit bfd ip bfd_ipv4_global_policy
apic1(config-leaf-policy-group)# [no] inherit bfd ipv6 bfd_ipv6_global_policy
apic1(config-leaf-policy-group)# exit
Step 4 To associate the previously created leaf policy group onto a leaf using the NX-OS CLI:
Example:
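A sketch of that association, modeled on the spine-switch example in the next procedure; the profile, group, and node values are illustrative:
apic1# configure
apic1(config)# leaf-profile test_leaf_profile
apic1(config-leaf-profile)# leaf-group test_leaf_group
apic1(config-leaf-group)# leaf-policy-group test_leaf_policy_group
apic1(config-leaf-group)# leaf 101-102
apic1(config-leaf-group)# exit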
Configuring BFD Globally on Spine Switch Using the NX-OS Style CLI
Use this procedure to configure BFD globally on spine switch using the NX-OS style CLI.
Procedure
Step 1 To configure the BFD IPV4 global configuration (bfdIpv4InstPol) using the NX-OS CLI:
Example:
apic1# configure
apic1(config)# template bfd ip bfd_ipv4_global_policy
apic1(config-bfd)# [no] echo-address 1.2.3.4
apic1(config-bfd)# [no] slow-timer 2500
apic1(config-bfd)# [no] min-tx 100
apic1(config-bfd)# [no] min-rx 70
apic1(config-bfd)# [no] multiplier 3
Step 2 To configure the BFD IPV6 global configuration (bfdIpv6InstPol) using the NX-OS CLI:
Example:
apic1# configure
apic1(config)# template bfd ipv6 bfd_ipv6_global_policy
apic1(config-bfd)# [no] echo-address 34::1/64
apic1(config-bfd)# [no] slow-timer 2500
apic1(config-bfd)# [no] min-tx 100
apic1(config-bfd)# [no] min-rx 70
apic1(config-bfd)# [no] multiplier 3
apic1(config-bfd)# [no] echo-rx-interval 500
apic1(config-bfd)# exit
Step 3 To configure spine policy group and inherit the previously created BFD global policies using the NX-OS CLI:
Example:
apic1# configure
apic1(config)# template spine-policy-group test_spine_policy_group
apic1(config-spine-policy-group)# [no] inherit bfd ip bfd_ipv4_global_policy
apic1(config-spine-policy-group)# [no] inherit bfd ipv6 bfd_ipv6_global_policy
apic1(config-spine-policy-group)# exit
Step 4 To associate the previously created spine policy group onto a spine switch using the NX-OS CLI:
Example:
apic1# configure
apic1(config)# spine-profile test_spine_profile
apic1(config-spine-profile)# spine-group test_spine_group
apic1(config-spine-group)# spine-policy-group test_spine_policy_group
apic1(config-spine-group)# spine 103-104
apic1(config-spine-group)# exit
Procedure
Step 1 To configure BFD Interface Policy (bfdIfPol) using the NX-OS CLI:
Example:
apic1# configure
apic1(config)# tenant t0
apic1(config-tenant)# vrf context v0
apic1(config-tenant-vrf)# exit
apic1(config-tenant)# exit
apic1(config)# leaf 101
apic1(config-leaf)# vrf context tenant t0 vrf v0
apic1(config-leaf-vrf)# exit
apic1(config-leaf)# interface Ethernet 1/18
apic1(config-leaf-if)# vrf member tenant t0 vrf v0
apic1(config-leaf-if)# exit
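The commands above set up the tenant, VRF, and interface context. A hedged sketch of the BFD interface policy template itself follows; the template bfd bfdPol1 tenant t0 command form and the timer values are assumptions modeled on the global BFD templates shown earlier and on the bfdPol1 policy name referenced in the next steps.
apic1# configure
apic1(config)# template bfd bfdPol1 tenant t0
apic1(config-bfd)# [no] min-tx 100
apic1(config-bfd)# [no] min-rx 70
apic1(config-bfd)# [no] multiplier 3
apic1(config-bfd)# [no] echo-rx-interval 500
apic1(config-bfd)# exit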
Step 2 To inherit the previously created BFD interface policy onto a L3 interface with IPv4 address using the NX-OS CLI:
Example:
apic1# configure
apic1(config)# leaf 101
apic1(config-leaf)# interface Ethernet 1/15
apic1(config-leaf-if)# bfd ip tenant mode
apic1(config-leaf-if)# bfd ip inherit interface-policy bfdPol1
apic1(config-leaf-if)# bfd ip authentication keyed-sha1 key 10 key password
Step 3 To inherit the previously created BFD interface policy onto an L3 interface with IPv6 address using the NX-OS CLI:
Example:
apic1# configure
apic1(config)# leaf 101
apic1(config-leaf)# interface Ethernet 1/15
apic1(config-leaf-if)# ipv6 address 2001::10:1/64 preferred
apic1(config-leaf-if)# bfd ipv6 tenant mode
apic1(config-leaf-if)# bfd ipv6 inherit interface-policy bfdPol1
apic1(config-leaf-if)# bfd ipv6 authentication keyed-sha1 key 10 key password
Step 4 To configure BFD on a VLAN interface with IPv4 address using the NX-OS CLI:
Example:
apic1# configure
apic1(config)# leaf 101
apic1(config-leaf)# interface vlan 15
apic1(config-leaf-if)# vrf member tenant t0 vrf v0
apic1(config-leaf-if)# bfd ip tenant mode
apic1(config-leaf-if)# bfd ip inherit interface-policy bfdPol1
apic1(config-leaf-if)# bfd ip authentication keyed-sha1 key 10 key password
Step 5 To configure BFD on a VLAN interface with IPv6 address using the NX-OS CLI:
Example:
apic1# configure
apic1(config)# leaf 101
apic1(config-leaf)# interface vlan 15
apic1(config-leaf-if)# ipv6 address 2001::10:1/64 preferred
apic1(config-leaf-if)# vrf member tenant t0 vrf v0
apic1(config-leaf-if)# bfd ipv6 tenant mode
apic1(config-leaf-if)# bfd ipv6 inherit interface-policy bfdPol1
apic1(config-leaf-if)# bfd ipv6 authentication keyed-sha1 key 10 key password
Procedure
Step 1 To enable BFD on the BGP consumer protocol using the NX-OS CLI:
Example:
apic1# configure
apic1(config)# bgp-fabric
apic1(config-bgp-fabric)# asn 200
apic1(config-bgp-fabric)# exit
apic1(config)# leaf 101
apic1(config-leaf)# router bgp 200
apic1(config-bgp)# vrf member tenant t0 vrf v0
apic1(config-leaf-bgp-vrf)# neighbor 1.2.3.4
apic1(config-leaf-bgp-vrf-neighbor)# [no] bfd enable
Step 2 To enable BFD on the EIGRP consumer protocol using the NX-OS CLI:
Example:
Step 3 To enable BFD on the OSPF consumer protocol using the NX-OS CLI:
Example:
apic1# configure
apic1(config)# spine 103
apic1(config-spine)# interface ethernet 5/3.4
apic1(config-spine-if)# [no] ip ospf bfd enable
Step 4 To enable BFD on the Static Route consumer protocol using the NX-OS CLI:
Example:
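A minimal sketch for this step, reusing the static-route-with-BFD form that appears later in this guide; the VRF, prefix, and next hop are illustrative:
apic1# configure
apic1(config)# leaf 101
apic1(config-leaf)# vrf context tenant t0 vrf v0
apic1(config-leaf-vrf)# [no] ip route 10.0.0.0/16 10.10.10.1 bfd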
Step 5 To enable BFD on IS-IS consumer protocol using the NX-OS CLI:
Example:
Configuring OSPF External Routed Networks Using the NX-OS Style CLI
Creating an OSPF External Routed Network for a Tenant Using the NX-OS CLI
Configuring external routed network connectivity involves the following steps:
1. Create a VRF under Tenant.
2. Configure L3 networking configuration for the VRF on the border leaf switches, which are connected to
the external routed network. This configuration includes interfaces, routing protocols (BGP, OSPF,
EIGRP), protocol parameters, route-maps.
3. Configure policies by creating external-L3 EPGs under tenant and deploy these EPGs on the border leaf
switches. External routed subnets on a VRF which share the same policy within the ACI fabric form one
"External L3 EPG" or one "prefix EPG".
The following steps are for creating an OSPF external routed network for a tenant. To create an OSPF external
routed network for a tenant, you must choose a tenant and then create a VRF for the tenant.
Note The examples in this section show how to provide external routed connectivity to the "web" epg in the
"OnlineStore" application for tenant "exampleCorp".
Procedure
Step 2 Configure the tenant VRF and enable policy enforcement on the VRF.
Example:
apic1(config)# tenant exampleCorp
apic1(config-tenant)# vrf context
exampleCorp_v1
apic1(config-tenant-vrf)# contract enforce
apic1(config-tenant-vrf)# exit
Step 3 Configure the tenant BD and mark the gateway IP as “public”. The entry "scope public" makes this gateway address
available for advertisement through the routing protocol for external-L3 network.
Example:
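A minimal sketch of this step, using the bridge-domain commands shown elsewhere in this guide; the bridge domain name and gateway address are illustrative:
apic1(config-tenant)# bridge-domain exampleCorp_b1
apic1(config-tenant-bd)# vrf member exampleCorp_v1
apic1(config-tenant-bd)# exit
apic1(config-tenant)# interface bridge-domain exampleCorp_b1
apic1(config-tenant-interface)# ip address 172.1.1.1/24 scope public
apic1(config-tenant-interface)# exit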
Step 5 Configure the OSPF area and add the route map.
Example:
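A sketch of this step; the area ID, route-map name, and direction are illustrative, and the area route-map form is an assumption:
apic1(config)# leaf 101
apic1(config-leaf)# router ospf default
apic1(config-leaf-ospf)# vrf member tenant exampleCorp vrf exampleCorp_v1
apic1(config-leaf-ospf-vrf)# area 0.0.0.1 route-map map100 out
apic1(config-leaf-ospf-vrf)# exit
apic1(config-leaf-ospf)# exit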
Step 6 Assign the VRF to the interface (sub-interface in this example) and enable the OSPF area.
Example:
Note
For the sub-interface configuration, the main interface (ethernet 1/11 in this example) must be converted to an L3 port
through “no switchport” and assigned a vlan-domain (dom_exampleCorp in this example) that contains the encapsulation
VLAN used by the sub-interface. In the sub-interface ethernet 1/11.500, 500 is the encapsulation VLAN.
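Based on the note above, a sketch of this step might look like the following; the IP address is illustrative and the area ID matches the sketch in Step 5:
apic1(config-leaf)# interface ethernet 1/11
apic1(config-leaf-if)# no switchport
apic1(config-leaf-if)# vlan-domain member dom_exampleCorp
apic1(config-leaf-if)# exit
apic1(config-leaf)# interface ethernet 1/11.500
apic1(config-leaf-if)# vrf member tenant exampleCorp vrf exampleCorp_v1
apic1(config-leaf-if)# ip address 157.10.1.1/24
apic1(config-leaf-if)# ip router ospf default area 0.0.0.1
apic1(config-leaf-if)# exit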
Step 7 Configure the external-L3 EPG policy. This includes the subnet to match for identifying the external subnet and consuming
the contract to connect with the epg "web".
Example:
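A sketch of this step, following the external-l3 epg commands used later in this chapter; the EPG name, subnet, and contract name are illustrative:
apic1(config)# tenant exampleCorp
apic1(config-tenant)# external-l3 epg l3epg100
apic1(config-tenant-l3ext-epg)# vrf member exampleCorp_v1
apic1(config-tenant-l3ext-epg)# match ip 145.10.1.0/24
apic1(config-tenant-l3ext-epg)# contract consumer web-contract
apic1(config-tenant-l3ext-epg)# exit
apic1(config-tenant)# exit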
Configuring EIGRP External Routed Networks Using the NX-OS Style CLI
Configuring EIGRP Using the NX-OS-Style CLI
Procedure
Example:
apic1(config-leaf)# template eigrp vrf-policy tenant1 tenant tenant1
This template will be available on all leaves where tenant tenant1 has a VRF deployment
apic1(config-template-eigrp-vrf-pol)# show run
# Command: show running-config leaf 101 template eigrp vrf-policy tenant1 tenant tenant1
# Time: Tue Feb 16 09:46:31 2016
leaf 101
template eigrp vrf-policy tenant1 tenant tenant1
metric version 64bit
exit
exit
Step 8 Configure the EIGRP VLAN interface and enable EIGRP in the interface:
Example:
apic1(config-leaf)# interface vlan 1013
apic1(config-leaf-if)# show run
# Command: show running-config leaf 101 interface vlan 1013
# Time: Tue Feb 16 09:46:59 2016
leaf 101
interface vlan 1013
vrf member tenant tenant1 vrf l3out
ip address 101.13.1.2/24
ip router eigrp default
ipv6 address 101:13::1:2/112 preferred
ipv6 router eigrp default
ipv6 link-local fe80::101:13:1:2
inherit eigrp ip interface-policy tenant1
inherit eigrp ipv6 interface-policy tenant1
exit
exit
apic1(config-leaf-if)# ip summary-address ?
eigrp Configure route summarization for EIGRP
apic1(config-leaf-if)# ip summary-address eigrp default 11.11.0.0/16 ?
<CR>
apic1(config-leaf-if)# ip summary-address eigrp default 11.11.0.0/16
apic1(config-leaf-if)# ip summary-address eigrp default 11:11:1::/48
apic1(config-leaf-if)# show run
# Command: show running-config leaf 101 interface vlan 1013
# Time: Tue Feb 16 09:47:34 2016
leaf 101
interface vlan 1013
vrf member tenant tenant1 vrf l3out
ip address 101.13.1.2/24
ip router eigrp default
ip summary-address eigrp default 11.11.0.0/16
ip summary-address eigrp default 11:11:1::/48
ipv6 address 101:13::1:2/112 preferred
ipv6 router eigrp default
ipv6 link-local fe80::101:13:1:2
inherit eigrp ip interface-policy tenant1
inherit eigrp ipv6 interface-policy tenant1
exit
exit
leaf 101
interface ethernet 1/5
vlan-domain member cli
switchport trunk allowed vlan 1213 tenant tenant13 external-svi l3out l3out-L1
switchport trunk allowed vlan 1613 tenant tenant17 external-svi l3out l3out-L1
switchport trunk allowed vlan 1013 tenant tenant1 external-svi l3out l3out-L1
switchport trunk allowed vlan 666 tenant ten_v6_cli external-svi l3out l3out_cli_L1
switchport trunk allowed vlan 1513 tenant tenant16 external-svi l3out l3out-L1
switchport trunk allowed vlan 1313 tenant tenant14 external-svi l3out l3out-L1
switchport trunk allowed vlan 1413 tenant tenant15 external-svi l3out l3out-L1
switchport trunk allowed vlan 1113 tenant tenant12 external-svi l3out l3out-L1
switchport trunk allowed vlan 712 tenant mgmt external-svi l3out inband_l1
switchport trunk allowed vlan 1913 tenant tenant10 external-svi l3out l3out-L1
switchport trunk allowed vlan 300 tenant tenant1 external-svi l3out l3out-L1
exit
exit
Procedure
Step 1 Configure BGP route summarization using the NX-OS CLI as follows:
a) Enable BGP as follows:
Example:
apic1(config)# pod 1
Step 2 Configure OSPF external summarization using the NX-OS CLI as follows:
Example:
Step 3 Configure OSPF inter-area summarization using the NX-OS CLI as follows:
Note
There is no route summarization policy to be configured for EIGRP. The only configuration needed for enabling EIGRP
summarization is the summary subnet under the InstP.
Configuring Route Control with Route Maps and Route Profile Using NX-OS
Style CLI
Configuring Route Control Per BGP Peer Using the NX-OS Style CLI
The following procedure describes how to configure the route control per BGP peer feature using the NX-OS
CLI.
Procedure
Step 1 Create a route group template and add IP prefix to the route group.
This example creates a route group match-rule1 for tenant t1, and adds the IP prefix of 200.3.2.0/24 to the route group.
Example:
apic1(config)# leaf 103
apic1(config-leaf)# template route group match-rule1 tenant t1
apic1(config-route-group)# ip prefix permit 200.3.2.0/24
apic1(config-route-group)# exit
apic1(config-leaf)#
Step 3 Create a route-map and enter the route-map configuration mode, then match a route group that has already been created
and enter the match mode to configure the route-profile.
This example creates a route-map rp1, and matches route group match-rule1 with an order number 0.
Example:
apic1(config-leaf-vrf)# route-map rp1
apic1(config-leaf-vrf-route-map)# match route group match-rule1 order 0
apic1(config-leaf-vrf-route-map-match)# exit
apic1(config-leaf-vrf-route-map)# exit
apic1(config-leaf-vrf)# exit
Example:
Configuring Route Map/Profile with Explicit Prefix List Using NX-OS Style CLI
SUMMARY STEPS
1. configure
2. leaf node-id
3. template route group group-name tenant tenant-name
4. ip prefix permit prefix/masklen [le{32 | 128 }]
5. community-list [ standard | expanded] community-list-name expression
6. exit
7. vrf context tenant tenant-name vrf vrf-name [l3out {BGP | EIGRP | OSPF | STATIC }]
8. template route-profile profile-name [route-control-context-name order-value]
9. set attribute value
10. exit
11. route-map map-name
12. match route group group-name [order number] [deny]
13. inherit route-profile profile-name
14. exit
15. exit
16. exit
17. router bgp fabric-asn
18. vrf member tenant t1 vrf v1
19. neighbor IP-address-of-neighbor
20. route-map map-name {in | out }
DETAILED STEPS
Procedure
Step 3 template route group group-name tenant tenant-name
Creates a route group template.
Step 4 ip prefix permit prefix/masklen [le {32 | 128}]
Adds an IP prefix to the route group.
Note
The IP prefix can denote a BD subnet or an external network. Use the optional argument le 32 for IPv4 and le 128 for IPv6 if you desire an aggregate prefix.
Example:
apic1(config-route-group)# ip prefix permit 15.15.15.0/24
Step 5 community-list [standard | expanded] community-list-name expression
This is an optional command. Adds match criteria for community if the community also needs to be matched along with the IP prefix.
Example:
apic1(config-route-group)# community-list standard com1 65535:20
Step 7 vrf context tenant tenant-name vrf vrf-name [l3out {BGP | EIGRP | OSPF | STATIC}]
Enters a tenant VRF mode for the node.
Note
If you enter the optional l3out string, the L3Out must be an L3Out that you configured through the NX-OS CLI.
Example:
apic1(config-leaf)# vrf context tenant exampleCorp vrf v1
Step 8 template route-profile profile-name [route-control-context-name order-value]
Creates a template containing set actions that should be applied to the matched routes.
Example:
apic1(config-leaf-vrf)# template route-profile rp1 ctxl 1
Step 9 set attribute value
Adds the desired attributes (set actions) to the template.
Example:
apic1(config-leaf-vrf-template-route-profile)# set metric 128
Step 12 match route group group-name [order number] [deny]
Matches a route group that has already been created, and enters the match mode to configure the route-profile. Additionally, choose the deny keyword if routes matching the match criteria defined in the route group need to be denied. The default is permit.
Example:
apic1(config-leaf-vrf-route-map)# match route group g1 order 1
Step 18 vrf member tenant t1 vrf v1
Sets the BGP VRF membership and the tenant for the BGP policy.
Example:
apic1(config-leaf-bgp)# vrf member tenant t1 vrf v1
Step 20 route-map map-name {in | out } Configure the route map for a BGP neighbor.
Example:
Configuring a Route Control Protocol to Use Import and Export Controls, With the NX-OS Style CLI
This example assumes that you have configured the Layer 3 outside network connections using BGP. It is
also possible to perform these tasks for a network configured using OSPF.
This section describes how to create a route map using the NX-OS CLI:
Procedure
apic1# configure
apic1(config)# leaf 101
# Create community-list
apic1(config-leaf)# template community-list standard CL_1 65536:20 tenant exampleCorp
apic1(config-leaf)# vrf context tenant exampleCorp vrf v1
Note
In this case, public-subnets from bd1 and prefixes matching prefix-list p1 are exported out using route-profile
“default-export”, while public-subnets from bd2 are exported out using route-profile “bd-rtctrl”.
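A hedged sketch of a configuration matching that note, using the route-map and match commands shown in this chapter; the prefix value and the bridge domain names bd1 and bd2 are illustrative:
apic1(config-leaf)# vrf context tenant exampleCorp vrf v1
apic1(config-leaf-vrf)# route-map default-export
apic1(config-leaf-vrf-route-map)# ip prefix-list p1 permit 10.10.10.0/24
apic1(config-leaf-vrf-route-map)# match bridge-domain bd1
apic1(config-leaf-vrf-route-map-match)# exit
apic1(config-leaf-vrf-route-map)# match prefix-list p1
apic1(config-leaf-vrf-route-map-match)# exit
apic1(config-leaf-vrf-route-map)# exit
apic1(config-leaf-vrf)# route-map bd-rtctrl
apic1(config-leaf-vrf-route-map)# match bridge-domain bd2
apic1(config-leaf-vrf-route-map-match)# exit
apic1(config-leaf-vrf-route-map)# exit
apic1(config-leaf-vrf)# exit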
Procedure
Step 1 Configure the route map for interleak redistribution for the border leaf node.
Example:
The following example configures the route map CLI_RP with an IP prefix-list CLI_PFX1 for tenant CLI_TEST and VRF
VRF1:
apic1# conf t
apic1(config)# leaf 101
apic1(config-leaf)# vrf context tenant CLI_TEST vrf VRF1
apic1(config-leaf-vrf)# route-map CLI_RP
apic1(config-leaf-vrf-route-map)# ip prefix-list CLI_PFX1 permit 192.168.1.0/24
apic1(config-leaf-vrf-route-map)# match prefix-list CLI_PFX1 [deny]
Procedure
Example:
apic1(config)# leaf 101
apic1(config-leaf)# vrf context tenant t1 vrf v1
apic1(config-leaf-vrf)# router-id 11.11.11.103
apic1(config-leaf-vrf)# exit
apic1(config-leaf)# interface ethernet 1/3
apic1(config-leaf-if)# vlan-domain member dom1
apic1(config-leaf-if)# no switchport
apic1(config-leaf-if)# vrf member tenant t1 vrf v1
apic1(config-leaf-if)# ip address 12.12.12.3/24
apic1(config-leaf-if)# exit
apic1(config-leaf)# exit
apic1(config)# leaf 102
apic1(config-leaf)# vrf context tenant t1 vrf v1
apic1(config-leaf-vrf)# router-id 22.22.22.203
apic1(config-leaf-vrf)# exit
apic1(config-leaf)# interface ethernet 1/3
apic1(config-leaf-if)# vlan-domain member dom1
apic1(config-leaf-if)# no switchport
apic1(config-leaf-if)# vrf member tenant t1 vrf v1
apic1(config-leaf-if)# ip address 23.23.23.3/24
apic1(config-leaf-if)# exit
apic1(config-leaf)# exit
Example:
apic1(config)# tenant t1
apic1(config-tenant)# external-l3 epg extnw1
apic1(config-tenant-l3ext-epg)# vrf member v1
apic1(config-tenant-l3ext-epg)# match ip 192.168.1.0/24
apic1(config-tenant-l3ext-epg)# exit
apic1(config-tenant)# external-l3 epg extnw2
apic1(config-tenant-l3ext-epg)# vrf member v1
apic1(config-leaf-vrf)# exit
apic1(config-leaf)# router bgp 100
apic1(config-leaf-bgp)# vrf member tenant t1 vrf v1
apic1(config-leaf-bgp-vrf)# neighbor 25.25.25.2
apic1(config-leaf-bgp-vrf-neighbor)# route-map rp2 in
apic1(config-leaf-bgp-vrf-neighbor)# route-map rp1 out
apic1(config-leaf-bgp-vrf-neighbor)# exit
apic1(config-leaf-bgp-vrf)# exit
apic1(config-leaf-bgp)# exit
apic1(config-leaf)# exit
Step 7 Create filters (access lists) and contracts to enable the EPGs to communicate.
Example:
apic1(config)# tenant t1
apic1(config-tenant)# access-list http-filter
apic1(config-tenant-acl)# match ip
apic1(config-tenant-acl)# match tcp dest 80
apic1(config-tenant-acl)# exit
apic1(config-tenant)# contract httpCtrct
apic1(config-tenant-contract)# scope vrf
apic1(config-tenant-contract)# subject subj1
apic1(config-tenant-contract-subj)# access-group http-filter both
apic1(config-tenant-contract-subj)# exit
apic1(config-tenant-contract)# exit
apic1(config-tenant)# exit
apic1(config)# tenant t1
apic1(config-tenant)# external-l3 epg extnw1
apic1(config-tenant-l3ext-epg)# vrf member v1
apic1(config-tenant-l3ext-epg)# match ip 192.168.1.0/24
apic1(config-tenant-l3ext-epg)# exit
apic1(config-tenant)# external-l3 epg extnw2
apic1(config-tenant-l3ext-epg)# vrf member v1
apic1(config-tenant-l3ext-epg)# match ip 192.168.2.0/24
apic1(config-tenant-l3ext-epg)# exit
apic1(config-tenant)# exit
apic1(config-route-group)# exit
apic1(config-leaf)# template route group match-rule2 tenant t1
apic1(config-route-group)# ip prefix permit 192.168.2.0/24
apic1(config-route-group)# exit
apic1(config-leaf)# vrf context tenant t1 vrf v1
apic1(config-leaf-vrf)# route-map rp1
apic1(config-leaf-vrf-route-map)# match route group match-rule1 order 0
apic1(config-leaf-vrf-route-map-match)# exit
apic1(config-leaf-vrf-route-map)# exit
apic1(config-leaf-vrf)# route-map rp2
apic1(config-leaf-vrf-route-map)# match route group match-rule2 order 0
apic1(config-leaf-vrf-route-map-match)# exit
apic1(config-leaf-vrf-route-map)# exit
apic1(config-leaf-vrf)# exit
apic1(config-leaf)# router bgp 100
apic1(config-leaf-bgp)# vrf member tenant t1 vrf v1
apic1(config-leaf-bgp-vrf)# neighbor 15.15.15.2
apic1(config-leaf-bgp-vrf-neighbor)# route-map rp1 in
apic1(config-leaf-bgp-vrf-neighbor)# route-map rp2 out
apic1(config-leaf-bgp-vrf-neighbor)# exit
apic1(config-leaf-bgp-vrf)# exit
apic1(config-leaf-bgp)# exit
apic1(config-leaf)# exit
apic1(config)# tenant t1
apic1(config-tenant)# access-list http-filter
apic1(config-tenant-acl)# match ip
apic1(config-tenant-acl)# match tcp dest 80
apic1(config-tenant-acl)# exit
apic1(config-tenant)# contract httpCtrct
apic1(config-tenant-contract)# scope vrf
apic1(config-tenant-contract)# subject http-subj
apic1(config-tenant-contract-subj)# access-group http-filter both
apic1(config-tenant-contract-subj)# exit
apic1(config-tenant-contract)# exit
apic1(config-tenant)# exit
apic1(config)# tenant t1
apic1(config-tenant)# external-l3 epg extnw1
apic1(config-tenant-l3ext-epg)# vrf member v1
apic1(config-tenant-l3ext-epg)# contract provider httpCtrct
apic1(config-tenant-l3ext-epg)# exit
apic1(config-tenant)# external-l3 epg extnw2
apic1(config-tenant-l3ext-epg)# vrf member v1
apic1(config-tenant-l3ext-epg)# contract consumer httpCtrct
apic1(config-tenant-l3ext-epg)# exit
apic1(config-tenant)# exit
apic1(config)#
SUMMARY STEPS
1. Enter the configure mode.
2. Configure the provider Layer 3 Out.
3. Configure the consumer Layer 3 Out.
DETAILED STEPS
Procedure
Configuring Shared Layer 3 Out Inter-VRF Leaking Using the NX-OS Style CLI - Implicit Example
SUMMARY STEPS
1. Enter the configure mode.
2. Configure the provider tenant and VRF.
3. Configure the consumer tenant and VRF.
4. Configure the contract.
5. Configure the provider External Layer 3 EPG.
6. Configure the provider export map.
7. Configure the consumer external Layer 3 EPG.
8. Configure the consumer export map.
DETAILED STEPS
Procedure
Procedure
ip address 107.2.1.252/24
description 'SVI19'
service-policy type qos VrfQos006 // for custom QoS attachment
set qos-class level6 // for set QoS priority
exit
Note Starting with Release 4.0(1), we recommend using custom QoS policies for L3Out QoS as described in
Configuring QoS Directly on L3Out Using CLI, on page 515 instead.
Procedure
Step 1 Configure the VRF for egress mode and enable policy enforcement to support QoS priority enforcement on the L3Out.
apic1# configure
apic1(config)# tenant t1
apic1(config-tenant)# vrf context v1
apic1(config-tenant-vrf)# contract enforce egress
apic1(config-tenant-vrf)# exit
apic1(config-tenant)# exit
apic1(config)#
VRF enforcement must be ingress for QoS or custom QoS on an L3Out interface. VRF enforcement needs to be egress only
when the QoS classification is going to be done in the contract for traffic between an EPG and an L3Out, or between two L3Outs.
Note
If QoS classification is set in the contract and VRF enforcement is egress, then contract QoS classification would override
the L3Out interface QoS or Custom QoS classification.
apic1(config)# tenant t1
apic1(config-tenant)# access-list http-filter
apic1(config-tenant-acl)# match ip
apic1(config-tenant-acl)# match tcp dest 80
apic1(config-tenant-acl)# match dscp EF
apic1(config-tenant-acl)# exit
apic1(config-tenant)# contract httpCtrct
apic1(config-tenant-contract)# scope vrf
apic1(config-tenant-contract)# qos-class level1
apic1(config-tenant-contract)# subject http-subject
apic1(config-tenant-contract-subj)# access-group http-filter both
apic1(config-tenant-contract-subj)# exit
apic1(config-tenant-contract)# exit
apic1(config-tenant)# exit
apic1(config)#
Procedure
Step 2 Create a tenant and enter tenant configuration mode, or enter tenant configuration mode for an existing tenant.
Example:
apic1(config)# tenant t1
Step 3 Create an IP SLA monitoring policy and enter IP SLA policy configuration mode.
Example:
apic1(config-tenant)# ipsla-pol ipsla-policy-3
Step 4 Configure the monitoring frequency in seconds, which is the interval between sending probes.
Example:
apic1(config-ipsla-pol)# sla-frequency 40
Only ICMP and TCP are valid for IP SLA in static routes.
Example:
apic1(config-ipsla-pol)# sla-type tcp sla-port 90
What to do next
To view the IP SLA monitoring policy you just created, enter:
show running-config all tenant tenant-name ipsla-pol
Procedure
Step 1 configure
Enters configuration mode.
Example:
apic1# configure
Example
The following example shows the commands to configure an IP SLA track member.
apic1# configure
apic1(config)# tenant t1
apic1(config-tenant)# track-member tm-1 dst-IpAddr 10.10.10.1 l3-out ext-l3-1
apic1(config-track-member)# ipsla-monpol ipsla-policy-3
What to do next
To view the track member configuration you just created, enter:
show running-config all tenant tenant-name track-member name
Procedure
Step 1 configure
Enters configuration mode.
Example:
apic1# configure
Step 3 track-list name { percentage [ percentage-down | percentage-up ] number | weight [ weight-down | weight-up ] number }
Creates a track list with percentage or weight threshold settings and enters track list configuration mode.
Example:
apic1(config-tenant)# track-list tl-1 percentage percentage-down 50 percentage-up 100
Example
The following example shows the commands to configure an IP SLA track list.
apic1# configure
apic1(config)# tenant t1
apic1(config-tenant)# track-list tl-1 percentage percentage-down 50 percentage-up 100
apic1(config-track-list)# track-member tm1
What to do next
To view the track list configuration you just created, enter:
show running-config all tenant tenant-name track-list name
exit
exit
Associating a Track List with a Static Route Using the NX-OS Style CLI
To associate an IP SLA track list with a static route using the NX-OS style CLI, perform the following steps:
Procedure
Step 1 configure
Enters configuration mode.
Example:
apic1# configure
Example
The following example shows the commands to associate an IP SLA track list with a static route.
apic1# configure
apic1(config)# leaf 102
apic1(config-leaf)# vrf context tenant 99 vrf default
apic1(config-leaf-vrf)# ip route 10.10.10.1/4 20.20.20.8 10 bfd ip-trackList tl-1
Associating a Track List with a Next Hop Profile Using the NX-OS Style CLI
To associate an IP SLA track list with a next hop profile using the NX-OS style CLI, perform the following
steps:
Procedure
Step 1 configure
Enters configuration mode.
Example:
apic1# configure
Example
The following example shows the commands to associate an IP SLA track list with a next hop profile.
apic1# configure
apic1(config)# leaf 102
apic1(config-leaf)# vrf context tenant 99 vrf default
apic1(config-leaf-vrf)# ip route 10.10.10.1/4 20.20.20.8 10 bfd nh-ip-trackList tl-1
Viewing Track List and Track Member Status Using the CLI
You can display IP SLA track list and track member status.
Procedure
Example
switch# show track brief
TrackId Type Instance Parameter State Last Change
97 IP SLA 2034 reachability up 2019-03-20T14:08:34.127-07:00
98 IP SLA 2160 reachability up 2019-03-20T14:08:34.252-07:00
99 List --- percentage up 2019-03-20T14:08:45.494-07:00
100 List --- percentage down 2019-03-20T14:08:45.039-07:00
101 List --- percentage down 2019-03-20T14:08:45.040-07:00
102 List --- percentage up 2019-03-20T14:08:45.495-07:00
103 IP SLA 2040 reachability up 2019-03-20T14:08:45.493-07:00
104 IP SLA 2887 reachability down 2019-03-20T14:08:45.104-07:00
105 IP SLA 2821 reachability up 2019-03-20T14:08:45.494-07:00
1 List --- percentage up 2019-03-20T14:08:39.224-07:00
2 List --- weight down 2019-03-20T14:08:33.521-07:00
3 IP SLA 2412 reachability up 2019-03-20T14:08:33.983-07:00
26 IP SLA 2320 reachability up 2019-03-20T14:08:33.988-07:00
27 IP SLA 2567 reachability up 2019-03-20T14:08:33.987-07:00
28 IP SLA 2598 reachability up 2019-03-20T14:08:33.990-07:00
29 IP SLA 2940 reachability up 2019-03-20T14:08:33.986-07:00
30 IP SLA 2505 reachability up 2019-03-20T14:08:38.915-07:00
31 IP SLA 2908 reachability up 2019-03-20T14:08:33.990-07:00
32 IP SLA 2722 reachability up 2019-03-20T14:08:33.992-07:00
33 IP SLA 2753 reachability up 2019-03-20T14:08:38.941-07:00
34 IP SLA 2257 reachability up 2019-03-20T14:08:33.993-07:00
Viewing Track List and Track Member Detail Using the CLI
You can display IP SLA track list and track member detail.
Procedure
Example
switch# show track | more
Track 4
IP SLA 2758
reachability is down
Track 3
List Threshold percentage
Threshold percentage is down
1 changes, last change 2019-03-12T21:41:34.700+00:00
Threshold percentage up 1% down 0%
Tracked List Members:
Object 4 (50)% down
Object 6 (50)% down
Attached to:
Route prefix 172.16.13.0/24
Track 5
List Threshold percentage
Threshold percentage is down
1 changes, last change 2019-03-12T21:41:34.710+00:00
Threshold percentage up 1% down 0%
Tracked List Members:
Object 4 (100)% down
Attached to:
Nexthop Addr 12.12.12.2/32
Track 6
IP SLA 2788
reachability is down
1 changes, last change 2019-03-14T21:34:26.398+00:00
Tracked by:
Track List 3
Track List 7
Track 20
List Threshold percentage
Threshold percentage is up
4 changes, last change 2019-02-21T14:04:21.920-08:00
Threshold percentage up 100% down 32%
Tracked List Members:
Object 4 (20)% up
Object 5 (20)% up
Object 6 (20)% up
Object 3 (20)% up
Object 9 (20)% up
Attached to:
Route prefix 88.88.88.0/24
Route prefix 5000:8:1:14::/64
Route prefix 5000:8:1:2::/64
Route prefix 5000:8:1:1::/64
In this example, Track 4 is a track member identified by the IP SLA ID and by the track lists in the
Tracked by: field.
Track 3 is a track list identified by the threshold information and the track member in the Track List
Members field.
Track 20 is a track list that is currently reachable (up) and shows the static routes to which it is
associated.
SUMMARY STEPS
1. configure
2. Configure HSRP by creating inline parameters.
DETAILED STEPS
Procedure
Configuring HSRP in Cisco APIC Using Template and Policy in NX-OS Style CLI
HSRP is enabled when the leaf switch is configured.
SUMMARY STEPS
1. configure
2. Configure HSRP policy templates.
3. Use the configured policy templates.
DETAILED STEPS
Procedure
Procedure
Step 2 Configure the outbound peer policy to filter routes based on the community in the inbound peer policy.
Example:
Step 3 Configure the outbound peer policy to filter the community towards the WAN.
Example:
ip community-list standard test-com permit 1:1
update-source loopback0
send-community both
route-map multi-site-in in
send-community both
Cisco ACI GOLF Configuration Example, Using the NX-OS Style CLI
These examples show the CLI commands to configure GOLF Services, which uses the BGP EVPN protocol
over OSPF for WAN routers that are connected to spine switches.
configure
vlan-domain evpn-dom dynamic
exit
spine 111
# Configure Tenant Infra VRF overlay-1 on the spine.
vrf context tenant infra vrf overlay-1
router-id 10.10.3.3
exit
configure
spine 111
router bgp 100
vrf member tenant infra vrf overlay-1
neighbor 10.10.4.1 evpn
label golf_aci
update-source loopback 10.10.4.3
remote-as 100
exit
neighbor 10.10.5.1 evpn
label golf_aci2
update-source loopback 10.10.5.3
remote-as 100
exit
exit
exit
configure
tenant sky
vrf context vrf_sky
exit
bridge-domain bd_sky
vrf member vrf_sky
exit
interface bridge-domain bd_sky
ip address 59.10.1.1/24
exit
bridge-domain bd_sky2
vrf member vrf_sky
exit
interface bridge-domain bd_sky2
ip address 59.11.1.1/24
exit
exit
Configuring the BGP EVPN Route Target, Route Map, and Prefix EPG for the Tenant
The following example shows how to configure a route map to advertise bridge-domain subnets through BGP
EVPN.
configure
spine 111
vrf context tenant sky vrf vrf_sky
address-family ipv4 unicast
route-target export 100:1
route-target import 100:1
exit
route-map rmap
ip prefix-list p1 permit 11.10.10.0/24
match bridge-domain bd_sky
exit
match prefix-list p1
exit
route-map rmap2
match bridge-domain bd_sky
exit
match prefix-list p1
exit
exit
Enabling Distributing BGP EVPN Type-2 Host Routes to a DCIG Using the NX-OS Style CLI
Procedure
Procedure
<fvAp name="test">
<fvAEPg name="web">
<fvRsBd tnFvBDName="test"/>
<fvRsPathAtt tDn="topology/pod-1/paths-101/pathep-[eth1/3]" encap="vlan-1002"/>
</fvAEPg>
</fvAp>
</fvTenant>
</polUni>
Procedure
What to do next
To specify the interval used for tracking IP addresses on endpoints, create an Endpoint Retention policy by
sending a post with XML such as the following example:
<fvEpRetPol bounceAgeIntvl="630" bounceTrig="protocol"
holdIntvl="350" lcOwn="local" localEpAgeIntvl="900" moveFreq="256"
name="EndpointPol1" remoteEpAgeIntvl="350"/>
Procedure
To configure a static route for the BD used in a pervasive gateway, enter a post such as the following example:
Example:
<fvAEPg name="ep1">
<fvRsBd tnFvBDName="bd1"/>
<fvSubnet ip="2002:0db8:85a3:0000:0000:8a2e:0370:7344/128" ctrl="no-default-gateway" >
<fvEpReachability>
<ipNexthopEpP nhAddr="2001:0db8:85a3:0000:0000:8a2e:0370:7343/128" />
</fvEpReachability>
</fvSubnet>
</fvAEPg>
Procedure
Create a tenant, VRF, bridge domain with a neighbor discovery interface policy and a neighbor discovery prefix policy.
Example:
<fvTenant descr="" dn="uni/tn-ExampleCorp" name="ExampleCorp" ownerKey="" ownerTag="">
<ndIfPol name="NDPol001" ctrl="managed-cfg" descr="" hopLimit="64" mtu="1500" nsIntvl="1000"
nsRetries="3" ownerKey="" ownerTag="" raIntvl="600" raLifetime="1800" reachableTime="0"
retransTimer="0"/>
<fvCtx descr="" knwMcastAct="permit" name="pvn1" ownerKey="" ownerTag="" pcEnfPref="enforced">
</fvCtx>
<fvBD arpFlood="no" descr="" mac="00:22:BD:F8:19:FF" multiDstPktAct="bd-flood" name="bd1"
ownerKey="" ownerTag="" unicastRoute="yes" unkMacUcastAct="proxy" unkMcastAct="flood">
<fvRsBDToNdP tnNdIfPolName="NDPol001"/>
<fvRsCtx tnFvCtxName="pvn1"/>
<fvSubnet ctrl="nd" descr="" ip="34::1/64" name="" preferred="no" scope="private">
<fvRsNdPfxPol tnNdPfxPolName="NDPfxPol001"/>
</fvSubnet>
<fvSubnet ctrl="nd" descr="" ip="33::1/64" name="" preferred="no" scope="private">
<fvRsNdPfxPol tnNdPfxPolName="NDPfxPol002"/>
</fvSubnet>
</fvBD>
<ndPfxPol ctrl="auto-cfg,on-link" descr="" lifetime="1000" name="NDPfxPol001" ownerKey=""
ownerTag="" prefLifetime="1000"/>
<ndPfxPol ctrl="auto-cfg,on-link" descr="" lifetime="4294967295" name="NDPfxPol002" ownerKey=""
ownerTag="" prefLifetime="4294967295"/>
</fvTenant>
Note
If you have a public subnet when you configure the routed outside, you must associate the bridge domain with the outside
configuration.
Configuring an IPv6 Neighbor Discovery Interface Policy with RA on a Layer 3 Interface Using the
REST API
Procedure
Configure an IPv6 neighbor discovery interface policy and associate it with a Layer 3 interface:
The following example displays the configuration in a non-VPC set up.
Example:
<ndPfxP>
<ndRsPfxPToNdPfxPol tnNdPfxPolName="NDPfxPol001"/>
</ndPfxP>
</l3extRsPathL3OutAtt>
<l3extRsNdIfPol tnNdIfPolName="NDPol001"/>
</l3extLIfP>
</l3extLNodeP>
<l3extInstP name="instp"/>
</l3extOut>
<ndPfxPol ctrl="auto-cfg,on-link" descr="" lifetime="1000" name="NDPfxPol001" ownerKey="" ownerTag=""
prefLifetime="1000"/>
</fvTenant>
Note
For VPC ports, ndPfxP must be a child of l3extMember instead of l3extRsNodeL3OutAtt. The following code snippet
shows the configuration in a VPC setup.
<l3extLNodeP name="lnodeP001">
<l3extRsNodeL3OutAtt rtrId="11.11.205.1" rtrIdLoopBack="yes" tDn="topology/pod-2/node-2011"/>
<l3extRsNodeL3OutAtt rtrId="12.12.205.1" rtrIdLoopBack="yes" tDn="topology/pod-2/node-2012"/>
<l3extLIfP name="lifP002">
<l3extRsPathL3OutAtt addr="0.0.0.0" encap="vlan-205" ifInstT="ext-svi" llAddr="::"
mac="00:22:BD:F8:19:FF" mode="regular" mtu="inherit"
tDn="topology/pod-2/protpaths-2011-2012/pathep-[vpc7]" >
<l3extMember addr="2001:20:25:1::1/64" descr="" llAddr="::" name="" nameAlias="" side="A">
<ndPfxP >
<ndRsPfxPToNdPfxPol tnNdPfxPolName="NDPfxPol001"/>
</ndPfxP>
</l3extMember>
<l3extMember addr="2001:20:25:1::2/64" descr="" llAddr="::" name="" nameAlias="" side="B">
<ndPfxP >
<ndRsPfxPToNdPfxPol tnNdPfxPolName="NDPfxPol001"/>
</ndPfxP>
</l3extMember>
</l3extRsPathL3OutAtt>
<l3extRsNdIfPol tnNdIfPolName="NDPol001"/> </l3extLIfP>
</l3extLNodeP>
Configuring Neighbor Discovery Duplicate Address Detection Using the REST API
Procedure
Step 1 Disable the Neighbor Discovery Duplicate Address Detection process for a subnet by changing the value of the ipv6Dad
entry for that subnet to disabled.
The following example shows how to set the Neighbor Discovery Duplicate Address Detection entry for the
2001:DB8:A::11/64 subnet to disabled:
Note
In the following REST API example, long single lines of text are broken up with the \ character to improve readability.
Example:
</l3extRsPathL3OutAtt>
</l3extLIfP>
</l3extLNodeP>
Step 2 Enter the show ipv6 int command on the leaf switch to verify that the configuration was pushed out correctly to the leaf
switch. For example:
swtb23-leaf5# show ipv6 int vrf icmpv6:v1
IPv6 Interface Status for VRF "icmpv6:v1"(9)
IPv6 address:
2001:DB8:A::2/64 [VALID] [PREFERRED]
2001:DB8:A::11/64 [VALID] [dad-disabled]
IPv6 subnet: 2001:DB8:A::/64
IPv6 link-local address: fe80::863d:c6ff:fe9f:eb8b/10 (Default) [VALID]
Procedure
To configure Microsoft NLB in unicast mode, send a post with XML such as the following example:
Example:
https://apic-ip-address/api/node/mo/uni/.xml
<polUni>
<fvTenant name="tn2" >
<fvCtx name="ctx1"/>
<fvBD name="bd2">
<fvRsCtx tnFvCtxName="ctx1" />
</fvBD>
<fvAp name = "ap1">
<fvAEPg name = "ep1">
<fvRsBd tnFvBDName = "bd2"/>
<fvSubnet ip="10.0.1.1/32" scope="public" ctrl="no-default-gateway">
<fvEpNlb mac="12:21:21:35" mode="mode-uc"/>
</fvSubnet>
</fvAEPg>
</fvAp>
</fvTenant>
</polUni>
Procedure
To configure Microsoft NLB in multicast mode, send a post with XML such as the following example:
Example:
https://apic-ip-address/api/node/mo/uni/.xml
<polUni>
<fvTenant name="tn2" >
<fvCtx name="ctx1"/>
<fvBD name="bd2">
<fvRsCtx tnFvCtxName="ctx1" />
</fvBD>
<fvAp name = "ap1">
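<!-- The post above is truncated here. By analogy with the unicast and IGMP-mode examples in the
adjacent procedures, the remainder of such a post might look like the following sketch; the mac
value and the mode-mcast--static mode string are assumptions, not confirmed by this guide. -->
<fvAEPg name = "ep1">
<fvRsBd tnFvBDName = "bd2"/>
<fvSubnet ip="10.0.1.2/32" scope="public" ctrl="no-default-gateway">
<fvEpNlb mac="03:bf:01:02:03:04" mode="mode-mcast--static"/>
</fvSubnet>
</fvAEPg>
</fvAp>
</fvTenant>
</polUni>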
Procedure
To configure Microsoft NLB in IGMP mode, send a post with XML such as the following example:
Example:
https://apic-ip-address/api/node/mo/uni/.xml
<polUni>
<fvTenant name="tn2" >
<fvCtx name="ctx1"/>
<fvBD name="bd2">
<fvRsCtx tnFvCtxName="ctx1" />
</fvBD>
<fvAp name = "ap1">
<fvAEPg name = "ep1">
<fvRsBd tnFvBDName = "bd2"/>
<fvSubnet ip="10.0.1.3/32" scope="public" ctrl="no-default-gateway">
<fvEpNlb group ="224.132.18.17" mode="mode-mcast-igmp" />
</fvSubnet>
</fvAEPg>
</fvAp>
</fvTenant>
</polUni>
SUMMARY STEPS
1. To configure an IGMP Snooping policy and assign it to a bridge domain, send a post with XML such as
the following example:
DETAILED STEPS
Procedure
To configure an IGMP Snooping policy and assign it to a bridge domain, send a post with XML such as the following
example:
Example:
https://apic-ip-address/api/node/mo/uni/.xml
<fvTenant name="mcast_tenant1">
<!-- Create an IGMP snooping template, and provide the options -->
<igmpSnoopPol name="igmp_snp_bd_21"
ver="v2"
adminSt="enabled"
lastMbrIntvl="1"
queryIntvl="125"
rspIntvl="10"
startQueryCnt="2"
startQueryIntvl="31"
/>
<fvCtx name="ip_video"/>
<fvBD name="bd_21">
<fvRsCtx tnFvCtxName="ip_video"/>
<!-- Bind IGMP snooping to a BD -->
<fvRsIgmpsn tnIgmpSnoopPolName="igmp_snp_bd_21"/>
</fvBD></fvTenant>
This example creates and configures the IGMP Snooping policy igmp_snp_bd_21 with the following properties, and
binds it to bridge domain bd_21:
• Administrative state is enabled
• Last Member Query Interval is the default 1 second
• Query Interval is the default 125 seconds
• Query Response interval is the default 10 seconds
• The Start Query Count is the default 2 messages
• The Start Query interval is 31 seconds
• The Querier version is v2
Enabling IGMP Snooping and Multicast on Static Ports Using the REST API
You can enable IGMP snooping and multicast processing on ports that have been statically assigned to an
EPG. You can create and assign access groups of users that are permitted or denied access to the IGMP snoop
and multicast traffic enabled on those ports.
SUMMARY STEPS
1. To configure application EPGs with static ports, enable those ports to receive and process IGMP snooping
and multicast traffic, and assign groups to access or be denied access to that traffic, send a post with XML
such as the following example.
DETAILED STEPS
Procedure
To configure application EPGs with static ports, enable those ports to receive and process IGMP snooping and multicast
traffic, and assign groups to access or be denied access to that traffic, send a post with XML such as the following example.
In the following example, IGMP snooping is enabled on leaf 102 interface 1/10 on VLAN 202. Multicast IP addresses
224.1.1.1 and 225.1.1.1 are associated with this port.
Example:
https://apic-ip-address/api/node/mo/uni/.xml
<fvTenant name="tenant_A">
<fvAp name="application">
<fvAEPg name="epg_A">
<fvRsPathAtt encap="vlan-202" instrImedcy="immediate" mode="regular"
tDn="topology/pod-1/paths-102/pathep-[eth1/10]">
<!-- IGMP snooping static group case -->
<igmpSnoopStaticGroup group="224.1.1.1" source="0.0.0.0"/>
<igmpSnoopStaticGroup group="225.1.1.1" source="2.2.2.2"/>
</fvRsPathAtt>
</fvAEPg>
</fvAp>
</fvTenant>
Enabling Group Access to IGMP Snooping and Multicast using the REST API
After you have enabled IGMP snooping and multicast on ports that have been statically assigned to an EPG,
you can then create and assign access groups of users that are permitted or denied access to the IGMP snooping
and multicast traffic enabled on those ports.
Procedure
To define the access group, F23broker, send a post with XML such as in the following example.
The example configures access group F23broker, associated with tenant_A, Rmap_A, application_A, epg_A, on leaf
102, interface 1/10, VLAN 202. By association with Rmap_A, the access group F23broker has access to multicast traffic
received at multicast address 226.1.1.1/24 and is denied access to traffic received at multicast address 227.1.1.1/24.
Example:
<!-- api/node/mo/uni/.xml -->
<fvTenant name="tenant_A">
  <pimRouteMapPol name="Rmap_A">
    <pimRouteMapEntry action="permit" grp="226.1.1.1/24" order="10"/>
    <pimRouteMapEntry action="deny" grp="227.1.1.1/24" order="20"/>
  </pimRouteMapPol>
  <fvAp name="application_A">
    <fvAEPg name="epg_A">
      <fvRsPathAtt encap="vlan-202"
Procedure
To configure an MLD Snooping policy and assign it to a bridge domain, send a post with XML such as the following
example:
Example:
https://apic-ip-address/api/node/mo/uni/.xml
<fvTenant name="mldsn">
<mldSnoopPol adminSt="enabled" ctrl="fast-leave,querier" name="mldsn-it-fabric-querier-policy"
queryIntvl="125"
rspIntvl="10" startQueryCnt="2" startQueryIntvl="31" status=""/>
<fvBD name="mldsn-bd3">
<fvRsMldsn status="" tnMldSnoopPolName="mldsn-it-policy"/>
</fvBD>
</fvTenant>
This example creates and configures the MLD Snooping policy mldsn-it-fabric-querier-policy in tenant mldsn with the
following properties, and binds the MLD Snooping policy to bridge domain mldsn-bd3:
• Fast leave processing is enabled
• Querier processing is enabled
• Query Interval is set at 125
• Max query response time is set at 10
• Number of initial queries to send is set at 2
• Time for sending initial queries is set at 31
Procedure
Example:
<fvTenant dn="uni/tn-PIM_Tenant" name="PIM_Tenant">
<fvCtx knwMcastAct="permit" name="ctx1">
<pimCtxP mtu="1500">
</pimCtxP>
</fvCtx>
</fvTenant>
Step 2 Configure L3 Out and enable multicast (PIM, IGMP) on the L3 Out.
Example:
<l3extOut enforceRtctrl="export" name="l3out-pim_l3out1">
<l3extRsEctx tnFvCtxName="ctx1"/>
<l3extLNodeP configIssues="" name="bLeaf-CTX1-101">
<l3extRsNodeL3OutAtt rtrId="200.0.0.1" rtrIdLoopBack="yes" tDn="topology/pod-1/node-101"/>
<l3extLIfP name="if-PIM_Tenant-CTX1" tag="yellow-green">
<igmpIfP/>
<pimIfP>
<pimRsIfPol tDn="uni/tn-PIM_Tenant/pimifpol-pim_pol1"/>
</pimIfP>
<l3extRsPathL3OutAtt addr="131.1.1.1/24" ifInstT="l3-port" mode="regular" mtu="1500"
tDn="topology/pod-1/paths-101/pathep-[eth1/46]"/>
</l3extLIfP>
</l3extLNodeP>
<l3extRsL3DomAtt tDn="uni/l3dom-l3outDom"/>
<l3extInstP name="l3out-PIM_Tenant-CTX1-1topo" >
</l3extInstP>
<pimExtP enabledAf="ipv4-mcast" name="pim"/>
</l3extOut>
Step 3 Configure a BD under the tenant and enable multicast and IGMP on the BD.
Example:
<fvTenant dn="uni/tn-PIM_Tenant" name="PIM_Tenant">
<fvBD arpFlood="yes" mcastAllow="yes" multiDstPktAct="bd-flood" name="bd2" type="regular"
unicastRoute="yes" unkMacUcastAct="flood" unkMcastAct="flood">
<igmpIfP/>
<fvRsBDToOut tnL3extOutName="l3out-pim_l3out1"/>
<fvRsCtx tnFvCtxName="ctx1"/>
<fvRsIgmpsn/>
<fvSubnet ctrl="" ip="41.1.1.254/24" preferred="no" scope="private" virtual="no"/>
</fvBD>
</fvTenant>
Example:
Configuring a static RP:
<fvTenant dn="uni/tn-PIM_Tenant" name="PIM_Tenant">
<pimRouteMapPol name="rootMap">
<pimRouteMapEntry action="permit" grp="224.0.0.0/4" order="10" rp="0.0.0.0" src="0.0.0.0/0"/>
</pimRouteMapPol>
<fvCtx knwMcastAct="permit" name="ctx1">
<pimCtxP ctrl="" mtu="1500">
<pimStaticRPPol>
<pimStaticRPEntryPol rpIp="131.1.1.2">
<pimRPGrpRangePol>
<rtdmcRsFilterToRtMapPol tDn="uni/tn-PIM_Tenant/rtmap-rootMap"/>
</pimRPGrpRangePol>
</pimStaticRPEntryPol>
</pimStaticRPPol>
</pimCtxP>
</fvCtx>
</fvTenant>
<fvTenant name="t0">
<pimRouteMapPol name="fabricrp-rtmap">
<pimRouteMapEntry grp="226.20.0.0/24" order="1" />
</pimRouteMapPol>
<fvCtx name="ctx1">
<pimCtxP ctrl="">
<pimFabricRPPol status="">
<pimStaticRPEntryPol rpIp="6.6.6.6">
<pimRPGrpRangePol>
<rtdmcRsFilterToRtMapPol tDn="uni/tn-t0/rtmap-fabricrp-rtmap" />
</pimRPGrpRangePol>
</pimStaticRPEntryPol>
</pimFabricRPPol>
</pimCtxP>
</fvCtx>
</fvTenant>
<fvTenant name="t0">
<pimRouteMapPol name="intervrf" status="">
<pimRouteMapEntry grp="225.0.0.0/24" order="1" status=""/>
<pimRouteMapEntry grp="226.0.0.0/24" order="2" status=""/>
<pimRouteMapEntry grp="228.0.0.0/24" order="3" status="deleted"/>
</pimRouteMapPol>
<fvCtx name="ctx1">
<pimCtxP ctrl="">
<pimInterVRFPol status="">
<pimInterVRFEntryPol srcVrfDn="uni/tn-t0/ctx-stig_r_ctx" >
<rtdmcRsFilterToRtMapPol tDn="uni/tn-t0/rtmap-intervrf" />
</pimInterVRFEntryPol>
</pimInterVRFPol>
</pimCtxP>
</fvCtx>
</fvTenant>
Procedure
<fvTenant name="t0">
<fvCtx name="ctx1" pcEnfPref="unenforced" >
<pimIPV6CtxP ctrl="" mtu="1500" />
</fvCtx>
</fvTenant>
<fvTenant name="t0">
<pimRouteMapPol dn="uni/tn-t0/rtmap-static_101_ipv6" name="static_101_ipv6">
<pimRouteMapEntry action="permit" grp="ff00::/8" order="1" rp="2001:0:2001:2001:1:1:1:1/128"
src="::"/>
</pimRouteMapPol>
<fvCtx name="ctx1" pcEnfPref="unenforced">
<pimIPV6CtxP ctrl="" mtu="1500">
<pimStaticRPPol>
<pimStaticRPEntryPol rpIp="2001:0:2001:2001:1:1:1:1">
<pimRPGrpRangePol>
<rtdmcRsFilterToRtMapPol tDn="uni/tn-t0/rtmap-static_101_ipv6"/>
</pimRPGrpRangePol>
</pimStaticRPEntryPol>
</pimStaticRPPol>
</pimIPV6CtxP>
</fvCtx>
</fvTenant>
Step 5 Configure a PIM6 interface policy and apply it on the Layer 3 Out.
Example:
Procedure
Step 1 If you want to enable multicast source filtering on the bridge domain, send a post with XML such as the following
example:
Example:
Step 2 If you want to enable multicast receiver filtering on the bridge domain, send a post with XML such as the following
example:
Example:
Note
You can also enable both source and receiver filtering on the same bridge domain by sending a post with XML such as
the following example:
Procedure
<fabricSetupPol status=''>
<fabricSetupP podId="1" tepPool="10.0.0.0/16" />
<fabricSetupP podId="2" tepPool="10.1.0.0/16" status='' />
</fabricSetupPol>
<fabricNodeIdentPol>
<fabricNodeIdentP serial="SAL1819RXP4" name="ifav4-leaf1" nodeId="101" podId="1"/>
<fabricNodeIdentP serial="SAL1803L25H" name="ifav4-leaf2" nodeId="102" podId="1"/>
<fabricNodeIdentP serial="SAL1934MNY0" name="ifav4-leaf3" nodeId="103" podId="1"/>
<fabricNodeIdentP serial="SAL1934MNY3" name="ifav4-leaf4" nodeId="104" podId="1"/>
<fabricNodeIdentP serial="SAL1748H56D" name="ifav4-spine1" nodeId="201" podId="1"/>
<fabricNodeIdentP serial="SAL1938P7A6" name="ifav4-spine3" nodeId="202" podId="1"/>
<fabricNodeIdentP serial="SAL1938PHBB" name="ifav4-leaf5" nodeId="105" podId="2"/>
<fabricNodeIdentP serial="SAL1942R857" name="ifav4-leaf6" nodeId="106" podId="2"/>
<fabricNodeIdentP serial="SAL1931LA3B" name="ifav4-spine2" nodeId="203" podId="2"/>
<fabricNodeIdentP serial="FGE173400A9" name="ifav4-spine4" nodeId="204" podId="2"/>
</fabricNodeIdentPol>
<polUni>
<l3extLNodeP name="bSpine">
<l3extLIfP name='portIf'>
<l3extRsPathL3OutAtt descr='asr' tDn="topology/pod-1/paths-201/pathep-[eth1/1]"
encap='vlan-4' ifInstT='sub-interface' addr="201.1.1.1/30" />
<l3extRsPathL3OutAtt descr='asr' tDn="topology/pod-1/paths-201/pathep-[eth1/2]"
encap='vlan-4' ifInstT='sub-interface' addr="201.2.1.1/30" />
<l3extRsPathL3OutAtt descr='asr' tDn="topology/pod-1/paths-202/pathep-[eth1/2]"
encap='vlan-4' ifInstT='sub-interface' addr="202.1.1.1/30" />
<l3extRsPathL3OutAtt descr='asr' tDn="topology/pod-2/paths-203/pathep-[eth1/1]"
encap='vlan-4' ifInstT='sub-interface' addr="203.1.1.1/30" />
<l3extRsPathL3OutAtt descr='asr' tDn="topology/pod-2/paths-203/pathep-[eth1/2]"
encap='vlan-4' ifInstT='sub-interface' addr="203.2.1.1/30" />
<l3extRsPathL3OutAtt descr='asr' tDn="topology/pod-2/paths-204/pathep-[eth4/31]"
encap='vlan-4' ifInstT='sub-interface' addr="204.1.1.1/30" />
<ospfIfP>
<ospfRsIfPol tnOspfIfPolName='ospfIfPol'/>
</ospfIfP>
</l3extLIfP>
</l3extLNodeP>
Procedure
Step 1 To define the TEP pool for two remote leaf switches to be connected to a pod, send a post with XML such as the following
example:
Example:
<fabricSetupPol>
<fabricSetupP tepPool="10.0.0.0/16" podId="1" >
<fabricExtSetupP tepPool="30.0.128.0/20" extPoolId="1"/>
</fabricSetupP>
<fabricSetupP tepPool="10.1.0.0/16" podId="2" >
<fabricExtSetupP tepPool="30.1.128.0/20" extPoolId="1"/>
</fabricSetupP>
</fabricSetupPol>
Step 2 To define the node identity policy, send a post with XML, such as the following example:
Example:
<fabricNodeIdentPol>
<fabricNodeIdentP serial="SAL17267Z7W" name="leaf1" nodeId="101" podId="1"
extPoolId="1" nodeType="remote-leaf-wan"/>
<fabricNodeIdentP serial="SAL17267Z7X" name="leaf2" nodeId="102" podId="1"
extPoolId="1" nodeType="remote-leaf-wan"/>
Step 3 To configure the Fabric External Connection Profile, send a post with XML such as the following example:
Example:
<?xml version="1.0" encoding="UTF-8"?>
<imdata totalCount="1">
<fvFabricExtConnP dn="uni/tn-infra/fabricExtConnP-1" id="1" name="Fabric_Ext_Conn_Pol1"
rt="extended:as2-nn4:5:16" siteId="0">
<l3extFabricExtRoutingP name="test">
<l3extSubnet ip="150.1.0.0/16" scope="import-security"/>
</l3extFabricExtRoutingP>
<l3extFabricExtRoutingP name="ext_routing_prof_1">
<l3extSubnet ip="204.1.0.0/16" scope="import-security"/>
<l3extSubnet ip="209.2.0.0/16" scope="import-security"/>
<l3extSubnet ip="202.1.0.0/16" scope="import-security"/>
<l3extSubnet ip="207.1.0.0/16" scope="import-security"/>
<l3extSubnet ip="200.0.0.0/8" scope="import-security"/>
<l3extSubnet ip="201.2.0.0/16" scope="import-security"/>
<l3extSubnet ip="210.2.0.0/16" scope="import-security"/>
<l3extSubnet ip="209.1.0.0/16" scope="import-security"/>
<l3extSubnet ip="203.2.0.0/16" scope="import-security"/>
<l3extSubnet ip="208.1.0.0/16" scope="import-security"/>
<l3extSubnet ip="207.2.0.0/16" scope="import-security"/>
<l3extSubnet ip="100.0.0.0/8" scope="import-security"/>
<l3extSubnet ip="201.1.0.0/16" scope="import-security"/>
<l3extSubnet ip="210.1.0.0/16" scope="import-security"/>
<l3extSubnet ip="203.1.0.0/16" scope="import-security"/>
<l3extSubnet ip="208.2.0.0/16" scope="import-security"/>
</l3extFabricExtRoutingP>
<fvPodConnP id="1">
<fvIp addr="100.11.1.1/32"/>
</fvPodConnP>
<fvPodConnP id="2">
<fvIp addr="200.11.1.1/32"/>
</fvPodConnP>
<fvPeeringP type="automatic_with_full_mesh"/>
</fvFabricExtConnP>
</imdata>
Step 4 To configure an L3Out on VLAN-4, which is required for both the remote leaf switches and the spine switch connected to
the WAN router, send a post with XML such as the following example:
Example:
<?xml version="1.0" encoding="UTF-8"?>
<polUni>
<fvTenant name="infra">
<l3extOut name="rleaf-wan-test">
<ospfExtP areaId="0.0.0.5"/>
<bgpExtP/>
<l3extRsEctx tnFvCtxName="overlay-1"/>
<l3extRsL3DomAtt tDn="uni/l3dom-l3extDom1"/>
<l3extProvLbl descr="" name="prov_mp1" ownerKey="" ownerTag="" tag="yellow-green"/>
<l3extLNodeP name="rleaf-101">
<l3extRsNodeL3OutAtt rtrId="202.202.202.202" tDn="topology/pod-1/node-101">
</l3extRsNodeL3OutAtt>
<l3extLIfP name="portIf">
<l3extRsPathL3OutAtt ifInstT="sub-interface" tDn="topology/pod-1/paths-101/pathep-[eth1/49]"
addr="202.1.1.2/30" mac="AA:11:22:33:44:66" encap='vlan-4'/>
<ospfIfP>
<ospfRsIfPol tnOspfIfPolName='ospfIfPol'/>
</ospfIfP>
</l3extLIfP>
</l3extLNodeP>
<l3extLNodeP name="rlSpine-201">
<l3extRsNodeL3OutAtt rtrId="201.201.201.201" rtrIdLoopBack="no" tDn="topology/pod-1/node-201">
<!--
<l3extLoopBackIfP addr="201::201/128" descr="" name=""/>
<l3extLoopBackIfP addr="201.201.201.201/32" descr="" name=""/>
-->
<l3extLoopBackIfP addr="::" />
</l3extRsNodeL3OutAtt>
<l3extLIfP name="portIf">
<l3extRsPathL3OutAtt ifInstT="sub-interface" tDn="topology/pod-1/paths-201/pathep-[eth8/36]"
addr="201.1.1.1/30" mac="00:11:22:33:77:55" encap='vlan-4'/>
<ospfIfP>
<ospfRsIfPol tnOspfIfPolName='ospfIfPol'/>
</ospfIfP>
</l3extLIfP>
</l3extLNodeP>
<l3extInstP descr="" matchT="AtleastOne" name="instp1" prio="unspecified" targetDscp="unspecified">
<fvRsCustQosPol tnQosCustomPolName=""/>
</l3extInstP>
</l3extOut>
<ospfIfPol name="ospfIfPol" nwT="bcast"/>
</fvTenant>
</polUni>
Step 5 To configure the multipod L3Out on VLAN-5, which is required for both the multipod and remote leaf topologies, send a
post such as the following example:
Example:
<?xml version="1.0" encoding="UTF-8"?>
<polUni>
<l3extLoopBackIfP addr="202.202.202.212"/>
</l3extRsNodeL3OutAtt>
<l3extRsNodeL3OutAtt rtrId="102.102.102.102" rtrIdLoopBack="no" tDn="topology/pod-1/node-102">
<l3extLoopBackIfP addr="102.102.102.112"/>
</l3extRsNodeL3OutAtt>
<l3extLIfP name="portIf">
<ospfIfP authKeyId="1" authType="none">
<ospfRsIfPol tnOspfIfPolName="ospfIfPol" />
</ospfIfP>
<l3extRsPathL3OutAtt addr="10.0.254.233/30" encap="vlan-5" ifInstT="sub-interface"
tDn="topology/pod-2/paths-202/pathep-[eth5/2]"/>
<l3extRsPathL3OutAtt addr="10.0.255.229/30" encap="vlan-5" ifInstT="sub-interface"
tDn="topology/pod-1/paths-102/pathep-[eth5/2]"/>
</l3extLIfP>
</l3extLNodeP>
<l3extInstP matchT="AtleastOne" name="ipnInstP" />
</l3extOut>
</fvTenant>
</polUni>
You will configure the following pieces when configuring the SR-MPLS infra L3Out:
• Nodes
• Only leaf switches are allowed to be configured as nodes in the SR-MPLS infra L3Out (border leaf
switches and remote leaf switches).
• Each SR-MPLS infra L3Out can have border leaf switches from one pod or remote leaf switches from
the same site.
• Each border leaf switch or remote leaf switch can be configured in multiple SR-MPLS infra L3Outs
if it connects to multiple SR-MPLS domains.
• You will also configure the loopback interface underneath the node, and a node SID policy underneath
the loopback interface.
• Interfaces
• Supported types of interfaces are:
• Routed interface or sub-interface
• Routed port channel or port channel sub-interface
• QoS rules
• You can configure the MPLS ingress rule and MPLS egress rule through the MPLS QoS policy in
the SR-MPLS infra L3Out.
• If you do not create an MPLS QoS policy, any ingressing MPLS traffic is assigned the default QoS
level.
You will also configure the underlay and overlay through the SR-MPLS infra L3Out:
• Underlay: BGP peer IP (BGP LU peer) configuration as part of the interface configuration.
• Overlay: MP-BGP EVPN remote IPv4 address (MP-BGP EVPN peer) configuration as part of the logical
node profile configuration.
Procedure
<polUni>
<fvTenant name="infra">
<mplsIfPol name="default"/>
<mplsLabelPol name="default" >
<mplsSrgbLabelPol minSrgbLabel="16000" maxSrgbLabel="17000" localId="1" status=""/>
</mplsLabelPol>
<l3extInstP name="mplsInstP">
<l3extSubnet aggregate="" descr="" ip="11.11.11.0/24" name="" scope="import-security"/>
</l3extInstP>
<bgpExtP/>
<l3extRsL3DomAtt tDn="uni/l3dom-l3extDom1" />
</l3extOut>
</fvTenant>
</polUni>
Procedure
<polUni>
<fvTenant name="t1">
<fvCtx name="v1">
<!-- specify bgp evpn route-target -->
<bgpRtTargetP af="ipv4-ucast">
Procedure
<polUni>
<fvTenant name="infra">
<qosMplsCustomPol descr="" dn="uni/tn-infra/qosmplscustom-customqos1" name="customqos1" status=""
>
<qosMplsIngressRule from="2" to="3" prio="level5" target="CS5" targetCos="4" status="" />
<qosMplsEgressRule from="CS2" to="CS4" targetExp="5" targetCos="3" status=""/>
</qosMplsCustomPol>
</fvTenant>
</polUni>
Procedure
<bgpInstPol name="default">
<bgpAsP asn="1" />
<bgpRRP>
<bgpRRNodePEp id="<spine_id1>"/>
<bgpRRNodePEp id="<spine_id2>"/>
</bgpRRP>
</bgpInstPol>
<fabricFuncP>
<fabricPodPGrp name="bgpRRPodGrp”>
<fabricRsPodPGrpBGPRRP tnBgpInstPolName="default" />
</fabricPodPGrp>
</fabricFuncP>
Example:
For the PodP setup:
POST https://apic-ip-address/api/policymgr/mo/uni.xml
<fabricPodP name="default">
<fabricPodS name="default" type="ALL">
<fabricRsPodPGrp tDn="uni/fabric/funcprof/podpgrp-bgpRRPodGrp"/>
</fabricPodS>
</fabricPodP>
Configuring the BGP Domain-Path Feature for Loop Prevention Using the REST API
Procedure
Step 1 If you want to use the BGP Domain-Path feature for loop prevention, set the global DomainIdBase.
<polUni>
<fabricInst>
<bgpInstPol name="default">
<bgpDomainIdBase domainIdBase="12346" />
</bgpInstPol>
</fabricInst>
</polUni>
Note In the following REST API example, long single lines of text are broken up with the \ character to improve
readability.
Procedure
To configure a Layer 3 route to the port channels that you created previously using the REST API, send a post with XML
such as the following:
Example:
<polUni>
<fvTenant name="pep9">
<l3extOut descr="" dn="uni/tn-pep9/out-routAccounting" enforceRtctrl="export" \
name="routAccounting" nameAlias="" ownerKey="" ownerTag="" \
targetDscp="unspecified">
<l3extRsL3DomAtt tDn="uni/l3dom-Dom1"/>
<l3extRsEctx tnFvCtxName="ctx9"/>
<l3extLNodeP configIssues="" descr="" name="node101" nameAlias="" ownerKey="" \
ownerTag="" tag="yellow-green" targetDscp="unspecified">
<l3extRsNodeL3OutAtt rtrId="10.1.0.101" rtrIdLoopBack="yes" \
tDn="topology/pod-1/node-101">
<l3extInfraNodeP descr="" fabricExtCtrlPeering="no" \
fabricExtIntersiteCtrlPeering="no" name="" nameAlias="" spineRole=""/>
</l3extRsNodeL3OutAtt>
<l3extLIfP descr="" name="lifp17" nameAlias="" ownerKey="" ownerTag="" \
tag="yellow-green">
<ospfIfP authKeyId="1" authType="none" descr="" name="" nameAlias="">
<ospfRsIfPol tnOspfIfPolName=""/>
</ospfIfP>
<l3extRsPathL3OutAtt addr="10.1.5.3/24" autostate="disabled" descr="" \
encap="unknown" encapScope="local" ifInstT="l3-port" llAddr="::" \
mac="00:22:BD:F8:19:FF" mode="regular" mtu="inherit" \
tDn="topology/pod-1/paths-101/pathep-[po17_PolGrp]" \
targetDscp="unspecified"/>
<l3extRsNdIfPol tnNdIfPolName=""/>
<l3extRsIngressQosDppPol tnQosDppPolName=""/>
<l3extRsEgressQosDppPol tnQosDppPolName=""/>
</l3extLIfP>
</l3extLNodeP>
<l3extInstP descr="" floodOnEncap="disabled" matchT="AtleastOne" \
name="accountingInst" nameAlias="" prefGrMemb="exclude" prio="unspecified" \
targetDscp="unspecified">
<fvRsProv matchT="AtleastOne" prio="unspecified" tnVzBrCPName="webCtrct"/>
<l3extSubnet aggregate="export-rtctrl,import-rtctrl" descr="" ip="0.0.0.0/0" \
name="" nameAlias="" scope="export-rtctrl,import-rtctrl,import-security"/>
<l3extSubnet aggregate="export-rtctrl,import-rtctrl" descr="" ip="::/0" \
name="" nameAlias="" scope="export-rtctrl,import-rtctrl,import-security"/>
<fvRsCustQosPol tnQosCustomPolName=""/>
</l3extInstP>
<l3extConsLbl descr="" name="golf" nameAlias="" owner="infra" ownerKey="" \
ownerTag="" tag="yellow-green"/>
</l3extOut>
</fvTenant>
</polUni>
• An APIC fabric administrator account is available that will enable creating the necessary fabric
infrastructure configurations.
• The target leaf switches are registered in the ACI fabric and available.
• Port channels are configured using the procedures in "Configuring Port Channels Using the REST API".
Note In the following REST API example, long single lines of text are broken up with the \ character to improve
readability.
Procedure
To configure a Layer 3 sub-interface route to the port channels that you created previously using the REST API, send a
post with XML such as the following:
Example:
<polUni>
<fvTenant name="pep9">
<l3extOut descr="" dn="uni/tn-pep9/out-routAccounting" enforceRtctrl="export" \
name="routAccounting" nameAlias="" ownerKey="" ownerTag="" targetDscp="unspecified">
<l3extRsL3DomAtt tDn="uni/l3dom-Dom1"/>
<l3extRsEctx tnFvCtxName="ctx9"/>
<l3extLNodeP configIssues="" descr="" name="node101" nameAlias="" ownerKey="" \
ownerTag="" tag="yellow-green" targetDscp="unspecified">
<l3extRsNodeL3OutAtt rtrId="10.1.0.101" rtrIdLoopBack="yes" \
tDn="topology/pod-1/node-101">
<l3extInfraNodeP descr="" fabricExtCtrlPeering="no" \
fabricExtIntersiteCtrlPeering="no" name="" nameAlias="" spineRole=""/>
</l3extRsNodeL3OutAtt>
<l3extLIfP descr="" name="lifp27" nameAlias="" ownerKey="" ownerTag="" \
tag="yellow-green">
<ospfIfP authKeyId="1" authType="none" descr="" name="" nameAlias="">
<ospfRsIfPol tnOspfIfPolName=""/>
</ospfIfP>
<l3extRsPathL3OutAtt addr="11.1.5.3/24" autostate="disabled" descr="" \
encap="vlan-2001" encapScope="local" ifInstT="sub-interface" \
llAddr="::" mac="00:22:BD:F8:19:FF" mode="regular" mtu="inherit" \
tDn="topology/pod-1/paths-101/pathep-[po27_PolGrp]" \
targetDscp="unspecified"/>
<l3extRsNdIfPol tnNdIfPolName=""/>
<l3extRsIngressQosDppPol tnQosDppPolName=""/>
<l3extRsEgressQosDppPol tnQosDppPolName=""/>
</l3extLIfP>
</l3extLNodeP>
<l3extInstP descr="" floodOnEncap="disabled" matchT="AtleastOne" \
name="accountingInst" nameAlias="" prefGrMemb="exclude" prio="unspecified" \
targetDscp="unspecified">
<fvRsProv matchT="AtleastOne" prio="unspecified" tnVzBrCPName="webCtrct"/>
<l3extSubnet aggregate="export-rtctrl,import-rtctrl" descr="" ip="0.0.0.0/0" \
name="" nameAlias="" scope="export-rtctrl,import-rtctrl,import-security"/>
<l3extSubnet aggregate="export-rtctrl,import-rtctrl" descr="" ip="::/0" \
name="" nameAlias="" scope="export-rtctrl,import-rtctrl,import-security"/>
<fvRsCustQosPol tnQosCustomPolName=""/>
</l3extInstP>
<l3extConsLbl descr="" name="golf" nameAlias="" owner="infra" ownerKey="" \
ownerTag="" tag="yellow-green"/>
</l3extOut>
</fvTenant>
</polUni>
Procedure
• A Layer 3 Out is configured, and a logical node profile and a logical interface profile are configured under the Layer 3
Out.
Procedure
To disable the autostate, you must change the value to disabled in the above example. For example, autostate="disabled".
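Because the referenced example is not reproduced here, the following is a minimal sketch of an SVI path attachment with autostate disabled; the node, interface, VLAN, and address values are placeholders.
<l3extRsPathL3OutAtt addr="10.1.1.1/24" autostate="disabled" encap="vlan-100"
 encapScope="local" ifInstT="ext-svi" mode="regular" mtu="inherit"
 tDn="topology/pod-1/paths-101/pathep-[eth1/5]"/>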
Procedure
Example:
<l3extRsEgressQosDppPol tnQosDppPolName=""/>
<l3extRsPathL3OutAtt addr="3001::31:0:1:2/120" descr="" encap="vlan-3001" encapScope="local"
ifInstT="sub-interface" llAddr="::" mac="00:22:BD:F8:19:FF" mode="regular" mtu="inherit"
tDn="topology/pod-1/paths-101/pathep-[eth1/8]" targetDscp="unspecified">
<bgpPeerP addr="3001::31:0:1:0/120" allowedSelfAsCnt="3" ctrl="send-com,send-ext-com" descr=""
name="" peerCtrl="bfd" privateASctrl="remove-all,remove-exclusive,replace-as" ttl="1" weight="1000">
<bgpRsPeerPfxPol tnBgpPeerPfxPolName=""/>
<bgpAsP asn="3001" descr="" name=""/>
</bgpPeerP>
</l3extRsPathL3OutAtt>
</l3extLIfP>
<l3extLIfP descr="" name="l3extLIfP_1" ownerKey="" ownerTag="" tag="yellow-green">
<l3extRsNdIfPol tnNdIfPolName=""/>
<l3extRsIngressQosDppPol tnQosDppPolName=""/>
<l3extRsEgressQosDppPol tnQosDppPolName=""/>
<l3extRsPathL3OutAtt addr="31.0.1.2/24" descr="" encap="vlan-3001" encapScope="local"
ifInstT="sub-interface" llAddr="::" mac="00:22:BD:F8:19:FF" mode="regular" mtu="inherit"
tDn="topology/pod-1/paths-101/pathep-[eth1/8]" targetDscp="unspecified">
<bgpPeerP addr="31.0.1.0/24" allowedSelfAsCnt="3" ctrl="send-com,send-ext-com" descr="" name=""
peerCtrl="" privateASctrl="remove-all,remove-exclusive,replace-as" ttl="1" weight="100">
<bgpRsPeerPfxPol tnBgpPeerPfxPolName=""/>
<bgpLocalAsnP asnPropagate="none" descr="" localAsn="200" name=""/>
<bgpAsP asn="3001" descr="" name=""/>
</bgpPeerP>
</l3extRsPathL3OutAtt>
</l3extLIfP>
</l3extLNodeP>
<l3extRsL3DomAtt tDn="uni/l3dom-l3-dom"/>
<l3extRsDampeningPol af="ipv6-ucast" tnRtctrlProfileName="damp_rp"/>
<l3extRsDampeningPol af="ipv4-ucast" tnRtctrlProfileName="damp_rp"/>
<l3extInstP descr="" matchT="AtleastOne" name="l3extInstP_1" prio="unspecified"
targetDscp="unspecified">
<l3extSubnet aggregate="" descr="" ip="130.130.130.0/24" name="" scope="import-rtctrl"></l3extSubnet>
The two properties that enable you to configure more paths are maxEcmp and maxEcmpIbgp in the bgpCtxAfPol
object. After you configure these two properties, they are propagated to the rest of your implementation. The
ECMP policy is applied at the VRF level.
The following example provides information on how to configure the BGP Max Path feature using the REST
API:
<l3extOut name="out1">
<rtctrlProfile name="rp1">
<rtctrlCtxP name="ctxp1" order="1">
<rtctrlScope>
<rtctrlRsScopeToAttrP tnRtctrlAttrPName="attrp1"/>
</rtctrlScope>
</rtctrlCtxP>
</rtctrlProfile>
</l3extOut>
</fvTenant>
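The preceding snippet does not show the ECMP properties themselves. The following is a minimal sketch, assuming a bgpCtxAfPol policy that sets maxEcmp and maxEcmpIbgp and that is applied to the VRF through its BGP address family context relation; the tenant, VRF, and policy names and the path counts are placeholders.
<fvTenant name="t1">
 <!-- ECMP policy; the value 16 is a placeholder -->
 <bgpCtxAfPol name="bgpMaxPathPol" maxEcmp="16" maxEcmpIbgp="16"/>
 <fvCtx name="v1">
  <fvRsCtxToBgpCtxAfPol af="ipv4-ucast" tnBgpCtxAfPolName="bgpMaxPathPol"/>
 </fvCtx>
</fvTenant>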
Configuring BGP External Routed Network with Autonomous System Override Enabled Using the REST API
SUMMARY STEPS
1. Configure the BGP External Routed Network with Autonomous System override enabled.
DETAILED STEPS
Procedure
Configure the BGP External Routed Network with Autonomous System override enabled.
Note
The bgpPeerP line with ctrl="as-override,disable-peer-as-check,send-com,send-ext-com" shows the BGP AS override portion of the
configuration. This feature was introduced in Cisco APIC Release 3.1(2m).
Example:
<fvTenant name="coke">
<fvBD name="cokeBD">
<!-- Association from Bridge Domain to Private Network -->
<fvRsCtx tnFvCtxName="coke" />
<fvRsBDToOut tnL3extOutName="routAccounting" />
<!-- Subnet behind the bridge domain-->
<fvSubnet ip="20.1.1.1/16" scope="public"/>
<fvSubnet ip="2000:1::1/64" scope="public"/>
</fvBD>
<fvBD name="cokeBD2">
<!-- Association from Bridge Domain to Private Network -->
<fvRsCtx tnFvCtxName="coke" />
<fvRsBDToOut tnL3extOutName="routAccounting" />
<!-- Subnet behind the bridge domain-->
<fvSubnet ip="30.1.1.1/16" scope="public"/>
</fvBD>
<vzBrCP name="webCtrct" scope="global">
<vzSubj name="http">
<vzRsSubjFiltAtt tnVzFilterName="default"/>
</vzSubj>
</vzBrCP>
/>
<fvRsProv tnVzBrCPName="webCtrct"/>
</l3extInstP>
<l3extRsEctx tnFvCtxName="coke"/>
</l3extOut>
<fvAp name="cokeAp">
<fvAEPg name="cokeEPg" >
<fvRsBd tnFvBDName="cokeBD" />
<fvRsPathAtt tDn="topology/pod-1/paths-103/pathep-[eth1/20]" encap="vlan-100"
instrImedcy="immediate" mode="regular"/>
<fvRsCons tnVzBrCPName="webCtrct"/>
</fvAEPg>
<fvAEPg name="cokeEPg2" >
<fvRsBd tnFvBDName="cokeBD2" />
<fvRsPathAtt tDn="topology/pod-1/paths-103/pathep-[eth1/20]" encap="vlan-110"
instrImedcy="immediate" mode="regular"/>
<fvRsCons tnVzBrCPName="webCtrct"/>
</fvAEPg>
</fvAp>
</l3extRsNodeL3OutAtt>
<l3extLIfP name='portIfV4'>
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/17]" encap='vlan-1010'
ifInstT='sub-interface' addr="20.1.12.2/24">
</l3extRsPathL3OutAtt>
</l3extLIfP>
<l3extLIfP name='portIfV6'>
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/17]" encap='vlan-1010'
ifInstT='sub-interface' addr="64:ff9b::1401:302/120">
<bgpPeerP addr="64:ff9b::1401:d03" ctrl="send-com,send-ext-com" />
</l3extRsPathL3OutAtt>
</l3extLIfP>
<bgpPeerP addr="2.2.2.2" ctrl="as-override,disable-peer-as-check, send-com,send-ext-com"
status=""/>
</l3extLNodeP>
<!--
<bgpPeerP addr="2.2.2.2" ctrl="send-com,send-ext-com" status=""/>
-->
<l3extInstP name="accountingInst">
<l3extSubnet ip="192.10.0.0/16" scope="import-security,import-rtctrl" />
<l3extSubnet ip="192.3.3.0/24" scope="import-security,import-rtctrl" />
<l3extSubnet ip="192.4.2.0/24" scope="import-security,import-rtctrl" />
<l3extSubnet ip="64:ff9b::c007:200/120" scope="import-security,import-rtctrl" />
<l3extSubnet ip="192.2.2.0/24" scope="export-rtctrl" />
<l3extSubnet ip="0.0.0.0/0"
scope="export-rtctrl,import-rtctrl,import-security"
aggregate="export-rtctrl,import-rtctrl"
/>
</l3extInstP>
<l3extRsEctx tnFvCtxName="coke"/>
</l3extOut>
</fvTenant>
Configuring BGP Neighbor Shutdown and Soft Reset Using the REST API
Procedure
Step 2 Configure the BGP routing protocol and configure the BGP neighbor shutdown feature.
This example configures BGP as the primary routing protocol, with a BGP peer at IP address 15.15.15.2 and ASN 100.
The adminSt variable can be set to one of the following values (a minimal sketch follows the list below):
• enabled: Enables the BGP neighbor shutdown feature.
• disabled: Disables the BGP neighbor shutdown feature.
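The following is a minimal sketch of the shutdown configuration, assuming that adminSt is set directly on the bgpPeerP object as described above and reusing the peer and ASN from this example:
<l3extOut name="l3out1">
 <l3extLNodeP name="nodep1">
  <!-- adminSt="disabled" shuts down the BGP neighbor -->
  <bgpPeerP addr="15.15.15.2" adminSt="disabled">
   <bgpAsP asn="100"/>
  </bgpPeerP>
 </l3extLNodeP>
 <bgpExtP/>
</l3extOut>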
Procedure
Step 2 Configure the BGP routing protocol and configure the BGP neighbor soft reset feature.
This example configures BGP as the primary routing protocol, with a BGP peer at IP address 15.15.15.2 and
ASN 100.
The dir variable can be set to one of the following:
• in: Enables the soft dynamic inbound reset.
• out: Enables the soft outbound reset.
<l3extOut name="l3out1">
<l3extLNodeP name="nodep1">
<bgpPeerP addr="15.15.15.2">
<bgpAsP asn="100"/>
<bgpPeerEntryClearPeerLTask>
<attributes>
<mode>soft</mode>
<dir>in</dir>
<adminSt>start</adminSt>
</attributes>
<children/>
</bgpPeerEntryClearPeerLTask>
</bgpPeerP>
</l3extLNodeP>
<bgpExtP/>
</l3extOut>
Configuring a Per VRF Per Node BGP Timer Using the REST API
The following example shows how to configure Per VRF Per node BGP timer in a node. Configure bgpProtP
under l3extLNodeP configuration. Under bgpProtP, configure a relation (bgpRsBgpNodeCtxPol) to the desired
BGP Context Policy (bgpCtxPol).
Procedure
Configure a node specific BGP timer policy on node1, and configure node2 with a BGP timer policy that is not node
specific.
Example:
POST https://apic-ip-address/mo.xml
In this example, node1 gets BGP timer values from policy pol2, and node2 gets BGP timer values from pol1. The timer
values are applied to the bgpDom corresponding to VRF tn1:ctx1. This is based upon the BGP timer policy that is chosen
following the algorithm described in the Per VRF Per Node BGP Timer Values section.
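The POST body is not reproduced above. The following is a minimal sketch of the structure described in this section, using the policy and node names from the text; the tenant name and the timer attribute names and values (kaIntvl, holdIntvl) are assumptions shown only for illustration.
<fvTenant name="tn1">
 <bgpCtxPol name="pol1" kaIntvl="60" holdIntvl="180"/>
 <bgpCtxPol name="pol2" kaIntvl="30" holdIntvl="90"/>
 <fvCtx name="ctx1">
  <!-- VRF-level BGP timer policy -->
  <fvRsBgpCtxPol tnBgpCtxPolName="pol1"/>
 </fvCtx>
 <l3extOut name="l3out1">
  <l3extRsEctx tnFvCtxName="ctx1"/>
  <l3extLNodeP name="node1">
   <!-- node-specific BGP timer policy -->
   <bgpProtP name="protp1">
    <bgpRsBgpNodeCtxPol tnBgpCtxPolName="pol2"/>
   </bgpProtP>
  </l3extLNodeP>
  <l3extLNodeP name="node2"/>
 </l3extOut>
</fvTenant>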
Deleting a Per VRF Per Node BGP Timer Using the REST API
The following example shows how to delete an existing Per VRF Per node BGP timer in a node.
Procedure
The code phrase <bgpProtP name="protp1" status="deleted"> in the example above deletes the BGP timer policy.
After the deletion, node1 defaults to the BGP timer policy for the VRF with which node1 is associated, which is pol1 in
the above example.
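The deletion example referenced above is not reproduced here; a minimal sketch that applies the quoted code phrase under the node profile for node1 might look like the following:
<fvTenant name="tn1">
 <l3extOut name="l3out1">
  <l3extLNodeP name="node1">
   <!-- deletes the node-specific BGP timer policy -->
   <bgpProtP name="protp1" status="deleted"/>
  </l3extLNodeP>
 </l3extOut>
</fvTenant>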
Configuring Bidirectional Forwarding Detection on a Secondary IP Address Using the REST API
The following example configures bidirectional forwarding detection (BFD) on a secondary IP address using
the REST API:
<l3extLIfP
dn="uni/tn-sec-ip-bfd/out-secip-bfd-l3out/lnodep-secip-bfd-l3out_nodeProfile/
lifp-secip-bfd-l3out_interfaceProfile" name="secip-bfd-l3out_interfaceProfile"
prio="unspecified" tag="yellow-green" userdom=":all:">
<l3extRsPathL3OutAtt addr="50.50.50.200/24" autostate="disabled"
encap="vlan-2" encapScope="local" ifInstT="ext-svi" ipv6Dad="enabled"
isMultiPodDirect="no" llAddr="::" mac="00:22:BD:F8:19:FF" mode="regular"
mtu="inherit" tDn="topology/pod-1/paths-101/pathep-[eth1/3]"
targetDscp="unspecified" userdom=":all:">
<l3extIp addr="9.9.9.1/24" ipv6Dad="enabled" userdom=":all:"/>
<l3extIp addr="6.6.6.1/24" ipv6Dad="enabled" userdom=":all:"/>
</l3extRsPathL3OutAtt>
<l3extRsNdIfPol userdom="all"/>
<l3extRsLIfPCustQosPol userdom="all"/>
<l3extRsIngressQosDppPol userdom="all"/>
<l3extRsEgressQosDppPol userdom="all"/>
<l3extRsArpIfPol userdom="all"/>
</l3extLIfP>
<ipRouteP aggregate="no"
dn="uni/tn-sec-ip-bfd/out-secip-bfd-l3out/lnodep-secip-bfd-l3out_nodeProfile/
rsnodeL3OutAtt-[topology/pod-1/node-101]/rt-[6.0.0.1/24]"
fromPfxLen="0" ip="6.0.0.1/24" pref="1" rtCtrl="bfd" toPfxLen="0" userdom=":all:">
<ipNexthopP nhAddr="6.6.6.2" pref="unspecified" type="prefix" userdom=":all:"/>
</ipRouteP>
Procedure
The following REST API shows the global configuration for bidirectional forwarding detection (BFD):
Example:
<polUni>
<infraInfra>
<bfdIpv4InstPol name="default" echoSrcAddr="1.2.3.4" slowIntvl="1000" minTxIntvl="150"
minRxIntvl="250" detectMult="5" echoRxIntvl="200"/>
<bfdIpv6InstPol name="default" echoSrcAddr="34::1/64" slowIntvl="1000" minTxIntvl="150"
minRxIntvl="250" detectMult="5" echoRxIntvl="200"/>
</infraInfra>
</polUni>
Procedure
The following REST API shows the interface override configuration for bidirectional forwarding detection (BFD):
Example:
<fvTenant name="ExampleCorp">
<bfdIfPol name="bfdIfPol" minTxIntvl="400" minRxIntvl="400" detectMult="5" echoRxIntvl="400"
echoAdminSt="disabled"/>
<l3extOut name="l3-out">
<l3extLNodeP name="leaf1">
<l3extRsNodeL3OutAtt tDn="topology/pod-1/node-101" rtrId="2.2.2.2"/>
<l3extLIfP name='portIpv4'>
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/11]" ifInstT='l3-port'
addr="10.0.0.1/24" mtu="1500"/>
<bfdIfP type="sha1" key="password">
<bfdRsIfPol tnBfdIfPolName='bfdIfPol'/>
</bfdIfP>
</l3extLIfP>
</l3extLNodeP>
</l3extOut>
</fvTenant>
Procedure
Step 1 The following example shows the interface configuration for bidirectional forwarding detection (BFD):
Example:
<fvTenant name="ExampleCorp">
<bfdIfPol name="bfdIfPol" minTxIntvl="400" minRxIntvl="400" detectMult="5" echoRxIntvl="400"
echoAdminSt="disabled"/>
<l3extOut name="l3-out">
<l3extLNodeP name="leaf1">
<l3extRsNodeL3OutAtt tDn="topology/pod-1/node-101" rtrId="2.2.2.2"/>
<l3extLIfP name='portIpv4'>
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/11]" ifInstT='l3-port'
addr="10.0.0.1/24" mtu="1500"/>
<bfdIfP type="sha1" key="password">
<bfdRsIfPol tnBfdIfPolName='bfdIfPol'/>
</bfdIfP>
</l3extLIfP>
</l3extLNodeP>
</l3extOut>
</fvTenant>
Step 2 The following example shows the interface configuration for enabling BFD on OSPF and EIGRP:
Example:
BFD on leaf switch
<fvTenant name="ExampleCorp">
<ospfIfPol name="ospf_intf_pol" cost="10" ctrl="bfd"/>
<eigrpIfPol ctrl="nh-self,split-horizon,bfd" dn="uni/tn-Coke/eigrpIfPol-eigrp_if_default"/>
</fvTenant>
Example:
BFD on spine switch
<l3extLNodeP name="bSpine">
<l3extLIfP name='portIf'>
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-103/pathep-[eth5/10]" encap='vlan-4'
ifInstT='sub-interface' addr="20.3.10.1/24"/>
<ospfIfP>
<ospfRsIfPol tnOspfIfPolName='ospf_intf_pol'/>
</ospfIfP>
<bfdIfP name="test" type="sha1" key="hello" status="created,modified">
<bfdRsIfPol tnBfdIfPolName='default' status="created,modified"/>
</bfdIfP>
</l3extLIfP>
</l3extLNodeP>
Step 3 The following example shows the interface configuration for enabling BFD on BGP:
Example:
<fvTenant name="ExampleCorp">
<l3extOut name="l3-out">
<l3extLNodeP name="leaf1">
<l3extRsNodeL3OutAtt tDn="topology/pod-1/node-101" rtrId="2.2.2.2"/>
<l3extLIfP name='portIpv4'>
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/11]" ifInstT='l3-port'
addr="10.0.0.1/24" mtu="1500">
<bgpPeerP addr="4.4.4.4/24" allowedSelfAsCnt="3" ctrl="bfd" descr="" name=""
peerCtrl="" ttl="1">
<bgpRsPeerPfxPol tnBgpPeerPfxPolName=""/>
<bgpAsP asn="3" descr="" name=""/>
</bgpPeerP>
</l3extRsPathL3OutAtt>
</l3extLIfP>
</l3extLNodeP>
</l3extOut>
</fvTenant>
Step 4 The following example shows the interface configuration for enabling BFD on Static Routes:
Example:
BFD on leaf switch
<fvTenant name="ExampleCorp">
<l3extOut name="l3-out">
<l3extLNodeP name="leaf1">
<l3extRsNodeL3OutAtt tDn="topology/pod-1/node-101" rtrId="2.2.2.2">
<ipRouteP ip="192.168.3.4" rtCtrl="bfd">
<ipNexthopP nhAddr="192.168.62.2"/>
</ipRouteP>
</l3extRsNodeL3OutAtt>
<l3extLIfP name='portIpv4'>
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/3]" ifInstT='l3-port'
addr="10.10.10.2/24" mtu="1500" status="created,modified" />
</l3extLIfP>
</l3extLNodeP>
</l3extOut>
</fvTenant>
Example:
BFD on spine switch
<l3extLNodeP name="bSpine">
<l3extLIfP name='portIf'>
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-103/pathep-[eth5/10]" encap='vlan-4'
ifInstT='sub-interface' addr="20.3.10.1/24"/>
</l3extLNodeP>
Step 5 The following example shows the interface configuration for enabling BFD on IS-IS:
Example:
<fabricInst>
<l3IfPol name="testL3IfPol" bfdIsis="enabled"/>
<fabricLeafP name="LeNode" >
<fabricRsLePortP tDn="uni/fabric/leportp-leaf_profile" />
<fabricLeafS name="spsw" type="range">
<fabricNodeBlk name="node101" to_="102" from_="101" />
</fabricLeafS>
</fabricLeafP>
</fabricSpineP>
<fabricLePortP name="leaf_profile">
<fabricLFPortS name="leafIf" type="range">
<fabricPortBlk name="spBlk" fromCard="1" fromPort="49" toCard="1" toPort="49" />
<fabricRsLePortPGrp tDn="uni/fabric/funcprof/leportgrp-LeTestPGrp" />
</fabricLFPortS>
</fabricLePortP>
<fabricSpPortP name="spine_profile">
<fabricSFPortS name="spineIf" type="range">
<fabricPortBlk name="spBlk" fromCard="5" fromPort="1" toCard="5" toPort="2" />
<fabricRsSpPortPGrp tDn="uni/fabric/funcprof/spportgrp-SpTestPGrp" />
</fabricSFPortS>
</fabricSpPortP>
<fabricFuncP>
<fabricLePortPGrp name = "LeTestPGrp">
<fabricRsL3IfPol tnL3IfPolName="testL3IfPol"/>
</fabricLePortPGrp>
</fabricFuncP>
</fabricInst>
Procedure
<fvTenant name="mgmt">
<fvBD name="bd1">
<fvRsBDToOut tnL3extOutName="RtdOut" />
<fvSubnet ip="1.1.1.1/16" />
<fvSubnet ip="1.2.1.1/16" />
<fvSubnet ip="40.1.1.1/24" scope="public" />
<fvRsCtx tnFvCtxName="inb" />
</fvBD>
<l3extOut name="RtdOut">
<l3extRsL3DomAtt tDn="uni/l3dom-extdom"/>
<l3extInstP name="extMgmt">
</l3extInstP>
<l3extLNodeP name="borderLeaf">
<l3extRsNodeL3OutAtt tDn="topology/pod-1/node-101" rtrId="10.10.10.10"/>
<l3extRsNodeL3OutAtt tDn="topology/pod-1/node-102" rtrId="10.10.10.11"/>
<l3extLIfP name='portProfile'>
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/40]" ifInstT='l3-port'
addr="192.168.62.1/24"/>
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-102/pathep-[eth1/40]" ifInstT='l3-port'
addr="192.168.62.5/24"/>
<ospfIfP/>
</l3extLIfP>
</l3extLNodeP>
<l3extRsEctx tnFvCtxName="inb"/>
<ospfExtP areaId="57" />
</l3extOut>
</fvTenant>
Procedure
<polUni>
<fvTenant name="cisco_6">
<fvCtx name="dev">
<fvRsCtxToEigrpCtxAfPol tnEigrpCtxAfPolName="eigrp_ctx_pol_v4" af="1"/>
</fvCtx>
</fvTenant>
</polUni>
IPv6:
<polUni>
<fvTenant name="cisco_6">
<fvCtx name="dev">
<fvRsCtxToEigrpCtxAfPol tnEigrpCtxAfPolName="eigrp_ctx_pol_v6" af="ipv6-ucast"/>
</fvCtx>
</fvTenant>
</polUni>
IPv6
<polUni>
<fvTenant name="cisco_6">
<l3extOut name="ext">
<eigrpExtP asn="4001"/>
<l3extLNodeP name="node1">
<l3extLIfP name="intf_v6">
<l3extRsPathL3OutAtt addr="2001::1/64" ifInstT="l3-port"
tDn="topology/pod-1/paths-101/pathep-[eth1/4]"/>
<eigrpIfP name="eigrp_ifp_v6">
<eigrpRsIfPol tnEigrpIfPolName="eigrp_if_pol_v6"/>
</eigrpIfP>
</l3extLIfP>
</l3extLNodeP>
</l3extOut>
</fvTenant>
</polUni>
<l3extLNodeP name="node1">
<l3extLIfP name="intf_v4">
<l3extRsPathL3OutAtt addr="201.1.1.1/24" ifInstT="l3-port"
tDn="topology/pod-1/paths-101/pathep-[eth1/4]"/>
<eigrpIfP name="eigrp_ifp_v4">
<eigrpRsIfPol tnEigrpIfPolName="eigrp_if_pol_v4"/>
</eigrpIfP>
</l3extLIfP>
<l3extLIfP name="intf_v6">
<l3extRsPathL3OutAtt addr="2001::1/64" ifInstT="l3-port"
tDn="topology/pod-1/paths-101/pathep-[eth1/4]"/>
<eigrpIfP name="eigrp_ifp_v6">
<eigrpRsIfPol tnEigrpIfPolName="eigrp_if_pol_v6"/>
</eigrpIfP>
</l3extLIfP>
</l3extLNodeP>
</l3extOut>
</fvTenant>
</polUni>
The bandwidth (bw) attribute is defined in Kbps. The delayUnit attribute can be "tens of micro" or "pico".
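As an illustration, a minimal sketch of an EIGRP interface policy that sets these attributes might look like the following; the policy name and values are placeholders, and the delay attribute and the exact delayUnit enumeration string are assumptions based on the description above.
<eigrpIfPol name="eigrp_if_pol_bw" bw="1000000" delay="10" delayUnit="tens-of-micro"/>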
Procedure
Step 1 Configure BGP route summarization using the REST API as follows:
Example:
<fvTenant name="common">
<fvCtx name="vrf1"/>
<bgpRtSummPol name="bgp_rt_summ" cntrl="as-set"/>
<l3extOut name="l3_ext_pol">
<l3extLNodeP name="bLeaf">
<l3extRsNodeL3OutAtt tDn="topology/pod-1/node-101" rtrId="20.10.1.1"/>
<l3extLIfP name='portIf'>
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/31]" ifInstT='l3-port'
addr="10.20.1.3/24"/>
</l3extLIfP>
</l3extLNodeP>
<bgpExtP />
Step 2 Configure OSPF inter-area and external summarization using the following REST API:
Example:
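The OSPF example body is not shown above. The following is a minimal sketch, assuming an ospfRtSummPol route summarization policy referenced from the summary subnet; the policy name, prefix, and the DN format in the relation are assumptions.
<fvTenant name="common">
 <ospfRtSummPol name="ospf_rt_summ" interAreaEnabled="yes"/>
 <l3extOut name="l3_ext_pol">
  <l3extInstP name="ospfSummInstp">
   <l3extSubnet ip="193.0.0.0/8" scope="export-rtctrl">
    <!-- tDn format is assumed; point this relation to the ospfRtSummPol created above -->
    <l3extRsSubnetToRtSumm tDn="uni/tn-common/ospfrtsumm-ospf_rt_summ"/>
   </l3extSubnet>
  </l3extInstP>
 </l3extOut>
</fvTenant>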
Example:
<fvTenant name="exampleCorp">
<l3extOut name="out1">
<l3extInstP name="eigrpSummInstp" >
<l3extSubnet aggregate="" descr="" ip="197.0.0.0/8" name="" scope="export-rtctrl">
<l3extRsSubnetToRtSumm/>
</l3extSubnet>
</l3extInstP>
</l3extOut>
<eigrpRtSummPol name="pol1" />
Note
There is no route summarization policy to be configured for EIGRP. The only configuration needed for enabling EIGRP
summarization is the summary subnet under the InstP.
Configuring Route Control with Route Maps and Route Profile Using REST API
Configuring Route Control Per BGP Peer Using the REST API
The following procedure describes how to configure the route control per BGP peer feature using the REST
API.
Procedure
Example:
<polUni>
<fvTenant name="t1">
<fvCtx name="v1"/>
<l3extOut name="l3out1">
<l3extRsEctx tnFvCtxName="v1"/>
<l3extLNodeP name="nodep1">
<l3extRsNodeL3OutAtt rtrId="11.11.11.103" tDn="topology/pod-1/node-103"/>
<l3extLIfP name="ifp1">
<l3extRsPathL3OutAtt addr="12.12.12.3/24" ifInstT="l3-port"
tDn="topology/pod-1/paths-103/pathep-[eth1/3]"/>
</l3extLIfP>
<bgpPeerP addr="15.15.15.2">
<bgpAsP asn="100"/>
<bgpRsPeerToProfile direction="export" tnRtctrlProfileName="rp1"/>
</bgpPeerP>
</l3extLNodeP>
<l3extRsL3DomAtt tDn="uni/l3dom-dom1"/>
<bgpExtP/>
Configuring Route Map/Profile with Explicit Prefix List Using REST API
Procedure
Example:
<?xml version="1.0" encoding="UTF-8"?>
<fvTenant name="PM" status="">
<rtctrlAttrP name="set_dest">
<rtctrlSetComm community="regular:as2-nn2:5:24" />
</rtctrlAttrP>
<rtctrlSubjP name="allow_dest">
<rtctrlMatchRtDest ip="192.169.0.0/24" aggregate="yes" fromPfxLen="26" toPfxLen="30" />
<rtctrlMatchCommTerm name="term1">
<rtctrlMatchCommFactor community="regular:as2-nn2:5:24" status="" />
<rtctrlMatchCommFactor community="regular:as2-nn2:5:25" status="" />
</rtctrlMatchCommTerm>
<rtctrlMatchCommRegexTerm commType="regular" regex="200:*" status="" />
</rtctrlSubjP>
<rtctrlSubjP name="deny_dest">
<rtctrlMatchRtDest ip="192.168.0.0/24" />
</rtctrlSubjP>
<fvCtx name="ctx" />
<l3extOut name="L3Out_1" enforceRtctrl="import,export" status="">
<l3extRsEctx tnFvCtxName="ctx" />
<l3extLNodeP name="bLeaf">
<l3extRsNodeL3OutAtt tDn="topology/pod-1/node-101" rtrId="1.2.3.4" />
<l3extLIfP name="portIf">
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/25]" ifInstT="sub-interface"
encap="vlan-1503" addr="10.11.12.11/24" />
<ospfIfP />
</l3extLIfP>
<bgpPeerP addr="5.16.57.18/32" ctrl="send-com" />
<bgpPeerP addr="6.16.57.18/32" ctrl="send-com" />
</l3extLNodeP>
<bgpExtP />
<ospfExtP areaId="0.0.0.59" areaType="nssa" status="" />
<l3extInstP name="l3extInstP_1" status="">
<l3extSubnet ip="17.11.1.11/24" scope="import-security" />
</l3extInstP>
<rtctrlProfile name="default-export" type="global" status="">
<rtctrlCtxP name="ctx_deny" action="deny" order="1">
<rtctrlRsCtxPToSubjP tnRtctrlSubjPName="deny_dest" status="" />
</rtctrlCtxP>
<rtctrlCtxP name="ctx_allow" order="2">
<rtctrlRsCtxPToSubjP tnRtctrlSubjPName="allow_dest" status="" />
</rtctrlCtxP>
<rtctrlScope name="scope" status="">
<rtctrlRsScopeToAttrP tnRtctrlAttrPName="set_dest" status="" />
</rtctrlScope>
</rtctrlProfile>
</l3extOut>
<fvBD name="testBD">
<fvRsBDToOut tnL3extOutName="L3Out_1" />
<fvRsCtx tnFvCtxName="ctx" />
<fvSubnet ip="40.1.1.12/24" scope="public" />
<fvSubnet ip="40.1.1.2/24" scope="private" />
<fvSubnet ip="2003::4/64" scope="public" />
</fvBD>
</fvTenant>
Configuring a Route Control Protocol to Use Import and Export Controls, With the REST API
This example assumes that you have configured the Layer 3 outside network connections using BGP. It is
also possible to perform these tasks for a network using OSPF.
Procedure
Configure the route control protocol using import and export controls.
Example:
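The example body is not reproduced above. The following is a minimal sketch built from the route-profile objects used elsewhere in this chapter: an external EPG subnet with import security, and route profiles attached to the external EPG in the import and export directions. The names and prefixes are placeholders.
<l3extOut name="l3out1" enforceRtctrl="import,export">
 <l3extInstP name="extnw1">
  <l3extSubnet ip="192.168.1.0/24" scope="import-security"/>
  <!-- route profiles applied per direction on the external EPG -->
  <l3extRsInstPToProfile direction="import" tnRtctrlProfileName="rp1"/>
  <l3extRsInstPToProfile direction="export" tnRtctrlProfileName="rp2"/>
 </l3extInstP>
 <rtctrlProfile name="rp1">
  <rtctrlCtxP name="ctxp1" action="permit" order="0">
   <rtctrlRsCtxPToSubjP tnRtctrlSubjPName="match-rule1"/>
  </rtctrlCtxP>
 </rtctrlProfile>
 <rtctrlProfile name="rp2">
  <rtctrlCtxP name="ctxp2" action="permit" order="0">
   <rtctrlRsCtxPToSubjP tnRtctrlSubjPName="match-rule2"/>
  </rtctrlCtxP>
 </rtctrlProfile>
</l3extOut>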
Procedure
Example:
POST: https://<APIC IP>/api/mo/uni.xml
BODY:
<fvTenant dn="uni/tn-SAMPLE">
<l3extOut name="l3out1">
<!-- interleak redistribution for OSPF/EIGRP routes -->
<l3extRsInterleakPol tnRtctrlProfileName="INTERLEAK_RP"/>
<!-- interleak redistribution for static routes -->
<l3extRsRedistributePol tnRtctrlProfileName="INTERLEAK_RP" src="static"/>
</l3extOut>
</fvTenant>
Procedure
Example:
<l3extOut name="l3out1">
<l3extRsEctx tnFvCtxName="v1"/>
<l3extLNodeP name="nodep1">
<l3extRsNodeL3OutAtt rtrId="11.11.11.103" tDn="topology/pod-1/node-101"/>
<l3extLIfP name="ifp1"/>
<l3extRsPathL3OutAtt addr="12.12.12.3/24" ifInstT="l3-port"
tDn="topology/pod-1/paths-101/pathep-[eth1/3]"/>
</l3extLIfP>
</l3extLNodeP>
<l3extRsL3DomAtt tDn="uni/l3dom-dom1"/>
</l3extOut>
<l3extOut name="l3out2">
<l3extRsEctx tnFvCtxName="v1"/>
<l3extLNodeP name="nodep2">
<l3extRsNodeL3OutAtt rtrId="22.22.22.203" tDn="topology/pod-1/node-102"/>
<l3extLIfP name="ifp2"/>
<l3extRsPathL3OutAtt addr="23.23.23.3/24" ifInstT="l3-port"
tDn="topology/pod-1/paths-102/pathep-[eth1/3]"/>
</l3extLIfP>
</l3extLNodeP>
<l3extRsL3DomAtt tDn="uni/l3dom-dom1"/>
</l3extOut>
Step 3 Configure the routing protocol for both border leaf switches.
This example configures BGP as the primary routing protocol for both the border leaf switches, both with ASN 100. It
also configures Node 101 with BGP peer 15.15.15.2 and node 102 with BGP peer 25.25.25.2.
Example:
<l3extOut name="l3out1">
<l3extLNodeP name="nodep1">
<bgpPeerP addr="15.15.15.2/24"
<bgpAsP asn="100"/>
</bgpPeerP>
</l3extLNodeP>
</l3extOut>
<l3extOut name="l3out2">
<l3extLNodeP name="nodep2">
<bgpPeerP addr="25.25.25.2/24"
<bgpAsP asn="100"/>
</bgpPeerP>
</l3extLNodeP>
</l3extOut>
Step 7 Create the filter and contract to enable the EPGs to communicate.
This example configures the filter http-filter and the contract httpCtrct. The external EPGs and the application EPGs
are already associated with the contract httpCtrct as providers and consumers respectively.
Example:
<vzFilter name="http-filter">
<vzEntry name="http-e" etherT="ip" prot="tcp"/>
</vzFilter>
<vzBrCP name="httpCtrct" scope="context">
<vzSubj name="subj1">
<vzRsSubjFiltAtt tnVzFilterName="http-filter"/>
</vzSubj>
</vzBrCP>
<l3extOut name="l3out1">
<l3extInstP name="extnw1">
<fvRsProv tnVzBrCPName="httpCtrct"/>
</l3extInstP>
</l3extOut>
<l3extOut name="l3out2">
<l3extInstP name="extnw2">
<fvRsCons tnVzBrCPName="httpCtrct"/>
</l3extInstP>
</l3extOut>
<bgpPeerP addr="25.25.25.2/24">
<bgpAsP asn="100"/>
</bgpPeerP>
<l3extRsNodeL3OutAtt rtrId="22.22.22.203" tDn="topology/pod-1/node-102" />
<l3extLIfP name="ifp2">
<l3extRsPathL3OutAtt addr="23.23.23.3/24" ifInstT="l3-port"
tDn="topology/pod-1/paths-102/pathep-[eth1/3]" />
<ospfIfP/>
</l3extLIfP>
</l3extLNodeP>
<l3extInstP name="extnw2">
<l3extSubnet ip="192.168.2.0/24" scope="import-security"/>
<l3extRsInstPToProfile direction="import" tnRtctrlProfileName="rp2"/>
<l3extRsInstPToProfile direction="export" tnRtctrlProfileName="rp1"/>
<fvRsCons tnVzBrCPName="httpCtrct"/>
</l3extInstP>
<bgpExtP/>
<ospfExtP areaId="0.0.0.0" areaType="regular"/>
<l3extRsL3DomAtt tDn="uni/l3dom-dom1"/>
<rtctrlProfile name="rp1">
<rtctrlCtxP name="ctxp1" action="permit" order="0">
<rtctrlRsCtxPToSubjP tnRtctrlSubjPName="match-rule1"/>
</rtctrlCtxP>
</rtctrlProfile>
<rtctrlProfile name="rp2">
<rtctrlCtxP name="ctxp1" action="permit" order="0">
<rtctrlRsCtxPToSubjP tnRtctrlSubjPName="match-rule2"/>
</rtctrlCtxP>
</rtctrlProfile>
</l3extOut>
<rtctrlSubjP name="match-rule1">
<rtctrlMatchRtDest ip="192.168.1.0/24"/>
</rtctrlSubjP>
<rtctrlSubjP name="match-rule2">
<rtctrlMatchRtDest ip="192.168.2.0/24"/>
</rtctrlSubjP>
<vzFilter name="http-filter">
<vzEntry name="http-e" etherT="ip" prot="tcp"/>
</vzFilter>
<vzBrCP name="httpCtrct" scope="context">
<vzSubj name="subj1">
<vzRsSubjFiltAtt tnVzFilterName="http-filter"/>
</vzSubj>
</vzBrCP>
</fvTenant>
</polUni>
Shared L3Out
Configuring Shared Services Using REST API
Configuring Two Shared Layer 3 Outs in Two VRFs Using REST API
The following REST API configuration example displays how two shared Layer 3 Outs in two VRFs
communicate.
Procedure
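The example body is not reproduced above. The following is a minimal sketch of two external EPGs in two VRFs of the same tenant that leak routes to each other through shared subnet scopes and a tenant-scoped contract; node and interface profiles are omitted, and the tenant, VRF, contract, and prefix values are placeholders.
<fvTenant name="t1">
 <fvCtx name="vrf1"/>
 <fvCtx name="vrf2"/>
 <vzBrCP name="sharedCtrct" scope="tenant">
  <vzSubj name="subj1">
   <vzRsSubjFiltAtt tnVzFilterName="default"/>
  </vzSubj>
 </vzBrCP>
 <l3extOut name="sharedL3Out1">
  <l3extRsEctx tnFvCtxName="vrf1"/>
  <l3extInstP name="extEpg1">
   <!-- shared-rtctrl and shared-security leak this prefix to the other VRF -->
   <l3extSubnet ip="192.168.1.0/24" scope="import-security,shared-rtctrl,shared-security"/>
   <fvRsProv tnVzBrCPName="sharedCtrct"/>
  </l3extInstP>
 </l3extOut>
 <l3extOut name="sharedL3Out2">
  <l3extRsEctx tnFvCtxName="vrf2"/>
  <l3extInstP name="extEpg2">
   <l3extSubnet ip="192.168.2.0/24" scope="import-security,shared-rtctrl,shared-security"/>
   <fvRsCons tnVzBrCPName="sharedCtrct"/>
  </l3extInstP>
 </l3extOut>
</fvTenant>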
Procedure
Note Starting with Release 4.0(1), we recommend using custom QoS policies for L3Out QoS as described in
Configuring QoS Directly on L3Out Using REST API, on page 590 instead.
Procedure
Step 1 When configuring the tenant, VRF, and bridge domain, configure the VRF for egress mode (pcEnfDir="egress") with
policy enforcement enabled (pcEnfPref="enforced"). Send a post with XML similar to the following example:
Example:
<fvTenant name="t1">
<fvCtx name="v1" pcEnfPref="enforced" pcEnfDir="egress"/>
<fvBD name="bd1">
<fvRsCtx tnFvCtxName="v1"/>
<fvSubnet ip="44.44.44.1/24" scope="public"/>
<fvRsBDToOut tnL3extOutName="l3out1"/>
</fvBD>
</fvTenant>
Step 2 When creating the filters and contracts to enable the EPGs participating in the L3Out to communicate, configure the QoS
priority.
The contract in this example includes the QoS priority, level1, for traffic ingressing on the L3Out. Alternatively, it could
define a target DSCP value. QoS policies are supported on either the contract or the subject.
The filter also has the matchDscp="EF" criteria, so that traffic with this specific DSCP tag that is received by the L3Out is
processed through the queue specified in the contract subject.
Note
For QoS or custom QoS applied on the L3Out interface, VRF enforcement should be ingress. VRF enforcement needs to be egress
only when the QoS classification is done in the contract for traffic between an EPG and the L3Out, or between two L3Outs.
Note
If QoS classification is set in the contract and VRF enforcement is egress, the contract QoS classification overrides the
L3Out interface QoS or custom QoS classification, so configure only one of the two.
Example:
<vzFilter name="http-filter">
<vzEntry name="http-e" etherT="ip" prot="tcp" matchDscp="EF"/>
</vzFilter>
<vzBrCP name="httpCtrct" prio="level1" scope="context">
<vzSubj name="subj1">
<vzRsSubjFiltAtt tnVzFilterName="http-filter"/>
</vzSubj>
</vzBrCP>
Procedure
<polUni>
<fvTenant name="infra">
<qosMplsCustomPol descr="" dn="uni/tn-infra/qosmplscustom-customqos1" name="customqos1" status=""
>
<qosMplsIngressRule from="2" to="3" prio="level5" target="CS5" targetCos="4" status="" />
<qosMplsEgressRule from="CS2" to="CS4" targetExp="5" targetCos="3" status=""/>
</qosMplsCustomPol>
</fvTenant>
</polUni>
Procedure
Procedure
</fvTrackMember>
</imdata>
Procedure
Associating a Track List with a Static Route Using the REST API
To associate an IP SLA track list with a static route using REST API, perform the following steps:
Procedure
dn="uni/tn-t8/out-t8_l3/lnodep-t8_l3_vpc1/rsnodeL3OutAtt-[topology/pod-2/node-108]/rt-[88.88.88.2/24]"
pref="1" type="prefix"/>
</ipRouteP>
</imdata>
Associating a Track List with a Next Hop Profile Using the REST API
To associate an IP SLA track list with a next hop profile using REST API, perform the following steps:
Procedure
dn="uni/tn-t8/out-t8_l3/lnodep-t8_l3_vpc1/rsnodeL3OutAtt-[topology/pod-2/node-109]/rt-[86.86.86.2/24]"
Procedure
</infraAccPortP>
<infraFuncP>
<infraAccPortGrp name="TenantPortGrp_101">
<infraRsAttEntP tDn="uni/infra/attentp-AttEntityProfTenant"/>
<infraRsHIfPol tnFabricHIfPolName="default"/>
</infraAccPortGrp>
</infraFuncP>
</infraInfra>
</polUni>
</l3extLNodeP>
<l3extRsEctx tnFvCtxName="t9_ctx1"/>
<l3extRsL3DomAtt tDn="uni/l3dom-dom1"/>
<l3extInstP matchT="AtleastOne" name="extEpg" prio="unspecified" targetDscp="unspecified">
<l3extSubnet aggregate="" descr="" ip="176.21.21.21/21" name="" scope="import-security"/>
</l3extInstP>
</l3extOut>
</fvTenant>
</polUni>
<polUni>
<fvTenant name="t9" dn="uni/tn-t9" descr="">
<hsrpIfPol name="hsrpIfPol" ctrl="bfd" delay="4" reloadDelay="11"/>
</fvTenant>
</polUni>
SUMMARY STEPS
1. The following example shows how to deploy nodes and spine switch interfaces for GOLF, using the REST
API:
2. The XML below configures the spine switch interfaces and infra tenant provider of the GOLF service.
Include this XML structure in the body of the POST message.
3. The XML below configures the tenant consumer of the infra part of the GOLF service. Include this XML
structure in the body of the POST message.
DETAILED STEPS
Procedure
Step 1 The following example shows how to deploy nodes and spine switch interfaces for GOLF, using the REST API:
Example:
POST
https://192.0.20.123/api/mo/uni/golf.xml
Step 2 The XML below configures the spine switch interfaces and infra tenant provider of the GOLF service. Include this XML
structure in the body of the POST message.
Example:
<l3extOut descr="" dn="uni/tn-infra/out-golf" enforceRtctrl="export,import"
name="golf"
ownerKey="" ownerTag="" targetDscp="unspecified">
<l3extRsEctx tnFvCtxName="overlay-1"/>
<l3extProvLbl descr="" name="golf"
ownerKey="" ownerTag="" tag="yellow-green"/>
<l3extLNodeP configIssues="" descr=""
name="bLeaf" ownerKey="" ownerTag=""
tag="yellow-green" targetDscp="unspecified">
<l3extRsNodeL3OutAtt rtrId="10.10.3.3" rtrIdLoopBack="no"
tDn="topology/pod-1/node-111">
<l3extInfraNodeP descr="" fabricExtCtrlPeering="yes" name=""/>
<l3extLoopBackIfP addr="10.10.3.3" descr="" name=""/>
</l3extRsNodeL3OutAtt>
<l3extRsNodeL3OutAtt rtrId="10.10.3.4" rtrIdLoopBack="no"
tDn="topology/pod-1/node-112">
<l3extInfraNodeP descr="" fabricExtCtrlPeering="yes" name=""/>
<l3extLoopBackIfP addr="10.10.3.4" descr="" name=""/>
</l3extRsNodeL3OutAtt>
<l3extLIfP descr="" name="portIf-spine1-3"
ownerKey="" ownerTag="" tag="yellow-green">
<ospfIfP authKeyId="1" authType="none" descr="" name="">
<ospfRsIfPol tnOspfIfPolName="ospfIfPol"/>
</ospfIfP>
<l3extRsNdIfPol tnNdIfPolName=""/>
<l3extRsIngressQosDppPol tnQosDppPolName=""/>
<l3extRsEgressQosDppPol tnQosDppPolName=""/>
<l3extRsPathL3OutAtt addr="7.2.1.1/24" descr=""
encap="vlan-4"
encapScope="local"
ifInstT="sub-interface"
llAddr="::" mac="00:22:BD:F8:19:FF"
mode="regular"
mtu="1500"
tDn="topology/pod-1/paths-111/pathep-[eth1/12]"
targetDscp="unspecified"/>
</l3extLIfP>
<l3extLIfP descr="" name="portIf-spine2-1"
ownerKey=""
ownerTag=""
tag="yellow-green">
<ospfIfP authKeyId="1"
authType="none"
descr=""
name="">
<ospfRsIfPol tnOspfIfPolName="ospfIfPol"/>
</ospfIfP>
<l3extRsNdIfPol tnNdIfPolName=""/>
<l3extRsIngressQosDppPol tnQosDppPolName=""/>
<l3extRsEgressQosDppPol tnQosDppPolName=""/>
<l3extRsPathL3OutAtt addr="7.1.0.1/24" descr=""
encap="vlan-4"
encapScope="local"
ifInstT="sub-interface"
llAddr="::" mac="00:22:BD:F8:19:FF"
mode="regular"
mtu="9000"
tDn="topology/pod-1/paths-112/pathep-[eth1/11]"
targetDscp="unspecified"/>
</l3extLIfP>
<l3extLIfP descr="" name="portif-spine2-2"
ownerKey=""
ownerTag=""
tag="yellow-green">
<ospfIfP authKeyId="1"
authType="none" descr=""
name="">
<ospfRsIfPol tnOspfIfPolName="ospfIfPol"/>
</ospfIfP>
<l3extRsNdIfPol tnNdIfPolName=""/>
<l3extRsIngressQosDppPol tnQosDppPolName=""/>
<l3extRsEgressQosDppPol tnQosDppPolName=""/>
<l3extRsPathL3OutAtt addr="7.2.2.1/24" descr=""
encap="vlan-4"
encapScope="local"
ifInstT="sub-interface"
llAddr="::" mac="00:22:BD:F8:19:FF"
mode="regular"
mtu="1500"
tDn="topology/pod-1/paths-112/pathep-[eth1/12]"
targetDscp="unspecified"/>
</l3extLIfP>
<l3extLIfP descr="" name="portIf-spine1-2"
ownerKey="" ownerTag="" tag="yellow-green">
<ospfIfP authKeyId="1" authType="none" descr="" name="">
<ospfRsIfPol tnOspfIfPolName="ospfIfPol"/>
</ospfIfP>
<l3extRsNdIfPol tnNdIfPolName=""/>
<l3extRsIngressQosDppPol tnQosDppPolName=""/>
<l3extRsEgressQosDppPol tnQosDppPolName=""/>
<l3extRsPathL3OutAtt addr="9.0.0.1/24" descr=""
encap="vlan-4"
encapScope="local"
ifInstT="sub-interface"
llAddr="::" mac="00:22:BD:F8:19:FF"
mode="regular"
mtu="9000"
tDn="topology/pod-1/paths-111/pathep-[eth1/11]"
targetDscp="unspecified"/>
</l3extLIfP>
<l3extLIfP descr="" name="portIf-spine1-1"
ownerKey="" ownerTag="" tag="yellow-green">
<ospfIfP authKeyId="1" authType="none" descr="" name="">
<ospfRsIfPol tnOspfIfPolName="ospfIfPol"/>
</ospfIfP>
<l3extRsNdIfPol tnNdIfPolName=""/>
<l3extRsIngressQosDppPol tnQosDppPolName=""/>
<l3extRsEgressQosDppPol tnQosDppPolName=""/>
<l3extRsPathL3OutAtt addr="7.0.0.1/24" descr=""
encap="vlan-4"
encapScope="local"
ifInstT="sub-interface"
llAddr="::" mac="00:22:BD:F8:19:FF"
mode="regular"
mtu="1500"
tDn="topology/pod-1/paths-111/pathep-[eth1/10]"
targetDscp="unspecified"/>
</l3extLIfP>
<bgpInfraPeerP addr="10.10.3.2"
allowedSelfAsCnt="3"
ctrl="send-com,send-ext-com"
descr="" name="" peerCtrl=""
peerT="wan"
privateASctrl="" ttl="2" weight="0">
<bgpRsPeerPfxPol tnBgpPeerPfxPolName=""/>
<bgpAsP asn="150" descr="" name="aspn"/>
</bgpInfraPeerP>
<bgpInfraPeerP addr="10.10.4.1"
allowedSelfAsCnt="3"
ctrl="send-com,send-ext-com" descr="" name="" peerCtrl=""
peerT="wan"
privateASctrl="" ttl="1" weight="0">
<bgpRsPeerPfxPol tnBgpPeerPfxPolName=""/>
<bgpAsP asn="100" descr="" name=""/>
</bgpInfraPeerP>
<bgpInfraPeerP addr="10.10.3.1"
allowedSelfAsCnt="3"
ctrl="send-com,send-ext-com" descr="" name="" peerCtrl=""
peerT="wan"
privateASctrl="" ttl="1" weight="0">
<bgpRsPeerPfxPol tnBgpPeerPfxPolName=""/>
<bgpAsP asn="100" descr="" name=""/>
</bgpInfraPeerP>
</l3extLNodeP>
<bgpRtTargetInstrP descr="" name="" ownerKey="" ownerTag="" rtTargetT="explicit"/>
<l3extRsL3DomAtt tDn="uni/l3dom-l3dom"/>
<l3extInstP descr="" matchT="AtleastOne" name="golfInstP"
prio="unspecified"
targetDscp="unspecified">
<fvRsCustQosPol tnQosCustomPolName=""/>
</l3extInstP>
<bgpExtP descr=""/>
<ospfExtP areaCost="1"
areaCtrl="redistribute,summary"
areaId="0.0.0.1"
areaType="regular" descr=""/>
</l3extOut>
Step 3 The XML below configures the tenant consumer of the infra part of the GOLF service. Include this XML structure in the
body of the POST message.
Example:
<fvTenant descr="" dn="uni/tn-pep6" name="pep6" ownerKey="" ownerTag="">
<vzBrCP descr="" name="webCtrct"
ownerKey="" ownerTag="" prio="unspecified"
scope="global" targetDscp="unspecified">
<vzSubj consMatchT="AtleastOne" descr=""
name="http" prio="unspecified" provMatchT="AtleastOne"
revFltPorts="yes" targetDscp="unspecified">
<vzRsSubjFiltAtt directives="" tnVzFilterName="default"/>
</vzSubj>
</vzBrCP>
<vzBrCP descr="" name="webCtrct-pod2"
ownerKey="" ownerTag="" prio="unspecified"
scope="global" targetDscp="unspecified">
<vzSubj consMatchT="AtleastOne" descr=""
name="http" prio="unspecified"
provMatchT="AtleastOne" revFltPorts="yes"
targetDscp="unspecified">
<vzRsSubjFiltAtt directives=""
tnVzFilterName="default"/>
</vzSubj>
</vzBrCP>
<fvCtx descr="" knwMcastAct="permit"
name="ctx6" ownerKey="" ownerTag=""
pcEnfDir="ingress" pcEnfPref="enforced">
<bgpRtTargetP af="ipv6-ucast"
descr="" name="" ownerKey="" ownerTag="">
<bgpRtTarget descr="" name="" ownerKey="" ownerTag=""
rt="route-target:as4-nn2:100:1256"
type="export"/>
<bgpRtTarget descr="" name="" ownerKey="" ownerTag=""
rt="route-target:as4-nn2:100:1256"
type="import"/>
</bgpRtTargetP>
<bgpRtTargetP af="ipv4-ucast"
descr="" name="" ownerKey="" ownerTag="">
<bgpRtTarget descr="" name="" ownerKey="" ownerTag=""
rt="route-target:as4-nn2:100:1256"
type="export"/>
<bgpRtTarget descr="" name="" ownerKey="" ownerTag=""
rt="route-target:as4-nn2:100:1256"
type="import"/>
</bgpRtTargetP>
<fvRsCtxToExtRouteTagPol tnL3extRouteTagPolName=""/>
<fvRsBgpCtxPol tnBgpCtxPolName=""/>
<vzAny descr="" matchT="AtleastOne" name=""/>
<fvRsOspfCtxPol tnOspfCtxPolName=""/>
<fvRsCtxToEpRet tnFvEpRetPolName=""/>
<l3extGlobalCtxName descr="" name="dci-pep6"/>
</fvCtx>
<fvBD arpFlood="no" descr="" epMoveDetectMode=""
ipLearning="yes"
limitIpLearnToSubnets="no"
llAddr="::" mac="00:22:BD:F8:19:FF"
mcastAllow="no"
multiDstPktAct="bd-flood"
name="bd107" ownerKey="" ownerTag="" type="regular"
unicastRoute="yes"
unkMacUcastAct="proxy"
unkMcastAct="flood"
vmac="not-applicable">
<fvRsBDToNdP tnNdIfPolName=""/>
<fvRsBDToOut tnL3extOutName="routAccounting-pod2"/>
<fvRsCtx tnFvCtxName="ctx6"/>
<fvRsIgmpsn tnIgmpSnoopPolName=""/>
<fvSubnet ctrl="" descr="" ip="27.6.1.1/24"
name="" preferred="no"
scope="public"
virtual="no"/>
<fvSubnet ctrl="nd" descr="" ip="2001:27:6:1::1/64"
name="" preferred="no"
scope="public"
virtual="no">
<fvRsNdPfxPol tnNdPfxPolName=""/>
</fvSubnet>
<fvRsBdToEpRet resolveAct="resolve" tnFvEpRetPolName=""/>
</fvBD>
<fvBD arpFlood="no" descr="" epMoveDetectMode=""
ipLearning="yes"
limitIpLearnToSubnets="no"
llAddr="::" mac="00:22:BD:F8:19:FF"
mcastAllow="no"
multiDstPktAct="bd-flood"
name="bd103" ownerKey="" ownerTag="" type="regular"
unicastRoute="yes"
unkMacUcastAct="proxy"
unkMcastAct="flood"
vmac="not-applicable">
<fvRsBDToNdP tnNdIfPolName=""/>
<fvRsBDToOut tnL3extOutName="routAccounting"/>
<fvRsCtx tnFvCtxName="ctx6"/>
<fvRsIgmpsn tnIgmpSnoopPolName=""/>
<fvSubnet ctrl="" descr="" ip="23.6.1.1/24"
name="" preferred="no"
scope="public"
virtual="no"/>
<fvSubnet ctrl="nd" descr="" ip="2001:23:6:1::1/64"
name="" preferred="no"
scope="public" virtual="no">
<fvRsNdPfxPol tnNdPfxPolName=""/>
</fvSubnet>
<fvRsBdToEpRet resolveAct="resolve" tnFvEpRetPolName=""/>
</fvBD>
<vnsSvcCont/>
<fvRsTenantMonPol tnMonEPGPolName=""/>
<fvAp descr="" name="AP1"
ownerKey="" ownerTag="" prio="unspecified">
<fvAEPg descr=""
isAttrBasedEPg="no"
matchT="AtleastOne"
name="epg107"
pcEnfPref="unenforced" prio="unspecified">
<fvRsCons prio="unspecified"
tnVzBrCPName="webCtrct-pod2"/>
<fvRsPathAtt descr=""
encap="vlan-1256"
instrImedcy="immediate"
mode="regular" primaryEncap="unknown"
tDn="topology/pod-2/paths-107/pathep-[eth1/48]"/>
<fvRsDomAtt classPref="encap" delimiter=""
encap="unknown"
instrImedcy="immediate"
primaryEncap="unknown"
resImedcy="lazy" tDn="uni/phys-phys"/>
<fvRsCustQosPol tnQosCustomPolName=""/>
<fvRsBd tnFvBDName="bd107"/>
<fvRsProv matchT="AtleastOne"
prio="unspecified"
tnVzBrCPName="default"/>
</fvAEPg>
<fvAEPg descr=""
isAttrBasedEPg="no"
matchT="AtleastOne"
name="epg103"
pcEnfPref="unenforced" prio="unspecified">
<fvRsCons prio="unspecified" tnVzBrCPName="default"/>
<fvRsCons prio="unspecified" tnVzBrCPName="webCtrct"/>
<fvRsPathAtt descr="" encap="vlan-1256"
instrImedcy="immediate"
mode="regular" primaryEncap="unknown"
tDn="topology/pod-1/paths-103/pathep-[eth1/48]"/>
<fvRsDomAtt classPref="encap" delimiter=""
encap="unknown"
instrImedcy="immediate"
primaryEncap="unknown"
resImedcy="lazy" tDn="uni/phys-phys"/>
<fvRsCustQosPol tnQosCustomPolName=""/>
<fvRsBd tnFvBDName="bd103"/>
</fvAEPg>
</fvAp>
<l3extOut descr=""
enforceRtctrl="export"
name="routAccounting-pod2"
ownerKey="" ownerTag="" targetDscp="unspecified">
<l3extRsEctx tnFvCtxName="ctx6"/>
<l3extInstP descr=""
matchT="AtleastOne"
name="accountingInst-pod2"
prio="unspecified" targetDscp="unspecified">
<l3extSubnet aggregate="export-rtctrl,import-rtctrl"
descr="" ip="::/0" name=""
scope="export-rtctrl,import-rtctrl,import-security"/>
<l3extSubnet aggregate="export-rtctrl,import-rtctrl"
descr=""
ip="0.0.0.0/0" name=""
scope="export-rtctrl,import-rtctrl,import-security"/>
<fvRsCustQosPol tnQosCustomPolName=""/>
<fvRsProv matchT="AtleastOne"
prio="unspecified" tnVzBrCPName="webCtrct-pod2"/>
</l3extInstP>
<l3extConsLbl descr=""
name="golf2"
owner="infra"
ownerKey="" ownerTag="" tag="yellow-green"/>
</l3extOut>
<l3extOut descr=""
enforceRtctrl="export"
name="routAccounting"
ownerKey="" ownerTag="" targetDscp="unspecified">
<l3extRsEctx tnFvCtxName="ctx6"/>
<l3extInstP descr=""
matchT="AtleastOne"
name="accountingInst"
prio="unspecified" targetDscp="unspecified">
<l3extSubnet aggregate="export-rtctrl,import-rtctrl" descr=""
ip="0.0.0.0/0" name=""
scope="export-rtctrl,import-rtctrl,import-security"/>
<fvRsCustQosPol tnQosCustomPolName=""/>
<fvRsProv matchT="AtleastOne" prio="unspecified" tnVzBrCPName="webCtrct"/>
</l3extInstP>
<l3extConsLbl descr=""
name="golf"
owner="infra"
ownerKey="" ownerTag="" tag="yellow-green"/>
</l3extOut>
</fvTenant>
Enabling Distributing BGP EVPN Type-2 Host Routes to a DCIG Using the REST API
Enable distributing BGP EVPN type-2 host routes using the REST API, as follows:
Procedure
Step 1 Configure the Host Route Leak policy, with a POST containing XML such as in the following example:
Example:
<bgpCtxAfPol descr="" ctrl="host-rt-leak" name="bgpCtxPol_0 status=""/>
Step 2 Apply the policy to the VRF BGP Address Family Context Policy for one or both of the address families using a POST
containing XML such as in the following example:
Example:
<fvCtx name="vni-10001">
<fvRsCtxToBgpCtxAfPol af="ipv4-ucast" tnBgpCtxAfPolName="bgpCtxPol_0"/>
<fvRsCtxToBgpCtxAfPol af="ipv6-ucast" tnBgpCtxAfPolName="bgpCtxPol_0"/>
</fvCtx>