Cisco Application Centric Infrastructure Fundamentals

First Published: August 01, 2014

Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 527-0883
THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS,
INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS.

THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED WITH
THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY,
CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.

The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB's public domain version
of the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.

NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED "AS IS" WITH ALL FAULTS.
CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE.

IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT
LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS
HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network
topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional
and coincidental.

Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL:
http://www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership
relationship between Cisco and any other company. (1110R)

© 2014 Cisco Systems, Inc. All rights reserved.


CONTENTS

Preface ix
Audience ix
Document Conventions ix
Related Documentation xi
Documentation Feedback xii
Obtaining Documentation and Submitting a Service Request xii

CHAPTER 1 Cisco Application Centric Infrastructure 1


About the Cisco Application Centric Infrastructure 1
About the Cisco Application Policy Infrastructure Controller 2
Cisco Application Centric Infrastructure Fabric Overview 2
Determining How the Fabric Behaves 4

CHAPTER 2 ACI Policy Model 5


About the ACI Policy Model 5
Policy Model Key Characteristics 6
Logical Constructs 6
Management Information Model 7
Tenants 9
Endpoint Groups 11
Application Profiles 12
Contracts 13
Labels, Filters, and Subjects Govern EPG Communications 14
Contexts 15
Bridge Domains and Subnets 16
Outside Networks 17
Managed Object Relations and Policy Resolution 17

Trans Tenant EPG Communications 18


Tags 18

CHAPTER 3 ACI Fabric Fundamentals 21


About ACI Fabric Fundamentals 21
Decoupled Identity and Location 22
Policy Identification and Enforcement 22
Encapsulation Normalization 24
Multicast Tree Topology 24
Load Balancing 25
Endpoint Retention 26
ACI Fabric Security Policy Model 27
Access Control List Limitations 27
Contracts Contain Security Policy Specifications 28
Security Policy Enforcement 29
Multicast and EPG Security 30
Taboos 30

CHAPTER 4 Fabric Provisioning 31


Fabric Provisioning 31
Startup Discovery and Configuration 32
Cluster Management Guidelines 33
Expanding the APIC Cluster Size 33
Replacing APIC Controllers in the Cluster 34
Reducing the APIC Cluster Size 35
Fabric Inventory 35
Provisioning 37
Default Policies 37
Fabric Policies Overview 38
Fabric Policy Configuration 39
Access Policies Overview 41
Access Policy Configuration 42
Scheduler 44
Firmware Upgrade 45
Geolocation 48

CHAPTER 5 Networking and Management Connectivity 49


Routing Within the Tenant 49
Layer 3 VNIDs Used to Transport Intersubnet Tenant Traffic 50
Configuring Route Reflectors 50
WAN and Other External Networks 51
Bridged Interface to an External Router 51
Router Peering and Route Distribution 52
Attach Entity Profile 52
Bridged and Routed Connectivity to External Networks 54
DHCP Relay 55
DNS 58
In-Band and Out-of-Band Management Access 58
In-Band Management Access 59
Out-of-Band Management Access 61
Shared Services Contracts Usage 62

CHAPTER 6 User Access, Authentication, and Accounting 65


User Access, Authentication, and Accounting 65
Multiple Tenant Support 65
User Access: Roles, Privileges, and Security Domains 65
APIC Local Users 66
Externally Managed Authentication Server Users 69
Cisco AV Pair Format 71
RADIUS 71
TACACS+ Authentication 71
LDAP/Active Directory Authentication 72
User IDs in the APIC Bash Shell 72
Login Domains 72

CHAPTER 7 Virtual Machine Manager Domains 73


Virtual Machine Manager Domains 73
VMM Policy Model 76
vCenter Domain Configuration Workflow 77
vCenter and vShield Domain Configuration Workflow 81

Creating Application EPGs Policy Resolution and Deployment Immediacy 86

CHAPTER 8 Layer 4 to Layer 7 Service Insertion 89


Layer 4 to Layer 7 Service Insertion 89
Layer 4 to Layer 7 Policy Model 90
Service Graphs 90
Service Graph Configuration Parameters 91
Service Graph Connections 91
Automated Service Insertion 91
Device Packages 92
About Device Clusters (Logical Devices) 94
About Concrete Devices 94
Function Nodes 94
Function Node Connectors 94
Terminal Nodes 94
About Privileges 95
Service Automation and Configuration Management 95
Service Resource Pooling 95

CHAPTER 9 Management Tools 97


Management Tools 97
About the Management GUI 97
About the CLI 98
Visore Managed Object Viewer 98
Management Information Model Reference 99
API Inspector 100
User Login Menu Options 101
Locating Objects in the MIT 101
Tree-Level Queries 103
Class-Level Queries 103
Object-Level Queries 104
Managed-Object Properties 104
Accessing the Object Data Through REST Interfaces 105
Configuration Export/Import 106
Tech Support, Statistics, Core 109

CHAPTER 10 Monitoring 111


Faults, Errors, Events, Audit Logs 111
Faults 111
Events 112
Errors 113
Audit Logs 114
Statistics Properties, Tiers, Thresholds, and Monitoring 114
Configuring Monitoring Policies 115

CHAPTER 11 Troubleshooting 121


Troubleshooting 121
Health Score 122
Health Score Aggregation and Impact 123
Atomic Counters 124
Multinode SPAN 125
ARPs, ICMP Pings, and Traceroute 125

APPENDIX A Tenant Policy Example 127


Tenant Policy Example Overview 127
Tenant Policy Example XML Code 128
Tenant Policy Example Explanation 129
Policy Universe 129
Tenant Policy Example 129
Filters 129
Contracts 131
Subjects 131
Labels 132
Context 132
Bridge Domains 133
Application Profiles 134
Endpoints and Endpoint Groups (EPGs) 134
Closing 135
What the Example Tenant Policy Does 136

APPENDIX B Label Matching 139


Label Matching 139

APPENDIX C Access Policy Examples 141


Single Port Channel Configuration Applied to Multiple Switches 141
Two Port Channel Configurations Applied to Multiple Switches 142
Single Virtual Port Channel Across Two Switches 142
One Virtual Port Channel on Selected Port Blocks of Two Switches 143
Setting the Interface Speed 144

APPENDIX D Tenant Layer 3 External Network Policy Example 145


Tenant External Network Policy Example 145

APPENDIX E DHCP Relay Policy Examples 147


Layer 2 and Layer 3 DHCP Relay Sample Policies 147

APPENDIX F DNS Policy Example 149


DNS Policy Example 149

APPENDIX G List of Terms 151


List of Terms 151

Preface
This preface includes the following sections:

• Audience, page ix
• Document Conventions, page ix
• Related Documentation, page xi
• Documentation Feedback, page xii
• Obtaining Documentation and Submitting a Service Request, page xii

Audience
This guide is intended primarily for data center administrators with responsibilities and expertise in one or
more of the following:
• Virtual machine installation and administration
• Server administration
• Switch and network administration

Document Conventions
Command descriptions use the following conventions:

Convention       Description
bold             Bold text indicates the commands and keywords that you enter literally
                 as shown.
Italic           Italic text indicates arguments for which the user supplies the values.
[x]              Square brackets enclose an optional element (keyword or argument).
[x | y]          Square brackets enclosing keywords or arguments separated by a vertical
                 bar indicate an optional choice.
{x | y}          Braces enclosing keywords or arguments separated by a vertical bar
                 indicate a required choice.
[x {y | z}]      Nested sets of square brackets or braces indicate optional or required
                 choices within optional or required elements. Braces and a vertical bar
                 within square brackets indicate a required choice within an optional
                 element.
variable         Indicates a variable for which you supply values, in context where
                 italics cannot be used.
string           A nonquoted set of characters. Do not use quotation marks around the
                 string or the string will include the quotation marks.

Examples use the following conventions:

Convention             Description
screen font            Terminal sessions and information the switch displays are in
                       screen font.
boldface screen font   Information you must enter is in boldface screen font.
italic screen font     Arguments for which you supply values are in italic screen font.
<>                     Nonprinting characters, such as passwords, are in angle brackets.
[]                     Default responses to system prompts are in square brackets.
!, #                   An exclamation point (!) or a pound sign (#) at the beginning of
                       a line of code indicates a comment line.

This document uses the following conventions:

Note Means reader take note. Notes contain helpful suggestions or references to material not covered in the
manual.

Caution Means reader be careful. In this situation, you might do something that could result in equipment damage
or loss of data.


Warning IMPORTANT SAFETY INSTRUCTIONS


This warning symbol means danger. You are in a situation that could cause bodily injury. Before you
work on any equipment, be aware of the hazards involved with electrical circuitry and be familiar with
standard practices for preventing accidents. Use the statement number provided at the end of each warning
to locate its translation in the translated safety warnings that accompanied this device.
SAVE THESE INSTRUCTIONS

Related Documentation
The Application Centric Infrastructure documentation set includes the following documents:

Web-Based Documentation
• Cisco APIC Management Information Model Reference
• Cisco APIC Online Help Reference
• Cisco ACI MIB Support List

Downloadable Documentation
• Cisco Application Centric Infrastructure Release Notes
• Cisco Application Centric Infrastructure Fundamentals Guide
• Cisco APIC Getting Started Guide
• Cisco APIC REST API User Guide
• Cisco APIC Command Line Interface User Guide
• Cisco APIC Faults, Events, and System Message Guide
• Cisco APIC Layer 4 to Layer 7 Device Package Development Guide
• Cisco APIC Layer 4 to Layer 7 Services Deployment Guide
• Cisco ACI Firmware Management Guide
• Cisco ACI Troubleshooting Guide
• Cisco ACI NX-OS Syslog Reference Guide
• Cisco ACI Switch Command Reference, NX-OS Release 11.0
• Cisco ACI MIB Quick Reference
• Cisco Nexus CLI to Cisco APIC Mapping Guide
• Installing the Cisco Application Virtual Switch with the Cisco APIC
• Configuring the Cisco Application Virtual Switch using the Cisco APIC
• Application Centric Infrastructure Fabric Hardware Installation Guide


Documentation Feedback
To provide technical feedback on this document, or to report an error or omission, please send your comments
to apic-docfeedback@cisco.com. We appreciate your feedback.

Obtaining Documentation and Submitting a Service Request


For information on obtaining documentation, using the Cisco Bug Search Tool (BST), submitting a service
request, and gathering additional information, see What's New in Cisco Product Documentation at:
http://www.cisco.com/c/en/us/td/docs/general/whatsnew/whatsnew.html
Subscribe to What’s New in Cisco Product Documentation, which lists all new and revised Cisco technical
documentation as an RSS feed and delivers content directly to your desktop using a reader application. The
RSS feeds are a free service.

CHAPTER 1
Cisco Application Centric Infrastructure
This chapter contains the following sections:

• About the Cisco Application Centric Infrastructure, page 1


• About the Cisco Application Policy Infrastructure Controller, page 2
• Cisco Application Centric Infrastructure Fabric Overview, page 2
• Determining How the Fabric Behaves, page 4

About the Cisco Application Centric Infrastructure


The Cisco Application Centric Infrastructure (ACI) allows application requirements to define the network.
This architecture simplifies, optimizes, and accelerates the entire application deployment life cycle.

About the Cisco Application Policy Infrastructure Controller


The Cisco Application Policy Infrastructure Controller (APIC) API enables applications to directly connect
with a secure, shared, high-performance resource pool that includes network, compute, and storage capabilities.
The following figure provides an overview of the APIC.

Figure 1: APIC Overview

The APIC manages the scalable ACI multitenant fabric. The APIC provides a unified point of automation
and management, policy programming, application deployment, and health monitoring for the fabric. The
APIC, which is implemented as a replicated synchronized clustered controller, optimizes performance, supports
any application anywhere, and provides unified operation of the physical and virtual infrastructure. The APIC
enables network administrators to easily define the optimal network for applications. Data center operators
can clearly see how applications consume network resources, easily isolate and troubleshoot application and
infrastructure problems, and monitor and profile resource usage patterns.

Cisco Application Centric Infrastructure Fabric Overview


The Cisco Application Centric Infrastructure (ACI) fabric includes Cisco Nexus 9000 Series switches
with the APIC to run in the leaf/spine ACI fabric mode. These switches form a “fat-tree” network by connecting
each leaf node to each spine node; all other devices connect to the leaf nodes. The APIC manages the ACI
fabric. The recommended minimum configuration for the APIC is a cluster of three replicated hosts. The
APIC fabric management functions do not operate in the data path of the fabric. The following figure shows
an overview of the leaf/spine ACI fabric.

Figure 2: ACI Fabric Overview

The ACI fabric provides consistent low-latency forwarding across high-bandwidth links (40 Gbps, with a
100-Gbps future capability). Traffic with the source and destination on the same leaf switch is handled locally,
and all other traffic travels from the ingress leaf to the egress leaf through a spine switch. Although this
architecture appears as two hops from a physical perspective, it is actually a single Layer 3 hop because the
fabric operates as a single Layer 3 switch.
The ACI fabric object-oriented operating system (OS) runs on each Cisco Nexus 9000 Series node. It enables
programming of objects for each configurable element of the system.
The ACI fabric OS renders policies from the APIC into a concrete model that runs in the physical infrastructure.
The concrete model is analogous to compiled software; it is the form of the model that the switch operating
system can execute. The figure below shows the relationship of the logical model to the concrete model and
the switch OS.

Figure 3: Logical Model Rendered into a Concrete Model

All the switch nodes contain a complete copy of the concrete model. When an administrator creates a policy
in the APIC that represents a configuration, the APIC updates the logical model. The APIC then performs the
intermediate step of creating a fully elaborated policy that it pushes into all the switch nodes where the concrete
model is updated.

Note The Cisco Nexus 9000 Series switches can only execute the concrete model. Each switch has a copy of
the concrete model. If the APIC goes offline, the fabric keeps functioning, but modifications to the fabric
policies are not possible.
The APIC is responsible for fabric activation, switch firmware management, network policy configuration,
and instantiation. While the APIC acts as the centralized policy and network management engine for the
fabric, it is completely removed from the data path, including the forwarding topology. Therefore, the fabric
can still forward traffic even when communication with the APIC is lost.


The Cisco Nexus 9000 Series switches offer modular and fixed 1-, 10-, and 40-Gigabit Ethernet switch
configurations that operate in either Cisco NX-OS stand-alone mode for compatibility and consistency with
the current Cisco Nexus switches or in ACI mode to take full advantage of the APIC's application policy-driven
services and infrastructure automation features.

Determining How the Fabric Behaves


The ACI fabric allows customers to automate and orchestrate scalable, high-performance network, compute,
and storage resources for cloud deployments. Key players who define how the ACI fabric behaves include
the following:
• IT planners, network engineers, and security engineers
• Developers who access the system via the APIC APIs
• Application and network administrators

The Representational State Transfer (REST) architecture is a key development method that supports cloud
computing. The ACI API is REST-based. The World Wide Web represents the largest implementation of a
system that conforms to the REST architectural style.
Cloud computing differs from conventional computing in scale and approach. Conventional environments
include software and maintenance requirements with their associated skill sets that consume substantial
operating expenses. Cloud applications use system designs that are supported by a very large scale infrastructure
that is deployed along a rapidly declining cost curve. In this infrastructure type, the system administrator,
development teams, and network professionals collaborate to provide a much higher valued contribution.
In conventional settings, network access for compute resources and endpoints is managed through virtual
LANs (VLANs) or rigid overlays, such as Multiprotocol Label Switching (MPLS), that force traffic through
rigidly defined network services such as load balancers and firewalls. The APIC is designed for programmability
and centralized management. By abstracting the network, the ACI fabric enables operators to provision
resources in the network dynamically rather than statically. The result is that the time to deployment
(time to market) can be reduced from months or weeks to minutes. Changes to the configuration of virtual or
physical switches, adapters, policies, and other hardware and software components can be made in minutes
with API calls.
The transformation from conventional practices to cloud computing methods increases the demand for flexible
and scalable services from data centers. These changes call for a large pool of highly skilled personnel to
enable this transformation. The APIC is designed for programmability and centralized management. A key
feature of the APIC is its RESTful web API. The APIC REST API accepts and returns HTTP or HTTPS
messages that contain JavaScript Object Notation (JSON) or Extensible Markup Language (XML) documents.
Today, many web developers use RESTful methods. Adopting web APIs across the network enables enterprises
to easily open up and combine services with other internal or external providers. This process transforms the
network from a complex mixture of static resources to a dynamic exchange of services on offer.
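
To make this concrete, the following minimal Python sketch shows the style of interaction the REST API supports. It is illustrative only: the controller address, credentials, and tenant name are hypothetical, and it assumes the requests library and the APIC login endpoint /api/aaaLogin.xml.

    import requests

    APIC = "https://apic.example.com"  # hypothetical controller address
    session = requests.Session()

    # Authenticate; the APIC returns a session token as a cookie that the
    # session object carries on subsequent requests.
    login = "<aaaUser name='admin' pwd='example-password'/>"
    session.post(APIC + "/api/aaaLogin.xml", data=login, verify=False)

    # Create (or update) a tenant by posting an XML document to its parent
    # object, the policy universe (uni).
    tenant = "<fvTenant name='ExampleTenant'/>"
    resp = session.post(APIC + "/api/mo/uni.xml", data=tenant, verify=False)
    print(resp.status_code)

The same call expressed as a JSON payload against /api/mo/uni.json behaves identically, which is the XML/JSON interchangeability described above.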

CHAPTER 2
ACI Policy Model
This chapter contains the following sections:

• About the ACI Policy Model, page 5


• Policy Model Key Characteristics, page 6
• Logical Constructs, page 6
• Management Information Model, page 7
• Tenants, page 9
• Endpoint Groups, page 11
• Application Profiles, page 12
• Contracts, page 13
• Labels, Filters, and Subjects Govern EPG Communications, page 14
• Contexts, page 15
• Bridge Domains and Subnets, page 16
• Outside Networks, page 17
• Managed Object Relations and Policy Resolution, page 17
• Trans Tenant EPG Communications, page 18
• Tags, page 18

About the ACI Policy Model


The ACI policy model enables the specification of application requirements policies. The APIC automatically
renders policies in the fabric infrastructure. When a user or process initiates an administrative change to an
object in the fabric, the APIC first applies that change to the policy model. This policy model change then
triggers a change to the actual managed endpoint. This approach is called a model-driven framework.


Policy Model Key Characteristics


Key characteristics of the policy model include the following:
• As a model-driven architecture, the software maintains a complete representation of the administrative
and operational state of the system (the model). The model applies uniformly to fabric, services, system
behaviors, and virtual and physical devices attached to the network.
• The logical and concrete domains are separated; the logical configurations are rendered into concrete
configurations by applying the policies in relation to the available physical resources. No configuration
is carried out against concrete entities. Concrete entities are configured implicitly as a side effect of the
changes to the APIC policy model. Concrete entities can be, but do not have to be, physical (a virtual
machine or a VLAN, for example, is concrete but not physical).
• The system prohibits communications with newly connected devices until the policy model is updated
to include the new device.
• Network administrators do not configure logical and physical system resources directly but rather define
logical (hardware independent) configurations and APIC policies that control different aspects of the
system behavior.

Managed object manipulation in the model relieves engineers from the task of administering isolated, individual
component configurations. These characteristics enable automation and flexible workload provisioning that
can locate any workload anywhere in the infrastructure. Network-attached services can be easily deployed,
and the APIC provides an automation framework to manage the life cycle of those network-attached services.

Logical Constructs
The policy model manages the entire fabric, including the infrastructure, authentication, security, services,
applications, and diagnostics. Logical constructs in the policy model define how the fabric meets the needs

of any of the functions of the fabric. The following figure provides an overview of the ACI policy model
logical constructs.

Figure 4: ACI Policy Model Logical Constructs Overview

Fabric-wide or tenant administrators create predefined policies that contain application or shared resource
requirements. These policies automate the provisioning of applications, network-attached services, security
policies, and tenant subnets, which puts administrators in the position of approaching the resource pool in
terms of applications rather than infrastructure building blocks. The application needs to drive the networking
behavior, not the other way around.

Management Information Model


The fabric comprises the physical and logical components as recorded in the Management Information Model
(MIM), which can be represented in a hierarchical management information tree (MIT). The information
model is stored and managed by processes that run on the APIC. Similar to the OSI Common Management
Information Protocol (CMIP) and other X.500 variants, the APIC enables the control of managed resources
by presenting their manageable characteristics as object properties that can be inherited according to the
location of the object within the hierarchical structure of the MIT.

Each node in the tree represents a managed object (MO) or group of objects. MOs are abstractions of fabric
resources. An MO can represent a concrete object, such as a switch or adapter, or a logical object, such as an
application profile, endpoint group, or fault. The following figure provides an overview of the MIT.

Figure 5: Management Information Tree Overview

The hierarchical structure starts with the policy universe at the top (Root) and contains parent and child nodes.
Each node in the tree is an MO and each object in the fabric has a unique distinguished name (DN) that
describes the object and locates its place in the tree.
The following managed objects contain the policies that govern the operation of the system:
• APIC controllers comprise a replicated synchronized clustered controller that provides management,
policy programming, application deployment, and health monitoring for the multitenant fabric.
• A tenant is a container for policies that enable an administrator to exercise domain-based access control.
The system provides the following four kinds of tenants:
◦User tenants are defined by the administrator according to the needs of users. They contain policies
that govern the operation of resources such as applications, databases, web servers, network-attached
storage, virtual machines, and so on.
◦The common tenant is provided by the system but can be configured by the fabric administrator.
It contains policies that govern the operation of resources accessible to all tenants, such as firewalls,
load balancers, Layer 4 to Layer 7 services, intrusion detection appliances, and so on.
◦The infrastructure tenant is provided by the system but can be configured by the fabric administrator.
It contains policies that govern the operation of infrastructure resources such as the fabric VXLAN
overlay. It also enables a fabric provider to selectively deploy resources to one or more user tenants.
Infrastructure tenant policies are configurable by the fabric administrator.
◦The management tenant is provided by the system but can be configured by the fabric administrator.
It contains policies that govern the operation of fabric management functions used for in-band and
out-of-band configuration of fabric nodes. The management tenant contains a private out-of-band
address space for the APIC/fabric internal communications that is outside the fabric data path that
provides access through the management port of the switches. The management tenant enables
discovery and automation of communications with virtual machine controllers.

• Access policies govern the operation of switch access ports that provide connectivity to resources such
as storage, compute, Layer 2 and Layer 3 (bridged and routed) connectivity, virtual machine hypervisors,
Layer 4 to Layer 7 devices, and so on. If a tenant requires interface configurations other than those
provided in the default link, Cisco Discovery Protocol (CDP), Link Layer Discovery Protocol (LLDP),
Link Aggregation Control Protocol (LACP), or Spanning Tree, an administrator must configure access
policies to enable such configurations on the access ports of the leaf switches.
• Fabric policies govern the operation of the switch fabric ports, including such functions as Network
Time Protocol server synchronization (NTP), Intermediate System-to-Intermediate System Protocol
(IS-IS), Border Gateway Protocol (BGP) route reflectors, Domain Name System (DNS) and so on. The
fabric MO contains objects such as power supplies, fans, chassis, and so on.
• Virtual Machine (VM) domains group VM controllers with similar networking policy requirements.
VM controllers can share VLAN or Virtual Extensible Local Area Network (VXLAN) space and
application endpoint groups (EPGs). The APIC communicates with the VM controller to publish network
configurations such as port groups that are then applied to the virtual workloads.
• Layer 4 to Layer 7 service integration life cycle automation framework enables the system to dynamically
respond when a service comes online or goes offline. Policies provide service device package and
inventory management functions.
• Access, authentication, and accounting (AAA) policies govern user privileges, roles, and security domains
of the Cisco ACI fabric.

The hierarchical policy model fits well with the RESTful API interface. When invoked, the API reads from
or writes to objects in the MIT. URLs map directly into distinguished names that identify objects in the MIT.
Any data in the MIT can be described as a self-contained structured tree text document encoded in XML or
JSON.
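
For example, a tenant named ExampleTenant has the distinguished name uni/tn-ExampleTenant, and that DN maps directly into a request URL. A brief sketch, reusing the hypothetical session from the Chapter 1 example:

    # The DN maps directly into the request URL; the .xml or .json suffix
    # selects the document encoding of the returned subtree.
    resp = session.get(APIC + "/api/mo/uni/tn-ExampleTenant.xml", verify=False)

    # A class-level query returns every instance of an object class in the MIT.
    resp = session.get(APIC + "/api/class/fvTenant.json", verify=False)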

Tenants
A tenant (fvTenant) is a logical container for application policies that enable an administrator to exercise
domain-based access control. A tenant represents a unit of isolation from a policy perspective, but it does not
represent a private network. Tenants can represent a customer in a service provider setting, an organization
or domain in an enterprise setting, or just a convenient grouping of policies. The following figure provides
an overview of the tenant portion of the management information tree (MIT).

Figure 6: Tenants

Tenants can be isolated from one another or can share resources. The primary elements that the tenant contains
are filters, contracts, outside networks, bridge domains, contexts, and application profiles that contain endpoint
groups (EPGs). Entities in the tenant inherit its policies. A tenant can contain one or more virtual routing and
forwarding (VRF) instances or contexts; each context can be associated with multiple bridge domains.
Tenants are logical containers for application policies. The fabric can contain multiple tenants. You must
configure a tenant before you can deploy any Layer 4 to Layer 7 services.

Endpoint Groups
The endpoint group (EPG) is the most important object in the policy model. The following figure shows where
application EPGs are located in the management information tree (MIT) and their relation to other objects in
the tenant.

Figure 7: Endpoint Groups

An EPG is a managed object that is a named logical entity that contains a collection of endpoints. Endpoints
are devices that are connected to the network directly or indirectly. They have an address (identity), a location,
attributes (such as version or patch level), and can be physical or virtual. Knowing the address of an endpoint
also enables access to all its other identity details. EPGs are fully decoupled from the physical and logical
topology. Endpoint examples include servers, virtual machines, network-attached storage, or clients on the
Internet. Endpoint membership in an EPG can be dynamic or static.
EPGs contain endpoints that have common policy requirements such as security, QoS, or Layer 4 to Layer 7
services. Rather than configure and manage endpoints individually, they are placed in an EPG and are managed
as a group. The ACI fabric can contain the following types of EPGs:
• Application endpoint group (fvAEPg)
• Layer 2 external outside network instance endpoint group (l2extInstP)
• Layer 3 external outside network instance endpoint group (l3extInstP)
• Management endpoint groups for out-of-band (mgmtOoB) or in-band (mgmtInB) access.
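
Each of these EPG types is a distinct object class in the management information model, so each can be retrieved with a class-level API query. A minimal sketch, reusing the hypothetical session from the Chapter 1 example and assuming the totalCount field that APIC JSON responses carry:

    # Count the instances of each EPG class currently in the fabric.
    for cls in ("fvAEPg", "l2extInstP", "l3extInstP", "mgmtOoB", "mgmtInB"):
        resp = session.get(APIC + "/api/class/" + cls + ".json", verify=False)
        print(cls, resp.json()["totalCount"])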

Application Profiles
An application profile (fvAp) models application requirements. An application profile is a convenient logical
container for grouping EPGs. The following figure shows the location of application profiles in the management
information tree (MIT) and their relation to other objects in the tenant.

Figure 8: Application Profiles

Application profiles contain one or more EPGs. Modern applications contain multiple components. For
example, an e-commerce application could require a web server, a database server, data located in a storage
area network, and access to outside resources that enable financial transactions. The application profile contains
as many (or as few) EPGs as necessary that are logically related to providing the capabilities of an application.
EPGs can be organized according to one of the following:
• The application they provide (such as sap in the example in Appendix A)
• The function they provide (such as infrastructure)
• Where they are in the structure of the data center (such as DMZ)
• Whatever organizing principle that a fabric or tenant administrator chooses to use
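
As an illustration, the e-commerce application described above could be modeled as one application profile containing an EPG per tier. The names below are hypothetical; the sketch reuses the session from the Chapter 1 example:

    # A three-tier application profile: each fvAEPg is an EPG grouped under
    # the fvAp container, posted into the tenant created earlier.
    app_profile = """
    <fvAp name='ecommerce'>
      <fvAEPg name='web'/>
      <fvAEPg name='app'/>
      <fvAEPg name='db'/>
    </fvAp>
    """
    session.post(APIC + "/api/mo/uni/tn-ExampleTenant.xml",
                 data=app_profile, verify=False)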

Contracts
In addition to EPGs, contracts (vzBrCP) are key objects in the policy model. EPGs can only communicate
with other EPGs according to contract rules. The following figure shows the location of contracts in the
management information tree (MIT) and their relation to other objects in the tenant.

Figure 9: Contracts

An administrator uses a contract to select the type(s) of traffic that can pass between EPGs, including the
protocols and ports allowed. If there is no contract, inter-EPG communication is disabled by default. There
is no contract required for intra-EPG communication; intra-EPG communication is always implicitly allowed.
Contracts govern the following types of endpoint group communications:
• Between ACI fabric application EPGs (fvAEPg), both intra-tenant and inter-tenant

Note In the case of a shared service mode, a contract is required for inter-tenant
communication. A contract is used to specify static routes across contexts, even though
the tenant context does not enforce a policy.

• Between ACI fabric application EPGs and Layer 2 external outside network instance EPGs (l2extInstP)
• Between ACI fabric application EPGs and Layer 3 external outside network instance EPGs (l3extInstP)
• Between ACI fabric out-of-band or in-band management EPGs

Contracts govern the communication between EPGs that are labeled providers, consumers, or both. EPG
providers expose contracts with which a would-be consumer EPG must comply. The relationship between an
EPG and a contract can be either a provider or consumer. When an EPG provides a contract, communication
with that EPG can be initiated from other EPGs as long as the communication complies with the provided
contract. When an EPG consumes a contract, the endpoints in the consuming EPG may initiate communication
with any endpoint in an EPG that is providing that contract.

Note An EPG can both provide and consume the same contract. An EPG can also provide and consume multiple
contracts simultaneously.
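
A hedged sketch of the provider/consumer relationship: the web EPG from the earlier hypothetical application profile provides a contract and the app EPG consumes it, so endpoints in the app EPG may initiate the traffic the contract permits toward the web EPG. The contract references a filter named http-filter, sketched in the next section; all names are illustrative.

    # One contract with one subject; fvRsProv and fvRsCons attach the EPGs
    # as provider and consumer respectively.
    contract = """
    <fvTenant name='ExampleTenant'>
      <vzBrCP name='web-contract'>
        <vzSubj name='http-subject'>
          <vzRsSubjFiltAtt tnVzFilterName='http-filter'/>
        </vzSubj>
      </vzBrCP>
      <fvAp name='ecommerce'>
        <fvAEPg name='web'>
          <fvRsProv tnVzBrCPName='web-contract'/>
        </fvAEPg>
        <fvAEPg name='app'>
          <fvRsCons tnVzBrCPName='web-contract'/>
        </fvAEPg>
      </fvAp>
    </fvTenant>
    """
    session.post(APIC + "/api/mo/uni.xml", data=contract, verify=False)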

Labels, Filters, and Subjects Govern EPG Communications


Label, subject, and filter managed objects enable mixing and matching among EPGs and contracts so as to
satisfy various applications or service delivery requirements. The following figure shows the location of
application subjects and filters in the management information tree (MIT) and their relation to other objects
in the tenant.

Figure 10: Labels, Subjects, and Filters

Contracts can contain multiple communication rules and multiple EPGs can both consume and provide multiple
contracts. Labels control which rules apply when communicating between a specific pair of EPGs. A policy
designer can compactly represent complex communication policies and re-use these policies across multiple
instances of an application. For example, the sample policy in Appendix A shows how the same contract uses
labels, subjects, and filters to differentiate how communications occur among different EPGs that require
HTTP or HTTPS.
Labels, subjects, and filters define EPG communications according to the following options:
• Labels are managed objects with only one property: a name. Labels enable classifying which objects
can and cannot communicate with one another. Label matching is done first. If the labels do not match,
no other contract or filter information is processed. The label match attribute can be one of these values:
at least one (the default), all, none, or exactly one. Appendix B shows simple examples of all the label
match types and their results.

Note Labels can be applied to a variety of provider and consumer managed objects, including
EPGs, contracts, bridge domains, DHCP relay policies, and DNS policies. Labels do
not apply across object types; a label on an application EPG has no relevance to a label
on a bridge domain.

Labels determine which EPG consumers and EPG providers can communicate with one another. Label
matching determines which subjects of a contract are used with a given EPG provider or EPG consumer
of that contract.
The two types of labels are as follows:
◦Subject labels that are applied to EPGs. Subject label matching enables EPGs to choose a subset
of the subjects in a contract.
◦Provider/consumer labels that are applied to EPGs. Provider/consumer label matching enables
consumer EPGs to choose their provider EPGs and vice versa.

• Filters are Layer 2 to Layer 4 fields, TCP/IP header fields such as Layer 3 protocol type, Layer 4 ports,
and so forth. According to its related contract, an EPG provider dictates the protocols and ports in both
the in and out directions. Contract subjects contain associations to the filters (and their directions) that
are applied between EPGs that produce and consume the contract.
• Subjects are contained in contracts. One or more subjects within a contract use filters to specify the type
of traffic that can be communicated and how it occurs. For example, for HTTPS messages, the subject
specifies the direction and the filters that specify the IP address type (for example, IPv4), the HTTP
protocol, and the ports allowed. Subjects determine if filters are unidirectional or bidirectional. A
unidirectional filter is used in one direction. Unidirectional filters define in or out communications but
not the same for both. Bidirectional filters are the same for both; they define both in and out
communications.
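
For example, a filter matching HTTP could be defined once and then referenced by the contract subject shown in the previous section. A minimal hypothetical sketch:

    # A filter with one entry matching TCP destination port 80 (http);
    # whether it applies in one or both directions is decided by the
    # subject that references it, not by the filter itself.
    http_filter = """
    <vzFilter name='http-filter'>
      <vzEntry name='http' etherT='ip' prot='tcp'
               dFromPort='http' dToPort='http'/>
    </vzFilter>
    """
    session.post(APIC + "/api/mo/uni/tn-ExampleTenant.xml",
                 data=http_filter, verify=False)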

Contexts
A context (fvCtx) is a unique Layer 3 forwarding and application policy domain. The following figure shows
the location of contexts in the management information tree (MIT) and their relation to other objects in the
tenant.

Figure 11: Contexts

A context defines a Layer 3 address domain. One or more bridge domains are associated with a context. All
of the endpoints within the Layer 3 domain must have unique IP addresses because it is possible to forward
packets directly between these devices if the policy allows it. A tenant can contain multiple contexts. After
an administrator creates a logical device, the administrator can create a logical device context, which provides
a selection criteria policy for a device cluster. A logical device can be selected based on a contract name, a
graph name, or the function node name inside the graph.

Note A context is equivalent to a virtual routing and forwarding (VRF) instance in the networking world.

Bridge Domains and Subnets


A bridge domain (fvBD) represents a Layer 2 (L2) forwarding construct within the fabric. The following figure
shows the location of bridge domains in the management information tree (MIT) and their relation to other
objects in the tenant.

Figure 12: Bridge Domains

A bridge domain must be linked to a context and have at least one subnet (fvSubnet) that is associated with
it. The bridge domain defines the unique Layer 2 MAC address space and a Layer 2 flood domain if such
flooding is enabled. While a context defines a unique IP address space, that address space can consist of
multiple subnets. Those subnets are defined in one or more bridge domains that reference the corresponding
context.
Bridge domains can span multiple switches. A bridge domain can contain multiple subnets, but a subnet is
contained within a single bridge domain. Subnets can span multiple EPGs; one or more EPGs can be associated
with one bridge domain or subnet.
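
Putting the last two sections together, the sketch below creates a context and a bridge domain inside the hypothetical tenant: fvRsCtx links the bridge domain to its context, and fvSubnet defines a gateway address within it. Names and addresses are illustrative.

    # One context (fvCtx) and one bridge domain (fvBD) linked to it; the
    # subnet's ip attribute is the gateway address for 10.0.1.0/24.
    network = """
    <fvTenant name='ExampleTenant'>
      <fvCtx name='ExampleContext'/>
      <fvBD name='ExampleBD'>
        <fvRsCtx tnFvCtxName='ExampleContext'/>
        <fvSubnet ip='10.0.1.1/24'/>
      </fvBD>
    </fvTenant>
    """
    session.post(APIC + "/api/mo/uni.xml", data=network, verify=False)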

Outside Networks
Outside network object policies control connectivity to the outside. A tenant can contain multiple outside
network objects. The following figure shows the location of outside networks in the management information
tree (MIT) and their relation to other objects in the tenant.

Figure 13: Outside Networks

Outside network policies specify the relevant Layer 2 (l2extOut) or Layer 3 (l3extOut) properties that control
communications between an outside public or private network and the ACI fabric. External devices, such as
routers that connect to the WAN and enterprise core, or existing Layer 2 switches, connect to the front panel
interface of a leaf switch. The leaf switch that provides such connectivity is known as a border leaf. The border
leaf switch interface that connects to an external device can be configured as either a bridged or routed interface.
In the case of a routed interface, static or dynamic routing can be used. The border leaf switch can also perform
all the functions of a normal leaf switch.

Managed Object Relations and Policy Resolution


Relationship managed objects express the relation between managed object instances that do not share
containment (parent-child) relations. MO relations are established between the source MO and a target MO
in one of the following two ways:
• An explicit relation (fvRsPathAtt) defines a relationship based on the target MO distinguished name (DN).
• A named relation defines a relationship based on the target MO name.

The dotted lines in the following figure show several common MO relations.

Figure 14: MO Relations

For example, the dotted line between the EPG and the bridge domain defines the relation between those two
MOs. In this figure, the EPG (fvAEPg) contains a relationship MO (fvRsBd) that is named with the name of
the target bridge domain MO (fvBD). For example, if production is the bridge domain name
(tnFvBDName=production), then the relation name would be production (fvRsBdName=production).
In the case of policy resolution based on named relations, if a target MO with a matching name is not found
in the current tenant, the ACI fabric tries to resolve in the common tenant. For example, if the user tenant
EPG contained a relationship MO targeted to a bridge domain that did not exist in the tenant, the system tries
to resolve the relationship in the common tenant. If a named relation cannot be resolved in either the current
tenant or the common tenant, the ACI fabric attempts to resolve to a default policy. If a default policy exists
in the current tenant, it is used. If it does not exist, the ACI fabric looks for a default policy in the common
tenant. Bridge domain, context, and contract (security policy) named relations do not resolve to a default.
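
The resolution order described above can be summarized in a small sketch; the dictionaries are stand-ins for tenant policy contents, not an APIC API.

    def resolve(name, current_tenant, common_tenant, allow_default=True):
        # 1. Look for the named policy in the current tenant.
        if name in current_tenant:
            return current_tenant[name]
        # 2. Fall back to the common tenant.
        if name in common_tenant:
            return common_tenant[name]
        # 3. Fall back to a default policy, except for bridge domain,
        #    context, and contract relations, which never use a default.
        if allow_default:
            return current_tenant.get("default", common_tenant.get("default"))
        return None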

Trans Tenant EPG Communications


EPGs in one tenant can communicate with EPGs in another tenant through a contract interface contained in a shared
tenant. The contract interface is an MO that can be used as a contract consumption interface by the EPGs that
are contained in different tenants. By associating to an interface, an EPG consumes the subjects represented
by the interface to a contract contained in the shared tenant. Tenants can participate in a single contract, which
is defined in a third, shared tenant. Stricter security requirements can be satisfied by defining the tenants,
contract, subjects, and filter directions so that tenants remain completely isolated from one another.

Tags
Object tags simplify API operations. In an API operation, an object or group of objects can be referenced by
the tag name instead of by the distinguished name (DN). Tags are child objects of the item they tag; besides
the name, they have no other properties.

Use a tag to assign a descriptive name to a group of objects. The same tag name can be assigned to multiple
objects. Multiple tag names can be assigned to an object. For example, to enable easy searchable access to all
web server EPGs, assign a web server tag to all such EPGs. Web server EPGs throughout the fabric can be
located by referencing the web server tag.
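
A hedged sketch of both operations, reusing the hypothetical session and EPG names from earlier; the tagInst class is part of the model, while the exact query-target-filter syntax shown is an assumption.

    # Attach a tag to the web EPG by posting a tagInst child object.
    tag = "<tagInst name='web-server'/>"
    session.post(APIC + "/api/mo/uni/tn-ExampleTenant/ap-ecommerce/epg-web.xml",
                 data=tag, verify=False)

    # Locate every object carrying that tag with a filtered class query.
    query = 'query-target-filter=eq(tagInst.name,"web-server")'
    resp = session.get(APIC + "/api/class/tagInst.json?" + query, verify=False)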

CHAPTER 3
ACI Fabric Fundamentals
This chapter contains the following sections:

• About ACI Fabric Fundamentals, page 21


• Decoupled Identity and Location, page 22
• Policy Identification and Enforcement, page 22
• Encapsulation Normalization, page 24
• Multicast Tree Topology, page 24
• Load Balancing, page 25
• Endpoint Retention, page 26
• ACI Fabric Security Policy Model, page 27

About ACI Fabric Fundamentals


The ACI fabric supports more than 64,000 dedicated tenant networks. A single fabric can support more than
one million IPv4/IPv6 endpoints, more than 64,000 tenants, and more than 200,000 10G ports. The ACI fabric
enables any service (physical or virtual) anywhere with no need for additional software or hardware gateways
to connect between the physical and virtual services and normalizes encapsulations for Virtual Extensible
Local Area Network (VXLAN) / VLAN / Network Virtualization using Generic Routing Encapsulation
(NVGRE).
The ACI fabric decouples the endpoint identity and associated policy from the underlying forwarding graph.
It provides a distributed Layer 3 gateway that ensures optimal Layer 3 and Layer 2 forwarding. The fabric
supports standard bridging and routing semantics without standard location constraints (any IP address
anywhere), and removes flooding requirements for the IP control plane Address Resolution Protocol (ARP)
/ Gratuitous ARP (GARP). All traffic within the fabric is encapsulated within VXLAN.

Decoupled Identity and Location


The ACI fabric decouples the tenant endpoint address, its identifier, from the location of the endpoint that is
defined by its locator or VXLAN tunnel endpoint (VTEP) address. The following figure shows decoupled
identity and location.

Figure 15: Decoupled Identity and Location

Forwarding within the fabric is between VTEPs. The mapping of the internal tenant MAC or IP address to a
location is performed by the VTEP using a distributed mapping database.
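
Conceptually, the mapping database behaves like the lookup sketched below; the entries and VTEP names are invented for illustration.

    # Identity (the endpoint address) is decoupled from location (the VTEP
    # behind which the endpoint currently sits). Moving an endpoint updates
    # only its locator entry; its identifier is unchanged.
    mapping_db = {
        "10.0.1.10": "vtep-leaf101",
        "10.0.1.20": "vtep-leaf103",
    }

    def locate(endpoint_ip):
        return mapping_db[endpoint_ip]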

Policy Identification and Enforcement


An application policy is decoupled from forwarding by using a distinct tagging attribute that is also carried
in the VXLAN packet. Policy identification is carried in every packet in the ACI fabric, which enables

Cisco Application Centric Infrastructure Fundamentals


22
ACI Fabric Fundamentals
Policy Identification and Enforcement

consistent enforcement of the policy in a fully distributed manner. The following figure shows policy
identification.

Figure 16: Policy Identification and Enforcement

Fabric and access policies govern the operation of internal fabric and external access interfaces. The system
automatically creates default fabric and access policies. Fabric administrators (who have access rights to the
entire fabric) can modify the default policies or create new policies according to their requirements. Fabric
and access policies can enable various functions or protocols. Selectors in the APIC enable fabric administrators
to choose the nodes and interfaces to which they will apply policies.

Encapsulation Normalization
Traffic within the fabric is encapsulated as VXLAN. External VLAN/VXLAN/NVGRE tags are mapped at
ingress to an internal VXLAN tag. The following figure shows encapsulation normalization.

Figure 17: Encapsulation Normalization

Forwarding is not limited to or constrained by the encapsulation type or encapsulation overlay network.
External identifiers are localized to the leaf or leaf port, which allows reuse or translation if required. A bridge
domain forwarding policy can be defined to provide standard VLAN behavior where required.
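
The normalization step can be pictured as a per-leaf lookup: whatever encapsulation a frame arrives with, the ingress leaf rewrites it to the fabric's internal VXLAN identifier for that traffic. The mappings below are invented for illustration.

    # External identifiers are local to the leaf port, so the same external
    # tag can be reused elsewhere; all three ports here feed the same
    # internal VXLAN segment.
    ingress_map = {
        ("eth1/1", "vlan-100"):   "vxlan-867311",
        ("eth1/2", "nvgre-7999"): "vxlan-867311",
        ("eth1/3", "vxlan-5010"): "vxlan-867311",
    }

    def normalize(port, external_tag):
        return ingress_map[(port, external_tag)]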

Multicast Tree Topology


The ACI fabric supports forwarding of unicast, multicast, and broadcast traffic from access ports. All
multidestination traffic from the endpoint hosts is carried as multicast traffic in the fabric.
The ACI fabric consists of spine and leaf switches that are connected in a Clos topology (named after Charles
Clos) where traffic that enters an ingress interface can be routed through any of the available middle stage
spine switches, to the relevant egress switch. The leaf switches have two types of ports: fabric ports for
connecting to spine switches and access ports for connecting servers, service appliances, routers, Fabric
Extender (FEX), and so forth.
The top of rack (ToR) switches are the leaf switches and they are attached to the spine switches. The leaf
switches are not connected to each other and spine switches only connect to the leaf switches. In this Clos
topology, every lower-tier switch is connected to each of the top-tier switches in a full-mesh topology. If a
spine switch fails, it only slightly degrades the performance through the ACI fabric. The data path is chosen
so that the traffic load is evenly distributed between the spine switches.
The ACI fabric uses Forwarding Tag (FTAG) trees to load balance multi-destination traffic. All multidestination
traffic is forwarded in the form of encapsulated IP multicast traffic within the fabric. The ingress leaf assigns
an FTAG to the traffic when forwarding it to the spine. The FTAG is assigned in the packet as part of the
destination multicast address. In the fabric, the traffic is forwarded along the specified FTAG tree. Spine and
any intermediate leaf switches forward traffic based on the FTAG ID. One forwarding tree is built per FTAG
ID. Between any two nodes, only one link forwards per FTAG. Because of the use of multiple FTAGs, parallel
links can be used with each FTAG choosing a different link for forwarding. The larger the number of FTAG
trees in the fabric, the better the load-balancing potential. The ACI fabric supports up to 12 FTAGs.

The following figure shows a topology with four FTAGs. Every leaf switch in the fabric is connected to each
FTAG either directly or through transit nodes. One FTAG is rooted on each of the spine nodes.

Figure 18: Multicast Tree Topology

If a leaf switch has direct connectivity to the spine, it uses the direct path to connect to the FTAG tree. If there
is no direct link, the leaf switch uses transit nodes that are connected to the FTAG tree, as shown in the figure
above. Although the figure shows each spine as the root of one FTAG tree, multiple FTAG tree roots could
be on one spine.
As part of the ACI fabric bring-up discovery process, the FTAG roots are placed on the spine switches. The
APIC configures each of the spine switches with the FTAGs that the spine anchors. The identity of the roots
and the number of FTAGs is derived from the configuration. The APIC specifies the number of FTAG trees
to be used and the roots for each of those trees. FTAG trees are recalculated every time there is a topology
change in the fabric.
Root placement is configuration driven and is not re-rooted dynamically on run-time events such as a spine
switch failure. Typically, FTAG configurations are static. An FTAG can be reanchored from one spine to
another when a spine switch is added or removed because the administrator might decide to redistribute the
FTAG across the remaining or expanded set of spine switches.

Load Balancing
The ACI fabric provides several load balancing options for balancing the traffic among the available uplink
links. Static hash load balancing is the traditional load balancing mechanism used in networks where each
flow is allocated to an uplink based on a hash of its 5-tuple. This load balancing gives a distribution of flows
across the available links that is roughly even. Usually, with a large number of flows, the even distribution
of flows results in an even distribution of bandwidth as well. However, if a few flows are much larger than
the rest, static load balancing might give suboptimal results.
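
A minimal sketch of the static hash mechanism: the 5-tuple is hashed and the result selects one of the available uplinks, so every packet of a flow takes the same path. The hash function here is arbitrary, chosen only for illustration.

    import hashlib

    def pick_uplink(src_ip, dst_ip, proto, sport, dport, uplinks):
        # Hash the flow's 5-tuple and map it onto the list of uplinks.
        key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
        digest = hashlib.md5(key).digest()
        return uplinks[int.from_bytes(digest[:4], "big") % len(uplinks)]

    print(pick_uplink("10.0.1.10", "10.0.2.20", "tcp", 33000, 80,
                      ["uplink-1", "uplink-2", "uplink-3", "uplink-4"]))
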
Dynamic load balancing (DLB) adjusts the traffic allocations according to congestion levels. It measures the
congestion across the available paths and places the flows on the least congested paths, which results in an
optimal or near optimal placement of the data.
DLB can be configured to place traffic on the available uplinks using the granularity of flows or of flowlets.
Flowlets are bursts of packets from a flow that are separated by suitably large gaps in time. If the idle interval
between two bursts of packets is larger than the maximum difference in latency among available paths, the
second burst (or flowlet) can be sent along a different path than the first without reordering packets. This idle
interval is measured with a timer called the flowlet timer. Flowlets provide a finer-grained alternative to
flows for load balancing without causing packet reordering.
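
To make the reordering guarantee concrete: if the fastest available path has latency d_min and the slowest has latency d_max, an idle gap longer than (d_max - d_min) ensures that the last packet of the first burst arrives before the first packet of the second burst could overtake it on a faster path, so the flowlet can safely be placed on a different link.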


DLB has two modes of operation: aggressive and conservative. These modes differ in the timeout value used
for the flowlet timer. The aggressive mode uses a relatively small flowlet timeout. This very fine-grained load
balancing is optimal for traffic distribution, but some packet reordering might occur; the overall benefit to
application performance is nonetheless equal to or better than that of the conservative mode. The conservative
mode uses a larger flowlet timeout that guarantees packets are not reordered. The tradeoff is less
granular load balancing because new flowlet opportunities are less frequent. While DLB is not always able
to provide the most optimal load balancing, it is never worse than static hash load balancing.
The ACI fabric adjusts traffic when the number of available links changes due to a link going off-line or
coming on-line. The fabric redistributes the traffic across the new set of links.
In all modes of load balancing, static or dynamic, the traffic is sent only on those uplinks or paths that meet
the criteria for equal cost multipath (ECMP); these paths are equal and the lowest cost from a routing
perspective.
Dynamic Packet Prioritization (DPP), while not a load balancing technology, uses some of the same mechanisms
as DLB in the switch. DPP configuration is exclusive of DLB. DPP prioritizes short flows higher than long
flows; a short flow is less than approximately 15 packets. Short flows are more sensitive to latency than long
ones. DPP can improve overall application performance.
The ACI fabric default configuration uses a traditional static hash. A static hashing function distributes the
traffic between uplinks from the leaf switch to the spine switch. When a link goes down or comes up, traffic
on all links is redistributed based on the new number of uplinks.

Endpoint Retention
Retaining cached endpoint MAC and IP addresses in the switch improves performance. The switch learns
about endpoints as they become active. Local endpoints are on the local switch. Remote endpoints are on
another switch but are cached locally. The leaf switches store location and policy information about endpoints
that are attached directly to them, or through a directly attached Layer 2 switch or Fabric Extender (local
endpoints), and about endpoints that are attached to other leaf switches in the fabric (remote endpoints, held
in hardware). The switch uses a 32K-entry cache for local endpoints and a 64K-entry cache for remote
endpoints.
Software that runs on the leaf switch actively manages these tables. For the locally attached endpoints, the
software ages out entries after the retention timer for each entry has expired. Endpoint entries are pruned from
the switch cache when the endpoint activity ceases, the endpoint location moves to another switch, or the
life-cycle state changes to offline. The default value for the local retention timer is 15 minutes. Before removing
an inactive entry, the leaf switch sends three ARP requests to the endpoint to verify that it is actually gone.
For remotely attached endpoints, the switch ages out the entries after 3 minutes of inactivity. The remote
endpoint is immediately reentered in the table if it becomes active again. There is no performance penalty for
not having the remote endpoint in the table other than policies are enforced at the remote leaf switch until the
endpoint is cached again.
The endpoint retention timer policy can be modified. Configuring a static endpoint MAC and IP address
stores it permanently in the switch cache by setting its retention timer to zero. An entry whose retention timer
is zero is never removed, so take care when using this setting: if the endpoint moves or its policy changes,
the entry must be refreshed with the updated information through the APIC. When the retention timer is
nonzero, this information is checked and updated virtually instantly on each packet without APIC intervention.
The endpoint retention policy determines how pruning is done. Use the default policy algorithm for most
operations. Changing the endpoint retention policy can affect system performance. In the case of a switch that
communicates with thousands of endpoints, lowering the aging interval increases the number of cache windows


available to support large numbers of active endpoints. When the endpoint count exceeds 10,000, we recommend
distributing endpoints across multiple switches.
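
These retention timers are exposed in the policy model as an endpoint retention policy that can be associated with a bridge domain. The following REST API XML is a minimal sketch, assuming the fvEpRetPol class and illustrative object names; the interval values (in seconds) mirror the intervals described above, and setting an interval to zero disables aging for the matching entries.

    <fvTenant name="ExampleCorp">
        <!-- local entries age out after 15 minutes, remote entries after 3 minutes -->
        <fvEpRetPol name="epRetPol1" localEpAgeIntvl="900" remoteEpAgeIntvl="180"/>
        <fvBD name="bd1">
            <!-- associate the retention policy with a bridge domain -->
            <fvRsBdToEpRet tnFvEpRetPolName="epRetPol1"/>
        </fvBD>
    </fvTenant>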

ACI Fabric Security Policy Model


The ACI fabric security policy model is based on contracts. This approach addresses limitations of traditional
access control lists (ACLs). Contracts contain the specifications for security policies that are enforced on
traffic between endpoint groups.
EPG communications require a contract; EPG to EPG communication is not allowed without a contract. The
APIC renders the entire policy model, including contracts and their associated EPGs, into the concrete model
in each switch. Upon ingress, every packet entering the fabric is marked with the required policy details.
Because contracts are required to select what types of traffic can pass between EPGs, contracts enforce security
policies. While contracts satisfy the security requirements handled by access control lists (ACLs) in conventional
network settings, they are a more flexible, manageable, and comprehensive security policy solution.

Access Control List Limitations


Traditional access control lists (ACLs) have a number of limitations that the ACI fabric security model
addresses. Traditional ACLs are tightly coupled with the network topology. They are typically configured
per router or switch ingress and egress interface and are customized to that interface and the traffic that is
expected to flow through it. Due to this customization, they often cannot be reused across
interfaces, much less across routers or switches.
Traditional ACLs can be very complicated and cryptic because they contain lists of specific IP addresses,
subnets, and protocols that are allowed as well as many that are specifically not allowed. This complexity
means that they are difficult to maintain and tend simply to grow, because administrators are reluctant to remove
any ACL rules for fear of creating a problem. Their complexity also means that they are generally deployed only
at specific demarcation points in the network such as the demarcation between the WAN and the enterprise
or the WAN and the data center. In this case, the security benefits of ACLs are not exploited inside the
enterprise or for traffic that is contained within the data center.
Another issue is the possible huge increase in the number of entries in a single ACL. Users often want to
create an ACL that allows a set of sources to communicate with a set of destinations by using a set of protocols.
In the worst case, if N sources are talking to M destinations using K protocols, there might be N*M*K lines
in the ACL. The ACL must list each source that communicates with each destination for each protocol. It
does not take many devices or protocols before the ACL gets very large.
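
For example, 10 sources communicating with 10 destinations over 3 protocols could require 10*10*3 = 300 ACL lines; in the ACI model described below, the same intent is expressed as a single rule between two endpoint groups.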
The ACI fabric security model addresses these ACL issues. The ACI fabric security model directly expresses
the intent of the administrator. Administrators use contract, filter, and label managed objects to specify how
groups of endpoints are allowed to communicate. These managed objects are not tied to the topology of the
network because they are not applied to a specific interface. They are simply rules that the network must
enforce irrespective of where these groups of endpoints are connected. This topology independence means
that these managed objects can easily be deployed and reused throughout the data center, not just at specific
demarcation points.
The ACI fabric security model uses the endpoint grouping construct directly so the idea of allowing groups
of servers to communicate with one another is simple. A single rule can allow an arbitrary number of sources
to communicate with an equally arbitrary number of destinations. This reduction in size dramatically improves
scale and maintainability, which also makes these rules easier to use consistently throughout the data center.


Contracts Contain Security Policy Specifications


In the ACI security model, contracts contain the policies that govern the communication between EPGs. The
contract specifies what can be communicated and the EPGs specify the source and destination of the
communications. Contracts link EPGs, as shown below.
EPG 1 --------------- CONTRACT --------------- EPG 2
Endpoints in EPG 1 can communicate with endpoints in EPG 2 and vice versa if the contract allows it. This
policy construct is very flexible: there can be many contracts between EPG 1 and EPG 2, more than two EPGs
can use a contract, and contracts can be reused across multiple sets of EPGs.
There is also directionality in the relationship between EPGs and contracts. EPGs can either provide or consume
a contract. An EPG that provides a contract is typically a set of endpoints that provide a service to a set of
client devices. The protocols used by that service are defined in the contract. An EPG that consumes a contract
is typically a set of endpoints that are clients of that service. When the client endpoint (consumer) tries to
connect to a server endpoint (provider), the contract checks to see if that connection is allowed. Unless
otherwise specified, that contract would not allow a server to initiate a connection to a client. However, another
contract between the EPGs could easily allow a connection in that direction.
This providing/consuming relationship is typically shown graphically with arrows between the EPGs and the
contract. Note the direction of the arrows shown below.
EPG 1 <-------consumes-------- CONTRACT <-------provides-------- EPG 2
The contract is constructed in a hierarchical manner. It consists of one or more subjects, each subject contains
one or more filters, and each filter can define one or more protocols.
The following figure shows how contracts govern EPG communications.

Figure 19: Contracts Determine EPG to EPG Communications

For example, you may define a filter called HTTP that specifies TCP port 80 and port 8080 and another filter
called HTTPS that specifies TCP port 443. You might then create a contract called webCtrct that has two sets
of subjects. openProv and openCons are the subjects that contain the HTTP filter. secureProv and secureCons
are the subjects that contain the HTTPS filter. This webCtrct contract can be used to allow both secure and
non-secure web traffic between EPGs that provide the web service and EPGs that contain endpoints that want
to consume that service.
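
Expressed as REST API XML, this example could be sketched as follows. The structure uses the filter (vzFilter), contract (vzBrCP), and subject (vzSubj) classes; the tenant name ExampleCorp is an assumption, while the filter and subject names follow the text above.

    <fvTenant name="ExampleCorp">
        <vzFilter name="HTTP">
            <vzEntry name="tcp80" etherT="ip" prot="tcp" dFromPort="80" dToPort="80"/>
            <vzEntry name="tcp8080" etherT="ip" prot="tcp" dFromPort="8080" dToPort="8080"/>
        </vzFilter>
        <vzFilter name="HTTPS">
            <vzEntry name="tcp443" etherT="ip" prot="tcp" dFromPort="443" dToPort="443"/>
        </vzFilter>
        <vzBrCP name="webCtrct">
            <vzSubj name="openProv">
                <vzRsSubjFiltAtt tnVzFilterName="HTTP"/>
            </vzSubj>
            <vzSubj name="openCons">
                <vzRsSubjFiltAtt tnVzFilterName="HTTP"/>
            </vzSubj>
            <vzSubj name="secureProv">
                <vzRsSubjFiltAtt tnVzFilterName="HTTPS"/>
            </vzSubj>
            <vzSubj name="secureCons">
                <vzRsSubjFiltAtt tnVzFilterName="HTTPS"/>
            </vzSubj>
        </vzBrCP>
    </fvTenant>

An EPG that provides the web service would then reference the contract with an fvRsProv relation (tnVzBrCPName="webCtrct"), and a client EPG with an fvRsCons relation, following the providing/consuming relationship described above.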


These same constructs also apply for policies that govern virtual machine hypervisors. When an EPG is placed
in a virtual machine manager (VMM) domain, the APIC downloads all of the policies that are associated with
the EPG to the leaf switches with interfaces connecting to the VMM domain. For a full explanation of VMM
domains, see the Virtual Machine Manager Domains chapter of the ACI Fundamentals manual. When this
policy is created, the APIC pushes it (pre-populates it) to a VMM domain that specifies which switches allow
connectivity for the endpoints in the EPGs. The VMM domain defines the set of switches and ports to which
endpoints in an EPG can connect. When an endpoint comes on-line, it is associated with the appropriate
EPGs. When it sends a packet, the source EPG and destination EPG are derived from the packet and the policy
defined by the corresponding contract is checked to see if the packet is allowed. If yes, the packet is forwarded.
If no, the packet is dropped.
The contract also allows more complex actions than just allow or deny. The contract can specify that traffic
that matches a given subject can be re-directed to a service, can be copied, or can have its QoS level modified.
With pre-population of the access policy in the concrete model, endpoints can move, new ones can come
on-line, and communication can occur even if the APIC is off-line or otherwise inaccessible. The APIC is
removed from being a single point of failure for the network. Upon packet ingress to the ACI fabric, security
policies are enforced by the concrete model running in the switch.

Security Policy Enforcement


As traffic enters the leaf switch from the front panel interfaces, the packets are marked with the EPG of the
source EPG. The leaf switch then performs a forwarding lookup on the packet destination IP address within
the tenant space. A hit can result in any of the following scenarios:
1 A unicast (/32) hit provides the EPG of the destination endpoint and either the local interface or the remote
leaf switch VTEP IP address where the destination endpoint is present.
2 A unicast hit of a subnet prefix (not /32) provides the EPG of the destination subnet prefix and either the
local interface or the remote leaf switch VTEP IP address where the destination subnet prefix is present.
3 A multicast hit provides the local interfaces of local receivers and the outer destination IP address to use
in the VXLAN encapsulation across the fabric and the EPG of the multicast group.

Note Multicast and external router subnets always result in a hit on the ingress leaf switch. Security policy
enforcement occurs as soon as the destination EPG is known by the ingress leaf switch.

A miss in the forwarding table causes the packet to be sent to the forwarding proxy in the spine switch.
The forwarding proxy then performs a forwarding table lookup. If it is a miss, the packet is dropped. If it is
a hit, the packet is sent to the egress leaf switch that contains the destination endpoint. Because the egress leaf
switch knows the EPG of the destination, it performs the security policy enforcement. The egress leaf switch
must also know the EPG of the packet source. The fabric header enables this process because it carries the
EPG from the ingress leaf switch to the egress leaf switch. The spine switch preserves the original EPG in
the packet when it performs the forwarding proxy function.
On the egress leaf switch, the source IP address, source VTEP, and source EPG information are stored in the
local forwarding table through learning. Because most flows are bidirectional, a return packet populates the
forwarding table on both sides of the flow, which enables the traffic to be ingress filtered in both directions.


Multicast and EPG Security


Multicast traffic introduces an interesting problem. With unicast traffic, the destination EPG is clearly known
from examining the packet’s destination. However, with multicast traffic, the destination is an abstract entity:
the multicast group. Because the source of a packet is never a multicast address, the source EPG is determined
in the same manner as in the previous unicast examples. The derivation of the destination group is where
multicast differs.
Because multicast groups are somewhat independent of the network topology, static configuration of the (S,
G) and (*, G) to group binding is acceptable. When the multicast group is placed in the forwarding table, the
EPG that corresponds to the multicast group is also put in the forwarding table.

Note This document refers to a multicast stream as a multicast group.

The leaf switch always views the group that corresponds to the multicast stream as the destination EPG and
never the source EPG. In the access control matrix shown previously, the row contents are invalid where the
multicast EPG is the source. The traffic is sent to the multicast stream from either the source of the multicast
stream or the destination that wants to join the multicast stream. Because the multicast stream must be in the
forwarding table and there is no hierarchical addressing within the stream, multicast traffic is access controlled
at the ingress fabric edge. As a result, IPv4 multicast is always enforced as ingress filtering.
The receiver of the multicast stream must first join the multicast stream before it receives traffic. When sending
the IGMP Join request, the multicast receiver is actually the source of the IGMP packet. The destination is
defined as the multicast group and the destination EPG is retrieved from the forwarding table. At the ingress
point where the router receives the IGMP Join request, access control is applied. If the Join request is denied,
the receiver does not receive any traffic from that particular multicast stream.
The policy enforcement for multicast EPGs occurs on the ingress by the leaf switch according to contract
rules as described earlier. Also, the multicast group to EPG binding is pushed by the APIC to all leaf switches
that contain the particular tenant (VRF).

Taboos
While the normal processes for ensuring security still apply, the ACI policy model aids in assuring the integrity
of whatever security practices are employed. In the ACI policy model approach, all communications must
conform to these conditions:
• Communication is allowed only based on contracts, which are managed objects in the model. If there
is no contract, inter-EPG communication is disabled by default.
• No direct access to the hardware; all interaction is managed through the policy model.

Taboos are special contract managed objects in the model that the network administrator can use to deny
specific classes of traffic. Taboos can be used to drop traffic matching a pattern (any EPG, a specific EPG,
matching a filter, and so forth). Taboo rules are applied in the hardware before the rules of regular contracts
are applied.
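
As an illustrative sketch (all names are assumptions), a taboo contract that drops a class of traffic, and its association with an EPG, might be posted as follows:

    <fvTenant name="ExampleCorp">
        <vzFilter name="icmpFilter">
            <vzEntry name="icmp" etherT="ip" prot="icmp"/>
        </vzFilter>
        <vzTaboo name="denyIcmp">
            <vzTSubj name="icmpSubj">
                <!-- traffic matching the filter is dropped before regular contract rules apply -->
                <vzRsDenyRule tnVzFilterName="icmpFilter"/>
            </vzTSubj>
        </vzTaboo>
        <fvAp name="webApp">
            <fvAEPg name="webServers">
                <!-- protect this EPG with the taboo contract -->
                <fvRsProtBy tnVzTabooName="denyIcmp"/>
            </fvAEPg>
        </fvAp>
    </fvTenant>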

CHAPTER 4
Fabric Provisioning
This chapter contains the following sections:

• Fabric Provisioning
• Startup Discovery and Configuration
• Cluster Management Guidelines
• Fabric Inventory
• Provisioning
• Default Policies
• Fabric Policies Overview
• Fabric Policy Configuration
• Access Policies Overview
• Access Policy Configuration
• Scheduler
• Firmware Upgrade
• Geolocation

Fabric Provisioning
Cisco Application Centric Infrastructure (ACI) automation and self-provisioning offer these operational
advantages over a traditional switching infrastructure:
• A clustered logically centralized but physically distributed APIC provides policy, bootstrap, and image
management for the entire fabric.
• The APIC startup topology auto discovery, automated configuration, and infrastructure addressing uses
these industry-standard protocols: Intermediate System-to-Intermediate System (IS-IS), Link Layer
Discovery Protocol (LLDP), and Dynamic Host Configuration Protocol (DHCP).


• The APIC provides a simple and automated policy-based provisioning and upgrade process, and automated
image management.
• APIC provides scalable configuration management. Because ACI data centers can be very large,
configuring switches or interfaces individually does not scale well, even using scripts. APIC pod,
controller, switch, module and interface selectors (all, range, specific instances) enable symmetric
configurations across the fabric. To apply a symmetric configuration, an administrator defines switch
profiles that associate interface configurations in a single policy group.

Startup Discovery and Configuration


The clustered APIC controller provides DHCP, bootstrap configuration, and image management to the fabric
for automated startup and upgrades. The following figure shows startup discovery.

Figure 20: Startup Discovery Configuration

The Cisco Nexus ACI fabric software is bundled as an ISO image, which can be installed on the Cisco APIC
server through the management console. The Cisco Nexus ACI Software ISO contains the Cisco APIC image,
the firmware image for the leaf node, the firmware image for the spine node, default fabric infrastructure
policies, and the protocols required for operation.
The ACI fabric bootstrap sequence begins when the fabric is booted with factory-installed images on all the
switches. The Cisco Nexus 9000 Series switches that run the ACI firmware and APICs use a reserved overlay
for the boot process. This infrastructure space is hard-coded on the switches. The APIC can connect to a leaf
through the default overlay, or it can use a locally significant identifier.
The ACI fabric uses an infrastructure space, which is securely isolated in the fabric and is where all the
topology discovery, fabric management, and infrastructure addressing is performed. ACI fabric management
communication within the fabric takes place in the infrastructure space through internal private IP addresses.
This addressing scheme allows the APIC to communicate with fabric nodes and other Cisco APIC controllers
in the cluster. The APIC discovers the IP address and node information of other Cisco APIC controllers in
the cluster using the Link Layer Discovery Protocol (LLDP)-based discovery process.
The following describes the APIC cluster discovery process:
• Each APIC in the Cisco ACI uses an internal private IP address to communicate with the ACI nodes
and other APICs in the cluster. The APIC discovers the IP address of other APIC controllers in the
cluster through the LLDP-based discovery process.
• APICs maintain an appliance vector (AV), which provides a mapping from an APIC ID to an APIC IP
address and a universally unique identifier (UUID) of the APIC. Initially, each APIC starts with an AV
filled with its local IP address, and all other APIC slots are marked as unknown.


• When a switch reboots, the policy element (PE) on the leaf gets its AV from the APIC. The switch then
advertises this AV to all of its neighbors and reports any discrepancies between its local AV and neighbors'
AVs to all the APICs in its local AV.

Using this process, the APIC learns about the other APIC controllers in the ACI through switches. After
validating these newly discovered APIC controllers in the cluster, the APIC controllers update their local AV
and program the switches with the new AV. Switches then start advertising this new AV. This process continues
until all the switches have the identical AV and all APIC controllers know the IP address of all the other APIC
controllers.
The ACI fabric is brought up in a cascading manner, starting with the leaf nodes that are directly attached
to the APIC. LLDP and control-plane IS-IS convergence occurs in parallel to this boot process. The ACI
fabric uses LLDP- and DHCP-based fabric discovery to automatically discover the fabric switch nodes, assign
the infrastructure VXLAN tunnel endpoint (VTEP) addresses, and install the firmware on the switches. Prior
to this automated process, a minimal bootstrap configuration must be performed on the Cisco APIC controller.

Cluster Management Guidelines


The APIC cluster comprises multiple APIC controllers that provide operators with a unified real-time
monitoring, diagnostic, and configuration management capability for the ACI fabric. To assure optimal system
performance, follow the guidelines below when making changes to the APIC cluster.

Note Prior to initiating a change to the cluster, always verify its health. When performing planned changes to
the cluster, all controllers in the cluster should be healthy. If one or more of the APIC controllers in the
cluster is not healthy, remedy that situation before proceeding. See the Cisco APIC Troubleshooting Guide
for more information on resolving APIC cluster health issues.

Follow these general guidelines when managing clusters:


• Disregard cluster information from APICs that are not currently in the cluster; they do not provide
accurate cluster information.
• Cluster slots contain an APIC ChassisID. Once you configure a slot, it remains unavailable until you
decommission the APIC with the assigned ChassisID.
• If an APIC firmware upgrade is in progress, wait for it to complete and the cluster to be fully fit before
proceeding with any other changes to the cluster.

Expanding the APIC Cluster Size


Follow these guidelines to expand the APIC cluster size:
• Schedule the cluster expansion at a time when the demands of the fabric workload will not be impacted
by the cluster expansion.
• Stage the new APIC controller(s) according to the instructions in their hardware installation guide. Verify
in-band connectivity with a PING test.
• Increase the cluster target size to be equal to the existing cluster size controller count plus the new
controller count. For example, if the existing cluster size controller count is 3 and you are adding 3


controllers, set the new cluster target size to 6. The cluster proceeds to sequentially increase its size one
controller at a time until all the new controllers are included in the cluster.

Note Cluster expansion stops if an existing APIC controller becomes unavailable. Resolve
this issue before attempting to proceed with the cluster expansion.

• Depending on the amount of data the APIC must synchronize upon the addition of each appliance, the
time required to complete the expansion could be more than 10 minutes per appliance. Upon successful
expansion of the cluster, the APIC operational size and the target size will be equal.

Note Allow the APIC to complete the cluster expansion before making additional changes
to the cluster.

Replacing APIC Controllers in the Cluster


Follow these guidelines to replace APIC controllers:
• Make note of the ID number of the APIC controller that will be replaced.
• Decommission the APIC controller that will be replaced.

Note Failure to decommission APIC controllers before attempting their replacement will
preclude the cluster from absorbing the replacement controllers.

• Stage the replacement APIC controller according to the instructions in their hardware installation guide.
Verify in-band connectivity with a PING test.
• When adding the replacement controller to the APIC cluster, assign the previously used APIC controller
ID number to the replacement APIC controller. The APIC proceeds to synchronize the replacement
controller with the cluster.

Note Cluster synchronization stops if an existing APIC controller becomes unavailable.


Resolve this issue before attempting to proceed with the cluster synchronization.

• Depending on the amount of data the APIC must synchronize upon the replacement of a controller, the
time required to complete the replacement could be more than 10 minutes per replacement controller.
Upon successful synchronization of the replacement controller with the cluster, the APIC operational
size and the target size will remain unchanged.

Note Allow the APIC to complete the cluster synchronization before making additional
changes to the cluster.


• Schedule the APIC controller replacement at a time when the demands of the fabric workload will not
be impacted by the cluster synchronization.
• The UUID and fabric domain name persist in an APIC controller across reboots. However, a clean
back-to-factory reboot removes this information. If an APIC controller is to be moved from one fabric
to another, a clean back-to-factory reboot must be done before attempting to add such a controller to a
different ACI fabric.

Reducing the APIC Cluster Size


Follow these guidelines to reduce the APIC cluster size and decommission the APIC controllers that are
removed from the cluster:

Note Failure to follow an orderly process to decommission and power down APIC controllers from a reduced
cluster can lead to unpredictable outcomes. Do not allow unrecognized APIC controllers to remain
connected to the fabric.

• Reducing the cluster size increases the load on the remaining APIC controllers. Schedule the APIC
controller size reduction at a time when the demands of the fabric workload will not be impacted by the
cluster synchronization.
• Reduce the cluster target size to the new lower value. For example, if the existing cluster size is 6 and
you will remove 3 controllers, reduce the cluster target size to 3.
• Starting with the highest numbered controller ID in the existing cluster, decommission, power down,
and disconnect the APIC controller one by one until the cluster reaches the new lower target size.
Upon the decommissioning and removal of each controller, the APIC synchronizes the cluster.
• Cluster synchronization stops if an existing APIC controller becomes unavailable. Resolve this issue
before attempting to proceed with the cluster synchronization.
• Depending on the amount of data the APIC must synchronize upon the removal of a controller, the time
required to decommission and complete cluster synchronization for each controller could be more than
10 minutes per controller.

Note Complete all the necessary decommissioning steps and allow the APIC to complete the cluster
synchronization before making additional changes to the cluster.

Fabric Inventory
The policy model contains a complete real time inventory of the fabric, including all nodes and interfaces.
This inventory capability enables automation of provisioning, troubleshooting, auditing, and monitoring.
For Cisco ACI fabric switches, the fabric membership node inventory contains policies that identify the node
ID, serial number, and name. Third-party nodes are recorded as unmanaged fabric nodes. Cisco ACI switches
can be automatically discovered, or their policy information can be imported. The policy model also maintains
fabric member node state information.


Node State       Condition

Unknown          No policy. All nodes require a policy; without a policy, a member node state is
                 unknown.
Discovering      A transient state showing that the node is being discovered.
Undiscovered     The node has a policy but has never been brought up in the fabric.
Unsupported      The node is a Cisco switch but is not supported. For example, the firmware version
                 is not compatible with the ACI fabric.
Decommissioned   The node has a policy and was discovered, but a user disabled it. The node can be
                 reenabled.
Inactive         The node is unreachable. It had been discovered but currently is not accessible.
                 For example, it may be powered off, or its cables may be disconnected.
Active           The node is an active member of the fabric.

An interface can be disabled because an administrator blacklisted it or because the APIC detected a link
state anomaly. Examples of link state anomalies include the following:
• A wiring mismatch, such as a spine connected to a spine, a leaf connected to a leaf, a spine connected
to a leaf access port, a spine connected to a non-ACI node, or a leaf fabric port connected to a non-ACI
device.
• A fabric name mismatch. The fabric name is stored in each ACI node. If a node is moved to another
fabric without being reset to the factory-default state, it retains the old fabric name.
• A UUID mismatch causes the APIC to disable the node.

Note If an administrator uses the APIC to disable all the leaf nodes on a spine, a spine reboot is required to
recover access to the spine.


Provisioning
The APIC provisioning method automatically brings up the ACI fabric with the appropriate connections. The
following figure shows fabric provisioning.

Figure 21: Fabric Provisioning

After Link Layer Discovery Protocol (LLDP) discovery learns all neighboring connections dynamically, these
connections are validated against a loose specification rule such as "LEAF can connect to only SPINE-L1-*"
or "SPINE-L1-* can connect to SPINE-L2-* or LEAF." If a rule mismatch occurs, a fault occurs and the
connection is blocked. In addition, an alarm is created to indicate that the connection needs attention. The
Cisco ACI fabric administrator can import the names and serial numbers of all the fabric nodes from a text
file into the APIC or allow the fabric to discover the serial numbers automatically and then assign names to
the nodes using the APIC GUI, command-line interface (CLI), or API.
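
As a sketch of the API variant, node identities can be posted to the node identity policy. The class names below (fabricNodeIdentPol and fabricNodeIdentP) reflect the policy model, while the serial numbers, node IDs, and names are placeholders:

    <fabricNodeIdentPol>
        <!-- register each switch by serial number, assigning a node ID and name -->
        <fabricNodeIdentP serial="SAL1234ABCD" nodeId="101" name="leaf-101"/>
        <fabricNodeIdentP serial="SAL1234ABCE" nodeId="102" name="leaf-102"/>
        <fabricNodeIdentP serial="SAL1234ABCF" nodeId="201" name="spine-201"/>
    </fabricNodeIdentPol>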

Default Policies
The initial values of the APIC default policies are taken from the concrete model that is loaded in the
switch. A fabric administrator can modify default policies. A default policy serves multiple purposes:
1 Allows a fabric administrator to override the default values in the model.


2 If an administrator does not provide an explicit policy, the APIC applies the default policy. An administrator
can create a custom default policy, and the APIC uses that policy unless the administrator provides an explicit one.

Figure 22: Default Policies

For example, depending on the actions that the administrator does or does not take, the APIC will do the following:
• Because the administrator does not specify the LLDP policy for the selected ports, the APIC applies the
default LLDP interface policy for the ports specified in the port selector.
• If the administrator removes a port from a port selector, the APIC applies the default policies to that
port. In this example, if the administrator removes port 1/15 from the port selector, the port is no longer
part of the port channel and the APIC applies all the default policies to that port.

When the ACI fabric is upgraded, the existing policy default values persist, even if the default value changes
in the newer release. When the node connects to the APIC for the first time, the node registers itself with
APIC which pushes all the default policies to the node. Any change in the default policy is pushed to the node.

Fabric Policies Overview


Fabric policies govern the operation of internal fabric interfaces and enable the configuration of various
functions, protocols, and interfaces that connect spine and leaf switches. Administrators who have fabric
administrator privileges can create new fabric policies according to their requirements. The APIC enables
administrators to select the pods, switches, and interfaces to which they will apply fabric policies. The following
figure provides an overview of the fabric policy model.

Figure 23: Fabric Policies Overview


Fabric policies are grouped into the following categories:


• Switch profiles specify which switches to configure and the switch configuration policy.
• Module profiles specify which spine switch modules to configure and the spine switch configuration
policy.
• Interface profiles specify which fabric interfaces to configure and the interface configuration policy.
• Global policies specify DNS, fabric MTU default, multicast tree, and load balancer configurations to
be used throughout the fabric.
• Pod profiles specify date and time, SNMP, cooperative key server (COOP), IS-IS, and Border Gateway
Protocol (BGP) route reflector policies.
• Monitoring and troubleshooting policies specify what to monitor, thresholds, how to handle faults and
logs, and how to perform diagnostics.

Fabric Policy Configuration


Fabric policies configure interfaces that connect spine and leaf switches. Fabric policies can enable features
such as monitoring (statistics collection and statistics export), troubleshooting (on-demand diagnostics and
SPAN), IS-IS, cooperative key server (COOP), SNMP, Border Gateway Protocol (BGP) route reflectors,
DNS, or Network Time Protocol (NTP).
To apply a configuration across the fabric, an administrator associates a defined group of policies to interfaces
on switches in a single step. In this way, large numbers of interfaces across the fabric can be configured at


once; configuring one port at a time is not scalable. The following figure shows how the process works for
configuring the ACI fabric.

Figure 24: Fabric Policy Configuration Process


The following figure shows the result of applying Switch Profile 1 and Switch Profile 2 to the ACI fabric.

Figure 25: Application of a Fabric Switch Policy

This combination of infrastructure and scope enables administrators to manage fabric configuration in a
scalable fashion. These configurations can be implemented using the REST API, the CLI, or the GUI. The
Quick Start Fabric Interface Configuration wizard in the GUI automatically creates the necessary underlying
objects to implement such policies.

Access Policies Overview


Access policies configure external-facing interfaces that connect to devices such as virtual machine controllers
and hypervisors, hosts, network attached storage, routers, or Fabric Extender (FEX) interfaces. Access policies
enable the configuration of port channels and virtual port channels, protocols such as Link Layer Discovery
Protocol (LLDP), Cisco Discovery Protocol (CDP), or Link Aggregation Control Protocol (LACP), and


features such as statistics gathering, monitoring, and diagnostics. The following figure provides an overview
of the access policy model.

Figure 26: Access Policy Model Overview

Access policies are grouped into the following categories:


• Switch profiles specify which switches to configure and the switch configuration policy.
• Module profiles specify which leaf switch access cards and access modules to configure and the leaf
switch configuration policy.
• Interface profiles specify which access interfaces to configure and the interface configuration policy.
• Global policies enable the configuration of DHCP, QoS, and attachable access entity (AEP) profile
functions that can be used throughout the fabric. AEP profiles provide a template to deploy hypervisor
policies on a large set of leaf ports and to associate a Virtual Machine Manager (VMM) domain with
the physical network infrastructure. They are also required for Layer 2 and Layer 3 external network
connectivity.
• Pools specify VLAN, VXLAN, and multicast address pools. A pool is a shared resource that can be
consumed by multiple domains such as VMM and Layer 4 to Layer 7 services. A pool represents a range
of traffic encapsulation identifiers (for example, VLAN IDs, VNIDs, and multicast addresses).
• Physical and external domains policies include the following:
◦External bridged domain Layer 2 domain profiles contain the port and VLAN specifications that
a bridged Layer 2 network connected to the fabric uses.
◦External routed domain Layer 3 domain profiles contain the port and VLAN specifications that a
routed Layer 3 network connected to the fabric uses.
◦Physical domain policies contain physical infrastructure specifications, such as ports and VLAN,
used by a tenant or endpoint group.

• Monitoring and troubleshooting policies specify what to monitor, thresholds, how to handle faults and
logs, and how to perform diagnostics.

Access Policy Configuration


Access policies configure external-facing interfaces that do not connect to a spine switch. External-facing
interfaces connect to external devices such as virtual machine controllers and hypervisors, hosts, routers, or
Fabric Extenders (FEXs). Access policies enable an administrator to configure port channels and virtual port


channels, protocols such as LLDP, CDP, or LACP, and features such as monitoring or diagnostics. Sample
XML policies for switch interfaces, port channels, virtual port channels, and change interface speeds are
provided in Appendix C: Access Policy Examples.

Note While tenant network policies are configured separately from fabric access policies, tenant policies are
not activated unless the underlying access policies they depend on are in place.

To apply a configuration across a potentially large number of switches, an administrator defines switch profiles
that associate interface configurations in a single policy group. In this way, large numbers of interfaces across
the fabric can be configured at once. Switch profiles can contain symmetric configurations for multiple switches
or unique special purpose configurations. The following figure shows the process for configuring access to
the ACI fabric.

Figure 27: Access Policy Configuration Process
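
As an illustrative sketch of the object chain such a configuration creates (all names, port ranges, and node IDs are assumptions), an LLDP interface policy, a policy group referencing it, an interface profile selecting ports, and a switch profile tying the interface profile to a set of leaf nodes might be posted as:

    <infraInfra>
        <!-- interface policy: enable LLDP in both directions -->
        <lldpIfPol name="lldpOn" adminRxSt="enabled" adminTxSt="enabled"/>
        <infraFuncP>
            <!-- policy group bundling the interface policies -->
            <infraAccPortGrp name="serverPortGrp">
                <infraRsLldpIfPol tnLldpIfPolName="lldpOn"/>
            </infraAccPortGrp>
        </infraFuncP>
        <!-- interface profile: ports 1/1-1/4 use the policy group -->
        <infraAccPortP name="serverIntfProf">
            <infraHPortS name="ports1to4" type="range">
                <infraPortBlk name="blk1" fromCard="1" toCard="1" fromPort="1" toPort="4"/>
                <infraRsAccBaseGrp tDn="uni/infra/funcprof/accportgrp-serverPortGrp"/>
            </infraHPortS>
        </infraAccPortP>
        <!-- switch profile: apply the interface profile to leaf nodes 101-102 -->
        <infraNodeP name="switchProf1">
            <infraLeafS name="leafs101-102" type="range">
                <infraNodeBlk name="blk1" from_="101" to_="102"/>
            </infraLeafS>
            <infraRsAccPortP tDn="uni/infra/accportprof-serverIntfProf"/>
        </infraNodeP>
    </infraInfra>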


The following figure shows the result of applying Switch Profile 1 and Switch Profile 2 to the ACI fabric.

Figure 28: Applying an Access Switch Policy

This combination of infrastructure and scope enables administrators to manage fabric configuration in a
scalable fashion. These configurations can be implemented using the REST API, the CLI, or the GUI. The
Quick Start Interface, PC, VPC Configuration wizard in the GUI automatically creates the necessary underlying
objects to implement such policies.

Scheduler
A schedule allows operations, such as configuration import/export or tech support collection, to occur during
one or more specified windows of time.
A schedule contains a set of time windows (occurrences). These windows can be one time only or can recur
at a specified time and day each week. The options defined in the window, such as the duration or the maximum
number of tasks to be run, determine when a scheduled task executes. For example, if a change cannot be
deployed during a given maintenance window because the maximum duration or number of tasks has been
reached, that deployment is carried over to the next maintenance window.
Each schedule checks periodically to see whether the APIC has entered one or more maintenance windows.
If it has, the schedule executes the deployments that are eligible according to the constraints specified in the
maintenance policy.
A schedule contains one or more occurrences, which determine the maintenance windows associated with
that schedule. An occurrence can be one of the following:
• One Time Window—Defines a schedule that occurs only once. This window continues until the maximum
duration of the window or the maximum number of tasks that can be run in the window has been reached.


• Recurring Window—Defines a repeating schedule. This window continues until the maximum number
of tasks or the end of the day specified in the window has been reached.
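
A minimal sketch of a schedule in REST API XML, assuming the trigSchedP and trigRecurrWindowP classes; the name and the Saturday-midnight window are illustrative:

    <trigSchedP name="exampleSched">
        <!-- recurring window: every Saturday starting at 00:00 -->
        <trigRecurrWindowP name="satNight" day="Saturday" hour="0" minute="0"/>
    </trigSchedP>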

Firmware Upgrade
Policies on the APIC manage the following aspects of the firmware upgrade processes:
• What version of firmware to use.
• Downloading firmware images from Cisco to the APIC repository.
• Compatibility enforcement.
• What to upgrade:
◦Switches
◦The APIC
◦The compatibility catalog

• When the upgrade will be performed.


• How to handle failures (retry, pause, ignore, and so on).

Each firmware image includes a compatibility catalog that identifies the supported switch types and models. The
APIC maintains a catalog of the firmware images, switch types, and models that are allowed to use that
firmware image. The default setting is to reject a firmware update when it does not conform to the compatibility
catalog.
The APIC, which performs image management, has an image repository for compatibility catalogs, APIC
controller firmware images, and switch images. The administrator can download new firmware images to the
APIC image repository from an external HTTP server or SCP server by creating an image source policy.
Firmware Group policies on the APIC define what firmware version is needed.
Maintenance Group policies define when to upgrade firmware, which nodes to upgrade, and how to handle
failures. In addition, Maintenance Group policies define groups of nodes that can be upgraded together and
assign those maintenance groups to schedules. Node group options include all leaf nodes, all spine nodes, or
sets of nodes that are a portion of the fabric.
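
A hedged sketch of these policies in REST API XML, assuming the firmware and maintenance policy classes shown; the version string, node range, and schedule name (reusing the exampleSched schedule sketched in the Scheduler section) are placeholders:

    <fabricInst>
        <!-- which firmware version the group should run -->
        <firmwareFwP name="leafFwP" version="n9000-11.0(1b)"/>
        <firmwareFwGrp name="leafFwGrp">
            <firmwareRsFwgrpp tnFirmwareFwPName="leafFwP"/>
            <fabricNodeBlk name="blk1" from_="101" to_="102"/>
        </firmwareFwGrp>
        <!-- when to upgrade: the maintenance policy is tied to a schedule -->
        <maintMaintP name="leafMaintP" adminSt="untriggered">
            <maintRsPolScheduler tnTrigSchedPName="exampleSched"/>
        </maintMaintP>
        <maintMaintGrp name="leafMaintGrp">
            <maintRsMgrpp tnMaintMaintPName="leafMaintP"/>
            <fabricNodeBlk name="blk1" from_="101" to_="102"/>
        </maintMaintGrp>
    </fabricInst>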
The APIC controller firmware upgrade policy always applies to all nodes in the cluster, but the upgrade is
always done one node at a time. The APIC GUI provides real time status information about firmware upgrades.


The following figure shows the APIC cluster nodes firmware upgrade process.

Figure 29: APIC Cluster Controller Firmware Upgrade Process

The APIC applies this controller firmware upgrade policy as follows:


• The controller cluster upgrade begins at midnight on Saturday.
• The system checks for compatibility of the existing firmware to upgrade to the new version according
to the compatibility catalog provided with the new firmware image.
• The upgrade proceeds one node at a time until all nodes in the cluster are upgraded.

Note Because the APIC is a replicated cluster of nodes, disruption should be minimal. An
administrator should be aware of the system load when considering scheduling APIC
upgrades.

• The ACI fabric, including the APIC, continues to run while the upgrade proceeds.

Note The controllers upgrade in random order. Each APIC controller takes about 10 minutes
to upgrade. Once a controller image is upgraded, it drops from the cluster, and it reboots
with the newer version while the other APIC controllers in the cluster are still operational.
Once the controller reboots, it joins the cluster again. Then the cluster converges, and
the next controller image starts to upgrade. If the cluster does not immediately converge
and is not fully fit, the upgrade will wait until the cluster converges and is fully fit.
During this period, a Waiting for Cluster Convergence message is displayed.

• If a controller node upgrade fails, the upgrade pauses and waits for manual intervention.


The following figure shows how this process works for upgrading all the ACI fabric switch nodes firmware.

Figure 30: Switch Firmware Upgrade Process

The APIC applies this switch upgrade policy as follows:


• The APIC begins the upgrade at midnight on Saturday.
• The system checks for compatibility of the existing firmware to upgrade to the new version according
to the compatibility catalog provided with the new firmware image.
• The upgrade proceeds five nodes at a time until all the specified nodes are upgraded.

Note A firmware upgrade causes a switch reboot; the reboot can disrupt the operation of the
switch for several minutes.

• If a switch node fails to upgrade, the upgrade pauses and waits for manual intervention.


Geolocation
Administrators use geolocation policies to map the physical location of ACI fabric nodes in data center
facilities. The following figure shows an example of the geolocation mapping feature.

Figure 31: Geolocation

For example, for fabric deployment in a single room, an administrator would use the default room object, and
then create one or more racks to match the physical location of the switches. For a larger deployment, an
administrator can create one or more site objects. Each site can contain one or more buildings. Each building
has one or more floors. Each floor has one or more rooms, and each room has one or more racks. Finally,
each rack can be associated with one or more switches.
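
A sketch of this hierarchy in REST API XML, assuming the geo* classes (the object model also includes a row level between room and rack); all names and the node reference are illustrative:

    <geoSite name="SiteA">
        <geoBuilding name="Building1">
            <geoFloor name="Floor1">
                <geoRoom name="Room1">
                    <geoRow name="Row1">
                        <geoRack name="Rack1">
                            <!-- associate fabric node 101 with this rack -->
                            <geoRsNodeLocation tDn="topology/pod-1/node-101"/>
                        </geoRack>
                    </geoRow>
                </geoRoom>
            </geoFloor>
        </geoBuilding>
    </geoSite>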

CHAPTER 5
Networking and Management Connectivity
This chapter contains the following sections:

• Routing Within the Tenant
• WAN and Other External Networks
• DHCP Relay
• DNS
• In-Band and Out-of-Band Management Access
• Shared Services Contracts Usage

Routing Within the Tenant


The Application Centric Infrastructure (ACI) fabric provides tenant default gateway functionality and routes
between the fabric Virtual Extensible LAN (VXLAN) networks. For each tenant, the fabric provides a
virtual default gateway that spans all of the leaf switches to which the tenant connects; the gateway function
is performed at the ingress interface of the first leaf switch that is connected to the endpoint. Each ingress
interface supports the default gateway interface, and all of the ingress interfaces across the fabric share the
same router IP address and MAC address for a given tenant subnet.


Layer 3 VNIDs Used to Transport Intersubnet Tenant Traffic


In the ACI model, traffic that arrives at the fabric ingress destined to the ACI fabric default gateway is
routed into a virtual network segment known as the Layer 3 VNID. A single Layer 3 VNID is assigned for
each tenant context. The following figure shows how routing within the tenant is done.

Figure 32: Layer 3 VNIDs Transport Intersubnet Tenant Traffic

The Layer 3 VNID is allocated by the APIC. The traffic that goes across the fabric is transported using the
VNID of the Layer 3 segment. In the egress leaf switch, the packet is routed from the Layer 3 segment VNID
to the VNID of the egress subnet.
The ACI model provides much more efficient forwarding in the fabric for traffic that is routed within the
tenant. Consider traffic between two virtual machines (VMs) that belong to the same tenant and reside on
the same physical host but on different subnets: the traffic travels only to the ingress leaf switch before it is
routed (using the minimal path cost) to the correct destination. In traditional VM environments, the traffic
travels to an edge VM (possibly on a different physical server) before it is routed to the correct destination.

Configuring Route Reflectors


The ACI fabric route reflectors use multiprotocol BGP (MP-BGP) to distribute external routes within the
fabric. To enable route reflectors in the ACI fabric, the fabric administrator must select the spine switches
that will be the route reflectors, and provide the autonomous system (AS) number. Once route reflectors are
enabled in the ACI fabric, administrators can configure connectivity to external networks as described in the
following sections.
To connect external routers to the ACI fabric, the fabric infrastructure administrator configures spine nodes
as Border Gateway Protocol (BGP) route reflectors. For redundancy purposes, more than one spine is configured
as a route reflector node (one primary and one secondary reflector).
When a tenant needs to attach a WAN router to the ACI fabric, the infrastructure administrator configures
the leaf node (as described below) to which the WAN router is being connected as WAN top of rack (ToR)
and pairs this WAN ToR with one of the route reflector nodes as a BGP peer. With the route reflectors
configured as its BGP peers, the WAN ToR can advertise the tenant routes into the fabric.


Each leaf node can store up to 4000 routes. If a WAN router has to advertise more than 4000 routes, it should
peer with multiple leaf nodes. The infrastructure administrator configures each of the paired leaf nodes with
the routes (or route prefixes) that it can advertise.
The infrastructure administrator must configure an external WAN router connected to the fabric as follows:
1 Configure up to two spine nodes as route reflectors. For redundancy, configure primary and secondary
route reflectors.
2 On WAN ToRs, configure the primary and secondary route reflector nodes.
3 On WAN ToRs, configure the routes that the ToR is responsible for advertising. This is optional and needs
to be done only when the tenant router is known to advertise more than 4000 routes.
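
In the REST API, the route reflector setup in step 1 corresponds to the fabric BGP policy. The following is a minimal sketch, assuming the bgpInstPol, bgpAsP, and bgpRRP/bgpRRNodePEp classes; the AS number and spine node IDs are placeholders:

    <fabricInst>
        <bgpInstPol name="default">
            <!-- fabric autonomous system number -->
            <bgpAsP asn="65001"/>
            <bgpRRP>
                <!-- primary and secondary spine route reflectors -->
                <bgpRRNodePEp id="201"/>
                <bgpRRNodePEp id="202"/>
            </bgpRRP>
        </bgpInstPol>
    </fabricInst>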

WAN and Other External Networks


External routers that connect to the WAN and the enterprise core connect to the front panel interfaces of the
leaf switch. The leaf switch interface that connects to the external router can be configured as a bridged
interface or a routing peer.

Bridged Interface to an External Router


As shown in the figure below, when the leaf switch interface is configured as a bridged interface, the default
gateway for the tenant VNID is the external router.

Figure 33: Bridged External Router

The ACI fabric is unaware of the presence of the external router and the APIC statically assigns the leaf switch
interface to its EPG.


Router Peering and Route Distribution


As shown in the figure below, when the routing peer model is used, the leaf switch interface is statically
configured to peer with the external router’s routing protocol.

Figure 34: Router Peering

The routes that are learned through peering are sent to the spine switches. The spine switches act as route
reflectors and distribute the external routes to all of the leaf switches that have interfaces that belong to the
same tenant. These routes are longest prefix match (LPM) summarized addresses and are placed in the leaf
switch's forwarding table with the VTEP IP address of the remote leaf switch where the external router is
connected. WAN routes have no forwarding proxy. If the WAN routes do not fit in the leaf switch's forwarding
table, the traffic is dropped. Because the external router is not the default gateway, packets from the tenant
endpoints (EPs) are sent to the default gateway in the ACI fabric.

Attach Entity Profile


The ACI fabric provides multiple attachment points that connect through leaf ports to various external entities
such as baremetal servers, hypervisors, Layer 2 switches (for example, the Cisco UCS fabric interconnect),


or Layer 3 routers (for example Cisco Nexus 7000 Series switches). These attachment points can be physical
ports, port channels, or a virtual port channel (vPC) on leaf switches, as shown in the figure below.

Figure 35: Attachable Entity Profile

An Attachable Entity Profile (AEP) represents a group of external entities with similar infrastructure policy
requirements. The infrastructure policies consist of physical interface policies, such as Cisco Discovery
Protocol (CDP), Link Layer Discovery Protocol (LLDP), Maximum Transmission Unit (MTU), or Link
Aggregation Control Protocol (LACP).
An AEP is required to deploy VLAN pools on leaf switches. Encapsulation pools (and the associated VLANs)
are reusable across leaf switches. An AEP implicitly provides the scope of the VLAN pool to the physical
infrastructure.

Note The following AEP requirements and dependencies must be accounted for in various configuration
scenarios:
• While an AEP provisions a VLAN pool (and associated VLANs) on a leaf switch, endpoint groups
(EPGs) enable VLANs on the port(s). No traffic flows unless an EPG is deployed on the port.
• Without AEP VLAN pool deployment, a VLAN is not enabled on the leaf port even if an EPG is
provisioned.
• A particular VLAN is provisioned or enabled on the leaf port that is based on EPG events either
statically binding on a leaf port or based on VM events from external controllers such as VMware
vCenter.
• A leaf switch does not support overlapping VLAN pools. Different overlapping VLAN pools must
not be associated with the same AEP.


Bridged and Routed Connectivity to External Networks


Outside network managed objects enable Layer 2 and Layer 3 tenant connectivity to external networks. The
GUI, CLI, or REST API can be used to configure tenant connectivity to external networks. APPENDIX D:
Tenant Layer 3 External Network Policy Example contains a sample XML policy. To easily locate all such
external network access points in the fabric, Layer 2 and Layer 3 external leaf nodes can be tagged as "Border
Leaf Nodes."
Tenant routed connectivity to external networks is enabled by associating a fabric access (infraInfra) external
routed domain (l3extDomP) with a tenant Layer 3 external instance profile (l3extInstP) EPG of a Layer 3
external outside network (l3extOut) as shown in the figure below.

Figure 36: Tenant Routed Connectivity to External Networks

The l3extOut includes the routing protocol options (BGP, OSPF, or both) and the switch-specific configuration
and interface-specific configuration.

Note While the Layer 3 external outside network contains the routing protocol (for example, OSPF with its
related context and area ID), the Layer 3 external interface profile contains the necessary OSPF interface
configuration details. Both are needed to enable OSPF.

The l3extInstP EPG exposes the external network to tenant EPGs through a contract. For example, a tenant
EPG that contains a group of web servers could communicate through a contract with the l3extInstP EPG
according to the network configuration contained in the Layer 3 external outside network. Only one outside
network can be configured per leaf switch (node). However, the outside network configuration can easily be
reused for multiple nodes by associating multiple nodes with the L3 external node profile. Multiple nodes
that use the same profile can be configured for fail-over or load balancing.
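The following sketch illustrates the key l3extOut elements described above, assuming a tenant named solar with a context ctx1 (all names, the OSPF area, and the subnet are hypothetical; see the appendix for a complete policy):

<fvTenant name="solar">
    <l3extOut name="wan-out">
        <l3extRsEctx tnFvCtxName="ctx1"/>
        <ospfExtP areaId="0.0.0.1"/>
        <l3extLNodeP name="border-nodes">
            <l3extLIfP name="border-interfaces"/>
        </l3extLNodeP>
        <l3extInstP name="wan-epg">
            <l3extSubnet ip="0.0.0.0/0"/>
            <fvRsCons tnVzBrCPName="web-contract"/>
        </l3extInstP>
    </l3extOut>
</fvTenant>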
A similar process is used for configuring tenant bridged connectivity to external networks. Tenant Layer 2
bridged external network connectivity is enabled by associating a fabric access (infraInfra) external bridged
domain (l2extDomP) with the Layer 2 external instance profile (l2extInstP) EPG of a Layer 2 external outside
network (l2extOut) as shown in the figure below.

Figure 37: Tenant Bridged Connectivity to External Networks

The l2extOut includes the switch-specific configuration and interface-specific configuration. The l2extInstP
EPG exposes the external network to tenant EPGs through a contract. For example, a tenant EPG that contains
a group of network-attached storage devices could communicate through a contract with the l2extInstP EPG
according to the network configuration contained in the Layer 2 external outside network. Only one outside
network can be configured per leaf switch. However, the outside network configuration can easily be reused
for multiple nodes by associating multiple nodes with the L2 external node profile. Multiple nodes that use
the same profile can be configured for fail-over or load balancing.

DHCP Relay
While ACI fabric-wide flooding is disabled by default, flooding within a bridge domain is enabled by default.
Because flooding within a bridge domain is enabled by default, clients can connect to DHCP servers within
the same EPG. However, when the DHCP server is in a different EPG or context than the clients, DHCP Relay
is required. Also, when Layer 2 flooding is disabled, DHCP Relay is required. The figure below shows the
managed objects in the management information tree (MIT) that can contain DHCP relays: user tenants, the
common tenant, the infrastructure tenant, the management tenant, and fabric access.

Figure 38: DHCP Relay Locations in the MIT

The figure below shows the logical relationships of the DHCP relay objects within a user tenant.

Figure 39: Tenant DHCP Relay

The DHCP Relay profile contains one or more providers. An EPG contains one or more DHCP servers, and
the relation between the EPG and DHCP Relay specifies the DHCP server IP address. The consumer bridge
domain contains a DHCP label that associates the provider DHCP server with the bridge domain. Label
matching enables the bridge domain to consume the DHCP Relay.
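A minimal XML sketch of these relationships, assuming hypothetical tenant, relay, EPG, and bridge domain names and a hypothetical server address:

<fvTenant name="solar">
    <dhcpRelayP name="relay-1" owner="tenant">
        <dhcpRsProv tDn="uni/tn-solar/ap-app1/epg-dhcp-servers" addr="10.0.0.1"/>
    </dhcpRelayP>
    <fvBD name="bd-clients">
        <dhcpLbl name="relay-1" owner="tenant"/>
    </fvBD>
</fvTenant>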


Note The bridge domain DHCP label must match the DHCP Relay name.

The DHCP label object also specifies the owner. The owner can be a tenant or the access infrastructure. If the
owner is a tenant, the ACI fabric first looks within the tenant for a matching DHCP Relay. If there is no match
within a user tenant, the ACI fabric then looks in the common tenant.
DHCP Relay operates in one of the following two modes:
• Visible—the provider's IP and subnet are leaked into the consumer's context. When the DHCP Relay is
visible, it is exclusive to the consumer's context.
• Not Visible—the provider's IP and subnet are not leaked into the consumer's context.

Note When the DHCP Relay operates in the not visible mode, the bridge domain of the
provider must be on the same leaf switch as the consumer.

While the tenant and access DHCP Relays are configured in a similar way, the following use cases vary
accordingly:
• Common Tenant DHCP Relays can be used by any tenant.
• Infrastructure Tenant DHCP Relays are exposed selectively by the ACI fabric service provider to other
tenants.
• Fabric Access (infraInfra) DHCP Relays can be used by any tenant and allow more granular
configuration of the DHCP servers. In this case, it is possible to provision separate DHCP servers within
the same bridge domain for each leaf switch in the node profile.


DNS
The ACI fabric DNS service is contained in the fabric managed object. The fabric global default DNS profile
can be accessed throughout the fabric. The figure below shows the logical relationships of the DNS-managed
objects within the fabric. See Appendix F DNS for sample DNS XML policies.

Figure 40: DNS

A context must contain a dnsLBL object in order to use the global default DNS service. Label matching enables
tenant contexts to consume the global DNS provider. Because the name of the global DNS profile is “default,”
the context label name is "default" (dnsLBL name = default).
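A minimal XML sketch, assuming a hypothetical DNS server address and search domain; the profile and label names are both "default" as described above:

<dnsProfile name="default">
    <dnsProv addr="10.0.0.10" preferred="yes"/>
    <dnsDomain name="example.com" isDefault="yes"/>
</dnsProfile>

<fvCtx name="ctx1">
    <dnsLbl name="default"/>
</fvCtx>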

In-Band and Out-of-Band Management Access


The mgmt tenant provides a convenient means to configure access to fabric management functions. While
fabric management functions are accessible through the APIC, they can also be accessed directly through
in-band and out-of-band network policies.


In-Band Management Access


The following figure shows an overview of the mgmt tenant in-band fabric management access policy.

Figure 41: In-Band Management Access Policy

The management profile includes the in-band EPG MO that provides access to management functions via the
in-band contract (vzBrCP). The vzBrCP enables fvAEPg, l2extInstP, and l3extInstP EPGs to consume the
in-band EPG. This exposes fabric management to locally connected devices, as well as to devices connected
over Layer 2 bridged external networks and Layer 3 routed external networks. If the consumer and provider
EPGs are in different tenants, they can use a bridge domain and context from the common tenant.
Authentication, access, and audit logging apply to these connections; any user attempting to access management
functions through the in-band EPG must have the appropriate access privileges.
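A minimal XML sketch of the provider side of this policy, assuming a hypothetical VLAN encapsulation and contract name:

<fvTenant name="mgmt">
    <mgmtMgmtP name="default">
        <mgmtInB name="inb-default" encap="vlan-10">
            <fvRsProv tnVzBrCPName="inband-mgmt"/>
        </mgmtInB>
    </mgmtMgmtP>
</fvTenant>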


The figure below shows an in-band management access scenario.

Figure 42: In-Band Management Access Scenario


Out-of-Band Management Access


The following figure shows an overview of the mgmt tenant out-of-band fabric management access policy.

Figure 43: Out-of-Band Management Access Policy

The management profile includes the out-of-band EPG MO that provides access to management functions
via the out-of-band contract (vzOOBBrCp). The vzOOBBrCp enables the external management instance profile
(mgmtExtInstP) EPG to consume the out-of-band EPG. This exposes fabric node supervisor ports
to locally or remotely connected devices, according to the preference of the service provider. While the
bandwidth of the supervisor ports is lower than that of the in-band ports, the supervisor ports can provide
direct access to the fabric nodes when access through the in-band ports is unavailable. Authentication, access,
and audit logging apply to these connections; any user attempting to access management functions through the
out-of-band EPG must have the appropriate access privileges.
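A minimal XML sketch of the out-of-band EPG and the external management instance profile that consumes it (the contract name and the permitted subnet are hypothetical):

<fvTenant name="mgmt">
    <mgmtMgmtP name="default">
        <mgmtOoB name="oob-default">
            <mgmtRsOoBProv tnVzOOBBrCPName="oob-mgmt"/>
        </mgmtOoB>
    </mgmtMgmtP>
    <mgmtExtMgmtEntity name="default">
        <mgmtInstP name="ext-mgmt">
            <mgmtSubnet ip="192.0.2.0/24"/>
            <mgmtRsOoBCons tnVzOOBBrCPName="oob-mgmt"/>
        </mgmtInstP>
    </mgmtExtMgmtEntity>
</fvTenant>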


The figure below shows how out-of-band management access can be consolidated through a dedicated switch.

Figure 44: Out-of-Band Access Scenario

Note While some service providers choose to restrict out-of-band connectivity to local connections, others can
choose to enable routed or bridged connections from external networks. Also, a service provider can
choose to configure a set of policies that include both in-band and out-of-band management access for
local devices only, or for both local and remote devices.

Shared Services Contracts Usage


Follow these guidelines when configuring shared services contracts.
• Contracts between in-band and out-of-band endpoint groups (EPGs)—When a contract is configured
between in-band and out-of-band EPGs, the following restrictions apply:
◦Both EPGs should be in the same context (VRF).
◦The filters apply in the incoming direction only.
◦Layer 2 filters are not supported.


◦QoS does not apply to in-band Layer 4 to Layer 7 services.


◦Management statistics are not available.
◦Shared services for CPU-bound traffic are not supported.

• Contracts are needed for inter-bridge domain traffic when a private network is unenforced.
• Prefix-based EPGs are not supported.
• Shared services are not supported for a Layer 3 external outside network. Contracts provided or consumed
by a Layer 3 external outside network must be consumed or provided by EPGs that share the
same Layer 3 context.
• A shared service is supported only with non-overlapping and non-duplicate subnets. Follow these
guidelines:
◦Configure the subnet for a shared service provider under the EPG, not under the bridge domain,
as shown in the sketch following this list.
◦Subnets configured under an EPG that share the same context must be disjoint and must not
overlap.
◦Subnets leaked from one context to another must be disjoint and must not overlap.
◦Subnets leaked from multiple consumer networks into a context or vice versa must be disjoint
and must not overlap.

If two consumers are mistakenly configured with the same subnet, recover from this condition by
removing the subnet configuration for both, then reconfigure the subnets correctly.
• Do not configure a shared service with AnyToProv in the provider context. The APIC rejects it internally
with a fault.
• The private network of a provider cannot be in unenforced mode while providing a shared service.
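For the subnet guideline above, a minimal sketch of a shared service provider EPG with its subnet configured under the EPG rather than the bridge domain (the EPG, subnet, and contract names are hypothetical):

<fvAEPg name="db-provider">
    <fvSubnet ip="10.1.1.1/24" scope="shared"/>
    <fvRsProv tnVzBrCPName="db-contract"/>
</fvAEPg>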

CHAPTER 6
User Access, Authentication, and Accounting
This chapter contains the following sections:

• User Access, Authentication, and Accounting
• Multiple Tenant Support
• User Access: Roles, Privileges, and Security Domains
• APIC Local Users
• Externally Managed Authentication Server Users
• User IDs in the APIC Bash Shell
• Login Domains

User Access, Authentication, and Accounting


APIC policies manage the access, authentication, and accounting (AAA) functions of the Cisco ACI fabric.
The combination of user privileges, roles, and domains with access rights inheritance enables administrators
to configure AAA functions at the managed object level in a very granular fashion. These configurations can
be implemented using the REST API, the CLI, or the GUI.

Multiple Tenant Support


A core APIC internal data access control system provides multitenant isolation and prevents information
privacy from being compromised across tenants. Read/write restrictions prevent any tenant from seeing any
other tenant’s configuration, statistics, faults, or event data. Unless the administrator assigns permissions to
do so, tenants are restricted from reading fabric configuration, policies, statistics, faults, or events.

User Access: Roles, Privileges, and Security Domains


The APIC provides access according to a user’s role through role-based access control (RBAC). An ACI
fabric user is associated with the following:


• A set of roles
• For each role, a privilege type: no access, read-only, or read-write
• One or more security domain tags that identify the portions of the management information tree (MIT)
that a user can access

The ACI fabric manages access privileges at the managed object (MO) level. A privilege is an MO that enables
or restricts access to a particular function within the system. For example, fabric-equipment is a privilege bit.
This bit is set by the APIC on all objects that correspond to equipment in the physical fabric.
A role is a collection of privilege bits. For example, because an “admin” role is configured with privilege bits
for “fabric-equipment” and “tenant-security,” the “admin” role has access to all objects that correspond to
equipment of the fabric and tenant security.
A security domain is a tag associated with a certain subtree in the ACI MIT object hierarchy. For example,
the default tenant “common” has a domain tag “common.” Similarly, a special domain tag “all” includes the
entire MIT object tree. An admin user can assign custom domain tags to the MIT object hierarchy. For example,
a “solar” domain tag is assigned to the tenant solar. Within the MIT, only certain objects can be tagged as
security domains. For example, a tenant can be tagged as a security domain but objects within a tenant cannot.

APIC Local Users


An administrator can choose not to use external AAA servers but rather configure users on the APIC itself.
These users are called APIC-local users. The APIC also enables administrators to grant access to users
configured on externally managed authentication Lightweight Directory Access Protocol (LDAP), RADIUS,
or TACACS+ servers. Users can belong to different authentication systems and can log in simultaneously to
the APIC.


The following figure shows how the process works for configuring an admin user in the local APIC
authentication database who has full access to the entire ACI fabric.

Figure 45: APIC Local User Configuration Process

Note The security domain “all” represents the entire Managed Information Tree (MIT). This domain includes
all policies in the system and all nodes managed by the APIC. Tenant domains contain all the users and
managed objects of a tenant. Tenant administrators should not be granted access to the “all” domain.


The following figure shows the access that the admin user Joe Stratus has to the system.

Figure 46: Result of Configuring Admin User for "all" Domain

The user Joe Stratus, with read-write "admin" privileges, is assigned to the domain "all," which gives him full
access to the entire system.


Externally Managed Authentication Server Users


The following figure shows how the process works for configuring an admin user in an external RADIUS
server who has full access to the tenant Solar.

Figure 47: Process for Configuring Users on External Authentication Servers


The following figure shows the access the admin user Jane Cirrus has to the system.

Figure 48: Result of Configuring Admin User for Tenant Solar

In this example, the Solar tenant administrator has full access to all the objects contained in the Solar tenant
as well as read-only access to the tenant Common. Tenant admin Jane Cirrus has full access to the tenant
Solar, including the ability to create new users in tenant Solar. Tenant users are able to modify configuration
parameters of the ACI fabric that they own and control. They also are able to read statistics and monitor faults
and events for the entities (managed objects) that apply to them such as endpoints, endpoint groups (EPGs)
and application profiles.
In the example above, the user Jane Cirrus was configured on an external RADIUS authentication server. To
configure an AV Pair on an external authentication server, add a Cisco AV Pair to the existing user record.
The Cisco AV Pair specifies the Role-Based Access Control (RBAC) roles and privileges for the user on the
APIC. The RADIUS server then propagates the user privileges to the APIC controller.
In the example above, the configuration for an open source RADIUS server (/etc/raddb/users) is as follows:
janecirrus Cleartext-Password := "<password>"
Cisco-avpair = "shell:domains = solar/admin/,common//read-all(16001)"
This example includes the following elements:
• janecirrus is the tenant administrator
• solar is the tenant
• admin is the role with write privileges
• common is the tenant-common subtree that all users should have read-only access to
• read-all is the role with read privileges


Cisco AV Pair Format


The Cisco APIC requires that an administrator configure a Cisco AV Pair on an external authentication server.
To do so, an administrator adds a Cisco AV pair to the existing user record. The Cisco AV pair specifies the
APIC required RBAC roles and privileges for the user. The Cisco AV Pair format is the same for RADIUS,
LDAP, or TACACS+.
The Cisco AV pair format is as follows:
shell:domains =
domainA/writeRole1|writeRole2|writeRole3/readRole1|readRole2,domainB/writeRole1|writeRole2|writeRole3/readRole1|readRole2
shell:domains =
domainA/writeRole1|writeRole2|writeRole3/readRole1|readRole2,domainB/writeRole1|writeRole2|writeRole3/readRole1|readRole2(16003)
The first av-pair format has no UNIX user ID, while the second one does. Both are correct.
The APIC supports the following regexes:
shell:domains\\s*[=:]\\s*((\\S+?/\\S*?/\\S*?)(,\\S+?/\\S*?/\\S*?){0,31})(\\(\\d+\\))$
shell:domains\\s*[=:]\\s*((\\S+?/\\S*?/\\S*?)(,\\S+?/\\S*?/\\S*?){0,31})$

RADIUS
To configure users on RADIUS servers, the APIC administrator must configure the required attributes
(shell:roles and shell:domains) using the cisco-av-pair attribute.
If the role option is not specified in the cisco-av-pair attribute, the default user role is network-operator.
SNMPv3 authentication and privacy protocol attributes can be specified as follows:
shell:roles="roleA roleB..." snmpv3:auth=SHA priv=AES-128
The SNMPv3 authentication protocol options are SHA and MD5. The privacy protocol options are AES-128
and DES. If these options are not specified in the cisco-av-pair attribute, MD5 and DES are the default
authentication protocols. Similarly, the list of domains would be as follows:
shell:domains="domainA domainB …"

TACACS+ Authentication
Terminal Access Controller Access-Control System Plus (TACACS+) is another remote AAA protocol that
is supported by Cisco devices. TACACS+ has the following advantages over RADIUS authentication:
• Provides independent AAA facilities. For example, the APIC can authorize access without authenticating.
• Uses TCP to send data between the AAA client and server, making reliable transfers with a
connection-oriented protocol.
• Encrypts the entire protocol payload between the switch and the AAA server to ensure higher data
confidentiality. RADIUS encrypts passwords only.
• Uses av-pairs that are syntactically and configurationally different from those of RADIUS, but the APIC
supports the same list of strings (shell:roles and shell:domains).


LDAP/Active Directory Authentication


Similar to RADIUS and TACACS+, LDAP allows a network element to retrieve AAA credentials that can
be used to authenticate and then authorize the user to perform certain actions. An added certificate authority
configuration can be performed by an administrator to enable LDAPS (LDAP over SSL) trust and prevent
man-in-the-middle attacks.
The only difference between RADIUS/TACACS+ and LDAP is that LDAP groups can be used to map to
shell:roles in the APIC. The AAA LDAP client searches the LDAP provider groups that are mapped into
the local node. If the remote user is found, AAA assigns the user roles and locales that are defined for that
LDAP group in the associated LDAP group map. This feature is optional. Active Directory has a feature called
recursive traversal, which allows a user to get all the ancestors of a user's group and apply the roles. This
feature is possible because Active Directory supports nested groups.

User IDs in the APIC Bash Shell


User IDs on the APIC for the Linux shell are generated within the APIC for local users. For users whose
authentication credentials are managed on external servers, the user ID for the Linux shell can be specified in
the cisco-av-pair. Omitting the (16001) in the above cisco-av-pair is legal, in which case the remote user gets
a default Linux user ID of 23999. Linux user IDs are used during bash sessions, allowing standard Linux
permissions enforcement. Also, all managed objects created by a user are marked as created by that user's
Linux user ID.
The following is an example of a user ID as seen in the APIC Bash shell:
admin@ifav17-ifc1:~> touch myfile
admin@ifav17-ifc1:~> ls -l myfile
-rw-rw-r-- 1 admin admin 0 Apr 13 21:43 myfile
admin@ifav17-ifc1:~> ls -ln myfile
-rw-rw-r-- 1 15374 15374 0 Apr 13 21:43 myfile
admin@ifav17-ifc1:~> id
uid=15374(admin) gid=15374(admin) groups=15374(admin)

Login Domains
A login domain defines the authentication domain for a user. Login domains can be set to the Local, LDAP,
RADIUS, or TACACS+ authentication mechanisms. When accessing the system from REST, the CLI, or the
GUI, the APIC enables the user to select the correct authentication domain.
For example, in the REST scenario, the username is prefixed with a string so that the full login username
looks as follows:
apic:<domain>\<username>
If accessing the system from the GUI, the APIC offers a drop-down list of domains for the user to select. If
no apic: domain is specified, the default authentication domain servers are used to look up the username.
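For example, a REST login sketch using a hypothetical RADIUS login domain and username:

POST http(s)://host/api/aaaLogin.xml

<aaaUser name="apic:RADIUS\jane" pwd="<password>"/>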

CHAPTER 7
Virtual Machine Manager Domains
This chapter contains the following sections:

• Virtual Machine Manager Domains
• VMM Policy Model
• vCenter Domain Configuration Workflow
• vCenter and vShield Domain Configuration Workflow
• Creating Application EPGs Policy Resolution and Deployment Immediacy

Virtual Machine Manager Domains


The APIC is a single pane of glass that automates all of the networking for virtual and physical workloads,
including access policies and Layer 4 to Layer 7 services. In the case of VMware vCenter, all the networking
functionality of the Virtual Distributed Switch (VDS) and port groups is performed using the APIC. The
only function that a vCenter administrator needs to perform on the vCenter is to place the vNICs into the
appropriate port groups that were created by the APIC.
VM controller—Represents an external virtual machine management system such as the VMware vCenter,
the VMware vShield, and the Microsoft System Center Virtual Machine Manager (SCVMM).
Virtual Machine Manager (VMM) domain—Groups VM controllers with similar networking policy
requirements. For example, the VM controllers can share VLAN or Virtual Extensible Local Area Network
(VXLAN) space and application endpoint groups (EPGs). The APIC communicates with the controller to
publish network configurations such as port groups that are then applied to the virtual workloads.

Note A single VMM domain can contain multiple instances of VM controllers, but they must be from the same
vendor (for example, from VMware or from Microsoft).

Provisioning of EPGs in VMM Domain—Associates application profile EPGs to VMM domains as follows:
• The APIC pushes these EPGs as port groups in the VM controller. The compute administrator then
places vNICs into these port groups.
• An EPG can span multiple VMM domains, and a VMM domain can contain multiple EPGs.


EPG scalability in the fabric—EPGs can use multiple VMM domains to do the following:
• An EPG within a VMM domain is identified by using an encapsulation identifier that is automatically
managed by the APIC. An example is a VLAN, a Virtual Network ID (VNID for VXLAN), or a Virtual
Subnet Identifier (VSID for NVGRE).
• An EPG can be mapped to multiple physical (for baremetal servers) or virtual domains. It can use
different VLAN, VNID, or VSID encapsulations in each domain.
• The ingress leaf switch normalizes and translates the encapsulation (VLAN/VNID/VSID) from the
packet into a fabric local VXLAN VNID (segment ID), which makes the EPG encapsulation local to a
leaf switch.
• It is possible to reuse the encapsulation IDs across different leaf switches. For example, VLAN-based
encapsulation restricts the number of EPGs within a VMM domain to 4096. It is possible to scale EPGs
by creating multiple VMM domains, and associate the same EPG across multiple VMM domains.

Note Multiple VMM domains can connect to the same leaf switch if they do not have overlapping VLAN pools.
See the following figure. Similarly, the same VLAN pools can be used across different domains if they
do not use the same leaf switch.

Figure 49: Multiple VMM Domains and Scaling EPGs in the Fabric

Attach Entity Profiles


The ACI fabric provides multiple attachment points that connect through leaf ports to various external
entities such as baremetal servers, hypervisors, Layer 2 switches (for example, the Cisco UCS fabric
interconnect), and Layer 3 routers (for example, Cisco Nexus 7000 Series switches). These attachment points
can be physical ports, port channels, or a virtual port channel (vPC) on the leaf switches.
An attachable entity profile (AEP) represents a group of external entities with similar infrastructure policy
requirements. The infrastructure policies consist of physical interface policies, for example, Cisco Discovery
Protocol (CDP), Link Layer Discovery Protocol (LLDP), maximum transmission unit (MTU), and Link
Aggregation Control Protocol (LACP).
A VM Management (VMM) domain automatically derives the physical interfaces policies from the interface
policy groups that are associated with an AEP.
• An override policy at AEP can be used to specify a different physical interface policy for a VMM domain.
This policy is useful in scenarios where a hypervisor is connected to the leaf switch through an
intermediate Layer 2 node, and a different policy is desired at the leaf switch and hypervisor physical
ports. For example, you can configure LACP between a leaf switch and a Layer 2 node. At the same
time, you can disable LACP between the hypervisor and the Layer 2 switch by disabling LACP under
the AEP override policy.

An AEP is required to deploy any VLAN pools on the leaf switches. It is possible to reuse the encapsulation
pools (for example, VLAN) across different leaf switches. An AEP implicitly provides the scope of the VLAN
pool (associated to the VMM domain) to the physical infrastructure.

Note • An AEP provisions the VLAN pool (and associated VLANs) on the leaf. The VLANs are not actually
enabled on the port. No traffic flows unless an EPG is deployed on the port.
• Without VLAN pool deployment using an AEP, a VLAN is not enabled on the leaf port even if an
EPG is provisioned.
◦A particular VLAN is provisioned or enabled on the leaf port based on EPG events either
statically binding on a leaf port or based on VM events from external controllers such as
VMware vCenter.

• A leaf switch does not support overlapping VLAN pools. Different overlapping VLAN pools must
not be associated with the same AEP.

Pools
A pool represents a range of traffic encapsulation identifiers (for example, VLAN IDs, VNIDs, and multicast
addresses). A pool is a shared resource and can be consumed by multiple domains such as VMM and Layer
4 to Layer 7 services. A leaf switch does not support overlapping VLAN pools. You must not associate
different overlapping VLAN pools with the same attachable entity profile (AEP). The two types of VLAN-based
pools are as follows:
• Dynamic pools—Managed internally by the APIC to allocate VLANs for endpoint groups (EPGs). A
vCenter Domain can associate only to a dynamic pool.
• Static pools— One or more EPGs are associated with a domain, and that domain is associated with a
static range of VLANs. You must configure statically deployed EPGs within that range of VLANs.
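A minimal XML sketch of a dynamic VLAN pool (the pool name and VLAN range are hypothetical):

<fvnsVlanInstP name="vmm-pool" allocMode="dynamic">
    <fvnsEncapBlk from="vlan-100" to="vlan-199"/>
</fvnsVlanInstP>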


VMM Policy Model


ACI fabric VM networking enables an administrator to configure connectivity policies for virtual machine
controllers. The following figure shows the objects of the VM networking policy model and their relation to
other objects in the VM domain profile.

Figure 50: VMM Policy Model

VM domain profiles contain the following MOs:


• Credential—Associates users with a VM domain.


• Controller—Specify how to connect to a VMM controller that is part of a containing policy enforcement
domain. For example, the controller specifies the connection to a VMware vCenter that is part a VM
domain.
• Application EPG—An application endpoint group is a policy that regulates connectivity and visibility
among the end points within the scope of the policy.
• Attachable Entity Profile—Provides a template to deploy hypervisor policies on a large set of leaf ports
and also provides the association of a VM domain and the physical network infrastructure. The attachable
entity profile contains the following:
◦Policy groups that specify the interface policies to use.
◦Host port selectors that specify the ports to configure and how to configure those ports.
◦Port blocks that specify a range of interfaces.
◦Interface profiles that specify the interface configuration.
◦Node profiles that specify node configurations.
◦Leaf selectors that specify which leaf nodes will be configured.
◦Node blocks that specify a range of nodes.

• VLAN Pool—A VLAN pool specifies the VLAN IDs used for VLAN encapsulation that the VMM domain
will consume.
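A sketch tying these MOs together for a VMware domain (the domain name, credentials, vCenter address, datacenter name, and VLAN pool are hypothetical):

<vmmProvP vendor="VMware">
    <vmmDomP name="mininet">
        <infraRsVlanNs tDn="uni/infra/vlanns-[vmm-pool]-dynamic"/>
        <vmmUsrAccP name="admin-acc" usr="administrator" pwd="<password>"/>
        <vmmCtrlrP name="vcenter1" hostOrIp="192.0.2.50" rootContName="Datacenter1">
            <vmmRsAcc tDn="uni/vmmp-VMware/dom-mininet/usracc-admin-acc"/>
        </vmmCtrlrP>
    </vmmDomP>
</vmmProvP>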

vCenter Domain Configuration Workflow


1 The APIC administrator configures the vCenter domain policies in the APIC. See the following figure.
The APIC administrator provides the following vCenter connectivity information:
• vCenter IP address, vCenter credentials, VMM domain policies, and VMM domain SPAN
• Policies (VLAN pools, domain type such as VMware VDS, Cisco Nexus 1000V switch)
• Connectivity to physical leaf interfaces (using attach entity profiles)

Figure 51: The APIC Administrator Configures the vCenter Domain Policies

The APIC automatically connects to the vCenter and creates a VDS under the vCenter. See the following
figure.


Note The VDS name is a concatenation of the VMM domain name and the data center name.

Figure 52: Creating a VDS Under the vCenter

2 The APIC administrator creates and associates application EPGs to the VMM domains.
• The APIC automatically creates port groups in the VMware vCenter under the VDS.
• This process provisions the network policy in the VMware vCenter.

See the following figure.


Note • The port group name is a concatenation of the tenant name, the application profile name, and the
EPG name.
• The port group is created under the VDS, which was created earlier by the APIC.

Figure 53: Associating the Application EPGs to the VMM Domain

3 The vCenter administrator or the compute management tool adds the ESX host or hypervisor to the APIC
VDS and assigns the ESX host hypervisor ports as uplinks on the APIC VDS. These uplinks must connect
to the ACI leaf switches.


• The APIC learns the hypervisor host-to-leaf connectivity using the LLDP or CDP information of the
hypervisors, as shown in the following figure.

Figure 54: Using the Management Tool to Attach the Hypervisors to the VDS

4 The vCenter administrator or the compute management tool instantiates and assigns VMs to the port
groups.
• The APIC learns about the VM placements based on the vCenter events.


• The APIC automatically pushes the application EPG and its associated policy (for example, contracts
and filters) to the ACI fabric. See the following figure.

Figure 55: Pushing the Policy to the ACI Fabric

vCenter and vShield Domain Configuration Workflow


This workflow shows how the APIC integrates with the vShield Manager to use the hypervisor VXLAN
functionality provided by VMware.

Note The APIC controls and automates the entire VXLAN preparation and deployment on the vShield Manager
so that users are not required to perform any actions on the vShield Manager.

The following prerequisites must be met before configuration begins:


• The vCenter Server IP address must be configured in the vShield Manager.
• The fabric infrastructure VLAN must be extended to the hypervisor ports. The fabric infrastructure
VLAN is used as the outer VLAN in the Ethernet header of the VXLAN data packet. The APIC
automatically pushes the fabric infrastructure VLAN to the vShield Manager when preparing the APIC
VDS for the VXLAN.
• To allow the data path to work, the fabric infrastructure VLAN must be extended to the hypervisor ports.


◦On the tenant-facing ports of the leaf switches, the infrastructure VLAN can be provisioned by
creating an attach entity profile on the APIC. (For information about creating attach entity profiles,
see the APIC Getting Started Guide.)
◦If any intermediate Layer 2 switches are between the hypervisor and a leaf switch, the network
administrator must manually provision the infrastructure VLAN on the intermediate Layer 2 nodes.

1 The APIC administrator configures the vCenter and vShield domain policies in the APIC.

Note • The APIC administrator must provide the association between vShield Manager and the vCenter
Server on the APIC.
• The APIC administrator must provide the segment ID and multicast address pool that is required for
the VXLAN. The segment ID pool in the vShield Manager must not overlap with pools in other
vShield Managers that are configured on the APIC.

a The APIC connects to vCenter and creates the VDS. See the following figure.

Figure 56: Connecting to vCenter and Creating the VDS


b The APIC connects to the vShield Manager, pushes the segment ID and multicast address pool, and
prepares the VDS for VXLAN. See the following figure.

Figure 57: Connecting to the vShield Manager and Preparing the VDS for a VXLAN

2 The APIC administrator creates application profiles and EPGs, and associates them to VMM domains.
See the following figure.
• The APIC automatically creates virtual wires in the vShield Manager under the VDS.
• The APIC reads the segment ID and the multicast address from the VXLAN virtual wire sent from
the vShield Manager.
• The vShield Manager pushes the virtual wires as port groups in the vCenter Server under the VDS.


Note The virtual wire name is a concatenation of the tenant name, the application profile name, and the EPG
name.

Figure 58: Creating Application Profiles and EPGs

3 The vCenter administrator or the compute management tool attaches the hypervisors to the VDS. See the
following figure.


• The APIC learns the hypervisor host-to-leaf connectivity using the LLDP or CDP information from the
hypervisors.

Figure 59: Attaching the Hypervisors to the VDS

4 The vCenter administrator or compute management tool instantiates and assigns VMs to the port groups.


The APIC automatically pushes the policy to the ACI fabric. See the following figure.

Figure 60: Pushing the Policy to the ACI Fabric

Creating Application EPGs Policy Resolution and Deployment Immediacy

Whenever an EPG associates to a VMM domain, the administrator can choose the resolution and deployment
preferences to specify when a policy should be pushed.

Resolution Immediacy
• Immediate—Specifies that EPG policies (including contracts and filters) are downloaded to the associated
leaf switch software upon hypervisor attachment to the VDS. LLDP or OpFlex is used to resolve the
hypervisor-to-leaf node attachments.
• On Demand—Specifies that a policy (for example, VLAN, VXLAN bindings, contracts, or filters) is
pushed to the leaf node only when a pNIC attaches to the hypervisor connector and a VM is placed in
the port group (EPG).

Deployment Immediacy
Once the policies are downloaded to the leaf software, instrumentation immediacy can specify when the policy
is pushed into the hardware policy CAM.


• Immediate—Specifies that the policy is programmed in the hardware policy CAM as soon as the policy
is downloaded in the leaf software.
• On Demand—Specifies that the policy is programmed in the hardware policy CAM only when the first
packet is received through the data path. This process helps to optimize the hardware space.
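A sketch of an EPG-to-VMM domain association that expresses these choices, where resImedcy corresponds to resolution immediacy, instImedcy to deployment immediacy, and "lazy" is the On Demand option (the EPG and domain names are hypothetical):

<fvAEPg name="web">
    <fvRsDomAtt tDn="uni/vmmp-VMware/dom-mininet" resImedcy="immediate" instImedcy="lazy"/>
</fvAEPg>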

CHAPTER 8
Layer 4 to Layer 7 Service Insertion
This chapter contains the following sections:

• Layer 4 to Layer 7 Service Insertion
• Layer 4 to Layer 7 Policy Model
• Service Graphs
• Automated Service Insertion
• Device Packages
• About Device Clusters (Logical Devices)
• About Concrete Devices
• Function Nodes
• Function Node Connectors
• Terminal Nodes
• About Privileges
• Service Automation and Configuration Management
• Service Resource Pooling

Layer 4 to Layer 7 Service Insertion


The Cisco Application Policy Infrastructure Controller (APIC) manages network services. Policies are used
to insert services. APIC service integration provides a life cycle automation framework that enables the system
to dynamically respond when a service comes online or goes offline. Shared services that are available to the
entire fabric are administered by the fabric administrator. Services that are for a single tenant are administered
by the tenant administrator.
The APIC provides automated service insertion while acting as a central point of policy control. APIC policies
manage both the network fabric and services appliances. The APIC can configure the network automatically
so that traffic flows through the services. Also, the APIC can automatically configure the service according
to the application's requirements. This approach allows organizations to automate service insertion and
eliminate the challenge of managing all of the complex traffic-steering techniques of traditional service
insertion.

Layer 4 to Layer 7 Policy Model


The Layer 4 to Layer 7 service device type policies include key managed objects such as the services supported
by the package and device scripts. The following figure shows the objects of the Layer 4 to Layer 7 service
device type policy model.

Figure 61: Layer 4 to Layer 7 Policy Model

Layer 4 to Layer 7 service policies contain the following:


• Services—Contains metadata for all the functions provided by a device such as SSL offloading and
load-balancing. This MO contains the connector names, encapsulation type, such as VLAN and VXLAN,
and any interface labels.
• Device Script—Represents a device script handler that contains meta information about the related
attributes of the script handler including its name, package name, and version.
• Function Profile Group Container—Objects that contain the functions available to the service device
type. Function profiles contain all the configurable parameters supported by the device organized into
folders.

Service Graphs
The Cisco Application Centric Infrastructure (ACI) treats services as an integral part of an application. Any
services that are required are treated as a service graph that is instantiated on the ACI fabric from the Cisco
Application Policy Infrastructure Controller (APIC). Users define the service for the application, while service
graphs identify the set of network or service functions that are needed by the application. Each function is
represented as a node.
After the graph is configured in the APIC, the APIC automatically configures the services according to the
service function requirements that are specified in the service graph. The APIC also automatically configures
the network according to the needs of the service function that is specified in the service graph, which does
not require any change in the service device.
A service graph is represented as two or more tiers of an application with the appropriate service function
inserted between.


A service appliance (device) performs a service function within the graph. One or more service appliances
might be required to render the services required by a graph. One or more service functions can be performed
by a single service device.
Service graphs and service functions have the following characteristics:
• Traffic sent or received by an endpoint group (EPG) can be filtered based on a policy, and a subset of
the traffic can be redirected to different edges in the graph.
• Service graph edges are directional.
• Taps (hardware-based packet copy service) can be attached to different points in the service graph.
• Logical functions can be rendered on the appropriate (physical or virtual) device, based on the policy.
• The service graph supports splits and joins of edges, and it does not restrict the administrator to linear
service chains.
• Traffic can be reclassified again in the network after a service appliance emits it.
• Logical service functions can be scaled up or down or can be deployed in a cluster mode or 1:1
active-standby high-availability mode, depending on the requirements.
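For illustration, a minimal sketch of a graph with one function node between two terminal nodes (all names are hypothetical; a complete graph also defines the function node connectors and the connections between nodes):

<vnsAbsGraph name="web-graph">
    <vnsAbsTermNodeCon name="T1"/>
    <vnsAbsNode name="firewall" funcType="GoTo"/>
    <vnsAbsTermNodeProv name="T2"/>
</vnsAbsGraph>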

Service Graph Configuration Parameters


A service graph can have configuration parameters that are specified by the device package. Configuration
parameters can also be specified by an EPG, application profile, or tenant context. A function node within a
service graph can require one or more configuration parameters. The parameter values can be locked to prevent
any additional changes.
When you configure a service graph and specify the values of the configuration parameters, the APIC passes
the parameters to the device script that is within the device package. The device script converts the parameter
data to the configuration that is downloaded onto the device.

Service Graph Connections


A service graph connection connects one function node to another function node.

Automated Service Insertion


Although VLAN and virtual routing and forwarding (VRF) stitching is supported by traditional service
insertion models, the Application Policy Infrastructure Controller (APIC) can automate service insertion and
the provisioning of network services, such as Secure Sockets Layer (SSL) offload, server load balancing
(SLB), Web Application firewall (WAF), and firewall, while acting as a central point of policy control. The
network services are typically rendered by service appliances, such as Application Delivery Controllers (ADCs)
and firewalls. The APIC policies manage both the network fabric and services appliances. The APIC can
configure the network automatically so that traffic flows through the services. The APIC can also automatically
configure the service according to the application's requirements, which allows organizations to automate
service insertion and eliminate the challenge of managing the complex techniques of traditional service
insertion.


Device Packages
The Application Policy Infrastructure Controller (APIC) requires a device package to configure and monitor
service devices. A device package manages a class of service devices and provides the APIC with information
about the devices so that the APIC knows what the device is and what the device can do. A device package
allows an administrator to add, modify, or remove a network service on the APIC without interruption. Adding
a new device type to the APIC is done by uploading a device package.
A device package is a zip file that contains the following parts:

Device specification—An XML file that defines the following properties:
• Device properties:
◦Model—Model of the device.
◦Vendor—Vendor of the device.
◦Version—Software version of the device.

• Functions provided by a device, such as load balancing, content switching, and SSL termination.
• Interfaces and network connectivity information for each function.
• Device configuration parameters.
• Configuration parameters for each function.

Device script—A Python script that performs the integration between the APIC and a device. The APIC
events are mapped to function calls that are defined in the device script.

Function profile—A profile of parameters with default values that are specified by the vendor. You can
configure a function to use these default values.

Device-level configuration parameters—A configuration file that specifies parameters that are required by
a device at the device level. The configuration can be shared by one or more of the graphs that are using the
device.


The following figure shows the APIC service automation and insertion architecture through the device package.

Figure 62: Device Package Architecture

A device package can be provided by a device vendor or can be created by Cisco.


The device package enables an administrator to automate the management of the following services:
• Device attachment and detachment
• Endpoint attachment and detachment
• Service graph rendering
• Health monitoring
• Alarms, notifications, and logging
• Counters

When a device package is uploaded through the GUI or the northbound APIC interface, the APIC creates a
namespace for each unique device package. The content of the device package is unzipped and copied into
the namespace. The file structure created for a device package namespace is as follows:
root@apic1:/# ls
bin dbin dev etc fwk install images lib lib64 logs pipe sbin tmp usr util

root@apic1:/install# ls
DeviceScript.py DeviceSpecification.xml feature common images lib util.py
The contents of the device package are copied under the install directory.
The APIC parses the device model. The managed objects defined in the XML file are added to the APIC's
managed object tree that is maintained by the policy manager.
The Python scripts that are defined in the device package are launched within a script wrapper process in the
namespace. Access to the file system is restricted. Python scripts can create temporary files under /tmp and
can access any text files that were bundled as part of the device package. However, the Python scripts should
not create or store any persistent data in a file.
The device scripts can generate debug logs through the ACI logging framework. The logs are written to a
circular file called debug.log under the logs directory.
Multiple versions of a device package can coexist on the APIC, because each device package version runs in
its own namespace. Administrators can select a specific version for managing a set of devices.


About Device Clusters (Logical Devices)


A device cluster (also known as a logical device) is one or more concrete devices that act as a single device.
A device cluster has logical interfaces, which describe the interface information for the device cluster. During
service graph rendering, function node connectors are associated with logical interfaces. The Application
Policy Infrastructure Controller (APIC) allocates the network resources (VLAN or Virtual Extensible Local
Area Network [VXLAN]) for a function node connector during service graph instantiation and rendering and
programs the network resources onto the logical interfaces.
The service graph uses a specific device cluster that is based on a device cluster selection policy (called a
logical device context) that an administrator defines.
An administrator can set up a maximum of two concrete devices in a device cluster in active-standby mode.

About Concrete Devices


A concrete device has concrete interfaces. When a concrete device is added to a logical device cluster, concrete
interfaces are mapped to the logical interfaces. During service graph instantiation, VLANs and VXLANs are
programmed on concrete interfaces that are based on their association with logical interfaces.

Function Nodes
A function node represents a single service function. A function node has function node connectors, which
represent the network requirement of a service function.
A function node within a service graph can require one or more parameters. The parameters can be specified
by an endpoint group (EPG), an application profile, or a tenant context. Parameters can also be assigned at
the time that an administrator defines a service graph. The parameter values can be locked to prevent any
additional changes.

Function Node Connectors


A function node connector connects a function node to the service graph and is associated with the appropriate
bridge domain and connections based on the graph's connector's subset. Each connector is associated with a
VLAN or Virtual Extensible LAN (VXLAN). Each side of a connector is treated as an endpoint group (EPG),
and whitelists are downloaded to the switch to enable communication between the two function nodes.

Terminal Nodes
Terminal nodes connect a service graph with the contracts. An administrator can insert a service graph for
the traffic between two application endpoint groups (EPGs) by connecting the terminal node to a contract.
Once connected, traffic between the consumer EPG and provider EPG of the contract is redirected to the
service graph.
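A minimal sketch of attaching a graph to a contract subject (the contract, subject, and graph names are hypothetical):

<vzBrCP name="web-contract">
    <vzSubj name="http">
        <vzRsSubjGraphAtt tnVnsAbsGraphName="web-graph"/>
    </vzSubj>
</vzBrCP>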


About Privileges
An administrator can grant privileges to the roles in the APIC. Privileges determine what tasks a role is
allowed to perform. Administrators can grant the following privileges to the administrator roles:

nw-svc-connectivity
• Create a management EPG
• Create management connectivity to other objects

nw-svc-policy
• Create a service graph
• Attach a service graph to an application EPG and a contract
• Monitor a service graph

nw-svc-device
• Create a device cluster
• Create a concrete device
• Create a device context

Note Only an infrastructure administrator can upload a device package to the APIC.

Service Automation and Configuration Management


The Cisco APIC can optionally act as a point of configuration management and automation for service devices
and coordinate the service devices with the network automation. The Cisco APIC interfaces with a service
device by using Python scripts and calls device-specific Python script functions on various events.
The device scripts and a device specification that defines functions supported by the service device are bundled
as a device package and installed on the Cisco APIC. The device script handlers interface with the device by
using its REST interface (preferred) or CLI based on the device configuration model.

Service Resource Pooling


The Cisco ACI fabric can perform nonstateful load distribution across many destinations. This capability
allows organizations to group physical and virtual service devices into service resource pools, which can be
further grouped by function or location. These pools can offer high availability by using standard
high-availability mechanisms or they can be used as simple stateful service engines with the load redistributed
to the other members if a failure occurs. Either option provides horizontal scale out that far exceeds the current
limitations of the equal-cost multipath (ECMP), port channel features, and service appliance clustering, which
requires a shared state.
Cisco ACI can perform a simple version of resource pooling with any service devices if the service devices
do not have to interact with the fabric, and it can perform more advanced pooling that involves coordination
between the fabric and the service devices.

CHAPTER 9
Management Tools
This chapter contains the following sections:

• Management Tools, page 97


• About the Management GUI, page 97
• About the CLI, page 98
• Visore Managed Object Viewer, page 98
• Management Information Model Reference, page 99
• API Inspector, page 100
• User Login Menu Options, page 101
• Locating Objects in the MIT, page 101
• Configuration Export/Import, page 106

Management Tools
Cisco Application Centric Infrastructure (ACI) tools help fabric administrators, network engineers, and
developers to develop, configure, debug, and automate the deployment of tenants and applications.

About the Management GUI


The following management GUI features provide access to the fabric and its components (leaves and spines):
• Based on universal web standards (HTML5). No installers or plugins are required.
• Access to monitoring (statistics, faults, events, audit logs), operational and configuration data.
• Access to the APIC and spine and leaf switches through a single sign-on mechanism.
• Communication with the APIC using the same RESTful APIs that are available to third parties.

About the CLI


The CLI features an operational and configuration interface to the APIC, leaf, and spine switches:
• Implemented from the ground up in Python; can switch between the Python interpreter and CLI
• Plugin architecture for extensibility
• Context-based access to monitoring, operation, and configuration data
• Automation through Python commands or batch scripting

Visore Managed Object Viewer


Visore is a read-only management information tree (MIT) browser as shown in the figure below. It enables
distinguished name (DN) and class queries with optional filters.

Figure 63: Visore MO Viewer

The Visore managed object viewer is at this location: http(s)://host[:port]/visore.html

Management Information Model Reference


The Management Information Model (MIM) contains all of the managed objects in the system and their
properties. See the following figure for an example of how an administrator can use the MIM to research an
object in the MIT.

Figure 64: MIM Reference


API Inspector
The API Inspector provides a real-time display of the REST API commands that the APIC processes to perform
GUI interactions. The figure below shows the REST API commands that the API Inspector displays upon
navigating to the main tenant section of the GUI.

Figure 65: API Inspector

User Login Menu Options


The user login drop-down menu provides several configuration, diagnostic, reference, and preference options.
The figure below shows this drop-down menu.

Figure 66: User Login Menu Options

The options include the following:


• AAA options for changing the user password, SSH Keys, and X509 Certificate, and for viewing the
permissions of the logged-on user.
• Show API Inspector opens the API Inspector.
• API Documentation opens the Management Information Model reference.
• Remote Logging.
• Debug information.
• About the current version number of the software.
• Settings preferences for using the GUI.
• Logout to exit the system.

Locating Objects in the MIT


The Cisco ACI uses an information-model-based architecture (management information tree [MIT]) in which
the model describes all the information that can be controlled by a management process. Object instances are
referred to as managed objects (MOs).

The following figure shows the distinguished name, which uniquely represents any given MO instance, and
the relative name, which represents the MO locally underneath its parent MO. All objects in the MIT exist
under the root object.

Figure 67: MO Distinguished and Relative Names

Every MO in the system can be identified by a unique distinguished name (DN). This approach allows the
object to be referred to globally. In addition to its distinguished name, each object can be referred to by its
relative name (RN). The relative name identifies an object relative to its parent object. Any given object's
distinguished name is derived from its own relative name that is appended to its parent object's distinguished
name.
A DN is a sequence of relative names that uniquely identifies an object:
dn = {rn}/{rn}/{rn}/{rn}
dn = "sys/ch/lcslot-1/lc/leafport-1"

Distinguished names are directly mapped to URLs. Either the relative name or the distinguished name can be
used to access an object, depending on the current location in the MIT.
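For example, the distinguished name shown above maps onto a REST URL of the following form (the URL
conventions are described in detail later in this chapter):

http(s)://host[:port]/api/mo/sys/ch/lcslot-1/lc/leafport-1.xml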
Because of the hierarchical nature of the tree and the attribute system used to identify object classes, the tree
can be queried in several ways for obtaining managed object information. Queries can be performed on an
object itself through its distinguished name, on a class of objects such as a switch chassis, or on a tree-level
to discover all members of an object.

Tree-Level Queries
The following figure shows two chassis that are queried at the tree level.

Figure 68: Tree-Level Queries

Both queries return the referenced object and its child objects. This approach is useful for discovering the
components of a larger system. In this example, the query discovers the cards and ports of a given switch
chassis.

Class-Level Queries
The following figure shows the second query type: the class-level query.

Figure 69: Class-Level Queries

Class-level queries return all the objects of a given class. This approach is useful for discovering all the objects
of a certain type that are available in the MIT. In this example, the class used is Cards, which returns all the
objects of type Cards.

Object-Level Queries
The third query type is an object-level query. In an object-level query, a distinguished name is used to return
a specific object. The figure below shows two object-level queries: one for Node 1 in Chassis 2, and one for
Node 1 in Chassis 1, Card 1, Port 2.

Figure 70: Object-Level Queries

For all MIT queries, an administrator can optionally return the entire subtree or a partial subtree. Additionally,
the role-based access control (RBAC) mechanism in the system dictates which objects are returned; only the
objects that the user has rights to view will ever be returned.
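To make the three query types concrete, the following URL sketches reuse the sample chassis DN shown
earlier (the host is a placeholder, and the full URL conventions are described in the REST section later in this
chapter):

• Tree-level query on a chassis and its children: http(s)://host/api/mo/sys/ch.xml?query-target=subtree
• Class-level query returning all objects of a class: http(s)://host/api/class/fvAEPg.xml
• Object-level query on one port by its DN: http(s)://host/api/mo/sys/ch/lcslot-1/lc/leafport-1.xml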

Managed-Object Properties
Managed objects in the Cisco ACI contain properties that define the managed object. Properties in a managed
object are divided into chunks that are managed by processes in the operating system. Any object can have
several processes that access it. All these properties together are compiled at runtime and are presented to the
user as a single object. The following figure shows an example of this relationship.

Figure 71: Managed Object Properties

The example object has three processes that write to property chunks that are in the object. The data management
engine (DME), which is the interface between the Cisco APIC (the user) and the object, the port manager,
which handles port configuration, and the spanning tree protocol (STP) all interact with chunks of this object.
The APIC presents the object to the user as a single entity compiled at runtime.

Accessing the Object Data Through REST Interfaces


REST is a software architecture style for distributed systems such as the World Wide Web. REST has
increasingly displaced other design models such as Simple Object Access Protocol (SOAP) and Web Services
Description Language (WSDL) due to its simpler style. The Cisco APIC supports REST interfaces for
programmatic access to the entire Cisco ACI solution.
The object-based information model of Cisco ACI makes it a very good fit for REST interfaces: URLs and
URIs map directly to distinguished names that identify objects on the MIT, and any data on the MIT can be
described as a self-contained structured text tree document that is encoded in XML or JSON. The objects
have parent-child relationships that are identified using distinguished names and properties, which are read
and modified by a set of create, read, update, and delete (CRUD) operations.
Objects can be accessed at their well-defined address, their REST URLs, using standard HTTP commands
for retrieval and manipulation of Cisco APIC object data. The URL format used can be represented as follows:
<system>/api/[mo|class]/[dn|class][:method].[xml|json]?{options}
The various building blocks of the preceding URL are as follows:
• system: System identifier; an IP address or DNS-resolvable hostname
• mo | class: Indication of whether the query is addressed to a specific MO in the MIT or is a class-level query
• class: MO class (as specified in the information model) of the objects queried; the class name is
represented as <pkgName><ManagedObjectClassName>

• dn: Distinguished name (unique hierarchical name of the object in the MIT) of the object queried
• method: Optional indication of the method being invoked on the object; applies only to HTTP POST
requests
• xml | json: Encoding format
• options: Query options, filters, and arguments

With the capability to address and access an individual object or a class of objects with the REST URL, an
administrator can achieve complete programmatic access to the entire object tree and to the entire system.
The following are REST query examples:
• Find all EPGs and their faults under tenant solar.
http://192.168.10.1:7580/api/mo/uni/tn-solar.xml?query-target=subtree&target-subtree-class=fvAEPg&rsp-subtree-include=faults

• Filtered EPG query


http://192.168.10.1:7580/api/class/fvAEPg.xml?query-target-filter=eq(fvAEPg.fabEncap,%20"vxlan-12780288")

Configuration Export/Import
All APIC policies and configuration data can be exported to create backups. This is configurable via an export
policy that allows either scheduled or immediate backups to a remote server. Scheduled backups can be
configured to execute periodic or recurring backup jobs. By default, all policies and tenants are backed up,
but the administrator can optionally specify only a specific subtree of the management information tree.
Backups can be imported into the APIC through an import policy, which allows the system to be restored to
a previous configuration.

The following figure shows how the process works for configuring an export policy.

Figure 72: Workflow for Configuring an Export Policy

The APIC applies this policy in the following way:


• A complete system configuration backup is performed once a month.
• The backup is stored in XML format on the BigBackup FTP site.
• The policy is triggered (it is active).
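
Expressed as a managed object, a policy of that shape might be sketched as shown below. The configExportP
class is used here on the assumption that it is the configuration export policy class; its exact attributes, and
the separate MO that identifies the remote FTP destination, should be verified in the Management Information
Model reference.

<configExportP name="monthly-backup" format="xml" adminSt="triggered"/>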

The following figure shows how the process works for configuring an import policy.

Figure 73: Workflow for Configuring an Import Policy

The APIC applies this policy in the following way:


• A policy is created to perform a complete system configuration restore from monthly backup.
• The restore atomic mode will skip an entire shard when an attempt is made to import invalid configuration
to a shard.
• The policy is untriggered (it is available but has not been activated).

An import policy supports the following options:


Action
• Merge—Imported configuration is merged with existing configuration.
• Replace—Imported configuration replaces the existing configuration. Previously existing configuration
data that is not present in the files being imported is deleted.

Mode
• Atomic—Attempts to import all configuration data. The import fails for a given shard if any objects in
that shard cannot be imported.
• Best-effort—Attempts to import all configuration but ignores objects that cannot be imported.
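
A corresponding import policy could be sketched in the same spirit; the configImportP class name and the
importType/importMode attribute names are likewise assumptions to verify in the Management Information
Model reference.

<configImportP name="monthly-restore" fileName="backup-file-name" importType="replace" importMode="atomic" adminSt="untriggered"/>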

Tech Support, Statistics, Core


An administrator can configure export policies in the APIC to export statistics, technical support collections,
faults, and events, and to process core files and debug data from the fabric (the APIC as well as the switches)
to any external host. The exports can be delivered in a variety of formats and transports, including XML,
JSON, web sockets, SCP, and HTTP. Exports are subscribable, and can be streaming, periodic, or on-demand.
An administrator can configure policy details such as the transfer protocol, compression algorithm, and
frequency of transfer. Policies can be configured by users who are authenticated using AAA. A security
mechanism for the actual transfer is based on a username and password. Internally, a policy element handles
the triggering of data.

CHAPTER 10
Monitoring
This chapter contains the following sections:

• Faults, Errors, Events, Audit Logs, page 111


• Statistics Properties, Tiers, Thresholds, and Monitoring, page 114
• Configuring Monitoring Policies, page 115

Faults, Errors, Events, Audit Logs

Note For information about faults, events, errors, and system messages, see the Cisco APIC Faults, Events, and
Error Messages User Guide and the Cisco APIC Management Information Model Reference, a Web-based
application.

The APIC maintains a comprehensive, current run-time representation of the administrative and operational
state of the ACI Fabric system in the form of a collection of MOs. The system generates faults, errors, events,
and audit log data according to the run-time state of the system and the policies that the system and user create
to manage these processes.

Faults
Based on the run-time state of the system, the APIC automatically detects anomalies and creates fault objects
to represent them. Fault objects contain various properties that are meant to help users diagnose the issue,
assess its impact and provide a remedy.
For example, if the system detects a problem associated with a port, such as a high parity-error rate, a fault
object is automatically created and placed in the management information tree (MIT) as a child of the port
object. If the same condition is detected multiple times, no additional instances of the fault object are created.

After the condition that triggered the fault is remedied, the fault object is preserved for a period of time
specified in a fault life cycle policy and is finally deleted. See the following figure.

Figure 74: Fault Life Cycle

A life cycle represents the current state of the issue. It starts in the soak time when the issue is first detected,
and it changes to raised and remains in that state if the issue is still present. When the condition is cleared, it
moves to a state called "raised-clearing" in which the condition is still considered as potentially present. Then
it moves to a "clearing time" and finally to "retaining". At this point, the issue is considered to be resolved
and the fault object is retained only to provide the user visibility into recently resolved issues.
Each time that a life cycle transition occurs, the system automatically creates a fault record object to log it.
Fault records are never modified after they are created and they are deleted only when their number exceeds
the maximum value specified in the fault retention policy.
The severity is an estimate of the impact of the condition on the capability of the system to provide service.
Possible values are warning, minor, major and critical. A fault with a severity equal to warning indicates a
potential issue (including, for example, an incomplete or inconsistent configuration) that is not currently
affecting any deployed service. Minor and major faults indicate that there is potential degradation in the service
being provided. Critical means that a major outage is severely degrading a service or impairing it altogether.
Description contains a human-readable description of the issue that is meant to provide additional information
and help in troubleshooting.

Events
Event records are objects that are created by the system to log the occurrence of a specific condition that might
be of interest to the user. They contain the fully qualified domain name (FQDN) of the affected object, a
timestamp and a description of the condition. Examples include link state transitions, starting and stopping
of protocols, and detection of new hardware components. Event records are never modified after creation and
are deleted only when their number exceeds the maximum value specified in the event retention policy.
The following figure shows the process for fault and events reporting.

Figure 75: Faults and Events Reporting/Export

1 Process detects a faulty condition.


2 Process notifies Event and Fault Manager.
3 Event and Fault Manager processes the notification according to the fault rules.
4 Event and Fault Manager creates a fault Instance in the MIM and manages its life cycle according to the
fault policy.
5 Event and Fault Manager notifies the APIC and connected clients of the state transitions.
6 Event and Fault Manager triggers further actions (such as syslog or call home).

Errors
APIC error messages typically display in the APIC GUI and the APIC CLI. These error messages are specific
to the action that a user is performing or the object that a user is configuring or administering. These messages
can be the following:
• Informational messages that provide assistance and tips about the action being performed
• Warning messages that provide information about system errors related to an object, such as a user
account or service profile, that the user is configuring or administering
• Finite state machine (FSM) status messages that provide information about the status of an FSM stage

Many error messages contain one or more variables. The information that the APIC uses to replace these
variables depends upon the context of the message. Some messages can be generated by more than one type
of error.

Audit Logs
Audit records are objects that are created by the system to log user-initiated actions, such as login/logout and
configuration changes. They contain the name of the user who is performing the action, a timestamp, a
description of the action and, if applicable, the FQDN of the affected object. Audit records are never modified
after creation and are deleted only when their number exceeds the maximum value specified in the audit
retention policy.

Statistics Properties, Tiers, Thresholds, and Monitoring


Statistics enable trend analysis and troubleshooting. Statistics gathering can be configured for collection on
an ongoing basis or on demand. Statistics provide real-time measures of observed objects. Statistics can be
collected in cumulative counters and gauges. See the following figure.
Policies define what statistics are gathered, at what intervals, and what actions to take. For example, a policy
could raise a fault on an EPG if a threshold of dropped packets on an ingress VLAN is greater than 1000 per
second.

Figure 76: Various Sources of Statistics

Statistics data are gathered from a variety of sources, including interfaces, VLANs, EPGs, application profiles,
ACL rules, tenants, and internal APIC processes. Statistics accumulate data in 5-minute, 15-minute, 1-hour,
1-day, 1-week, 1-month, 1-quarter, or 1-year sampling intervals. Shorter duration intervals feed longer intervals.
A variety of statistics properties are available, including last value, cumulative, periodic, rate of change, trend,
maximum, minimum, and average. Collection and retention times are configurable. Policies can specify whether
the statistics are to be gathered from the current state of the system, to be accumulated historically, or both.
For example, a policy could specify that historical statistics be gathered at 5-minute intervals over a period
of 1 hour. The 1 hour is a moving window: once an hour has elapsed, the incoming 5 minutes of statistics
are added and the earliest 5 minutes of data are abandoned.

Configuring Monitoring Policies


Administrators can create monitoring policies with the following four broad scopes:
• Fabric Wide: includes both fabric and access objects
• Access (also known as infrastructure): access ports, FEX, VM controllers, and so on
• Fabric: fabric ports, cards, chassis, fans, and so on
• Tenant: EPGs, application profiles, services, and so on

The APIC includes the following four classes of default monitoring policies:
• monCommonPol (uni/fabric/moncommon): applies to both fabric and access infrastructure hierarchies
• monFabricPol (uni/fabric/monfab-default): applies to fabric hierarchies
• monInfraPol (uni/infra/moninfra-default): applies to the access infrastructure hierarchy
• monEPGPol (uni/tn-common/monepg-default): applies to tenant hierarchies

In each of the four classes of monitoring policies, the default policy can be overridden by a specific policy.
For example, a monitoring policy applied to the Solar tenant (tn-solar) would override the default one for the
Solar tenant while other tenants would still be monitored by the default policy.
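Based on the DN patterns listed above, such a tenant-specific override can be sketched as a monEPGPol
object created under the tenant; how the monitoring targets beneath it are populated is omitted here and should
be checked in the Management Information Model reference.

<fvTenant name="solar">
<monEPGPol name="solarMonPol"/>
</fvTenant>

The resulting DN, uni/tn-solar/monepg-solarMonPol, parallels the default policy DN uni/tn-common/monepg-default.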
Each of the four objects in the figure below contains monitoring targets.

Figure 77: Four Classes of Default Monitoring Policies

The Infra monitoring policy contains monInfraTargets, the fabric monitoring policy contains monFabTargets,
and the tenant monitoring policy contains monEPGTargets. Each of the targets represent the corresponding
class of objects in this hierarchy. For example, under the monInfra-default monitoring policy, there is a target
representing FEX fabric facing ports. The policy details regarding how to monitor these FEX fabric facing
ports are contained in this target. Only policies applicable to a target are allowed under that target. Note that
not all possible targets are auto-created by default. The administrator can add more targets under a policy if
the target is not there.
The following figure shows how the process works for configuring an access monitoring policy for statistics.

Figure 78: Workflow for Configuring an Access Monitoring Policy

The APIC applies this monitoring policy as shown in the following figure:

Figure 79: Result of Sample Access Monitoring Policy

Monitoring policies can also be configured for other system operations, such as faults or health scores. The
structure of monitoring policies maps to this hierarchy:
Monitoring Policy
• Statistics Export
• Collection Rules
• Monitoring Targets
    ◦ Statistics Export
    ◦ Collection Rules
    ◦ Statistics
        ◦ Collection Rules
        ◦ Thresholds Rules
        ◦ Statistics Export

The Statistics Export policy options shown in the following figure define the format and destination for
statistics to be exported. The output can be exported using the FTP, HTTP, or SCP protocols. The format can
be JSON or XML. The user or administrator can also choose to compress the output. Export can be defined
under Statistics, under Monitoring Targets, or under the top-level monitoring policy. The higher-level definition
of Statistics Export takes precedence unless there is a defined lower-level policy.
As shown in the figure below, monitoring policies are applied to specific observable objects (such as ports,
cards, EPGs, and tenants) or groups of observable objects by using selectors or relations.

Figure 80: Fabric Statistics Collection Policy

Monitoring policies define the following:


• Statistics are collected and retained in the history.
• Threshold crossing faults are triggered.
• Statistics are exported.

As shown in the figure below, Collection Rules are defined per sampling interval.

Figure 81: Statistics Monitoring Interval

They configure whether the collection of statistics should be turned on or off, and when turned on, what the
history retention period should be. Monitoring Targets correspond to observable objects (such as ports and
EPGs).
Statistics correspond to groups of statistical counters (such as ingress-counters, egress-counters, or
drop-counters).

Figure 82: Statistics Types

Collection Rules can be defined under Statistics, under Monitoring Targets, or under the top-level Monitoring
Policy. The higher-level definition of Collection Rules takes precedence unless there is a defined lower-level policy.
As shown in the figure below, threshold rules are defined under collection rules and would be applied to the
corresponding sampling-interval that is defined in the parent collection rule.

Figure 83: Statistics Thresholds
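
Putting the pieces together, the nesting described above can be outlined as follows. The element names in
this outline are hypothetical placeholders, not real MO class names; the actual collection-rule and threshold
classes should be looked up in the Management Information Model reference.

<monitoringPolicy>  <!-- hypothetical names throughout this sketch -->
    <monitoringTarget scope="port">
        <statisticsGroup type="ingress-counters">
            <collectionRule interval="5min" history="1h">
                <thresholdRule counter="dropped-packets" risingThreshold="1000"/>
            </collectionRule>
        </statisticsGroup>
    </monitoringTarget>
</monitoringPolicy>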

CHAPTER 11
Troubleshooting
This chapter contains the following sections:

• Troubleshooting, page 121


• Health Score, page 122
• Atomic Counters, page 124
• Multinode SPAN, page 125
• ARPs, ICMP Pings, and Traceroute, page 125

Troubleshooting
The ACI fabric provides extensive troubleshooting and monitoring tools as shown in the following figure.

Figure 84: Troubleshooting

Health Score
The APIC uses a policy model to combine data into a health score. Health scores can be aggregated for a
variety of areas such as for the infrastructure, applications, or services. See the following figure.

Figure 85: Health Score Calculation

Each managed object (MO) belongs to a health score category. By default, the health score category of an
MO is the same as its MO class name. The health score category of an MO class can be changed by using a
policy. For example, the default health score category of a leaf port is eqpt:LeafP and the default health score
category of fabric ports is eqpt:FabP. However, a policy that includes both leaf ports and fabric ports can be
made to be part of the same category called ports.
Each health score category is assigned an impact level. The five health score impact levels are Maximum,
High, Medium, Low, and None. For example, the default impact level of fabric ports is Maximum and the
default impact level of leaf ports is High. Certain categories of children MOs can be excluded from health
score calculations of its parent MO by assigning a health score impact of None. These impact levels between
objects are user-configurable. However, if the default impact level is None, the administrator cannot override it.
The impact levels carry the following weights:
• Maximum: 100%
• High: 80%
• Medium: 50%
• Low: 20%
• None: 0%

The category health score is calculated using an Lp-norm formula. The health score penalty equals 100 minus
the health score. The health score penalty represents the overall health score penalties of a set of MOs that
belong to a given category and are children or direct relatives of the MO for which a health score is being
calculated.
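
The exact formula is not spelled out here; as an illustrative sketch only, a weighted Lp-norm aggregation of
child penalties has the following shape, where $p_i$ is the health score penalty of child MO $i$, $w_i$ is the
weight implied by its category's impact level (100%, 80%, 50%, 20%, or 0% from the list above), $N$ is the
number of contributing children, and $k$ is the norm order:

$$\text{penalty} = \left(\frac{1}{N}\sum_{i=1}^{N}\left(w_i\,p_i\right)^{k}\right)^{1/k}, \qquad \text{health score} = 100 - \text{penalty}$$

A child whose category has impact level None ($w_i = 0$) contributes nothing, which is consistent with the
exclusion behavior described above.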

Health Score Aggregation and Impact


The health score of an application component can be distributed across multiple leaf switches as shown in the
following figure.

Figure 86: Health Score Aggregation

The aggregated health score is computed at the APIC.


In the following figure, a hardware fault impacts the health score of an application component.

Figure 87: Simplified Health Score Impact Example

Atomic Counters
Atomic counters detect drops and misrouting in the fabric, which enables quick debugging and isolation of
application connectivity issues. For example, an administrator can enable atomic counters on all leaf switches
to trace packets from endpoint 1 to endpoint 2. If any leaf switches other than the source and destination have
nonzero counters, an administrator can drill down to those switches.
In conventional settings, it is nearly impossible to monitor the amount of traffic from a bare-metal NIC to a
specific IP address (an endpoint) or to any IP address. Atomic counters allow an administrator to count the
number of packets that are received from a bare-metal endpoint without any interference to its data path. In
addition, atomic counters can monitor per-protocol traffic that is sent to and from an endpoint or an application
group.
Leaf-to-leaf (TEP to TEP) atomic counters can provide the following:
• Counts of drops, admits, and excess packets
• Short-term data collection such as the last 30 seconds, and long-term data collection such as 5 minutes,
15 minutes, or more
• A breakdown of per-spine traffic
• Ongoing monitoring

Tenant atomic counters can provide the following:


• Application-specific counters for traffic across the fabric, including drops, admits, and excess packets
• Modes include the following:
◦Endpoint to endpoint
◦EPG to EPG with optional drill down
◦EPG to endpoint
◦EPG to * (any)
◦Endpoint to external IP address

Multinode SPAN
The APIC traffic monitoring policies can apply SPAN at the appropriate places to keep track of all the members
of each application group and where they are connected. If any member moves, the APIC automatically pushes
the policy to the new leaf. For example, when an endpoint moves via VMotion to a new leaf, the SPAN
configuration automatically adjusts.

ARPs, ICMP Pings, and Traceroute


ARPs for the default gateway IP address are trapped at the ingress leaf switch. The ingress leaf switch unicasts
the ARP request to the destination and the destination sends the ARP response.

Figure 88: APIC Endpoint to Endpoint Traceroute

A traceroute that is initiated from the tenant endpoints shows the default gateway as an intermediate hop that
appears at the ingress leaf switch.

Traceroute modes include from endpoint to endpoint, and from leaf to leaf (TEP to TEP). Traceroute discovers
all paths across the fabric, points of exit for external endpoints, and helps to detect if any path is blocked.

APPENDIX A
Tenant Policy Example
This chapter contains the following sections:

• Tenant Policy Example Overview, page 127


• Tenant Policy Example XML Code, page 128
• Tenant Policy Example Explanation, page 129
• What the Example Tenant Policy Does, page 136

Tenant Policy Example Overview


The description of the tenant policy example in this appendix uses XML terminology
(http://en.wikipedia.org/wiki/XML#Key_terminology). This example demonstrates how basic APIC policy
model constructs are rendered into the XML code. The following figure provides an overview of the tenant
policy example.

Figure 89: EPGs and Contract Contained in Tenant Solar

In the figure, according to the contract called webCtrct and the EPG labels, the green-labeled EPG:web1 can
communicate with the green-labeled EPG:app using both http and https, while the red-labeled EPG:web2 can
communicate with the red-labeled EPG:db using only https.

Tenant Policy Example XML Code


<polUni>
<fvTenant name="solar">

<vzFilter name="Http">
<vzEntry name="e1" etherT="ipv4" prot="tcp" dFromPort="80" dToPort="80"/>
</vzFilter>

<vzFilter name="Https">
<vzEntry name="e1" etherT="ipv4" prot="tcp" dFromPort="443" dToPort="443"/>
</vzFilter>

<vzBrCP name="webCtrct">
<vzSubj name="http" revFltPorts="true" provmatchT="All">
<vzRsSubjFiltAtt tnVzFilterName="Http"/>
<vzRsSubjGraphAtt graphName="G1" termNodeName="TProv"/>
<vzProvSubjLbl name="openProv"/>
<vzConsSubjLbl name="openCons"/>
</vzSubj>
<vzSubj name="https" revFltPorts="true" provmatchT="All">
<vzProvSubjLbl name="secureProv"/>
<vzConsSubjLbl name="secureCons"/>
<vzRsSubjFiltAtt tnVzFilterName="Https"/>
<vzRsOutTermGraphAtt graphName="G2" termNodeName="TProv"/>
</vzSubj>
</vzBrCP>

<fvCtx name="solarctx1"/>

<fvBD name="solarBD1">
<fvRsCtx tnFvCtxName="solarctx1" />
<fvSubnet ip="11.22.22.20/24">
<fvRsBDSubnetToProfile tnL3extOutName="rout1" tnRtctrlProfileName="profExport"
/>
</fvSubnet>
<fvSubnet ip="11.22.23.211/24">
<fvRsBDSubnetToProfile tnL3extOutName="rout1"
tnRtctrlProfileName="profExport"/>
</fvSubnet>
</fvBD>

<fvAp name="sap">
<fvAEPg name="web1">
<fvRsBd tnFvBDName="solarBD1" />
<fvRsDomAtt tDn="uni/vmmp-VMware/dom-mininet" />
<fvRsProv tnVzBrCPName="webCtrct" matchT="All">
<vzProvSubjLbl name="openProv"/>
<vzProvSubjLbl name="secureProv"/>
<vzProvLbl name="green"/>
</fvRsProv>
</fvAEPg>
<fvAEPg name="web2">
<fvRsBd tnFvBDName="solarBD1" />
<fvRsDomAtt tDn="uni/vmmp-VMware/dom-mininet" />
<fvRsProv tnVzBrCPName="webCtrct" matchT="All">
<vzProvSubjLbl name="secureProv"/>
<vzProvLbl name="red"/>
</fvRsProv>
</fvAEPg>
<fvAEPg name="app">
<fvRsBd tnFvBDName="solarBD1" />
<fvRsDomAtt tDn="uni/vmmp-VMware/dom-mininet" />
<fvRsCons tnVzBrCPName="webCtrct">
<vzConsSubjLbl name="openCons"/>
<vzConsSubjLbl name="secureCons"/>
<vzConsLbl name="green"/>
</fvRsCons>
</fvAEPg>
<fvAEPg name="db">
<fvRsBd tnFvBDName="solarBD1" />
<fvRsDomAtt tDn="uni/vmmp-VMware/dom-mininet" />
<fvRsCons tnVzBrCPName="webCtrct">
<vzConsSubjLbl name="secureCons"/>
<vzConsLbl name="red"/>
</fvRsCons>
</fvAEPg>
</fvAp>
</fvTenant>
</polUni>

Tenant Policy Example Explanation


This section contains a detailed explanation of the tenant policy example.

Policy Universe
The policy universe contains all the tenant-managed objects where the policy for each tenant is defined.
<polUni>

This starting tag, <polUni>, in the first line indicates the beginning of the policy universe element. This tag
is matched with </polUni> at the end of the policy. Everything in between is the policy definition.

Tenant Policy Example


The <fvTenant> tag identifies the beginning of the tenant element.
<fvTenant name="solar">

All of the policies for this tenant are defined in this element. The name of the tenant in this example is solar.
The tenant name must be unique in the system. The primary elements that the tenant contains are filters,
contracts, outside networks, bridge domains, and application profiles that contain EPGs.

Filters
The filter element starts with a <vzFilter> tag and contains elements that are indicated with a <vzEntry>
tag.
The following example defines "Http" and "Https" filters. The first attribute of the filter is its name and the
value of the name attribute is a string that is unique to the tenant. These names can be reused in different
tenants. These filters are used in the subject elements within contracts later on in the example.

<vzFilter name="Http">
<vzEntry name="e1" etherT="ipv4" prot="tcp" dFromPort="80" dToPort="80"/>
</vzFilter>

<vzFilter name="Https">
<vzEntry name="e1" etherT="ipv4" prot="tcp" dFromPort="443" dToPort="443"/>
</vzFilter>

Each filter can have one or more entries where each entry describes a set of Layer 4 TCP or UDP port numbers.
Some of the possible attributes of the <vzEntry> element are as follows:

• name
• prot
• dFromPort
• dToPort
• sFromPort
• sToPort
• etherT
• ipFlags
• arpOpc
• tcpRules

In this example, each entry’s name attribute is specified. The name is an ASCII string that must be unique
within the filter but can be reused in other filters. Because this example does not refer to a specific entry later
on, it is given a simple name of “e1”.
The EtherType attribute, etherT, is next. It is assigned the value of ipv4 to specify that the filter is for IPv4
packets. There are many other possible values for this attribute. Common ones include ARP, RARP, and in a
future release, IPv6. The default is unspecified so it is important to assign it a value.
Following the EtherType attribute is the prot attribute that is set to tcp to indicate that this filter is for TCP
traffic. Alternate protocol attributes include udp, icmp, and unspecified (default).
After the protocol, the destination TCP port number is assigned to be in the range from 80 to 80 (exactly TCP
port 80) with the dFromPort and dToPort attributes. If the from and to are different, they specify a range of
port numbers.
In this example, these destination port numbers are specified with the attributes dFromPort and dToPort.
However, when they are used in the contract, they should be used for the destination port from the TCP client
to the server and as the source port for the return traffic. See the attribute revFltPorts later in this example
for more information.
The second filter does essentially the same thing, but for port 443 instead.
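For illustration, an entry that also constrained the source ports would simply add the source attributes from
the list above. This variation is hypothetical and is not part of the sample policy:

<vzEntry name="e2" etherT="ipv4" prot="tcp" sFromPort="1024" sToPort="65535" dFromPort="80" dToPort="80"/>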
Filters are referred to by subjects within contracts by their target distinguished name, tDn. The tDn name is
constructed as follows:
uni/tn-<tenant name>/flt-<filter name>

For example, the tDn of the first filter above is uni/tn-solar/flt-Http. The second filter has the name
uni/tn-solar/flt-Https. In both cases, solar comes from the tenant name.

Contracts
The contract element is tagged vzBrCP and it has a name attribute.

<vzBrCP name="webCtrct">
<vzSubj name="http" revFltPorts="true" provmatchT="All">
<vzRsSubjFiltAtt tnVzFilterName="Http"/>
<vzRsSubjGraphAtt graphName="G1" termNodeName="TProv"/>
<vzProvSubjLbl name="openProv"/>
<vzConsSubjLbl name="openCons"/>
</vzSubj>
<vzSubj name="https" revFltPorts="true" provmatchT="All">
<vzProvSubjLbl name="secureProv"/>
<vzConsSubjLbl name="secureCons"/>
<vzRsSubjFiltAtt tnVzFilterName="Https"/>
<vzRsOutTermGraphAtt graphName="G2" termNodeName="TProv"/>
</vzSubj>
</vzBrCP>
Contracts are the policy elements between EPGs. They contain all of the filters that are applied between EPGs
that produce and consume the contract. Refer to the object model reference documentation for other attributes
that can be used in the contract element. This example has one contract named webCtrct.
The contract contains multiple subject elements where each subject contains a set of filters. In this example,
the two subjects are http and https.
The contract is later referenced by EPGs that either provide or consume it. They reference it by its name in
the following manner:
uni/tn-[tenant-name]/brc-[contract-name]

tenant-name is the name of the tenant, “solar” in this example, and the contract-name is the name of the
contract. For this example, the tDn name of the contract is uni/tn-solar/brc-webCtrct.

Subjects
The subject element starts with the tag vzSubj and has three attributes: name, revFltPorts, and provmatchT.
The name is simply the ASCII name of the subject.

revFltPorts is a flag that indicates that the Layer 4 source and destination ports in the filters of this subject
should be used as specified in the filter description in the forward direction (that is, in the direction of from
consumer to producer EPG), and should be used in the opposite manner for the reverse direction. In this
example, the “http” subject contains the “Http” filter that defined TCP destination port 80 and did not specify
the source port. Because the revFltPorts flag is set to true, the policy will be TCP destination port 80 and
any source port for traffic from the consumer to the producer, and it will be TCP destination port any and
source port 80 for traffic from the producer to the consumer. The assumption is that the consumer initiates
the TCP connection to the producer (the consumer is the client and the producer is the server).
The default value for the revFltPorts attribute is false if it is not specified.

Labels
The match type attributes, provmatchT (for provider matching) and consmatchT (for consumer matching),
determine how the subject labels are compared to determine if the subject applies for a given pair of consumers
and producers. The following match type values are available:
• All
• AtLeastOne (default)
• None
• ExactlyOne

When deciding whether a subject applies to the traffic between a producer and consumer EPG, the match
attribute determines how the subject labels that are defined (or not) in those EPGs should be compared to the
labels in the subject. If the match attribute value is set to All, it only applies to the providers whose provider
subject labels, vzProvSubjLbl, match all of the vzProvSubjLbl labels that are defined in the subject. If two
labels are defined, both must also be in the provider. If a provider EPG has 10 labels, as long as all of the
provider labels in the subject are present, a match is confirmed. A similar criterion is used for the consumers
that use the vzConsSubjLbl. If the matchT attribute value is AtLeastOne, only one of the labels must match.
If the matchT attribute is None, the match only occurs if none of the provider labels in the subject match the
provider labels of the provider EPGs and similarly for the consumer.
If the producer or consumer EPGs do not have any subject labels and the subject does not have any labels, a
match occurs for All, AtLeastOne, and None (if you do not use labels, the subject is used and the matchT
attribute does not matter).
An optional attribute of the subject not shown in the example is prio, where the priority of the traffic that
matches the filter is specified. Possible values are gold, silver, bronze, or unspecified (default).
In the example, the subject element contains references to filter elements, subject label elements, and graph
elements. <vzRsSubjFiltAtt tnVzFilterName="Http"/> is a reference to the previously defined filter, and the
element is identified by the vzRsSubjFiltAtt tag.
<vzRsSubjGraphAtt graphName="G1" termNodeName="TProv"/> defines a terminal connection.
<vzProvSubjLbl name="openProv"/> defines a provider label named "openProv". The label is used to qualify
or filter which subjects get applied to which EPGs. This particular one is a provider label and the corresponding
consumer label is identified by the tag vzConsSubjLbl. These labels are matched with the corresponding label
of the provider or consumer EPG that is associated with the current contract. If a match occurs according to
the matchT criteria described above, a particular subject applies to the EPG. If no match occurs, the subject
is ignored.
Multiple provider and consumer subject labels can be added to a subject to allow more complicated matching
criteria. In this example, there is just one label of each type on each subject. However, the labels on the first
subject are different from the labels on the second subject, which allows these two subjects to be handled
differently depending on the labels of the corresponding EPGs. The order of the elements within the subject
element does not matter.

Context
The context is identified by the fvCtx tag and contains a name attribute.

<fvCtx name="solarctx1"/>

A tenant can contain multiple contexts. For this example, the tenant uses one context named “solarctx1”. The
name must be unique within the tenant.
The context defines a Layer 3 address domain. All of the endpoints within the Layer 3 domain must have
unique IPv4 or IPv6 addresses because it is possible to directly forward packets between these devices if the
policy allows it. A context is equivalent to a virtual routing and forwarding (VRF) instance in the networking
world.
While a context defines a unique IP address space, the corresponding subnets are defined within bridge
domains. Each bridge domain is then associated with a context.

Bridge Domains
The bridge domain element is identified with the fvBD tag and has a name attribute.

<fvBD name="solarBD1">
<fvRsCtx tnFvCtxName="solarctx1" />
<fvSubnet ip="11.22.22.20/24">
<fvRsBDSubnetToProfile tnL3extOutName="rout1"
tnRtctrlProfileName="profExport" />
</fvSubnet>
<fvSubnet ip="11.22.23.211/24">
<fvRsBDSubnetToProfile tnL3extOutName="rout1"
tnRtctrlProfileName="profExport"/>
</fvSubnet>
</fvBD>
Within the bridge domain element, subnets are defined and a reference is made to the corresponding Layer 3
context. Each bridge domain must be linked to a context and have at least one subnet.
This example uses one bridge domain named “solarBD1”. In this example, the “solarctx1” context is referenced
by using the element tagged fvRsCtx and the tnFvCtxName attribute is given the value “solarctx1”. This name
comes from the context defined above.
The subnets are contained within the bridge domain and a bridge domain can contain multiple subnets. This
example defines two subnets. All of the addresses used within a bridge domain must fall into one of the address
ranges that are defined by the subnets. However, the subnet can also be a supernet, which is a very large subnet
that includes many addresses that might never be used. Specifying one giant subnet that covers all current
and future addresses can simplify the bridge domain specification. However, different subnets must not overlap
within a bridge domain or with subnets defined in other bridge domains that are associated with the same
context. Subnets can overlap with other subnets that are associated with other contexts.
The subnets described above are 11.22.22.xx/24 and 11.22.23.xx/24. However, the full 32 bits of the address
are given even though the mask says that only 24 are used, because this IP attribute also tells what the full IP
address of the router is for that subnet. In the first case, the router IP address (default gateway) is 11.22.22.20
and for the second subnet, it is 11.22.23.211.
The entry 11.22.22.20/24 is equivalent to the following, but in compact form:
• Subnet: 11.22.22.0
• Subnet Mask: 255.255.255.0
• Default gateway: 11.22.22.20

Application Profiles
The start of the application profile is indicated by the fvAp tag and has a name attribute.
<fvAp name="sap">

This example has one application network profile and it is named “sap”.
The application profile is a container that holds the EPGs. EPGs can communicate with other EPGs in the
same application profile and with EPGs in other application profiles. The application profile is simply a
convenient container that is used to hold multiple EPGs that are logically related to one another. They can be
organized by the application they provide such as “sap”, by the function they provide such as “infrastructure”,
by where they are in the structure of the data center such as “DMZ”, or whatever organizing principle the
administrator chooses to use.
The primary object that the application profile contains is an endpoint group (EPG). In this example, the “sap”
application profile contains 4 EPGs: web1, web2, app, and db.

Endpoints and Endpoint Groups (EPGs)


EPGs begin with the tag fvAEPg and have a name attribute.

<fvAEPg name="web1">
<fvRsBd tnFvBDName="solarBD1" />
<fvRsDomAtt tDn="uni/vmmp-VMware/dom-mininet" />
<fvRsProv tnVzBrCPName="webCtrct" matchT="All">
<vzProvSubjLbl name="openProv"/>
<vzProvSubjLbl name="secureProv"/>
<vzProvLbl name="green"/>
</fvRsProv>
</fvAEPg>
The EPG is the most important fundamental object in the policy model. It represents a collection of endpoints
that are treated in the same fashion from a policy perspective. Rather than configure and manage those endpoints
individually, they are placed within an EPG and are managed as a collection or group.
The EPG object is where labels are defined that govern what policies are applied and which other EPGs can
communicate with this EPG. It also contains a reference to the bridge domain that the endpoints within the
EPG are associated with as well as which virtual machine manager (VMM) domain they are associated with.
VMM allows virtual machine mobility between two VM servers instantaneously with no application downtime.
The first EPG in the example is named “web1”. The fvRsBd element within the EPG defines which bridge
domain that it is associated with. The bridge domain is identified by the value of the tnFvBDName attribute.
This EPG is associated with the “solarBD1” bridge domain named in the “Bridge Domain” section above. The
binding to the bridge domain is used by the system to understand what the default gateway address should be
for the endpoints in this EPG. It does not imply that the endpoints are all in the same subnet or that they can
only communicate through bridging. Whether an endpoint’s packets are bridged or routed is determined by
whether the source endpoint sends the packet to its default gateway or directly to the desired final destination.
If it sends the packet to the default gateway, the packet is routed.
The VMM domain used by this EPG is identified by the fvRsDomAtt tag. This element references the VMM
domain object defined elsewhere. The VMM domain object is identified by its tDn name attribute. This
example shows only one VMM domain called “uni/vmmp-VMware/dom-mininet”.

The next element in the “web1” EPG defines which contract this EPG provides and is identified by the fvRsProv
tag. If “web1” were to provide multiple contracts, there would be multiple fvRsProv elements. Similarly, if it
were to consume one or more contracts, there would be fvRsCons elements as well.
The fvRsProv element has a required attribute that is the name of the contract that is being provided. “web1”
is providing the contract “webCtrct” that was defined earlier, whose tDn is “uni/tn-solar/brc-webCtrct”.
The next attribute is the matchT attribute, which has the same semantics for matching provider or consumer
labels as it did in the contract for subject labels (it can take on the values of All, AtLeastOne, or None). This
criteria applies to the provider labels as they are compared to the corresponding consumer labels. A match of
the labels implies that the consumer and provider can communicate if the contract between them allows it. In
other words, the contract has to allow communication and the consumer and provider labels have to match
using the match criteria specified at the provider.
The consumer has no corresponding match criteria. The match type used is always determined by the provider.
Inside the provider element, fvRsProv, an administrator needs to specify the labels that are to be used. There
are two kinds of labels, provider labels and provider subject labels. The provider labels, vzProvLbl, are used
to match consumer labels in other EPGs that use the matchT criteria described earlier. The provider subject
labels, vzProvSubjLbl, are used to match the subject labels that are specified in the contract. The only attribute
of the label is its name attribute.
In the “web1” EPG, two provider subject labels, openProv and secureProv, are specified to match with the
“http” and “https” subjects of the “webCtrct” contract. One provider label, “green”, is specified with a match
criterion of All that will match with the same label in the “app” EPG.
The next EPG in the example, “web2”, is very similar to “web1” except that there is only one vzProvSubjLbl
and the labels themselves are different.
The third EPG is one called “app” and it is defined as follows:

<fvAEPg name="app">
<fvRsBd tnFvBDName="solarBD1" />
<fvRsDomAtt tDn="uni/vmmp-VMware/dom-mininet" />
<fvRsCons tnVzBrCPName="webCtrct">
<vzConsSubjLbl name="openCons"/>
<vzConsSubjLbl name="secureCons"/>
<vzConsLbl name="green"/>
</fvRsCons>
</fvAEPg>

The first part is nearly the same as the “web1” EPG. The major difference is that this EPG is a consumer of
the “webCtrct” and has the corresponding consumer labels and consumer subject labels. The syntax is nearly
the same except that “Prov” is replaced by “Cons” in the tags. There is no match attribute in the FvRsCons
element because the match type for matching the provider with consumer labels is specified in the provider.
The last EPG, “db”, is very similar to the “app” EPG in that it is purely a consumer.
While in this example, the EPGs were either consumers or producers of a single contract, it is typical for an
EPG to be at once a producer of multiple contracts and a consumer of multiple contracts.

Closing
</fvAp>
</fvTenant>
</polUni>
The final few lines complete the policy.

What the Example Tenant Policy Does


The following figure shows how contracts govern endpoint group (EPG) communications.

Figure 90: Labels and Contract Determine EPG to EPG Communications

The four EPGs are named EPG:web1, EPG:web2, EPG:app, and EPG:db. EPG:web1 and EPG:web2 provide
a contract called webCtrct. EPG:app and EPG:db consume that same contract.
EPG:web1 can only communicate with EPG:app and EPG:web2 can only communicate with EPG:db. This
interaction is controlled through the provider and consumer labels “green” and “red”.
When EPG:web1 communicates with EPG:app, they use the webCtrct contract. EPG:app can initiate connections
to EPG:web1 because it consumes the contract that EPG:web1 provides.
The subjects that EPG:web1 and EPG:app can use to communicate are both http and https because EPG:web1
has the provider subject label “openProv” and the http subject also has it. EPG:web1 has the provider subject
label “secureProv” as does the subject https. In a similar fashion, EPG:app has subject labels “openCons” and
“secureCons” that subjects http and https have.

When EPG:web2 communicates with EPG:db, they can only use the https subject because only the https
subject carries the provider and consumer subject labels. EPG:db can initiate the TCP connection to EPG:web2
because EPG:db consumes the contract provided by EPG:web2.

Figure 91: Bridge Domain, Subnets, and Layer 3 Context

The example policy specifies the relationship between EPGs, application profiles, bridge domains, and Layer
3 contexts in the following manner: the EPGs EPG:web1, EPG:web2, EPG:app, and EPG:db are all members
of the application profile called “sap”.
These EPGs are also linked to the bridge domain “solarBD1”. solarBD1 has two subnets, 11.22.22.XX/24 and 11.22.23.XX/24. The endpoints in the four EPGs must be within these two subnet ranges. The default gateway IP addresses in those two subnets are 11.22.22.20 and 11.22.23.211. The solarBD1 bridge domain is linked to the “solarctx1” Layer 3 context.
All these policy details are contained within a tenant called “solar”.
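A condensed sketch of that containment, using only the names and addresses given above (most attributes omitted; the fvSubnet ip is the gateway address, per the ACI convention):

<fvTenant name="solar">
    <fvCtx name="solarctx1"/>
    <fvBD name="solarBD1">
        <fvRsCtx tnFvCtxName="solarctx1"/>
        <fvSubnet ip="11.22.22.20/24"/>
        <fvSubnet ip="11.22.23.211/24"/>
    </fvBD>
    <fvAp name="sap">
        <!-- EPGs web1, web2, app, and db, each with an fvRsBd to solarBD1 -->
    </fvAp>
</fvTenant>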

Cisco Application Centric Infrastructure Fundamentals


137
Tenant Policy Example
What the Example Tenant Policy Does

Cisco Application Centric Infrastructure Fundamentals


138
APPENDIX B
Label Matching
This chapter contains the following sections:

• Label Matching

Label Matching
Label matching determines which subjects of a contract apply to a given provider or consumer of that contract, and which providers and consumers can communicate with each other.
The match type, or algorithm, is determined by the matchT attribute, which can take the following values:
• All
• AtLeastOne (default)
• None
• ExactlyOne

When checking for a match between provider labels (vzProvLbl) and consumer labels (vzConsLbl), the matchT is determined by the provider EPG.
When checking for a match between the provider or consumer subject labels (vzProvSubjLbl, vzConsSubjLbl) in the EPGs and those in a subject, the matchT is determined by the subject.
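For example, the label match type is carried as the matchT attribute on the provider side of the contract relation. A fragment consistent with the “web1” EPG in the tenant policy example could look like this:

<fvRsProv tnVzBrCPName="webCtrct" matchT="All">
    <vzProvSubjLbl name="openProv"/>
    <vzProvSubjLbl name="secureProv"/>
    <vzProvLbl name="green"/>
</fvRsProv>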
The following table shows simple examples of all the EPG provider and consumer match types and their
results. In this table, a [ ] entry indicates no labels.

matchT       vzProvLbl         vzConsLbl         Result
-----------  ----------------  ----------------  ---------
All          LabelX, LabelY    LabelX, LabelY    match
All          LabelX, LabelY    LabelX, LabelZ    no match
All          LabelX, LabelY    LabelX            no match
All          LabelX            LabelX, LabelY    match
All          [ ]               LabelX            no match
All          LabelX            [ ]               no match
All          [ ]               [ ]               no match
AtLeastOne   LabelX, LabelY    LabelX            match
AtLeastOne   LabelX, LabelY    LabelZ            no match
AtLeastOne   LabelX            [ ]               no match
AtLeastOne   [ ]               LabelX            no match
AtLeastOne   [ ]               [ ]               match
None         LabelX            LabelY            match
None         LabelX            LabelX            no match
None         LabelX, LabelY    LabelY            no match
None         LabelX            LabelX, LabelY    no match
None         [ ]               LabelX            match
None         LabelX            [ ]               match
None         [ ]               [ ]               match
ExactlyOne   LabelX            LabelX            match
ExactlyOne   LabelX, LabelY    LabelX, LabelY    no match
ExactlyOne   LabelX, LabelZ    LabelX, LabelY    match
ExactlyOne   LabelX            LabelY            no match
ExactlyOne   [ ]               LabelX            no match
ExactlyOne   LabelX            [ ]               no match
ExactlyOne   [ ]               [ ]               match

The same logic applies to subject labels as well: the subject labels in the contract take the place of the second column and the EPG subject labels take the place of the third column.

APPENDIX C
Access Policy Examples
This chapter contains the following sections:

• Single Port Channel Configuration Applied to Multiple Switches
• Two Port Channel Configurations Applied to Multiple Switches
• Single Virtual Port Channel Across Two Switches
• One Virtual Port Channel on Selected Port Blocks of Two Switches
• Setting the Interface Speed

Single Port Channel Configuration Applied to Multiple Switches


This sample XML policy creates one port channel on each of leaf switches 17, 18, and 20. On each leaf switch, the same interfaces are part of the port channel (interfaces 1/10 to 1/15 and 1/20 to 1/25). All three port channels have the same configuration.
<infraInfra dn="uni/infra">

<infraNodeP name=”test">
<infraLeafS name="leafs" type="range">
<infraNodeBlk name="nblk” from_=”17" to_=”18”/>
<infraNodeBlk name="nblk” from_=”20" to_=”20”/>
</infraLeafS>
<infraRsAccPortP tDn="uni/infra/accportprof-test"/>
</infraNodeP>

<infraAccPortP name="test">
<infraHPortS name="pselc" type="range">
<infraPortBlk name="blk1” fromCard="1" toCard="1" fromPort="10" toPort=”15”/>
<infraPortBlk name="blk2" fromCard="1" toCard="1" fromPort=”20" toPort=”25”/>
<infraRsAccBaseGrp tDn="uni/infra/funcprof/accbundle-bndlgrp" />
</infraHPortS>
</infraAccPortP>

<infraFuncP>
<infraAccBndlGrp name="bndlgrp" lagT="link">
<infraRsHIfPol tnFabricHIfPolName=“default"/>
<infraRsCdpIfPol tnCdpIfPolName=”default”/>
<infraRsLacpPol tnLacpLagPolName=”default"/>
</infraAccBndlGrp>
</infraFuncP>

</infraInfra>

Two Port Channel Configurations Applied to Multiple Switches


This sample XML policy creates two port channels on each of leaf switches 17, 18, and 20. On each leaf switch, interfaces 1/10 to 1/15 belong to the first port channel and interfaces 1/20 to 1/25 belong to the second. The policy uses two switch blocks because a switch block can contain only one group of consecutive switch IDs. All these port channels have the same configuration.

Note Even though the port channel configurations are the same, this example uses two different interface policy groups. Each interface policy group represents a port channel on a switch. All interfaces associated with a given interface policy group are part of the same port channel.

<infraInfra dn="uni/infra">

<infraNodeP name=”test">
<infraLeafS name="leafs" type="range">
<infraNodeBlk name="nblk” from_=”17" to_=”18”/>
<infraNodeBlk name="nblk” from_=”20" to_=”20”/>
</infraLeafS>
<infraRsAccPortP tDn="uni/infra/accportprof-test1"/>
<infraRsAccPortP tDn="uni/infra/accportprof-test2"/>
</infraNodeP>

<infraAccPortP name="test1">
<infraHPortS name="pselc" type="range">
<infraPortBlk name="blk1” fromCard="1" toCard="1" fromPort="10" toPort=”15”/>
<infraRsAccBaseGrp tDn="uni/infra/funcprof/accbundle-bndlgrp1" />
</infraHPortS>
</infraAccPortP>

<infraAccPortP name="test2">
<infraHPortS name="pselc" type="range">
<infraPortBlk name="blk1” fromCard="1" toCard="1" fromPort=“20" toPort=”25”/>
<infraRsAccBaseGrp tDn="uni/infra/funcprof/accbundle-bndlgrp2" />
</infraHPortS>
</infraAccPortP>

<infraFuncP>
<infraAccBndlGrp name="bndlgrp1" lagT="link">
<infraRsHIfPol tnFabricHIfPolName=“default"/>
<infraRsCdpIfPol tnCdpIfPolName=”default”/>
<infraRsLacpPol tnLacpLagPolName=”default"/>
</infraAccBndlGrp>

<infraAccBndlGrp name="bndlgrp2" lagT="link">


<infraRsHIfPol tnFabricHIfPolName=“default"/>
<infraRsCdpIfPol tnCdpIfPolName=”default”/>
<infraRsLacpPol tnLacpLagPolName=”default"/>
</infraAccBndlGrp>
</infraFuncP>

</infraInfra>

Single Virtual Port Channel Across Two Switches


The two steps for creating a virtual port channel across two switches are as follows:

Cisco Application Centric Infrastructure Fundamentals


142
Access Policy Examples
One Virtual Port Channel on Selected Port Blocks of Two Switches

• Create a fabricExplicitGEp: this policy specifies the pair of leaf switches that forms the virtual port channel.
• Use the infra selector to specify the interface configuration.

The APIC performs several validations of the fabricExplicitGEp, and faults are raised when any of these validations fail:
• A leaf switch can be paired with only one other leaf switch; the APIC rejects any configuration that breaks this rule.
• When creating a fabricExplicitGEp, an administrator must provide the IDs of both leaf switches to be paired; the APIC rejects any configuration that does not.
• Both switches must be up when the fabricExplicitGEp is created. If one switch is not up, the APIC accepts the configuration but raises a fault.
• Both switches must be leaf switches. If one or both switch IDs correspond to a spine switch, the APIC accepts the configuration but raises a fault.
<fabricProtPol pairT="explicit">
<fabricExplicitGEp name="tG" id="2">
<fabricNodePEp id=”18”/>
<fabricNodePEp id=”25"/>
</fabricExplicitGEp>
</fabricProtPol>

One Virtual Port Channel on Selected Port Blocks of Two Switches
This policy creates a single virtual port channel on leaf switches 18 and 25, using interfaces 1/10 to 1/15 on
leaf 18, and interfaces 1/20 to 1/25 on leaf 25.
<infraInfra dn="uni/infra">

<infraNodeP name=”test1">
<infraLeafS name="leafs" type="range">
<infraNodeBlk name="nblk” from_=”18" to_=”18”/>
</infraLeafS>
<infraRsAccPortP tDn="uni/infra/accportprof-test1"/>
</infraNodeP>

<infraNodeP name=”test2">
<infraLeafS name="leafs" type="range">
<infraNodeBlk name="nblk” from_=”25" to_=”25”/>
</infraLeafS>
<infraRsAccPortP tDn="uni/infra/accportprof-test2"/>
</infraNodeP>

<infraAccPortP name="test1">
<infraHPortS name="pselc" type="range">
<infraPortBlk name="blk1” fromCard="1" toCard="1" fromPort="10" toPort=”15”/>
<infraRsAccBaseGrp tDn="uni/infra/funcprof/accbundle-bndlgrp" />
</infraHPortS>
</infraAccPortP>

<infraAccPortP name="test2">
<infraHPortS name="pselc" type="range">
<infraPortBlk name="blk1” fromCard="1" toCard="1" fromPort=“20" toPort=”25”/>
<infraRsAccBaseGrp tDn="uni/infra/funcprof/accbundle-bndlgrp" />
</infraHPortS>
</infraAccPortP>

<infraFuncP>
<infraAccBndlGrp name="bndlgrp" lagT=”node">
<infraRsHIfPol tnFabricHIfPolName=“default"/>
<infraRsCdpIfPol tnCdpIfPolName=”default”/>
<infraRsLacpPol tnLacpLagPolName=”default"/>
</infraAccBndlGrp>

</infraFuncP>

</infraInfra>

Setting the Interface Speed


This policy sets the port speed for a set of interfaces. The speed itself is defined in the interface policy (fabricHIfPol) that the interface policy group references; the 1G value below is shown as an example.
<infraInfra dn="uni/infra">

<infraNodeP name=”test1">
<infraLeafS name="leafs" type="range">
<infraNodeBlk name="nblk” from_=”18" to_=”18”/>
</infraLeafS>
<infraRsAccPortP tDn="uni/infra/accportprof-test1"/>
</infraNodeP>

<infraNodeP name=”test2">
<infraLeafS name="leafs" type="range">
<infraNodeBlk name="nblk” from_=”25" to_=”25”/>
</infraLeafS>
<infraRsAccPortP tDn="uni/infra/accportprof-test2"/>
</infraNodeP>

<infraAccPortP name="test1">
<infraHPortS name="pselc" type="range">
<infraPortBlk name="blk1” fromCard="1" toCard="1" fromPort="10" toPort=”15”/>
<infraRsAccBaseGrp tDn="uni/infra/funcprof/accbundle-bndlgrp" />
</infraHPortS>
</infraAccPortP>

<infraAccPortP name="test2">
<infraHPortS name="pselc" type="range">
<infraPortBlk name="blk1” fromCard="1" toCard="1" fromPort=“20" toPort=”25”/>
<infraRsAccBaseGrp tDn="uni/infra/funcprof/accbundle-bndlgrp" />
</infraHPortS>
</infraAccPortP>

<infraFuncP>
<infraAccBndlGrp name="bndlgrp" lagT=”node">
<infraRsHIfPol tnFabricHIfPolName=“default"/>
<infraRsCdpIfPol tnCdpIfPolName=”default”/>
<infraRsLacpPol tnLacpLagPolName=”default"/>
</infraAccBndlGrp>
</infraFuncP>

</infraInfra>

APPENDIX D
Tenant Layer 3 External Network Policy Example
This chapter contains the following sections:

• Tenant External Network Policy Example

Tenant External Network Policy Example


The following XML code is an example of a Tenant Layer 3 external network policy.

<polUni>

<fvTenant name='t0'>
<fvCtx name="o1">
<fvRsOspfCtxPol tnOspfCtxPolName="ospfCtxPol"/>
</fvCtx>
<fvCtx name="o2">
</fvCtx>

<fvBD name="bd1">
<fvRsBDToOut tnL3extOutName='T0-o1-L3OUT-1'/>
<fvSubnet ip='10.16.1.1/24' scope='public'/>
<fvRsCtx tnFvCtxName="o1"/>
</fvBD>

<fvAp name="AP1">
<fvAEPg name="bd1-epg1">
<fvRsCons tnVzBrCPName="vzBrCP-1">
</fvRsCons>
<fvRsProv tnVzBrCPName="vzBrCP-1">
</fvRsProv>
<fvSubnet ip='10.16.2.1/24' scope='private'/>
<fvSubnet ip='10.16.3.1/24' scope='private'/>
<fvRsBd tnFvBDName="bd1"/>
<fvRsDomAtt tDn="uni/phys-physDomP"/>
<fvRsPathAtt tDn="topology/pod-1/paths-101/pathep-[eth1/40]" encap='vlan-100'
mode='regular' instrImedcy='immediate' />
</fvAEPg>

<fvAEPg name="bd1-epg2">
<fvRsCons tnVzBrCPName="vzBrCP-1">
</fvRsCons>
<fvRsProv tnVzBrCPName="vzBrCP-1">
</fvRsProv>
<fvSubnet ip='10.16.4.1/24' scope='private'/>
<fvSubnet ip='10.16.5.1/24' scope='private'/>
<fvRsBd tnFvBDName="bd1"/>
<fvRsDomAtt tDn="uni/phys-physDomP"/>

<fvRsPathAtt tDn="topology/pod-1/paths-101/pathep-[eth1/41]" encap='vlan-200'


mode='regular' instrImedcy='immediate'/>
</fvAEPg>
</fvAp>

<l3extOut name="T0-o1-L3OUT-1">

<l3extRsEctx tnFvCtxName="o1"/>
<ospfExtP areaId='60'/>
<l3extInstP name="l3extInstP-1">
<fvRsCons tnVzBrCPName="vzBrCP-1">
</fvRsCons>
<fvRsProv tnVzBrCPName="vzBrCP-1">
</fvRsProv>
<l3extSubnet ip="192.5.1.0/24" />
<l3extSubnet ip="192.5.2.0/24" />
<l3extSubnet ip="192.6.0.0/16" />
<l3extSubnet ip="199.0.0.0/8" />
</l3extInstP>

<l3extLNodeP name="l3extLNodeP-1">
<l3extRsNodeL3OutAtt tDn="topology/pod-1/node-101" rtrId="10.17.1.1">
<ipRouteP ip="10.16.101.1/32">
<ipNexthopP nhAddr="10.17.1.99"/>
</ipRouteP>
<ipRouteP ip="10.16.102.1/32">
<ipNexthopP nhAddr="10.17.1.99"/>
</ipRouteP>
<ipRouteP ip="10.17.1.3/32">
<ipNexthopP nhAddr="10.11.2.2"/>
</ipRouteP>
</l3extRsNodeL3OutAtt>

<l3extLIfP name='l3extLIfP-1'>
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-101/pathep-[eth1/25]"
encap='vlan-1001' ifInstT='sub-interface' addr="10.11.2.1/24" mtu="1500"/>
<ospfIfP>
<ospfRsIfPol tnOspfIfPolName='ospfIfPol'/>
</ospfIfP>
</l3extLIfP>
</l3extLNodeP>
</l3extOut>

<ospfIfPol name="ospfIfPol" />


<ospfCtxPol name="ospfCtxPol" />

<vzFilter name="vzFilter-in-1">
<vzEntry name="vzEntry-in-1"/>
</vzFilter>
<vzFilter name="vzFilter-out-1">
<vzEntry name="vzEntry-out-1"/>
</vzFilter>

<vzBrCP name="vzBrCP-1">
<vzSubj name="vzSubj-1">
<vzInTerm>
<vzRsFiltAtt tnVzFilterName="vzFilter-in-1"/>
</vzInTerm>
<vzOutTerm>
<vzRsFiltAtt tnVzFilterName="vzFilter-out-1"/>
</vzOutTerm>
</vzSubj>
</vzBrCP>
</fvTenant>
</polUni>

APPENDIX E
DHCP Relay Policy Examples
This chapter contains the following sections:

• Layer 2 and Layer 3 DHCP Relay Sample Policies

Layer 2 and Layer 3 DHCP Relay Sample Policies


This sample policy provides an example of a consumer tenant L3extOut DHCP Relay configuration.
<polUni>
<!-- Consumer Tenant 2 -->
<fvTenant
dn="uni/tn-tenant1"
name="tenant1">
<fvCtx name="dhcp"/>

<!-- DHCP client bridge domain -->


<fvBD name="cons2">
<fvRsBDToOut tnL3extOutName='L3OUT'/>
<fvRsCtx tnFvCtxName="dhcp" />
<fvSubnet ip="20.20.20.1/24"/>
<dhcpLbl name="DhcpRelayP" owner="tenant"/>
</fvBD>
<!-- L3Out EPG DHCP -->
<l3extOut name="L3OUT">
<l3extRsEctx tnFvCtxName="dhcp"/>
<l3extInstP name="l3extInstP-1">
<!-- Allowed routes to L3out to send traffic -->
<l3extSubnet ip="100.100.100.0/24" />
</l3extInstP>
<l3extLNodeP name="l3extLNodeP-pc">
<!-- VRF External loopback interface on node -->
<l3extRsNodeL3OutAtt tDn="topology/pod-1/node-1018" rtrId="10.10.10.1" />

<l3extLIfP name='l3extLIfP-pc'>
<l3extRsPathL3OutAtt tDn="topology/pod-1/paths-1018/pathep-[eth1/7]"
encap='vlan-900' ifInstT='sub-interface' addr="100.100.100.50/24" mtu="1500"/>
</l3extLIfP>
</l3extLNodeP>
</l3extOut>
<!-- Static DHCP Client Configuration -->
<fvAp name="cons2">
<fvAEPg name="APP">
<fvRsBd tnFvBDName="cons2" />
<fvRsDomAtt tDn="uni/phys-mininet" />
<fvRsPathAtt tDn="topology/pod-1/paths-1017/pathep-[eth1/3]" encap="vlan-1000"
instrImedcy='immediate' mode='native'/>

</fvAEPg>
</fvAp>
<!-- DHCP Server Configuration -->
<dhcpRelayP name="DhcpRelayP" owner="tenant" mode="visible">
<dhcpRsProv tDn="uni/tn-tenant1/out-L3OUT/instP-l3extInstP-1" addr="100.100.100.1"/>

</dhcpRelayP>
</fvTenant>
</polUni>

This sample policy provides an example of a consumer tenant L2extOut DHCP Relay configuration.
<fvTenant
dn="uni/tn-dhcpl2Out"
name="dhcpl2Out">
<fvCtx name="dhcpl2Out"/>
<!-- bridge domain -->

<fvBD name="provBD">
<fvRsCtx tnFvCtxName="dhcpl2Out" />
<fvSubnet ip="100.100.100.50/24" scope="shared"/>
</fvBD>

<!-- Consumer bridge domain -->


<fvBD name="cons2">
<fvRsCtx tnFvCtxName="dhcpl2Out" />
<fvSubnet ip="20.20.20.1/24"/>
<dhcpLbl name="DhcpRelayP" owner="tenant"/>
</fvBD>

<vzFilter name='t0f0' >


<vzEntry name='t0f0e9'></vzEntry>
</vzFilter>

<vzBrCP name="webCtrct" scope="global">


<vzSubj name="app">
<vzRsSubjFiltAtt tnVzFilterName="t0f0"/>
</vzSubj>
</vzBrCP>

<l2extOut name="l2Out">
<l2extLNodeP name='l2ext'>
<l2extLIfP name='l2LifP'>
<l2extRsPathL2OutAtt tDn="topology/pod-1/paths-1018/pathep-[eth1/7]"/>
</l2extLIfP>
</l2extLNodeP>
<l2extInstP name='l2inst'>
<fvRsProv tnVzBrCPName="webCtrct"/>
</l2extInstP>
<l2extRsEBd tnFvBDName="provBD" encap='vlan-900'/>
</l2extOut>

<fvAp name="cons2">
<fvAEPg name="APP">
<fvRsBd tnFvBDName="cons2" />
<fvRsDomAtt tDn="uni/phys-mininet" />
<fvRsBd tnFvBDName="SolarBD2" />
<fvRsPathAtt tDn="topology/pod-1/paths-1018/pathep-[eth1/48]"
encap="vlan-1000" instrImedcy='immediate' mode='native'/>
</fvAEPg>
</fvAp>
<dhcpRelayP name="DhcpRelayP" owner="tenant" mode="visible">
<dhcpRsProv tDn="uni/tn-dhcpl2Out/l2out-l2Out/instP-l2inst" addr="100.100.100.1"/>
</dhcpRelayP>
</fvTenant>

APPENDIX F
DNS Policy Example
This chapter contains the following sections:

• DNS Policy Example

DNS Policy Example


Sample XML for dnsProfile:
<!-- /api/policymgr/mo/.xml -->
<polUni>
<fabricInst>
<dnsProfile name="default">
<dnsProv addr="172.21.157.5" preferred="yes"/>
<dnsDomain name="insieme.local" isDefault="yes"/>
<dnsRsProfileToEpg tDn="uni/tn-mgmt/mgmtp-default/oob-default"/>
</dnsProfile>
</fabricInst>
</polUni>

Sample XML for a DNS label:


<!-- /api/policymgr/mo/.xml -->
<polUni>
<fvTenant name='t1'>
<fvCtx name='ctx0'>
<dnsLbl name='default'/>
</fvCtx>
</fvTenant>
</polUni>

APPENDIX G
List of Terms
This chapter contains the following sections:

• List of Terms

List of Terms
Application Centric Infrastructure (ACI)—ACI is a holistic data center architecture with centralized
automation and policy-driven application profiles.
Application Policy Infrastructure Controller (APIC)—The key ACI architectural component that manages a scalable multitenant fabric. The APIC is a replicated, synchronized, clustered controller that provides management, policy programming, application deployment, and health monitoring for the multitenant fabric.
consumer—An endpoint group (EPG) that consumes a service.
context—Defines a Layer 3 address domain.
contract—The rules that specify what and how communication between EPGs takes place.
distinguished name (DN)—A unique name that describes an MO and locates its place in the MIT.
endpoint group (EPG)—An MO that is a named logical entity which contains a collection of endpoints.
Endpoints are devices connected to the network directly or indirectly. They have an address (identity), a
location, attributes (such as version or patch level), and can be physical or virtual. Endpoint examples include
servers, virtual machines, storage, or clients on the Internet.
filter—A rule based on TCP/IP header fields, such as the Layer 3 protocol type or Layer 4 ports, that is used in a contract to define inbound or outbound communications between EPGs.
label—A managed object with only one property, a name. Labels enable classifying which objects can and cannot communicate with one another.
managed object (MO)—An abstraction of the fabric resources.
management information tree (MIT)—A hierarchical management information tree that contains all the
MOs of the fabric.
outside network—An MO that defines connectivity to a network outside the fabric.

policy—Named entity that contains generic specifications for controlling some aspect of system behavior.
For example, a Layer 3 outside network policy would contain the BGP protocol to enable BGP routing
functions when connecting the fabric to an outside Layer 3 network.
profile—Named entity that contains the necessary configuration details for implementing one or more instances
of a policy. For example, a switch node profile for a routing policy would contain all the switch specific
configuration details needed to implement BGP.
provider—An EPG that provides a service.
subject—MOs contained in a contract that specify what information can be communicated and how.
target DN (tDn)—An explicit reference that defines a relationship between a source MO and a specific
instance of a target MO. The target instance is identified by a target DN (tDn) property that is explicitly set
in the relationship source (Rs) MO.
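For example, the following relation from the policy examples in the preceding appendices is a relationship source MO whose tDn property explicitly identifies a specific VMM domain instance as the target:

<fvRsDomAtt tDn="uni/vmmp-VMware/dom-mininet" />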
tenant—A tenant is a logical container for application policies that enable an administrator to exercise
domain-based access control. A tenant represents a unit of isolation from a policy perspective, but it does not
represent a private network. Tenants can represent a customer in a service provider setting, an organization
or domain in an enterprise setting, or just a convenient grouping of policies. The primary elements that the
tenant contains are filters, contracts, outsides, bridge domains, and application profiles that contain EPGs.
