Cisco ACI Interview
Answer: We have the Cisco Nexus 9000 Series. Within it, we mainly have the modular Nexus 9500 and the non-modular (fixed) Nexus 9300 switches. In my course, I used the 9500 as the spine and the 9300 as the leaf switches.
Answer: Nexus 9K switches can run in one of two modes: NX-OS mode or ACI mode. These modes are mutually exclusive, meaning you cannot run both modes at the same time on a switch. If you change the mode, the complete configuration is deleted.
Answer: This architecture was designed by Charles Clos. In today's IT world, applications are increasingly deployed in a distributed fashion, which leads to increased east-west traffic. Traditional three-tier data centers are unable to meet the resulting high-bandwidth and low-latency demands. The leaf-spine topology is a two-layer data center network design, composed of leaf switches (to which servers and storage connect) and spine switches (to which leaf switches connect), that addresses these challenges and suits data centers with more east-west than north-south traffic. In this two-tier Clos architecture, every lower-tier switch (leaf layer) is connected to each of the top-tier switches (spine layer) in a full-mesh topology. The leaf layer consists of access switches that connect to devices such as servers. The spine layer is the backbone of the network and is responsible for interconnecting all leaf switches; every leaf switch connects to every spine switch in the fabric. The path is randomly chosen so that the traffic load is evenly distributed among the top-tier switches. If one of the top-tier switches were to fail, it would only slightly degrade performance throughout the data center.
If oversubscription of a link occurs (that is, if more traffic is generated than can be
aggregated on the active link at one time), the process for expanding capacity is
straightforward. An additional spine switch can be added, and uplinks can be extended to
every leaf switch, resulting in the addition of interlayer bandwidth and reduction of the
oversubscription. If device port capacity becomes a concern, a new leaf switch can be added
by connecting it to every spine switch and adding the network configuration to the switch.
The ease of expansion optimizes the IT department’s process of scaling the network. If no
oversubscription occurs between the lower-tier switches and their uplinks, then a nonblocking
architecture can be achieved.
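To make the oversubscription idea concrete, here is a small worked example in Python; the port counts and speeds are hypothetical, chosen only to show the arithmetic.

# Hypothetical leaf switch: 48 x 10G host-facing ports and 6 x 40G spine-facing uplinks.
downlink_gbps = 48 * 10   # 480 Gbps of host-facing capacity
uplink_gbps = 6 * 40      # 240 Gbps of spine-facing capacity

ratio = downlink_gbps / uplink_gbps
print(f"Oversubscription ratio: {ratio:.1f}:1")   # 2.0:1

# Adding two more 40G uplinks (for example after adding a spine) reduces oversubscription.
uplink_gbps += 2 * 40
print(f"New oversubscription ratio: {downlink_gbps / uplink_gbps:.1f}:1")   # 1.5:1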
Answer: We can only connect Leaf switches to Spine Switches and vice versa.
5. In ACI mode of operation, can we connect Spine with another Spine switch?
Answer: No, connection will only work between Spine and Leaf. No Spine to Spine
connectivity can be established.
Answer: No, only connectivity from Leaf to Spine is possible. No leaf to leaf or spine to
spine connection is possible.
Answer: APIC stands for Cisco Application Policy Infrastructure Controller. Cisco APIC is the
main architectural component of the Cisco ACI solution. It is the unified point of automation
and management for the Cisco ACI fabric, policy enforcement, and health monitoring in both
physical and virtual environments.
The controller optimizes performance and manages and operates a scalable multitenant Cisco
ACI fabric. ACI Fabric is managed from APIC controller only, however, we also have an
option to login into individual switches for troubleshooting and verification purposes.
8. In ACI, how many APIC controllers can exist?
Answer: You may choose to have only one APIC controller; however, Cisco recommends a minimum of three APIC controllers, scaling in odd numbers (3, 5, 7).
9. In ACI mode deployment (Layer2/Layer3 fabric), how many Spine, Leaf Switches and
FEX can be deployed?
Answer: In an L2 fabric, we can use up to 80 leaf switches, 24 spine switches per fabric (6 spines per pod), and 650 FEX per fabric (20 FEX per leaf switch), and 1,000 tenants can be created.
In a large L3 fabric, we can use up to 200 leaf switches, 24 spine switches per fabric (6 spines per pod), and 650 FEX per fabric (20 FEX per leaf switch), and 3,000 tenants can be created.
https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/4-x/verified-scalability/Cisco-ACI-Verified-Scalability-Guide-422.html
10. What are the benefits of Nexus ACI compared to a traditional network solution/architecture?
· From an operations standpoint, ACI allows network teams to simplify management and operations across the network by providing a common place to manage and enforce policies.
· ACI’s template-based provisioning and automation improve network agility and provide real-time monitoring of the physical and virtual environment, enabling faster troubleshooting.
· Hypervisor compatibility and integration without the need to add software to the hypervisor.
· ACI is tailor-made for data centers requiring a multi-tenancy (virtualized) setup, with easy-to-configure steps in the GUI.
· Can run as a conventional switch NX-OS or in “ACI” mode and supports FEX.
· Enable seamless connectivity between on-premises and remote data centers and
geographically dispersed multiple data centers under a single pane of policy orchestration.
· Open APIs allow easy integration with 3rd-party devices like firewalls and ADCs.
· ACI centralizes policy-based management and enables the automation of repetitive tasks to save man-hours and reduce errors.
· It streamlines configuration management. ACI’s configurations are for the entire fabric. It
makes backing up and rolling back all the devices in the fabric a simple process.
o APIC Controller is the unified point of automation and management for the Cisco
ACI fabric, policy enforcement, and health monitoring in both physical and virtual
environments, allowing administrators/designers to build fully automated and multi-
tenant networks with scalability.
o The main function of Cisco APIC is to offer policy authority and resolution methods
for the Cisco ACI, as well as devices attached to Cisco ACI.
o The controller manages and operates a scalable multitenant Cisco ACI fabric.
o In ACI networks, network admins use the APIC to manage the network – they no
longer need to access the CLI on every node to configure or provision network
resources.
o We can do monitoring of Tenant, Application and health monitoring of fabric
devices.
o Cisco APIC includes a CLI and a GUI as central points of management for the entire Cisco ACI fabric.
o Cisco APIC also has completely open APIs so that users can use Representational State Transfer (REST)-based calls (through XML or JavaScript Object Notation [JSON]) to provision, manage, monitor, or troubleshoot the system.
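As a minimal sketch of how such a REST call could look in Python (the APIC address and credentials below are placeholders, and certificate verification is disabled only for this lab-style example):

import requests

APIC = "https://apic.example.com"   # hypothetical APIC management address

session = requests.Session()

# Authenticate against the APIC; the returned token is kept in the session cookie.
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
resp = session.post(f"{APIC}/api/aaaLogin.json", json=login, verify=False)
resp.raise_for_status()

# Read-only class query: list all tenants (class fvTenant) configured on the fabric.
tenants = session.get(f"{APIC}/api/class/fvTenant.json", verify=False).json()
for mo in tenants.get("imdata", []):
    print(mo["fvTenant"]["attributes"]["name"])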
Answer: The Cisco APIC controller does not sit in the data plane; therefore, it does not forward data-plane traffic. It works as the orchestrator of the ACI fabric.
Answer: If all the APIC controllers go down, there won't be any outage in data-plane traffic forwarding; however, we cannot make any changes to the fabric. We need to bring the APIC controllers back up to be able to create new policies or monitor/troubleshoot the ACI fabric.
Answer: All endpoints, including the APIC controllers, are connected to leaf switches only. If you have one server connected to two leaf switches, then you may form a vPC (Virtual Port Channel) at the leaf switches. Here, we do not have any vPC peer link between the leaf switches, because the ACI architecture does not allow links between leaf switches.
16. Once fabric is up, can endpoints (Like Servers, Firewalls, IDS, IPS, Bare metal
servers etc.) communicate to each other?
Answer: A Bridge Domain is a Layer 2 construct in the Cisco ACI fabric. It must be part of a VRF (Virtual Routing and Forwarding instance).
The bridge domain is like a container for subnets; it is used to define an L2 boundary, but it is not a VLAN. In fact it is a VXLAN segment, represented as a VNI (VXLAN Network Identifier).
The BD defines the unique Layer 2 MAC address space and a Layer 2 flood domain if such flooding is enabled. A single bridge domain can carry multiple subnets.
Inter-subnet communication within a bridge domain is enabled.
We can create multiple bridge domains inside a single VRF. We cannot link one BD to two different VRFs.
Bridge domains can be public, private, or shared. Public bridge domains are where
the subnet can be exported to a routed connection, whereas private ones apply only
within the tenancy. Shared bridge domains can be exported to multiple VRFs within
the same tenant, or across tenants when part of a shared service.
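To make the containment (tenant, VRF, bridge domain, subnets) concrete, below is a hedged sketch of the kind of JSON object the APIC REST API accepts; the tenant, VRF, and BD names and the subnets are placeholders, and the payload would be posted with a session like the one shown in the earlier login example.

# Hypothetical tenant "Tenant-A" containing VRF "VRF-1" and bridge domain "BD-Web"
# that is linked to the VRF and carries two subnets.
bd_config = {
    "fvTenant": {
        "attributes": {"name": "Tenant-A"},
        "children": [
            {"fvCtx": {"attributes": {"name": "VRF-1"}}},                   # the VRF (context)
            {"fvBD": {
                "attributes": {"name": "BD-Web"},
                "children": [
                    {"fvRsCtx": {"attributes": {"tnFvCtxName": "VRF-1"}}},  # BD-to-VRF link
                    {"fvSubnet": {"attributes": {"ip": "10.1.1.1/24"}}},
                    {"fvSubnet": {"attributes": {"ip": "10.1.2.1/24"}}},
                ],
            }},
        ],
    }
}

# session.post(f"{APIC}/api/mo/uni/tn-Tenant-A.json", json=bd_config, verify=False)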
Answer: A VLAN represents a single network, whereas a BD can carry multiple subnets. A bridge domain is represented by a VNI, i.e., a VXLAN Network Identifier. Behind the scenes, this VNI is mapped to an internal VLAN.
19. What do you mean by Endpoint, End Point Group (EPG)?
Answer: Endpoints are the devices that are connected to the network directly or
indirectly. They have an address, a location, attributes (like version or patch level)
and can be virtual or physical e.g. Bare-metal server, Switch, Router, Firewall, IDS, IPS
etc.
Tenants allow re-use of IP address space, i.e., multiple tenants can have the same IP addressing schemes.
Cisco ACI tenants can contain multiple private networks (VRF instances). One user-created tenant can't talk to another tenant.
Tenant contains VRFs, BDs, Subnets, Application Profiles, EPGs, Subjects, Filters,
Contracts.
The Common Tenant is preconfigured for defining policies that provide common behavior for all the tenants in the fabric. The policies defined within the Common Tenant can be used by all tenants, if needed.
22. What is Infrastructure Tenant?
Answer: The Infrastructure Tenant is used for internal fabric communication. This tenant is not exposed to user tenants. Fabric discovery, image management, and DHCP for fabric functions are all handled within this tenant.
Answer: The Management (mgmt) Tenant is used for in-band and out-of-band management services and for configuration of host and fabric nodes (leaf, spine & controllers). It provides a convenient means to configure access policies for fabric nodes.
Answer: A VRF is a Virtual Routing and Forwarding instance, also known as a context, and is used for creating a separate routing table. IP address networks can be duplicated between VRFs. VRFs contain bridge domains.
Answer: In that case, the default policies will be applicable, e.g., the default CDP, LLDP, and MCP policies will be applied on the interfaces.
Answer: In ACI, each leaf switch, or pair of leaf switches (for vPC), needs to be identified or represented with a Switch Profile.
Thereafter, these switch profiles need to be associated with Interface Profiles.
The access policies govern the operation of the interfaces that provide external access to the fabric. Access policies are used for configuring the interfaces or ports on leaf switches which connect to servers, hosts, routers, firewalls, or other endpoint devices.
We can enable port channels, vPC, and protocols like LLDP, CDP, and LACP, as well as features like monitoring and diagnostics. Once an ACI access policy is set up, it can automate the configuration for the rest of the interfaces.
Answer: IS-IS (Intermediate System to Intermediate System), LLDP , DHCP & VXLAN
are pre-enabled in ACI Fabric.
A contract is best thought of as an extended, bidirectional access list. Contracts are the rules that govern the interaction of EPGs. Contracts determine how applications use the network.
Contracts are groups of subjects which define communication between source and destination EPGs.
Unlike ACLs, we won't find source and destination IP definitions here; this is determined by the endpoints' membership in a specific EPG object.
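As a rough sketch of how a contract is modeled (the names below are placeholders, and the exact attributes should be verified against the APIC object model), a contract with one subject referencing an HTTP filter might look like this:

# Hypothetical contract "web-contract" with a single subject that references a
# filter "http-filter" allowing TCP port 80.
contract_config = {
    "fvTenant": {
        "attributes": {"name": "Tenant-A"},
        "children": [
            {"vzFilter": {
                "attributes": {"name": "http-filter"},
                "children": [
                    {"vzEntry": {"attributes": {
                        "name": "http", "etherT": "ip", "prot": "tcp",
                        "dFromPort": "80", "dToPort": "80"}}},
                ],
            }},
            {"vzBrCP": {
                "attributes": {"name": "web-contract"},
                "children": [
                    {"vzSubj": {
                        "attributes": {"name": "web-subject"},
                        "children": [
                            {"vzRsSubjFiltAtt": {"attributes": {"tnVzFilterName": "http-filter"}}},
                        ],
                    }},
                ],
            }},
        ],
    }
}

An EPG would then provide or consume this contract to permit the corresponding traffic between EPGs.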
Answer: Taboo contracts are used to deny and log traffic related to regular contracts and are programmed into the hardware before the regular contract.
For example, if the objective was to allow traffic with source ports 100 through 900
with the exception of port 415, then the regular contract would allow all ports in the
range of 100 through 900 while the taboo contract would have a single entry denying
port 415.
The taboo contract denying port 415 would be programmed into the hardware before
the regular contract allowing ports 100 through 900.
33. Can I have same VRF Name in multiple Tenants?
Answer: Yes, we can have same VRF in multiple tenants. Each Tenant is different
logical unit, so we can have duplicate VRF names between Tenants.
34. Can we link one EPG Endpoint group to multiple Bridge Domains?
Answer: No, Single EPG can not be referenced to multiple Bridge Domains.
Answer: Application profiles (APs) are containers for the grouping of endpoint
groups (EPGs). Application profiles contain one or more EPGs. Modern applications
contain multiple components.
The application profile contains as many (or as few) EPGs as necessary that are logically related to providing the capabilities of an application. EPGs can be assigned to different bridge domains. Remember, one EPG can be assigned to only one BD.
Answer:
EPGs contain endpoints that have common policy requirements such as security,
virtual machine mobility (VMM), QoS, or Layer 4 to Layer 7 services. Rather than
configuring & managing endpoints individually, they are placed in an EPG and are
managed as a group.
Answer: No, policies can only be applied to EPGs. Rather than configuring &
managing endpoints individually, they are placed in an EPG and are managed as a
group. Therefore, policies are applied to EPGs.
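A hedged sketch of that hierarchy (an application profile containing EPGs, each EPG bound to one bridge domain and providing or consuming a contract). All names are placeholders that follow on from the earlier examples:

# Hypothetical application profile with two EPGs; "Web" provides the contract from
# the earlier example and "App" consumes it. Each EPG binds to exactly one BD.
app_profile = {
    "fvAp": {
        "attributes": {"name": "3Tier-App"},
        "children": [
            {"fvAEPg": {
                "attributes": {"name": "Web"},
                "children": [
                    {"fvRsBd": {"attributes": {"tnFvBDName": "BD-Web"}}},
                    {"fvRsProv": {"attributes": {"tnVzBrCPName": "web-contract"}}},
                ],
            }},
            {"fvAEPg": {
                "attributes": {"name": "App"},
                "children": [
                    {"fvRsBd": {"attributes": {"tnFvBDName": "BD-App"}}},
                    {"fvRsCons": {"attributes": {"tnVzBrCPName": "web-contract"}}},
                ],
            }},
        ],
    }
}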
Answer: Yes, we can always create more than one bridge domain in the same VRF; however, we cannot duplicate the subnets. A bridge domain is a Layer 2 construct within the fabric, used to define a flood domain, and is also represented by a VNI (VXLAN Network Identifier).
Answer: A VRF, or private network, in ACI is the same as a VRF in traditional networking. A VRF is also known as a context or virtual routing table. It contains L3 routing instances and IPs. VRFs are part of a tenant; networks inside a VRF must be unique, but subnets can be duplicated between VRFs.
A VRF can have a duplicate name if the VRFs are part of different tenants.
Answer: Using a Layer 3 Out (L3Out), ACI can extend its connectivity to external devices. These external devices may be external routers, firewalls, or Layer 3 switches and are connected to leaf switches (which are therefore known as border leaf switches). Border leaves use dynamic routing protocols (EIGRP, OSPF, BGP) or static routing to exchange external prefixes and networks. We create an external L3 EPG based on the prefixes we receive from the external network. A single EPG can also match all networks, i.e., 0.0.0.0/0.
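A simplified, hypothetical sketch of the external-EPG piece of an L3Out, where a single 0.0.0.0/0 subnet classifies all external prefixes into one EPG (the surrounding L3Out object with its node/interface profiles and routing protocol settings is omitted):

# Hypothetical external EPG; the 0.0.0.0/0 l3extSubnet matches every external prefix.
external_epg = {
    "l3extInstP": {
        "attributes": {"name": "All-External"},
        "children": [
            {"l3extSubnet": {"attributes": {"ip": "0.0.0.0/0"}}},
        ],
    }
}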
41. Which routing protocol runs for internal communication between ACI Spine and
Leaf?
Answer: Within the ACI fabric, we use Multiprotocol BGP (MP-BGP) between the leaf and spine switches to propagate external routes within the fabric. External prefixes are redistributed into BGP, and there is mutual redistribution between BGP and the dynamic routing protocol being used at the border leaf. We need to enable MP-BGP at the pod level by creating a pod policy, pod policy group, and pod policy profile. Only one AS is used in the ACI fabric; therefore, the leaf-spine relationship is iBGP.
42. In the ACI fabric, which node is configured as the BGP Route Reflector? Why is it required?
Answer: We use one AS within the ACI fabric, which means only iBGP peerings exist.
Since prefixes learned from one iBGP peer cannot be advertised to another iBGP peer, we need either a full mesh or a BGP route reflector.
The ACI fabric is a two-tier architecture and we can't have a full mesh, so we use BGP RR by making the spines route reflectors, and the leaf switches become BGP RR clients.
43. Which Cisco 9K models are used as Spine Nodes in ACI Setup?
44. Which Cisco 9K models are used as Leaf Nodes in ACI Setup?
Answer: Yes, we can connect network switches (Catalyst, Nexus, or other vendors' switches) as downlinks to ACI leaf switches. However, the management of these non-ACI-fabric switches remains separate; they cannot be bundled into the ACI fabric and controlled/managed via the APIC controllers.
46. I have trunk ports configured in one EPG. Can access ports also be added to the same EPG?
Answer: Yes, this can be configured. Within the same EPG (for example, App EPG-1), one port can be bound as a trunk (tagged) while another is bound as access (untagged).
48. What are the options available to establish local serial connection to the APIC
controllers for Initial Setup?
· Use a KVM cable to connect a keyboard and monitor to the KVM connector on the
front panel of the server.
· Connect a USB keyboard and VGA monitor to the corresponding connectors on the
rear panel of the server.
Note, we cannot use the front panel VGA and the rear panel VGA at the same time.
· LST (Local Station Table) - This table contains the addresses of all hosts attached directly to the leaf. When endpoints are discovered, this table is populated and is synchronized with the spine-proxy full GST. When a bridge domain is not configured for routing, the LST learns only MAC addresses; if the BD is enabled with the routing option, this table learns both the IP address and the MAC address of endpoints.
· GST (Global Station Table) - The GST contains the addresses of all hosts learned as remote endpoints through active conversations, cached locally. The table contains: local MAC and IP entries of endpoints; remote MAC entries (VRF, BD, MAC address) if there is an active conversation; and remote IP entries (VRF, IP address) if there is an active conversation.
50. What is the latest version of ACI Fabric in market?
Answer: The APIC cluster uses a large database technology called Sharding. This
technology provides scalability and reliability to the data sets generated and
processed by the APIC. The data for APIC configurations is partitioned into logically
bounded subsets called shards which are analogous to database shards. A shard is
a unit of data management, and the APIC manages shards in the following ways:
· Shards are evenly distributed across the appliances that comprise the APIC cluster.
One or more shards are located on each APIC appliance. The shard data
assignments are based on a predetermined hash function, and a static shard layout
determines the assignment of shards to appliances.
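As a toy illustration only (this is not the actual APIC algorithm), the idea of hashing data into a fixed set of shards and replicating each shard across controllers can be sketched like this:

import hashlib

NUM_SHARDS = 32                      # illustrative shard count, not the real APIC value
APICS = ["apic1", "apic2", "apic3"]  # a three-node APIC cluster
REPLICAS_PER_SHARD = 3               # each shard is replicated on multiple controllers

def shard_for(key: str) -> int:
    """Deterministically map a data key to a shard."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

def replicas_for(shard: int) -> list:
    """Static layout: a shard's replicas land on consecutive controllers."""
    return [APICS[(shard + i) % len(APICS)] for i in range(REPLICAS_PER_SHARD)]

key = "tenant:Tenant-A"
shard = shard_for(key)
print(f"{key} -> shard {shard}, replicas on {replicas_for(shard)}")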
Answer: ACI Multi-Pod represents the natural evolution of the original ACI stretched fabric design and allows separate ACI networks to be interconnected and centrally managed.
ACI Multi-Pod is part of the "Single APIC Cluster/Single Domain" family of solutions, as a single APIC cluster is deployed to manage all the different ACI fabrics that are interconnected.
Those separate ACI fabrics are named "Pods", and each of them looks like a regular two-tier spine-leaf fabric.
The same APIC cluster can manage several Pods, and to increase the resiliency of the solution, the various controller nodes that make up the cluster can be deployed across different Pods.
Microsegmentation with Cisco ACI provides support for virtual endpoints attached to
the following:
· Microsoft vSwitch
Endpoint groups (EPGs) are used to group virtual machines (VMs) within a tenant
and apply filtering and forwarding policies to them. Microsegmentation with Cisco
ACI adds the ability to associate EPGs with network or VM-based attributes, enabling
you to filter with those attributes and apply more dynamic policies.
Microsegmentation with Cisco ACI also allows you to apply policies to any endpoints
within the tenant.
VXLAN uses a 24-bit VNID for tagging traffic which allows for 16 million segments
as opposed to the 12-bit 802.1Q VLAN ID which only gives you 4096 segments.
User traffic is encapsulated into VXLAN at the edge, and the VXLAN overlay is used to provide Layer 2 adjacency where needed.
So, we can emulate Layer 2 connectivity while getting the scalability and flexibility of VXLAN.
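The difference in scale follows directly from the field widths mentioned above:

# Number of segments available with a 12-bit 802.1Q VLAN ID vs. a 24-bit VXLAN VNID.
vlan_segments = 2 ** 12    # 4,096
vxlan_segments = 2 ** 24   # 16,777,216 (roughly 16 million)

print(f"802.1Q VLAN IDs: {vlan_segments}")
print(f"VXLAN VNIDs    : {vxlan_segments}")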
All traffic within the ACI fabric is encapsulated with an extended VXLAN header and carried between VTEPs (VXLAN Tunnel Endpoints).
The ACI VXLAN packet contains both Layer 2 MAC address and Layer 3 IP address source and destination fields, which enables efficient and scalable forwarding within the fabric.
When traffic is received from a host at the leaf, frames are translated to VXLAN and transported across the fabric to the destination. The ACI fabric is able to completely normalize traffic coming from one leaf and send it to another (which can be the same leaf). When the frames exit the destination leaf, they are re-encapsulated into whatever format the destination network is asking for: untagged frames, 802.1Q trunk, VXLAN, or NVGRE.