Test Flexpod Overview
About Cisco Validated Designs
http://www.cisco.com/go/designzone.
THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION
OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO,
ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING
THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.
CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study, IronPort, the IronPort logo, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels, ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.

All other trademarks mentioned in this document or website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0809R)
Table of Contents
About Cisco Validated Designs
Table of Contents
Executive Summary
Solution Overview
    Introduction
    Audience
    Changes in FlexPod
    FlexPod Program Benefits
Technology Overview
    FlexPod System Overview
    FlexPod Design Principles
    FlexPod and Application Centric Infrastructure
    Cisco ACI Fabric
    FlexPod with Cisco ACI—Components
    Validated System Hardware Components
        Cisco Unified Computing System
        Cisco Nexus 2232PP 10GE Fabric Extender
        Cisco Nexus 9000 Series Switch
        NetApp FAS and Data ONTAP
        NetApp All Flash FAS
        NetApp Clustered Data ONTAP
        NetApp Storage Virtual Machines
        VMware vSphere
    Domain and Element Management
        Cisco Unified Computing System Manager
        Cisco Application Policy Infrastructure Controller (APIC)
        VMware vCenter Server
        NetApp OnCommand System and Unified Manager
        NetApp Virtual Storage Console
        NetApp OnCommand Performance Manager
        NetApp SnapManager and SnapDrive
Solution Design
    Hardware and Software Revisions
    FlexPod Infrastructure Physical Building Blocks
        Physical Topology
        Cisco Unified Computing System
        NetApp Storage Design
        Cisco Nexus 9000
Executive Summary
Cisco® Validated Designs include systems and solutions that are designed, tested, and documented to facilitate
and improve customer deployments. These designs incorporate a wide range of technologies and products into a
portfolio of solutions that have been developed to address the business needs of customers.
This document describes the Cisco and NetApp® FlexPod Datacenter with NetApp All Flash FAS (AFF), Cisco
Application Centric Infrastructure (ACI), and VMware vSphere 5.5 Update 2. Cisco ACI is a holistic architecture
that introduces hardware and software innovations built upon the new Cisco Nexus 9000® Series product line.
Cisco ACI provides a centralized policy-driven application deployment architecture that is managed through the
Cisco Application Policy Infrastructure Controller (APIC). Cisco ACI delivers software flexibility with the scalability
of hardware performance.
FlexPod Datacenter with NetApp AFF and Cisco ACI is a predesigned, best-practice data center architecture built
on the Cisco Unified Computing System (UCS), the Cisco Nexus® 9000 family of switches, and NetApp All Flash FAS (AFF) storage.
Some of the key design details and best practices of this new architecture are covered in the following sections.
Solution Overview
Introduction
Industry trends indicate a vast data center transformation toward shared infrastructure and cloud computing.
Business agility requires application agility, so IT teams need to provision applications in hours instead of months.
Resources need to scale up (or down) in minutes, not hours.
To simplify the evolution to a shared cloud infrastructure based on an application driven policy model, Cisco and
NetApp have developed the solution called VMware vSphere® on FlexPod with Cisco Application Centric
Infrastructure (ACI). Cisco ACI in the data center is a holistic architecture with centralized automation and policy-
driven application profiles that delivers software flexibility with hardware performance.
Audience
The audience for this document includes, but is not limited to: sales engineers, field consultants, professional
services, IT managers, partner engineers, and customers who want to take advantage of an infrastructure built to
deliver IT efficiency and enable IT innovation.
Changes in FlexPod
The following design elements distinguish this version of FlexPod from previous models:
Validation of the Cisco ACI with a NetApp All-Flash FAS storage array
Support for the Cisco UCS 2.2 release and Cisco UCS B200-M4 servers
An IP-based storage design supporting both NAS datastores and iSCSI-based SAN LUNs
Support for directly attached Fibre Channel over Ethernet (FCoE) storage access for boot LUNs
Application design guidance for multi-tiered applications using Cisco ACI application profiles and policies
Cisco Validated Designs (CVDs) and NetApp Validated Architectures (NVAs) covering a variety of use cases
Cisco and NetApp have also built a robust and experienced support team focused on FlexPod solutions, from
customer account and technical sales representatives to professional services and technical support engineers.
The support alliance between NetApp and Cisco gives customers and channel services partners direct access to technical experts who collaborate across vendors and have access to shared lab resources to resolve potential issues.
FlexPod supports tight integration with virtualized and cloud infrastructures, making it the logical choice for long-
term investment. FlexPod also provides a uniform approach to IT architecture, offering a well-characterized and
documented shared pool of resources for application workloads. FlexPod delivers operational efficiency and
consistency with the versatility to meet a variety of SLAs and IT initiatives, including:
Desktop virtualization
Cloud delivery models (public, private, hybrid) and service models (IaaS, PaaS, SaaS)
Technology Overview
FlexPod System Overview
FlexPod is a best practice data center architecture built around three core components: the Cisco Unified Computing System, Cisco Nexus switches, and NetApp FAS storage controllers. These components are connected and configured according to the best practices of both Cisco and NetApp and
provide the ideal platform for running a variety of enterprise workloads with confidence. FlexPod can scale up for
greater performance and capacity (adding compute, network, or storage resources individually as needed), or it
can scale out for environments that require multiple consistent deployments (rolling out additional FlexPod stacks).
The reference architecture covered in this document leverages the Cisco Nexus 9000 for the switching element.
One of the key benefits of FlexPod is the ability to maintain consistency at scale. Each of the component families
shown (Cisco UCS, Cisco Nexus, and NetApp FAS) offers platform and resource options to scale the infrastructure
up or down, while supporting the same features and functionality that are required under the configuration and
connectivity best practices of FlexPod.
FlexPod Design Principles
FlexPod addresses the following primary design principles:
Application availability. Makes sure that services are accessible and ready to use.
Flexibility. Provides new services or recovers resources without requiring infrastructure modification.
Manageability. Facilitates efficient infrastructure operations through open standards and APIs.
Note: Performance is a key design criterion that is not directly addressed in this document. It has been addressed in
other collateral, benchmarking, and solution testing efforts; this design guide validates the functionality.
Cisco ACI delivers a resilient fabric to satisfy today's dynamic applications. ACI leverages a network fabric that
employs industry proven protocols coupled with innovative technologies to create a flexible, scalable, and highly
available architecture of low-latency, high-bandwidth links. This fabric delivers application instantiations using
profiles that house the requisite characteristics to enable end-to-end connectivity.
The ACI fabric is designed to support the industry trends of management automation, programmatic policies, and
dynamic workload provisioning. The ACI fabric accomplishes this with a combination of hardware, policy-based
control systems, and closely coupled software to provide advantages not possible in other architectures.
The ACI fabric is built from two types of switching nodes:
Spine switches
Leaf switches
The ACI switching architecture is presented in a leaf-and-spine topology where every leaf connects to every spine
using 40G Ethernet interface(s). The ACI Fabric Architecture is outlined in Figure 2.
The software controller, APIC, is delivered as an appliance and three or more such appliances form a cluster for
high availability and enhanced performance. APIC is responsible for all tasks enabling traffic transport including:
Fabric activation
Though the APIC acts as the centralized point of configuration for policy and network connectivity, it is never in line
with the data path or the forwarding topology. The fabric can still forward traffic even when communication with the
APIC is lost.
APIC provides both a command-line interface (CLI) and graphical-user interface (GUI) to configure and control the
ACI fabric. APIC also exposes a northbound API through XML and JavaScript Object Notation (JSON) and an
open source southbound API.
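Because the APIC's northbound API is exposed over HTTP(S) with XML or JSON payloads, the fabric can be driven from any scripting language. The following minimal Python sketch (using the requests library; the APIC address and credentials are placeholders and are not part of this validated design) shows the aaaLogin call that establishes an authenticated session for subsequent API requests:

import requests

APIC = "https://apic.example.com"   # placeholder APIC address

# aaaLogin returns a session token; requests.Session() stores the APIC-cookie
# from the response so that later calls in the same session are authenticated.
payload = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session = requests.Session()
resp = session.post(APIC + "/api/aaaLogin.json", json=payload, verify=False)
resp.raise_for_status()

token = resp.json()["imdata"][0]["aaaLogin"]["attributes"]["token"]
print("Authenticated to APIC, token:", token[:16], "...")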
Figure 3 FlexPod Design with Cisco ACI and NetApp Clustered Data ONTAP
Fabric: As in previous FlexPod designs, link aggregation technologies play an important role in FlexPod with ACI, providing improved aggregate bandwidth and link resiliency across the solution stack. The NetApp storage
controllers, Cisco Unified Computing System, and Cisco Nexus 9000 platforms support active port channeling
using 802.3ad standard Link Aggregation Control Protocol (LACP). Port channeling is a link aggregation technique
offering link fault tolerance and traffic distribution (load balancing) for improved aggregate bandwidth across
member ports. In addition, the Cisco Nexus 9000 series features virtual Port Channel (vPC) capabilities. vPC
allows links that are physically connected to two different Cisco Nexus 9000 Series devices to appear as a single
"logical" port channel to a third device, essentially offering device fault tolerance. The Cisco UCS Fabric
Interconnects and NetApp FAS controllers benefit from the Cisco Nexus vPC abstraction, gaining link and device
resiliency as well as full utilization of a non-blocking Ethernet fabric.
Compute: Each Fabric Interconnect (FI) is connected to both leaf switches, and the links provide a robust 40GbE connection between the Cisco Unified Computing System and the ACI fabric. Figure 4 illustrates the use of vPC-enabled 10GbE uplinks between the Cisco Nexus 9000 leaf switches and the Cisco UCS FIs. Additional ports can be easily added to the design for increased bandwidth as needed. Each Cisco UCS 5108 chassis is connected to the FIs using a pair of ports from each IO Module for a combined 40G uplink. The current FlexPod design supports Cisco UCS C-Series connectivity, either by directly attaching the Cisco UCS C-Series servers to the FIs or by connecting the Cisco UCS C-Series servers to a Cisco Nexus 2232 Fabric Extender hanging off of the Cisco UCS FIs. FlexPod designs mandate Cisco UCS C-Series management using Cisco UCS Manager to provide a uniform look and feel across blade and standalone servers.
Storage: The ACI-based FlexPod design is an end-to-end IP-based storage solution that supports SAN access by
using iSCSI. The solution provides a 10/40GbE fabric that is defined by Ethernet uplinks from the Cisco UCS
Fabric Interconnects and NetApp storage devices connected to the Cisco Nexus switches. Optionally, the ACI-
based FlexPod design can be configured for SAN boot by using Fibre Channel over Ethernet (FCoE). FCoE
access is provided by directly connecting the NetApp FAS controller to the Cisco UCS Fabric Interconnects as
shown in Figure 5.
Figure 5 shows the initial storage configuration of this solution as a two-node high availability (HA) pair running
clustered Data ONTAP in a switchless cluster configuration. Storage system scalability is easily achieved by
adding storage capacity (disks and shelves) to an existing HA pair, or by adding more HA pairs to the cluster or
storage domain.
Note: For SAN environments, NetApp clustered Data ONTAP allows up to 4 HA pairs or 8 nodes. For NAS environments, it allows 12 HA pairs or 24 nodes to form a logical entity.
The HA interconnect allows each node in an HA pair to assume control of its partner's storage (disks and shelves)
directly. The local physical HA storage failover capability does not extend beyond the HA pair. Furthermore, a
cluster of nodes does not have to include similar hardware. Rather, individual nodes in an HA pair are configured
alike, allowing customers to scale as needed, as they bring additional HA pairs into the larger cluster.
For more information about the virtual design of the environment, which consists of VMware vSphere, Cisco Nexus 1000v virtual distributed switching, and NetApp storage controllers, refer to the section FlexPod Infrastructure Physical Building Blocks.
When a Cisco UCS C-Series Rack-Mount Server is integrated with Cisco UCS Manager, through the Cisco Nexus
2232 platform, the server is managed using the Cisco UCS Manager GUI or Cisco UCS Manager CLI. The Cisco
Nexus 2232 provides data and control traffic support for the integrated Cisco UCS C-Series server.
NetApp offers a unified storage architecture that simultaneously supports storage area network (SAN), network-attached storage (NAS), and iSCSI across many operating environments, including VMware, Windows®, and UNIX®. This single architecture provides access to data with industry-standard protocols, including NFS, CIFS, iSCSI, and FC/FCoE. Connectivity options include standard Ethernet (10/100/1000MbE or 10GbE) and Fibre Channel (4, 8, or 16 Gb/sec).
In addition, all systems can be configured with high-performance SSD or SAS disks for primary storage
applications, low-cost SATA disks for secondary applications (such as backup and archive), or a mix of different
disk types. You can see the NetApp disk options in the figure below. Note that the All Flash FAS configuration can
only support SSDs. Also supported is a hybrid cluster with a mix of All Flash FAS HA pairs and FAS HA pairs with
HDDs and/or SSDs.
https://library.netapp.com/ecm/ecm_get_file/ECMP1644424
http://www.netapp.com/us/products/platform-os/data-ontap-8/index.aspx.
Note: The validated design described in the document focuses on clustered Data ONTAP and IP-based storage. As an
optional configuration, FCoE-based boot from SAN is covered.
The storage operating system employs the NetApp WAFL® (Write Anywhere File Layout) system, which is
natively enabled for flash media
The All Flash FAS system delivers 4 to 12 times higher IOPS and 20 times faster response for databases than traditional hard disk drive (HDD) systems
High performance enables server consolidation and can reduce database licensing costs by 50%
As the industry’s only unified all-flash storage that supports synchronous replication, All Flash FAS supports all
your backup and recovery needs with a complete suite of integrated data-protection utilities
— Newly enhanced inline compression delivers near-zero performance effect. Incompressible data detection
eliminates wasted cycles.
— Always-on deduplication runs continuously in the background and provides additional space savings for
use cases such as virtual desktop deployments
All Flash FAS systems are ready for the data fabric. Data can move between the performance and capacity
tiers on premises or in the cloud
All Flash FAS offers application and ecosystem integration for virtual desktop infrastructure (VDI), database, and server virtualization
Without silos, you can non-disruptively scale out and move workloads between flash and HDD within a cluster
Coalesced writes to free blocks, maximizing the performance and longevity of flash media
A random read I/O processing path that is designed from the ground up for flash
Built-in quality of service (QoS) that safeguards SLAs in multi-workload and multi-tenant environments
For more information on All Flash FAS, click the following link:
http://www.netapp.com/us/products/storage-systems/all-flash-fas
Data ONTAP scale-out is a way to respond to growth in a storage environment. As the storage environment grows,
additional controllers are added seamlessly to the resource pool residing on a shared storage infrastructure. Host
and client connections as well as datastores can move seamlessly and non-disruptively anywhere in the resource
pool, so that existing workloads can be easily balanced over the available resources, and new workloads can be
easily deployed. Technology refreshes (replacing disk shelves, adding or completely replacing storage controllers)
are accomplished while the environment remains online and continues serving data. Data ONTAP is the first
product to offer a complete scale-out solution, and it offers an adaptable, always-available storage infrastructure
for today's highly virtualized environment.
NetApp Storage Virtual Machines
Data volumes and network logical interfaces (LIFs) are created and assigned to an SVM and may reside on any node in
the cluster to which the SVM has been given access. An SVM may own resources on multiple nodes concurrently,
and those resources can be moved non-disruptively from one node to another. For example, a flexible volume can
be non-disruptively moved to a new node and aggregate, or a data LIF can be transparently reassigned to a
different physical network port. The SVM abstracts the cluster hardware and it is not tied to any specific physical
hardware.
An SVM is capable of supporting multiple data protocols concurrently. Volumes within the SVM can be junctioned
together to form a single NAS namespace, which makes all of an SVM's data available through a single share or
mount point to NFS and CIFS clients. SVMs also support block-based protocols, and LUNs can be created and
exported using iSCSI, Fibre Channel, or FCoE. Any or all of these data protocols may be configured for use within
a given SVM.
Because it is a secure entity, an SVM is only aware of the resources that are assigned to it and has no knowledge
of other SVMs and their respective resources. Each SVM operates as a separate and distinct entity with its own
security domain. Tenants may manage the resources allocated to them through a delegated SVM administration
account. Each SVM may connect to unique authentication zones such as Active Directory®, LDAP, or NIS.
VMware vSphere
VMware vSphere is a virtualization platform for holistically managing large collections of infrastructure resources (CPUs, storage, and networking) as a seamless, versatile, and dynamic operating environment. Unlike traditional
operating systems that manage an individual machine, VMware vSphere aggregates the infrastructure of an entire
data center to create a single powerhouse with resources that can be allocated quickly and dynamically to any
application in need.
The VMware vSphere environment delivers a robust application environment. For example, with VMware vSphere,
all applications can be protected from downtime with VMware High Availability (HA) without the complexity of
conventional clustering. In addition, applications can be scaled dynamically to meet changing loads with
capabilities such as Hot Add and VMware Distributed Resource Scheduler (DRS).
http://www.vmware.com/products/datacenter-virtualization/vsphere/overview.html
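As an illustration of how these vSphere features can be inspected programmatically, the hedged Python sketch below uses the open-source pyVmomi bindings (the vCenter address and credentials are placeholders) to connect to vCenter and report whether HA and DRS are enabled on each cluster:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only: skip certificate validation
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        cfg = cluster.configurationEx
        print("%s: HA enabled=%s, DRS enabled=%s"
              % (cluster.name, cfg.dasConfig.enabled, cfg.drsConfig.enabled))
finally:
    Disconnect(si)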
Cisco Unified Computing System Manager
Cisco UCS Manager provides the software to manage the entire Cisco Unified Computing System as a single logical entity through an intuitive
GUI, a command-line interface (CLI), or an XML API.
The Cisco UCS Manager resides on a pair of Cisco UCS 6200 Series Fabric Interconnects using a clustered,
active-standby configuration for high availability. The software gives administrators a single interface for
performing server provisioning, device discovery, inventory, configuration, diagnostics, monitoring, fault detection,
auditing, and statistics collection. Cisco UCS Manager service profiles and templates support versatile role- and
policy-based management, and system configuration information can be exported to configuration management
databases (CMDBs) to facilitate processes based on IT Infrastructure Library (ITIL) concepts. Service profiles
benefit both virtualized and non-virtualized environments and increase the mobility of non-virtualized servers, such
as when moving workloads from server to server or taking a server offline for service or upgrade. Profiles can be
used in conjunction with virtualization clusters to bring new resources online easily, complementing existing virtual
machine mobility.
For more information on Cisco UCS Manager, click the following link:
http://www.cisco.com/en/US/products/ps10281/index.html
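Cisco UCS Manager can be scripted in a similar fashion through its XML API. The short sketch below (the UCS Manager virtual IP and credentials are placeholders) posts an aaaLogin request to the /nuova endpoint and prints the session cookie that subsequent XML API calls would carry:

import requests
import xml.etree.ElementTree as ET

UCSM = "https://ucsm.example.com"   # placeholder UCS Manager virtual IP

# The UCS Manager XML API is served at /nuova; aaaLogin returns an outCookie
# attribute that authenticates later requests (release it with aaaLogout).
login_xml = '<aaaLogin inName="admin" inPassword="password" />'
resp = requests.post(UCSM + "/nuova", data=login_xml, verify=False)
resp.raise_for_status()

cookie = ET.fromstring(resp.text).get("outCookie")
print("UCS Manager session cookie:", cookie)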
Cisco Application Policy Infrastructure Controller (APIC)
The Cisco APIC provides the following:
Centralized application-level policy engine for physical, virtual, and cloud infrastructures
Robust implementation of multi-tenant security, quality of service (QoS), and high availability
Cisco APIC exposes northbound APIs through XML and JSON and provides both a command-line interface (CLI)
and GUI that utilize the APIs to manage the fabric holistically. For redundancy and load distribution, a cluster of three APIC controllers is recommended for managing the ACI fabric.
http://www.cisco.com/c/en/us/products/cloud-systems-management/application-policy-infrastructure-controller-
apic/index.htm
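As a small example of a northbound read operation, the sketch below reuses an authenticated requests session (such as the one established by the earlier aaaLogin example) to list all tenants with a class-level query; the APIC address remains a placeholder:

import requests

APIC = "https://apic.example.com"   # placeholder APIC address

def list_tenants(session):
    """Class-level query that returns the names of all fvTenant objects."""
    resp = session.get(APIC + "/api/class/fvTenant.json", verify=False)
    resp.raise_for_status()
    return [mo["fvTenant"]["attributes"]["name"] for mo in resp.json()["imdata"]]

# 'session' must already hold an APIC-cookie from a prior aaaLogin call:
# print(list_tenants(session))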
http://www.vmware.com/products/vcenter-server/overview.html
This solution uses both OnCommand System Manager and OnCommand Unified Manager to provide storage
provisioning and monitoring capabilities within the infrastructure.
NetApp Virtual Storage Console
VSC is a single VMware vCenter Server plug-in that provides end-to-end VM lifecycle management for VMware
environments that use NetApp storage. VSC is available to all VMware vSphere Clients that connect to the
vCenter Server. This availability is different from a client-side plug-in that must be installed on every VMware
vSphere Client. The VSC software can be installed either on the vCenter Server or on a separate Microsoft
Windows Server® instance or VM.
NetApp SnapManager and SnapDrive
To create a backup, SnapManager interacts with the application to put the application data in a state such that a
consistent NetApp Snapshot® copy of that data can be made. It then signals to SnapDrive to interact with the
storage system SVM to create the Snapshot copy, effectively backing up the application data. In addition to
managing Snapshot copies of application data, SnapDrive can be used to accomplish the following tasks:
Provision application data LUNs in the SVM as mapped disks on the application VM
Snapshot copy management of application data LUNs is handled by the interaction of SnapDrive with the SVM
management LIF.
Solution Design
Physical Topology
Figure 7 illustrates the new ACI connected FlexPod design. The infrastructure is physically redundant across the
stack, addressing Layer 1 high-availability requirements where the integrated stack can withstand failure of a link
or failure of a device. The solution also incorporates additional Cisco and NetApp technologies and features that further increase the design efficiency. Figure 7 illustrates the compute, network, and storage design overview of the FlexPod solution. The individual details of these components are covered in the upcoming sections.
Cisco Unified Computing System
The validated design utilized two uplinks from each FI to the leaf switches for an aggregate bandwidth of 40GbE (4
x 10GbE). The number of links can be easily increased based on customer data throughput requirements.
The disjoint Layer 2 feature simplifies deployments within Cisco UCS end-host mode without the need to turn on
switch mode. The disjoint layer-2 functionality is enabled by defining groups of VLANs and associating them to
uplink ports. Since a server vNIC can only be associated with a single uplink port, two additional vNICs, associated with the out-of-band management uplinks, are deployed per ESXi host. Figure 9 shows how different
VLAN groups are deployed and configured on Cisco Unified Computing System. Figure 16 covers the network
interface design for the ESXi hosts.
FCoE Connectivity
The FlexPod with ACI design optionally supports boot from SAN using FCoE by directly connecting the NetApp controllers to the Cisco UCS Fabric Interconnects. The updated physical design changes are covered in Figure 10. In the FCoE design, zoning and related SAN configuration are performed on Cisco UCS Manager, and the Fabric Interconnects provide the SAN-A and SAN-B separation. On the NetApp controllers, a Unified Target Adapter is needed to provide the physical connectivity.
Fabric Extender       Network Ports (to FI)       Host Ports (to servers)
UCS 2204XP            4                           16
UCS 2208XP            8                           32
* The Cisco UCS B200 M4 with VIC 1340 and FEX 2208 configuration is very similar to the Cisco UCS B200 M3 with VIC 1240 and FEX 2208 configuration shown in Figure 13 and is therefore not covered separately.
Figure 12 and Figure 13 illustrate the connectivity for the first two configurations.
In Figure 12, the FEX 2204XP enables two KR lanes to the half-width blade, while the global discovery policy dictates the formation of a fabric port channel. This results in a 20GbE connection to the blade server.
In Figure 13, the FEX 2208XP enables 8 KR lanes to the half-width blade, while the global discovery policy dictates the formation of a fabric port channel. Since the VIC 1240 is not using a Port Expander module, this configuration results in a 40GbE connection to the blade server.
Figure 14 Cisco UCS Chassis Discovery Policy—Discrete Mode vs. Port Channel Mode
Note: When enabling jumbo frames, it is important to make sure that MTU settings are applied uniformly across the stack to prevent fragmentation and negative performance impacts.
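One way to confirm that the VMkernel interfaces on every ESXi host carry the intended jumbo-frame MTU is to query them through vCenter. The hedged pyVmomi sketch below assumes an already connected ServiceInstance (for example, from the earlier vSphere example) and simply reports any VMkernel adapter whose MTU differs from the expected value:

from pyVmomi import vim

EXPECTED_MTU = 9000   # jumbo-frame MTU assumed for the storage VMkernel ports

def report_vmk_mtu(si):
    """Print the MTU of every VMkernel adapter on every ESXi host."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for vnic in host.config.network.vnic:
            flag = "" if vnic.spec.mtu == EXPECTED_MTU else "  <-- mismatch"
            print("%s %s mtu=%s%s" % (host.name, vnic.device, vnic.spec.mtu, flag))

# report_vmk_mtu(si)   # 'si' is a connected pyVmomi ServiceInstance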
Cisco UCS Manager 2.2 now allows customers to connect Cisco UCS C-Series servers directly to Cisco UCS
Fabric Interconnects without requiring a Fabric Extender (FEX). While Cisco UCS C-Series connectivity using the Cisco Nexus 2232 FEX is still supported and recommended for large-scale Cisco UCS C-Series server deployments, the direct-attached design allows customers to connect and manage Cisco UCS C-Series servers on a smaller scale without buying additional hardware.
Figure 15 illustrates the connectivity of the Cisco UCS C-Series server into the Cisco UCS domain using a Fabric
Extender. Functionally, the one-RU Cisco Nexus 2232PP FEX replaces the Cisco UCS 2204 or 2208 IOM (located within the Cisco UCS 5108 blade chassis). Each 10GbE VIC port connects to Fabric A or B through the FEX. The FEX and Fabric Interconnects form port channels automatically based on the chassis discovery policy, providing link resiliency to the Cisco UCS C-Series server. This is identical to the behavior of the IOM to Fabric Interconnect connectivity. Logically, the virtual circuits formed within the Cisco UCS domain are consistent between B-Series and C-Series deployment models, and the virtual constructs formed at the vSphere layer are unaware of the platform in use.
supporting infrastructure services such as vSphere Virtual Center, Microsoft Active Directory and NetApp Virtual
Storage Console (VSC).
At the server level, the Cisco 1225/1240 VIC presents multiple virtual PCIe devices to the ESXi node and the
vSphere environment identifies these interfaces as vmnics. The ESXi operating system is unaware of the fact that
the NICs are virtual adapters. In the FlexPod design, six vNICs are created per ESXi host. These vNICs are pinned to different Fabric Interconnect uplink interfaces based on the VLANs with which they are associated.
Note: Two vNICs for dedicated storage (NFS) access are additionally required for ESXi servers hosting infrastructure
services.
Figure 16 details the ESXi server design showing both virtual interfaces and VMkernel ports. All the Ethernet
adapters, vmnic0 through vmnic5, are virtual NICs created through the service profile.
In the clustered Data ONTAP architecture, all data is accessed through secure virtual storage partitions known as
storage virtual machines (SVMs). It is possible to have a single SVM that represents the resources of the entire
cluster or multiple SVMs that are assigned specific subsets of cluster resources for given applications, tenants or
workloads. In the current implementation of ACI, the SVM serves as the storage basis for each application with
ESXi hosts booted from SAN by using iSCSI and for application data presented as iSCSI, CIFS or NFS traffic.
For more information about the AFF8000 product family, click the following links:
http://www.netapp.com/us/products/storage-systems/all-flash-fas
For more information about the FAS 8000 product family, see:
http://www.netapp.com/us/products/storage-systems/fas8000/
For more information about the FAS 2500 product family, see:
http://www.netapp.com/us/products/storage-systems/fas2500/index.aspx
http://www.netapp.com/us/products/platform-os/data-ontap-8/index.aspx
A clustered Data ONTAP storage solution includes the following fundamental connections or network types:
HA interconnect. A dedicated interconnect between two nodes to form HA pairs. These pairs are also known
as storage failover pairs.
Cluster interconnect. A dedicated high-speed, low-latency, private network used for communication between
nodes. This network can be implemented through the deployment of a switchless cluster or by leveraging
dedicated cluster interconnect switches.
Note: A NetApp switchless cluster is only appropriate for two-node clusters.
Management network. A network used for the administration of nodes, cluster, and SVMs.
Ports. A physical port such as e0a or e1a or a logical port such as a virtual LAN (VLAN) or an interface group.
Interface groups. A collection of physical ports to create one logical port. The NetApp interface group is a link
aggregation technology that can be deployed in single (active/passive), multiple ("always on"), or dynamic
(active LACP) mode.
This validation uses two storage nodes configured as a two-node storage failover pair through an internal HA
interconnect direct connection. The FlexPod design uses the following port and interface assignments:
Ethernet ports e0e and e0g on each node are members of a multimode LACP interface group for Ethernet
data. This design leverages an interface group that has LIFs associated with it to support NFS and iSCSI
traffic.
Ethernet ports e0a and e0c on each node are connected to the corresponding ports on the other node to form
the switchless cluster interconnect.
Ports e0M on each node support a LIF dedicated to node management. Port e0i is defined as a failover port
supporting the “node_mgmt” role.
Port e0i supports cluster management data traffic through the cluster management LIF. This port and LIF allow
for administration of the cluster from the failover port and LIF if necessary.
For out-of-band management connectivity, the NetApp controllers are directly connected to the out-of-band management switches as shown in Figure 17.
Note: The AFF8040 controllers are sold in a single-chassis, dual-controller option only. Figure 17 depicts the NetApp storage controllers as a dual-chassis, dual-controller deployment for visual purposes only.
Figure 18 highlights the rear of the AFF8040 chassis. The AFF8040 is configured as a single HA enclosure; that is, two controllers are housed in a single chassis. External disk shelves are connected through the onboard SAS ports, data is accessed through the onboard UTA2 ports, and cluster interconnect traffic runs over the onboard 10GbE ports.
Logical interfaces: All SVM networking is done through logical interfaces (LIFs) that are created within the
SVM. As logical constructs, LIFs are abstracted from the physical networking ports on which they reside.
Flexible volumes: A flexible volume is the basic unit of storage for an SVM. An SVM has a root volume and
can have one or more data volumes. Data volumes can be created in any aggregate that has been delegated
by the cluster administrator for use by the SVM. Depending on the data protocols used by the SVM, volumes
can contain either LUNs for use with block protocols, files for use with NAS protocols, or both concurrently.
Namespace: Each SVM has a distinct namespace through which all of the NAS data shared from that SVM
can be accessed. This namespace can be thought of as a map to all of the junctioned volumes for the SVM,
no matter on which node or aggregate they might physically reside. Volumes may be junctioned at the root of
the namespace or beneath other volumes that are part of the namespace hierarchy.
Storage QoS: Storage QoS (Quality of Service) can help manage risks around meeting performance
objectives. You can use storage QoS to limit the throughput to workloads and to monitor workload
performance. You can reactively limit workloads to address performance problems and you can proactively
limit workloads to prevent performance problems. You can also limit workloads to support SLAs with
customers. Workloads can be limited on either an IOPS or a bandwidth (MB/s) basis.
A workload represents the input/output (I/O) operations to one of the following storage objects:
A flexible volume
A LUN
In the ACI architecture, because an SVM is usually associated with an application, a QoS policy group would
normally be applied to the SVM, setting up an overall storage rate limit for the workload. Storage QoS is
administered by the cluster administrator.
Storage objects are assigned to a QoS policy group to control and monitor a workload. You can monitor workloads
without controlling them in order to size the workload and determine appropriate limits within the storage cluster.
For more information about managing workload performance by using storage QoS, see "Managing system
performance" in the Clustered Data ONTAP 8.3 System Administration Guide for Cluster Administrators.
The following key components allow connectivity to data on a per-application basis:
LIF: A logical interface that is associated to a physical port, interface group, or VLAN interface. More than one
LIF may be associated to a physical port at the same time. There are three types of LIFs:
— NFS LIF
— iSCSI LIF
— FC LIF
LIFs are logical network entities that have the same characteristics as physical network devices but are not
tied to physical objects. LIFs used for Ethernet traffic are assigned specific Ethernet-based details such as IP
addresses and iSCSI-qualified names and then are associated with a specific physical port capable of
supporting Ethernet traffic. NAS LIFs can be non-disruptively migrated to any other physical network port
throughout the entire cluster at any time, either manually or automatically (by using policies).
In this Cisco Validated Design, LIFs are layered on top of the physical interface groups and are associated
with a given VLAN interface. LIFs are then consumed by the SVMs and are typically associated with a given
protocol and data store.
SVM: An SVM is a secure virtual storage server that contains data volumes and one or more LIFs, through
which it serves data to the clients. An SVM securely isolates the shared virtualized data storage and network
and appears as a single dedicated server to its clients. Each SVM has a separate administrator authentication
domain and can be managed independently by an SVM administrator.
Root volume. A flexible volume that contains the root of the SVM namespace.
Root volume load-sharing mirrors. Mirrored volumes of the root volume to accelerate read throughput. In this instance, they are labeled root_vol_m01 and root_vol_m02.
Boot volume. A flexible volume that contains ESXi boot LUNs. These ESXi boot LUNs are exported through iSCSI to the Cisco UCS servers.
Infrastructure datastore volume. A flexible volume that is exported through NFS to the ESXi host and is used
as the infrastructure NFS datastore to store VM files.
Infrastructure swap volume. A flexible volume that is exported through NFS to each ESXi host and used to
store VM swap data.
The NFS datastores are mounted on each VMware ESXi host in the VMware cluster and are provided by NetApp
clustered Data ONTAP through NFS over the 10GbE network. The SVM has a minimum of one LIF per protocol
per node to maintain volume availability across the cluster nodes. The LIFs use failover groups, which are network
policies defining the ports or interface groups available to support a single LIF migration or a group of LIFs migrating within or across nodes in a cluster. Multiple LIFs may be associated with a network port or interface group. In addition to failover groups, the clustered Data ONTAP system uses failover policies. Failover policies define the order in which the ports in the failover group are prioritized and define the migration policy in
the event of port failures, port recoveries, or user-initiated requests. The most basic possible storage failover
scenarios in this cluster are as follows:
Cisco Nexus 9000
In this design, the Cisco Nexus 9000 leaf switches connect to the Cisco UCS Fabric Interconnects and the NetApp controllers using virtual Port Channels (vPCs). vPC provides the following benefits:
Eliminates Spanning Tree Protocol blocked ports and uses all available uplink bandwidth
Provides fast convergence if either one of the physical links or a device fails
Unlike an NX-OS based design, a vPC configuration in ACI does not require a vPC peer link to be explicitly connected and configured between the peer devices (leaf switches). The peer communication is carried over the 40G connections through the spine switches.
iSCSI VLANs to provide direct attached storage access including boot LUNs
The PortChannels connecting Cisco UCS Fabric Interconnects to the ACI fabric are also configured with three
types of VLANs:
NFS VLANs to access infrastructure and swap datastores to be used by vSphere environment to host
infrastructure services
A pool of VLANs associated with ACI Virtual Machine Manager (VMM) domain. VLANs from this pool are
dynamically allocated by APIC to newly created end point groups (EPGs)
In an ACI based configuration, Cisco APIC connects to VMware vCenter and automatically configures port-groups
on the VMware distributed switch based on the user-defined End Point Group (EPG) configuration. These port-
groups are associated with a dynamically assigned VLAN from a pre-defined pool in Cisco APIC. Since Cisco
APIC does not configure the Cisco UCS Fabric Interconnect, this range of pool VLANs has to be pre-configured on
the uplink vNIC interfaces of the ESXi service profiles. In Figure 22, VLANs 1101-1200 are part of the APIC-defined pool.
Note: In future releases of the FlexPod with ACI solution, Cisco UCS Director (UCSD) will be incorporated into the solution. Cisco UCS Director will add the appropriate VLANs to the vNIC interfaces on demand, so defining the complete range will be unnecessary.
Note: Currently in ACI, a VLAN can only be associated with a single namespace; therefore, the NFS and iSCSI VLAN IDs used on the Cisco Unified Computing System and the NetApp controllers are different. In Figure 22 and Figure 23, VLANs 3270, 911, and 912 are defined on the Cisco Unified Computing System, whereas VLANs 3170, 901, and 902 are defined on the NetApp controllers for the same storage path. The ACI fabric provides the necessary VLAN translation to enable communication between the VMkernel and the LIF EPGs. For additional information about EPGs and VLAN mapping, refer to the
Application Centric Infrastructure (ACI) Design section.
The ACI fabric is deployed in a leaf-spine architecture. The network provisioning in an ACI-based FlexPod is quite different from a traditional FlexPod and requires a basic knowledge of some of the core concepts of ACI.
ACI Components
Leaf switches: The ACI leaf provides physical server and storage connectivity as well as enforcing ACI policies. A leaf is typically a fixed form-factor switch such as the Cisco Nexus N9K-C9396PX, N9K-C9396TX, or N9K-C93128TX. Leaf switches also provide a connection point to the existing enterprise or service provider infrastructure. The leaf switches provide both 10G and 40G Ethernet ports for connectivity.
In the FlexPod with ACI design, the Cisco UCS Fabric Interconnects, NetApp controllers, and WAN/enterprise routers are connected to both leaves for high availability.
Spine switches: The ACI spine provides the mapping database function and connectivity among leaf switches. A
spine can be the Cisco Nexus® N9K-C9508 switch equipped with N9K-X9736PQ line cards or fixed form-factor
switches such as the Cisco Nexus N9K-C9336PQ ACI spine switch. Spine switches provide high-density 40
Gigabit Ethernet connectivity between leaf switches.
Tenant: A tenant (Figure 24) is a logical container or a folder for application policies. This container can represent
an actual tenant, an organization, an application or can just be used for the convenience of organizing information.
A tenant represents a unit of isolation from a policy perspective. All application configurations in Cisco ACI are part
of a tenant. Within a tenant, you define one or more Layer 3 networks (VRF instances), one or more bridge
domains per network, and EPGs to divide the bridge domains.
The FlexPod with ACI design requires the creation of a tenant called "Foundation" that provides compute-to-storage connectivity for setting up the boot-from-SAN environment as well as for accessing the infrastructure datastore using NFS. The design also utilizes the predefined "common" tenant to host services (such as DNS, AD, and so on) required by all the tenants. In most cases, each subsequent application deployment will require the creation of a dedicated tenant.
Application Profile: Modern applications contain multiple components. For example, an e-commerce application
could require a web server, a database server, data located in a storage area network, and access to outside
resources that enable financial transactions. An application profile (Figure 24) models application requirements
and contains as many (or as few) End Point Groups (EPGs) as necessary that are logically related to providing the
capabilities of an application. Depending on the tenant requirements, in the FlexPod with ACI design, application profiles are used to define a multi-tier application (such as Microsoft SharePoint) as well as to define storage
connectivity using different storage protocols (NFS and iSCSI).
Bridge Domain: A bridge domain represents a Layer 2 forwarding construct within the fabric. One or more EPGs can be associated with one bridge domain or subnet. A bridge domain can have one or more subnets associated with it.
One or more bridge domains together form a tenant network.
For FlexPod design, setting up a bridge domain is an important consideration. A bridge domain in ACI is equivalent
to a broadcast layer-2 domain in traditional Ethernet networks. When a bridge domain contains endpoints
belonging to different VLANs (outside of ACI fabric), a unique MAC address is required for every unique endpoint.
NetApp controllers, however, use the same MAC address for an interface group and all the VLANs defined for that
interface group. As a result, all the LIFs on NetApp end up sharing a single MAC address even though these LIFs
belong to different VLANs.
network port show -fields mac
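The same information can also be pulled programmatically. The sketch below uses the NetApp Manageability SDK Python bindings (NaServer/NaElement) to issue the equivalent net-port-get-iter call; the cluster management address and credentials are placeholders, and the exact field names should be verified against the SDK documentation:

from NaServer import NaServer, NaElement   # NetApp Manageability SDK bindings

s = NaServer("cluster-mgmt.example.com", 1, 21)   # placeholder cluster management LIF
s.set_transport_type("HTTPS")
s.set_style("LOGIN")
s.set_admin_user("admin", "password")

# Equivalent of 'network port show -fields mac': list every port and its MAC.
out = s.invoke_elem(NaElement("net-port-get-iter"))
if out.results_status() != "passed":
    raise RuntimeError(out.results_reason())

for port in out.child_get("attributes-list").children_get():
    print(port.child_get_string("node"),
          port.child_get_string("port"),
          port.child_get_string("mac-address"))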
To overcome potential issues caused by overlapping MAC addresses, multiple bridge domains need to be
deployed for correct storage connectivity. The details of the required bridge domains are covered in the design
section below.
End Point Group (EPG): An End Point Group (EPG) is a collection of physical and/or virtual end points that
require common services and policies. An End Point Group example is a set of servers or storage LIFs on a
common VLAN providing a common application function or service. While the scope of an EPG definition is much
wider, in the simplest terms an EPG can be defined on a per VLAN segment basis where all the servers or VMs on
a common LAN segment become part of the same EPG.
In the FlexPod design, various application tiers, ESXi VMkernel ports for iSCSI, NFS and vMotion connectivity, and
NetApp LIFs for SVM-Management and NFS and iSCSI datastores are placed in separate EPGs. The design
details are covered in the following sections.
Contracts: A service contract can exist between two or more participating peer entities, such as two applications
running and talking to each other behind different endpoint groups, or between providers and consumers, such as
a DNS contract between a provider entity and a consumer entity. Contracts utilize filters to limit the traffic between
the applications to certain ports and protocols.
Figure 24 covers the relationship between the ACI elements defined above. As shown in the figure, a tenant can
contain one or more application profiles and an application profile can contain one or more end point groups. The
devices in the same EPG can talk to each other without any special configuration. Devices in different EPGs can
talk to each other using contracts and associated filters. A tenant can also contain one or more bridge domains
and multiple application profiles and end point groups can utilize the same bridge domain.
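To make these relationships concrete, the hedged sketch below posts a small JSON policy tree to the APIC REST API that creates a hypothetical tenant containing a VRF, a bridge domain, and an application profile with one EPG bound to that bridge domain. The object names are illustrative only, and the requests session is assumed to be authenticated as in the earlier login example:

import requests

APIC = "https://apic.example.com"   # placeholder APIC address

# Tenant -> VRF (fvCtx), bridge domain (fvBD), application profile (fvAp) -> EPG (fvAEPg).
# fvRsCtx ties the bridge domain to the VRF; fvRsBd ties the EPG to the bridge domain.
tenant = {
    "fvTenant": {
        "attributes": {"name": "Example-Tenant"},
        "children": [
            {"fvCtx": {"attributes": {"name": "Example-VRF"}}},
            {"fvBD": {"attributes": {"name": "BD_Internal"},
                      "children": [
                          {"fvRsCtx": {"attributes": {"tnFvCtxName": "Example-VRF"}}}]}},
            {"fvAp": {"attributes": {"name": "3-Tier-App"},
                      "children": [
                          {"fvAEPg": {"attributes": {"name": "Web"},
                                      "children": [
                                          {"fvRsBd": {"attributes": {"tnFvBDName": "BD_Internal"}}}]}}]}},
        ],
    }
}

def push_tenant(session):
    """Post the policy tree under the policy universe (uni)."""
    resp = session.post(APIC + "/api/mo/uni.json", json=tenant, verify=False)
    resp.raise_for_status()

# push_tenant(session)   # 'session' holds an APIC-cookie from a prior aaaLogin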
An EPG can be mapped to a VLAN in one of two ways: by statically mapping a VLAN to the EPG, or by associating the EPG with a Virtual Machine Manager (VMM) domain and allocating a VLAN dynamically from a pre-defined pool in APIC (Figure 26).
The first method of statically mapping a VLAN is useful for the following:
Mapping storage VLANs on NetApp Controller to storage related EPGs. These storage EPGs become the
storage "providers" and are accessed by the ESXi host (or the VMs) EPGs through contracts as shown in
Figure 26.
Connecting an ACI environment to an existing layer-2 bridge domain, such as an existing management
segment. A VLAN on an out of band management switch is statically mapped to a management EPG in the
common tenant to provide management services to VMs across all the tenants.
Mapping iSCSI and NFS datastores VLANs on Cisco UCS to EPGs that consume the NetApp storage EPGs
defined in Step 1. Figure 26 illustrates this mapping.
The second method of dynamically mapping a VLAN to an EPG by defining a VMM domain is used for the
following:
Deploying iSCSI and NFS related storage access for the application Tenant as shown in Figure 33
If the management infrastructure components, especially the vCenter Server, vCenter database, and AD servers, are hosted on the
FlexPod infrastructure, a separate set of service profiles is recommended to support infrastructure services. These
infrastructure ESXi hosts will be configured with two additional vNICs tied to a dedicated storage (NFS) vSwitch as
shown in 0. This updated server design helps maintain access to the infrastructure services including the NFS
datastores hosting the core services independent of APIC managed VDS.
Application Profile, EPGs and Contracts: The Foundation tenant comprises three application profiles: "iSCSI", "NFS", and "vMotion".
Application Profile "NFS" comprises of two EPGs, "lif-NFS" and "vmk-NFS" as shown in Figure 30.
— EPG "lif-NFS" statically maps the VLAN associated with NFS LIF interface on the NetApp Infrastructure
SVM (VLAN 3170). This EPG "provides" NFS storage access to the compute environment.
— EPG "vmk-NFS" statically maps the VLAN associated with NFS VMkernel port (0) for the infrastructure
ESXi server (VLAN 3270*).
A contract "Allow-NFS" is defined to allow NFS traffic. This contract is "Provided" by EPG lif-NFS and is
"Consumed" by EPG vmk-NFS.
Note: Each EPG within the ACI environment is mapped to a unique VLAN. Even though the VMkernel ports on the ESXi host and the NFS LIF interface on the NetApp SVM are part of the same layer-2 domain, two different VLANs (3270 and 3170) are configured for these EPGs. By utilizing contracts, the ACI fabric allows the necessary connectivity between the ESXi hosts and the NetApp controllers; different VLAN IDs within the ACI fabric do not matter. A similar configuration (that is, different VLANs) is utilized for iSCSI connectivity as well.
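As an illustrative (not validated) example of how such a contract could be pushed through the APIC REST API, the sketch below defines an NFS filter matching TCP/2049 only, an "Allow-NFS" contract whose subject references that filter, and the provider/consumer bindings on the lif-NFS and vmk-NFS EPGs. Object names mirror those above, and an authenticated requests session is assumed:

import requests

APIC = "https://apic.example.com"        # placeholder APIC address
TENANT_DN = "uni/tn-Foundation"          # distinguished name of the Foundation tenant

# Filter for NFS (TCP/2049) and a contract whose subject references it.
allow_nfs = {
    "fvTenant": {
        "attributes": {"name": "Foundation"},
        "children": [
            {"vzFilter": {"attributes": {"name": "nfs-filter"},
                          "children": [
                              {"vzEntry": {"attributes": {
                                  "name": "nfs-2049", "etherT": "ip", "prot": "tcp",
                                  "dFromPort": "2049", "dToPort": "2049"}}}]}},
            {"vzBrCP": {"attributes": {"name": "Allow-NFS"},
                        "children": [
                            {"vzSubj": {"attributes": {"name": "nfs-subject"},
                                        "children": [
                                            {"vzRsSubjFiltAtt": {"attributes": {
                                                "tnVzFilterName": "nfs-filter"}}}]}}]}},
        ],
    }
}

# EPG bindings: lif-NFS provides the contract, vmk-NFS consumes it.
provider = {"fvRsProv": {"attributes": {"tnVzBrCPName": "Allow-NFS"}}}
consumer = {"fvRsCons": {"attributes": {"tnVzBrCPName": "Allow-NFS"}}}

def apply_allow_nfs(session):
    session.post(APIC + "/api/mo/uni.json", json=allow_nfs, verify=False).raise_for_status()
    session.post(APIC + "/api/mo/" + TENANT_DN + "/ap-NFS/epg-lif-NFS.json",
                 json=provider, verify=False).raise_for_status()
    session.post(APIC + "/api/mo/" + TENANT_DN + "/ap-NFS/epg-vmk-NFS.json",
                 json=consumer, verify=False).raise_for_status()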
Application Profile "iSCSI" comprised of four EPGs, "iSCSI-a-lif", "iSCSI-b-lif", "iSCSI-a-vmk" and "iSCSI-b-
vmk" as shown in Figure 31.
— EPGs "iSCSI-a-lif" and "iSCSI-b-lif" statically maps the VLANs associated with iSCSI-A and iSCSI-B LIF
interfaces on the NetApp Infrastructure SVM (VLAN 901 and 902). These EPGs "provide" boot LUN
access for the ESXi environment.
— EPGs "iSCSI-a-vmk" and "iSCSI-b-vmk" statically maps the VLAN associated with iSCSI VMkernel ports
(0) on the ESXi servers (VLAN 911 and 912).
A contract "Allow-iSCSI" is defined to allow iSCSI traffic. This contract is "Provided" by EPGs iSCSI-a-lif and
iSCSI-b-lif and is "Consumed" by EPGs iSCSI-a-vmk and iSCSI-b-vmk.
Bridge Domains: While all the EPGs in a tenant can theoretically share the same bridge domain, overlapping
MAC address usage by NetApp controllers across multiple VLANs determines the actual number of bridge
domains required. As shown in Figure 29, the "Foundation" tenant connects to two iSCSI LIFs and one NFS
LIF to provide storage connectivity to the infrastructure SVM. Since these three LIFs share the same MAC
address, a separate BD is required for each LIF. The "Foundation" tenant therefore comprises three bridge domains: BD_iSCSI-a, BD_iSCSI-b, and BD_Internal.
— BD_iSCSI-a is the bridge domain configured to host EPGs for iSCSI-A traffic
— BD_iSCSI-b is the bridge domain configured to host EPGs for iSCSI-B traffic
— BD_Internal is the bridge domain configured to host EPGs for NFS traffic. This bridge domain is also
utilized for hosting EPGs related to application traffic since there is no MAC address overlap with the
application VMs
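The three bridge domains and the private network they belong to can be expressed as a single REST payload. The sketch below is illustrative; "Foundation-VRF" is a placeholder name for the tenant private network, and the payload is posted to /api/mo/uni.json as in the first sketch.

# Sketch: Foundation tenant bridge domains and their VRF association.
# "Foundation-VRF" is a placeholder private network (VRF) name.
def bridge_domain(name):
    return {"fvBD": {"attributes": {"name": name}, "children": [
        {"fvRsCtx": {"attributes": {"tnFvCtxName": "Foundation-VRF"}}}]}}

foundation_bds = {"fvTenant": {"attributes": {"name": "Foundation"}, "children": [
    {"fvCtx": {"attributes": {"name": "Foundation-VRF"}}},   # tenant private network (VRF)
    bridge_domain("BD_iSCSI-a"),    # hosts iSCSI-A EPGs
    bridge_domain("BD_iSCSI-b"),    # hosts iSCSI-B EPGs
    bridge_domain("BD_Internal"),   # hosts NFS and application EPGs
]}}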
Note: Prior to ACI release 1.0(3k), the ACI fabric only allowed up to 8 IP addresses to be mapped to a single MAC
address. In a FlexPod environment, this is a useful scalability design consideration when multiple LIFs are defined in the
same subnet and share the same interface VLAN (ifgroup-VLAN) on the NetApp controller. In version 1.0(3k) and
later, ACI supports up to 128 IP addresses mapped to a single MAC address.
Some of the key highlights of the sample 3-Tier Application deployment are as follows:
Three application profiles, NFS, iSCSI and 3-Tier-App, are utilized to deploy the application.
ESXi servers mount an NFS datastore from a dedicated Application SVM on the NetApp controllers. This
datastore hosts all the application VMs.
The VMkernel port-group for mounting NFS datastores is managed and deployed by APIC (see the sketch after this list).
To provide VMs with direct access to storage LUNs, two iSCSI port-groups are deployed using APIC for
redundant iSCSI paths.
VMs that need direct iSCSI access to storage LUNs will be configured with additional NICs in the appropriate
iSCSI port-groups.
Three unique bridge domains are needed to host the iSCSI, NFS and VM traffic.
The NFS and application traffic share a bridge domain while the two iSCSI EPGs use the remaining two bridge
domains.
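EPGs that APIC deploys as VDS port-groups carry a relation to the VMM domain instead of a static path binding. The sketch below attaches the App-A tenant's vmk-NFS EPG (described in the next section) to a VMware VMM domain; the domain name "FlexPod-VDS" is a placeholder, and the payload is posted as in the earlier sketches.

# Sketch: attach an EPG to a VMware VMM domain so that APIC creates the
# corresponding port-group on the VDS. "FlexPod-VDS" is a placeholder domain name.
vmm_attached_epg = {"fvTenant": {"attributes": {"name": "App-A"}, "children": [
    {"fvAp": {"attributes": {"name": "NFS"}, "children": [
        {"fvAEPg": {"attributes": {"name": "vmk-NFS"}, "children": [
            {"fvRsBd": {"attributes": {"tnFvBDName": "BD_Internal"}}},
            {"fvRsDomAtt": {"attributes": {
                "tDn": "uni/vmmp-VMware/dom-FlexPod-VDS"}}},   # VMM domain reference
            {"fvRsCons": {"attributes": {"tnVzBrCPName": "Allow-NFS"}}}]}}]}}]}}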
Application Profile and EPGs: The "App-A" tenant comprises three application profiles, "3-Tier-App",
"iSCSI" and "NFS".
Application Profile "NFS" comprises two EPGs, "lif-NFS" and "vmk-NFS" as shown in Figure 34.
— EPG "lif-NFS" statically maps the VLAN associated with NFS LIF on the App-A SVM (VLAN 3180). This
EPG "provides" NFS storage access to the tenant environment.
— EPG "vmk-NFS" is attached to the VMM domain to provide an NFS port-group in the vSphere
environment. This port-group is utilized by the tenant (App-A) ESXi servers.
A contract "Allow-NFS" is defined to allow NFS traffic. This contract is "Provided" by EPG lif-NFS and is
"Consumed" by EPG vmk-NFS.
Application Profile "iSCSI" is comprised of four EPGs, "lif-iSCSI-a", "lif-iSCSI-b", "vmk-iSCSI-a" and "vmk-
iSCSI-b" as shown in Figure 35.
— EPGs "lif-iSCSI-a" and "lif-iSCSI-b" statically maps the VLANs associated with iSCSI-A and iSCSI-B LIF
interfaces on the NetApp Infrastructure SVM (VLAN 921 and 922). These EPGs "provide" LUN access to
VMs.
— EPGs "vmk-iSCSI-a" and "vmk-iSCSI-b" are attached to the VMM domain to provide iSCSI port-groups.
These port-groups are utilized by VMs that require direct access to storage LUNs.
A contract "Allow-iSCSI" is defined to allow iSCSI traffic. This contract is "Provided" by EPGs iSCSI-a-lif and
iSCSI-b-lif and is "Consumed" by EPGs iSCSI-a-vmk and iSCSI-b-vmk.
Application Profile "3-Tier-App" comprises of four EPGs, "Web", "App", "DB" and "External"
— EPG "Web" is attached to the VMM domain and provides a port-group on VDS to connect the
web
servers.
— EPG "App" is attached to the VMM domain and provides a port-group on VDS to connect the application
servers.
— EPG "DB" is attached to the VMM domain and provides a port-group on VDS to connect the database
servers.
— EPG "External" is attached to the VMM domain and provides a port-group that allows application to
connect to existing datacenter infrastructure. Any VM connected to this port-group will be able to access
infrastructure outside ACI domain.
Appropriate contracts are defined to allow traffic between the various application tiers; an illustrative sketch follows Figure 36 below.
Figure 36 App-A—3-Tier-App Application Profile
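The inter-tier traffic flows are expressed as ordinary provider/consumer contract relations between the Web, App and DB EPGs. The sketch below is illustrative only: the contract names "Allow-Web-App" and "Allow-App-DB" and the VMM domain name are placeholders, not values taken from the validated configuration.

# Sketch: inter-tier contract relations for the 3-Tier-App application profile.
# Contract and VMM domain names are illustrative placeholders.
def tier_epg(name, provides=None, consumes=None):
    children = [{"fvRsBd": {"attributes": {"tnFvBDName": "BD_Internal"}}},
                {"fvRsDomAtt": {"attributes": {"tDn": "uni/vmmp-VMware/dom-FlexPod-VDS"}}}]
    if provides:
        children.append({"fvRsProv": {"attributes": {"tnVzBrCPName": provides}}})
    if consumes:
        children.append({"fvRsCons": {"attributes": {"tnVzBrCPName": consumes}}})
    return {"fvAEPg": {"attributes": {"name": name}, "children": children}}

three_tier_app = {"fvTenant": {"attributes": {"name": "App-A"}, "children": [
    {"fvAp": {"attributes": {"name": "3-Tier-App"}, "children": [
        tier_epg("Web", consumes="Allow-Web-App"),                           # front end reaches the App tier
        tier_epg("App", provides="Allow-Web-App", consumes="Allow-App-DB"),  # middle tier
        tier_epg("DB", provides="Allow-App-DB"),                             # database tier
    ]}}]}}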
Bridge Domain: The "App-A" tenant comprises of four bridge domains, BD_iSCSI-a, BD_iSCSI-b,
BD_Internal and BD_External. As explained before, overlapping MAC addresses on NetApp Controllers
require iSCSI-A, iSCSI-B and NFS traffic to use separate bridge domains.
— BD_iSCSI-a is the bridge domain configured to host EPGs configured for iSCSI-A traffic
— BD_iSCSI-b is the bridge domain configured to host EPGs configured for iSCSI-B traffic
— BD_Internal is the bridge domain configured to host EPGs for NFS traffic. This bridge domain is also
utilized for hosting EPGs related to application traffic since there is no MAC address overlap with the
application VMs
— BD_External is the bridge domain configured to host EPGs configured for connectivity to the external
infrastructure.
In the FlexPod environment, access to common services is provided as shown in Figure 37. To provide this
access:
A common services segment is defined where the common services VMs connect using a secondary NIC. A
separate services segment ensures that access from the tenant VMs is limited to only the common services
VMs.
The EPG for the common services segment, "Management-Access", is defined in the "common" tenant.
The tenant VMs access the common management segment by defining contracts between the application tenant
and the "common" tenant.
The contract filters are configured to allow only specific service-related ports.
Figure 37 shows both the "provider" EPG "Management-Access" in the tenant "common" and the consumer EPGs
"Web", "App" and "DB" in the tenant "App-A"; a corresponding configuration sketch follows.
To provide tenant applications with access to SVM management, a VLAN is dedicated for each tenant to define the
management LIF. This VLAN is then statically mapped to an EPG in the
application tenant as shown in Figure 38. Application VMs can access this LIF by defining and utilizing contracts.
Note: When an application tenant contains mappings for NetApp LIFs for storage access (iSCSI, NFS etc.), a separate
bridge domain is required for SVM management LIF because of the overlapping MAC addresses.
The interaction of SnapDrive and the SVM management LIF also handles LUN provisioning. If VMware RDM LUNs
are being used for the application data, SnapDrive must interact with VMware vCenter to perform the RDM LUN
mapping. The SnapDrive interaction with NetApp VSC handles the Snapshot copy management of application
VMDK disks on NFS or VMFS datastores. In this design, secondary network interfaces on the VMware vCenter
and VSC VMs are placed in the ACI "common" tenant. Application VMs from multiple tenants can then consume
contracts to access the vCenter and VSC VMs simultaneously while not allowing any communication between
tenant VMs. The vCenter interaction takes place through HTTPS on TCP port 443 by default, while the VSC
interaction takes place on TCP port 8043 by default. Specific contracts with only these TCP ports are configured
(a filter sketch follows this paragraph). These ACI capabilities are combined with the Role-Based Access Control
(RBAC) capabilities of vCenter, NetApp VSC, and NetApp clustered Data ONTAP to allow multiple tenant
administrators and individual application administrators to simultaneously and securely provision and back up
application data storage while taking advantage of the NetApp storage efficiency capabilities.
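The two default ports cited above can be enforced with a single filter carrying two entries. The sketch below follows the same illustrative conventions as the earlier payloads, reusing the placeholder contract name "Allow-Core-Services".

# Sketch: restrict the vCenter/VSC management contract to TCP 443 and TCP 8043.
def tcp_entry(name, port):
    return {"vzEntry": {"attributes": {"name": name, "etherT": "ip", "prot": "tcp",
                                       "dFromPort": str(port), "dToPort": str(port)}}}

mgmt_filter = {"fvTenant": {"attributes": {"name": "common"}, "children": [
    {"vzFilter": {"attributes": {"name": "vcenter-vsc"}, "children": [
        tcp_entry("https", 443),      # vCenter HTTPS
        tcp_entry("vsc", 8043)]}},    # NetApp VSC
    {"vzBrCP": {"attributes": {"name": "Allow-Core-Services"}, "children": [
        {"vzSubj": {"attributes": {"name": "mgmt"}, "children": [
            {"vzRsSubjFiltAtt": {"attributes": {"tnVzFilterName": "vcenter-vsc"}}}]}}]}}]}}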
The connectivity between the ACI fabric and the existing core network is designed as follows (a configuration sketch follows this list):
Each leaf switch is connected to both Cisco Nexus 7000 switches for redundancy.
A unique private network and a dedicated external facing bridge domain are defined for every tenant. This
private network (VRF) is set up with OSPF to provide connectivity to the external infrastructure. App-A-External
and App-B-External are two such private networks shown in Figure 38.
Unique VLANs are configured for each tenant to provide per-tenant multi-path connectivity between the ACI fabric
and the core infrastructure. VLANs 101-104 and VLANs 111-114 are configured for the App-A and App-B tenants as
shown.
On the ACI fabric, per-VRF OSPF is configured to maintain traffic segregation. Each tenant learns a default
route from the core router and advertises a single "public" routable subnet to the core
infrastructure.
The core router is configured with a single instance of OSPF; no per-VRF OSPF configuration is used on the core
routers.
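Each per-tenant external connection is modeled as an external routed network (L3Out) bound to the tenant VRF and running OSPF toward the Nexus 7000 core. The sketch below is a trimmed, illustrative payload: the L3Out name, leaf node and port, SVI addressing, router ID, and OSPF area are placeholders, only one of the four per-tenant VLANs is shown, and the interface and OSPF policies normally referenced here are omitted.

# Sketch: a per-tenant L3Out running OSPF toward the core. Node, port, VLAN,
# addressing, and area values are placeholders; verify class names against the APIC version.
app_a_l3out = {"fvTenant": {"attributes": {"name": "App-A"}, "children": [
    {"l3extOut": {"attributes": {"name": "App-A-Routed-Out"}, "children": [
        {"l3extRsEctx": {"attributes": {"tnFvCtxName": "App-A-External"}}},   # tenant VRF
        {"ospfExtP": {"attributes": {"areaId": "0.0.0.10", "areaType": "regular"}}},
        {"l3extLNodeP": {"attributes": {"name": "border-leaf"}, "children": [
            {"l3extRsNodeL3OutAtt": {"attributes": {
                "tDn": "topology/pod-1/node-101", "rtrId": "192.168.254.1"}}},
            {"l3extLIfP": {"attributes": {"name": "vlan-101-svi"}, "children": [
                {"ospfIfP": {"attributes": {}}},
                {"l3extRsPathL3OutAtt": {"attributes": {
                    "tDn": "topology/pod-1/paths-101/pathep-[eth1/47]",
                    "ifInstT": "ext-svi", "encap": "vlan-101",
                    "addr": "10.251.231.1/30"}}}]}}]}},
        {"l3extInstP": {"attributes": {"name": "Core-Routes"}, "children": [
            # classify the default route learned from the core into the external EPG
            {"l3extSubnet": {"attributes": {"ip": "0.0.0.0/0"}}}]}}]}}]}}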
Application Connectivity
In a FlexPod with ACI environment, when an application is deployed based on the design described, the resulting
configuration looks like Figure 39. In this configuration, an application tenant is configured with a minimum of two
separate bridge domains in addition to any bridge domains required for the storage configuration: a bridge domain for
internal application communication and a bridge domain for external application connectivity.
Application tiers communicate internally using contracts between the various EPGs. The external communication for
the application is provided by defining an external facing private network (VRF) and configuring OSPF routing
between this network and the core router. External facing VMs such as web front-end servers connect to both the internal
and external networks using separate NICs. Client requests are received by the web server on the interface
connected to the EPG (port-group) "App-External", while the web server talks to the App and DB servers on the
internal network (BD_Internal) using the second NIC. After processing a client request, the server response leaves the
web server the same way it entered, that is, through the NIC connected to the "App-External" EPG.
Summary
Conclusion
FlexPod with Cisco ACI is the optimal shared infrastructure foundation to deploy a variety of IT workloads. Cisco
and NetApp have created a platform that is both flexible and scalable for multiple use cases and applications.
From virtual desktop infrastructure to SAP®, FlexPod can efficiently and effectively support business-critical
applications running simultaneously from the same shared infrastructure. The flexibility and scalability of FlexPod
also enable customers to start out with a right-sized infrastructure that can ultimately grow with and adapt to their
evolving business requirements.
References
http://www.cisco.com/en/US/products/ps10265/index.html
http://www.cisco.com/en/US/products/ps11544/index.html
http://www.cisco.com/en/US/products/ps10279/index.html
http://www.cisco.com/en/US/partner/products/ps10280/index.html
http://www.cisco.com/en/US/products/ps10277/prod_module_series_home.html
http://www.cisco.com/en/US/products/ps10281/index.html
http://www.cisco.com/c/en/us/support/switches/nexus-9000-series-switches/tsd-products-support-series-home.html
http://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/index.html
http://www.vmware.com/products/vcenter-server/overview.html
VMware vSphere:
http://www.vmware.com/products/datacenter-virtualization/vsphere/index.html
NetApp Data ONTAP:
http://www.netapp.com/us/products/platform-os/data-ontap-8/index.aspx
NetApp FAS8000:
http://www.netapp.com/us/products/storage-systems/fas8000/
NetApp OnCommand:
http://www.netapp.com/us/products/management-software/
NetApp VSC:
http://www.netapp.com/us/products/management-software/vsc/
NetApp SnapManager:
http://www.netapp.com/us/products/management-software/snapmanager/
Interoperability Matrixes
Cisco UCS Hardware Compatibility Matrix:
http://www.cisco.com/c/en/us/support/servers-unified-computing/unified-computing-system/products-technical-reference-list.html
VMware Compatibility Guide:
http://www.vmware.com/resources/compatibility
NetApp Interoperability Matrix Tool:
http://support.netapp.com/matrix/mtx/login.do
About Authors
Haseeb Niazi, Technical Marketing Engineer, Cisco UCS Data Center Solutions Engineering, Cisco
Systems, Inc.
Haseeb Niazi has over 16 years of experience at Cisco focused on Data Center, Security, WAN Optimization, and
related technologies. As a member of various solution teams and advanced services, Haseeb has helped many
enterprise and service provider customers evaluate and deploy a wide range of Cisco solutions. Haseeb holds a
master's degree in Computer Engineering from the University of Southern California.
Lindsey Street is a Solutions Architect in the NetApp Infrastructure and Cloud Engineering team. She focuses on
the architecture, implementation, compatibility, and security of innovative vendor technologies to develop
competitive and high-performance end-to-end cloud solutions for customers. Lindsey started her career in 2006 at
Nortel as an interoperability test engineer, testing customer equipment interoperability for certification. Lindsey has
her Bachelor of Science degree in Computer Networking and her Master of Science degree in Information Security from
East Carolina University.
John George is a Reference Architect in the NetApp Infrastructure and Cloud Engineering team and is focused on
developing, validating, and supporting cloud infrastructure solutions that include NetApp products. Before his
current role, he supported and administered Nortel's worldwide training network and VPN infrastructure. John
holds a Master's degree in computer engineering from Clemson University.
Acknowledgements
For their support and contribution to the design, validation, and creation of this Cisco Validated Design, the authors
would like to thank: