Cisco Application Centric Infrastructure Multi-Site Lab v2: About This Demonstration
Cisco dCloud
Requirements
Topology
Get Started
Limitations
The demonstration environment is simulated and has no actual data plane; therefore, the fabrics will not establish OSPF/BGP adjacencies. All configuration will be lost after a reboot of the APIC simulator.
Customization Options
For streamlined client demos, the following customizations are suggested:
If a customer is only interested in a demonstration of adding APIC Multi-Site to a brownfield environment, use the FixMyDemo
script (on the workstation desktop) and choose Option 4 to skip the setup and configuration scenarios. Then proceed immediately
to Scenario 6.
To decrease the time it takes to run the lab, use the FixMyDemo script (on the workstation desktop) and choose Option 4 to
automatically perform the setup and configuration scenarios. Then proceed to Scenario 3.
Requirements
The table below outlines the requirements for this preconfigured demonstration.
Table 1. Requirements
Required: Laptop
Optional: Cisco AnyConnect®
Two or more ACI fabrics built with Nexus 9000 switches deployed as leaf and spine nodes.
An inter-site policy manager, named Cisco ACI Multi-Site, which is used to manage the different fabrics and to define inter-site
policies.
Complementing Cisco APIC, Multi-Site treats each site as an availability zone (APIC cluster domain), which can be configured as a shared or isolated change-control zone.
MP-BGP EVPN is used as the control plane between sites, with data-plane VXLAN encapsulation across sites.
The Multi-Site solution enables extending the policy domain end-to-end across fabrics. You can create policies in the Multi-Site
GUI and push them to all sites or selected sites. Alternatively, you can import tenants and their policies from a single site and
deploy them on other sites.
From the GUI of the Multi-Site Policy Manager, you can launch site APICs.
Cross-site namespace normalization is performed by the connecting spine switches. This function requires Cisco Nexus 9000 Series switches with model names ending in EX, or newer.
Disaster recovery scenarios offering IP mobility across sites are among the typical Multi-Site use cases.
Terminology
Because Cisco ACI Multi-Site is a complementary product to Cisco ACI, much of its terminology is shared with ACI and APIC (for example, both use the terms fabric, tenant, contract, application profile, EPG, bridge domain, and L3Out). For definitions of ACI terminology, see Cisco Application Centric Infrastructure Fundamentals.
Micro-services architecture: In its first implementation, Cisco ACI Multi-Site (the inter-site policy manager) is delivered as a cluster of three virtual machines (VMs) running on ESXi hosts. These ESXi hosts do not need to be connected to the ACI leaf nodes; only IP connectivity is required between the VMs and the out-of-band (OOB) IP addresses of the different APIC cluster nodes.
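Because the only requirement is IP connectivity from the Multi-Site VMs to the APIC OOB addresses, a quick reachability check can rule out basic network problems before registering sites. The following Python sketch is illustrative only: the APIC addresses are placeholders (use the values shown in your dCloud session topology), and it simply tests that TCP port 443 answers.

import socket

# Placeholder APIC OOB addresses -- replace with the values shown in the
# Topology menu of your dCloud session; they are NOT defined by this guide.
APIC_OOB = {
    "APIC SF": "198.18.133.200",
    "APIC NY": "198.18.134.200",
}

def https_reachable(host, port=443, timeout=3):
    # A successful TCP connection to port 443 is enough to prove IP
    # reachability from the MSC VM network to the APIC OOB interface.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, addr in APIC_OOB.items():
    state = "reachable" if https_reachable(addr) else "NOT reachable"
    print(f"{name:10} {addr:16} {state}")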
Namespace: Each fabric maintains separate data in its namespace, including objects such as the TEP pools, Class IDs (EPG identifiers), and VNIDs (identifying the different bridge domains and the defined VRFs). The site-connecting spine switches (EX or later) perform the necessary namespace translation (normalization) between sites.
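To see why this normalization is needed, you can compare the VNIDs that each fabric allocates independently. The Python sketch below is a minimal example, not part of the lab procedure: the APIC addresses are placeholders, and it assumes the fvBD object's seg attribute carries the locally allocated VNID, which is worth confirming for your APIC release.

import requests

requests.packages.urllib3.disable_warnings()  # lab APICs use self-signed certificates

SITES = {  # placeholder OOB addresses -- use the ones from your session
    "San Francisco": "198.18.133.200",
    "New York": "198.18.134.200",
}

def apic_session(host, user="admin", pwd="C1sco12345"):
    # Log in once and keep the APIC auth cookie on the session.
    s = requests.Session()
    s.verify = False
    body = {"aaaUser": {"attributes": {"name": user, "pwd": pwd}}}
    s.post(f"https://{host}/api/aaaLogin.json", json=body).raise_for_status()
    return s

for site, host in SITES.items():
    s = apic_session(host)
    bds = s.get(f"https://{host}/api/class/fvBD.json").json()["imdata"]
    print(f"--- {site} ---")
    for bd in bds:
        attrs = bd["fvBD"]["attributes"]
        # 'seg' is assumed here to be the site-local VXLAN VNID for the BD;
        # the same logical BD will normally show different values per site.
        print(f"{attrs['dn']:55} VNID {attrs['seg']}")

Because each site allocates its own values, the spines rewrite (normalize) these IDs on traffic that crosses sites.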
Schema: Profile including the site-configuration objects that will be pushed to sites.
Site: An APIC cluster domain or single fabric, treated as an ACI region and availability zone. It can be located in the same metro area as other sites or distributed worldwide.
Stretched: Objects (tenants, VRFs, EPGs, bridge-domains, subnets or contracts) are stretched when they are deployed to multiple
sites.
Template: A child of a schema; a template contains configuration objects that are either shared between sites or site-specific.
Template Conformity: When templates are stretched across sites, their configuration details are shared and standardized across sites. To maintain template conformity, it is recommended to make changes only in the templates, using the Multi-Site GUI, and not in a local site's APIC GUI.
Topology
This content includes preconfigured users and components to illustrate the scripted scenarios and features of the solution. Most components are fully configurable with predefined administrative user accounts. The IP address and user account credentials for each component are shown when you click the component icon in the Topology menu of your active session, and in the scenario steps that require them.
Get Started
BEFORE PRESENTING
Cisco dCloud strongly recommends that you perform the tasks in this document with an active session before presenting in front of a
live audience. This will allow you to become familiar with the structure of the document and content.
It may be necessary to schedule a new session after following this guide in order to reset the environment to its original configuration. Follow the steps below to schedule a session and configure your presentation environment.
2. For best performance, connect to the workstation with Cisco AnyConnect VPN [Show Me How] and the local RDP client on your laptop [Show Me How].
NOTE: In the context of this guide, the terms Multi-Site Policy Manager, Multi-Site Manager and Multi-Site Controller (MSC) will
be used interchangeably.
Steps
1. Double-click the ACI MultiSite Controller icon on the workstation desktop, and log in (admin/C1sco12345!).
NOTE: If necessary, scroll to the bottom and close the blue screen at start up.
Username: demouser
Password: C1sco12345!
NOTE: The User Roles screen shows a set of predefined roles in the Multi-Site Manager. The following user roles are available
in Cisco ACI Multi-Site.
Power User: A power user can perform all the same operations as an admin user.
Site and Tenant Manager: A site and tenant manager can manage sites, tenants, and associations.
Schema Manager: A schema manager can manage all schemas regardless of tenant associations.
Schema Manager (Restricted): A restricted schema manager can manage schemas that contain at least one tenant to which the
user is explicitly associated.
User and Role Manager: A user and role manager can manage all the users, their roles, and passwords.
Admin User: In the initial configuration script, the admin account is configured and the admin is the only user when the system
starts. The initial password for the admin user is set by the system. The admin user is assigned the role of a Power User. The
admin user should be used for creating other users and for all other Day-0 configurations. The account status of the admin user
cannot be set to Inactive.
5. Scroll down and toggle the Site and Tenant Manager switch to ON to associate the newly created user with that role.
6. Click Save.
8. In the Reset Password dialog, enter any new password that meets the requirements. The demouser account is not used again in
this lab.
9. Verify that only a subset of the functions exposed to the admin user is now available (specifically, only the ability to add Sites and Tenants to the Multi-Site Manager).
10. Log out of ACI Multi-Site and log back in as an administrator (admin/C1sco12345!).
The figure below shows the physical topology used for the lab.
1. On the ACI Multi-Site dashboard, select Sites in the vertical menu and select Add Site.
2. In the Add Site wizard, enter details for Site 1 as follows, and click Save. If an error is displayed, click Save again.
Username: admin
NOTE: Each fabric connected to the MSC must be assigned a unique Site-ID value. The MSC will report an error if you try to assign overlapping Site-IDs to separate fabrics.
4. Enter details for Site 2 (New York) as follows, and click Save.
Username: admin
5. Verify that both sites show green status and that the configuration URLs are correct.
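The same verification can be scripted against the Multi-Site Controller's REST API. In the sketch below, the MSC address is a placeholder, and the /api/v1/auth/login and /api/v1/sites paths (with Bearer-token authentication) are assumptions based on typical MSC releases; confirm them against the API documentation for the version used in this lab.

import requests

requests.packages.urllib3.disable_warnings()  # self-signed certificate on the MSC

MSC = "198.18.133.210"  # placeholder -- use the MSC address from your session

# Log in and capture the API token for subsequent requests.
login = requests.post(f"https://{MSC}/api/v1/auth/login",
                      json={"username": "admin", "password": "C1sco12345!"},
                      verify=False)
login.raise_for_status()
headers = {"Authorization": f"Bearer {login.json()['token']}"}

resp = requests.get(f"https://{MSC}/api/v1/sites", headers=headers, verify=False)
resp.raise_for_status()
for site in resp.json().get("sites", []):
    # Field names can vary slightly between MSC releases.
    print(site.get("name"), site.get("apicSiteId"), site.get("urls"))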
Control Plane E-TEP configuration (Used for BGP EVPN peering between sites)
OSPF area configuration for spine to IP network connections (area id, area type)
The following configuration tasks are managed from the APIC at each site (site-local configuration).
Configuration of the access policies for the External L3 domain (Spine switch profile, interface profile, interface policy group,
attachable entity profile, external L3 domain)
The MSC will read in the BGP ASN and External L3 Domains from each site. Add these to each site from the respective APICs.
Steps
1. Double-click the APIC SF icon on the workstation desktop, and log in (admin/C1sco12345).
3. Select System > System Settings in the top menu and click BGP Route Reflector.
4. Enter 65001 in the Autonomous System Number field and click Submit.
6. Click Fabric > Access Policies in the top menu and expand Physical and External Domains.
8. Expand the Tools menu and select Create Layer 3 Domain from the menu.
1. Double-click the APIC NY icon on the workstation desktop, and log in (admin/C1sco12345).
3. Select System > System Settings in the top menu and click BGP Route Reflector.
4. Enter 65002 in the Autonomous System Number field and click Submit.
6. Click Fabric > Access Policies in the top menu and expand Physical and External Domains.
8. Expand the Tools menu and select Create Layer 3 Domain from the menu.
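If you want to confirm the ASN values configured above for both sites outside the GUI, the fabric BGP policy can be read back through the APIC REST API. The sketch below is a hedged example: the APIC addresses are placeholders, and uni/fabric/bgpInstP-default with a bgpAsP child is the commonly documented location of the route-reflector ASN, which you should verify against your APIC version.

import requests

requests.packages.urllib3.disable_warnings()

SITES = {  # placeholder APIC OOB addresses and the ASNs configured above
    "San Francisco": ("198.18.133.200", "65001"),
    "New York": ("198.18.134.200", "65002"),
}

for site, (host, expected) in SITES.items():
    s = requests.Session()
    s.verify = False
    s.post(f"https://{host}/api/aaaLogin.json",
           json={"aaaUser": {"attributes": {"name": "admin",
                                            "pwd": "C1sco12345"}}}).raise_for_status()
    # Read the ASN child (class bgpAsP) of the default BGP instance policy.
    url = (f"https://{host}/api/node/mo/uni/fabric/bgpInstP-default.json"
           "?query-target=children&target-subtree-class=bgpAsP")
    imdata = s.get(url).json()["imdata"]
    asn = imdata[0]["bgpAsP"]["attributes"]["asn"] if imdata else "not set"
    print(f"{site}: ASN {asn} (expected {expected})")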
1. Return to the Multi-Site Controller and log in (admin/C1sco12345!). Click Configure Infra to display the BGP and OSPF settings
page.
NOTE: The default setting for BGP is full mesh and uses standard BGP timer values. The default OSPF network type is point-to-
point.
3. Select San Francisco in the side menu to add Infra settings for Site 1.
NOTE: These settings are the same for San Francisco and New York.
9. Enter the appropriate IP address in the BGP-EVPN ROUTER_ID field for Spine 1.
Site 1: 10.1.100.1
Site 2: 10.2.100.1
NOTE: Intersite control plane: Endpoint reachability information is exchanged across sites using a Multiprotocol-BGP (MP-BGP) Ethernet VPN (EVPN) control plane. This approach allows the exchange of MAC and IP address information for the endpoints that communicate across sites. MP-BGP EVPN sessions are established between the spine nodes deployed in separate fabrics.
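As a rough way to inspect those EVPN sessions, the operational BGP peer table can be read from either APIC. The sketch below assumes the bgpPeerEntry class with addr and operSt attributes (verify against your APIC release) and uses a placeholder APIC address; also remember from the Limitations section that this simulated lab has no data plane, so the peers will not actually reach the established state here.

import requests

requests.packages.urllib3.disable_warnings()

HOST = "198.18.133.200"  # placeholder APIC address -- use your session's value

s = requests.Session()
s.verify = False
s.post(f"https://{HOST}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin",
                                        "pwd": "C1sco12345"}}}).raise_for_status()

# bgpPeerEntry is the operational record of each BGP peering in the fabric,
# which includes the inter-site EVPN sessions terminating on the spines.
peers = s.get(f"https://{HOST}/api/class/bgpPeerEntry.json").json()["imdata"]
for p in peers:
    a = p["bgpPeerEntry"]["attributes"]
    print(f"peer {a['addr']:18} state {a['operSt']}")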
NOTE: These settings are the same for San Francisco and New York.
Site 1: 10.1.0.3/31
Site 2: 10.2.0.3/31
13. Select inherit from the MTU drop-down and click Save.
15. Enter the appropriate IP address in the BGP-EVPN ROUTER_ID field for Spine 2.
Site 1: 10.1.100.2
Site 2: 10.2.100.2
a. Select pod-1.
c. Select the site box (San Francisco or New York) to bring up the pane to enable Multi-Site.
NOTE: The BGP AS number will be prepopulated, as it is read from the APIC. The External Routed Domain drop-down will display the domain previously configured on the APIC (Multisite_External_L3_domain).
17. Wait for the success message and close the Fabric Connectivity Intra window.
a. In the APIC window for the site being configured, browse to Tenants > infra > Policies > Protocol > Fabric External
Connections Policies and click Fabric External Connection Policy.
f. Click Update, then click Close. Finally click Submit to confirm the changes.
19. Expand Tenant > infra > Networking > External Routed Networks and verify that an L3out called intersite has been configured
under the infra tenant. This indicates that Infra L3out has been successfully configured on APIC for Site 1.
NOTE: The object for the L3Out also includes a small cloud icon. All ACI objects configured by the MSC will include this icon.
20. Repeat all of the steps for the New York site, using the New York values indicated in the text.
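The check in step 19 can also be scripted for both sites once the infra configuration has been deployed. The sketch below assumes the conventional L3Out DN pattern uni/tn-infra/out-intersite (the L3Out name comes from this lab) and placeholder APIC addresses.

import requests

requests.packages.urllib3.disable_warnings()

SITES = {  # placeholder APIC OOB addresses
    "San Francisco": "198.18.133.200",
    "New York": "198.18.134.200",
}

for site, host in SITES.items():
    s = requests.Session()
    s.verify = False
    s.post(f"https://{host}/api/aaaLogin.json",
           json={"aaaUser": {"attributes": {"name": "admin",
                                            "pwd": "C1sco12345"}}}).raise_for_status()
    # The infra L3Out pushed by the MSC is named 'intersite' in this lab.
    r = s.get(f"https://{host}/api/node/mo/uni/tn-infra/out-intersite.json").json()
    present = int(r.get("totalCount", "0")) > 0
    print(f"{site}: intersite L3Out {'present' if present else 'missing'}")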
Steps
1. In the Multi-Site Configuration window, select Tenants in the side menu and click Add Tenant.
3. Select both San Francisco and New York and click Save to push the Tenant configuration to APIC.
4. Return to the APIC SF and APIC NY windows. Click Tenants > ALL TENANTS in each window and verify that the Tesla tenant
has been created on both fabrics.
5. In either APIC SF or APIC NY, double-click Tesla to proceed to the APIC window for Tesla.
6. Note that the tenant object includes the cloud symbol, indicating that this object has been configured from the MSC. The
APIC GUI will also display a message to this effect.
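For a quick programmatic version of the same verification, a filtered class query returns the Tesla tenant from each APIC once the push from the MSC has completed. The APIC addresses below are placeholders.

import requests

requests.packages.urllib3.disable_warnings()

SITES = {"San Francisco": "198.18.133.200",  # placeholder APIC addresses
         "New York": "198.18.134.200"}

for site, host in SITES.items():
    s = requests.Session()
    s.verify = False
    s.post(f"https://{host}/api/aaaLogin.json",
           json={"aaaUser": {"attributes": {"name": "admin",
                                            "pwd": "C1sco12345"}}}).raise_for_status()
    # Filtered class query: return only the tenant named Tesla.
    url = (f"https://{host}/api/class/fvTenant.json"
           '?query-target-filter=eq(fvTenant.name,"Tesla")')
    count = int(s.get(url).json()["totalCount"])
    print(f"{site}: Tesla tenant {'found' if count else 'not found'}")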
Steps
1. In the Multi-Site Configuration window, select Schemas from the vertical menu and click Add Schema.
2. Click Untitled Schema in the upper left corner to make the field editable, and enter L3-stretch-schema as the schema name.
NOTE: Schemas contain templates. The templates are associated to one or more sites and are used to define the objects that
will be stretched between sites or will remain site-local.
3. Since this configuration will stretch the Tenant and VRF, click Template1, then click the pencil icon and name the template SF
and NY Template.
6. Click + under VRF. For this use case the only stretched object (commonly defined across sites) will be the VRF.
2. Select both San Francisco and New York and click Save.
NOTE: When configuration is added to the MSC, there is a Save button and a Deploy to Sites button. Saving the template configuration saves it to the MSC database but does not make any changes to the APICs. Only after selecting Deploy to Sites is the configuration change pushed to the APICs. At this point in the configuration we have added a template and created a VRF but have not saved or deployed the configuration.
3. Click Save to save the configuration to the MSC without deploying to APIC.
5. A window will appear showing which changes will be deployed and to which sites. Click Deploy, which creates the VRF in both sites.
2. Click on Template1, then on the pencil icon so you can rename it.
3. Click Add EPG and enter Web in the Display Name field.
4. Select the dropdown for the Bridge Domain to associate the Web EPG to a bridge domain.
5. Enter Web-BD in the Bridge Domain field. Since Web-BD does not currently exist, the option to create the object is one of the
choices on the drop-down.
6. Scroll down and select the Web-BD. The default BD settings will appear on the right side pane. This BD will not be stretched
across sites. Uncheck the L2STRETCH box.
NOTE: When the L2STRETCH box is unchecked, the option to add a BD subnet is removed, because the BD becomes a site-local configuration. The site-local configuration will be covered in a few more steps.
8. On the Virtual Routing and Forwarding drop-down, select VRF1 (the VRF created in the SF and NY Template).
9. Select + next to Sites and use the drop-down to add the SF Only template to San Francisco. This associates the template with the San Francisco site only.
NOTE: You can save and deploy to sites in one step by selecting Deploy to Sites; the configuration will also be saved to the MSC.
13. Click Deploy. The MSC will show that the changes are only being pushed to San Francisco.
NOTE: Site-local configuration changes are not displayed in the Template view. They are visible only in the site view.
15. Select San Francisco SF Only in the vertical menu to view the site-local changes.
18. At the top, click Save then click Deploy to Sites to deploy the changes to the site, then click Deploy.
NOTE: EPGs are also associated to domains (physical or VMM domains). The domain association and static path binding configuration is also done from the MSC. This is always a site-local configuration task and is configured by selecting the site, just as was done for the BD subnet. In this lab we will not configure the domain, but be aware that this configuration is always site-local.
Add the BD subnet for App-BD. Remember to select the NY site in the left pane for site-local configuration.
At this point, both sites have been configured with a tenant called Tesla and an application profile called Webapp. A Web EPG and BD have been configured in San Francisco, and an App EPG and BD have been configured in New York. There is no communication at this time between sites: the Web BD subnet in San Francisco is not known to New York, and vice versa for the App BD in New York. A contract is required to allow communication between sites and to advertise endpoint IP address information between them. Since this contract will be used by both EPGs, we will configure it under the SF and NY Template.
4. Scroll down to Filter and click +. Enter any in the Display Name field.
10. Select none in the Directive field to configure the Filter Chain.
12. Click Deploy to deploy the contract to both the San Francisco and New York sites.
13. Click Deploy to deploy the NY Only template to the New York site.
16. Click Deploy to deploy the SF Only template to the San Francisco site.
3. Expand Tenant Tesla > Application Profiles > Webapp > Application EPGs and verify the presence of the App and Web
EPGs.
4. Expand Tenant Tesla > Networking > Bridge Domains and verify the presence of the App-BD and Web-BD bridge domains.
6. Click the Topology tab and show the presence of the contract that allows the EPGs to communicate.
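If you prefer to verify the contract relationships over the REST API, the provider/consumer relations are children of each EPG. The sketch below is a hedged example: it assumes the EPG DNs uni/tn-Tesla/ap-Webapp/epg-Web and uni/tn-Tesla/ap-Webapp/epg-App (matching the names used in this scenario), the fvRsProv/fvRsCons relation classes, and placeholder APIC addresses.

import requests

requests.packages.urllib3.disable_warnings()

CHECKS = [  # (site, placeholder APIC address, EPG distinguished name)
    ("San Francisco", "198.18.133.200", "uni/tn-Tesla/ap-Webapp/epg-Web"),
    ("New York", "198.18.134.200", "uni/tn-Tesla/ap-Webapp/epg-App"),
]

for site, host, dn in CHECKS:
    s = requests.Session()
    s.verify = False
    s.post(f"https://{host}/api/aaaLogin.json",
           json={"aaaUser": {"attributes": {"name": "admin",
                                            "pwd": "C1sco12345"}}}).raise_for_status()
    # fvRsProv / fvRsCons record the contracts the EPG provides or consumes.
    url = (f"https://{host}/api/node/mo/{dn}.json"
           "?query-target=children&target-subtree-class=fvRsProv,fvRsCons")
    for item in s.get(url).json()["imdata"]:
        cls, body = next(iter(item.items()))
        role = "provides" if cls == "fvRsProv" else "consumes"
        print(f"{site}: {dn.split('/epg-')[-1]} {role} {body['attributes']['tnVzBrCPName']}")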
The purpose of this scenario is to create the inter-site policies required to enable the Web endpoints to access the shared DNS
services. If necessary, refer to the previous use case for specifics on each step.
1. In ACI MultiSite Controller, select Schemas > L3-stretch-schema > and the SF Only Template.
2. Under VRF, select + to create a new VRF. Enter VRF2 as the name.
3. Still in the SF Only Template, select Web-BD, and choose VRF2 from the Virtual Routing and Forwarding drop down menu.
4. Select the NY Only template, click + below VRF to create a new VRF, enter Shared VRF as the Display Name.
5. Still in the NY Only template, in the Bridge Domain field, select +, and create a new Bridge Domain with the following attributes:
8. Create a new subnet with the following attributes, and click Save:
9. Scroll up a little way to the AP section, and click +Add EPG to create a new EPG.
11. Since this EPG will provide shared services to other EPGs, it is critical to remember to add the IP subnet information under the
EPG (at the site level). Still in the work pane for the new DNS EPG, select +Subnet.
12. Create a new subnet with the following attributes and click Save:
NOTE: Notice that the sole purpose of defining the IP subnet information under the provider EPG is to enable the necessary VRF route-leaking functionality between the ‘Shared VRF’ and the other VRFs accessing the shared services. The default gateway services are provided by the IP subnet configured at the BD, so it is important here to select the NO DEFAULT SVI GATEWAY flag.
13. Create the web-to-dns contract and ensure that it is provided by the DNS EPG. In the left menu pane, under Templates, select NY
Only.
14. In the Contracts field select + to create a new Contract. Enter the following attributes:
b. Scope: tenant
NOTE: By default, contracts are created with VRF scope. In this case, the contract must be consumed by the ‘Web’ EPG, which is part of a different VRF, so it is essential to modify the scope of the contract to ‘tenant’ or ‘global’.
16. Select the web-to-dns contract and set it as a provider, then click Save.
17. Select Save and then Deploy to Sites. Click Deploy again to push out the configuration to New York.
18. Now configure the Web EPG to consume the web-to-dns contract. From the left pane, under Templates, select SF Only, and the
Web EPG.
20. Select the web-to-dns contract and configure it as a consumer. Then select Save.
21. Select Save and Deploy to Sites. Click Deploy again to push out the new configuration.
22. Verify the configuration has been deployed to San Francisco (apic-1a). Log in to the APIC and browse to Tenants > Tesla > Application Profiles > Webapp.
24. Verify the configuration has been deployed to New York (apic-1b). Log in to the APIC and browse to Tenants > Tesla > Application Profiles > Webapp.
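Two details from the notes above are easy to confirm on the New York APIC once the templates are deployed: the shared-services subnet should be rendered under the DNS EPG (not only under the BD), and the web-to-dns contract should carry tenant scope. The sketch below is a hedged check that assumes the DNS EPG was added under the Webapp application profile, the usual DN patterns (uni/tn-Tesla/ap-Webapp/epg-DNS and uni/tn-Tesla/brc-web-to-dns), and a placeholder APIC address.

import requests

requests.packages.urllib3.disable_warnings()

HOST = "198.18.134.200"  # placeholder New York APIC address

s = requests.Session()
s.verify = False
s.post(f"https://{HOST}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin",
                                        "pwd": "C1sco12345"}}}).raise_for_status()

# 1) The subnet defined at the EPG level enables inter-VRF route leaking.
subnets = s.get(f"https://{HOST}/api/node/mo/uni/tn-Tesla/ap-Webapp/epg-DNS.json"
                "?query-target=children&target-subtree-class=fvSubnet").json()["imdata"]
for sn in subnets:
    print("DNS EPG subnet:", sn["fvSubnet"]["attributes"]["ip"])

# 2) The contract scope must be 'tenant' (or 'global') for cross-VRF use.
ct = s.get(f"https://{HOST}/api/node/mo/uni/tn-Tesla/brc-web-to-dns.json").json()["imdata"]
if ct:
    print("web-to-dns scope:", ct[0]["vzBrCP"]["attributes"]["scope"])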
This use case is typical for disaster recovery sites where there is no requirement for mobility (vMotion) across sites, but it allows the application to be brought up at a DR site without having to re-IP the application servers.
The suppression of BUM flooding across sites provides more resiliency, since a problem (such as a broadcast storm) hitting Site 1 cannot propagate to the other sites.
A new BD, called DB-BD, will exist on both the San Francisco and New York sites and will enable this use case on the Tesla tenant.
Steps
1. In ACI MSC, browse to Schemas > L3-stretch-schema > SF and NY Template.
2. In the Bridge Domain field select +, and configure with the following:
c. INTERSITE BUM TRAFFIC ALLOW: Deselected (Click Yes on the Warning dialogue)
NOTE: At this point there are no EPGs that are shared across sites, so the Webapp application profile has not been configured under the SF and NY Template.
3. Click Save.
4. Ensure the SF and NY Template is selected. Click +Application Profile and enter Webapp as the Display Name. (The
Application Profile name is case sensitive, so be consistent with the name used previously).
5. Click +Add EPG to create a new EPG with the following configuration:
a. Display Name: DB
6. Click Save and Deploy to Sites. Click Deploy to push out the configuration.
7. Verify the configuration has been deployed to San Francisco (apic-1a) and New York (apic-2a). Log in to each APIC and browse to Tenants > Tesla > Application Profiles > Webapp.
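Because DB-BD is stretched, it should now be present on both APICs, while each site still allocates its own local VNID for it (the namespace behavior described in the Terminology section). The sketch below is a hedged check using placeholder APIC addresses and the usual BD DN pattern uni/tn-Tesla/BD-DB-BD.

import requests

requests.packages.urllib3.disable_warnings()

SITES = {  # placeholder APIC OOB addresses
    "San Francisco": "198.18.133.200",
    "New York": "198.18.134.200",
}

for site, host in SITES.items():
    s = requests.Session()
    s.verify = False
    s.post(f"https://{host}/api/aaaLogin.json",
           json={"aaaUser": {"attributes": {"name": "admin",
                                            "pwd": "C1sco12345"}}}).raise_for_status()
    r = s.get(f"https://{host}/api/node/mo/uni/tn-Tesla/BD-DB-BD.json").json()
    if int(r.get("totalCount", "0")):
        attrs = r["imdata"][0]["fvBD"]["attributes"]
        # 'seg' is assumed to be the site-local VNID; expect different values per site.
        print(f"{site}: DB-BD present, local VNID {attrs['seg']}")
    else:
        print(f"{site}: DB-BD missing")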
This implies that the ACI Multi-Site Policy Manager will be inserted into the deployment and both sites will be added to it (as described in the 'Add Sites to the Multi-Site Policy Manager' section).
The purpose of this section is to import the existing configuration for a tenant from a brownfield ACI fabric, and stretch the objects
associated to that tenant (application profile with corresponding EPGs, BDs and VRFs) toward one (or more) greenfield ACI fabrics. In
the context of this lab, the brownfield site is San Francisco and the new greenfield fabric is New York.
Steps
1. In the APIC for the San Francisco (apic-1a) screen, click Tenants > Brownfield.
VRF: Brownfield-VRF
Application Profile: AP
4. Return to ACI Multisite Controller and select Tenants and Add Tenant.
5. Enter the Display Name Brownfield and select both San Francisco and New York as the associated sites, and Save.
NOTE: It is essential that the name of the tenant created on MSC matches the name of the tenant in the brownfield fabric from
where the configuration should be imported. The newly created tenant should then be associated to both existing sites, since
the configuration will be imported from one and stretched toward the other.
7. Create a new schema called Migration-Schema. This new schema will be used to perform the import of the configuration from
‘San Francisco’ into a new ‘Migration-Template’. Select the Brownfield tenant.
8. Click Import and select San Francisco to import the Brownfield tenant configuration into the Multi-Site manager.
9. In the resulting window, select the Application Profile Brownfield-AP and make sure the Include Relations toggle is ON to import
all the objects associated to the Brownfield-AP application profile.
10. Hover over AP to see the sources. Click Import and click Save.
11. Select the Brownfield-BD bridge domain to verify that the configuration is not stretched (which is expected, since it was imported
from a specific site).
12. Click the L2Stretch check box to allow the bridge domain to be stretched to the greenfield site. Click Yes to acknowledge the warning.
13. Check the Intersite BUM Traffic Allow checkbox to ensure that BUM traffic is allowed; in a real-life scenario this is required to migrate workloads from brownfield to greenfield using live-migration technologies (for example, vMotion in a vSphere environment).
14. Now that the configuration has been imported into the Multi-Site Manager, the objects must be pushed to the greenfield ACI fabric (the New York site). First, rename Template1 to Migration-Template.
15. Now associate the Migration-Template to the New York site. Under Sites, click +, and select New York and Save.
17. Click Deploy to Sites to push the configuration to the Greenfield site.
18. Click Deploy again. This will push the objects imported from the brownfield site toward the Greenfield ACI fabric.
19. Verify that the configuration is now displayed correctly in the New York APIC controller.
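A programmatic equivalent of this verification is to dump the Brownfield tenant's subtree on the New York APIC and confirm that the imported VRF, bridge domain, and application profile are now rendered there. The sketch below assumes a placeholder APIC address and the standard subtree query syntax.

import requests

requests.packages.urllib3.disable_warnings()

HOST = "198.18.134.200"  # placeholder New York APIC address

s = requests.Session()
s.verify = False
s.post(f"https://{HOST}/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin",
                                        "pwd": "C1sco12345"}}}).raise_for_status()

# Pull the VRFs, BDs, and application profiles under the Brownfield tenant.
url = (f"https://{HOST}/api/node/mo/uni/tn-Brownfield.json"
       "?query-target=subtree&target-subtree-class=fvCtx,fvBD,fvAp")
for item in s.get(url).json()["imdata"]:
    cls, body = next(iter(item.items()))
    print(f"{cls:6} {body['attributes']['name']}")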
© 2019 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information.