DNA Udemy01
● List the components that make up the SD-Access solution and describe how SD-Access operates
● Configure and verify the secure integration between ISE and DNA Center
● Configure and verify the network underlay and use DNA Center to discover the
nodes and add them to the inventory
● Build a basic campus fabric using DNA Center and onboard hosts using DNA
Center and ISE
● Connect the fabric to external networks that include shared services and the
Internet
● Use a layered methodology to test and troubleshoot the SD-Access solution
● Use micro-segmentation to segment groups within a virtual network or restrict
specific traffic types between groups
● Use Cisco DNA Assurance to determine the health of the fabric and isolate
endpoint device activity
Network evolution — the challenges
Today's networks support a very different IT environment compared to just a few
years ago. There has been a significant rise in the use of mobile clients, increased
adoption of cloud-based applications, and the increasing deployment of Internet of Things (IoT) devices in the network environment.
Over the years, the networking technologies that have been the foundation of
interconnectivity between clients, devices, and applications have remained fairly
static. While today's IT teams have a number of technology choices to design and operate their networks, there hasn't been a comprehensive, turnkey solution to address today's rapidly evolving enterprise needs around mobility, IoT, cloud, and security.
Network requirements have evolved, but technologies and operations have not.
Consider modern networking challenges in the context of a number of common use cases:
Network deployment
• Implementation complexity
• Wireless considerations
Service deployment
• Network segmentation
Network operations
Over time, network operators have had to accommodate new network services
by implementing new features and design approaches. However, they have had
to do so on top of a traditional, inflexible network infrastructure.
The ability of a company to adopt any of these is impeded if the network is slow to change and adapt. It is difficult to automate the many potential variations of "snowflake" network designs, which limits the ability to adopt automation in today's networks to drive greater operational efficiencies for an organization.
In addition, one of the major challenges with wireless deployment today is that it does not easily utilize network segmentation. While wireless can leverage multiple SSIDs for traffic separation over the air, these are limited in the number that can be deployed and are ultimately mapped back into VLANs at the WLC.
• Large Layer 2 designs are very inefficient (typically 50% of ports blocking)
• Traffic filtering options for intra-VLAN traffic are also typically much more limited than those available at a Layer 3 boundary
VLANs are simple but, in this case, simple is not best — a flat Layer 2 design
exposes the organization to too many potential events that could take down the
network, and in addition, managing hundreds of VLANs is daunting for most
organizations.
Segmentation using VRF-Lite
Another approach — one that leverages Layer 3 — is to segment the network by using VRFs
(Virtual Routing and Forwarding instances—essentially, separate versions of the IP routing
table). This has the benefit that segmentation can be provided without the need to build large, complex ACLs to control traffic flows, since traffic between different VRFs can only flow as the network manager dictates via the network topology (typically, via route leaking or through a firewall).
• VRF-Lite using 802.1q trunks between devices is relatively simple to implement on a few devices but, as the sketch below suggests, becomes very cumbersome very quickly when implemented more widely
• VRF-Lite requires separate routing protocol processes per VRF, resulting in increased CPU
load and complexity
• The typical rule-of-thumb is that VRF-Lite deployments should not be scaled beyond 8-10
VRFs, as they become far too unwieldy to manage end-to-end in an enterprise deployment
at a larger scale
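To make the scaling problem concrete, the following sketch (hypothetical VRF names, VLAN IDs, and interface names) generates the per-VRF 802.1Q subinterface configuration that VRF-Lite requires on every inter-device trunk; the configuration grows as links × VRFs, before even counting the per-VRF routing processes:

```python
# Illustrative only: emit the per-VRF subinterface config VRF-Lite needs on each trunk.
VRFS = {"CORP": 10, "IOT": 20, "GUEST": 30}                # hypothetical VRF -> VLAN map
TRUNKS = ["GigabitEthernet1/0/1", "GigabitEthernet1/0/2"]  # trunks to neighboring devices

for i, trunk in enumerate(TRUNKS):
    for vrf, vlan in VRFS.items():
        print(f"interface {trunk}.{vlan}")
        print(f" encapsulation dot1Q {vlan}")
        print(f" vrf forwarding {vrf}")
        print(f" ip address 10.{vlan}.{i}.1 255.255.255.252")

# 2 trunks x 3 VRFs = 6 subinterfaces here; a campus with dozens of links and
# 8-10 VRFs quickly reaches hundreds of stanzas, each with its own routing process.
```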
Segmentation using MPLS VPNs
MPLS VPNs, an alternative technology available for network segmentation, have a steep learning curve, since they require the network manager to become familiar with many new MPLS-specific capabilities and network protocols, including LDP for label distribution and Multi-Protocol BGP as a control plane. Moreover, they need to understand how to troubleshoot MPLS-enabled networks when issues arise.
• MPLS VPNs will scale much better than VRF-Lite; however, they are often too complex for
many network managers to tackle, especially across an end-to-end network deployment.
• MPLS VPN support is not available pervasively across all network platforms.
Despite VRF capabilities having been available for more than ten years, only a small percentage of organizations have deployed VRF segmentation in any form. Why is this? In a word — complexity.
Network policies
Policy is one of those abstract words that can mean many different things to many
different people. However, in the context of networking, every organization has
multiple policies that they implement. Use of security ACLs on a switch, or security
rule-sets on a firewall, is security policy.
Using QoS to sort traffic into different classes, and using queues on network devices
to prioritize one application versus another, is QoS policy. Placing devices into separate VLANs based on their role is device-level access control policy.
Today's network manager typically uses a few sets of common policy tools every
day: VLANs, subnets, and ACLs. For example:
• Adding voice to a network? This implies carving a new set of voice VLANs and associated subnets.
• Adding IoT devices? Door locks, badge readers, et cetera? More VLANs and subnets.
• Adding IP cameras and streaming video endpoints? More VLANs and subnets again.
This is why an enterprise network today ends up with hundreds, or even thousands, of VLANs
and subnets. The level of complexity in designing and maintaining this is obvious in and of
itself—and yet it also requires the further maintenance of many DHCP scopes, IPAM tools, and
the complexity associated with managing a large IP address space across all of these various
VLANs and functions.
Today's network, faced with many internal and external threats, also needs to be secure. This
makes it necessary to create and implement — and maintain on an ongoing basis — large
Access Control Lists, implemented on network devices including switches, routers, and
firewalls, most often at Layer 3 boundaries in the network deployment.
The traditional methods used today for policy administration (large and complex ACLs on devices and firewalls) are very difficult to implement and maintain.
User and device onboarding
No matter which solution is chosen today — a Layer 2 or Layer 3 network design, a
segmented or non-segmented network approach — there is always the issue of the
optimal approach to onboard users and devices into the network.
One common approach is to statically assign a given port or SSID to a VLAN and subnet that correspond to a user or device "role":
• While functional, this offers little real security, since anyone connecting into that port or SSID is associated with that "role" in the network.
• Either on the first-hop switch, or on a firewall ten hops away, that user's IP address will be examined and the appropriate security policy will be applied and enforced. Essentially, the IP address ends up being used as a proxy for identity. However, this is hard to scale and to manage.
Otherwise, a VLAN/subnet could be assigned dynamically using 802.1x or another authentication method, but there are some common challenges with this as well.
Finally, once that user/device identity is established, how can it be carried end-to-
end within the network today?
Many networks today provide very limited visibility into network operation and use. The wide variety of available network monitoring methods — SNMP, NetFlow, screen scraping, and the like — and the mixture of availability of these tools across various platforms, makes it very difficult to provide comprehensive, real-time, end-to-end insights derived from ongoing monitoring in today's network deployments.
Without insight into ongoing operational status, organizations often find themselves
reacting to network problems, rather than addressing them proactively — whether
these problems are caused by issues or outages, or simply brought on by growth or
changes in user / application patterns.
Many organizations would place significant value on being able to be more knowledgeable about how their network is being used, and more proactive in terms of network visibility and monitoring. A more comprehensive, end-to-end approach is needed—one that allows insights to be drawn from the mass of data that potentially can be reported from the underlying infrastructure.
Most organizations lack comprehensive visibility into network operation and use — limiting their ability to proactively respond to changes.
Tying it all together
So, what does it take to roll out networks and the associated policies end-to-end
today?
Based on the diagram below, the following steps represent a typical service
deployment:
1. Map to user groups in Active Directory (AD) or a similar database for user authentication.
2. Link these AD identities to the AAA server (such as Cisco Identity Services Engine, ISE) if using dynamic authentication. This provides each identity with an appropriate corresponding VLAN/subnet.
3. Define and carve out new VLANs and associated subnets for the new services to be offered. Then, implement these VLANs and subnets on all necessary devices (switches, routers, and WLCs).
4. Secure those subnets with the appropriate device or firewall ACLs, or network segmentation. If using a segmented, virtualized network approach, extend these VRFs end-to-end using VRF-Lite or MPLS VPNs.
5. To do all of this, it is necessary to work across multiple user interfaces — the AD GUI, the AAA GUI, the WLC GUI for wireless; the switch or router CLI for wired — and stitch together all of the necessary constructs manually.
No wonder it takes days or weeks to roll out new network services today!
DIAGRAM Service deployment overview
What is DNA?
Digital transformation is creating new opportunities in every industry. In
healthcare, doctors are now able to monitor patients remotely and to leverage
medical analytics to predict health issues.
SD-Access accomplishes this with a single network fabric across the LAN and WLAN, creating a consistent user experience anywhere, without compromising on security.
SD-Access benefits
Appliance Hardware Specifications
Cisco supplies Cisco Digital Network Architecture (DNA) Center in the form of a rack-mountable, physical appliance.
The second generation Cisco DNA Center appliance consists of either a Cisco Unified
Computing System (UCS) C220 M5 small form-factor (SFF) chassis or Cisco UCS C480
M5 chassis, both with the addition of one Intel X710-DA2 network interface card (NIC) and
one Intel X710-DA4 NIC. Six versions of the second generation appliance are available:
• 44 core appliance: Cisco part number DN2-HW-APL
• 44 core upgrade appliance: Cisco part number DN2-HW-APL-U
• 56 core appliance: Cisco part number DN2-HW-APL-L
• 56 core upgrade appliance: Cisco part number DN2-HW-APL-L-U
• 112 core appliance: Cisco part number DN2-HW-APL-XL
• 112 core upgrade appliance: Cisco part number DN2-HW-APL-XL-U
Each installed drive has a fault LED and an activity LED. When the drive fault LED is:
• Off: The drive is operating properly.
• Amber: The drive has failed.
• Amber, blinking: The drive is rebuilding.
Component descriptions:
1. Modular LAN-on-motherboard (mLOM) card bay (x16 PCIe lane)
2. Two USB 3.0 ports
3. 1-Gbps/10-Gbps Management Port (1, eno1, Network Adapter 1): This Ethernet port is embedded
on the appliance motherboard and can support 1 Gbps and 10 Gbps, depending on the link partner
capability. It is identified as 1 on the rear panel, and as eno1 and Network Adapter 1 in the
Maglev Configuration wizard. Connect this port to a switch that provides access to your
enterprise management network.
This port has a link status LED and a link speed LED. When the status LED is:
• Off: No link is present.
• Green, blinking: Traffic is present on the active link.
• Green: Link is active, but there is no traffic present.
12. 10-Gbps Enterprise Port (enp94s0f0, Network Adapter 3): This is the left-hand 10-Gbps port on
the Intel X710-DA2 NIC in the appliance PCIe riser 1/slot 1. It is identified as enp94s0f0 and
Network Adapter 3 in the Maglev Configuration wizard. Connect this port to a switch with
connections to the enterprise network.
This port has a link status (ACT) LED and a link speed (LINK) LED. When the link
status LED is:
• Off: No link is present.
• Green, blinking: Traffic is present on the active link.
• Green: Link is active, but there is no traffic present.
When the speed LED is:
• Off: Link speed is 100 Mbps or less.
• Green: Link speed is 10 Gbps.
• Amber: Link speed is 1 Gbps.
Note Although capable of operating at lower speeds, the enterprise and cluster ports are
intended to operate at 10 Gbps only.
13. Threaded holes for dual-hole grounding lug.
Plan the Deployment
• Planning Workflow
• Cisco DNA Center and Cisco Software-Defined Access
• Interface Cable Connections
• Required IP Addresses and Subnets
• Required Internet URLs and Fully Qualified Domain Names
• Provide Secure Access to the Internet
• Required Network Ports
• Required Ports and Protocols for Cisco Software-Defined Access
• Required Configuration Information
• Required First-Time Setup Information
Planning Workflow
You must perform the following planning and information-gathering tasks before attempting
to install, configure, and set up your Cisco DNA Center appliance. After you complete these
tasks, you can continue by physically installing your appliance in the data center.
DNA Install Prerequisites
1. Review the recommended cabling and switching requirements for standalone and cluster installations. For more information, see Interface Cable Connections.
2. Gather the IP addressing, subnetting, and other IP traffic information that you will apply during appliance configuration. For more information, see Required IP Addresses and Subnets.
3. Prepare a solution for the required access to web-based resources. For more information, see Required Internet URLs and Fully Qualified Domain Names and Provide Secure Access to the Internet.
4. Reconfigure your firewalls and security policies for Cisco DNA Center traffic. For more information, see Required Network Ports. If you are using Cisco DNA Center to manage a Cisco Software-Defined Access (SD-Access) network, also see Required Ports and Protocols for Cisco Software-Defined Access.
5. Gather the additional information used during appliance configuration and first-
time setup. For more information, see Required Configuration Information and
Required First-Time Setup Information.
• For a brief introduction to Cisco SD-Access and Cisco DNA, see the white paper "The Cisco Digital Network Architecture Vision – An Overview".
• For more information on how Cisco DNA Center leverages Cisco SD-Access to automate solutions that are not possible with normal networking approaches and techniques, see Software Defined Access: Enabling Intent-Based Networking.
Note The interface names assigned to ports on the 44, 56, and 112 core appliances differ. Whenever two interface names are provided, the first applies to both 44 and 56 core appliances and the second applies to 112 core appliances.
With this in mind, we recommend that you set up the Cluster Port with an IP
address, so as to allow for expansion to a three-node cluster in the future. Also,
make sure that the cluster link interface is connected to a switch port and is in the
UP state.
For a description of the tasks you need to complete in order to reimage your Cisco DNA
Center appliance, see Reimage the Appliance.
• (Optional, but strongly recommended) 1-Gbps CIMC Port: This port provides
browser access to the Cisco Integrated Management Controller (CIMC) out-of-band
appliance management interface and its GUI. Its purpose is to allow you to manage the
appliance and its hardware. Connect this port to a switch with connections to your
enterprise management network and configure an IP address with a subnet mask for the
port.
The following figures show the recommended connections for a single-node Cisco DNA
Center cluster:
Figure 6: Recommended Cabling for Single-Node Cluster: 44 and 56 Core Appliance
Figure 7: Recommended Cabling for Single-Node Cluster: 112 Core Appliance
The following figures show the recommended connections for a three-node Cisco DNA
Center cluster. All but one of the connections for each node in the three-node cluster are the
same as those for the single-node cluster, and use the same ports. The exception is the
Cluster Port (enp94s0f1/enp69s0f1, Network Adapter 4), which is required so that each host
in the three-node cluster can communicate with the other hosts.
Figure 8: Recommended Cabling for Three-Node Cluster: 44 and 56 Core Appliance
Figure 9: Recommended Cabling for Three-Node Cluster: 112 Core Appliance
For more details on each of the ports, see the rear panel diagram and accompanying descriptions for your
chassis in Front and Rear Panels.
Note Multinode cluster deployments require all the member nodes to be in the same
network and at the same site.
The appliance does not support distribution of nodes across multiple networks or sites.
When cabling the 10-Gbps enterprise and cluster ports, note that the ports support only the
following media types:
• SFP-10G-SR (short range, MMF)
• SFP-10G-SR-S (short range, MMF)
• SFP-10G-LR (long range, SMF)
• SFP-H10GB-CU1M (Twinax cable, passive, 1 meter)
• SFP-H10GB-CU3M (Twinax cable, passive, 3 meters)
• SFP-H10GB-CU5M (Twinax cable, passive, 5 meters)
• SFP-H10GB-CU7M (Twinax cable, passive, 7 meters)
• SFP-H10GB-ACU7M (Twinax cable, active, 7 meters)
Required IP Addresses and Subnets
Before beginning the installation, you must ensure that your network has sufficient
IP addresses available to assign to each of the appliance ports that you plan on
using. Depending on whether you are installing the appliance as a single-node
cluster or as a master or add-on node in a three-node cluster, you will need the
following appliance port (NIC) addresses:
• Enterprise Port Address (Required): One IP address with subnet mask.
• Cluster Port Address (Required): One IP address with subnet mask.
• Management Port Address (Optional): One IP address with subnet mask.
• Cloud Port Address (Optional): One IP address with subnet mask. This is an optional port, used only when you cannot connect to the cloud using the Enterprise port. You do not need an IP address for the Cloud port unless you must use it for this purpose.
• CIMC Port Address (Optional, but strongly recommended): One IP address with subnet mask.
All of the IP addresses called for in these requirements must be valid, physical IPv4
addresses with valid IPv4 netmasks. Ensure that the addresses and their corresponding
subnets do not overlap. Service communication issues can result if they do.
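As a quick sanity check during planning, a short script like the following (the interface addresses shown are placeholders, in the spirit of the worksheet later in this document) can flag overlapping port subnets before you run the configuration wizard:

```python
import ipaddress
from itertools import combinations

# Hypothetical per-port assignments gathered during planning.
ports = {
    "Enterprise": "10.224.92.102/24",
    "Cluster": "192.168.100.2/28",
    "Management": "10.30.8.5/24",
}

nets = {name: ipaddress.ip_interface(cidr).network for name, cidr in ports.items()}
for (name1, net1), (name2, net2) in combinations(nets.items(), 2):
    if net1.overlaps(net2):
        print(f"WARNING: {name1} ({net1}) overlaps {name2} ({net2})")
```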
You will also need the following additional IP addresses and dedicated IP subnets, which are
prompted for and applied during configuration of the appliance:
• Cluster Virtual IP Addresses: One virtual IP (VIP) address per configured network
interface per cluster. This requirement applies to three-node clusters and single-node
clusters that are likely to be converted into a three-node cluster in the future. You must
supply a VIP for each network interface you configure. Each VIP should be from the same
subnet as the IP address of the corresponding configured interface. There are four
interfaces on each appliance: Enterprise, Cluster, Management, and Cloud. At a minimum,
you must configure the Enterprise and Cluster port interfaces, as they are required for Cisco
DNA Center functionality.
• An interface is considered configured if you supply an IP address for that interface, along with a subnet mask and one or more associated gateways or static routes. If you skip an interface entirely during configuration, that interface is considered not configured.
Note the following:
• If you have a single-node setup and do not plan to convert it into a three-node
cluster in the future, you are not required to specify a VIP address. However, if
you decide to do so, you must specify a VIP address for every configured
network interface (just as you would for a three-node cluster).
• If the intracluster link for a single-node cluster goes down, the VIP addresses associated
with the Management and Enterprise interfaces also go down. When this happens, Cisco
DNA Center is unusable until the intracluster link is restored (because the Software Image
Management [SWIM] and Cisco Identity Services Engine [ISE] integration is not
operational and Cisco DNA Assurance data is not displayed because information cannot
be gathered from Network Data Platform [NDP] collectors).
• Default Gateway IP Address: The IP address for your network's preferred default gateway.
If no other routes match the traffic, traffic will be routed through this IP address. Typically,
you should assign the default gateway to the interface in your network configuration that
accesses the internet. For information on security considerations to keep in mind when
deploying Cisco DNA Center, see the
Cisco Digital Network Architecture Center Security Best Practices Guide.
• DNS Server IP Addresses: The IP address for one or more of your network's preferred
Domain Name System (DNS) servers. During configuration, you can specify multiple DNS
server IP addresses and netmasks by entering them as a space-separated list.
• (Optional) Static Route Addresses: The IP addresses, subnet masks, and gateways for
one or more static routes. During configuration, you can specify multiple static-route IP
addresses, netmasks, and gateways by entering them as a space-separated list.
You can set one or more static routes for an interface on the appliance. You should supply static routes when you want to route traffic in a specific direction other than the default gateway. Each interface with a static route is set in the appliance's IP route table as the device through which that traffic will be routed. For this reason, it is important to match the static route directions with the interface through which the traffic will be sent.
Static routes are not recommended in network device routing tables such as those used by
switches and routers. Dynamic routing protocols are better for this. However, you should
add static routes where needed, to allow the appliance access to particular parts of the
network that can be reached no other way.
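The behavior described above amounts to ordinary longest-prefix matching: the most specific matching route decides the egress interface, with the default gateway as the fallback. A small illustration (the routes and interface names below are hypothetical):

```python
import ipaddress

# Sketch: the most specific matching route decides the egress interface,
# with the default gateway as the fallback. Routes and names are hypothetical.
ROUTES = [
    (ipaddress.ip_network("10.30.0.0/16"), "Management"),  # static route
    (ipaddress.ip_network("0.0.0.0/0"), "Enterprise"),     # default gateway
]

def egress(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [(net, ifc) for net, ifc in ROUTES if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(egress("10.30.8.99"))  # -> Management (static route wins)
print(egress("8.8.8.8"))     # -> Enterprise (default gateway)
```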
• NTP Server IP Addresses: The DNS-resolvable hostname or IP address for at least
one Network Time Protocol (NTP) server.
During configuration, you can specify multiple NTP server IPs/masks or hostnames
by entering them as a space-separated list. For a production deployment, we
recommend that you configure a minimum of three NTP servers.
Specify these NTP servers during preflight hardware synchronization, and again
during the configuration of the software on each appliance in the cluster. Time
synchronization is critical to the accuracy of data and the coordination of processing
across a multihost cluster. Before deploying the appliance in a production
environment, make sure that the time on the appliance system clock is current and
that the NTP servers you specified are keeping accurate time. If you are planning to
integrate the appliance with ISE, you should also ensure that ISE is synchronizing
with the same NTP servers as the appliance.
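Before deployment, it is worth confirming that each planned NTP server answers and keeps time close to your reference clock. A minimal SNTP probe (the server addresses below are placeholders) might look like this:

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP) and 1970-01-01 (UNIX)

def ntp_time(server: str, timeout: float = 5.0) -> float:
    """Send a minimal SNTP request (RFC 4330) and return the server's UNIX time."""
    packet = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(48)
    transmit_seconds = struct.unpack("!I", data[40:44])[0]  # transmit timestamp, seconds field
    return transmit_seconds - NTP_EPOCH_OFFSET

for server in ("10.10.10.1", "10.10.10.2", "10.10.10.3"):  # placeholder NTP servers
    try:
        offset = ntp_time(server) - time.time()
        print(f"{server}: offset {offset:+.2f}s")
    except OSError as exc:
        print(f"{server}: unreachable ({exc})")
```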
• Services Subnet: Identifies a dedicated IP subnet that the appliance uses to assign addresses for communications among its internal application services, such as Cisco DNA Assurance, inventory collection, and so on. The dedicated IPv4 Services subnet must not conflict with or overlap any other subnet used by the Cisco DNA Center internal network or an external network. The minimum size of the subnet is 21 bits (a /21, 2,048 addresses). The IPv4 Services subnet must conform with the IETF RFC 1918 and 6598 specifications for private networks, which support the following address ranges:
• 10.0.0.0/8
• 172.16.0.0/12
• 192.168.0.0/16
• 100.64.0.0/10
For details, see RFC 1918, Address Allocation for Private Internets, and RFC
6598, IANA-Reserved IPv4 Prefix for Shared Address Space.
Important
• Ensure that you specify a valid CIDR subnet. Otherwise, incorrect bits will be present in the 172.17.1.0/20 and 172.17.61.0/20 subnets.
• After configuration of your Cisco DNA Center appliance is completed, you
cannot assign a different subnet without first reimaging the appliance (see
Reimage the Appliance for more information).
The recommended total IP address space for the two Services and Cluster Services
subnets contains 4,096 addresses, broken down into two /21 subnets of 2,048
addresses each. The two /21 subnets must not overlap. The Cisco DNA Center
internal services require a dedicated set of IP addresses to operate (a Cisco DNA
Center microservice architecture requirement). To accommodate this requirement,
you must allocate two dedicated subnets for each Cisco DNA Center system.
One reason the appliance requires this amount of address space is to maintain system performance. Because it uses internal routing and tunneling technologies for east-west (inter-node) communications, using overlapping address spaces would force the appliance to run Virtual Routing and Forwarding (VRF) FIBs internally. This leads to multiple encapsulations and decapsulations for packets going from one service to another, causing high internal latency at a very low level, with cascading impacts at higher layers.
Another reason is the Cisco DNA Center Kubernetes-based service containerization architecture. Each appliance uses the IP addresses in this space for each Kubernetes node. Multiple nodes can make up a single service. Currently, Cisco DNA Center supports more than 100 services, each requiring several IP addresses, with new features and corresponding services being added all the time.
The address space requirement is purposely kept large at the start to ensure that Cisco
can add new services and features without running out of IP addresses or requiring
customers to reallocate contiguous address spaces simply to upgrade their systems.
The services supported over these subnets are also enabled at Layer 3. The Cluster
Services space, in particular, carries data between application and infrastructure services,
and is heavily used.
The RFC 1918 and RFC 6598 requirement exists because Cisco DNA Center must download packages and updates from the cloud. If the selected IP ranges do not conform with RFC 1918 and RFC 6598, this can quickly lead to problems with public IP overlaps.
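Pulling these rules together, a short helper (the subnet values below are examples only) can verify that the two dedicated subnets are each at least a /21, fall within RFC 1918/6598 space, and do not overlap:

```python
import ipaddress

# RFC 1918 + RFC 6598 ranges that the Services and Cluster Services subnets must fall within.
PRIVATE_RANGES = [
    ipaddress.ip_network(n)
    for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "100.64.0.0/10")
]

def validate_services_subnets(services: str, cluster_services: str) -> None:
    svc = ipaddress.ip_network(services)  # raises ValueError on an invalid CIDR
    cls = ipaddress.ip_network(cluster_services)
    for net in (svc, cls):
        if net.prefixlen > 21:
            raise ValueError(f"{net} is smaller than the required /21")
        if not any(net.subnet_of(r) for r in PRIVATE_RANGES):
            raise ValueError(f"{net} is outside RFC 1918/6598 private address space")
    if svc.overlaps(cls):
        raise ValueError(f"{svc} and {cls} overlap")

validate_services_subnets("192.168.8.0/21", "192.168.16.0/21")  # example values pass
```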
Interface Names and Wizard Configuration Order
Interface names and the order in which these interfaces are configured in the
Maglev Configuration wizard differ between the first and second generation Cisco
DNA Center appliance, as illustrated in the following table. Refer to these Cisco
part numbers to determine whether you have a first or second generation appliance:
• First generation 44 core appliance: DN1-HW-APL
• Second generation:
  • 44 core appliance: DN2-HW-APL
  • 44 core upgrade appliance: DN2-HW-APL-U
  • 56 core appliance: DN2-HW-APL-L
  • 56 core upgrade appliance: DN2-HW-APL-L-U
  • 112 core appliance: DN2-HW-APL-XL
  • 112 core upgrade appliance: DN2-HW-APL-XL-U
Table 7: Interface Names and Wizard Configuration Order
The table describes the features that make use of each URL and FQDN. You
must configure either your network firewall or a proxy server so that IP
traffic can travel to and from the appliance and these resources.
If you cannot provide this access for any listed URL and FQDN, the
associated features will be impaired or inoperable.
For more on requirements for proxy access to the internet, see
Provide Secure Access to the Internet.
Table 8: Required URLs and FQDN Access
You can place the HTTPS proxy server anywhere within your network. The
proxy server communicates with the internet using HTTPS, while the
appliance communicates with the proxy server via HTTP. Therefore, we
recommend that you specify the proxy's HTTP port when configuring the
proxy during appliance configuration.
If you need to change the proxy setting after configuration, you can do so
using the GUI.
Required Network Ports
The following tables list the well-known network service ports that the
appliance uses. You must ensure that these ports are open for traffic flows to
and from the appliance, whether you open them using firewall settings or a
proxy gateway.
Additional ports, protocols, and types of traffic must be accommodated if you
are deploying the appliance in a network that employs SDA infrastructure.
For details, see Required Ports and Protocols for Cisco Software-Defined
Access.
Note For information on security considerations when deploying Cisco DNA Center, see the
Cisco Digital Network Architecture Center Security Best Practices Guide.
Table 9: Ports: Incoming Traffic
Additionally, you can configure your network to allow outgoing IP traffic from the
appliance to the Cisco addresses at: https://www.cisco.com/security/pki/. The appliance
uses the IP addresses listed at the above URL to access Cisco-supported certificates and
trust pools.
If you have implemented Cisco SD-Access in your network, use the information in
the following tables to plan firewall and security policies that secure your Cisco SD-
Access infrastructure properly while providing Cisco DNA Center with the access it
requires to automate your network management.
Figure 10: Cisco SD-Access Fabric Infrastructure
Table 11: Cisco DNA Center Traffic

Source Port² | Source | Destination Port | Destination | Description
Any | Cisco DNA Center | UDP 53 | DNS Server | From Cisco DNA Center to DNS server
Any | Cisco DNA Center | TCP 22 | Fabric underlay | From Cisco DNA Center to fabric switches' loopbacks for SSH
Any | Cisco DNA Center | TCP 23 | Fabric underlay | From Cisco DNA Center to fabric switches' loopbacks for TELNET
Any | Cisco DNA Center | UDP 161 | Fabric underlay | From Cisco DNA Center to fabric switches' loopbacks for SNMP device discovery
ICMP | Cisco DNA Center | ICMP | Fabric underlay | From Cisco DNA Center to fabric switches' loopbacks for ICMP device discovery
Any | Cisco DNA Center | TCP 443 | Fabric underlay | From Cisco DNA Center to fabric switches for software upgrades (also to the internet if there is no proxy)
Any | Cisco DNA Center | TCP 80 | Fabric underlay | From Cisco DNA Center to fabric switches for Plug and Play (PnP) (also to the internet if there is no proxy)
Any | Cisco DNA Center | TCP 830 | Fabric underlay | From Cisco DNA Center to fabric switches for Netconf (Cisco SD-Access embedded wireless)
UDP 123 | Cisco DNA Center | UDP 123 | Fabric underlay | From Cisco DNA Center to fabric switches for the initial period during LAN automation
Any | Cisco DNA Center | UDP 123 | NTP Server | From Cisco DNA Center to NTP server
Any | Cisco DNA Center | TCP 22, UDP 161 | Cisco Wireless Controller | From Cisco DNA Center to Cisco Wireless Controller
ICMP | Cisco DNA Center | ICMP | Cisco Wireless Controller | From Cisco DNA Center to Cisco Wireless Controller
Any | Cisco DNA Center | TCP 80, TCP 443 | AP | From Cisco DNA Center to an AP as a sensor and active sensor (Cisco Aironet 1800S)
Any | AP | TCP 32626 | Cisco DNA Center | Used for receiving traffic statistics and packet capture data used by the Cisco DNA Assurance Intelligent Capture (gRPC) feature

² Cluster, PKI, SFTP server, and proxy port traffic are not included in this table.
Table 12: Internet Connectivity Traffic

Source Port | Source | Destination Port | Destination | Description
Any | Cisco DNA Center | TCP 443 | registry.ciscoconnectdna.com | Download Cisco DNA Center package updates
Any | Cisco DNA Center | TCP 443 | www.ciscoconnectdna.com | Download Cisco DNA Center package updates
Any | Cisco DNA Center | TCP 443 | registry-cdn.ciscoconnectdna.com | Download Cisco DNA Center package updates
Any | Cisco DNA Center | TCP 443 | cdn.ciscoconnectdna.com | Download Cisco DNA Center package updates
Any | Cisco DNA Center | TCP 443 | software.cisco.com | Download device software
Any | Cisco DNA Center | TCP 443 | cloudsso.cisco.com | Validate Cisco.com and Smart Account credentials
Any | Cisco DNA Center | TCP 443 | cloudsso1.cisco.com | Validate Cisco.com and Smart Account credentials
Any | Cisco DNA Center | TCP 443 | cloudsso2.cisco.com | Validate Cisco.com and Smart Account credentials
Any | Cisco DNA Center | TCP 443 | apiconsole.cisco.com | CSSM Smart Licensing API
Any | Cisco DNA Center | TCP 443 | sso.cisco.com | CCO and Smart Licensing
Any | Cisco DNA Center | TCP 443 | api.cisco.com | CCO and Smart Licensing
Any | Cisco DNA Center | TCP 443 | apx.cisco.com | CCO and Smart Licensing
Any | Cisco DNA Center | TCP 443 | dashboard.meraki.com | Meraki integration
Any | Cisco DNA Center | TCP 443 | api.meraki.com | Meraki integration
Any | Cisco DNA Center | TCP 443 | n63.meraki.com | Meraki integration
Any | Cisco DNA Center | TCP 443 | dnacenter.uservoice.com | User feedback submission
Any | Cisco DNA Center Admin Client | TCP 443 | *.tiles.mapbox.com | Render maps in the browser (for access through proxy; the destination is *.tiles.mapbox.com/*)
Any | Cisco DNA Center | TCP 443 | www.mapbox.com | Maps and Cisco Wireless Controller country code identification
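A quick pre-install check can verify that the cloud endpoints in Table 12 are reachable on TCP 443. The sketch below is a hypothetical helper, not part of the product; if your appliance will reach the internet only through an HTTPS proxy, test the proxy path instead of a direct connection:

```python
import socket

# Verify TCP 443 reachability to a few of the cloud endpoints from Table 12.
# Add or remove hosts to match your deployment.
ENDPOINTS = [
    "registry.ciscoconnectdna.com",
    "www.ciscoconnectdna.com",
    "software.cisco.com",
    "cloudsso.cisco.com",
]

for host in ENDPOINTS:
    try:
        with socket.create_connection((host, 443), timeout=5):
            print(f"{host}:443 reachable")
    except OSError as exc:
        print(f"{host}:443 blocked ({exc})")
```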
Table 13: Cisco Software-Defined Access Fabric Underlay Traffic

Source Port | Source | Destination Port | Destination | Description
Any | Fabric underlay | UDP 123 | Cisco DNA Center | From fabric switches to Cisco DNA Center; used when doing LAN automation
ICMP | Fabric underlay | ICMP | Cisco DNA Center | From fabric switch and router loopbacks to Cisco DNA Center for device discovery
UDP 161 | Fabric underlay | Any | Cisco DNA Center | From fabric switch and router loopbacks to Cisco DNA Center for SNMP device discovery
Any | Fabric underlay | UDP 53 | DNS Server | From fabric switches and routers to DNS server for name resolution
TCP and UDP 4342 | Fabric underlay | TCP and UDP 4342 | Fabric routers and switches | LISP-encapsulated control messages
TCP and UDP 4342 | Fabric underlay | Any | Fabric routers and switches | LISP control-plane communications
Any | Fabric underlay | UDP 4789 | Fabric routers and switches | Fabric-encapsulated data packets (VXLAN-GPO)
Any | Fabric underlay | UDP 1645/1646/1812/1813 | ISE | From fabric switch and router loopback IPs to ISE for RADIUS
ICMP | Fabric underlay | ICMP | ISE | From fabric switches and routers to ISE for troubleshooting
UDP 1700/3799 | Fabric underlay | Any | ISE | From fabric switches to ISE for Change of Authorization (CoA)
Any | Fabric underlay | UDP 123 | NTP Server | From fabric switch and router loopback IPs to the NTP server
Any | Control plane | UDP and TCP 4342/4343 | Cisco Wireless Controller | From control-plane loopback IP to Cisco Wireless Controller for fabric-enabled wireless

³ Border routing protocol, SPAN, profiling, and telemetry traffic are not included in this table.
Table 14: Cisco Wireless Controller Traffic

Source Port | Source | Destination Port | Destination | Description
UDP 5246/5247/5248 | Cisco Wireless Controller | Any | AP IP Pool | From Cisco Wireless Controller to an AP subnet for CAPWAP
ICMP | Cisco Wireless Controller | ICMP | AP IP Pool | From Cisco Wireless Controller to APs, allowing ping for troubleshooting
Any | Cisco Wireless Controller | UDP 69/5246/5247, TCP 22 | AP IP Pool | From Cisco Wireless Controller to an AP subnet for CAPWAP
Any | Cisco Wireless Controller | UDP and TCP 4342/4343 | Control plane | From Cisco Wireless Controller to control-plane loopback IP
Any | Cisco Wireless Controller | TCP 32222 | Cisco DNA Center | From Cisco Wireless Controller to Cisco DNA Center for device discovery
UDP 161 | Cisco Wireless Controller | Any | Cisco DNA Center | From Cisco Wireless Controller to Cisco DNA Center for SNMP
Any | Cisco Wireless Controller | UDP 162 | Cisco DNA Center | From Cisco Wireless Controller to Cisco DNA Center for SNMP traps
Any | Cisco Wireless Controller | TCP 16113 | Cisco Mobility Services Engine (MSE) and Cisco Spectrum Expert | From Cisco Wireless Controller to Cisco MSE and Cisco Spectrum Expert for NMSP
ICMP | Cisco Wireless Controller | ICMP | Cisco DNA Center | From Cisco Wireless Controller, allowing ping for troubleshooting
Any | HA server | TCP 1315 | Cisco DNA Center | Database server HA (QoS)
Any | HA server | TCP 1316–1320 | Cisco DNA Center | HA database ports
Any | HA web server | TCP 8082 | Cisco DNA Center | HA web server's health monitor port
Any | Cisco Wireless Controller | UDP 514 | Cisco DNA Center and various syslog servers | Syslog (optional)
Any | Cisco Wireless Controller | UDP 53 | DNS Server | From Cisco Wireless Controller to DNS server
Any | Cisco Wireless Controller | TCP 443 | ISE | From Cisco Wireless Controller to ISE for Guest SSID web authorization
Any | Cisco Wireless Controller | UDP 1645, 1812 | ISE | From Cisco Wireless Controller to ISE for RADIUS authentication
Any | Cisco Wireless Controller | UDP 1646, 1813 | ISE | From Cisco Wireless Controller to ISE for RADIUS accounting
Any | Cisco Wireless Controller | UDP 1700, 3799 | ISE | From Cisco Wireless Controller to ISE for RADIUS CoA
ICMP | Cisco Wireless Controller | ICMP | ISE | From Cisco Wireless Controller to ISE, allowing ping for troubleshooting
Any | Cisco Wireless Controller | UDP 123 | NTP server | From Cisco Wireless Controller to NTP server

Cisco ISE traffic:

Source Port | Source | Destination Port | Destination | Description
Any | ISE | TCP 64999 | Border | From ISE to border node for SGT Exchange Protocol (SXP)
Any | ISE | UDP 514 | Cisco DNA Center | From ISE to syslog server (Cisco DNA Center)
UDP 1645/1646/1812/1813 | ISE | Any | Fabric underlay | From ISE to fabric switches and routers for RADIUS and authorization
Any | ISE | UDP 1700/3799 | Fabric underlay | From ISE to fabric switch and router loopback IPs for CoA
ICMP | ISE | ICMP | Fabric underlay | From ISE to fabric switches for troubleshooting
Any | ISE | UDP 123 | NTP Server | From ISE to NTP server
UDP 1812/1645/1813/1646 | ISE | Any | Cisco Wireless Controller | From ISE to Cisco Wireless Controller for RADIUS
ICMP | ISE | ICMP | Cisco Wireless Controller | From ISE to Cisco Wireless Controller for troubleshooting

⁴ Note: High availability and profiling traffic are not included in this table.
Table 17: DHCP Server Traffic
Installation of or upgrade to Cisco DNA Center 1.3.3.0 checks to see if Cisco ISE is
configured as an authentication and policy (AAA) server. If the correct version of
Cisco ISE is already configured, you can start migration of group policy data from
Cisco ISE to Cisco DNA Center.
If Cisco ISE is not configured, or if the required version of Cisco ISE is not
present, Cisco DNA Center installs, but Group Based Policy is disabled. You
must install or upgrade Cisco ISE and connect it to Cisco DNA Center. You can
then start the data migration.
Cisco DNA Center data present in the previous version is preserved when you
upgrade. The data migration operation merges data from Cisco DNA Center and
Cisco ISE. If the migration encounters a conflict, preference is given to data
from Cisco ISE.
If Cisco DNA Center becomes unavailable, and it is imperative to manage
policies before Cisco DNA Center becomes available once more, there is an
option in Cisco ISE to override the Read-Only setting.
This allows you to make policy changes directly in Cisco ISE. After Cisco DNA Center is available again, you must disable the Read-Only override on Cisco ISE and re-synchronize the policy data on the Cisco DNA Center Group Based Access Control Settings page.
Only use this option when absolutely necessary, since changes made directly in
Cisco ISE are not propagated to Cisco DNA Center.
• Authorization and Policy Server Information: If you are using Cisco ISE as your
authentication and policy server, you will need the same information listed in the
previous bullet, plus the ISE CLI user name, CLI password, server FQDN, a
subscriber name (such as cdnac), the ISE SSH key (optional), the protocol choice
(RADIUS or TACACS), the authentication port, the accounting port, and retry and
timeout settings.
If you are using an authorization and policy server that is not Cisco ISE, you will
need the server's IP address, protocol choice (RADIUS or TACACS),
authentication port, accounting port, and retry and timeout settings.
This information is required to integrate Cisco DNA Center with your chosen
authentication and policy server, as explained in Configure Authentication and
Policy Servers.
• SNMP Retry and Timeout Values: These are required to set up device polling and monitoring, as explained in Configure SNMP Properties.
When you install an appliance, follow these guidelines:
• Plan your site configuration and prepare the site before installing the appliance. See the Cisco UCS Site Preparation Guide for help with the recommended site planning and preparation tasks.
• Ensure that there is adequate space around the appliance to enable servicing, and for adequate airflow. The airflow in this appliance is from front to back.
• Ensure that the site's air-conditioning meets the thermal requirements listed in Environmental Specifications.
• Ensure that the cabinet or rack meets the requirements listed in Review the Rack Requirements.
• Ensure that the site's power meets the requirements listed in Power Specifications. If available, use a UPS to protect against power failures.
After initial power-up, all the ports should have their Link Status and Link Speed LEDs showing as off.
After network settings are configured and tested using either the Maglev Configuration wizard (see Configure the Master Node Using the Maglev Wizard and Configure Add-On Nodes Using the Maglev Wizard) or the browser-based configuration wizard (see Configure the Master Node Using the Browser-Based Wizard and Configure Add-On Nodes Using the Browser-Based Wizard), the Link Status and Link Speed LEDs for all cabled ports should be green. The LED for all uncabled ports should remain unchanged.
If you see LEDs with colors other than those shown above, you may have a problem condition. See
Front and Rear Panels for details on the likely causes of the status. Be sure to correct any problem
conditions before proceeding to configure the appliance.
Prerequisites - Understanding the requirements
Cisco DNA Center Appliance (DN2-HW-APL)
Installation on a VM or custom UCS server is not supported
* Required only if the Cloud Update server is not reachable via the Enterprise Network
Prerequisites - IP Address Requirements
Additional Settings for Configuration Wizard
DNS Server IP Address (1 required, 2+ recommended)
NTP Server IP Address (1 required, 2+ recommended)
Proxy Server IP Address (required if direct internet access is not available – http proxy only)
Proxy server port if required
Device Name | Enterprise IP Address | OOB CIMC Address | Cluster Link Address | Service Subnet Address
2 | 10.224.92.102 | | |
3 | 10.224.92.103 | | |
Virtual IP | 10.224.92.100 | | |
Prerequisites - External Connectivity Requirements
The following URLs need to be accessible from the DNA Center for various
operations
• 10Gb port for the Enterprise Network.
• Second embedded 1Gb Ethernet controller port: optional, intended for connecting to an isolated network with a static route for cloud services.
• 10Gb port for the intra-cluster interface. Leave it unconnected in standalone mode initially.
• First embedded 1Gb Ethernet controller port: reserved for the dedicated management network.
• The 10Gb interfaces must be connected to switches in switch-port mode.
Best Practices
• Always treat DNA-C as a cluster: plan for a "cluster". A standalone box is a "single node cluster".
• The cluster and services subnets need not be routable in the enterprise network; just ensure they don't clash.
• Changing the cluster subnet and service subnet is not supported yet.
• Reserve IP addresses!
Diagram: Cloud updates: DNA packages flow from the production catalog to the DNA node(s).
Plan for how long it will take to get this done before installation:
• Is the box going to be installed by the same person who will configure it?
• Will the installation happen at a lab first and then move to production? Identify caveats.
• Is the box installed in a location where physical access is not easy? How will access to the system be provided (L3/terminal systems)?
• How long will it take to rack and stack? Who will troubleshoot any physical connectivity problems?
• Gather information on how to connect to the internet: proxy information (IPs, usernames/passwords), which firewall ports to open, DNS additions, etc.
DNA Center Installation
Setting Up Passwords
NTP Check
Finish Installation
Register CCO-Id
Validate access to Proxy
Setting up DNAC – Day 0
Terms and Conditions
Mandatory Acceptance required
Setting up DNAC – Day 0
Ready to Go
Initiate Device discovery
Provision, Monitor and Troubleshoot devices
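Once Day-0 setup is complete, inventory and discovery can also be driven through the Cisco DNA Center Intent API. The sketch below is a minimal illustration, not the product workflow: the cluster VIP, credentials, and certificate handling are placeholders, and you should verify the endpoints against the API documentation for your Cisco DNA Center version:

```python
import requests
from requests.auth import HTTPBasicAuth

DNAC = "https://10.224.92.100"  # cluster virtual IP from the worksheet above

# Obtain a short-lived token using the admin credentials set during first-time setup.
token = requests.post(
    f"{DNAC}/dna/system/api/v1/auth/token",
    auth=HTTPBasicAuth("admin", "YourPassword"),  # placeholder credentials
    verify=False,  # lab only: the appliance ships with a self-signed certificate
    timeout=30,
).json()["Token"]

# List the devices currently in the inventory.
devices = requests.get(
    f"{DNAC}/dna/intent/api/v1/network-device",
    headers={"X-Auth-Token": token},
    verify=False,
    timeout=30,
).json()["response"]

for dev in devices:
    print(dev.get("hostname"), dev.get("managementIpAddress"), dev.get("reachabilityStatus"))
```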
Setting up DNAC – Day 0
Login
Design, Automate and Assure your Network