Cisco Software Defined Access Networking PDF
Contents
Cover Page
Copyright Page
About the Authors
Introduction
Chapter 1. Today's Networks and the Drivers for Change
    Networks of Today
    Common Business and IT Trends
    Introduction to Multidomain
    Summary
Chapter 2. Introduction to Cisco Software-Defined Access
    Software-Defined Networking
    Cisco Software-Defined Access
    Summary
Chapter 3. Introduction to Cisco DNA Center
    Summary
Chapter 4. Cisco Software-Defined Access Fundamentals
    Network Topologies
    Cisco Software-Defined Access Underlay
    Wireless LAN Controllers and Access Points in Cisco Software-Defined Access
    Shared Services
    Transit Networks
    Fabric Creation
    Summary
    References in This Chapter
Chapter 5. Cisco Identity Services Engine with Cisco DNA Center
    Policy Management in Cisco DNA Center with Cisco ISE
    Group-Based Access Control
    Segmentation with Third-Party RADIUS Server
    Summary
    References in This Chapter
Glossary
Index
About This eBook
ePUB is an open, industry-standard format for eBooks.
However, support of ePUB and its many features varies across
reading devices and applications. Use your device or app
settings to customize the presentation to your liking. Settings
that you can customize often include font, font size, single or
double column, landscape or portrait mode, and figures that
you can click or tap to enlarge. For additional information
about the settings and features on your reading device or app,
visit the device manufacturer’s Web site.
Cisco Press
Cisco Software-Defined Access
Jason Gooley
Roddie Hasan
Srilatha Vemula
Published by:
Cisco Press
Hoboken, NJ
ISBN-13: 978-0-13-644838-9
ISBN-10: 0-13-644838-0
Trademark Acknowledgments
All terms mentioned in this book that are known to be
trademarks or service marks have been appropriately
capitalized. Cisco Press or Cisco Systems, Inc., cannot attest to
the accuracy of this information. Use of a term in this book
should not be regarded as affecting the validity of any
trademark or service mark.
Special Sales
For information about buying this title in bulk quantities, or
for special sales opportunities (which may include electronic
versions; custom cover designs; and content particular to your
business, training goals, marketing focus, or branding
interests), please contact our corporate sales department at
[email protected] or (800) 382-3419.
For government sales inquiries, please contact
[email protected].
Feedback Information
At Cisco Press, our goal is to create in-depth technical books
of the highest quality and value. Each book is crafted with care
and precision, undergoing rigorous development that involves
the unique expertise of members from the professional
technical community.
Composition: codeMantra
Americas Headquarters
Cisco Systems, Inc.
San Jose, CA
Europe Headquarters
Cisco Systems International BV Amsterdam,
The Netherlands
Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax
numbers are listed on the Cisco Website at www.cisco.com/go/offices.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco
and/or its affiliates in the U.S. and other countries. To view a list of Cisco
trademarks, go to this URL: www.cisco.com/go/trademarks. Third party trademarks
mentioned are the property of their respective owners. The use of the word partner
does not imply a partnership relationship between Cisco and any other company.
(1110R)
About the Authors
Jason Gooley, CCIE No. 38759 (RS and SP), is a very
enthusiastic and spontaneous person who has more than 25
years of experience in the industry. Currently, Jason works as a
technical evangelist for the Worldwide Enterprise Networking
sales team at Cisco Systems. Jason is very passionate about
helping others in the industry succeed. In addition to being a
Cisco Press author, Jason is a distinguished speaker at
CiscoLive, contributes to the development of the Cisco CCIE
and DevNet exams, provides training for Learning@Cisco, is
an active CCIE mentor, is a committee member for the Cisco
Continuing Education Program (CE), and is a program
committee member of the Chicago Network Operators Group
(CHI-NOG), www.chinog.org. Jason also hosts a show called
MetalDevOps. Jason can be found at www.MetalDevOps.com,
@MetalDevOps, and @Jason_Gooley on all social media
platforms.
Roddie Hasan:
Srilatha Vemula:
Roddie:
Thank you to Brett, Marianne, and all the staff at Cisco Press
for putting up with my nonsense as I struggled to transition
from an in-person communication style to a writing one. I
learned a lot during this process thanks to your expertise and
guidance.
Thank you to Jason Gooley for setting up this project for us,
for your patience with chapter delay after chapter delay, and
for being an amazing co-author along with Srilatha Vemula.
Thank you to all the amazing and sharp people at Cisco that I
have worked with for the past 12 years for helping me grow in
this field and helping get me to a point where I was ready to
write a book.
Srilatha:
Reader Services
Register your copy at
www.ciscopress.com/title/9780136448389 for convenient
access to downloads, updates, and corrections as they become
available. To start the registration process, go to
www.ciscopress.com/register and log in or create an account*.
Enter the product ISBN 9780136448389 and click Submit.
When the process is complete, you will find any available
bonus content under Registered Products.
*Be sure to check the box that you would like to hear from us
to receive exclusive discounts on future editions of this
product.
Icons Used in This Book
Command Syntax Conventions
The conventions used to present command syntax in this book
are the same conventions used in the IOS Command
Reference. The Command Reference describes these
conventions as follows:
This book can also help candidates prepare for the Cisco SD-
Access portions of the Implementing Cisco Enterprise
Network Core Technologies (ENCOR 350-401) certification
exam, which is part of the CCNP Enterprise, CCIE Enterprise
Infrastructure, CCIE Enterprise Wireless, and Cisco Certified
Specialist – Enterprise Core certifications.
WHO SHOULD READ THIS BOOK?
The target audience for this book is network professionals who
want to learn how to design, implement, and adopt Cisco SD-
Access in their environment. This book also is designed to
help readers learn how to manage and operate their campus
network by leveraging Cisco DNA Center.
BOOK STRUCTURE
The book is organized into nine chapters:
Chapter 1, “Today’s Networks and the Drivers for Change”: This
chapter covers the most common trends and challenges seen in the
campus area of the network. This chapter also describes some of the
benefits and key capabilities of automation in general, as well as the
associated return on investment in terms of time and risk.
Networks of Today: This section covers the technologies that are driving
changes in the networks of today.
Cloud Trends and Adoption: This section covers the trends and
challenges of cloud adoption.
NETWORKS OF TODAY
The IT industry is constantly changing and evolving. As time
goes on, an ever-increasing number of technologies put a
strain on the network. New paradigms form as older ones are
shifted away from. New advances are being developed and
adopted within the networking realm to provide faster
innovation and the ability to adopt relevant technologies in a
simplified way. This drives the need for more intelligence and
the capability to leverage data from connected and distributed
environments such as the campus, branch, data center, and
WAN. Doing so allows data to be used in more interesting and
powerful ways than ever seen in the past. Some of the
advances driving these outcomes are
Cloud services
Virtualization
Mobile devices, BYOD, and guest access are straining the IT staff.
With the business and IT trends covered thus far still in mind,
it is important to translate these trends into real challenges that
organizations are facing and put them into IT vernacular. As
mentioned previously, the network is encountering pressure
like never before. This is forcing IT teams to look for ways to
alleviate that pressure. Organizations are also looking for ways
to improve the overall user and application experience with
what they currently own while also driving cost down. Lack of
control over visibility and application performance, and
keeping up with the ever-growing security attack surface are
also contributing to organizations looking for a better way
forward. In addition, organizational silos have caused many
organizations to not be able to achieve the benefits from some
of these newer technologies. Breaking down silos to work
toward a common goal for the business as a whole is required
for the business to take full advantage of what some of these
software-defined advancements have to offer.
HIGH-LEVEL DESIGN CONSIDERATIONS
Despite the complexity of a majority of today's networks, most
can be classified into a couple of categories: redundant and
nonredundant. Typically, redundancy leads to
increased complexity. Often, the simplest of networks do not
plan for failures or outages and are commonly single-homed
designs with multiple single points of failure. Networks can
contain different aspects of redundancy. When speaking
strictly of the campus LAN portion of the environment, it may
include redundant links, controllers, switches, and access
points. Table 1-1 lists some of the common techniques that are
introduced when dealing with redundancy.
Filtering
Simplifies operations
Lowers latency
Redundancy can take many different forms. VSS is used for
much more than just redundancy. It helps with certain
scenarios in a campus design, such as removing the need for
stretched VLANs and loops in the network. Figure 1-2
showcases an example of a campus environment before and
after VSS and depicts the simplification of the topology.
Gone are the days of hunting through log files and debugging
traffic to determine the issue that caused a network outage.
The amount of data that runs through these networks, and that
must be sorted through to chase down an issue, is increasing
exponentially. As a result, manually sifting through
information to get to the root cause of an issue is more
difficult than ever before. Organizations rely on information
relevant to what they are looking for; otherwise, the data is
useless. For example, if a user couldn't get on the wireless
network last Tuesday at 3 p.m., and the logs are overwritten or
filled with non-useful information, how does this help the
network operations staff troubleshoot the issue at hand? It
doesn't. It wastes time, which is one of the most precious
resources for network operations staff. The alternative is using
analytics and insights to direct network operators to the right
place at the right time to take the right action. This is part of
what Cisco DNA Assurance does as part of intent-based
networking.
Increased availability
Reduced complexity
Simplified design
INTRODUCTION TO MULTIDOMAIN
A common trend that is arising in the IT industry is to generate
and store data in many areas of the network. Traditionally, a
majority of the data for a business was stored in a centralized
data center. With the influx of guest access, mobile devices,
BYOD, and IoT, data is now being generated remotely in a
distributed manner. In response, the industry is shifting from
data centers to multiple centers of data. Consequently, simple,
secure, and highly available connectivity is a must to allow for
an enhanced user and application experience. The other
big piece to multidomain is having a seamless policy that can
go across these multiple centers of data. An example of this is
policy that extends from the campus environment across the
WAN and into the data center and back down to the campus.
This provides consistency and deterministic behavior across
the multiple domains. Figure 1-11 illustrates a high-level
example of sharing policy between a campus branch location
and a data center running Cisco Application Centric
Infrastructure (ACI).
Introduction to Cisco
Software-Defined Access
Challenges with Today’s Networks: This section covers the trends and
challenges of today’s campus networks and how to alleviate them using a
fabric architecture.
Network Access Control: This section goes into detail about network access
control (NAC) and its role in a security-driven infrastructure.
SOFTWARE-DEFINED
NETWORKING
Cisco started the journey toward digitizing networks in 2015
with the vision of creating a network that is flexible and agile.
Cisco Digital Network Architecture (Cisco DNA) provides a
roadmap to digitization and a path to realize the immediate
benefits of network automation, assurance, and security. The
ultimate goal is to have an IT network that addresses the
challenges of modern networks discussed previously. Security
should be embedded into the network with new innovations
that leverage simplified and consistent policies. These policies
are then mapped into business intent, providing a faster way of
implementing changes through a centralized approach. Most
importantly, the network should be constantly learning by
using analytics, providing visibility and proactively
monitoring and reporting issues for the IT operations staff
from a centralized management pane.
Software-defined networking makes it possible to design and
build networks by decoupling the control and forwarding
planes. A separate control plane creates a programmable
network environment to abstract the underlying infrastructure
for applications and services. Through abstraction, one can
achieve a common network policy, quick implementation of
network services, and reduced complexity with a centralized
controller. Cisco DNA is an intent-based network binding
business context to policy managed by a controller that has a
central view of the network domain. Cisco DNA creates a
network focused on intent and security that looks like a logical
switch for the applications and services. This logical switch
can be programmed to meet the demands of business changes
with respect to network changes, security policy changes
based on mobility, and continuous security threats. Cisco DNA
is scalable for future growth, thereby reducing overall IT costs
and providing a faster time to market.
Cisco DNA Center: The controller used for creating Cisco Software-
Defined Access for campus networks.
CISCO SOFTWARE-DEFINED
ACCESS
This section unveils the building blocks of Cisco SD-Access
and covers its associated benefits in the campus environment.
This section also introduces how Cisco DNA Center makes
Cisco SD-Access a reality. Cisco SD-Access is the Cisco
digital network evolution transforming traditional campus
LAN designs to intent-driven, programmable networks. The
two main components of Cisco SD-Access are Cisco Campus
Fabric and Cisco DNA Center. Cisco DNA Center offers
automation and assurance to create and monitor the Cisco
Campus Fabric. Figure 2-2 shows Cisco SD-Access at a high
level. Each component will be discussed as part of the “Cisco
SD-Access Roles” section.
The Cisco Campus Fabric uses LISP as the control plane to
map and resolve endpoint identifiers to routing locators,
VXLAN-GPO as an overlay to encapsulate the original packet,
and the underlay to transport the encapsulated packet. Because
the SGT is carried in the VXLAN header, it can be used to
enforce policies based on roles. Cisco SD-Access adds an
additional plane, called the policy plane, which uses Cisco
TrustSec and ISE, discussed in the upcoming section. Cisco ISE maintains the
SGT-based policies and pushes them on the network
enforcement points. Cisco DNA Center orchestrates the SGT
policies on ISE and the enforcement of the policies to the
network devices. With SGT-based enforcement, the security
policy is attached to the user instead of the location or the IP
address of the user. Figure 2-5 illustrates the management of
policies from Cisco DNA Center.
Figure 2-5 Policy Push from Cisco DNA Center
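The VXLAN-GPO encapsulation mentioned above carries the SGT in a 16-bit Group Policy ID field of the 8-byte VXLAN header. As a rough sketch only (per the VXLAN Group Policy draft layout, not Cisco's implementation), the header could be packed like this:

```python
import struct

def vxlan_gpo_header(vni: int, sgt: int) -> bytes:
    """Pack an 8-byte VXLAN-GPO header: flags, Group Policy ID (SGT), VNI."""
    flags = 0x88      # G bit (Group Policy ID present) + I bit (VNI valid)
    reserved = 0x00
    # Layout: flags | reserved | 16-bit SGT | 24-bit VNI + 8 reserved bits
    return struct.pack("!BBHI", flags, reserved, sgt, vni << 8)

header = vxlan_gpo_header(vni=8188, sgt=16)
```

Because the SGT rides in the header itself, any fabric node decapsulating the packet can read the source group without consulting the endpoint's IP address.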
2. Edge1 then registers the client's IP address, MAC address, and location
(Edge1 loopback) with the control plane nodes using LISP.
3. Client1 initiates traffic to Client2 on Edge2. Edge1 does a mapping lookup
with the control plane node for the location of Client2. The control plane node
provides the location (e.g., loopback of Edge2).
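The registration and lookup steps above can be modeled as a simple map table. This is a toy sketch with invented names and addresses, not a LISP implementation:

```python
class ControlPlaneNode:
    """Toy LISP map server: maps a client EID (IP) to the RLOC (edge loopback)."""
    def __init__(self):
        self._eid_to_rloc = {}

    def register(self, eid: str, rloc: str) -> None:
        # Step 2: the fabric edge registers the client's address and its own loopback
        self._eid_to_rloc[eid] = rloc

    def lookup(self, eid: str):
        # Step 3: an edge queries the control plane for the destination's location
        return self._eid_to_rloc.get(eid)

cp = ControlPlaneNode()
cp.register("10.1.1.10", "192.168.255.1")   # Client1 behind Edge1
cp.register("10.1.2.20", "192.168.255.2")   # Client2 behind Edge2
location = cp.lookup("10.1.2.20")           # Edge1 resolving Client2's edge
```

The key design point is that edges never need to know the full topology; they ask the control plane node per destination and cache the answer.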
INTRODUCTION TO CISCO
IDENTITY SERVICES ENGINE
The previous section discussed the need for network access
control in any networking infrastructure for secure user access,
visibility, and business and security dynamics. Cisco ISE is a
policy engine for controlling endpoint access and network
device administration for all methods of access to a network.
This includes wired, wireless, and remote VPN access. ISE
enables an administrator to centrally manage access policies
for wired, wireless, and VPN endpoints in the network.
Accounting: Tracks the services that endpoints are accessing and the
amount of network resources they are consuming
                     RADIUS   TACACS+
Command accounting   No       Yes
In Table 2-1, “Network access” refers to a user or endpoint
trying to get access to the network as a means to a destination
(reaching an internal database or Google.com), and “Device
administration” is when an administrator or a network operator
is trying to get access to the network device to view or make
configuration changes on the device.
Key takeaways from Table 2-1 are that RADIUS uses UDP
and is used for network access, whereas TACACS+ uses the
connection-oriented TCP protocol and is primarily used for
device administration. Enterprise networks sometimes require
accounting as granular as per command when administrators
or network operators log in to the network devices to make
changes on the devices. TACACS+ is the only protocol that
can support command authorization and command accounting.
Best practice is to use TACACS+ for device administration
because of the command authorization and accounting
capability for a granular audit trail.
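To illustrate why per-command authorization matters, the decision a TACACS+ server makes for each command can be sketched as a policy lookup. The role names and command sets below are invented for the example; the real policy lives on the TACACS+ server, such as Cisco ISE:

```python
# Hypothetical roles and permitted command keywords, for illustration only.
ROLE_COMMANDS = {
    "netops-readonly": {"show", "ping", "traceroute"},
    "netops-admin": {"show", "ping", "traceroute", "configure", "reload"},
}

def authorize_command(role: str, command_line: str) -> bool:
    """Permit the command only if the role's policy allows its first keyword."""
    keyword = command_line.split()[0]
    return keyword in ROLE_COMMANDS.get(role, set())

allowed = authorize_command("netops-readonly", "show running-config")
denied = authorize_command("netops-readonly", "configure terminal")
```

Each authorized (or denied) command would also be logged via command accounting, producing the granular audit trail described above.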
Device administration
Guest access
Profiling
Compliance
Secure Access
With growing security concerns, clients need to be authorized
appropriately before they are provided access to corporate
resources. IT security policies require users such as employees
and contractors to be authenticated using their corporate-
provided credentials before gaining access to the corporate
resources. Older devices that require network access, such as
IP phones, IP cameras, and access points, are sometimes not
capable of doing authentication but should still be confirmed
as corporate assets before being given access to the
environment. Secure access capability on Cisco ISE ensures
that any user or device connecting to the network is
authenticated first and, upon successful authentication, the
endpoint is granted the level of access to the corporate
resources as per the security policies enforced by the
organization. IT networks are moving toward two-factor
authentication to alleviate issues with compromised user
accounts by enforcing a second level of authentication using
soft tokens or push notifications. ISE can be leveraged in these
workflows to implement two-factor authentication.
Authentication and authorization on Cisco ISE can be
performed against an internal database or an external database.
External databases supported with ISE include Microsoft
Active Directory, Lightweight Directory Access Protocol
(LDAP) servers, RSA SecurID, Microsoft SQL Server, and
Duo, among others. ISE offers various options for
authorization, such as VLAN assignment, downloadable
ACLs, redirection ACLs, and Scalable Group Tags to the
endpoint. (The “Segmentation with Cisco TrustSec” section
covers SGTs in detail.)
Device Administration
To maintain the network infrastructure, administrators often
access the infrastructure devices for monitoring, configuration
changes, software upgrades, or troubleshooting purposes.
Differentiated access and accountability play a vital role
whenever infrastructure devices are being accessed.
Unauthorized users should not be allowed to log in to the
infrastructure devices, because any unauthorized changes
could degrade the network or bring it down. Device
administration ensures that role-based access is enforced to the
IT personnel accessing the infrastructure. Cisco ISE provides
role-based access control for device administration using
RADIUS and TACACS+. However, TACACS+ protocol
primary use is device administration because it brings in the
capability to perform command authorization to authorize
every command before it’s executed on the device. With Cisco
ISE acting as the TACACS+ server for device administration,
security policies can be enforced to push privilege level and
limited command access through command authorization for
administrators on network devices. Unlike RADIUS,
TACACS+ offers command accounting to log the commands
corresponding to the user, which can be helpful for auditing
and compliance requirements.
Guest Access
When guests or visitors outside the company would like to
reach the Internet or broadly available services on the
company network, IT policies often require validating the
guest first prior to providing access. In practice, a “guest” is
loosely defined as a noncorporate user on a noncorporate
device. These users predominantly expect wireless access to
the Internet and rarely more. Guest access provided by Cisco
ISE allows enterprises the most flexibility for wireless (and
wired) guest access, with several templated workflows for
quick and easy deployment. A typical guest user in a network
would experience the following flow, the packet flow for
which is detailed in Figure 2-12:
1. The guest user connects to a wired port or wireless guest SSID. ISE learns the
MAC address of the guest and pushes the user to the guest portal, hosted on
ISE. The guest is restricted from going anywhere but this portal.
2. The guest is redirected to the guest portal to enter login details. ISE validates
the user and initiates a Change of Authorization (CoA) to reauthorize the user,
as the user is now a registered guest.
Hotspot Guest portal: Guest users are redirected to the ISE on-box
Hotspot Guest portal first and are provided Internet access by accepting
an Acceptable Use Policy (AUP). No credentials are needed. This type of
access is commonly used for a retail guest-user experience.
Self-Registered Guest portal: Guest users are presented with a self-
service page to register by providing required information. Guest account
credentials are sent by email, print, or SMS for login to access the
Internet. ISE supports social media login using Facebook credentials.
Profiling
Gone are the days of a network that simply allows printers,
desktops, and phones to communicate. Modern networks see
the number of types of devices increasing as quickly as the
number of devices themselves. This section covers profiling
services offered by Cisco ISE that corporate networks can
benefit from to help them understand and provision new
devices looking for network services. Context is the king in a
shifting environment where network access is requested as
frequently from a physical security officer as from a network
admin. Classifying types of endpoints on the network helps
businesses understand trends and aids in building a better
segmentation strategy for future, similar endpoints. Effective
authorization privilege can limit attack surfaces, prepare for
future growth, and enforce security policies in case of any
suspicious activity.
DHCP SPAN
NetFlow
pxGrid
SNMP Query
RADIUS
DNS probe: The DNS probe does a reverse DNS lookup for IP addresses
learned by other means. Before a DNS lookup can be performed, one of
the following probes must be started along with the DNS probe: DHCP,
DHCP SPAN, HTTP, RADIUS, or SNMP for IP-to-MAC address
binding.
HTTP probe: For flows where sessions are redirected to a Cisco ISE
portal, such as the Hotspot Guest portal, the HTTP request-header field
contains a User-Agent attribute, which includes application, vendor, and
OS information of the endpoint. Enabling HTTP probe on Cisco ISE uses
the HTTP attributes to profile the endpoint.
SNMP probe: SNMP probe consists of two probes: SNMP Trap and
SNMP Query. SNMP Trap probe alerts ISE profiling services to the
presence (connection or disconnection) of a network endpoint by
configuring ISE as the SNMP server on the infrastructure device. When
an SNMP link up/link down trap is sent to ISE, ISE triggers an SNMP
Query to collect CDP, LLDP, and ARP data for the endpoint. The SNMP
probe is not needed when the RADIUS probe is enabled because the
RADIUS probe triggers an SNMP Query as well.
Profiling Operation
The Cisco ISE profiling service uses the profiling probes to
collect attributes for the endpoints and then matches them to
profiling conditions that will be used in profiling policies.
Figure 2-16 shows an example of a Cisco Provided endpoint
profiling policy named Microsoft-Workstation containing four
conditions. When the endpoint matches a condition in a
profiling policy, a certainty factor (CF) or a weight associated
with that condition is assigned to the endpoint for that profile.
Although conditions may match in multiple profiles, the
profile for which the endpoint has the highest cumulative CF,
or Total Certainty Factor (TCF), is the one assigned to the
endpoint.
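The certainty-factor math described above can be sketched as follows. The attribute names, profiles, and CF values here are invented for illustration; real ISE conditions are far richer:

```python
# Each profile is a list of (attribute, expected_value, certainty_factor) conditions.
PROFILES = {
    "Microsoft-Workstation": [
        ("dhcp-class-identifier", "MSFT 5.0", 10),
        ("user-agent-os", "Windows", 10),
    ],
    "Apple-Device": [
        ("oui-vendor", "Apple, Inc.", 10),
    ],
}

def total_certainty(endpoint: dict, conditions) -> int:
    """Sum the CF of every condition the endpoint's attributes match."""
    return sum(cf for attr, value, cf in conditions if endpoint.get(attr) == value)

def assign_profile(endpoint: dict) -> str:
    """Assign the profile with the highest Total Certainty Factor (TCF)."""
    return max(PROFILES, key=lambda name: total_certainty(endpoint, PROFILES[name]))

endpoint = {"dhcp-class-identifier": "MSFT 5.0", "user-agent-os": "Windows"}
profile = assign_profile(endpoint)
```

Note that a single matching condition in several profiles is fine; only the cumulative score per profile decides the assignment.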
Cisco ISE uses the profiling feed service to keep its profiling
database up to date with the latest Organizationally Unique
Identifier (OUI) information from the IEEE. Offline profiler
feed updates are available for download and import on ISE,
especially for environments where there is no Internet
reachability from Cisco ISE.
from the feed server are tagged as “Cisco Provided” policies.
Full BYOD: Does the corporate policy involve allowing only personal
devices using 802.1X profiles and certificates/passwords similar to the
corporate managed assets? If the answer is yes, full BYOD is the solution
that places the BYOD device in an automated workflow to register the
personal device, provision an 802.1X profile, and install certificates
(optional) when the device is first connected to the network. An
automated workflow requires minimal user education and less
administrative overhead to enable network access on an unmanaged asset.
When the BYOD user connects to the network the first time
through either a wired or a wireless connection, the user’s web
browser is redirected to the Cisco ISE centrally managed
BYOD portal page to start registering the device. This is
followed by provisioning the 802.1X profiles to configure the
supplicant on the device to comply with corporate policies.
The next step typically installs a user certificate issued by
either the Cisco ISE Internal Certificate Authority (ISE CA) or
an external CA if the corporate policy involves connecting
using certificates as credentials. At the end of the workflow, a
BYOD user is provisioned with the certificates and 802.1X
supplicant profile required by IT. The user is automatically
connected using certificates with the 802.1X profile and given
network access.
Compliance
Authenticating the user before providing access to the network
is crucial for security. Networks need more than credentials to
confirm the user is safe to be authorized to access the network,
such as ensuring that the user’s device has the latest antivirus/
antimalware updates, firewalls enabled, Windows security
patches installed, disk encryption enabled, and so on. These
endpoint operating criteria are deemed the “posture” of the
client attempting to access the network. The Cisco ISE
Compliance feature brings in the capability to perform posture
checks on the endpoints, workstations, or mobile devices to
confirm they are compliant with corporate security policies
before granting network access.
Antimalware Conditions
Antispyware Conditions
Antivirus Conditions
Application Conditions
Compound Conditions
File Conditions
Registry Conditions
Service Conditions
USB Conditions
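A posture decision ultimately reduces to checking a set of required conditions against what the endpoint agent reports. A minimal sketch, with condition names invented for illustration:

```python
# Required posture conditions an endpoint must satisfy (hypothetical names).
REQUIRED_POSTURE = {
    "antivirus_up_to_date": True,
    "firewall_enabled": True,
    "disk_encryption_enabled": True,
}

def is_compliant(reported: dict) -> bool:
    """Compliant only if every required condition matches what the agent reports."""
    return all(reported.get(cond) == expected
               for cond, expected in REQUIRED_POSTURE.items())

# A laptop with disk encryption disabled fails the posture check.
laptop = {"antivirus_up_to_date": True, "firewall_enabled": True,
          "disk_encryption_enabled": False}
result = is_compliant(laptop)
```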
The posture for mobile devices is similar to the posture for
workstations except that there is no posture agent on the
mobile device. Mobile devices connecting to the network
through the full BYOD flow and corporate-owned mobile
devices can be postured by Cisco ISE if the organization has a
Cisco-supported Mobile Device Manager (MDM). ISE can
leverage MDM information to enforce policy during network
access. BYOD devices need to be registered with ISE, which
in turn registers the device with the MDM. The MDM checks
the compliance status of the mobile endpoint and returns the
status to ISE to grant network access if the status is Compliant.
The following MDM attributes can be used to create
compliance policies for mobile devices:
Jailbreak status
Manufacturer
Model
IMEI
Serial number
OS version
Phone number
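A compliance policy over these MDM attributes can be sketched as follows; the attribute keys and the minimum OS version are assumptions for illustration, not the ISE or MDM API:

```python
# Illustrative MDM compliance check (not the actual ISE/MDM integration):
# a jailbroken device fails outright, and the OS version must meet a minimum.
def mdm_compliant(attrs: dict, min_os: tuple = (14, 0)) -> bool:
    if attrs.get("jailbreak_status"):
        return False  # jailbroken devices are never compliant
    os_version = tuple(int(part) for part in attrs["os_version"].split("."))
    return os_version >= min_os

phone = {"jailbreak_status": False, "manufacturer": "Apple",
         "model": "iPhone 12", "imei": "356938035643809",
         "os_version": "15.2"}
print(mdm_compliant(phone))  # True
```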
Cisco ISE can be installed in one of two deployment models:
Standalone deployment
Distributed deployment
Standalone Deployment
In a standalone deployment, all the Cisco ISE personas (PAN,
PSN, MnT, pxGrid) are running on one node. This can be
turned into a basic two-node deployment with another node
with all the personas enabled for redundancy. Scale numbers
will remain the same for a standalone or two-node
deployment. Figure 2-20 provides a visual of the ISE
deployment in standalone versus two-node deployment
models.
Figure 2-20 Standalone and Two-Node Cisco ISE
Deployments
Distributed Deployment
In a distributed deployment, one or more personas are
distributed on different nodes. There are two types of
distributed deployments: hybrid and dedicated. In hybrid
deployments, PAN and MnT personas are combined on two
nodes—one being the primary node and the other being used
for redundancy—and dedicated PSNs handle policy decisions.
Because PSNs are the nodes performing policy decisions, this
deployment model not only adds scale, increasing the number
of concurrent sessions the deployment supports, but also
delivers better logging performance than a standalone
deployment. Hybrid deployments allow a
maximum of five PSNs in the deployment. Figure 2-21
provides a visual of ISE hybrid deployment.
Cisco TrustSec comprises three functions:
Classification
Propagation
Enforcement
Classification
Classification is the ability to assign an SGT to a user,
endpoint, or device connected to the network. SGTs can be
assigned dynamically or statically. Dynamic SGT assignment
is done by Cisco ISE based on the identity, profile, role, and
overall context of the endpoint: a client connecting to the
campus network using 802.1X, MAB, or web authentication is
authenticated by ISE and assigned an SGT per the policies
defined on ISE. When endpoint authentication is not possible,
static SGT classification is necessary. Static SGT assignments
typically apply to static devices, such as servers, that do not
perform authentication. A static SGT can be mapped to a Layer 2
interface, VLAN, subnet, or Layer 3 switched virtual interface
(SVI) instead of relying on assignment from ISE. Static IP-to-
SGT mapping or subnet-to-SGT mapping can be created on
ISE so that it can be pushed to the SGT-capable device instead
of configuring the mapping on each device. The classification
is propagated into the network for policy enforcement. Figure
2-23 summarizes the TrustSec classifications available.
Propagation
Once the SGTs are assigned, they need to be propagated into
the network, and the final goal of TrustSec is to enforce
policies based on the source and destination SGTs. An SGT is
a 16-bit value assigned either statically or dynamically to a
user or device. Cisco TrustSec has two methods to propagate
SGTs into the network infrastructure:
Inline tagging
SGT Exchange Protocol (SXP)
Enforcement
The primary purpose of SGT assignment and propagation is to
use the SGT for enforcement. Enforcement uses source and
destination SGTs, and enforcement policies are defined on ISE
in the form of SGT ACLs (SGACLs). SGACLs are always
based on a source tag and a destination tag. SGACLs on ISE
are visualized as a spreadsheet, as shown in Figure 2-26. The
highlighted box on the left shows that when traffic tagged
Employee, SGT value 4, attempts to reach Contractors, SGT
value 5, an SGACL named Anti_Malware is applied. This
policy is applied at the egress of the enforcement device where
the SGACL has been dynamically downloaded. Enforcement
can be performed on infrastructure devices such as switches,
routers, or firewalls.
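The matrix lookup described above can be modeled in a few lines; the SGT values and SGACL name follow the Figure 2-26 example, while the dictionary structure and default policy name are illustrative assumptions:

```python
# Conceptual model of the SGACL policy matrix: policies are keyed by
# (source SGT, destination SGT), mirroring the spreadsheet view in ISE.
sgacl_matrix = {
    (4, 5): "Anti_Malware",  # Employee (4) -> Contractors (5), per Figure 2-26
}

def egress_policy(src_sgt: int, dst_sgt: int) -> str:
    # Enforcement happens at the egress device, based on both tags.
    return sgacl_matrix.get((src_sgt, dst_sgt), "Default_Policy")

print(egress_policy(4, 5))  # Anti_Malware
```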
SUMMARY
Cisco SD-Access is a Cisco Campus Fabric managed by Cisco
DNA Center. Automation, abstraction, and the ability to
translate business intent directly into network configuration
are needed now more than ever. Cisco is leading the way in
digitizing modern networks while lowering operational
overhead. This chapter provided insights into the control, data,
and policy components of Cisco SD-Access. Every network
needs security, but it is typically complicated to implement.
Cisco TrustSec, through the use of Cisco ISE with Cisco DNA
Center, greatly simplifies the journey of bringing together
networking and security in campus networks.
Chapter 3
Introduction to Cisco DNA Center
History of Automation Tools: This section covers the need for network
automation and some of the common ways automation is done today.
Ansible
Puppet
Chef
SaltStack
tasks:
  - name: Get Login credentials
    include_vars: /mnt/hgfs/vm_shared/ansible/access.yml
Cisco Network Plug and Play (PnP): PnP allows users to quickly and
easily onboard new devices to the network without needing to manually
connect to the devices via a console cable, providing true zero-touch
provisioning (ZTP).
Note
Screenshots in this book were taken with Cisco DNA Center 1.3 and may differ slightly from
the currently available version.
The Design tool is where many of the “day zero” tasks are
performed, starting with defining the layout and visualization
of the network in the Network Hierarchy section (Design >
Network Hierarchy). This section enables users to define
their network hierarchy in a custom way based on how the
company is laid out, using the hierarchical elements areas,
buildings, and floors.
For example, a network operator at a fictional company, ACME, might define a top-level area for each continent where ACME operates:
AMERICAS
APAC
EMEAR
Within each continental area, the network operator can define
areas for each country with an ACME location:
AMERICAS
Brazil
Canada
United States
APAC
Japan
Singapore
EMEAR
Germany
United Kingdom
Within a country, cities that have ACME locations can be defined as subareas:
AMERICAS
Brazil
Canada
United States
Richardson
RTP
San Jose
After designating a city, the next most specific element in the
Design tool is a building. Whereas areas and subareas in Cisco
DNA Center are broad geographical locations that do not have
physical addresses, a building must represent an existing
physical location. When defining a building in the Design tool,
an actual address or latitude/longitude pair is required. When
the network operator enters this information, the map on the
screen automatically zooms in to where the building is located.
Figure 3-3 shows the input window for a building.
After buildings are added, the ACME hierarchy might look like this:
AMERICAS
Brazil
Canada
United States
Richardson
RTP
San Jose
SJC-01
SJC-13
Finally, floors can be added within each building, completing the hierarchy:
AMERICAS
Brazil
Canada
United States
Richardson
RTP
San Jose
SJC-01
SJC-01-1
SJC-13
SJC-13-1
SJC-13-2
Network Settings
After creating the network hierarchy in the Design tool, the
network operator can move to Design > Network Settings to
define standard configuration settings for the network. Among
the settings available to define in the Network Settings section,
shown in Figure 3-7, are the following:
AAA servers
DHCP servers
DNS servers
Syslog servers
SNMP servers
NTP servers
Wireless Deployments
Deploying a wireless network today can be a cumbersome,
multistep process, with the majority of the configuration work
performed in the Cisco Wireless LAN Controller (WLC) GUI
for a specific building or site. A manual deployment involves
logging in to each WLC to manually set wireless parameters
on an ad hoc basis and having to create access point (AP)
groups to keep everything together. The Cisco DNA Center
Design tool can help streamline this process while making it
more efficient and reliable.
Discovery Tool
To take advantage of Cisco DNA Center, devices must be
added to its inventory. New, not yet configured devices can be
automatically configured and added to Cisco DNA Center
through the PnP tool using zero-touch provisioning (ZTP),
which is discussed in Chapter 8. Existing devices can be added
to the inventory manually by using the GUI or by importing a
comma-separated values (CSV) file; however, the fastest way
to add existing devices is to use the Discovery tool, which you
can access by clicking the Tools icon (the three-by-three grid
of squares) at the top right of the Cisco DNA Center home
screen.
The Discovery tool supports three methods for discovering devices:
CDP
IP address range
LLDP
Using CDP or LLDP for discovery can produce unpredictable
results if the discovery is not limited in scope, as even non-Cisco
devices that are not supported by Cisco DNA Center may be
discovered. Performing a discovery by IP address range allows
the greatest control over the scope of the discovery. Provide a
starting IP address and ending IP address, and Cisco DNA
Center attempts to reach devices on every IP address in the
given range. Figure 3-12 shows a typical discovery job
configuration using IP address range.
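The effect of a start/end range can be pictured with Python's ipaddress module; this is only a sketch of range expansion, not how the Discovery tool itself probes devices:

```python
import ipaddress

# Expand a start/end IP range into the individual addresses a discovery
# job would attempt to reach (illustrative sketch).
def expand_range(start: str, end: str) -> list:
    networks = ipaddress.summarize_address_range(
        ipaddress.ip_address(start), ipaddress.ip_address(end))
    return [str(addr) for net in networks for addr in net]

print(expand_range("100.124.0.1", "100.124.0.4"))
# ['100.124.0.1', '100.124.0.2', '100.124.0.3', '100.124.0.4']
```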
Inventory
After a device is discovered, it is displayed in the Inventory
tool, which you can access via the Provision link on the Cisco
DNA Center home screen. When a device is in Inventory,
Cisco DNA Center performs a full “sync” on the device every
6 hours (360 minutes) by default. This sync process connects
to the device and gathers data via show commands and SNMP,
so that Cisco DNA Center has an accurate view of the state of
the device and its configuration. Cisco DNA Center also
connects to network devices on a more frequent basis to
collect data and statistics for Cisco DNA Assurance, which is
covered further in Chapter 9.
For each device, the Inventory tool displays information including the following:
Host name
IP address
Site
Reachability status
Software version
Platform
Serial number
Figure 3-16 shows an example of the information that is
displayed in the Inventory tool.
SUMMARY
This chapter provided a high-level overview of Cisco DNA
Center and its features and the evolution in automation tools
and practices that preceded it. This chapter described many of
the more powerful Cisco DNA Center applications, along with
the associated benefits such as efficiency and lower risk
through automation and the capability to visualize the entire
network in a hierarchical fashion. This chapter also covered
the day-zero tasks and the subsequent workflows that can be
used to discover devices and provision them with a
configuration.
Chapter 4
Cisco Software-Defined
Access Fundamentals
Transit Networks: This section covers the transit options in Cisco SD-
Access for connectivity to the outside world.
Fabric Creation: This section covers the design and creation of a Cisco
SD-Access fabric.
Fabric Device Roles: This section discusses the device roles in a Cisco
SD-Access network.
NETWORK TOPOLOGIES
Unlike their data center counterparts, campus network
topologies come in a variety of different shapes and sizes.
Although many campus topologies are based on the traditional
three-layer model of core, distribution, and access, the
building layout and cabling considerations typically mandate
customization of how devices are cabled to each other and
customization of the physical layout of the entire network.
Some campus networks use a star configuration with a
collapsed core, with all access switches connected into a large
core switch. Other networks, such as those in tall buildings,
daisy-chain access switches together, leading to a distribution
switch or core switch.
CISCO SOFTWARE-DEFINED
ACCESS UNDERLAY
As discussed in Chapter 2, “Introduction to Cisco Software-
Defined Access,” the underlay in a Cisco SD-Access fabric
should provide fast, robust, and efficient reachability between
all fabric nodes in the network. The underlay configuration
should be simple and static, with the focus on resiliency and
speed, as its role is critical to fabric stability. The underlay
should also provide efficient load balancing across redundant
links between the devices. There
are two ways to configure a Cisco SD-Access underlay:
manually or using LAN Automation.
Manual Underlay
As discussed, the role of the underlay is to route packets
between fabric nodes as quickly and efficiently as possible.
The underlay should be built completely with Layer 3 links to
avoid potential Layer 2 limitations such as loops and
spanning-tree blocked ports. This is typically accomplished
using a routing protocol, such as Open Shortest Path First
(OSPF), Intermediate System to Intermediate System (IS-IS),
or Enhanced Interior Gateway Routing Protocol (EIGRP), and
multiple physical links between devices in the underlay for
redundancy and increased bandwidth. The links between
devices should be configured as point-to-point interfaces (with
/30 or /31 subnet masks), and the routing protocol should use a
load-balancing mechanism such as equal-cost multipath
(ECMP) for optimal bandwidth usage.
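The address math behind the /30 and /31 recommendation can be checked with the standard ipaddress module; the prefixes used below are arbitrary examples:

```python
import ipaddress

# A /31 (RFC 3021) dedicates both of its addresses to the two routers on a
# point-to-point link; a /30 consumes four addresses for the same two routers.
p2p_31 = ipaddress.ip_network("100.125.0.0/31")
p2p_30 = ipaddress.ip_network("100.125.0.32/30")

print(p2p_31.num_addresses, len(list(p2p_31.hosts())))  # 2 2
print(p2p_30.num_addresses, len(list(p2p_30.hosts())))  # 4 2
```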
interface Loopback0
description Fabric Underlay RID - do not
change
ip address 100.124.0.1 255.255.255.255
ip router isis
!
interface GigabitEthernet1/0/13
description To Border-2 te1/0/13
no switchport
ip address 100.125.0.33 255.255.255.252
ip router isis
bfd interval 300 min_rx 300 multiplier 3
no bfd echo
!
interface GigabitEthernet1/0/21
description To edge1 te1/0/23
no switchport
ip address 100.125.0.1 255.255.255.252
ip router isis
bfd interval 300 min_rx 300 multiplier 3
no bfd echo
!
router isis
net 49.0000.0011.0111.0010.00
is-type level-2-only
router-id Loopback0
domain-password cisco
metric-style transition
log-adjacency-changes
bfd all-interfaces
Peer Device (optional): A second existing device that can be used to get
a more accurate view of the network topology.
Primary Device Ports: The interface(s) on the primary device that the
new devices are connected to. Multiple interfaces can be selected to use
for the device discovery process.
Discovered Device Site: The site that newly discovered devices are
assigned to after discovery.
IP Pool: An IP pool that has been configured in the Design tool of Cisco
DNA Center (introduced in Chapter 3, “Introduction to Cisco DNA
Center”). This pool will be subnetted and the addresses will be assigned to
the appropriate uplink and downlink physical interfaces as /31 subnets as
well as /32 loopback interfaces on each new device. The IP pool
configured here must have at least 126 addresses available (a /25 network
mask).
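The /25 requirement can be sanity-checked with the ipaddress module. The split below, /31s for links and /32s for loopbacks, illustrates why a pool of this size is needed; the exact carving algorithm Cisco DNA Center uses is not shown here:

```python
import ipaddress

# A /25 pool holds 128 addresses (126 usable as hosts). One illustrative
# carving: the first half into /31 point-to-point links, the second half
# into /32 loopbacks (this split is an assumption, not DNA Center's logic).
pool = ipaddress.ip_network("100.124.128.0/25")
links = list(pool.subnets(new_prefix=31))[:32]       # 32 links x 2 addresses
loopbacks = list(pool.subnets(new_prefix=32))[64:]   # 64 device loopbacks

print(pool.num_addresses, len(links), len(loopbacks))  # 128 32 64
```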
Note
As of Cisco DNA Center version 1.3, LAN Automation discovers devices up to two layers
deep below the seed device.
interface Loopback0
description Fabric Node Router ID
ip address 100.124.128.141 255.255.255.255
ip pim sparse-mode
ip router isis
clns mtu 1400
!
interface GigabitEthernet1/0/21
description Fabric Physical Link
no switchport
dampening
ip address 100.124.128.148 255.255.255.254
ip pim sparse-mode
ip router isis
ip lisp source-locator Loopback0
load-interval 30
bfd interval 100 min_rx 100 multiplier 3
no bfd echo
clns mtu 1400
isis network point-to-point
!
interface GigabitEthernet1/0/22
description Fabric Physical Link
no switchport
dampening
ip address 100.124.128.146 255.255.255.254
ip pim sparse-mode
ip router isis
ip lisp source-locator Loopback0
load-interval 30
bfd interval 100 min_rx 100 multiplier 3
no bfd echo
clns mtu 1400
isis network point-to-point
!
router isis
net 49.0000.1001.2412.8141.00
domain-password cisco
metric-style wide
log-adjacency-changes
nsf ietf
bfd all-interfaces
Figure 4-6 shows the Cisco DNA Center Inventory tool with
the newly discovered and onboarded devices assigned to the
site.
Figure 4-6 Cisco DNA Center Inventory Following LAN
Automation
Note
The exception to this is the Cisco Catalyst 9800 Embedded Wireless feature that is
available for the Cisco Catalyst 9300, 9400, and 9500 Series Switch platforms. This feature
supports only fabric-enabled SSIDs and runs on switches inside the fabric.
SHARED SERVICES
Shared services in a Cisco SD-Access environment are any
services that are common to the enterprise and typically live
outside of the Cisco SD-Access fabric but still need to
communicate with hosts in the fabric on all virtual networks
(VNs). Some common examples of shared services are
TRANSIT NETWORKS
Transit (or peer) networks in Cisco SD-Access define the type
of networks that exist outside of the fabric and that are
connected to the fabric border node(s). The actual network
medium could be a WAN in the case of a branch, or a data
center LAN connection in the case of a large campus.
Regardless of the medium, there are two types of transits that
can be defined with Cisco SD-Access: IP-Based and SD-
Access.
IP-Based Transit
IP-Based transits provide traditional IP connectivity from the
outside world to the fabric and vice versa. To maintain macro-
segmentation outside of the fabric, the connections should use
VRF-lite for traffic separation. Traffic is typically routed from
the border to the transit next-hop router using external Border
Gateway Protocol (eBGP), but any routing protocol can be
used so long as it is VRF-aware, as next-hop peers are needed
across each of the VNs/VRFs as well as the underlay.
Figure 4-9 shows the Transit/Peer Network configuration
screen for an IP-Based transit.
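A per-VN eBGP handoff of this kind can be sketched as generated configuration; the AS numbers, neighbor address, and VN names below are hypothetical:

```python
# Generate per-VRF eBGP neighbor stanzas for an IP-Based transit handoff.
# ASNs, the peer IP, and VN names are illustrative assumptions.
def bgp_handoff(local_asn: int, peer_asn: int, peer_ip: str, vns: list) -> str:
    lines = [f"router bgp {local_asn}"]
    for vn in vns:  # one VRF-aware peering is needed per VN/VRF
        lines += [f" address-family ipv4 vrf {vn}",
                  f"  neighbor {peer_ip} remote-as {peer_asn}",
                  f"  neighbor {peer_ip} activate",
                  " exit-address-family"]
    return "\n".join(lines)

print(bgp_handoff(65534, 65000, "100.126.0.1", ["Campus", "IoT"]))
```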
FABRIC CREATION
Fabric creation in Cisco DNA Center is a very simple process
and initially requires only three parameters: fabric name,
fabric location, and a selection of which VN(s) to make part of
the fabric.
Fabric Location
You need to give careful consideration to overall fabric design
and the selection of the fabric location, as these decisions
determine the scope and size of the Cisco SD-Access fabric
and which devices are available for it based on the device
locations chosen during provisioning. In general, a building
would be one fabric, which would include all building access
switches, endpoints, users, and wireless access points.
However, a campus with high-speed connectivity between
buildings could also be defined as a single fabric, depending
on the scale, device/user counts, and building survivability
requirements.
Fabric VNs
Figure 4-11 shows the Add Fabric VN selection screen with
two virtual networks selected. New VNs can be added after the
fabric is created, but a fabric must start with a minimum of one
VN.
Figure 4-11 Add Fabric VN Selection Screen
Note
Virtual network (VN) creation and concepts are discussed in Chapter 5.
Control Plane
The control plane node in a Cisco SD-Access fabric is an
endpoint registration and database system providing
reachability information to other fabric nodes for all endpoints
in the fabric. It is essentially the brain of the fabric. The control
plane node also tracks endpoint movement to allow for host
mobility required in wireless networks. All fabric nodes
communicate with the control plane node, both to register new
endpoints as they are onboarded and to request endpoint
reachability information to facilitate traffic between nodes.
Note
The technical and complete traffic flow details of Cisco SD-Access along with examples are
discussed in Chapter 6.
Fabric Borders
The fabric border node in a Cisco SD-Access network is
responsible for routing traffic in and out of a fabric. Any
traffic that needs to exit the fabric toward the Internet, a data
center, or another fabric must pass through a fabric border.
Three types of fabric borders can be configured in Cisco SD-
Access:
Outside world border (or external border): A border that routes traffic
from the fabric destined for any unknown addresses, including the
Internet. It also is the gateway of last resort. The external border does not
register any routes with the control plane, but instead functions as a LISP
Proxy Egress Tunnel Router (PETR).
Internal Border — —
External Border ✓ ✓
Anywhere Border ✓ —
Border Automation
The external connectivity configuration on the border nodes
can be configured manually and use any VRF-aware routing
protocol such as BGP, OSPF, or EIGRP. It can also be
automated with the Cisco DNA Center border automation
feature, which uses eBGP as the routing protocol.
IP Pool: An IP pool that has been reserved in the Design tool of Cisco
DNA Center. This pool will be subnetted into /30 subnets and the
addresses will be assigned to switch virtual interfaces (SVIs) on the
border that are created during the automation.
Predictable and stable host mobility, for both wired and wireless, as hosts
no longer need to change IP addresses or subnets when moving to a
different fabric edge node
Efficient traffic flow, as the default gateway for every endpoint is always
the connected switch
interface Vlan1021
description Configured from Cisco DNA-Center
mac-address 0000.0c9f.f45c
vrf forwarding Campus
ip address 100.100.0.1 255.255.0.0
ip helper-address 100.127.0.1
ip helper-address 100.64.0.100
no ip redirects
ip route-cache same-interface
no lisp mobility liveness test
lisp mobility 100_100_0_0-Campus-IPV4
end
Intermediate Nodes
Intermediate nodes are unique in Cisco SD-Access in the sense
that although they physically exist in the fabric and are
logically in the underlay, they are not part of the fabric overlay
and are not configured in Cisco DNA Center as part of the
fabric workflow.
External Connectivity
As discussed earlier in this chapter, no single recommended or
required solution exists for external connectivity outside of the
fabric in Cisco SD-Access. A company could connect the
border nodes directly into its existing network core or
distribution switches or to a WAN device in the case of a
branch network. The upstream devices only need to support
the routing protocol that is running on the borders along with
VRF-lite to maintain macro-segmentation out of the fabric.
Fusion Router
Although they were initially created for use with
Multiprotocol Label Switching (MPLS) L3VPNs, today virtual
routing and forwarding (VRF) instances are used by many
large enterprises, government agencies, and other regulated
industries in their existing networks for traffic separation.
VRFs provide security in the form of macro-segmentation, as
endpoints in one VRF cannot communicate with endpoints in
another VRF without passing through an intermediate device.
This is because each VRF has its own unique routing table and
routing protocol. In Cisco SD-Access, VNs are used to create
this separation, but they are technically configured as VRFs on
the underlying fabric devices.
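The effect of per-VRF routing tables, and of a fusion device leaking shared services routes into them, can be modeled minimally; the VN names and prefixes are made up for illustration:

```python
# Each VRF/VN has its own routing table, so Campus and IoT cannot reach
# each other's prefixes. A fusion device "leaks" a route by copying it
# from one table into another. Names and prefixes are illustrative.
vrf_tables = {
    "Campus": {"100.100.0.0/16"},
    "IoT":    {"100.101.0.0/16"},
    "global": {"100.64.0.0/24"},  # shared services (DHCP, DNS, ISE)
}

def leak(src_vrf: str, dst_vrf: str, prefix: str) -> None:
    assert prefix in vrf_tables[src_vrf]  # can only leak an existing route
    vrf_tables[dst_vrf].add(prefix)

leak("global", "Campus", "100.64.0.0/24")
leak("global", "IoT", "100.64.0.0/24")
print(sorted(vrf_tables["Campus"]))  # ['100.100.0.0/16', '100.64.0.0/24']
```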
Note
A feature will be released for Cisco SD-Access in the future that will eliminate this functional
requirement, but it is not available as of this writing.
Figure 4-21 illustrates the placement of a fusion device in a
Cisco SD-Access network. The border nodes hand off the IoT
and Employee VNs to the fusion device, which then leaks
shared services routes and traffic from the global routing table
into the VRFs.
HOST ONBOARDING
After you create the Cisco SD-Access fabric and define the
fabric device roles, you need to set a few more basic
parameters for the fabric to operate. You find these settings in
the “Host Onboarding” section of Cisco DNA Center.
Authentication Templates
Authentication templates define the global host authentication
policy that is set on all fabric edge nodes in a fabric. After a
template is selected and an IP pool is assigned to a VN, the
global authentication template cannot be changed. You can
choose one of the following four authentication templates,
depending on the security policy of the network and network
authentication mechanisms in place:
Figure 4-22 shows the customization dialog box for the Closed
Authentication template. Some basic settings can be
customized for the templates in the Design tool of Cisco DNA
Center.
Figure 4-22 Customization Dialog Box for the Closed
Authentication Template
VN to IP Pool Mapping
Assigning IP pool(s) to VNs in a fabric is the final step
required to make the fabric operational. This step finalizes the
configuration on the border, control plane, and fabric edge
nodes, including
Creation of the appropriate VLANs and SVIs on all fabric edge nodes
interface Loopback1023
description Loopback Border
vrf forwarding Campus
ip address 100.101.0.1 255.255.255.255
!
router bgp 65534
address-family ipv4 vrf Campus
aggregate-address 100.101.0.0 255.255.0.0
summary-only
network 100.101.0.1 mask 255.255.255.255
!
router lisp
ipv4 source-locator Loopback0
service ipv4
etr map-server 100.124.0.1 key 10c70a
etr map-server 100.124.128.140 key 10c70a
etr map-server 100.124.0.1 proxy-reply
etr map-server 100.124.128.140 proxy-reply
service ethernet
etr map-server 100.124.0.1 key 10c70a
etr map-server 100.124.128.140 key 10c70a
etr map-server 100.124.0.1 proxy-reply
etr map-server 100.124.128.140 proxy-reply
instance-id 4097
service ipv4
route-export site-registrations
distance site-registrations 250
map-cache site-registration
site site_uci
authentication-key 10c70a
eid-record instance-id 4099 100.101.0.0/16
accept-more-specifics
eid-record instance-id 8191 any-mac
Virtual Network: If the VN was not assigned during the creation of the
fabric, it can be added during this process.
IP: Select an IP pool that was defined in the Design tool of Cisco DNA
Center. This is the subnet that endpoints will be placed in.
Groups (optional): If you prefer a static SGT assignment for this pool,
select it here; however, note that using Cisco ISE for dynamic SGT
assignments is recommended.
Wireless Pool: If this pool will be used for both wired and wireless
endpoints, check this check box.
Switchport Override
Cisco DNA Center provides the capability to override the
following global Host Onboarding settings on fabric edge
node interfaces to allow for exceptions or customized devices
in the Cisco SD-Access fabric. These changes can be made on
individual ports or on a range of ports.
Connected Device Type: This can be set to User Devices, Access Point
(AP), or Server.
SUMMARY
This chapter covered configuration concepts and fundamentals
of Cisco Software-Defined Access, from fabric creation to
host onboarding. In addition, it discussed some design
considerations for different types of environments, network
sizes, and topologies. The function that each type of device
role plays in the fabric was covered along with design and
selection criteria for these roles.
Policy Management in Cisco DNA Center with Cisco ISE: This section
covers the integration between Cisco ISE and Cisco DNA Center,
showing the value of centralized management at a policy level.
Host Onboarding with Cisco DNA Center: This section goes into detail
on onboarding clients in a Cisco SD-Access fabric using a simplified
approach with Cisco DNA Center.
Cisco DNA Center integration with Cisco ISE is the first step
toward micro-segmentation in a Cisco SD-Access fabric. The
next section provides the details on the integration and the
steps involved.
Note
All the Cisco DNA Center screen captures shown in this chapter use version 1.3.x, which is
the latest version at the time of writing.
Note
Install the third-party certificate in Cisco DNA Center or the DNA Center cluster before any
deployment implementation and integration with Cisco ISE. Replacing the certificate in
Cisco DNA Center causes network disruption because the services are restarted. Proceed
with caution to make the changes.
Note
Cisco recommends using third-party certificates in Cisco ISE specifically for client services
such as EAP and portal usage. Replacing the Admin certificate in Cisco ISE results in a
restart of the services, which may or may not cause network disruption, depending on the
ISE role (PAN, PSN, or MnT, introduced in Chapter 2).
Shared Secret: Shared secret between the network devices and ISE
policy servers.
Username: The username used to log in to ISE via both SSH and the
GUI. The user account should be a superadmin user.
Password: The password for that username for ISE SSH and web UI
login.
Subscriber Name: Used to identify the Cisco DNA Center client name in
ISE.
SSH Key: (Optional) SSH key, which can be created “offline” and
provided to ISE or Cisco DNA Center.
Cisco DNA Center logs in to ISE over SSH using the credentials
provided.
Cisco DNA Center invokes an ERS API call to ISE to download the
pxGrid certificates from ISE.
The pxGrid connection request is sent to ISE securely from Cisco DNA
Center. After successful connection, ISE pushes all the Cisco TrustSec
information, such as Scalable Group Tags (SGTs), to Cisco DNA Center.
Cisco pxGrid version 2.0 is used from Cisco DNA Center version 1.3.x
onward.
An ERS call also happens from Cisco DNA Center to ISE to download
any existing scalable group tag access control lists (SGACLs).
Improves the user experience to create and manage SGTs and SGACLs
from Cisco DNA Center
Provides a policy matrix view in Cisco DNA Center similar to the matrix
view in Cisco ISE
Supports a third-party AAA server with Cisco ISE as the TrustSec policy
enforcer
If a Cisco ISE security group SGT value does not exist in Cisco DNA
Center, a new scalable group is created in Cisco DNA Center.
If a Cisco ISE security group SGT value exists in Cisco DNA Center but
the names do not match, the name of the Cisco ISE security group
replaces the name of that scalable group in Cisco DNA Center.
If the Cisco ISE security group name is the same but the SGT value is
different, the security group from Cisco ISE is migrated. It retains the
name and tag value, and the Cisco DNA Center scalable group is
renamed with a "_DNA" suffix.
If the SGACL and access contract have the same name and content, no
further action is required, as the information in Cisco DNA Center is
consistent with the information in Cisco ISE.
If the SGACL and access contract have the same name but the content is
different, the SGACL content from Cisco ISE is migrated. The previous
contract content in Cisco DNA Center is discarded.
If the SGACL name does not exist in Cisco DNA Center, a new access
contract with that name is created and the SGACL content from Cisco
ISE is migrated.
If a policy for a source group and destination group pair references the
same SGACL/access contract name in Cisco ISE, no changes are made.
If a policy for a source group and destination group pair references a
different SGACL/access contract name in Cisco ISE, the Cisco ISE
access contract name is referenced in the policy. This overwrites the
previous access contract reference in Cisco DNA Center.
The Cisco ISE default policy is checked and migrated to Cisco DNA
Center.
If the migration does not result in any error messages, a success message
is displayed, as shown in Figure 5-13, and the policy matrix in ISE is
changed to read-only. Cisco DNA Center is now the policy management
platform to make any TrustSec policy changes.
Note
The administrator has the option to manage the group-based access control in ISE instead
of in Cisco DNA Center. If this option is enabled, the Cisco DNA Center group-based
access control UI becomes inactive.
When a client connected to the fabric connects to the network, the client
is authenticated and authorized by ISE. As part of authorization, an SGT
is assigned to the client.
ISE pushes the policy to the fabric edge that needs to be applied to the
client (also known as an SGACL).
As part of fabric configuration, Cisco DNA Center makes all the fabric
edges SGACL enforcement points. The fabric edge enforces the
SGACL for client traffic at the egress point.
Step 5. The fabric edge enforces the policy for the client
traffic. The policy is applied at the egress of the
fabric edge for the client SGT.
Note
As of Cisco DNA Center version 1.3.2, some of the flow steps are not directly configurable options,
and day N configuration templates need to be leveraged to configure network devices to
use a third-party RADIUS server for authentication and Cisco ISE for policy download.
SECURE HOST ONBOARDING IN
ENTERPRISE NETWORKS
Host onboarding, as the name suggests, is the process of
onboarding the clients in a network, which could include
workstations, users, BYOD devices, IoT devices, IP phones,
cameras, network devices such as access points, and so on.
This section focuses on the security aspects of onboarding the
hosts in the network in a flexible way with minimal disruption
to the network or the clients. A high-level overview of the
different host onboarding techniques is provided in the
subsequent sections to help you understand their value in a
software-defined campus network and the approach toward the
Cisco Zero Trust model.
Single-Host Mode
In a single-host mode, only one MAC address is allowed on
the switchport. The switch authenticates the port and places it
in an authorized state. Detection of a second MAC address on
the port results in a security violation, as shown in Figure 5-
18. Single-host mode is mainly used in environments that have
a strict restriction of connecting only one client per port.
Figure 5-18 Single-Host Mode
Multi-Host Mode
In multi-host mode, the first MAC address attached is
authenticated. Subsequent hosts that are attached to the port
bypass authentication and piggyback on the first MAC
address’s authentication, as shown in Figure 5-19. Multi-host
mode on the port, along with port security, can be used to
manage network access for all the MAC addresses on a port.
Figure 5-19 Multi-Host Mode
Multi-Domain Mode
Multi-domain mode refers to two domains: data and voice. In
multi-domain mode, also known as multi-domain
authentication (MDA), an IP phone and a host connected
behind the phone are authenticated independently. Even
though they are connected to the same port, the IP phone is
placed in the voice VLAN and the host is placed in the data
VLAN as per the policies pushed by the authentication server.
Any second MAC address detected on the data or voice
domain results in a security violation. Figure 5-20 shows
MDA in action.
Figure 5-20 Multi-Domain Mode
Multi-Auth Mode
In multi-auth mode, one client is allowed on the voice domain
and multiple authenticated clients are allowed on the data
VLAN. Cisco DNA Center by default provisions multi-auth
mode on all the 802.1X-enabled ports. Multi-auth mode is the
most commonly used host mode, as it ensures that every client
is authenticated before connecting into the network, as
depicted in Figure 5-21.
Figure 5-21 Multi-Auth Mode
Note
In multi-auth mode, only one VLAN needs to be enabled for all the hosts connected to the
port. You cannot have two data hosts connected with different data VLANs assigned by the
authentication server.
interface GigabitEthernet1/0/1
 switchport access vlan 100
 switchport mode access
 switchport voice vlan 101
 authentication host-mode multi-auth
 authentication open                  ! enables Monitor Mode
 authentication port-control auto
 mab                                  ! enables MAB
 dot1x pae authenticator              ! enables 802.1X
interface GigabitEthernet1/4
 switchport access vlan 60
 switchport mode access
 switchport voice vlan 61
 ip access-group PRE-AUTH in          ! pre-auth ACL (Low-Impact Mode)
 authentication open
 authentication port-control auto
 mab
 dot1x pae authenticator
interface GigabitEthernet1/4
 switchport access vlan 60
 switchport mode access
 switchport voice vlan 61
 no authentication open               ! enables Closed Mode
 authentication periodic
 authentication timer reauthenticate server
 authentication port-control auto
 mab
 dot1x pae authenticator
No Authentication Template
With the No Authentication template selected, the fabric edge
ports are not configured to perform port authentication. Example
5-4 shows the port interface configuration when the No
Authentication template is applied. Notice that no
authentication commands or authentication templates are
applied on the switchport.
Note
Cisco DNA Center from version 1.2.x forward provisions the authentication templates in the
IBNS 2.0 style.
Closed Authentication
Closed authentication is one of the end goals in Phase II. In
closed authentication, traffic is permitted only if the
authentication is successful. Prior to authentication, only
EAPOL traffic is allowed. With the Closed Authentication
template selected for Cisco DNA Center host onboarding, the
fabric edge ports are configured in closed mode. Example 5-9
shows the fabric edge port configuration provisioned by Cisco
DNA Center.
Easy Connect
Easy Connect is an authentication template, also known as
low-impact mode, that applies an ACL to a port in open
authentication. The ACL acts as an additional security
mechanism to make sure that only certain traffic is allowed if
the client fails authentication. Example 5-10 shows sample
output of a switchport with the Easy Connect template
provisioned by Cisco DNA Center. An inbound ACL named
IPV4_PRE_AUTH_ACL is applied on the interface, and the
source template in use is DefaultWiredDot1xLowImpactAuth
in Easy Connect. The preauthorization ACL is only allowing
DHCP and DNS traffic.
Example 5-10 Easy Connect Template Port Configuration
Once the traffic leaves the fabric from a VN, the VN either can
be handed off to a VRF in the traditional world to keep the
macro-segmentation throughout or can be fused to the global
routing table through a fusion router (covered in Chapter 4).
SEGMENTATION POLICY
CONSTRUCTION IN CISCO SD-
ACCESS
You need a thorough understanding of the network, business
requirements, and security requirements before proceeding
with implementing segmentation policies. To understand the
flow, this section continues with our example company ACME
that is interested in implementing Cisco SD-Access. As part of
business growth, ACME has a new building coming online in
San Jose, California, that is going to be part of the campus
fabric using Cisco DNA Center.
ACME has six edge nodes, two border/control plane nodes co-
located, and one Cisco wireless LAN controller. Cisco ISE,
Cisco DNA Center, and DDI (DNS, DHCP, IP Address
Management) are in the data center. No micro- or macro-
segmentation has been implemented yet. Cisco ISE is already
integrated with Cisco DNA Center. The following section
examines ACME's business intent and segmentation
requirements and shows how Cisco DNA Center and ISE are
leveraged to apply that intent to the network.
Figure 5-36 ACME Fabric Topology
Note
If border automation is used to automate the handoff between the border nodes and the
non-fabric network, the Campus VN needs to be allowed as part of the border automation.
edge-1#
vlan 1021
 name 100_100_0_0-Campus                 ! VLAN name is a mix of IP pool and VN
interface Vlan1021
 description Configured from Cisco DNA-Center
 mac-address 0000.0c9f.f45c
 vrf forwarding Campus
 ip address 100.100.0.1 255.255.0.0      ! Anycast gateway
 ip helper-address 100.127.0.1
 ip helper-address 100.64.0.100
 no ip redirects
 ip route-cache same-interface
 no lisp mobility liveness test
 lisp mobility 100_100_0_0-Campus-IPV4   ! LISP command for EID mappings
end
Step 5. Configure ISE policies so that the ACME
employees are authenticated successfully using
802.1X. If the user is part of the accounting group,
the user should be placed in the Campus Users
VLAN and assigned the acct SGT. Similarly, an
HR user should be placed in the Campus Users
VLAN and assigned the hr SGT. Figure 5-40 shows
a snippet of the policies ACME has configured. For
ease of configuration, the internal database is used.
In enterprise networks, ISE policies are usually
configured to match on an external database like
Active Directory membership or LDAP group. A
single policy set is created for wired and wireless
802.1X users, under which authentication and
authorization policies are configured.
Server Policies:
Vlan Group: Vlan: 1021
SGT Value: 17
Method status list:
Method State
dot1x Authc Success
Local Policies:
Server Policies:
Vlan Group: Vlan: 1021
SGT Value: 16
Figure 5-48 shows additional details for wireless user hr2 with
the hr (tag value -16) SGT assigned by Cisco ISE dynamically.
Step 6. Now that all the policies are in place for ACME
guest users, begin the testing phase. A wireless
client is connecting to the SDA-Guest SSID. When
the client is connected to the SSID, the guest
redirect policy is matched, the client is in the
CENTRAL_WEB_AUTH state, and the redirect
URL is pushed, as shown in Figure 5-54. The client
received an IP address of 100.99.0.22 from the
Guest IP pool and is in the web authentication state
with a redirection URL pointing to Cisco ISE. At
this stage, when the client attempts to access any
web resource, a captive portal opens to the AUP page.
Figure 5-54 Guest in Web Authentication State on WLC
SUMMARY
Policy plays a vital role in Cisco SD-Access. Through
integration of Cisco DNA Center and Cisco ISE, SGTs, access
contracts, and policies are managed from Cisco DNA Center.
Various levels of security are embedded into the campus
fabric. The first level of defense is macro-segmentation via
virtual networks. The second level of defense is micro-
segmentation using scalable groups. Security has never been
simple to implement, but Cisco DNA Center has made it much
simpler through an interactive and flexible web GUI. The two
use cases of the fictional ACME organization illustrated
several policy options in detail.
Cisco Software-Defined
Access Operation and
Troubleshooting
Fabric Encapsulation
As discussed in Chapter 2, “Introduction to Cisco Software-
Defined Access,” Cisco SD-Access is a fabric-based (or
overlay-based) solution that is built using two industry-
standard encapsulation protocols:
Virtual Extensible LAN (VXLAN) is used for the data plane in Cisco SD-
Access and carries the host traffic between fabric nodes in its
encapsulation.
Locator/Identifier Separation Protocol (LISP) is used for the control plane
in Cisco SD-Access and tracks the location (RLOC) of each endpoint
identifier (EID) in the fabric.
LISP brings several benefits to the fabric, including the following:
Seamless mobility: Because LISP uses primarily host routes and a “pull”
model (covered later in this chapter), endpoint mobility is made easier and
more efficient without the overloading of routing tables that would occur
with traditional routing protocols.
LISP
In traditional IP-based networks, an endpoint IP address is
composed of two parts: the network address and the host
address. The endpoint’s subnet mask is used to distinguish
between these two parts so that the endpoint can differentiate
between local traffic (traffic on the same subnet) and remote
traffic. This IP scheme is used globally within enterprises and
the Internet, and although it is functional, it has limitations in
terms of scalability and flexibility.
Locator/Identifier Separation Protocol (LISP) is an industry-
standard protocol described in RFC 6830. LISP was originally
conceived in 2006 as a potential solution to address the
scalability and addressing limitations inherent to traditional IP-
based networking used on the Internet. LISP solves these
limitations by separating reachability information into routing
locator (RLOC) and endpoint identifier (EID). This separation
allows for better scale and more agile networks because the
actual endpoint IP address can be abstracted and doesn’t need
to be known by the underlying network. LISP has many uses
in networking today, including in WAN and data center
applications, and its flexibility and scalability make it suitable
for campus/branch network solutions such as Cisco SD-
Access.
VXLAN
Virtual Extensible LAN (VXLAN) is a network encapsulation
solution that is described in RFC 7348. VXLAN allows for the
transport of Layer 2 Ethernet frames over a Layer 3
infrastructure and is used in many data center applications to
address scalability limitations present in traditional VLAN-
based networks, including support for up to 16 million virtual
network segments (VNIs), compared to 4096 traditional VLANs,
and the ability to span Layer 2 segments across geographic boundaries.
It is also the data plane protocol used in the Cisco Application
Centric Infrastructure (Cisco ACI) solution.
Note
Cisco SD-Access technically uses the VXLAN Group Policy Option (VXLAN-GPO) extension
for encapsulation, which is a backward-compatible extension to VXLAN that adds support
for the carrying of SGTs in its header. This extension allows for policy in the Cisco SD-
Access fabric.
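As a rough illustration of this note, the 8-byte VXLAN-GPO header can be assembled as follows. This is a sketch based on the field layout in the VXLAN-GPO draft (G bit and I bit in the first byte, 16-bit Group Policy ID carrying the SGT, 24-bit VNI), not Cisco code:

```python
import struct

def vxlan_gpo_header(vni: int, sgt: int) -> bytes:
    """Pack an 8-byte VXLAN-GPO header (per the VXLAN-GPO draft layout).

    Byte 0:   G bit (0x80, group policy present) + I bit (0x08, VNI valid)
    Bytes 2-3: 16-bit Group Policy ID, which carries the SGT
    Bytes 4-7: 24-bit VNI followed by a reserved byte
    """
    flags = 0x80 | 0x08  # G + I
    return struct.pack("!BBH", flags, 0, sgt) + struct.pack("!I", vni << 8)

hdr = vxlan_gpo_header(vni=8191, sgt=17)
assert len(hdr) == 8
```

Because the Group Policy ID rides in the header of every encapsulated packet, the destination fabric edge can enforce SGT-based policy without any out-of-band lookup.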
MTU Considerations
An important design consideration when using a Cisco SD-
Access fabric, or any overlay-based solution, is that the
encapsulation protocol generally increases the total size of the
packet that transits across the underlying network. In the case
of LISP and VXLAN, up to 56 bytes could be added to every
packet, which may cause fragmentation and connectivity
issues if the underlying network is not configured to handle
larger packets. For this reason, the recommended maximum
transmission unit (MTU) for any Cisco SD-Access underlay is
9100 bytes end to end. This MTU size allows for any
encapsulated traffic to properly route through the network
without disruption.
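The arithmetic behind the 9100-byte recommendation can be checked quickly. The header sizes below are the standard ones; the exact added total (up to 56 bytes) depends on options such as an 802.1Q tag:

```python
# Back-of-the-envelope check of VXLAN encapsulation overhead.
OUTER_ETHERNET = 14   # outer MAC header (18 with an 802.1Q tag)
OUTER_IPV4     = 20   # outer IP header
OUTER_UDP      = 8    # outer UDP header
VXLAN          = 8    # VXLAN header

overhead = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN
print(overhead)                  # 50

# A 9100-byte underlay MTU leaves room for a 9000-byte overlay packet:
print(9100 - overhead >= 9000)   # True
```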
interface Vlan1021
description Configured from Cisco DNA-Center
vrf forwarding Campus
ip address 100.100.0.1 255.255.0.0
ip helper-address 100.127.0.1
ip helper-address 100.64.0.100
2. The fabric edge switch adds DHCP Option 82 containing the VXLAN
Network Identifier (VNID), or instance ID, along with its RLOC address, and
then encapsulates the request into a unicast packet with the IP address of the
SVI/anycast gateway as its source and the DHCP server IP address as the
destination.
3. This packet is routed in the overlay and sent via the fabric border to the DHCP
server outside of the fabric.
Figure 6-4 shows the flow of the DHCP request sent from the
endpoint. The fabric edge switch intercepts this request and
adds DHCP Option 82 to the request containing instance ID
4099 and its RLOC address of 100.124.128.135. The fabric
edge switch also changes the source of the DHCP request to its
SVI address of 100.100.0.1 and sends the packet toward the
fabric border in the overlay.
The response from the DHCP server is sent back toward the
endpoint and goes through the following process:
1. The DHCP reply is received by the fabric border, which has a loopback
interface configured with the same IP address as the anycast gateway.
2. The fabric border sees Option 82 in the reply containing the fabric edge's
RLOC address and the instance ID and sends the DHCP response directly to
the fabric edge.
3. The fabric edge receives the reply, de-encapsulates the packet, and then
forwards the raw DHCP reply to the endpoint.
Figure 6-5 shows the flow of the DHCP reply sent from the
DHCP server. The fabric border receives the reply and, after
reading Option 82 in the packet, directs the reply to the fabric
edge switch for forwarding to the endpoint.
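The relay logic in this request/reply exchange can be sketched as follows. This is a simplified illustration of the behavior described above, not the exact Cisco sub-option encoding, and it reuses the instance ID and RLOC from the example:

```python
# Hypothetical sketch of the fabric DHCP relay flow (simplified encoding).
import ipaddress

def build_option82(instance_id: int, rloc: str) -> dict:
    """Fabric edge: record the VNID/instance ID and its own RLOC in Option 82."""
    return {"instance_id": instance_id, "rloc": str(ipaddress.ip_address(rloc))}

def border_direct_reply(option82: dict) -> str:
    """Fabric border: read Option 82 from the DHCP reply and return the RLOC
    of the fabric edge that should receive the forwarded reply."""
    return option82["rloc"]

opt82 = build_option82(4099, "100.124.128.135")
print(border_direct_reply(opt82))  # 100.124.128.135
```

The key point the sketch captures is that Option 82 lets the border send the reply straight to the originating fabric edge even though all edges share the same anycast gateway address.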
1. The source endpoint (A) sends an ARP request for the MAC address of the
destination endpoint (B).
2. The LISP process on endpoint A’s fabric edge (A) intercepts this ARP request
and asks the fabric control plane for the requested mapping of endpoint B’s IP
address to its MAC address.
3. The fabric control plane looks up endpoint B’s IP address in its LISP address
resolution table. This table is similar to a switch’s ARP table but is specific to
LISP. The fabric control plane then sends this MAC address to fabric edge A
with a LISP ARP reply.
Figure 6-7 shows endpoint A sending an ARP request message for
100.100.0.22 to fabric switch A. Fabric switch A intercepts the ARP request
and sends a LISP ARP request to the fabric control plane. The fabric control
plane replies with an entry from its LISP ARP table.
Figure 6-7 LISP ARP Process
4. Fabric edge A stores this mapping in its local ARP cache and then queries the
fabric control plane again for the location of endpoint B’s MAC address.
5. The fabric control plane responds to fabric edge A with the RLOC address of
endpoint B’s fabric edge (B).
Figure 6-8 shows fabric edge A sending a LISP Map-Request message for
MAC address b827.eb07.5b9a to the fabric control plane. The fabric control
plane responds with the RLOC address of fabric edge B.
Figure 6-8 LISP Layer 2 Map-Request/Reply
6. Fabric edge A encapsulates the ARP request in VXLAN with fabric edge B’s
VTEP (RLOC) as the destination and sends it in the underlay.
7. Fabric edge B receives the VXLAN packet, de-encapsulates it, and forwards
the ARP request to endpoint B.
8. Endpoint B sends an ARP reply to endpoint A’s MAC address.
9. Fabric edge B queries the fabric control plane for the location of endpoint A’s
MAC address.
10. The fabric control plane looks up the location of endpoint A’s MAC address
and responds to fabric edge B with the RLOC address of fabric edge A.
11. Fabric edge B encapsulates the ARP reply in VXLAN with fabric edge A’s
VTEP (RLOC) address as the destination and sends it in the underlay.
Figure 6-9 shows fabric edge A sending the ARP request for
endpoint B’s MAC address to fabric edge B, which forwards it
to endpoint B. Endpoint B sends a reply back to endpoint A,
and after looking up b827.ebfd.c3e8’s location with the fabric
control plane, fabric edge B sends it to fabric edge A for
forwarding to endpoint A.
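The control plane's role in this ARP flow boils down to two lookups: IP to MAC (the LISP ARP table, step 3) and MAC to RLOC (the Layer 2 map lookup, steps 4-5). A minimal sketch, reusing the IP and MAC from the example (the RLOC of fabric edge B is a hypothetical value):

```python
# Minimal sketch of the fabric control plane's two Layer 2 lookups.
LISP_ARP_TABLE = {"100.100.0.22": "b827.eb07.5b9a"}     # EID IP -> MAC
L2_MAP_CACHE = {"b827.eb07.5b9a": "100.124.128.136"}    # MAC -> RLOC (hypothetical)

def resolve_arp(ip: str):
    """Resolve an endpoint IP to its MAC, then the MAC to its edge's RLOC."""
    mac = LISP_ARP_TABLE[ip]     # step 3: LISP ARP reply
    rloc = L2_MAP_CACHE[mac]     # step 5: Map-Reply with fabric edge B's RLOC
    return mac, rloc

mac, rloc = resolve_arp("100.100.0.22")
print(mac, rloc)
```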
1. The source endpoint (A) sends traffic destined for endpoint B to its default
gateway, which is the anycast gateway configured as an SVI on fabric edge A.
2. Fabric edge A queries the fabric control plane for the location of the IP
address of the destination endpoint (B).
3. The fabric control plane performs a lookup and forwards the message to
endpoint B's fabric edge (B).
4. Fabric edge B replies to fabric edge A with its own RLOC address in a LISP
Map-Reply.
5. Fabric edge A installs the mapping in its map-cache table, encapsulates any
traffic to endpoint B in VXLAN with fabric edge B's VTEP (RLOC) as the
destination, and forwards the traffic through the underlay.
6. Return traffic is processed in the same way and all subsequent traffic between
endpoints A and B is now encapsulated in VXLAN and forwarded through the
underlay directly between fabric edge A and fabric edge B.
1. The source endpoint (A) sends the traffic to its default gateway, which is the
anycast gateway configured as an SVI on fabric edge (A).
2. Fabric edge A checks its LISP map-cache to find a match for the destination
endpoint (B). If there is no match, it sends a LISP Map-Request to the fabric
control plane.
3. The fabric control plane checks its LISP EID table for the IP address of the
destination and, if there is no match, returns a “forward-natively” message to
fabric edge A.
4. Fabric edge A encapsulates the packet in VXLAN with a destination VTEP
(RLOC) of the fabric border, which is configured as a Proxy Egress Tunnel
Router (PETR).
5. Return traffic via the fabric border is processed in the same way as traffic
within the fabric is processed.
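The "forward-natively" behavior in this flow is effectively a default route in the map-cache: a miss means the destination is outside the fabric, so traffic is encapsulated toward the border (PETR). A short sketch, with hypothetical RLOC values:

```python
# Sketch of the map-cache decision for traffic leaving the fabric.
FABRIC_EIDS = {"100.100.0.22": "100.124.128.136"}  # known EID -> edge RLOC (hypothetical)
PETR_RLOC = "100.124.128.140"                      # fabric border RLOC (hypothetical)

def next_hop_rloc(dst_ip: str) -> str:
    """Return the VXLAN destination VTEP for a given destination IP."""
    if dst_ip in FABRIC_EIDS:
        return FABRIC_EIDS[dst_ip]  # intra-fabric: edge-to-edge VXLAN
    return PETR_RLOC                # "forward-natively": send to the border (PETR)

print(next_hop_rloc("8.8.8.8"))    # 100.124.128.140
```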
Note
The previous example is based on an external border as described in Chapter 4. Internal
borders register non-fabric prefixes to the control plane, and traffic is processed similarly to
how typical intra-fabric traffic is processed.
Note
Some Cisco WLC types, such as the Cisco Catalyst 9800 Embedded Wireless Controller
for Switch, are not connected outside of the fabric but are actually embedded on Cisco
Catalyst 9300 switches that are inside of the fabric. The operations described in this section
are identical regardless of the type of WLC being used.
Figure 6-11 shows the process of the WLC sending VNID (or
instance ID) and VLAN information to the AP for a new
wireless endpoint over the CAPWAP tunnel. The WLC also
registers with the fabric control plane the MAC address of the
wireless endpoint along with the RLOC of the fabric edge.
CISCO SD-ACCESS
TROUBLESHOOTING
Cisco SD-Access is based on industry-standard protocols such
as TrustSec, LISP, and VXLAN, which have existed on Cisco
switch and router platforms for many years. As such, the
various Cisco operating systems have a robust set of
commands available for troubleshooting. The following
sections illustrate some of the most common troubleshooting
commands and descriptions that are used for Cisco SD-
Access. You can perform further troubleshooting in Cisco
DNA Center using the Assurance application. Fabric
Assurance in Cisco DNA Center is discussed in Chapter 9,
“Cisco DNA Assurance.”
Fabric Edge
Troubleshooting DHCP-related issues in a traditional network
is typically straightforward because issues are usually on the
client or the DHCP server. In Cisco SD-Access, however,
extra steps may be required to troubleshoot DHCP problems
due to the DHCP snooping and relay mechanisms that are used
to provide DHCP services to endpoints in the fabric.
The first common thing to verify is whether the DHCP binding
is stored on the fabric edge, using the show ip dhcp snooping
binding command. Next, display which hosts are both
connected to the fabric edge and in the device tracking
database, which you can do with the show device-tracking
database command.
Authentication
During the Host Onboarding workflow in Cisco DNA Center,
a default authentication template is selected and applied to the
entire fabric, but it can also be overridden on a per-port basis.
This authentication template specifies the rules that apply to
endpoints connecting to the fabric edge switch. This template
can be verified by looking at the running configuration of the
fabric edge switch—specifically, the interface and template
sections.
Example 6-20 shows the output of the show client detail <MAC
address> command on a WLC, which displays authentication
details for a wireless endpoint, including username, IP address,
SSID, RLOC, and SGT.
Example 6-20 Displaying Wireless Endpoint Authentication Details on WLC
Fabric Configuration
--------------------
Fabric Status: .................................. Enabled
Vnid: ........................................... 8191
Client RLOC IP registered to MS: ................ 100.124.128.135
Clients RLOC IP : ............................... 100.124.128.135
Policy
Policy in Cisco SD-Access is enforced on egress, meaning that
it is enforced by the fabric edge switch of the destination
endpoint. Some policy information that can be gathered from
the fabric edge switch includes any Security Group ACL
(SGACL) names and summaries of the SGACLs, along with
the SGTs that will be affected.
SUMMARY
This chapter covered the various technologies used in Cisco
SD-Access, including the details of the LISP and VXLAN
implementations. It also demonstrated a typical packet flow
for both wired and wireless endpoints, from host onboarding
and registration to end-to-end conversations. In addition, this
chapter discussed common troubleshooting commands for
Cisco SD-Access and provided examples for each.
CISCO SOFTWARE-DEFINED
ACCESS EXTENSION TO IOT
In the modern world, computing devices are compact and
smart. Each user carries an average of three smart devices with
them, such as a smartphone, a smart watch, and a tablet,
among many other possibilities. The total installation base of
Internet of Things (IoT)-connected devices is projected to
grow to 75.44 billion devices worldwide by 2025, a fivefold
increase in ten years (source: https://www.statista.com). These
IoT devices may or may not be connected to the edge devices.
Considering that edge devices are expensive, are not small in
size, and are racked in a wiring closet or in the data center, edge
devices may not be the best place to connect IoT devices to the
network.
As the world moves into a “smart” era where all the devices
need to be controlled, most endpoints are data collectors.
Nontraditional spaces are becoming more common and need to
be integrated into the enterprise environments. The standard
requirements of a traditional network—security, automation,
and network insights—apply to IoT networks as well. The
network must block hackers from gaining entry to the network
from any smart connected IoT devices (for example, a
monitored air conditioning unit). There is a growing need in
the IoT space for ease of management, onboarding IoT devices
automatically, securing east-west communication, redundancy,
and faster convergence.
With Cisco Identity Services Engine (Cisco ISE) integrated
into the architecture of Cisco SD-Access, network
segmentation and policy based on the endpoints’ identity is
integrated into the network. Cisco DNA Center automates the
deployment of a Campus Fabric that can be extended to IoT
devices. The IoT switches can be made part of the fabric, and
the switches can be managed and monitored by Cisco DNA
Center. Figure 7-1 depicts an IoT extension into the Campus
Fabric built in the previous chapters. These IoT switches are
officially called extended nodes in the Cisco Campus Fabric
world.
Figure 7-1 Extended Enterprise in a Campus Network
Extended node ring to StackWise Virtual (SVL) fabric edge. REP edge on
the fabric edge. Two ways out of extended node ring. No single point of
failure.
Extended node ring to stacked fabric edge. Two ways out of extended
node ring. Stacked fabric edge might cause a potential single point of
failure.
3. The control plane node does not have the non-fabric destination in the host
database. The edge node will receive a negative lookup response from the
control plane node. Any unknown host traffic will be sent to the border by the
fabric edge.
4. The edge node forwards the packet to the border node over a VXLAN tunnel
with the source host SGT and VN inserted in the VXLAN header.
5. The border node decapsulates the VXLAN packet and forwards the packet out
of the fabric to the next hop.
3. FE2 checks with the control plane to find the next hop to forward the traffic to.
The control plane responds with the next hop as FE1.
4. FE2 forwards the packets to FE1 over a VXLAN tunnel with the source SGT
200 and VN inserted. The VXLAN packet is decapsulated at FE1, and the
traffic is forwarded to the policy extended node along with the SGT with
inline tagging.
Multicast Overview
Multicast technology reduces traffic bandwidth consumption
by delivering a single stream of information simultaneously to
potentially thousands of destination clients. Applications that
offer services such as video conferencing, corporate
communications, distance learning, distribution of software,
stock quotes, news, and so on make use of multicast.
PIM dense mode (PIM-DM): This mode uses a push model, which
floods the multicast traffic to all the network segments even if the receiver
has not requested the data. PIM-DM initially floods multicast traffic
throughout the network. Routers that have no downstream neighbors
prune back the unwanted traffic. PIM-DM is not commonly used or
recommended because the traffic is flooded to unwanted devices, causing
unnecessary bandwidth utilization.
PIM sparse mode (PIM-SM): This mode uses a pull model that sends
the multicast traffic to the network segments that have active receivers
explicitly requesting the traffic. In sparse mode, when hosts join a
multicast group, the directly connected routers send PIM Join messages
toward the rendezvous point (RP), which is the meeting point for
multicast sources and receivers. In PIM-SM mode, sources send the
traffic to the RP, which forwards the traffic to the receivers via a shared
distribution tree (SDT). The RP keeps track of multicast groups. By
default, when the first-hop device of the receiver learns about the source,
it sends a Join message directly to the source, creating a source-based
distribution tree from the source to the receiver. This source tree does not
include the RP unless the RP is located within the shortest path between
the source and receiver. The RP is needed only to start new sessions with
sources and receivers for the control traffic in multicast and usually is not
involved in the data plane. Consequently, the RP experiences little
overhead from traffic flow or processing.
PIM sparse-dense mode: Some use cases require some multicast groups
to be in sparse mode and other multicast groups to be in dense mode. PIM
sparse-dense mode selects the interface mode per multicast
group: the interface operates in dense mode for dense mode
groups and in sparse mode for sparse mode groups.
2. The FE node receives the IGMP Join and sends a PIM Join toward the fabric
RP. The RP is registered with the control plane because it is part of the overlay
in this scenario. The FE asks the control plane node for the location of the RP
address (stored in the IP address-to-RLOC table) and, based on the reply,
sends the PIM Join in the overlay to the RP.
3. The RP now has the receiver information of the multicast group.
4. The multicast source sends multicast traffic toward the fabric border (FB)
because it is the designated router (DR) for that segment.
5. The FB receives the multicast traffic and sends it toward the RP. The FB
queries the control plane for the location of the RP address (IP address-to-
RLOC table) and sends the traffic in the overlay to the RP, as shown on the
right side in Figure 7-12.
6. The RP now has the source and receiver information for that multicast group.
The right side of Figure 7-12 shows the final control plane interaction where
the RP is made aware of the source and the destination multicast group.
As shown in Figure 7-13, data plane traffic flow slightly differs from the
control plane connection steps discussed earlier.
Figure 7-13 Head-End Replication Multicast Data Plane
Interaction in Fabric: PIM ASM
3. The FE is now aware that the border owns the multicast source based on the
first multicast packet received and sends a PIM Join directly to the border for
that multicast group. With the PIM Join from the FE on the FB, the FB knows
the FEs with clients that requested the specific multicast group.
4. Multicast shortest-path tree (SPT) forwarding kicks in after the first multicast
packet; subsequent multicast traffic is forwarded directly between the FB and
the FEs.
5. The fabric border performs head-end replication: it encapsulates the multicast
traffic in VXLAN and unicasts a copy to each FE with receivers. The multicast
traffic is sent in the overlay, as shown on the right side in Figure 7-13.
6. The FE receives the VXLAN packets, decapsulates them, applies the policy,
and then sends the original IP multicast packet to the port on which the
receiver is connected.
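Head-end replication itself is simple to picture: one incoming multicast packet becomes one unicast VXLAN copy per interested fabric edge. A sketch with hypothetical group and RLOC values:

```python
# Sketch of head-end replication at the fabric border: one multicast
# packet is unicast-encapsulated once per fabric edge that joined the
# group. The group address and RLOCs below are hypothetical.
RECEIVER_EDGES = {"239.1.1.1": ["100.124.128.136", "100.124.128.137"]}

def replicate(group: str, payload: bytes):
    """Return one (destination RLOC, payload) unicast copy per interested edge."""
    return [(rloc, payload) for rloc in RECEIVER_EDGES.get(group, [])]

copies = replicate("239.1.1.1", b"stream")
print(len(copies))  # 2
```

The trade-off this illustrates is that the border's replication load grows with the number of receiving edges, which is why native multicast in the underlay is preferred at scale.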
3. The fabric edge (FE) receives the IGMPv3 Join and, because the IGMPv3 Join
has the source address information for that multicast group, the FE sends a
PIM Join toward the source directly. In this scenario, the source is reachable
through the border, and the FE sends the PIM Join to the border. The FE
queries the control plane node for the RLOC of the source address, which is
the RLOC of the fabric border (FB). The PIM Join is sent in the overlay from
the FE to the FB. This flow is illustrated on the left side in Figure 7-14.
4. The multicast source sends the multicast traffic on the interfaces toward the
FB because it is the DR for that segment.
5. The FB receives the multicast traffic and sends it toward the FE, because the
PIM Join is coming directly from the FE to the FB in an SSM deployment.
3. The FE receives the VXLAN packets, decapsulates them, applies the policy,
and then sends the original IP multicast packet to the port on which the
receiver is connected.
4. The flow works exactly the same for wireless fabric deployments.
interface LISP0.4096
ip pim lisp transport multicast
ip pim lisp core-group-range 232.0.0.1 1000
3. When the fabric border (FB) receives multicast source traffic, it sends a source
registration message in the overlay for group address 238.0.0.1 to the RP and
forwards the traffic in the overlay for the group address 238.0.0.1 to the RP.
The FB also forwards the traffic in the underlay on the mapped group
232.0.0.9 to the RP. The traffic is sent to the RP because the overlay group is
still ASM. This creates the S,G state in the underlay for the overlay group. (If
SSM were used in the overlay, the RP would have no role for this multicast
group.)
4. Multicast entries in the underlay are now complete to replicate the traffic to
the needed devices for the multicast group.
Example 7-6 Custom SSM Range Multicast Pushed by Cisco DNA Center
instance-id 8188
remote-rloc-probe on-route-change
service ethernet
eid-table vlan 1021
broadcast-underlay 239.0.0.1            // VLAN 1021 part of underlay multicast group
database-mapping mac locator-set xxx
exit-service-ethernet
exit-instance-id
2. All fabric nodes that have the IP subnet configured have sent the PIM Joins on
their respective multicast group, and a multicast tree is prebuilt for that
particular IP subnet. The traffic is flooded on this prebuilt multicast tree.
3. The fabric edge intercepts any ARP flooding or broadcast or link-local
multicast from the client and sends it over the dedicated multicast group in the
underlay. The fabric edge encapsulates the client traffic in the VXLAN
tunnel and then sends it with {Source IP = FE node RLOC, Destination IP =
Underlay Multicast Group} as the outer IP addresses. The underlay, using
normal multicast functionality, is responsible for replicating the traffic as
needed. Source tree failover likewise follows standard multicast behavior.
4. All the fabric edges receive the traffic sent by Edge Node 1.
instance-id 8189
remote-rloc-probe on-route-change
service ethernet
eid-table vlan 1022
broadcast-underlay 239.0.17.2 // Same multicast group
flood unknown-unicast
database-mapping mac locator-set <xxx>
exit-service-ethernet
!
exit-instance-id
!
instance-id 8190
remote-rloc-probe on-route-change
service ethernet
eid-table vlan 1024
broadcast-underlay 239.0.17.2 // Same multicast group
flood unknown-unicast
database-mapping mac locator-set <xxx>
exit-service-ethernet
!
exit-instance-id
!
Fabric Edge#
instance-id 8188
remote-rloc-probe on-route-change
service ethernet
eid-table vlan 1024
broadcast-underlay 239.0.0.1
database-mapping mac locator-set xxx
exit-service-ethernet
exit-instance-id
!
interface Vlan1024
description Configured from apic-em
mac-address 0000.0c9f.f45c
vrf forwarding Corp
ip address 8.6.53.1 255.255.255.0
ip helper-address 10.121.128.101
no ip redirects
ip route-cache same-interface
no lisp mobility liveness test
lisp mobility 8_6_53_0-Corp
Fabric Border#
instance-id 8188
remote-rloc-probe on-route-change
service ethernet
eid-table vlan 300
broadcast-underlay 239.0.0.1
database-mapping mac locator-set xxx
exit-service-ethernet
exit-instance-id
!
interface Vlan300
description Configured from apic-em
mac-address 0000.0c9f.f45c
vrf forwarding Corp
ip address 8.6.53.1 255.255.255.0
ip helper-address 10.121.128.101
no ip redirects
ip route-cache same-interface
no lisp mobility liveness test
lisp mobility 8_6_53_0-Corp
Layer 2 Intersite
Large campus networks may consist of multiple fabric sites.
Each fabric site consists of border, control plane, and edge nodes.
Cisco DNA Center 1.3.3 and later offers the Layer 2 intersite
feature to extend the Layer 2 segment across fabric sites. The
same Layer 2 subnet extension can be done across multiple
fabric sites and the traditional network. The main use case of
the feature is to allow ARP, broadcast, and link-local multicast
communication for a subnet spanned across multiple sites.
Legacy applications such as card readers, slot machines,
printers, and so forth depend on Layer 2, and if the end clients
for these applications reside in multiple fabrics and/or
traditional networks, the Layer 2 intersite feature allows them
to communicate with each other.
The Layer 2 border needs to be configured across every fabric site for a
specific VLAN. Cisco DNA Center automatically creates a trunk between
the fabric sites. Figure 7-23 shows the trunk link configured by Cisco
DNA Center between the Layer 2 borders at Fabric Site 1 and Fabric Site
2 and shows that VLAN 300 is allowed on the trunk link. VLAN 300 is
external to the fabric; it is the traditional VLAN with the same subnet as
that of fabric VLAN 1021.
The Layer 3 border at every fabric site advertises the /32 prefixes to the
external fusion routers. No summarized routes are sent because return traffic
for a given host might otherwise land at the wrong site.
When Host 1 in Fabric Site 1 sends traffic to Host 2 in Fabric Site 2, the
control plane in Fabric Site 1 sends the traffic to the Layer 2 border in
Fabric Site 1. The Layer 2 border translates VLAN 1021 to external
VLAN 300 and sends it over the trunk link to the Layer 2 border at Fabric
Site 2.
The Fabric Site 2 Layer 2 border translates the external VLAN 300 to the
fabric-provisioned VLAN 1021 in Fabric Site 2 and forwards the packet
to the RLOC (fabric edge) of Host 2. Traffic flow is similar for traffic
between fabric sites and traditional network hosts.
L2 Border#
interface Loopback1022
description Loopback Border
vrf forwarding Hosts
// Shared IP pool with loopback on L2 borders
ip address 172.16.8.1 255.255.255.255
end
instance-id 4100
remote-rloc-probe on-route-change
dynamic-eid 172_16_8_0-Hosts-IPV4
<misc>
instance-id 8189
remote-rloc-probe on-route-change
service ethernet
// L2 flooding enabled with Vlan 300
eid-table vlan 300
broadcast-underlay 239.0.17.1
flood unknown-unicast
database-mapping mac locator-set
rloc_223e6de0-2714-4ad8-bef6-d11f76cd1574
exit-service-ethernet
router bgp 422
bgp router-id interface Loopback0
<misc>
address-family ipv4 vrf Campus
bgp aggregate-timer 0
network 172.16.90.0 mask 255.255.255.252
network 172.16.91.1 mask 255.255.255.255
aggregate-address 172.16.91.0 255.255.255.0 summary-only
<misc>
exit-address-family
!
address-family ipv4 vrf Servers
bgp aggregate-timer 0
network 172.16.93.0 mask 255.255.255.0
// No network summarization for shared Pool
aggregate-address 172.16.93.0 255.255.255.0
redistribute lisp metric 10
exit-address-family
!
Note
In topologies where there is a common DHCP server for both fabric sites and the Layer 2
intersite feature is in use, when a host in Site 1 requests an IP address via DHCP, the
DHCP offer from the data center can also reach the L3 border of Site 2. The L3 border in
Site 2 cannot send the reply packet to the RLOC of the FE of the host in Site 1. The
workaround is to have underlay connectivity between the L3 borders of all the sites and the
RLOCs of the FEs of all sites.
Note
With IP transit, L3 borders advertise /32 routes to the fusion router; a larger fusion router
should be considered when a greater number of IP subnets is shared across the sites.
Likewise, more L2 borders should be considered to spread the load as the number of
common IP subnets grows.
Fabric site: Consists of a fabric edge, fabric control plane node, and
fabric border, usually with an ISE Policy Services Node (PSN) and fabric-
mode WLC. A fabric border connects the fabric site to the rest of the
network. A fabric site can be a single physical location, such as a building
in a campus, or multiple locations. Fabric in a Box is an example of a
fabric in a single device.
Fabric domain: Contains one or more individual fabric sites and any
corresponding transit(s) associated with those sites managed by Cisco
DNA Center. The fabric sites in a fabric domain are connected by a transit
network for cross-site communication or external network
communication.
Transit: Connects one or more fabric sites or connects a fabric site to the
rest of the network.
Types of Transit
There are three types of transit that connect fabric sites with
each other or to the rest of the network. The type of transit to
select in a network depends on the scale, cost, policy, and
resiliency requirements.
IP Transit
IP transit offers IP connectivity without native Cisco SD-
Access encapsulation and functionality, potentially requiring
additional VRF and SGT mapping for stitching together the
macro- and micro-segmentation needs between fabric sites. It
leverages the traditional network that uses VRF-lite or the
MPLS network. Even though IP transit does not carry VN and
SGT information, IP transit is typically used for the following
use cases:
Organizations have an existing WAN in place and would like to use the
WAN as the transit without additional devices needed.
IP transit could be the only option for a few geographically dispersed sites
that are connected through a backhaul mobile LTE network with high
latency.
In this topology, the data plane and policy plane are the same
because they are contained in the same packet. Within the
fabric sites, VXLAN carrying the SGT is used for the data
plane. The fabric border de-encapsulates the VXLAN packet
for the traffic before sending it through the DMVPN tunnel. A
DMVPN tunnel can carry SGTs inline, meaning the SGT from
the VXLAN header on the fabric border is placed in the IP
packet that is sent through the DMVPN tunnel, which the
remote fabric site will be able to de-encapsulate and propagate
in VXLAN within its fabric. The DMVPN configuration on
both the fabric borders should be done manually or using
Cisco DNA Center templates. The topology is scalable
because the policy is carried inline in the data packet.
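As a rough illustration of this design, inline SGT propagation can be enabled on the DMVPN tunnel interface. This is a hand-written sketch with placeholder addresses and interface names, not configuration generated by Cisco DNA Center; verify `cts sgt inline` support on your platform and software version:

```
! Hedged sketch: DMVPN tunnel carrying SGTs inline (placeholder values)
interface Tunnel0
 ip address 172.31.0.1 255.255.255.0
 tunnel source GigabitEthernet0/0/0
 tunnel mode gre multipoint
 ! Carry the SGT inline in packets sent over the tunnel
 cts sgt inline
```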
Figure 7-27 shows a topology of fabric sites using the
traditional WAN as IP transit. There is no DMVPN running
between the fabric sites in this topology, unlike the previous
use case. This topology is common for retail networks with
multiple branches connected over WAN links. The control
plane uses LISP within the fabric sites, and the fabric border
hands off using BGP to the fusion router. WAN link control
protocols are used for the transit. For the data plane, VXLAN
is used within the fabric site and contains the SGT and VN
information. The fabric border strips off the SGT and uses
VRF-lite for data plane communication with the fusion router.
The SGT cannot be carried natively in a WAN transit. To
propagate SGTs, SXP can be run between Cisco ISE and both
the site borders, where ISE pushes the IP address-to-SGT
mappings to both the fabric site borders. In this case, ISE is
the SXP speaker, and both fabric site borders are SXP
listeners. Another option is to run SXP directly between the
fabric site borders, where one fabric border is the SXP speaker
and the other fabric border is the SXP listener. Cisco DNA
Center does not automate the SXP configuration on the fabric
border nodes. The SXP configuration can be pushed using
Cisco DNA Center templates. Scale numbers play a vital role
when SXP is in use. Refer to the Cisco TrustSec platform
compatibility matrix for the SXP-supported Cisco platforms.
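The border-to-border SXP option can be sketched as follows on the listener side; the peer addresses and password name are placeholders, and in practice this configuration would be pushed through Cisco DNA Center templates:

```
! Hedged sketch: SXP listener on a fabric border (placeholder values)
cts sxp enable
cts sxp default password MySxpPassword
! Learn IP-to-SGT mappings from the remote border (the SXP speaker)
cts sxp connection peer 192.0.2.1 source 192.0.2.2 password default mode local listener
```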
Figure 7-27 Fabric Sites Connecting on WAN Transit
Host 1 initiates traffic to Host 2. The fabric edge in Fabric Site 1 queries
the fabric control plane node for the RLOC of Host 2. The fabric control
plane responds to the fabric edge to send traffic to the fabric border in
Fabric Site 1.
The fabric edge in Fabric Site 1 sends Host 1 traffic to the fabric border in
Fabric Site 1.
After receiving the traffic, the fabric border in Fabric Site 1 queries the
transit network fabric control plane node for the destination host. This
occurs because Cisco DNA Center configures the fabric border in Fabric
Site 1 to query its fabric control plane node for the prefixes in Fabric Site
1 and query the transit control plane node for any other prefixes.
The fabric transit control plane node responds to the query with the
destination address of the fabric border in Fabric Site 2.
The fabric border in Fabric Site 1 forwards the traffic to the fabric border
in Fabric Site 2 using VXLAN with the SGT in the header.
After receiving the traffic, the fabric border in Fabric Site 2 queries the
fabric control plane node of Fabric Site 2 for the destination host. This
occurs because Cisco DNA Center configures the fabric border in Fabric
Site 2 to query its fabric control plane node for the prefixes in Fabric Site
2 and query the transit control plane node for any other prefixes.
The fabric control plane in Fabric Site 2 responds with the destination
address as the fabric edge in Fabric Site 2.
The fabric border in Fabric Site 2 forwards the traffic to the fabric edge in
Fabric Site 2 using VXLAN encapsulation with SGTs embedded. The
fabric edge in Fabric Site 2 allows or denies traffic based on the SGACL
enforced for the SGTs assigned to Host 1 and Host 2.
3. The negative reply prompts the edge node to send the traffic to the Fabric Site
1 border.
4. Upon receiving the traffic, the Fabric Site 1 border sends a request to the
transit control plane node for the destination IP address information.
5. The transit control plane node does not have the Internet prefix in its database
and sends a negative reply.
6. Based on the negative reply, the Fabric Site 1 border now knows to forward
the traffic to the Fabric Site 2 or Fabric Site 3 border because they are
connected to the Internet. This configuration is pushed by Cisco DNA Center.
7. Upon receiving the traffic, the Fabric Site 2 border sends a request to the
transit control plane node for the destination IP address information.
8. The transit control plane node again sends a negative reply because it does not
have the destination IP address registered in its database.
9. The Fabric Site 2 border uses traditional routing lookup to evaluate the next
hop to send the traffic, which usually is the default router. Traffic is sent to the
default router, which then forwards it onward.
3. The negative reply prompts the edge node to send the traffic to the Fabric Site
2 border. The Fabric Site 2 border receives the traffic and sends a request to
the transit control plane node for the destination IP address information.
4. The transit control plane node has the destination IP address information in its
database because the data center aggregate address was registered by the
Fabric Site 1 border. The transit control plane sends the destination IP address
of the Fabric Site 1 border to the Fabric Site 2 border.
5. The Fabric Site 2 border forwards the DC traffic to the Fabric Site 1 border
using VXLAN. The Fabric Site 1 border receives the traffic and sends a
request to the transit control plane node for the destination IP address
information.
6. The transit control plane node sends a reply to the Fabric Site 1 border noting
that the destination RLOC is its own IP address. The Fabric Site 1 border
forwards the traffic to the data center.
Small Site
In a small site design, border and control plane nodes are co-
located on the same device with one or more fabric edges.
Limited survivability is possible with the redundant co-located
border and control plane node. Figure 7-36 illustrates an
example of a small site Cisco SD-Access design. Only two co-
located border and control plane nodes are allowed. The benefits
of the small site design are limited survivability and the option
to use a local WLC or an embedded WLC on Catalyst 9000 Series
switches.
Figure 7-36 Small Site Cisco SD-Access Design
Medium Site
Medium site Cisco SD-Access design can have a maximum of
six dedicated control plane nodes for wired networks and four
control plane nodes for wireless (two enterprise CP nodes and
two guest CP nodes) for higher survivability, as shown in
Figure 7-37. The design can have up to two co-located control
plane and border nodes. Dedicated edges are supported in this
site model. Cisco ISE is a standalone deployment in a medium
site design. A dedicated WLC or an embedded WLC in High
Availability (HA) can be enabled in the medium site
deployment.
Large Site
Figure 7-38 illustrates an example of large site Cisco SD-
Access design. Large site design supports a maximum of six
control plane nodes (wired) and four border nodes for site
exits. In large site design, there is full survivability for the
control plane and full redundancy for the border. Large site
design can have dedicated edges. It supports a local WLC or
an embedded WLC in HA.
Underlay Network
The underlay network defines the physical switches and
routers used to deploy a Cisco SD-Access network. The
underlay network provides IP connectivity using a routing
protocol (static routing is supported but not scalable) and
carries the traffic encapsulated as part of the overlay network.
A scalable, simple, reliable routing protocol is recommended
in the underlay for a Cisco SD-Access network because the
underlay is mainly used for transport purposes. Endpoints such
as users, access points, IoT devices, and extended nodes
connect to the underlay network. Endpoints connected to the
underlay network are physically connected to the underlay, but
they are part of the overlay network in Cisco SD-Access.
Overlay Network
An overlay network is created on top of the underlay to create
a virtualized network. An overlay network is a logical
topology used to virtually connect devices, built over an
arbitrary physical underlay topology. An overlay network
often uses alternate forwarding attributes to provide additional
services that are not provided by the underlay. The data plane
traffic and control plane signaling are contained within each
virtualized network, maintaining isolation among the networks
as well as independence from the underlay network, also
known as macro-segmentation. Layer 2 overlays emulate a LAN
segment to transport Layer 2 frames over the Layer 3 underlay.
The control plane node can be co-located with the border node
if the endpoint scale requirements are honored. When there is
a possibility of several mobility events in the network, co-
locating the control plane and border nodes is not
recommended. Every time a wireless user roams, the WLC
sends notifications to the control plane node, and high roam
rates result in hundreds of mobility events per second or
thousands of mobility events per minute, which is why a
dedicated control plane node works better.
Step 2. Integrate Cisco ISE and Cisco ACI. The Cisco ISE
PAN integrates with Cisco APIC over SSL and
uses APIs to synchronize the SGTs and EPGs.
Cisco APIC details are added on Cisco ISE so that
IP address-to-SGT mappings from Cisco ISE and
IP address-to-EPG mappings from Cisco APIC are
exchanged over SXP. Whenever the SXP protocol
is used, scale needs to be accounted for in terms of
the number of mappings that can be shared
between Cisco ISE and Cisco ACI.
Note
The integration works only with a single Cisco ACI fabric and a single Cisco ACI tenant
with a single L3Out; a shared L3Out is not supported. Cisco SD-Access uses
SGACL for policies, and Cisco APIC uses access contracts for policies between EPGs.
SUMMARY
Cisco SD-Access deployment designs vary based on the scale,
resiliency, policy, multicast, type of nodes, and all the
parameters discussed in detail in this chapter. Enterprises can
choose to deploy a campus fabric using a single fabric site
deployment or a multi-site fabric deployment. Depending on the
network scale, device platforms, software versions, and round-trip
time, a single fabric site model may work for some
customers. A single fabric site brings ease of use and one
policy across the whole campus fabric. Multi-site fabric
deployment could be an option for large customers with a
bigger scale who are distributed geographically with higher
RTTs. To retain end-to-end segmentation with minimal manual
work in multi-site deployments, using Cisco SD-Access transit
or Cisco SD-WAN transit is recommended. Cisco DNA Center
1.3.3 introduces flexible features such as policy extended
nodes, Layer 2 intersite, and Layer 2 flooding. Layer 2
flooding leverages native multicast in the underlay to allow
legacy applications depending on ARP, broadcast, and link-
local multicast in fabric deployments. Multidomain
deployments are inevitable in customer networks where
applications reside not in the campus network but in a data
center or cloud. Cisco SD-Access integration with Cisco ACI
provides the flexibility of exchanging groups between the
domains so that policies can be applied based on user roles or
endpoint groups.
Plug and Play: This section discusses the Plug and Play application,
which provides the capability to automatically onboard and configure
network devices from a factory default state.
Cisco DNA Center Tools: This section discusses some of the other tools
available in Cisco DNA Center that can be handy for day-to-day network
operations.
The focus of this book so far has been on Cisco Software-
Defined Access; however, it is important to remember that
Cisco Software-Defined Access is just one application
included in Cisco DNA Center’s application suite. This
chapter covers some of the other workflows and tools in Cisco
DNA Center that can increase efficiency, lower risk, and
provide agility in enterprise networks.
Network Connectivity
Because the Cisco DNA Center solution runs on a Cisco UCS
server, it features a variety of network connectivity options
that can be selected depending on the environment. Before
configuring the interfaces during installation, it is important to
understand exactly which resources need to be reachable on
Cisco DNA Center and which resources it needs to be able to
reach. At a high level, these resources are as follows:
All appliances in a cluster must be the same size (entry, midsize, or large).
Note
Cisco DNA Center HA requires three nodes in a cluster to avoid a “split-brain” scenario in
which two single isolated nodes both think that they’re active. Using three nodes eliminates
this scenario by using a quorum of two nodes to determine which nodes remain active in
the event of a failure. This is a common methodology in HA architectures and is used in
many HA solutions, such as databases.
The Cisco DNA Center services and configuration exist on all
nodes in a cluster; however, they respond only on the active
node. During normal operation, all services and configurations
are synchronized between the nodes in a cluster. When a
failure or isolation occurs on the active node, one of the
remaining two nodes becomes active and responds to the
services.
Security fixes
Bug fixes
Verifying that the network device has enough flash memory or RAM to
run the new image
Transferring the image from the storage location to the device itself using
a protocol such as Secure Copy (SCP) or Trivial File Transfer Protocol
(TFTP)
Verifying that the image transfer was successful and that the new image
on the device is intact
Altering the boot statements on the device so that it loads the freshly
transferred image properly on the next reload
Reloading the device during a change window and verifying that it boots
up properly
Verifying that the device is running the new image and is in a stable state
Image Repository
SWIM features a software image repository that allows
network operators to store software images on the Cisco DNA
Center appliance itself or on a remote Secure File Transfer
Protocol (SFTP) server for distribution to remote
environments.
Golden Image
A Golden Image in SWIM is any software image that has been
selected to be the “standard” image and can be selected based
on any of the following criteria:
Upgrading Devices
Device upgrades are performed in the Provision tool of the
Cisco DNA Center GUI. Cisco DNA Center displays an alert
to the network operator when a device’s current image does
not match the Golden Image selected and requires an upgrade.
This alert also contains the results of a pre-upgrade check that
is automatically performed for each device that requires an
upgrade. The pre-upgrade check verifies the following
software upgrade requirements:
The device has enough free space on flash memory for the new image.
The device is reachable via SCP or HTTPS for the image transfer.
The device has appropriate licensing for the image and is eligible for the
upgrade.
Banner changes
Hostname changes
Template Creation
The Template Editor stores individual templates in Projects,
which behave like folders, for easy organization and
management. Projects can be created based on any structure
that suits the company.
Note
The Onboarding Configuration project is a special template in Cisco DNA Center that is
used for Plug and Play (PnP) deployments. PnP and the Onboarding Configuration project
are discussed later in this chapter.
Figure 8-12 shows the Add New Template dialog box in the
Template Editor with Device Type and Software Type options
that can be used to define which platform(s) a template should
be executed on.
Figure 8-12 Add New Template Dialog Box
Note
Network profiles are also used to assign wireless service set identifiers (SSIDs) to sites, as
discussed in Chapter 3.
Deploying Templates
Templates are deployed using the Provision tool in Cisco DNA
Center. This tool allows network operators to select multiple
or individual devices at any level in the design hierarchy for
provisioning, and the workflow provides the ability to fill in
any information required for templates that contain variables.
Variable values for templates can also be imported from a
comma-separated values (CSV) file, making it easier to add
custom information for many devices at once.
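For example, a simple Day-N template might use Velocity variables that are filled in per device during provisioning. The variable names below are hypothetical, chosen only to illustrate the syntax:

```
hostname $deviceHostname
!
interface $uplinkInterface
 description Uplink to $upstreamNeighbor
 no shutdown
```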
Note
Due to their structure, templates pushed by Cisco DNA Center result in the raw CLI
commands being input into the device configuration as if they were entered manually. As
such, there is no automation currently available to roll back a template’s change. To remove
the configuration, a new template would have to be created to undo the commands on the
device.
Onboarding Templates
Onboarding templates for PnP in Cisco DNA Center are
created using the Template Editor described earlier in this
chapter and are stored in the Onboarding Configuration
project. Templates in the Onboarding Configuration project
differ from other templates in that they cannot be written in
VTL or be composite templates, and as a result, can contain
only platform/software-specific configuration commands. As
such, an onboarding template should be used to provide a
“Day-0” configuration to the new network devices containing
the basic configuration required to establish connectivity to the
rest of the network. After onboarding, further configuration
can be applied using “Day-N” templates, as described earlier
in this chapter, if more advanced logic is required.
PnP Agent
Most current Cisco devices in a factory or out-of-the-box state
boot with an active PnP Agent that automatically tries to
discover a PnP controller for its onboarding configuration. The
two most common methods that are used to advertise the IP
address of the PnP controller to new devices are
Claiming a Device
After the new device has established its connection with Cisco
DNA Center, it appears in the Provision > Devices > Plug and
Play Devices section in an Unclaimed state, ready for
claiming. Figure 8-21 shows a new PnP device with an
Unclaimed status that has established a connection with Cisco
DNA Center.
Figure 8-21 PnP Device in Unclaimed State
The claim process for each new device allows for the
configuration of the following parameters:
Selection of target software image based on the Golden Image selected for
the site Image Repository
Figures 8-22 through 8-25 show each stage of the PnP claim
process and demonstrate the selection of site, onboarding
template, Golden Image, and variable assignments.
Figure 8-22 PnP Claim Step Showing Site Assignment
Single device: Each device can be added manually in the PnP tool using
the serial and product ID numbers of the device.
Figure 8-26 shows the Add Devices dialog box for the PnP
process and demonstrates the settings required to pre-claim a
device.
Topology
The Topology tool in Cisco DNA Center allows a network
operator to take advantage of the defined site hierarchy and
visualize the network at every level of the hierarchy. The
visualization in the Topology tool shows basic information
about each device, including the name and health score, along
with links between devices. Clicking any device or link
provides more detailed information about the element and
allows the user to quickly run commands on the device or
jump to the Device 360 page in Cisco Assurance.
Command Runner
During troubleshooting or inventory operations, network
operators sometimes need to gather information from network
devices using the CLI. This process typically involves using a
terminal client to connect to the device, executing the
command(s), and then copying and pasting the output to a
spreadsheet or a text file. This repetitive task can be very time
consuming when multiple devices or commands are required.
Security Advisories
Network security is a major concern to all enterprises. They
not only need to protect the network infrastructure from
outside intruders and unwanted traffic, but also need to protect
the network devices themselves from compromise. Security
vulnerabilities are an unfortunate reality for all connected
devices, and given the importance of the network
infrastructure to the business, enterprises must pay extra
attention to making sure that the network devices are protected
and updated to versions of software or configurations that are
not vulnerable. In some companies, entire teams are dedicated
to tracking and remediating software vulnerabilities on the network.
Security advisories are usually released via mailing lists or on
websites along with workarounds or “fixed-in” versions, and
the team must then audit the network inventory to figure out
which devices may be vulnerable and would require
remediation. This can be a time-consuming, manual process,
and vulnerabilities can sometimes exist in network devices for
months before they’re remediated.
SUMMARY
This chapter covered some of the workflows and tools in
Cisco DNA Center that can be used alongside Cisco Software-
Defined Access to provide more efficient deployment and day-
to-day operations of the network. Some of the workflows
discussed were Software Image Management (SWIM),
templates, and Plug and Play (PnP), which can be used for
upgrades, mass configuration changes, and automated device
onboarding, respectively. It also discussed the Cisco DNA
Center appliance and connectivity options along with High
Availability for the Cisco DNA Center solution.
ASSURANCE BENEFITS
An enterprise network consists of users, endpoints, network
infrastructure devices, and business-critical applications—
email, web-based applications, and business-relevant
applications. More and more devices such as Internet of
Things (IoT) devices are being connected to the network every
day, and these components are heavily dependent on the
network infrastructure. The success of the services, user
experience, and business efficiency depends on the enterprise
infrastructure. Because the network has become critical to
business operations, continuous monitoring of the network is
needed as the network extends further to branch locations,
complexity increases with many applications running
simultaneously, and the threat surface increases.
Cisco ISE integration with Cisco DNA Assurance provides the username,
device information, and user group (Scalable Group Tag) assigned to the
user. Cisco ISE reported the username as Jane Smith, assigned to the HR
scalable group and connected using two devices—a workstation and a
mobile phone—as shown in Figure 9-4. Cisco ISE uses pxGrid to push
this information to the Cisco DNA Assurance engine.
Cisco Application Visibility and Control (AVC) identifies that the flow
records are for Cisco Webex traffic.
Streaming Telemetry
Legacy protocols such as SNMP, syslog, and NetFlow are the
most common ways of conducting telemetry in traditional
networks. However, some of these protocols, such as SNMP,
have serious shortcomings because they use a poll-based
mechanism. If a critical key performance indicator (KPI) was
measured on a device, the collector wouldn’t know about it
until the next polling interval. SNMP also uses management
information base (MIB) to get data points about the network
device performance, and the entire MIB would need to be read
into the collector, even if only a single data point was required.
If multiple collectors needed this information, SNMP
information would have to be unicast to each receiver, using a
lot of the network bandwidth for operational purposes. Some
of these restrictions make SNMP slow and inefficient for
programmable infrastructures.
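Model-driven streaming telemetry addresses these shortcomings with push-based subscriptions. A minimal IOS-XE sketch follows, assuming a gRPC dial-out subscription; the receiver address, port, subscription ID, and XPath are placeholders, not values specific to Cisco DNA Assurance:

```
! Hedged sketch: push CPU-utilization data to a collector every 60 seconds
telemetry ietf subscription 101
 encoding encode-kvgpb
 filter xpath /process-cpu-ios-xe-oper:cpu-usage/cpu-utilization/five-seconds
 stream yang-push
 update-policy periodic 6000
 receiver ip address 192.0.2.10 57500 protocol grpc-tcp
```

Unlike SNMP polling, the device streams only the subscribed data points to each configured receiver as they are produced.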
Figure 9-5 Cisco DNA Assurance Collectors
Health Dashboards
One of the challenges of traditional networks is the amount of
raw data the operator needs to sift through to draw a
meaningful conclusion about the network or an issue. To simplify
the Cisco DNA Assurance data consumption, Cisco DNA
Assurance introduced health scores, which are assigned to
network devices, clients, and applications based on several
KPIs received from the network. Health scores are color
coded to reflect the alert level so that the network operator can
easily see which items warrant immediate attention. Figure 9-6
provides an example of the health score range offered by Cisco
DNA Assurance.
Note
The network operator has the option to change the health score KPIs to make them
relevant to their infrastructure. For example, the default KPI for CPU utilization is Good if
the CPU utilization is less than 95 percent. The network operator can change this KPI if the
network infrastructure requirement is to be within 80 percent CPU utilization.
Application Experience
Intelligent Capture
Traditionally, Cisco DNA Center receives information about
device and client health from Cisco wireless LAN controllers.
The Intelligent Capture feature adds a direct communication
link between Cisco DNA Center and the access points, so each
AP can communicate with Cisco DNA Center directly. Over
this channel, Cisco DNA Center receives packet capture data,
AP and client statistics, and spectrum data, giving it visibility
into AP data that is usually not available from wireless LAN
controllers. The network operator can use this tool to
proactively find and resolve wireless problems involving
onboarding, interference, poor performance, and spectrum
analysis. Refer to the Cisco DNA Center supported devices
list for the hardware and software that support the Intelligent
Capture feature.
Anomaly Capture
Whenever a wireless client connects, the WLC generates a
client event and shares it with Cisco DNA Assurance through
streaming telemetry. The access point has visibility into client
onboarding events and can collect captures when a client
anomaly event occurs. Figure 9-20 depicts the packet flow that
occurs in an anomaly event, such as the following:
DHCP failure
802.1X failure
Extensible Authentication Protocol (EAP) Key Exchange failure (4-way,
GTK Failure, Invalid EAPOL Key MIC, EAP timeout, etc.)
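The categories above suggest how a collector might decide when to trigger an anomaly capture. The following sketch is hypothetical: the event names are illustrative stand-ins, not actual WLC telemetry fields, and the mapping simply buckets onboarding failures into the categories listed.

```python
# Illustrative mapping of client onboarding events to anomaly categories.
ANOMALY_CATEGORIES = {
    "dhcp_timeout": "DHCP failure",
    "dot1x_reject": "802.1X failure",
    "eap_m1_timeout": "EAP Key Exchange failure",
    "eapol_mic_invalid": "EAP Key Exchange failure",
}

def classify_event(event):
    """Return the anomaly category, or None for a normal event."""
    return ANOMALY_CATEGORIES.get(event)

def should_capture(event):
    """Trigger a packet capture only for recognized anomaly events."""
    return classify_event(event) is not None

print(classify_event("dot1x_reject"))   # 802.1X failure
print(should_capture("assoc_success"))  # False
```

Keying the capture decision off a category table like this keeps normal onboarding traffic from generating capture load.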
Path Trace
Path Trace is a useful Cisco DNA Assurance tool that lets the
operator see the path that application or service traffic takes
from a source to a destination. It provides visibility at the
hop level, similar to a traceroute, because it uses traceroute to
identify the traffic path. The source and destination for a
path trace can be any combination of wired or wireless clients
or device interface IP addresses.
The path trace for Grace Smith indicates that the issue is an
interface ACL applied on the p1.edge1-sda1.local device along
the traffic path. Path Trace thus helps the operator resolve
issues faster without logging in to multiple devices.
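A hop-by-hop result like the one just described can be modeled simply. The data shape, device names other than the one cited in the text, and the ACL name below are all hypothetical; the sketch just shows how flagging the first blocking hop pinpoints the problem device, as Path Trace does for p1.edge1-sda1.local.

```python
def first_acl_block(hops):
    """Return the first (device, acl) pair that blocks the flow, else None."""
    for hop in hops:
        if hop.get("acl_result") == "DENY":
            return hop["device"], hop["acl"]
    return None

# An illustrative three-hop path with one interface ACL denying the flow.
path = [
    {"device": "edge2-sda1.local", "acl_result": "PERMIT", "acl": None},
    {"device": "p1.edge1-sda1.local", "acl_result": "DENY",
     "acl": "BLOCK_HTTP"},
    {"device": "border1-sda1.local", "acl_result": "PERMIT", "acl": None},
]

print(first_acl_block(path))  # ('p1.edge1-sda1.local', 'BLOCK_HTTP')
```

Reading the answer from one structure, rather than logging in to each device along the path, is the time saving the tool provides.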
Sensor Tests
Consider a hypothetical use case in which Company ACME
has an executive leadership event scheduled in a week, and the
network operations manager has been told to make sure the
wireless environment is ready. The CIO wants assurance that
the wireless environment in that area is performing as
expected and that any issues are surfaced proactively.
Company ACME is looking for a solution that can proactively
check the system and replicate client behavior without the
network team performing on-premises testing. For this
purpose, Cisco introduced the Cisco Aironet 1800s Active
Sensor, a dedicated compact wireless network sensor designed
to be placed anywhere in the network to monitor the wireless
network. It simulates real-world client experiences by running
periodic wireless connection tests to validate wireless
performance and to make sure the network is adhering to the
committed SLAs. Deploying 1800s sensors is beneficial for
proactively monitoring the network at critical venues and
high-value locations, such as conference halls and meeting
rooms.
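The sensor workflow reduces to comparing each periodic test run against the committed SLAs. The sketch below is an assumption-laden illustration: the metric names, target values, and sample run are invented for the example and are not actual sensor output fields.

```python
# Hypothetical SLA targets for a sensor test run (illustrative values).
SLA = {"onboarding_s": 10.0, "dns_ms": 150.0, "throughput_mbps": 20.0}

def sla_violations(result):
    """Return the metrics in one test run that miss their SLA target."""
    bad = []
    if result["onboarding_s"] > SLA["onboarding_s"]:
        bad.append("onboarding_s")
    if result["dns_ms"] > SLA["dns_ms"]:
        bad.append("dns_ms")
    if result["throughput_mbps"] < SLA["throughput_mbps"]:
        bad.append("throughput_mbps")
    return bad

# One periodic run: onboarding and throughput pass, DNS is slow.
run = {"onboarding_s": 4.2, "dns_ms": 210.0, "throughput_mbps": 34.0}
print(sla_violations(run))  # ['dns_ms']
```

Running such a check on every scheduled test is what lets the team surface a degradation in the event space days before attendees arrive.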
Summary
The Cisco DNA Assurance solution provides new capabilities
that enable network operators to monitor and troubleshoot the
network more easily. This chapter introduced some highlights
of the Cisco DNA Assurance capabilities, such as streaming
telemetry, intelligent capture for real-time wireless client
troubleshooting, proactive monitoring using sensors, path trace
features to visualize the packet flow with every hop in detail,
network time travel, and 360 views of clients, network
devices, and applications. The new machine learning
algorithms along with guided remediations based on the Cisco
knowledge database provide valuable insights for maintaining
the network. Intent-based networking has taken network
operations a step further with Cisco DNA Assurance.
A
access control list (ACL) A set of rules based on IP addresses
that is used to permit or deny network traffic passing through
network devices.
B
Bidirectional Forwarding Detection (BFD) A standards-
based protocol that detects the connectivity state between
network devices and alerts higher-layer protocols to state
changes.
C
Cisco Application Centric Infrastructure (Cisco ACI) A
software controller–based solution that uses software-defined
networking (SDN) to deploy, monitor, and manage enterprise
data centers and clouds.
D
Device 360 In Cisco Assurance, a complete view of a
particular network device’s information and location on a
network.
downloadable access control list (dACL) An ACL that is
dynamically pushed by Cisco ISE to a network switch after a
client authenticates.
E
EtherChannel Grouping of multiple physical links between
devices into a single virtual link to optimize bandwidth usage
and provide failover.
F–K
fabric border node A node in the Cisco Software-Defined
Access fabric that facilitates traffic entering and exiting the
fabric domain.
L
Link Layer Discovery Protocol (LLDP) A Layer 2–based
protocol that functions similarly to Cisco Discovery
Protocol (CDP) but can also be used between non-Cisco
devices.
M
machine learning A subset of artificial intelligence used to
gather data and information from the network environment to
constantly learn, adapt, and improve the accuracy of the AI.
N–O
NetFlow A standards-based protocol that contains explicit
application data and is used by many network management
systems to monitor network devices and flows.
P
Port Aggregation Protocol (PAgP) A protocol that is used for
EtherChannel. See also EtherChannel.
Q
quality of service (QoS) The categorizing and prioritization of
traffic in a network, typically based on application type and
requirements.
R
received signal strength indicator (RSSI) The signal strength
that a wireless device receives from a wireless transmitter.
S
Scalable Group Tag (SGT) A unique identification assigned
to an endpoint or group of endpoints for use in segmentation.
Also known as Security Group Tag.
T
Terminal Access Controller Access-Control System Plus
(TACACS+) A networking protocol that is used primarily for
device authentication, authorization, and accounting.
U
User 360 In Cisco Assurance, a complete view of a particular
user’s devices and activities on a network.
V
Virtual Extensible LAN (VXLAN) A network encapsulation
protocol that allows for the transport of Layer 2 Ethernet
frames over a Layer 3 infrastructure.
W–X
Wireless LAN Controller (WLC) A hardware or software-
based network device that provides management and
controller functions for APs.
A
AAA (authentication, authorization, and accounting), 33
access contracts, 123–124
access points, Cisco SD-Access, 89
access tunnels, displaying, 185–186
accounting, 33, 34
ACI (Cisco Application Centric Infrastructure), 16–17
analytics, 9
ETA, 12
ANC (Adaptive Network Control), 49
Anomaly Capture, 301–302
Ansible Playbook, 61
APIC-EM (Application Policy Infrastructure Controller
Enterprise Module),
core applications, 62–63
APIs (application programming interfaces), 9
Application Health dashboard, Cisco DNA Assurance,
299–300
architecture, 50
Cisco DNA Assurance, 287–288
Cisco DNA Center, 256
ARP flooding, 218–219. See also Layer 2 networks
assigning, templates, 269–270
assurance, 285
authentication, 31, 33, 35
Cisco ISE Compliance, 46–48
IEEE 802.1X, 35–37
troubleshooting in Cisco SD-Access, 188–190
authentication templates
Cisco SD-Access, 105–106
Closed Authentication, 140–141
Easy Connect, 141–144
editing, 142–144
No Authentication, 137–138
Open Authentication, 138–140
authenticators, 35
authorization, 33, 35
automation, 2, 7
Ansible Playbook, 61
border, 98–99
Cisco DNA Center, 25–26
copying configuration files, 60
GUIs, 62
LAN, 84–86
configuration, 87–88
first phase, 86
second phase, 87
and manually configured networks, 2–3
tools, history of, 60–62
B
bandwidth, in WAN environments, 19
bidirectional PIM, 210
border nodes, 96–98
automation, 98–99
control plane collocation, 99–100
BYOD (bring your own device), 4, 5, 45–46, 128
security, 31
C
campus networks
corporate network access use case, 149–159
desired benefits, 5–6
fabrics, 24–25
guest access use case, 159–164
Layer 2 intersite, 224
design and traffic flow, 224–227
multidomain, 16–18
three-tier, 14
CAs (certificate authorities), 114–115
certificates
Cisco ISE, 115–116
self-signed, 113
Cisco ACI (Application Centric Infrastructure), 252–253
Cisco AI Network Analytics, 304–306
Cisco Campus Fabric, 25–28
LISP, 26, 27
traffic flow for wired clients, 30
Cisco Catalyst 9000 Series switches, 11
Cisco DNA Assurance, 9, 286
architecture, 287–288
data collection points, 289–291
health dashboards, 292–293
Application Health, 299–300
Cisco SD-Access Fabric Network, 296
Client Health, 297–298
Network Health, 294–296
Overall Health, 293
network time travel, 292
streaming telemetry, 290–292
tools
Anomaly Capture, 301–302
Cisco AI Network Analytics, 304–306
Intelligent Capture, 300–301
Path Trace, 303
sensor tests, 303–304
Cisco DNA Center, 28–29, 63, 112, 197
access contracts, 123–124
APIC-EM, core applications, 62–63
architecture, 256
authentication templates
editing, 142–144
No Authentication, 137–138
Open Authentication, 138–140
automation, 25–26
Cisco Campus Fabric, 25–28
Cisco ISE integration, 116–122
certificates in Cisco DNA Center, 113–115
certificates on Cisco ISE, 115–116
Cisco SD-Access, 23–24
claiming devices, 276–279
CLI (command-line interface), 115
clustering, 258–259
communication flow with Cisco ISE, 120–121
corporate network access use case, 149–159
Design tool, 64–68
Network Hierarchy, 64–68
Network Settings, 69
wireless deployments, 70–72
Discovery tool, 72–75
fabrics, 24–25
group-based access control, 122–126
guest access use case, 159–164
HA, 258
home screen, 63–64
host onboarding, 128, 136–137
IBN (intent-based networking), 286–287
import file support, 115
Inventory tool, 74–77
network connectivity, 256–257
PKI Certificate Management feature, 114–115
PnP, 272–273
PnP Agent, 275–276
policies, 124
segmentation, 124–126
Provision tool, 77–78
resources, 256–257
roles, 75
scale numbers, 256
software image management, 259–261
Start Migration link, 123–124
SWIM
Golden Image, 262
image repository, 261
upgrading devices, 263–266
switchport override, 109
sync process, 74
templates, 266–267
assigning, 269–270
creating, 267–269
deploying, 270–272
onboarding, 273–274
third-party RADIUS server, 126–127
tools
Command Runner, 281–282
Security Advisories, 283
Topology, 280–281
verifying integration with Cisco ISE, 121–122
Cisco DNA (Digital Network Architecture), 10, 12
Cisco ISE (Identity Services Engine), 29, 31, 32, 33, 112,
196
architecture, 50
BYOD, 45–46
certificates, 115
Cisco DNA Center integration, 116–122
certificates in Cisco DNA Center, 113–115
certificates on Cisco ISE, 115–116
communication flow with Cisco DNA Center, 120–121
Compliance, 46–48
deployment options
dedicated distributed, 52
distributed, 51–52
standalone, 51
design considerations, 50
device administration, 37
differences between RADIUS and TACACS+ protocols, 33
group-based access control, 122–126
guest access, 38–40
integrations with pxGrid, 48–49
policy sets, 146–148
posture checks, 45–48
probes, 41, 42–43
profiling, 40–41, 43–45
role-based access control, 37
secure access, 34–37
TACACS+, 37–38
verifying integration with Cisco DNA Center, 121–122
Cisco Network Visibility Application, 63
Cisco Rapid Threat Containment, 49
Cisco SD-Access, 23–24, 112
access points, 89
authentication templates, 105–106
border and control plane collocation, 99–100
border automation, 98–99
Cisco ACI policy extension, 252–253
components, 28–29, 245–246
corporate network access use case, 149–159
design considerations, 240
fabric border node, 248–249
fabric control plane node, 248
fabric wireless integration, 249
infrastructure services, 249
large sites, 243
medium sites, 243
mixed SD-Access wireless and centralized wireless
option, 250
security policy, 251–252
single-site versus multisite, 244–245
small sites, 242
very small sites, 241–242
wireless guest deployment, 250–251
wireless over-the-top centralized wireless option, 250
DHCP, 172–175
debug on fabric switch, 174
request process, 173
for distributed campus deployments, 228–229
Cisco SD-Access transit, 232–234
fabric multisite or multidomain with IP transit, 230–232
IP transit, 229–230
multisite Cisco SD-Access transit, 234–237
policy deployment models, 238–240
external connectivity, 104
fusion router, 104–105
fabric encapsulation, 167–168
LISP, 168–170
VXLAN, 171–172
Fabric in a Box (FiaB) deployment, 227–228
fabrics, 24–25
border node, 96–98
control plane, 95–96
creation, 92
device roles, 94–95
edge nodes, 100–102
host onboarding, 105
intermediate nodes, 103–104
MTU considerations, 172
placement, 93
roles, 170
SSID to IP pool mapping, 108–109
VN to IP pool mapping, 106–108
VNs, 94
VXLAN, 26
fusion router, 91
guest access use case, 159–164
host operation and packet flow, 172
IoT extension, 196–197
extended node configuration, 200–203
extended nodes, 198
hosts communicating with hosts connected outside the
fabric, 205–206
onboarding the extended node, 203–205
policy extended nodes, 198–199
traffic flow within a policy extended node, 207–208
traffic from clients connected to policy extended node,
206–207
traffic to clients connected to policy extended node, 207
latency considerations, 240–241
Layer 2
border, 221–223
flooding, 218–221
intersite, 224–227
multicast, 208
configuration in Cisco DNA Center, 216–218
fabric native, 214–216
PIM ASM with head-end replication, 211
PIM SSM with head-end replication, 213–214
network profiles, 269–270
network topologies, 81–82
overlay, design considerations, 247–248
segmentation
macro-, 144–145
micro-, 145–146
outside the fabric, 164
policies, 148
shared services, 90–91
switchport override, 109
transit networks, 91
IP-based transit, 91–92
SD-Access transit, 92
troubleshooting, 181–182, 188
authentication, 188–190
fabric control plane, 186–187
fabric edge, 182–186
policy, 190–191
SGTs, 191–192
underlay, 82–83, 246
automated, 84–89
design considerations, 246–247
manual, 83–84
wired host onboarding and registration, 175–176
wired host operation, 176
inter-subnet traffic in the fabric, 179
intra-subnet traffic in the fabric, 176–178
traffic to destinations outside the fabric, 180
wireless host operation, 180–181
initial onboarding and registration, 180–181
WLCs, 89
Cisco SD-Access Fabric Network Health dashboard, Cisco
DNA Assurance, 296
Cisco SD-WAN, transit, 237–238
Cisco Stealthwatch, 11
Cisco TrustSec, 54
functions
classification, 55
enforcement, 57–58
propagation, 55–57
SGTs, 54
Cisco Zero Trust, 128
claiming devices, 276–279
classification
Cisco TrustSec, 55
endpoints, 40
CLI (command-line interface), 3
Cisco DNA Center, 115
Client Health dashboard, Cisco DNA Assurance, 297–298
Closed Authentication template, 140–141
closed mode, IEEE 802.1X, 134–136
cloud computing, 4, 11
clustering, Cisco DNA Center, 258–259
CMX (Cisco Connected Mobile Experience), 300
COA (Change of Authorization), 38–39
Command Runner, 281–282
commands
ip helper-address, 172
show authentication sessions, 189
show authentication sessions interface, 154
show client detail, 190
show cts environment-data, 191–192
show cts rbacl, 191
show cts role-based permissions, 191
show device-tracking database, 182–183
show ip dhcp snooping binding, 182
show lisp instance-id, 187
show lisp instance-id ethernet database, 183
show lisp instance-id ethernet server, 186–187
show policy-map type control subscriber, 139, 141
show running config, 188
show template interface source user, 139
write erase, 203
Compliance, Cisco ISE, 46–48
configuration changes, 266–267
configuration files, copying, 60
configuring
extended nodes, 200–203
Layer 2 flooding, 219–221
connectivity, Cisco DNA Center, 256–257
context, endpoints, 48
contracts, 123–124
control plane, 3, 24–25
border node collocation in Cisco SD-Access, 99–100
in Cisco SD-Access, 95–96
Cisco SD-Access, 29
show cts role-based permissions, 156, 163
controllers, 23
corporate network access use case, 149–159
creating, templates, 267–269
D
data collection points, Cisco DNA Assurance, 289–291
data plane, 3, 24–25
dedicated distributed deployment, Cisco ISE, 52
delivery modes, multicast, 210
deployment options
Cisco ISE
dedicated distributed, 52
distributed, 51–52
standalone, 51
Cisco SD-Access
distributed campus, 228–233, 233–237
FiaB (Fabric in a Box), 227–228
policies, 238–240
templates, 270–272
design considerations
fabric border node, 248–249
fabric control plane node, 248
fabric wireless integration, 249
infrastructure services, 249
large sites, 243
medium sites, 243
mixed SD-Access wireless and centralized wireless option,
250
overlay network, 247–248
security policy, 251–252
single-site versus multisite, 244–245
small sites, 242
underlay network, 246–247
very small sites, 241–242
wireless guest deployment, 250–251
wireless over-the-top centralized wireless option, 250
Design tool (Cisco DNA Center)
Network Hierarchy, 64–68
Network Settings, 69
wireless deployments, 70–72
device upgrade process, Cisco DNA Center, 263–266
DHCP (Dynamic Host Configuration Protocol), 90
in Cisco SD-Access, 172–175
debug on fabric switch, 174
request process, 173
digital transformation model, 7
Discovery tool (Cisco DNA Center), 72–75
distributed campus deployments, 228–229
Cisco SD-Access transit, 232–233
multisite, 233–237
fabric multisite or multidomain with IP transit, 230–232
IP transit, 229–230
policy deployment models, 238–240
distributed deployment, Cisco ISE, 51–52
DMVPN (Dynamic Multipoint Virtual Private Network),
24–25
DNS (Domain Name Service), 90
E
Easy Connect template, 141–144
EasyQoS, 63
editing, authentication templates, 142–144
EID (endpoint identifier), 26
encapsulation protocols, 167–168
LISP (Location Identifier Separation Protocol), 168–170
VXLAN (Virtual Extensible LAN), 171–172
endpoints, 112
classification, 40
context, 48
posture checks, 45–48
profiling, 40–41, 43–45
ERS (External RESTful Services), 113
enabling in Cisco ISE (Identity Services Engine), 118
ETA (Cisco Encrypted Traffic Analytics), 12
extended nodes, 197, 198
configuration, 200–203
onboarding, 203–205
external connectivity, Cisco SD-Access, 104–105
F
fabric border node
Cisco SD-Access, 29
design considerations, 248–249
fabric edge node, Cisco SD-Access, 29
fabric WAN controller, Cisco SD-Access, 29
fabrics, 82, 112
architecture, 24–25
Cisco Campus Fabric, 25–28
Cisco SD-Access
access points, 89
automated underlay, 84–89
border node, 96–98
device roles, 94–95
edge nodes, 100–102
host onboarding, 105
intermediate nodes, 103–104
manual underlay, 83–84
SSID to IP pool mapping, 108–109
VN to IP pool mapping, 106–108
VNs, 94
control plane, 95–96
design considerations, 248
troubleshooting, 186–187
creation in Cisco SD-Access, 92
edge nodes
displaying ip addresses, 184
troubleshooting, 182–186
encapsulation
LISP, 168–170
VXLAN, 171–172
encapsulation protocols, 167–168
MTU considerations, 172
placement, 93
roles, 170
segmentation outside, 164
VXLAN, 26
FHRPs (first hop redundancy protocols), 13
FiaB (Fabric in a Box) deployment, 227–228
full BYOD (bring your own device), 45
fusion router, 91
G
Golden Image, 68, 84, 262
GRE (Generic Routing Encapsulation), 24–25
group-based access control, 122–126
guest access
Cisco ISE, 38–40
use case, 159–164
GUIs, 62
H
HA (High Availability), Cisco DNA Center, 258
health dashboards (Cisco DNA Assurance), 292–293
Application Health, 299–300
Cisco SD-Access Fabric Network, 296
Client Health, 297–298
Network Health, 294–296
Overall Health, 293
HIPAA (Health Insurance Portability and Accountability
Act), 112
history, of automation tools, 60–62
host onboarding, 128
Cisco DNA Center, 136–137
Cisco SD-Access, 105
Hotspot Guest portal, Cisco ISE, 40
I
IaaS (Infrastructure as a Service), 4, 18
IBN (intent-based networking), 8, 63, 286
problem isolation, 9
IEEE 802.1X, 35–37
endpoint host modes, 128
multi-auth, 129–130
multi-domain, 129–130
multi-host, 128–129
single-host, 128–129
phased deployment, 130–131
closed mode, 134–136
low-impact mode, 133–134
monitor mode (visibility mode), 132–133
IGMP (Internet Group Management Protocol), 209
image repository, Cisco DNA Center, 261
infrastructure services, design considerations, 249
inline tagging, 55–56
insights, 9
integrating, Cisco DNA Center and Cisco ISE (Identity
Services Engine), 116–122
Intelligent Capture, 300–301
intermediate nodes, 103–104
Inventory tool, Cisco DNA Center, 74–77
IoT (Internet of Things), 4, 112
Cisco SD-Access extension, 196–197
extended nodes, 198
configuration, 200–203
onboarding, 203–205
policy extended nodes, 198–199
security, 196
use cases for Cisco SD-Access
hosts communicating with hosts connected outside the
fabric, 205–206
traffic flow within a policy extended node, 207–208
traffic from clients connected to policy extended node,
206–207
traffic to clients connected to policy extended node, 207
IP addresses, displaying in LISP, 184, 185
ip helper-address command, 172
IP multicast. See multicast
IP pools
mapping to SSID, 108–109
mapping to VNs, 106–108
IP transit, 84, 91–92, 229–230
fabric multisite or multidomain, 230–232
IT industry, 22
advances in, 1–2
analytics, 9
automation, 2–3, 7
cloud computing, 18–20
history of automation tools, 60–62
IaaS, 4
IBN, 8
multidomain, 16–18
overlay networks, 24–25
SDN, 3
trends, 4
IWAN (Cisco Intelligent WAN), 63
L
LAN Automation, 84–86
configuration, 87–88
first phase, 86
second phase, 87
large sites, design considerations, 243
latency considerations for Cisco SD-Access, 240–241
Layer 2 networks
border, 221–223
flooding, 218–221
intersite, 224
design and traffic flow, 224–227
Spanning Tree, 13–14
Layer 3 routed access, 14–15, 102
benefits, 15–16
lig (LISP Internet Groper), 186
LISP (Location Identifier Separation Protocol), 24–25, 26,
27, 96, 168–170
IP addresses, displaying, 184
map-register debug, 176
low-impact mode, IEEE 802.1X, 133–134
M
MAB (MAC Authentication Bypass), 35
macro-segmentation, 112, 144–145
malware, 112
manual underlay, Cisco SD-Access, 83–84
manually configuring networks, 7, 14–15
risks of, 2–3
medium sites, design considerations, 243
micro-segmentation, 112, 145–146
monitor mode (visibility mode), IEEE 802.1X, 132–133
MPLS (Multiprotocol Label Switching), 24–25
MTU (maximum transmission unit), 172
multi-auth mode, IEEE 802.1X, 129–130
multicast, 208–209
bidirectional PIM, 210
in Cisco SD-Access
configuration in Cisco DNA Center, 216–218
PIM ASM with head-end replication, 211
PIM SSM with head-end replication, 213–214
delivery modes, 210
fabric native, 214–216
IGMP, 209
PIM sparse-dense mode, 209
PIM-DM, 209
PIM-SM, 209
multidomain, 16–18
multi-domain mode, IEEE 802.1X, 129–130
multi-host mode, IEEE 802.1X, 128–129
multisite Cisco SD-Access transit, 234–237
multisite design, 244–245
N
network access, 34
network access control (NAC), 30, 33, 128
need for, 31
network controllers, 3
Network Health dashboard, Cisco DNA Assurance, 294–
296
network operations workflow, 9
network profiles, 269–270
networks. See also software-defined networking
challenges of traditional implementations, 285–286
corporate access use case, 149–159
guest access use case, 159–164
isolating, 112
planning, 59–60
redundant, 6–7
topologies, 81–82
transit, 91
zero trust, 128
No Authentication template, 137–138
nodes
Cisco SD-Access fabric, 94–95
extended, 197, 198
configuration, 200–203
onboarding, 203–205
policy extended, 198–199
NTP (Network Time Protocol), 90
O
onboarding, extended nodes, 203–205
onboarding templates, 273–274
Open Authentication template, 138–140
Overall Health dashboard, Cisco DNA Assurance, 293
overlay networks, 24–25, 112
design considerations, 247–248
P
Path Trace, 303
PCAPs (anomaly-triggered packet captures), 301
PCI (Payment Card Industry), isolating point-of-sales
machines, 112
phased deployment, IEEE 802.1X, 130, 131
closed mode, 134–136
low-impact mode, 133–134
monitor mode (visibility mode), 132–133
PIM sparse-dense mode, 209
PIM-DM (PIM dense mode), 209
PIM-SM (PIM sparse mode), 209
PKI (Public Key Infrastructure), 114–115
placement, of fabrics, 93
planning, networks, 59–60
PnP (plug and play), 62
Cisco DNA Center, 272–273
claiming devices, 276–279
PnP Agent, 275–276
PoE (Power over Ethernet), 196
point-of-sales machines, isolating, 112
policies, 112, 124
deployment models in Cisco SD-Access distributed
deployment, 238–240
segmentation, 124–126, 148
troubleshooting, 190–191
policy extended nodes, 198–199
policy sets, 146–148
posture checks, 45–48
private key certificates, 115
probes, Cisco ISE, 41, 42–43
problem isolation, 9
profiling, Cisco ISE, 40–41, 43–45
propagation, Cisco TrustSec, 55–57
Provision tool, Cisco DNA Center, 77–78
pull model, 26
pxGrid (Cisco Platform Exchange Grid), 48–49, 113, 115,
120
Personas, 116
R
RADIUS, 37
for Cisco DNA Center, 126–127
and TACACS+, 33
reactive notifications, 9
redundancy, 6–7
REP (Resilient Ethernet Protocol), 199
risks of manually configured networks, 2–3
roles
in Cisco SD-Access, 94–95
fabric, 170
S
SaaS (Software as a Service), 4, 18
scale numbers, Cisco DNA Center, 256
SD-Access transit, 92
SDN (software-defined networking), 3
SD-WAN (Software-Defined WAN), 17, 18
security, 11, 22
BYOD, 31
design considerations, 251–252
IoT, 196
shadow IT, 18
Security Advisories, 283
segmentation, 26, 112
Cisco TrustSec, 54
macro-, 112, 144–145
micro-, 112, 145–146
outside the fabric, 164
policies, 148
segmentation policies, 124–126
Self-Registered Guest portal, Cisco ISE, 40
self-signed certificates, 113
sensor tests, 303–304
sensors, 287
ServiceNOW, 9
SGTs (Scalable Group Tags), 26, 122, 123, 145–146
classification, 55
inline tagging, 55–56
propagation, 55–57
troubleshooting, 191–192
shadow IT, 18
shared services, Cisco SD-Access, 90–91
show authentication sessions command, 189
show authentication sessions interface command, 154
show client detail command, 190
show cts environment-data command, 191–192
show cts rbacl command, 191
show cts role-based permissions command, 156, 163, 191
show device-tracking database command, 182–183
show ip dhcp snooping binding command, 182
show lisp instance-id command, 187
show lisp instance-id ethernet database command, 183
show lisp instance-id ethernet server command, 186–187
show policy-map type control subscriber command, 139,
141
show running config command, 188
show template interface source user command, 139
simple BYOD (bring your own device), 45
single-host mode, IEEE 802.1X, 128–129
single-site design, 244–245
small sites, design considerations, 242
SNMP (Simple Network Management Protocol), 9
software image management, Cisco DNA Center, 259–261
software-defined networking, 22–23
solutions for campus networks, 5–6
SPAN (Switched Port Analyzer), 9
Spanning Tree, 15–16
drawbacks, 13–14
three-tier campus networks, 14
versions, 13
Sponsored-Guest portal, Cisco ISE, 40
SSID, mapping to IP pools, 108–109
SSL (Secure Sockets Layer), 113
standalone deployment, Cisco ISE, 51
STOMP (Simple Text Oriented Message Protocol), 49
storage, multidomain, 16–18
streaming telemetry, 290–292
supplicants, 35, 37
SWIM (Software Image Management), 261
Golden Image, 262
image repository, 261
upgrading devices, 263–266
switchport override, Cisco DNA Center, 109
SXP (SGT Exchange Protocol), 92, 164, 228
sync process, Cisco DNA Center, 74
T
TAC (Cisco Technical Assistance Center), 2–3
TACACS+, 37, 38
and RADIUS, 33
telemetry, traditional versus streaming, 292
templates
assigning, 269–270
Cisco DNA Center, 266–267
creating, 267–269
deploying, 270–272
onboarding, 273–274
three-layer network topology, 82
three-tier campus networks, Spanning Tree, 14
tools
Cisco DNA Assurance
Anomaly Capture, 301–302
Cisco AI Network Analytics, 304–306
Intelligent Capture, 300–301
Path Trace, 303
sensor tests, 303–304
Cisco DNA Center
Command Runner, 281–282
Security Advisories, 283
Topology, 280–281
topologies, LAN Automation, 84–86
configuration, 87–88
first phase, 86
second phase, 87
Topology tool, 280–281
transit networks, 91
IP-based transit, 91–92
SD-Access transit, 92
troubleshooting
Cisco SD-Access, 181–182, 188
authentication, 188–190
fabric control plane, 186–187
fabric edge, 182–186
policy, 190–191
SGTs, 191–192
replicating the issue, 9
trunking, 14–15
U
UADP (Unified Access Data Plane), 287
underlay networks, 24
Cisco SD-Access, 82–83
automated, 84–89
manual, 83–84
design considerations, 246–247
upgrading devices, in Cisco DNA Center, 263–266
V
verifying, Cisco DNA Center and Cisco ISE (Identity
Services Engine) integration, 121–122
very small sites, design considerations, 241–242
VLANs, 14–15, 26
VNs (virtual networks)
macro-segmentation, 144–145
mapping to IP pools, 106–108
micro-segmentation, 145–146
VRF (virtual routing and forwarding), 104–105, 144
VSS (Cisco Virtual Switching System), 7
VXLAN (Virtual Extensible LAN), 26, 168, 171–172
VXLAN-GPO, 26
W
WAN environments, bandwidth, 19
wireless deployments, Cisco DNA Center, 70–72
WLCs (wireless LAN controllers)
Cisco SD-Access, 89
displaying wireless endpoint MAC addresses, 185
write erase command, 203
X-Y-Z
X.509 certificates, 115
YAML (Yet Another Markup Language), 60
zero trust networks, 128
Code Snippets
Many titles include programming code or configuration
examples. To optimize the presentation of these elements,
view the eBook in single-column, landscape mode and adjust
the font size to the smallest setting. In addition to presenting
code and configurations in the reflowable text format, we have
included images of the code that mimic the presentation found
in the print book; therefore, where the reflowable format may
compromise the presentation of the code listing, you will see a
“Click here to view code image” link. Click the link to view
the print-fidelity code image. To return to the previous page
viewed, click the Back button on your device or app.