Mobile First Campus
Validated Reference Architecture
Contents
REVISION HISTORY ........................................................................................................................... 5
INTRODUCTION .................................................................................................................................. 5
DESIGN GOALS ............................................................................................................................... 6
TARGET AUDIENCE ........................................................................................................................ 6
SCOPE ............................................................................................................................................. 6
Reference Material ............................................................................................................................ 7
Related Documents........................................................................................................................... 7
Graphical Icons .............................................................................................................................. 8
Acronym List .................................................................................................................................. 8
SOLUTION BUILDING BLOCKS........................................................................................................ 12
OVERVIEW ..................................................................................................................................... 12
DESIGN PRINCIPLES ...................................................................................................................... 12
MODULAR DESIGNS ..................................................................................................................... 12
CAMPUS ACCESS LAYER ......................................................................................................... 14
CAMPUS AGGREGATION LAYER ............................................................................................. 17
WIRELESS MODULE AGGREGATION LAYER.......................................................................... 24
WIRELESS MODULE REDUNDANCY ........................................................................................ 24
CAMPUS CORE LAYER ............................................................................................................. 28
Additional Design Elements & Considerations ................................................................................ 29
Quality of Service ......................................................................................................................... 29
Routing Protocols ........................................................................................................................ 29
SECURITY & ACCESS CONTROL ............................................................................................. 30
MANAGEMENT ........................................................................................................................... 31
REFERENCE ARCHITECTURE BUILDING BLOCKS.................................................................... 33
SMALL OFFICE ........................................................................................................................... 33
LARGE OFFICE .......................................................................................................................... 50
CAMPUS ..................................................................................................................................... 57
VRD CASE STUDY OVERVIEW........................................................................................................ 74
Headquarters Building ................................................................................................................. 74
Gold River .................................................................................................................................... 77
Squaw Valley ............................................................................................................................... 79
Kirkwood ...................................................................................................................................... 80
Mt. Rose ...................................................................................................................................... 81
DESIGN REQUIREMENTS ............................................................................................................ 83
Availability Requirements............................................................................................................. 83
Core Layer Requirements ............................................................................................................ 83
Aggregation Layer Requirements ................................................................................................ 83
Access Layer Requirements ........................................................................................................ 83
Layer 2 Requirements ................................................................................................................. 83
Routing Requirements ................................................................................................................. 84
Multicast Requirements ............................................................................................................... 84
Administrative Device Access ...................................................................................................... 84
Network Instrumentation & Management ..................................................................................... 84
End User Experience ................................................................................................................... 85
Security ........................................................................................................................................ 85
DESIGN OVERVIEW ...................................................................................................................... 85
SWITCHING ARCHITECTURE ...................................................................................................... 90
ROUTING ARCHITECTURE .......................................................................................................... 94
DATACENTER CONNECTIVITY .................................................................................................... 99
MULTICAST ROUTING ................................................................................................................ 100
Quality of Service .......................................................................................................................... 100
IP ADDRESSING .......................................................................................................................... 101
VLANs ........................................................................................................................................... 104
Mobility Services Block ................................................................................................................. 105
Network Services .......................................................................................................................... 108
Device & Network Management .................................................................................................... 109
Network Instrumentation ............................................................................................................... 110
Network Automation ...................................................................................................................... 110
Other Services .............................................................................................................................. 110
ADAPTING THIS CASE STUDY ...................................................................................................... 111
Port/Slot LAG Interface Diversity .................................................................................................. 111
Incorporating VRFs ....................................................................................................................... 111
Reducing Required Number of Physical Interfaces....................................................................... 111
Dynamic Segmentation ................................................................................................................. 111
ClearPass Configuration ............................................................................................................... 111
Adaptation Summary..................................................................................................................... 112
REVISION HISTORY
INTRODUCTION
The Aruba Mobile First Reference Architecture Validated Reference Design Guide has been prepared to enable the reader to
understand the building blocks of a Mobile First network, including validated device configurations that align with Aruba leading
practices.
The Aruba Mobile First Architecture accelerates innovation in the mobile, IoT and cloud era. It provides a secure, open, and
autonomous network that is policy-driven, API-centric, IoT ready, and automated. These key features all deliver a first-class
user experience and deliver on the promise of any user, any location, same experience.
There are six characteristics of a Mobile First Enterprise Network:
This Validated Reference Design (VRD) document will demonstrate how Aruba’s products are used together to build a Mobile
First Campus network. In this VRD, we will provide guidance on network design for small, medium, and large facilities and then
present a case study for a multi-site Mobile First network using the following Aruba solution building blocks:
• Mobility Controllers
• Access Points
• Switches
• ClearPass
• AirWave
The Aruba Mobile First Architecture was designed with automation integration in mind. The Network Analytics Engine utilizes
the open REST API across the Aruba portfolio to provide a real-time monitoring and alerting system that helps network
operators manage the network and inspect it for anomalies and changes. In designing a Mobile First network, care should be taken
to ensure that plans allow for automation, both for the initial deployment and for the changes that will occur over
the life of the network. The Aruba approach to ensuring that our design allows for automation is called D4AO – Designed for
Automation and Operation.
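As a sketch of the monitoring pattern this enables, the function below polls a REST endpoint for interface error counters and flags anomalies. The endpoint shape and JSON field names (`rx_errors`) are illustrative assumptions, not the actual Network Analytics Engine or ArubaOS-CX API; the HTTP call is injected as a callable so the logic can be demonstrated without a live switch.

```python
import json

def check_interface_errors(fetch_stats, threshold=100):
    """Parse a REST monitoring response (supplied by the injected
    fetch_stats callable) and return the interfaces whose receive
    error counters exceed the threshold. Field names are hypothetical."""
    stats = json.loads(fetch_stats())
    return [name for name, data in stats.items()
            if data.get("rx_errors", 0) > threshold]

# Stubbed JSON standing in for a real REST call to a switch:
sample = json.dumps({
    "1/1/1": {"rx_errors": 0},
    "1/1/2": {"rx_errors": 250},
})
print(check_interface_errors(lambda: sample))  # → ['1/1/2']
```

In a real deployment, the same check would run periodically against the switch's REST API, with the alert feeding a ticketing or notification system rather than `print`.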
DESIGN GOALS
The design prepared in this case study has a target of sub-second failover when network devices or links experience a planned
or unplanned outage. Where possible, the default configuration for protocol timers and settings is used, and tuning of these
values is implemented only when required. The design elements presented in this document can be used to build new
networks as well as to optimize and redesign existing networks. This design document is not exhaustive of all design and
configuration options, but rather is representative of recommended design elements, network devices, and hardware.
TARGET AUDIENCE
This VRD is written for IT Professionals who need to design an Aruba wired and wireless network for a large organization with
multiple sites supporting between 15,000 and 20,000 users/devices. These IT professionals can fill a variety of roles:
• Systems engineers who need a standard set of procedures for implementing solutions.
• Project managers and those who estimate levels of effort to craft statements of work or project plans.
• Aruba partners who sell technology or create implementation documentation.
SCOPE
The Validated Reference Design series documents focus on particular aspects of Aruba technologies and deployment models.
Together these guides provide a structured framework to understand and deploy Aruba Mobile First networks. The VRD series
has four document categories:
• Foundation guides explain the core technologies of an Aruba network. These guides also describe different aspects
of planning, operation, and troubleshooting deployments.
• Base Design guides describe the most common deployment models, recommendations, and configurations.
• Application guides build on the base designs. These guides deliver specific information that is relevant to deploying
particular applications such as voice, video, or outdoor campus extension.
• Specialty Deployment guides involve deployments in conditions that differ significantly from the common base design
deployment models.
Figure 1 - Aruba Reference Architectures
The Campus Validated Reference Design is considered a Base Design guide within the VRD core technology series.
Reference Material
Readers should have a solid working understanding of basic wired and wireless LAN concepts, as well as the Aruba technology
explained in the foundation-level guides, before reading this VRD. The following resources will assist readers who require the
background knowledge necessary to digest this document in the intended manner:
• For information on Aruba Mobility Controllers and deployment models, please refer to the Aruba Mobility Controllers
and Deployment Models Validated Reference Design
• The complete suite of Aruba technical documentation is available for download from the Aruba Support Site. These
documents present complete, detailed feature and functionality explanations beyond the scope of the VRD series. The
Aruba support site is located at:
• For more training on Aruba products or to learn about Aruba certifications, please visit the Aruba Training and
Certification page. This page contains links to class descriptions, calendars, and test descriptions.
• Aruba hosts a user forum site and user meetings called Airheads Community. The forum contains discussions of
deployment best practices, products, and troubleshooting tips. Airheads is an invaluable resource that allows network
administrators to interact with each other and Aruba experts.
Related Documents
The following documents may be helpful as supplemental reference material to this guide:
• ArubaOS 8 User Guide
• ArubaOS 8 CLI Reference Guide
• Aruba Solution Exchange
• ArubaOS 8 Fundamentals Guide
• Aruba Dynamic Segmentation for Wired Networks
• Aruba ClearPass Wired Policy Enforcement
Graphical Icons
Acronym List
Acronym Definition
A-MPDU Aggregated Media Access Control Protocol Data Unit
A-MSDU Aggregated Media Access Control Service Data Unit
AAA Authentication, Authorization, and Accounting
AAC AP Anchor Controller
ACR Advanced Cryptography
AD Active Directory
AP Access Point
API Application Programming Interface
BLMS Backup Local Management Switch
BGP Border Gateway Protocol
BYOD Bring Your Own Device
CoA Change of Authorization
CLI Command Line Interface
CPSec Control Plane Security
CPPM ClearPass Policy Manager
CPU Central Processing Unit
DC Data Center
DNS Domain Name Service
DHCP Dynamic Host Configuration Protocol
DMZ Demilitarized Zone
EAP-PEAP Extensible Authentication Protocol-Protected EAP
EAP-TLS Extensible Authentication Protocol-Transport Layer Security
FQDN Fully-qualified Domain Name
GRE Generic Routing Encapsulation
GUI Graphical User Interface
HA High Availability
HMM Hardware MM
HTTP Hypertext Transfer Protocol
HTTPS HTTP Secure
IP Internet Protocol
IPsec Internet Protocol Security
LMS Local Management Switch
MAC Media Access Control
MC Mobility Controller
MCM Master Controller Mode
MD Managed Device
MD Mobility Device
MM Mobility Master
MM-HW Mobility Master - Hardware
MM-VA Mobility Master – Virtual Appliance
MN Managed Node
NAS Network Access Server
NAT Network Address Translation
NBAPI Northbound Application Programming Interface
OSPF Open Shortest Path First
PAPI Proprietary Access Protocol Interface
PEF Policy Enforcement Firewall
PSK Pre-shared Key
RADIUS Remote Authentication Dial In User Service
RAM Random Access Memory
REST API Representational State Transfer Application Programing Interface
RFP RF Protect
S-AAC Standby AP Anchor Controller
S-UAC Standby User Anchor Controller
SfB Skype for Business
STP Spanning Tree Protocol
SSID Service Set Identifier
UAC User Anchor Controller
UCC Unified Communications and Collaboration
VIP Virtual Internet Protocol address
VLAN Virtual Local Area Network
VM Virtual Machine
VMC Virtual MC
VMM Virtual MM
VPN Virtual Private Network
VPNC Virtual Private Network Concentrator
VRRP Virtual Router Redundancy Protocol
VSF Virtual Switching Framework
WLAN Wireless Local Area Network
WPA2-PSK Wi-Fi Protected Access 2-Pre-Shared Key
XML Extensible Markup Language
ZTP Zero-touch Provisioning
SOLUTION BUILDING BLOCKS
OVERVIEW
This chapter addresses the design decisions and best practices that can be followed to implement an end-to-end Aruba mobile
first architecture for a typical enterprise network. This chapter focuses on architecture design recommendations and explains
the various configurations and considerations that are needed for each architecture. Reference architectures are provided for
small, medium and large buildings as well as large campuses. For each architecture the following topics are discussed:
• Recommended modular local area network (LAN) designs
• Mobility controller cluster placement
• Design considerations and best practices
• Suggested switch and wireless platforms
The information provided in this chapter is useful for network architects responsible for greenfield designs, network administrators
responsible for optimizing existing networks, and network planners requiring a template that can be followed as their
network grows. The scope of this chapter applies to small offices with fewer than 32 Access Points through to large campuses
supporting up to 10,000 Access Points.
DESIGN PRINCIPLES
The foundation of each reference architecture provided in this chapter is the underlying modular local area network (LAN)
design model that separates the network into smaller, more manageable modular components. A typical LAN consists of a set of
common interconnected layers, such as core, aggregation, and access, that form the main network, along with additional
modules that provide specific functions such as Internet, WAN, wireless, and server aggregation.
This modular approach simplifies the overall design and management of the LAN while providing the following benefits:
1. Modules can be easily replicated providing scale as the network grows.
2. Modules can be added and removed with minimum impact to other layers as network requirements evolve.
3. Modules allow the impact of operational changes to be constrained to a smaller subset of the network.
4. Modules provide specific fault domains improving resiliency.
The modular design philosophies outlined in this chapter are consistent with industry leading practices and can be applied to
any size network.
MODULAR DESIGNS
The modular design that is selected for a specific LAN deployment is dependent on many factors. Most networks are built using
either a 2-tier or 3-tier modular design (Figures 5 and 6). These two designs are commonly described as follows:
2-tier Modular Network – Collapses the core and aggregation layers into a single layer. The switches in the core /
aggregation layer perform a dual role by providing aggregation to the access layer as well as to other modules, and perform IP
routing functions.
3-tier Modular Network – Utilizes a dedicated aggregation layer between the core and access layers. The aggregation
layer switches provide aggregation to the access layers and connect directly to the core. For larger networks,
aggregation layer switches are commonly deployed to connect modules such as wireless, WAN, internet edge and
server farms.
A 2-tier modular design is well suited for small buildings with few wiring closets and access switches. The access layer VLANs
are extended between the access layer switches and the core / aggregation layer switches using 802.1Q trunking. The core /
aggregation switches provide layer 3 interfaces (VLAN interfaces or SVIs) for each VLAN and provide reachability to the rest of
the IP network.
In a 3-tier modular design, the IP routing functions are distributed between the core and aggregation layers and may be
extended to the access layers. All 3-tier LAN designs will implement IP routing between the aggregation and core switches
using a dynamic routing protocol such as OSPF for reachability and address summarization:
Layer 2 access layer – All VLANs from the access layer are extended to the aggregation layer switches using 802.1Q
trunking. The aggregation switches provide layer 3 interfaces (VLAN interfaces or SVIs) for each VLAN and provide
reachability to the rest of the IP network.
Routed access layer – IP routing is performed between the aggregation and access layer switches (as well as
between the aggregation and core layers). In this deployment model each access layer switch or stack provides
reachability to the rest of the IP network.
The Aruba Mobile First Architecture supports both designs allowing our customers to leverage the benefits of Aruba solutions
with either network design.
Dynamic Segmentation
Aruba’s ability to assign policy (roles) on the fly to a user or a switch port, based on attributes such as a client’s access method,
time of day, or machine type, is the foundation of Dynamic Segmentation. Dynamic Segmentation gives wired users and devices
the same experience provided to wireless users and devices. No longer do switches need statically
configured ports or complex RADIUS VSAs. Aruba implements ‘Colorless Ports’, in which a port has a basic configuration and
network access is restricted until the network authenticates or profiles the user or port.
Once authenticated or profiled, a ‘role’ is assigned to the user or port. Roles dictate which VLAN is assigned and whether
traffic is locally switched or tunneled back to a Mobility Controller or cluster. Roles are generally aligned to business function
to provide for specialized network access for users and devices within a Mobile First network. Security controls to allow/deny
traffic flows can be applied at the local switch or at the Mobility Controller (with tunneled traffic). Roles can also assign policy
elements including QoS, reauthentication times, and captive portal information.
In the Mobile-First Architecture, it is common to use a tunneling configuration for many device roles. Tunneling allows for the
application of stateful firewall rules via the Mobility Controller on a per-user or per-port basis. Not all traffic must be tunneled
back to a Mobility Controller; rather, key roles such as IoT devices, point-of-sale systems, and guests or specific users can be
tunneled.
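The role-to-treatment mapping described above can be sketched as a simple lookup. The role names, VLAN IDs, and policy strings below are purely illustrative examples for this sketch, not Aruba defaults or recommended values:

```python
from dataclasses import dataclass

@dataclass
class Role:
    vlan: int
    tunneled: bool   # tunnel this role's traffic to the controller cluster?
    policy: list     # simplified stand-in for ACL / QoS elements

# Illustrative roles only — names, VLANs, and policies are examples.
ROLES = {
    "employee": Role(vlan=100, tunneled=False, policy=["permit any"]),
    "iot":      Role(vlan=200, tunneled=True,  policy=["permit mqtt"]),
    "guest":    Role(vlan=300, tunneled=True,  policy=["permit http", "permit https"]),
}
# Colorless-port default: restricted until authentication or profiling succeeds.
RESTRICTED = Role(vlan=999, tunneled=False, policy=["deny any"])

def assign_role(auth_result):
    """The role returned by authentication/profiling determines VLAN,
    switching mode (local vs. tunneled), and policy for the user or port."""
    return ROLES.get(auth_result, RESTRICTED)

print(assign_role("iot").tunneled)   # → True (traffic tunneled to the controller)
print(assign_role("unknown").vlan)   # → 999 (restricted until authenticated)
```

The key point the sketch captures is that the port configuration itself stays generic; everything role-specific is derived from the authentication or profiling result at connect time.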
In planning for Dynamic Segmentation, it is important to understand appropriate scale, as capabilities differ between
switch models. Platform selection is influenced by the number of role elements (ACE and QoS policies). ArubaOS-Switch does
not support enabling both role-based and port-based tunneling on the same switch/stack. Mobility Controller clusters must
also have sufficient capacity and licensing to support tunneling from switches. Mobility Controller scaling is covered in later
sections of this document.

NOTE: ArubaOS-Switch supports a maximum of 32 user tunnels per port, up to a maximum of 1,024 tunnels per
switch/switch stack.
Switch Model | Max Stack Members | Maximum # Access Switch Ports per Stack
Aruba 2930F | 8 (VSF) | 384
Aruba 2930M | 10 | 480
Aruba 3810 Series | 10 | 480
Aruba 5400R Series | 2 (VSF) | 576 (leaving no uplink ports)
Figure 7 - Access Layer Stacking Overview
“Just taking the defaults” is not always an optimal approach to building a well-designed network. With respect to access layer
stacking, Aruba recommends explicitly defining the stack commander and standby roles to best protect the infrastructure should a
device or link failure occur. In a switch stack of three or more members, we recommend assigning these roles to devices
which do NOT have uplinks to the aggregation layer devices. Additionally, when connecting stacking cables between devices,
we recommend a ring topology.
Connectivity to access layer switches should be implemented to provide redundancy. By definition, there will be some level of
oversubscription of bandwidth between the access layer and the aggregation layer. Most access layer switches (or stacks) can
be supported with a pair of 10G links with the goal of being at or below a 20:1 oversubscription ratio. Care should be taken
when exceeding this ratio to ensure that performance is acceptable. Aruba recommends aggregating both links (or more as
needed) into one logical link to provide an active/active path between network devices/services blocks. ArubaOS-Switch uses
the term “Trunk” to refer to two or more physical links combined into one logical link.
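The oversubscription arithmetic above is easy to check. The sketch below computes the worst-case ratio for an access stack, assuming every access port is 1G and fully utilized, which is how the figures in the table below are derived:

```python
def oversubscription_ratio(access_ports, port_speed_gbps, uplink_gbps):
    """Worst-case oversubscription: total access-edge bandwidth
    divided by aggregate uplink bandwidth."""
    return (access_ports * port_speed_gbps) / uplink_gbps

# 8-member 2930M stack: 384 x 1G access ports with 2 x 10G uplinks
ratio = oversubscription_ratio(384, 1, 20)
print(f"{ratio}:1")  # → 19.2:1, at or below the 20:1 design goal
```

Running the same calculation for a 10-member stack with 4 x 10G uplinks (480 ports, 40G) gives 12:1, matching the table.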
Uplink Forwarding Capacity | Switch/Stack | Total Switch Ports | Oversubscription Ratio
20G (2x10G links) | 2930M with 4 members | 192 | 9.6:1
20G (2x10G links) | 2930M with 8 members | 384 | 19.2:1
40G (4x10G links) | 2930M with 10 members | 480 | 12:1
40G (4x10G links) | Fully loaded 5400R | 512 | 12.8:1
Figure 8 - Access Layer Uplink Oversubscription
The diagram below depicts the recommended physical connectivity for a stack containing 4 ArubaOS-Switches.
Aruba 3810 Series 48 48 TBD 1 2
Aruba 5400R Series 48 48 TBD 1 2
Figure 10 - Access Layer PoE Overview
NOTE: Switch product selection must also consider the use of per-port tunneling. Please review the Dynamic Segmentation
information above.
In planning the bandwidth required between the aggregation layer and the core layer, consideration must be given to the
oversubscription ratio in order to properly plan the number of aggregation layer switches (or pairs of switches) as well as uplink port
density for core devices. Most aggregation layer switches (or pairs of switches) can be supported with a pair of 40G links with
the goal of being at or below a 12:1 oversubscription ratio. Aggregation layers that have two layer 3 paths to core devices
can perform equal-cost multipath (ECMP) routing, which can nearly double throughput from the aggregation switch provided that
the destination prefixes have multiple routes of equal cost (or metric). The hashing algorithm used will approach a 50/50 ratio in
distributing flows between layer 3 links. When designing a network for a high degree of fault tolerance, we recommend reducing
the oversubscription ratio to 6:1 when there are multiple aggregation layer device pairs, to protect against significant
throughput loss should a core device fail.
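The near-50/50 flow distribution that ECMP achieves can be illustrated with a toy hash-based path selector. Real switches hash the full 5-tuple in hardware; this sketch hashes only source and destination addresses, which is enough to show the behavior:

```python
import hashlib

def ecmp_next_hop(src_ip, dst_ip, paths):
    """Hash the flow identifier onto one of the equal-cost paths.
    Every packet of a given flow picks the same path, preserving
    in-order delivery per flow."""
    digest = hashlib.sha256(f"{src_ip}->{dst_ip}".encode()).digest()
    return paths[digest[0] % len(paths)]

paths = ["core-1", "core-2"]
counts = {p: 0 for p in paths}
for i in range(1000):  # 1000 distinct client flows toward one server
    src = f"10.1.{i // 250}.{i % 250}"
    counts[ecmp_next_hop(src, "10.2.0.10", paths)] += 1
print(counts)  # the two core uplinks each carry roughly half the flows
```

Because the split is statistical, not exact, real designs plan uplink capacity against the aggregate, not against a perfect 50/50 division.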
The table below illustrates potential configurations for aggregation layer switches. The throughput and port counts are based on
a single switch, and interfaces have been allocated to implement an optimal configuration using either VSX with ArubaOS-CX
devices or VSF with ArubaOS-Switch devices.
Switch Model | Uplink Capacity | Access Ports | Forwarding Capacity | Oversubscription Ratio | Interface Allocation
Aruba 8400 | 20G (2x10G links) | 254 | 2540Gb | 127:1 | 8x 10G modules, 2x 40G modules
Aruba 5406R | 20G (2x10G links) | 46 | 460Gb | 23:1 | 6x 8-port modules
Aruba 5406R | 40G (pair of 2x10G links) | 44 | 440Gb | 11:1 | 6x 8-port modules
Aruba 5406R | 80G (2x40G links)* | 32 | 320Gb | 4:1 | 4x 8-port modules; 2x 6-port modules for uplinks and VSF
Aruba 5406R | 160G (pair of 2x40G links) | 24 | 240Gb | 1.5:1 | 3x 8-port modules; 4x 6-port modules for uplinks and VSF
Aruba 5412R | 20G (2x10G links) | 93 | 930Gb | 46.5:1 | 12x 8-port modules; presuming VSF, 2 links to VSF peer, 1 link for uplink
Aruba 5412R | 40G (pair of 2x10G links) | 91 | 920Gb | 23.25:1 | 12x 8-port modules; presuming VSF, 2 links to VSF peer, 2 links for uplink
Aruba 5412R | 80G (2x40G links)* | 72 | 720Gb | 9:1 | 9x 8-port modules, 2x 2-port 40G modules; presuming VSF, 2x 40G links to VSF peer, 1 link for uplink
Aruba 5412R | 160G (pair of 2x40G links)* | 72 | 720Gb | 4.5:1 | 9x 8-port modules, 2x 2-port 40G modules; presuming VSF, 2x 40G links to VSF peer, 2 links for uplink
In a three-tier model with layer 2 access switches, the Aruba recommended solution to provide connectivity and high
availability is to use ArubaOS-CX switches and leverage Aruba Virtual Switching Extension (VSX). VSX provides for the
aggregation of multiple links from each VSX device to downstream switches as well as for the synchronization of several
configuration elements, including access lists and VLANs.

When building an aggregation layer using VSX, the design must account for uplinks to the core, layer 3 links to the VSX peer device,
the Inter-Switch Link (ISL), and the VSX keepalive link. The ISL should be of equal or greater bandwidth than the uplinks.
Traffic will only traverse the ISL when all layer 3 uplinks from a VSX peer device have failed. This situation is very
unlikely to occur in a well-designed network.
When designing an aggregation layer with the Aruba 8320 switch, which has six 40G interfaces, Aruba recommends the
following connectivity and interface allocation when using 40G interfaces.
If the design requirements can be met using 10G interfaces, Aruba recommends the following connectivity and interface
allocation. Note that the ISL link is designed to use a pair of 40G interfaces even when using 10G uplinks.
ArubaOS-Switch devices deployed in an aggregation role can be physically or virtually stacked together. Aruba 5400R devices
can be configured with VSF, as they do not support backplane stacking. The design must account for uplinks to the core and for
links to the VSF peer device. The VSF peer link should be a LAG with at least two interfaces and a total bandwidth equal to the
sum of the uplink interface bandwidth. Multi-Active Detection (MAD) is a ‘failsafe’ mechanism used in VSF-enabled networks.
The diagram below depicts a VSF configuration with 20G of uplink bandwidth.
NOTE: If you are implementing LLDP-MAD, it is recommended to use an existing network path that is not a direct
connection between the VSF devices. The LLDP-MAD traffic should not follow an east-west path between VSF devices.
Distributed Trunking (DT), a precursor technology to VSF, is an alternative way to provide high availability. DT does not provide
the same features as VSF, such as layer 3 forwarding, and Aruba recommends migrating from DT to VSF. Fundamentally, DT
presents a unified forwarding plane to neighboring devices. DT uses a proprietary protocol that allows two or more aggregated
links to be distributed across two switches to create a link aggregation group called a DT-LAG. The DT-LAGs appear to the
downstream device as if they come from a single device. This allows third-party devices such as switches, servers, or any other
networking device that supports trunking to interoperate seamlessly with the distributed trunking switches. Distributed trunking
provides device-level redundancy in addition to link failure protection.
Each distributed trunk (DT) switch in a DT pair must be configured with a separate ISC link and peer-keepalive link. The
peer-keepalive link is used to transmit keepalive messages when the ISC link is down to determine if the failure is a link-level
failure or the complete failure of the remote peer.
NOTE: DT supports a maximum of two switches and is supported on the 5400R and 3810M platforms.
With a combination of layer 2 and layer 3 services deployed in the aggregation layer, architects should ensure that they
do not exceed layer 2 and layer 3 table sizes. This is of more concern for layer 2 tables, as most campus
networks are unlikely to exceed layer 3 table sizes. Consideration must also be given to dual-stack (IPv4 + IPv6)
environments. The tables below list the validated scale of Aruba switches used to provide aggregation layer services. Aruba
recommends not exceeding 80% of the stated capacity.
Switch Series                           MAC Table   IPv4 ARP   IPv6 ND    Dual-Stack Clients
                                        Size        Entries    Entries    (1 IPv4 ARP + 2 IPv6 ND)
Aruba 3810 Series (16.06)               64,000      25,000     25,000     8,333
Aruba 5400R Series (16.06)              64,000      25,000     25,000     8,333
Aruba 8320 Series (CX 10.1.020,         47,000      47,000     44,000     22,000
  configured in Mobile-First Mode)
Aruba 8400 Series (CX 10.1.020)         64,000      64,000     48,000     32,000
Table 14 - Aggregation Switch Layer-Two Table Sizes
NOTE: Beginning with ArubaOS-CX 10.1, the Aruba 8320 provides two modes of operation that allocate resources to either
a layer-two-focused configuration or a layer-three-focused configuration. The data presented in the table above
are the maximum values for either a layer-two or layer-three configuration.
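As a rough worked example of the sizing guidance above, the sketch below derives a design limit from table capacities using the simplified assumption of one IPv4 ARP entry plus two IPv6 ND entries per dual-stack client, combined with the 80% headroom recommendation. Real platforms may share hardware tables between ARP and ND, so the validated figures in Table 14 take precedence; the function name and input values here are illustrative.

```python
# Hedged sketch: estimate a dual-stack client design limit from table sizes.
# Assumes 1 IPv4 ARP entry + 2 IPv6 ND entries per client (simplified model;
# real platforms may share one host table across ARP and ND).

def dual_stack_capacity(arp_entries, nd_entries, nd_per_client=2, headroom=0.8):
    """Return the recommended design limit of dual-stack clients."""
    raw = min(arp_entries, nd_entries // nd_per_client)
    return int(raw * headroom)

# Example with illustrative table sizes (not vendor-validated limits):
limit = dual_stack_capacity(arp_entries=47_000, nd_entries=44_000)
print(limit)  # 17600, i.e. 80% of min(47,000 ARP, 22,000 ND-limited clients)
```

The same helper can be rerun for each candidate platform to compare design limits side by side.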
Consideration should also be given to the operating system and its default and long-term behavior when performing capacity
planning. Unfortunately, the current state of IPv6 support across operating systems and versions is fragmented. This means a
single IPv6 addressing method will not necessarily support all the IPv6 clients that can connect to a Mobile First network. For
example, if your Mobile First network supports Android devices, you must implement Stateless Address Autoconfiguration
(SLAAC) along with RFC-8106 to provide DNS information to clients. However, RFC-8106 is not supported by older macOS or
Windows operating systems. Therefore, a combination of IPv6 addressing methods must be enabled to ensure all the devices
on the network can obtain the IPv6 addressing and DNS information required to use the network.
Operating System              SLAAC with RFC-8106      Stateless DHCPv6       Stateful DHCPv6
Mac OS X                      Yes (10.11 and above)    Yes (10.7 and above)   Yes (10.7 and above)
Windows 7/8/8.1/10            SLAAC only               Yes                    Yes
Windows 10 Creators Update    SLAAC + RDNSS            Yes                    Yes
iOS                           Yes (11.0 and above)     Yes (4.0 and above)    Yes (4.3.1 and above)
Android                       Yes (5.0 and above)      No                     No
Table 15 - Client IPv6 Addressing Support
NOTE: All information in the table above has been gathered from various sources on the internet.
Initial release support for RDNSS on Apple devices is not well documented, but it has been verified to work on current iOS and
macOS releases.
DHCPv6-enabled and SLAAC-enabled networks behave differently. It is not uncommon for clients in a SLAAC-enabled
environment to have IPv6 privacy extensions enabled and thus have several IPv6 addresses.
The data in the table below was captured after each host had been online for four hours. There are several factors which could
lead to additional addresses being assigned to a device. If possible, data from the current environment should be used to help
construct a baseline to ensure that you have sufficient platform scaling capabilities.
Operating System    IPv4 Addresses    IPv6 Addresses when         IPv6 Addresses when
                                      using DHCPv6                using SLAAC
Mac OS X 10.12      1                 2 (link local, global)      3 (link local, 2 global)
Windows 10          1                 2 (link local, global)      3 (link local, 2 global)
Ubuntu 9            1                 2 (link local, global)      3 (link local, 2 global)
iOS 12 Beta         1                 2 (link local, global)      3 (link local, 2 global)
Android             1                 Not supported (Google       3 (link local, 2 global)
                                      does not support DHCPv6
                                      for address assignment
                                      on Android devices)
Table 16 - IPv4 and IPv6 Addressing of Client Operating Systems
Note that the number of IPv6 addresses per client when using SLAAC is not the maximum number of addresses a client can
use. Rather, this is the observed number of IPv6 addresses after a device power-on/boot followed by four hours of operation.
If both DHCPv6 and SLAAC are configured, some clients will obtain IPv6 addresses from both SLAAC and DHCPv6. Larger
networks designed to support more than 10,000 devices should use care in planning for IPv6.
WIRELESS MODULE AGGREGATION LAYER
For the wireless module, a dedicated aggregation layer will typically be introduced once the number of wireless and dynamically
segmented client host addresses exceeds a specific count. As wireless and dynamically segmented client traffic is tunneled
from the Access Points and access layer switches to the mobility controller cluster, the MAC learning and IP processing
overhead is incurred by the first-hop router for those VLANs. In a 2-tier modular network design this overhead is incurred by the
aggregation / core layer, while in a 3-tier modular network design this overhead is incurred by the core. The addition of a
dedicated wireless module aggregation layer migrates the MAC learning and IP processing overhead from the core to a
dedicated wireless aggregation layer, providing stability, fault isolation and scaling.
As a general best practice, Aruba recommends implementing a dedicated wireless aggregation layer when the total number of
IPv4+IPv6 addresses from both wireless and dynamically segmented clients exceeds 4,096. This recommendation future-
proofs the network and ensures the core layer is not overwhelmed as new classes of devices such as IoT are added to the
network or as IPv6 is introduced, which can double or triple the number of host IP addresses.
How large can a wireless module scale? The answer depends on the scaling capabilities of the aggregation layer switches that
you deploy for the wireless module. Switches are designed to support a specific number of hosts, which includes the necessary
table sizes and processing power to perform layer-two (MAC) and layer-three (ARP+ND) learning and table maintenance.
The latest generation of campus switches from Aruba can comfortably scale to support 64,000 addresses (IPv4 and IPv6). As a
general rule, you should design your network so that the total number of host addresses per wireless module does not exceed
the capacity of your wireless module aggregation switches. Larger wireless networks scaling beyond 64,000 IPv4 or IPv6 host
addresses require additional wireless modules, each consisting of an aggregation layer and mobility controller cluster.
Scaling for native IPv4 deployments is easy to calculate as each host is assigned a single IPv4 address. A wireless module
using Aruba 8400 series aggregation switches can comfortably scale to support 64,000 IPv4-only hosts. The number of hosts
that can be supported for dual-stack (IPv4+IPv6) or native IPv6 deployments is more challenging to calculate, as each IPv6
host can be assigned multiple IPv6 addresses (a link-local address plus one or more global addresses). Therefore, the total
number of IPv6 addresses that are assigned per host determines the maximum overall number of hosts that can be supported
within each wireless module:
Native IPv6 – Assuming each host is assigned one global address, each wireless module can support a
maximum of 48,000 wireless and dynamically segmented hosts.
Dual-Stack – Assuming each host is assigned one IPv4 address and two global IPv6 addresses, each wireless module
can support a maximum of 32,000 wireless and dynamically segmented hosts, as each host will consume one entry per
global address.
NOTE: Strategies and architectures to support networks with ARP and ND requirements beyond the scale data
noted above are discussed in the campus reference architecture section. An Aruba Mobile First architecture can be
scaled to support up to 100,000 clients per Mobility Master by implementing multiple mobility controller clusters,
each with its own aggregation layer.
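The capacity arithmetic above can also be run in the planning direction as a quick feasibility check: given a planned host count and the assumed per-host address consumption (one IPv4 address and two global IPv6 addresses for dual stack), verify that the resulting ARP and ND entries fit the aggregation platform's tables. The table sizes below are illustrative, and this simple model ignores platform-specific table sharing.

```python
# Hedged sketch: does a planned host population fit a wireless module?
# Per-host address counts are assumptions drawn from the design discussion.

def entries_needed(hosts, ipv4_per_host, ipv6_global_per_host):
    """Return (ARP entries, ND entries) a population of hosts will consume."""
    return hosts * ipv4_per_host, hosts * ipv6_global_per_host

def fits(hosts, arp_capacity, nd_capacity,
         ipv4_per_host=1, ipv6_global_per_host=2):
    """True when the planned dual-stack population fits both tables."""
    arp, nd = entries_needed(hosts, ipv4_per_host, ipv6_global_per_host)
    return arp <= arp_capacity and nd <= nd_capacity

# 30,000 dual-stack hosts against illustrative 64k ARP / 48k ND tables:
print(fits(30_000, 64_000, 48_000))  # False: 60,000 ND entries exceed 48,000
print(fits(20_000, 64_000, 48_000))  # True
```

When the check fails, the design options are fewer addresses per host, a larger aggregation platform, or an additional wireless module.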
ArubaOS 8 Clustering – Each Access Point and client establishes a tunnel to a primary and secondary Mobility
Controller within the cluster (see chapter X). This ensures a network path is available to Access Points and clients in
the event of an in-service upgrade or a planned / unplanned Mobility Controller outage.
Device / Link Redundancy – Each Mobility Controller is connected to two Aruba switches supporting network
virtualization function (NVF) in the core or wireless aggregation layer. ArubaOS-CX switches provide NVF with VSX
bundled interfaces for active/active forwarding. ArubaOS-Switch based devices implement NVF via physical stacking,
logical stacking (VSF or DT) and Trunks to bundle interfaces for active/active forwarding. This ensures a network path
is available to the Mobility Controllers, Access Points and clients in the event of a planned or unplanned core or
wireless aggregation switch outage or link failure.
Path Redundancy – Provided using Link Aggregation Control Protocol (LACP), which is part of the IEEE 802.3ad
standard. LACP is an active protocol that allows LAG switch peers to detect whether their peer port and device are operational.
First-Hop Router Redundancy – The network must provide for the continued forwarding of packets during the failure of
the default gateway. This feature is natively provided by Aruba Switches supporting NVF without the need for
implementing first-hop routing redundancy protocols such as VRRP.
For all Mobile First reference architectures, the Mobility Controller ports in the LAG are distributed between pairs of Aruba
switches implementing NVF. The Aruba switches that the Mobility Controllers connect to will depend on the 2-tier or 3-tier
hierarchical network design selected for the deployment and the number of wireless clients that are supported. The Aruba
switches supporting the wireless module can be a stack of Aruba 3810Ms, a pair of Aruba 5400Rs configured for VSF, or a
pair of Aruba 8320s or 8400s configured with VSX.
Figure 17 - Core / Aggregation using Stacking
Figure 18 - Core / Aggregation using Virtual Switching Framework (VSF)
Figure 19 - Core / Aggregation using VSX with Multi-Chassis LAG
The primary function of the campus core layer is to forward packets as quickly as possible. To that end, the design for the core
layer should be layer-three centric with a minimum number of features enabled. One key element which must be included in a
good design is high availability: the core must be reliable and available. A chassis-based device with redundant management
modules and line cards is often the best option, but it is not required. Redundant LAGs to aggregation devices should be used
whenever possible. Triangle topologies between layer-three devices should be used as opposed to square topologies to speed
convergence in the event of a network path or device failure.
• Quality of Service
• Inter-site and Intra-site Routing
Quality of Service
Quality of service (QoS) configurations can be very complex to design, implement and manage. Ultimately, the purpose of
QoS is to manage “unfairness” in the network and help prioritize packets so that business-critical applications are most likely to
perform properly during times of network congestion. In designing a QoS configuration, Aruba recommends the following
guidelines:
1. Classification is very environment specific and must be adjusted for all but the simplest designs.
2. Build the simplest model possible to minimize the challenges of supporting a complex design.
3. Apply or re-mark packets as close to the ingress as possible.
4. Voice traffic should be placed in the highest-priority queue, and the configuration should ensure that these packets are
transmitted before all other queues (strict priority queue).
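Guideline 4 describes strict-priority queuing, which can be illustrated with a toy scheduler (conceptual Python, not switch configuration): the highest-priority queue is always drained before any lower-priority queue is served.

```python
from collections import deque

# Toy strict-priority scheduler: after every transmission we restart from the
# highest-priority queue, so voice always drains before lower queues.

def transmit_order(queues):
    """queues: dict of queue name -> deque of packets, in priority order."""
    sent = []
    while any(queues.values()):
        for name, q in queues.items():  # dicts preserve insertion order
            if q:
                sent.append((name, q.popleft()))
                break  # restart from the highest-priority queue
    return sent

queues = {
    "voice": deque(["v1", "v2"]),
    "business": deque(["b1"]),
    "best_effort": deque(["e1"]),
}
print(transmit_order(queues))
# [('voice', 'v1'), ('voice', 'v2'), ('business', 'b1'), ('best_effort', 'e1')]
```

The sketch also shows the known downside of strict priority: an unpoliced voice queue can starve the other queues, which is why classification (guideline 1) matters.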
Routing Protocols
A question which is frequently asked is ‘which routing protocol should I use?’ There are two viable options for a Mobile First
Reference Architecture (MFRA) network: OSPF and BGP. OSPF is more commonly used in campus environments, but it is
becoming more common to see BGP used within the campus as well. There are advantages and disadvantages to both
protocols. Single-area OSPF designs with several OSPF speakers are generally less complicated to design and support than a
BGP solution. BGP provides much more flexibility and control with respect to prefix advertisement and filtering as compared to
OSPF.
Returning to the ‘which routing protocol should I use’ question, the best answer is to use OSPF and BGP in the right places in
your network. For example, if you have a campus network with 500 OSPF speakers, it is advantageous to build a design with
multiple OSPF “islands” interconnected via BGP peers. This approach would provide several advantages over a pure
OSPF network. It would also require the use of redistribution (and potentially mutual redistribution), which, depending upon the
IP addressing plan, can be very tedious. Starting with the ‘simple is best’ design approach and using OSPF as our routing
protocol, the table below provides guidance as to when designs would likely benefit from using OSPF and BGP.
Aruba’s ClearPass Policy Manager, part of the Aruba 360 Secure Fabric, provides role- and device-based secure network access control for
IoT, BYOD and corporate devices, as well as employees, contractors and guests. With a built-in context-based policy engine, RADIUS,
TACACS+, non-RADIUS enforcement using OnConnect, device profiling, posture assessment, onboarding, and guest access options,
ClearPass is unrivaled as a foundation for network security for organizations of any size. Mobile-First networks leverage ClearPass for
end-user authentication, device authentication/profiling, and administrative authentication to network infrastructure
devices/systems. User roles are configured in ClearPass and then automatically pushed to Mobility Controllers and access switches as
required. User roles can include access lists, QoS configuration elements, VLAN membership and similar elements.
ClearPass provides high availability through a publisher-subscriber model. A simple HA design would include one publisher and two
subscribers. Network devices would be configured to interact with the subscribers, while administrators would perform all configuration
on the publisher. The publisher would then replicate configurations to the subscribers. Design requirements may call for having additional
subscribers in each facility.
1 OSPF in the access layer would only be seen if the network design implements a routed access layer model.
NOTE: ClearPass design and leading practices are beyond the scope of this document. Please review the VRDs and
other documents available on Arubapedia for additional information regarding ClearPass.
MANAGEMENT
AIRWAVE
An end-to-end Mobile-First network leverages AirWave to provide network monitoring and management. AirWave provides
controllability and visibility for wired and wireless devices in any network through a single graphical interface. Key AirWave
features are described below.
Human error is one of the top reasons, if not the top reason, for network outages. The best-designed network, if not implemented
and managed properly, will experience more unplanned outages than a network which was designed and built to be operations
centric. Zero Touch Provisioning (ZTP) is a powerful way to ensure that configurations for devices are deployed automatically
and consistently using customized templates. To deliver on the Mobile First promise of any user, any location, same
experience, we must adopt an approach of designing systems for automation and operation. Aruba calls this methodology D4AO.
D4AO standardizes configurations and provides a solution to manage the ‘network’ as a collection of systems providing a
service rather than a set of access points, switches, and mobility controllers.
Beyond provisioning, AirWave monitors devices via SNMP, SSH, ICMP, and other protocols, giving administrators the
capability to view the performance of their network and clients in real time or historically.
NOTE: The case study presented in this VRD will include configurations built using the D4AO methodology.
NETWORK ANALYTICS ENGINE
The Network Analytics Engine (NAE) is a monitoring and analytics tool that is built into the AOS-CX operating system. Powered
by Python and utilizing the REST API, the Network Analytics Engine allows for constant monitoring for anomalous behavior in
your network, with the capability to automatically alert and take action.
These actions include using REST, using SSH, interacting with syslog events, and even calling custom Python functions. The
REST API is supported across all of the Aruba devices and applications, allowing the Network Analytics Engine to interact with
the Aruba portfolio as well as third-party tools that support REST.
The combination of real-time monitoring and an alerting system enables network operators to be notified immediately when
traffic anomalies occur, and enables actions to be taken without operator intervention. Utilizing its built-in time-series database,
the Network Analytics Engine keeps a history of what it is monitoring. This enables operators to identify trends and traffic
regularities to better identify future anomalies.
As networks become increasingly complex, automation is necessary to help manage and keep tight control over a network and
the devices attached to it. The Network Analytics Engine applies automation in an easily viewable fashion, while
complementing other automation tools through the open REST API.
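The monitor/baseline/alert pattern described above can be sketched independently of the actual NAE agent framework (whose APIs are not reproduced here). The class name, window size and threshold below are illustrative assumptions: keep a rolling history of samples, derive a baseline, and invoke an action callback when a new sample deviates too far from it.

```python
# Illustrative sketch of the monitor / baseline / alert pattern described
# above. This is not NAE agent code; names and thresholds are assumptions.

class AnomalyMonitor:
    def __init__(self, window=5, threshold=2.0, on_alert=print):
        self.history = []           # rolling time-series history
        self.window = window
        self.threshold = threshold  # alert when sample > threshold * baseline
        self.on_alert = on_alert    # action callback (REST call, syslog, ...)

    def sample(self, value):
        """Record one sample; fire the alert callback on a large deviation."""
        # Baseline is the mean of the history collected so far (if any).
        baseline = sum(self.history) / len(self.history) if self.history else None
        self.history = (self.history + [value])[-self.window:]
        if baseline and value > self.threshold * baseline:
            self.on_alert(f"anomaly: {value} vs baseline {baseline:.1f}")
            return True
        return False

alerts = []
mon = AnomalyMonitor(on_alert=alerts.append)
for v in [100, 110, 105, 400]:   # steady traffic, then a spike
    mon.sample(v)
print(len(alerts))  # 1
```

In a real NAE agent the samples would come from monitored switch resources and the callback would trigger a CLI capture, REST call or syslog message; the thresholding logic is the transferable part.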
NETEDIT
NetEdit is an Aruba tool that provides powerful network-wide configuration and conformance services for ArubaOS-CX devices.
Network design in the D4AO model provides key advantages when leveraging the power of NetEdit. For example, having a
uniform VLAN definition for all sites, a common hostname construct, and an address block for device management allows for
creating powerful conformance queries to ensure that the ‘as-designed’, ‘as-implemented’, and current-state configurations are
as expected.
NetEdit provides the ability to compare configuration elements between dynamically defined groups of devices. Over time,
network device configurations change due to new requirements, new technologies, and unplanned changes. Providing the
network administrator a powerful and easy-to-use tool that quickly provides actionable information is a key design goal of
NetEdit. Reducing “configuration drift” leads to better performing and “well behaved” networks – something that is equally
important to network administrators supporting small, medium and large networks.
The D4AO elements that are being incorporated into the case study in this VRD include:
Configuration Element
Hostname                  Hostnames should note the device location, role and a unique identifier.
Management IP Address     Each device should have a unique IP address by which it is managed.
QoS Class & Policy Names  Devices should share common names for QoS configuration elements.
ACLs and ACEs             Devices should share common ACL and ACE entries.
Route maps                Devices should share common route-maps for identical functions. Route-map names
                          should suggest how the route-map is used. For example, a route-map for redistribution
                          of OSPF routes into BGP would be named ‘OSPF->BGP’.
IP Prefix Lists           Common prefix lists should be defined for devices performing the same role (such as
                          WAN edge). Uppercase names are suggested as they stand out when reading a
                          configuration.
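A conformance query such as the hostname rule above can be approximated with a simple pattern check. The `<site>-<role>-<nn>` convention and the role names below are hypothetical examples, not NetEdit syntax; in practice NetEdit's own conformance queries would express the rule.

```python
import re

# Hypothetical hostname convention: <site>-<role>-<nn>, e.g. "HQ-AGG-01".
# The site format and role list are assumptions for illustration only.
HOSTNAME_RE = re.compile(r"^[A-Z0-9]+-(CORE|AGG|ACC)-\d{2}$")

def non_conforming(hostnames):
    """Return hostnames that violate the naming convention."""
    return [h for h in hostnames if not HOSTNAME_RE.match(h)]

devices = ["HQ-CORE-01", "HQ-AGG-01", "bldg2-acc-3", "HQ-ACC-07"]
print(non_conforming(devices))  # ['bldg2-acc-3']
```

Running such a check against an exported device inventory is one way to measure configuration drift between audits.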
Please review the references at the end of this document for additional information about NetEdit.
REFERENCE ARCHITECTURE BUILDING BLOCKS
This section includes Mobile First reference architectures for small, medium and large buildings, as well as campuses consisting
of multiple buildings of different sizes. For convenience, a scenario accompanies each architecture to provide a baseline upon
which the modular network and wireless module design is derived. Each architecture also builds upon the previous design,
adding additional layers as the access layer and client counts increase.
SMALL OFFICE
SCENARIO
The following reference design is for a small office consisting of a single floor. The building includes one main distribution frame
(MDF) / server room and one intermediate distribution frame (IDF) that connects to the MDF using multi-mode fiber. The
building supports up to 150 employees and requires 15 x 802.11ac Wave 2 Access Points to provide full 2.4GHz and 5GHz
coverage.
Building Characteristics:
1 Floor / 20,000 sq. ft. Total Size
150 x Employees / 300 x Concurrent IPv4 Clients
15 x 802.11ac Wave 2 Access Points
1 x Combined Server Room / Wiring Closet (MDF)
1 x Wiring Closet (IDF)
This building implements two wiring closets and therefore does not require an aggregation layer between the core and access
layers. This building will implement a 2-tier modular network design where the access layer switches and modules connect
directly to a collapsed core / aggregation layer (figure 3-0). This 2-tier modular network design can also accommodate small
buildings with a larger square footage and additional floors if required.
The following is a summary of the modular network architecture and design:
LAN Core / Aggregation:
Cluster or stack of switches with mixed ports:
o SFP/SFP+ (Access Layer Interconnects)
o 10/100/1000BASE-T Ports (Module Connectivity)
IP routing
Layer 2 Link Aggregation to Access layer devices and Module Connectivity
LAN Access:
A stack of two or more switches per wiring closet:
o SFP/SFP+ (Core / Aggregation Layer Interconnects)
o 10/100/1000BASE-T with HPE SmartRate (Edge Ports)
Layer 2 Link Aggregation to Core/Aggregation Layer Devices
802.11ac Wave 2 Access Points
NOTE: The number of Access Points required for this hypothetical scenario was estimated based on the building’s square
footage and the wireless density / capacity requirements. For this scenario it was determined that 15 Access Points
would be required, based on each Access Point providing 1,200 sq. ft. of coverage and supporting 30 clients.
The actual number of Access Points and their placement for a real deployment should be determined using a site
survey factoring in each individual coverage area’s density requirements.
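The rule of thumb in the note above (coverage area per AP plus clients per AP) can be sketched as a first-pass estimator; the inputs are illustrative and, as the note says, a site survey determines the actual count and placement.

```python
import math

# Rule-of-thumb AP count estimator: take the larger of the coverage-driven
# and capacity-driven counts. Default figures are illustrative assumptions;
# a site survey determines the real count.

def estimate_aps(area_sqft, clients, sqft_per_ap=1_200, clients_per_ap=30):
    by_coverage = math.ceil(area_sqft / sqft_per_ap)
    by_capacity = math.ceil(clients / clients_per_ap)
    return max(by_coverage, by_capacity)

print(estimate_aps(36_000, 600))  # 30: coverage needs 30 APs, capacity needs 20
```

The estimator is deliberately conservative in one dimension only; it knows nothing about walls, RF interference or high-density areas, which is exactly what the survey adds.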
CONSIDERATIONS & BEST PRACTICES
This section provides a list of key design and implementation considerations for this reference design.
LOCAL AREA NETWORK
The small building in this scenario has a two-tier network with a collapsed core/aggregation design. The core/aggregation
switch could be a single device or a stack of two or more switches (virtual stack or backplane stack). The core/aggregation
switches will connect to the access switches with layer-two links and will provide any and all routing between VLANs. It is likely
that there will be no more than four VLANs (one for device management, one for users, one for building management, and
one for security cameras). The core switch can provide connectivity to an optional switch stack for any local compute resources
(computer room stack). The “size” of the small building may not warrant having any in-building local compute or a dedicated
computer room switch.
The recommended core/aggregation design calls for switch redundancy, which can be achieved using either ArubaOS-CX or
ArubaOS-Switch devices. It is likely that a small building will use ArubaOS-Switch devices, with redundancy implemented via
backplane stacking or the Virtual Switching Framework (VSF).
The recommended access switch design is to use one or more switches per IDF in a stacking configuration. The switch
stacks would need to provide enough power for access points and other PoE devices, as well as enough Ethernet
interfaces for wired systems. Stacking is recommended to build fault-tolerant designs so that if one switch is offline, there is still
connectivity to the access points and the building core/aggregation switches. Connectivity to the core/aggregation layer would be
provided by two 10G ports (using ports from different switches in the stack) configured in a LAG/MCLAG.
The optional computer room switch has similar redundancy considerations as the core/aggregation switch stack. While we
don’t need to plan for PoE, we do need to consider the impact of a computer room switch outage. In most cases, the cost of an
outage is sufficient that having redundant computer room switches is highly desirable. This is especially true if the devices
which will be connected to the switches can be dual-attached, as dual-attachment minimizes the impact of a switch failure.
Table 7 below provides a summary of the applicable LAN considerations and best practices that should be considered for a 2-
tier modular network design:
2 UDLD should only be used on 1G links, as 10G natively includes these services/functions.
Bidirectional Forwarding Detection (BFD)    Potentially    Potentially
Power over Ethernet                         No             Yes
Table 8 below summarizes the device roles and provides general guidance on the number of devices recommended as part of
the Mobile-First Reference Architecture.
While the number of required 802.11ac Wave 2 Access Points for this design is small, Aruba recommends implementing a
Mobility Master (MM) to take advantage of specific features that are required to provide mission-critical wireless services when
wireless is the primary access medium. The addition of a Mobility Master to the design provides centralized configuration and
monitoring, supports features including clustering, AirMatch and Live Upgrades, and provides centralized application
support (UCC and AppRF).
NOTE: While a controller-based solution can be deployed without a Mobility Master (MM), it is not a recommended best
practice.
REDUNDANCY
Redundancy for a small building reference architecture is provided across all layers. The redundancy built into the 2-tier
modular network design that establishes the foundation network determines the level of redundancy that is provided to the
modules. Often the cost of an outage is the key driver in developing an approach/plan to provide network redundancy. As a
first line of defense, most small networks use dual power supplies and often use a stack of switches to provide redundancy.
For this scenario the Mobility Master and mobility cluster members are deployed within a server room and connect directly to
the core / aggregation switches. To provide full redundancy, two virtual Mobility Masters and one cluster of hardware or virtual
Mobility Controllers are required:
Aruba Mobility Master (MM):
o Two virtual MMs
o L2 master redundancy (Active / Standby)
Hardware Mobility Controllers (MCs):
o Single cluster of hardware MCs
o Minimum of two cluster members
Virtual Mobility Controllers (MCs):
o Single cluster of virtual MCs
o Minimum of two cluster members
o Separate virtual server hosts
Access Points
o AP Master pointing to the cluster’s VRRP VIP
o Fast failover using cluster built-in redundancy
Figures 16 and 17 provide detailed examples of how the virtual and hardware cluster members are connected to the core /
aggregation layer. Hardware Mobility Controllers are directly connected to the core / aggregation layer switches via two or more
1 Gigabit Ethernet ports configured in a LAG group, with the LAG port members distributed between core / aggregation layer
stack members.
Figure 24 - Hardware Mobility Controller Cluster – Core / Aggregation Layer
Virtual Mobility Controllers are logically connected to a virtual switch within the virtual server host. The virtual server host is
directly connected to the core / aggregation switches via two or more 1 Gigabit or 10 Gigabit Ethernet ports implementing
802.3ad link aggregation or a proprietary load-balancing / failover mechanism, with each port distributed between core /
aggregation layer switch stack members.
The Mobility Master(s) are deployed in a similar manner to the cluster of virtual Mobility Controllers, with each virtual server
host supporting one virtual Mobility Master operating in active / standby mode. While a small building can elect to implement a
single Mobility Master, no additional licenses are required to implement a standby; the only overhead is the additional
CPU, memory and storage utilization on the virtual server host.
NOTE: Redundancy for virtual servers is hypervisor dependent. To protect against link, path and node failures, the
hypervisor may implement 802.3ad link aggregation or a proprietary load-balancing / failover mechanism.
Figure 26 - Hardware Mobility Controller Cluster – VLANs
As a best practice, Aruba recommends implementing unique VLAN IDs within the wireless module. This allows an
aggregation layer to be introduced in the future without disrupting the other layers within the network. It also allows for
smaller layer-two domains, which is key to preventing layer-two instability due to operational changes, loops, or
misconfigurations originating in other layers or modules from impacting the wireless module.
SCALING & PLATFORM SUGGESTIONS
Table 10 provides platform suggestions for the small building scenario, which is to support 15 Access Points and 300 concurrent
clients. Where appropriate, a good, better and best suggestion is made based on features, performance and scaling. These are
suggestions based on the described scenario and may be substituted at your own discretion.
802.11ac Wave 2 Access Points    300 Series (good)    310 Series (better)    330/340 Series (best)
MEDIUM OFFICE
SCENARIO
The following reference design is for a medium office consisting of six floors. The building includes a data center which
connects via single-mode fiber to a main distribution frame (MDF) on each floor. Each floor includes two intermediate
distribution frames (IDFs) which connect to the MDF via multi-mode fiber. The building supports up to 1,500 employees and
requires 120 x 802.11ac Wave 2 Access Points to provide full 2.4GHz and 5GHz coverage.
Building Characteristics:
6 Floors / 150,000 sq. ft. Total Size
1,500 x Employees / 3,000 x Concurrent IPv4 Clients
120 x 802.11ac Wave 2 Access Points
1 x Computer Room
1 x MDF per floor (6 total)
2 x IDFs per floor (12 total)
As this building implements a structured wiring design using MDFs and IDFs, an aggregation layer to connect the access layer
is required. This building will implement a 3-tier modular network design where the access layer switches connect via
aggregation layer switches in each MDF that connect directly to the core (figure 3-5). For scaling, aggregation and fault domain
isolation, this modular network design also includes an additional aggregation layer for the computer room.
The following is a summary of the modular network architecture and design:
LAN Core:
A cluster of switches with fiber ports:
o SFP/SFP+/QSFP+ (Aggregation Layer Interconnects)
o SFP/SFP+ (Module Connectivity)
IP routing to Aggregation Layer Devices and Modules
LAN Aggregation:
A stack of two switches with fiber ports per MDF:
o SFP/SFP+/QSFP+ (Core and Access Layer Interconnects)
IP routing to Core Layer Devices
Layer 2 Link Aggregation to Access Layer Devices
LAN Access:
A stack of two or more switches per MDF and IDF:
o SFP/SFP+ (Aggregation Layer Interconnects)
o 10/100/1000BASE-T with PoE+ (Edge Ports)
Layer 2 Link Aggregation to Aggregation Layer Devices
802.11ac Wave 2 Access Points
Figure 29 - Medium Office – 3-Tier Modular Network Design
NOTE: The number of Access Points required for this hypothetical scenario was calculated based on the building’s square
footage and the wireless density / capacity requirements. For this scenario it was determined that 120 Access Points
would be required, based on each Access Point providing 1,200 sq. ft. of coverage and supporting 30 clients.
The actual number of Access Points and their placement for a real deployment should be determined using a site
survey factoring in each individual coverage area’s density requirements.
LOCAL AREA NETWORK
The medium building in this scenario has a three-tier network providing dedicated access, aggregation, and core layers. The
aggregation layer consists of two pairs of switches, with each pair providing redundant connectivity to both the core and access
layers. The core layer will consist of a pair of devices to provide redundancy and eliminate single points of failure. Product
selection will determine the options available to implement HA configurations. The aggregation switches will connect to the
access switches with layer-two links and will provide any and all routing between VLANs. It is likely that there will be no more
than four VLANs (one for device management, one for users, one for building management, and one for security cameras).
The aggregation layer switches will connect to the core switches via layer-three links. The core switch can provide connectivity
to an optional switch stack for any local compute resources (computer room stack). The “size” of the medium building may not
warrant having any in-building local compute or a dedicated computer room switch.
The recommended switch redundancy design can be achieved using either ArubaOS-CX or ArubaOS-Switch devices. It is
likely that a medium building will use ArubaOS-Switch devices and redundancy would be implemented via backplane stacking
or the Virtual Stacking Framework.
The recommended access switch design would be to use one or more switches per IDF in a stacking configuration. The switch
stacks would need to provide enough power for access points and other PoE devices as well as provide enough Ethernet
interfaces for wired systems. Stacking is recommended to build fault-tolerant designs so that if one switch is off-line, there is still
connectivity to access points and the building core/aggregation switches. Connectivity to the aggregation layer would be
provided by two 10G ports (using ports from different switches in the stack) configured in an aggregated group (“Trunk” or
VSX/MCLAG).
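The stack-aware uplink selection described above can be sketched as follows. The stack member numbers and port names are hypothetical, and the logic is only a model of the rule that LAG members should come from different switches in the stack:

```python
def pick_uplink_ports(stack_ports, count=2):
    """Choose LAG members from distinct switches in the stack so the
    aggregation uplink survives a single stack-member failure."""
    chosen, used = [], set()
    for member, port in stack_ports:
        if member not in used:            # at most one uplink per stack member
            chosen.append((member, port))
            used.add(member)
        if len(chosen) == count:
            return chosen
    raise ValueError("not enough distinct stack members for a resilient LAG")

# Hypothetical 10G ports on a two-member stack:
uplinks = pick_uplink_ports([(1, "1/49"), (1, "1/50"), (2, "2/49"), (2, "2/50")])
```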
The optional computer room switch has similar redundancy considerations as the aggregation switch stack. While we don’t
need to plan for PoE, we do need to consider the impact of a computer room switch outage. In most cases, the cost of an
outage is sufficient that having redundant computer room switches is highly desirable. This is especially true when the devices
connected to the switches can be dual-attached, as dual-attachment minimizes the impact of a switch failure.
The table below provides a summary of the applicable LAN considerations and best practices that should be considered for a 3-
tier modular network design:
(3) UDLD is applicable to 1Gb links
Power over Ethernet: No / No / Yes
Bidirectional Forwarding Detection (BFD): Potentially / No / No
Figure 30 - LAN Considerations & Best Practices
REDUNDANCY
Redundancy for a medium building reference architecture is provided across all layers. The redundancy built into the 3-tier
modular network design that establishes the foundation network determines the level of redundancy that is provided to the
modules. Aruba recommends using NVF functions (stacking or MCLAG/VSX) to provide network redundancy as well as using
redundant links and power supplies to maximize network availability and resiliency.
For this scenario the mobility master and mobility cluster members are deployed within a computer room and connect directly to
the core or computer room aggregation switches. To provide full redundancy, two virtual mobility masters and one cluster of
hardware or virtual mobility controllers are required:
Aruba Mobility Master (MM):
o Two virtual MMs
o L2 master redundancy (Active / Standby)
Hardware Mobility Controllers (MCs):
o Single cluster of hardware MCs
o Minimum of two cluster members
Virtual Mobility Controllers (MCs):
o Single cluster of virtual MCs
o Minimum of two cluster members
o Separate virtual server hosts
Access Points
o AP Master pointing to the cluster's VRRP VIP
o Fast failover using cluster built-in redundancy
The figures below provide detailed examples for how the virtual and hardware cluster members are connected to their
respective layers. Hardware mobility controllers are directly connected to the core layer switches via two or more 1 Gigabit or
10 Gigabit Ethernet ports configured in a LAG group, with the LAG port members distributed between redundant core /
aggregation switches.
Virtual Mobility Controllers are logically connected to a virtual switch within the virtual server host. The virtual host server is
directly connected to the computer room aggregation switches via two or more 1 Gigabit or 10 Gigabit Ethernet ports
implementing 802.3ad link aggregation or a proprietary load-balancing / failover mechanism, with each port distributed
between redundant computer room aggregation switches.
The mobility master(s) are deployed in a similar manner to the cluster of virtual mobility controllers. Each virtual server host
supports one virtual mobility master operating in an active / standby mode.
NOTE: Redundancy for virtual servers is hypervisor dependent. To protect against link, path and node failures, the hypervisor may implement 802.3ad link aggregation or a proprietary load-balancing / failover mechanism.
SCALABILITY
For this scenario there are no specific LAN scalability considerations that need to be made. The core, aggregation and access
layers can easily accommodate the Access Points (APs) and client counts without modification or deviation from the design. A
wireless aggregation layer can be added in the future as additional APs and clients are added to the network.
Wireless module scaling is also not a concern as the mobility masters can be expanded and additional cluster members added
over time to accommodate additional APs, clients and switching capacity as the network grows.
For this medium building design Aruba recommends implementing the MM-VA-500 mobility master and a cluster of two or more
hardware or virtual mobility controllers (see platform suggestions). The mobility master selected for this design can scale to
support 500 x APs, 5,000 x clients and 50 x mobility controllers.
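The MM-VA-500 selection can be validated against the scaling limits quoted above with a simple check. This is a sketch: the 120-AP and 2-controller counts come from the scenario, while the 3,600-client figure is an illustrative assumption (120 APs x 30 clients).

```python
# Published MM-VA-500 limits quoted in the text above.
MM_VA_500 = {"aps": 500, "clients": 5_000, "controllers": 50}

def fits_mobility_master(aps, clients, controllers, limits):
    """True if the deployment stays within the mobility master's
    published scaling limits."""
    return (aps <= limits["aps"]
            and clients <= limits["clients"]
            and controllers <= limits["controllers"])

ok = fits_mobility_master(aps=120, clients=3_600, controllers=2, limits=MM_VA_500)
```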
VIRTUAL LANS
For this design the core or computer room aggregation layer terminates all the VLANs from the mobility controllers. The VLANs
are extended from the mobility controllers to the core or computer room aggregation layer using 802.1Q trunking. Aruba
recommends using tagged VLANs wherever possible to provide additional loop prevention.
The wireless module consists of one or more user VLANs depending on the security and policy model that is implemented. For
a single VLAN design, all wireless and dynamically segmented clients are assigned to a common VLAN id with roles and
policies determining the level of access each user is provided on the network. The single VLAN is extended from the core or
computer room aggregation layer switches to each physical or virtual mobility controller cluster member. Additional VLANs can
be added and extended as required (Figures 14 and 15). For example, your mobile first design may require separate VLANs to
be assigned to wireless and dynamically segmented clients for policy compliance.
At a minimum two VLANs are required between the core or computer room aggregation layer and each mobility controller
cluster member. One VLAN is dedicated for management and Mobility Master (MM) communications while the second VLAN
is mapped to clients. All VLANs are common between cluster members to permit seamless mobility. The core or computer room
aggregation layer switches have VLAN based IP interfaces defined and operate as the default gateway for each VLAN. First-
hop router redundancy is natively provided by the Aruba stacking architecture.
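The minimum VLAN layout described above can be sketched as a small helper. The VLAN ids and cluster member names are hypothetical; the point illustrated is that every VLAN must be tagged identically to every cluster member for seamless mobility.

```python
def cluster_vlan_plan(mgmt_vlan, client_vlans, members):
    """Trunk (802.1Q tag) the management VLAN and every client VLAN to
    every cluster member -- identical VLAN sets on all members are what
    make seamless mobility between them possible."""
    vlans = sorted([mgmt_vlan, *client_vlans])
    return {member: vlans for member in members}

# Hypothetical ids: VLAN 10 for management/MM, VLAN 100 for clients.
plan = cluster_vlan_plan(mgmt_vlan=10, client_vlans=[100], members=["mc1", "mc2"])
```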
Figure 34 - Hardware Mobility Controller Cluster – VLANs
As a best practice Aruba recommends implementing unique VLAN ids within the wireless module. This allows for an
aggregation layer to be introduced in the future without disrupting the other layers within the network. This also allows for
smaller layer 2 domains, which is key to preventing instability caused by operational changes, loops, or misconfigurations
in other layers or modules of the network from impacting the wireless module.
802.11ac Wave 2 Access Points: 300 Series / 310 Series / 330/340 Series
LARGE OFFICE
SCENARIO
The following reference design is for a large office consisting of 12 floors. The building includes a data center which connects
via single-mode fiber to a main distribution frame (MDF) on each floor. Each floor includes two intermediate distribution
frames (IDFs) which connect to the MDF via multi-mode fiber. The building supports up to 3,000 employees and requires 300 x
802.11ac Wave 2 Access Points to provide full 2.4GHz and 5GHz coverage.
Building Characteristics:
12 Floors / 360,000 sq. ft. Total Size
3,000 x Employees / 6,000 x Concurrent IPv4 Clients
300 x 802.11ac Wave 2 Access Points
1 x Computer Room
1 x MDF per floor (12 total)
2 x IDFs per floor (24 total)
As this building implements a structured wiring design using MDFs and IDFs, an aggregation layer to connect the access layer
is required. This building will implement a 3-tier modular network design where the access layer switches connect via
aggregation layer switches in each MDF that connect directly to the core (figure 3-10). For scaling, aggregation and fault
domain isolation – this modular network design also includes additional aggregation layers for the computer room and
wireless modules.
The following is a summary of the modular network architecture and design:
LAN Core:
A pair of redundant switches with a mix of 10G and 40G fiber ports:
o SFP/SFP+/QSFP+ (Aggregation Layer Interconnects)
IP routing to Aggregation Layer Devices and Modules
Optional NVF Functions (MCLAG/VSX)
LAN Aggregation:
A stack of two switches with fiber ports per MDF:
o SFP/SFP+/QSFP+ (Core and Access Layer Interconnects)
NVF Functions (MCLAG/VSX)
IP routing to Core Layer Devices
LAN Access:
A stack of two or more switches per MDF and IDF:
o SFP/SFP+ (Aggregation Layer Interconnects)
o 10/100/1000BASE-T with PoE+ (Edge Ports)
Layer 2 Link Aggregation to Aggregation Layer Devices
802.11ac Wave 2 Access Points
Figure 37 - Large Office – 3-Tier Modular Network Design
NOTE: The number of Access Points required for this hypothetical scenario was calculated based on the building's square footage and the wireless density / capacity requirements. For this scenario it was determined that 300 x Access Points would be required, based on each Access Point providing 1,200 sq. ft. of coverage and supporting 30 clients.
The actual number of Access Points and their placement for a real deployment should be determined using a site survey factoring in each individual coverage area's density requirements.
CONSIDERATIONS & BEST PRACTICES
This section provides a list of key design and implementation considerations for this reference design.
LOCAL AREA NETWORK
The large building in this scenario has a three-tier network providing dedicated access, aggregation, and core layers. The
wireless network also has a dedicated service block providing connectivity for the mobility controller cluster. The aggregation
layer consists of two pairs of switches, with each pair providing redundant connectivity for connections to both the core and
access layers. The core layer will consist of a pair of devices to provide redundancy and eliminate single points of failure. In a
large building, it is very likely that an ArubaOS-CX based switch will be used for both core and aggregation devices. The
aggregation switches will connect to the access switches with layer-two links and will provide any and all routing between
VLANs. It is likely that there will be no more than 4 VLANs (one for device management, one for users, one for building
management, and one for security cameras). The aggregation layer switches will connect to the core switches via layer 3 links.
The core switch will also provide connectivity to other service blocks, which likely includes connectivity to data-center, internet
edge, and WAN edge service blocks.
With a core layer that is entirely layer 3 connected, the network can leverage equal cost multipath routing to provide for
connectivity between core devices as well as to aggregation and other service blocks. Eliminating layer 2 protocols from the
core configuration will ensure that the core is focused on high-speed layer 3 packet forwarding.
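The equal cost multipath behavior described above can be illustrated with a simple sketch. Real switches perform this hashing in hardware and the exact hash is platform-specific; this model only shows the key property: every packet of one flow takes the same path, while distinct flows spread across the equal-cost links.

```python
import hashlib

def ecmp_next_hop(flow, next_hops):
    """Pick one of the equal-cost next hops by hashing the 5-tuple, so a
    single flow's packets stay ordered on one path while different flows
    are distributed across the links."""
    key = "|".join(map(str, flow)).encode()
    digest = hashlib.sha256(key).digest()
    return next_hops[int.from_bytes(digest[:4], "big") % len(next_hops)]

# Hypothetical flow: src, dst, protocol, src port, dst port.
flow = ("10.1.10.5", "10.2.20.8", 6, 51514, 443)
path = ecmp_next_hop(flow, ["core-1", "core-2"])
```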
The aggregation layer will provide high-availability using VSX. VSX will allow for the elimination of spanning-tree at the
aggregation layer and will allow for active/active forwarding to/from access-layer devices. VSX Config Sync will also be
leveraged to ensure that device pairs have identical configuration elements such as access-lists and VLANs.
The recommended access switch design would be to use one or more switches per IDF in a stacking configuration. The switch
stacks would need to provide enough power for access points and other PoE devices as well as provide enough Ethernet
interfaces for wired systems. Stacking is recommended to build fault-tolerant designs so that if one switch is off-line, there is still
connectivity to access points and the building core/aggregation switches. Connectivity to the aggregation layer would be
provided by two 10G ports (using ports from different switches in the stack) configured in an aggregated group (“Trunk” or
VSX/MCLAG).
Large facilities may have an on-site data center. Data center design is beyond the scope of this document; however,
connectivity from the campus network to the data-center will be included in this design. Fundamentally, connectivity to the data-
center has very similar requirements to connectivity to other service blocks, including the WAN edge or the internet edge.
The table below provides a summary of the applicable LAN considerations and best practices that should be considered for a 3-
tier modular network design:
Management: Yes / Yes / Yes
Power over Ethernet: No / No / Yes
Bidirectional Forwarding Detection (BFD): Potentially / No / No
Figure 38 - LAN Considerations & Best Practices
REDUNDANCY
Redundancy for a large building reference architecture is provided across all layers. The redundancy built into the 3-tier
modular network design that establishes the foundation network determines the level of redundancy that is provided to the
modules. Aruba recommends using NVF functions (stacking or MCLAG/VSX) to provide network redundancy as well as using
redundant links and power supplies to maximize network availability and resiliency. The Aruba 8400 provides the maximum
redundancy of any Aruba Switch and is recommended for use in the Core, Aggregation, and Wireless Aggregation layers.
For this scenario the mobility master and mobility cluster members are deployed within a computer room and connect directly to
the wireless aggregation or computer room aggregation switches. To provide full redundancy, two hardware or virtual mobility
masters and one cluster of hardware or virtual mobility controllers is required:
Aruba Mobility Master (MM):
o Two hardware or virtual MMs
o L2 master redundancy (Active / Standby)
Hardware Mobility Controllers (MCs):
o Single cluster of hardware MCs
o Minimum of two cluster members
Virtual Mobility Controllers (MCs):
o Single cluster of virtual MCs
o Minimum of two cluster members
o Separate virtual server hosts
Access Points
o AP Master pointing to the cluster's VRRP VIP
o Fast failover using cluster built-in redundancy
Figures 26 and 27 provide detailed examples for how the virtual and hardware cluster members are connected to their respective
layers. Hardware mobility controllers are directly connected to the core layer switches via two or more 10 Gigabit Ethernet ports
configured in a LAG group, with the LAG port members distributed between redundant wireless aggregation switches.
Virtual Mobility Controllers are logically connected to a virtual switch within the virtual server host. The virtual host server is
directly connected to the computer room aggregation switches via two or more 10 Gigabit Ethernet ports implementing 802.3ad
link aggregation or a proprietary load-balancing / failover mechanism, with each port distributed between redundant computer
room aggregation switches.
The mobility master(s) are deployed in a similar manner to the cluster of virtual mobility controllers. Each virtual server host
supports one virtual mobility master operating in an active / standby mode.
NOTE: Redundancy for virtual servers is hypervisor dependent. To protect against link, path and node failures, the hypervisor may implement 802.3ad link aggregation or a proprietary load-balancing / failover mechanism.
SCALABILITY
To accommodate the requirement to support 6,000 x wireless IPv4 hosts on the network, a wireless aggregation layer is
included in the design. As a general best practice Aruba recommends a wireless aggregation layer once the IPv4+IPv6 host
count exceeds 4,094. The wireless aggregation layer is needed if hardware mobility controllers are deployed and is connected
directly to the core layer. If virtual mobility controllers are deployed – the computer room aggregation switches provide this
function.
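The 4,094-host rule of thumb above reduces to a simple check, sketched here with the scenario's numbers:

```python
def needs_wireless_aggregation(ipv4_hosts, ipv6_hosts=0, threshold=4_094):
    """Aruba's rule of thumb: add a dedicated wireless aggregation layer
    once the combined IPv4 + IPv6 host count exceeds 4,094."""
    return ipv4_hosts + ipv6_hosts > threshold

needs_wireless_aggregation(6_000)   # large office scenario: aggregation layer required
```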
Future scaling is not a concern as the mobility masters can be expanded and additional cluster members added over time to
accommodate additional APs, clients and switching capacity as the network grows. For this large building design, Aruba
recommends implementing the MM-HW-5K or MM-VA-5K mobility master and a cluster of two or more hardware or virtual
mobility controllers (see platform suggestions). The mobility master selected for this design can scale to support 5,000 x APs,
50,000 x clients and 500 x mobility controllers.
VIRTUAL LANS
For this design the wireless module aggregation layer terminates all the layer 2 VLANs from the mobility controllers. The VLANs
are extended from the mobility controllers to its respective aggregation layer switches using 802.1Q trunking. Aruba
recommends using tagged VLANs wherever possible to provide additional loop prevention.
The wireless module consists of one or more user VLANs depending on the security and policy model that is implemented. For
a single VLAN design, all wireless and dynamically segmented clients are assigned to a common VLAN id with roles and
policies determining the level of access each user is provided on the network. The single VLAN is extended from the respective
aggregation layer switches to each physical or virtual mobility controller cluster member. Additional VLANs can be added and
extended as required (figures 19 and 20). For example your mobile first design may require separate VLANs to be assigned to
wireless and dynamically segmented clients for policy compliance.
At a minimum two VLANs are required between the respective aggregation layer and each mobility controller cluster member.
One VLAN is dedicated for management and Mobility Manager (MM) communications while the second VLAN is mapped to
clients. All VLANs are common between cluster members to permit seamless mobility. The aggregation layer switches have
VLAN based IP interfaces defined and operate as the default gateway for each VLAN. First-hop router redundancy is natively
provided by the Aruba clustering or stacking architecture.
Figure 42 - Hardware Mobility Controller Cluster – VLANs
As a best practice Aruba recommends implementing unique VLAN ids within the wireless module. This allows for an
aggregation layer to be introduced in the future without disrupting the other layers within the network. This also allows for
smaller layer 2 domains, which is key to preventing instability caused by operational changes, loops, or misconfigurations
in other layers or modules of the network from impacting the wireless module.
SCALING & PLATFORM SUGGESTIONS
Table 30 provides platform suggestions for the large building scenario, which is to support 300 x Access Points and 6,000 x
concurrent clients. Where appropriate, a good, better and best suggestion is made based on feature, performance and scaling.
These are suggestions based on the described scenario and may be substituted at your own discretion.
802.11ac Wave 2 Access Points: 300 Series / 310 Series / 330/340 Series
CAMPUS
The following reference design is for a campus which consists of multiple buildings (different sizes) and two datacenters. Each
building in the campus implements their own 2-tier or 3-tier modular network connecting to a campus backbone. The campus in
this scenario needs to support 64,000 x concurrent dual-stack clients and requires 6,000 x 802.11ac Wave 2 Access Points.
For a campus deployment, one key decision that needs to be made is where to place the mobility controller clusters. Due to the
high scaling requirements, a campus will generally require multiple clusters of mobility controllers which can either be
centralized in the datacenters or strategically distributed between the buildings. The clusters in both cases are managed by
hardware or virtual mobility masters deployed between the datacenters.
Both centralized and distributed mobility controller deployment models are valid for campus deployments, with each model
supporting different mobility needs. As seamless mobility can only be provided between Access Points (APs) managed by a
common cluster, the mobility requirements will influence the cluster deployment model that is selected.
An additional consideration for cluster placement is traffic flow. If the user applications are primarily hosted in the
datacenter, a centralized cluster is a good choice as the wireless and dynamically segmented client sessions are terminated
within the cluster. Placing the cluster closer to the applications optimizes the north/south traffic flows. If the primary applications
are distributed between buildings in the campus, a distributed mobility controller model may be a better choice to prevent the
unnecessary hairpinning of traffic across the core.
Centralized Clusters:
Permits a larger mobility domain when ubiquitous indoor / outdoor coverage is required.
Efficient when the primary applications are hosted in the cloud or datacenter.
Distributed Clusters:
Permits smaller mobility domains such as within buildings or between co-located buildings.
Efficient when the primary applications are distributed or workgroup based.
The next two sections provide reference architectures for both centralized and distributed cluster deployments.
Campus Characteristics:
6,000 x 802.11ac Wave 2 Access Points
64,000 x Concurrent Dual-Stack Clients
2 x Datacenters with Layer 2 Extension
Figure 45 - Campus Modular Network Design – Centralized Mobility Controller Clusters
The table below provides a summary of these components:
When datacenters are separated at layer 3, a different approach is required. To support the AP and client counts and maintain
full redundancy, an active / standby model is implemented where each datacenter hosts an equal quantity of mobility masters
and mobility controllers:
1. Mobility Masters – Two mobility masters are hosted per datacenter implementing layer 2 and layer 3 master
redundancy. Layer 2 master redundancy is provided between mobility masters within each datacenter while layer 3
master redundancy provides redundancy between datacenters.
2. Mobility Controller Clusters – Two clusters of mobility controllers are hosted per datacenter. The APs are configured
with a primary LMS and backup LMS to determine their primary and secondary cluster assignments. Fast failover is
provided within the primary cluster while a full bootstrap is required to failover between the primary and secondary
clusters.
For aggregation layer scaling and fault domain isolation, each cluster of mobility controllers is connected to separate Aruba
8400 series aggregation layer switches, each aggregation layer accommodating up to 64,000 IPv4 and 16,000 IPv6 host
addresses. As each datacenter is separated at layer 3, four wireless modules and wireless aggregation layers are required to
accommodate an individual datacenter failure.
ROAMING DOMAINS
With an ArubaOS 8 architecture, seamless mobility is provided between Access Points (APs) managed by a common cluster.
Each wireless and dynamically segmented client is assigned an Active User Anchor Controller (A-UAC) and Standby User
Anchor Controller (S-UAC) cluster member to provide fast failover in the event of a cluster member failure or live upgrades.
To provide scaling for this design, two clusters of mobility controllers are required. As seamless roaming can only be provided
between APs managed by the same cluster, special considerations need to be made to ensure that APs in groups of buildings
that require seamless roaming are managed by the same cluster. The following considerations need to be made:
1. APs in the same building must be managed by the same cluster. This ensures wireless client sessions are not
interrupted as the clients roam within the building.
2. Indoor and outdoor APs in co-located buildings with overlapping coverage must be managed by the same cluster. This
ensures client sessions are not interrupted as the clients roam within a building or between buildings.
APs in buildings that are geographically separated and do not have overlapping coverage can be distributed between clusters
as required, with attention paid to ensuring AP and client capacity is distributed as evenly as possible (figure 3-16):
Figure 3-16. Roaming Domains
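The assignment rules above can be sketched as a greedy balancing step: buildings that must share a cluster (same building, or co-located buildings with overlapping coverage) are first merged into one roaming group, then each group is placed wholly on the least-loaded cluster, largest group first. The group names and AP counts below are hypothetical.

```python
def assign_roaming_groups(group_ap_counts, clusters=2):
    """Place each roaming group wholly on one cluster, assigning the
    largest groups first to keep AP counts roughly balanced."""
    loads = [0] * clusters
    assignment = {}
    for group, aps in sorted(group_ap_counts.items(), key=lambda kv: -kv[1]):
        target = loads.index(min(loads))   # least-loaded cluster so far
        assignment[group] = target
        loads[target] += aps
    return assignment, loads

# "bldg-A+B" models two co-located buildings merged into one group.
groups = {"bldg-A+B": 300, "bldg-C": 200, "bldg-D": 150, "bldg-E": 150}
assignment, loads = assign_roaming_groups(groups)
```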
NOTE: If the campus deployment supports both wireless and dynamically segmented clients, you may consider deploying separate clusters for wireless and dynamically segmented clients.
REDUNDANCY
For this scenario the datacenters are located in separate buildings which are also connected to the campus backbone. The
datacenters are interconnected using high-speed links, ensuring there is adequate bandwidth capacity available to support the
applications and services hosted in each datacenter.
For a dual datacenter design, the mobility masters and mobility controller clusters are distributed between both datacenters.
The wireless components can be deployed using several strategies to achieve redundancy, depending on the datacenter
design:
Layer 2 Extension – If VLANs are extended between datacenters, the mobility masters and the mobility cluster
members can be split between the datacenters, with each datacenter hosting 1 x mobility master and 1/2 of the cluster
members.
Layer 3 Separation – The mobility masters and mobility cluster members are duplicated in each datacenter.
LAYER 2 EXTENSION
The layer 2 datacenter redundancy model is very easy to understand as it operates in the same manner as a single datacenter
deployment model. Each datacenter hosts a mobility master and 1/2 of the mobility controllers of each cluster. The mobility
masters are configured for L2 redundancy, while Access Point (AP) and client load-balancing and fast failover are provided by
each cluster (figure 3-17):
Aruba Mobility Master (MM):
o Two hardware or virtual MMs (one per datacenter)
o L2 master redundancy (Active / Standby)
Hardware Mobility Controllers (MCs):
o Two clusters of hardware MCs
o Cluster members equally distributed between datacenters
Access Points
o AP Master pointing to the cluster's VRRP VIP
o Fast failover using cluster built-in redundancy
o Per building AP cluster assignment based on roaming requirements
NOTE: By default the APs and clients will be load-balanced and distributed between cluster members residing in each datacenter. With this design it is possible that APs and clients within a building will be assigned to cluster members in different datacenters.
LAYER 3 SEPARATED
The layer 3 datacenter redundancy model differs from the layer 2 model by duplicating the mobility masters and clusters within each
datacenter. Each datacenter hosts two mobility masters and two clusters of mobility controllers. The mobility masters are
configured for L2 redundancy within the datacenter and L3 redundancy between datacenters. The Access Points (APs) within
each building are assigned a primary and backup cluster using the primary and backup LMS. AP and client fast failover is
provided within each cluster, while a full bootstrap is required to provide failover between clusters (figure 3-18):
Aruba Mobility Master (MM):
o Four hardware or virtual MMs (two per datacenter)
o L2 master redundancy (Active / Standby)
o L3 master redundancy (Primary / Secondary)
Hardware Mobility Controllers (MCs):
o Four clusters of hardware MCs (Primary / Secondary)
o Cluster members duplicated between datacenters
o Primary clusters alternating between datacenters
Access Points
o Primary and Backup LMS using the Primary and Secondary cluster VRRP VIP
o Fast failover using cluster built-in redundancy
o Bootstrap failover between Primary and Secondary clusters
o Per building AP cluster assignment based on roaming requirements
Figure 3-18. Redundancy – Layer 3 Separation
SCALABILITY
Scaling is the primary concern for this campus scenario which is complicated by the inclusion of a secondary datacenter and
the datacenter deployment model. To accommodate the scaling and redundancy requirements for this campus scenario,
considerations were made for both the datacenter aggregation layer and the mobility controller cluster design.
DATACENTER AGGREGATION LAYER
Both datacenter deployment models require clusters of mobility controllers that are connected to their respective datacenter
aggregation layers. To accommodate 64,000 x concurrent dual-stack hosts, two clusters of mobility controllers are required –
each supporting up to 32,000 x dual-stack hosts. Each IPv6 host in this example is assigned a single global IPv6 address.
Clients using SLAAC are likely to obtain and use additional IPv6 addresses, which will reduce the number of supported devices.
Due to the high number of clients that must be supported, each cluster is connected to a separate Aruba 8400 series wireless
aggregation layer. This recommendation applies to both layer 2 extended and layer 3 separated datacenter designs:
Layer 2 Extension – Requires two datacenter aggregation layers which are split between datacenters. Each wireless
aggregation layer supporting one cluster of mobility controllers.
Layer 3 Separated – Requires two datacenter aggregation layers per datacenter. Each wireless aggregation layer
connecting a primary or secondary cluster of mobility controllers.
This datacenter aggregation layer design ensures that a single aggregation layer never accommodates more than 64,000 x
IPv4 or IPv6 host addresses during normal operation as well as during a datacenter failure.
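The sizing logic above can be sketched as a quick check. This is a model of the rule described in the text, not a configuration: 32,000 dual-stack hosts per cluster, one 8400 wireless aggregation layer per cluster, and a 64,000 host-address budget per address family per aggregation layer.

```python
import math

HOSTS_PER_CLUSTER = 32_000   # dual-stack hosts per mobility controller cluster
AGG_ADDR_LIMIT = 64_000      # host addresses per address family, per aggregation layer

def plan_datacenter_aggregation(dual_stack_hosts):
    """Return the number of clusters (and aggregation layers) required,
    and whether each layer stays within its host-address budget."""
    clusters = math.ceil(dual_stack_hosts / HOSTS_PER_CLUSTER)
    hosts_per_layer = math.ceil(dual_stack_hosts / clusters)
    return clusters, hosts_per_layer <= AGG_ADDR_LIMIT

clusters, within_budget = plan_datacenter_aggregation(64_000)  # campus scenario
```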
Figure 47 - Datacenter Wireless Aggregation Layer Scaling
MOBILITY MASTER
For this campus design, Aruba recommends implementing the MM-HW-10K or MM-VA-10K mobility master (see platform
suggestions). As data switching throughput is not as big of a concern as with the mobility controller clusters, hardware or virtual
MMs can be deployed.
The mobility master selected for this design can scale to support 10,000 x APs, 100,000 x clients and 1,000 x mobility
controllers. This will provide adequate capacity to support the AP, client and mobility controller counts while providing additional
headroom for future growth. Additional clients and APs can be added as the campus grows by adding additional aggregation
layers and mobility controller clusters.
NOTE: Scaling beyond 64,000 x dual-stack clients for a centralized deployment model can be achieved by deploying additional mobility controller clusters within the datacenter. For an AOS 8.X deployment, a mobility master can be scaled to support up to 100,000 x clients, 10,000 x APs and 1,000 x mobility controllers (see appendices). Additional scaling is possible by deploying additional mobility masters and mobility controller clusters.
VIRTUAL LANS
For a centralized cluster design, the datacenter aggregation layer terminates all the VLANs from the mobility controller cluster
members. The datacenter architecture that is implemented determines the VLAN design. In both designs the VLANs are
extended from the mobility controllers to their respective datacenter aggregation layer switches using 802.1Q trunking. The
primary difference between the designs is the number of VLANs that are required.
LAYER 2 EXTENSION
When VLANs are extended between datacenters, each cluster implements its own unique VLAN ids and broadcast domains
that are extended between the datacenters. Each cluster consists of one or more user VLANs depending on the VLAN model
that has been implemented. For a single VLAN design, all wireless and dynamically segmented clients are assigned to a common VLAN id, with roles and policies determining the level of access each user is provided on the network. Each cluster implements unique VLAN ids.
The user VLANs are extended from the aggregation layer switches to each mobility controller cluster member (figure 3-20). At a
minimum two VLANs are required between the datacenter aggregation layers and each mobility controller cluster member. One
VLAN is dedicated for management, cluster and MM communications while the additional VLANs are mapped to clients. The
VLANs are common between cluster members split between the datacenters to permit seamless mobility. The datacenter
aggregation layer switches have VLAN based IP interfaces defined and operate as the default gateway for each VLAN. First-
hop router redundancy is natively provided by VRRP or the Aruba clustering architecture.
Figure 3-20. Wireless and Dynamically Segmented Client VLANs – Layer 2 Extension
LAYER 3 SEPARATION
When the datacenters are separated at layer 3, the VLANs are unique per datacenter. The primary and secondary clusters in each datacenter each require their own unique VLAN ids and broadcast domains. Each cluster consists of one or more user VLANs depending on the VLAN model that has been implemented. For a single VLAN design, all wireless and dynamically segmented clients are assigned to a common VLAN id, with roles and policies determining the level of access each user is provided on the network. Each cluster implements unique VLAN ids.
The user VLANs are extended from the aggregation layer switches to each mobility controller cluster member (figure 3-21). At a
minimum two VLANs are required between the datacenter aggregation layers and each mobility controller cluster member. One
VLAN is dedicated for management, cluster and MM communications while the additional VLANs are mapped to clients. The
VLANs are common between cluster members in each datacenter to permit seamless mobility. The datacenter aggregation
layer switches have VLAN based IP interfaces defined and operate as the default gateway for each VLAN. First-hop router
redundancy is natively provided by VRRP or the Aruba clustering architecture.
Figure 3-21. Wireless and Dynamically Segmented Client VLANs – Layer 3 Separation
One difference between the two datacenter designs is the client VLAN assignment and broadcast domain membership during a
datacenter failure. While both models offer full redundancy, only the layer 2 VLAN extension model offers fast failover in the
event of a datacenter outage:
1. Layer 2 Extension – Impacted clients maintain their VLAN id and IP addressing after a datacenter failover. The APs,
Aruba switches and clients are assigned to a new cluster member in their existing cluster in the remaining datacenter.
2. Layer 3 Separated – Impacted clients are assigned a new VLAN id and IP addressing after a datacenter failover. The APs, Aruba switches and clients are assigned to a secondary cluster member in the remaining datacenter.
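The behavioral difference between the two failover models can be illustrated with a toy data model; all names and values below are hypothetical, not part of this design:

```python
# Sketch contrasting client state after a datacenter failover under the
# two designs described above. Purely illustrative data model.

def failover(client, design):
    """Return the client's post-failover state for a given design."""
    if design == "l2-extension":
        # VLAN id and IP addressing survive; only the serving cluster
        # member changes, so failover is fast and largely transparent.
        return {**client, "cluster_member": "dc2-member"}
    elif design == "l3-separation":
        # Client lands on the secondary cluster in the surviving DC and
        # must acquire a new VLAN id and IP address.
        return {**client, "cluster_member": "dc2-member",
                "vlan": "new-vlan", "ip": "reacquired-via-dhcp"}
    raise ValueError(design)

client = {"vlan": 100, "ip": "10.1.100.25", "cluster_member": "dc1-member"}
kept = failover(client, "l2-extension")
assert kept["vlan"] == 100 and kept["ip"] == "10.1.100.25"
```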
Core Layer – Building Specific Switching (Follow Small, Medium and Large Recommendations)
Aggregation Layer – Building Specific Switching (Follow Small, Medium and Large Recommendations)
Access Layer – 802.11ac Wave 2 Access Points: 300 Series, 310 Series, 330/340 Series
NOTE: As each building in the campus can be different in size, each building will require its own respective 2-tier or 3-tier hierarchical network design. As such, switching suggestions for the core, aggregation and access layers are not provided in table 17 as these selections will be unique per building. The individual building selections should be made following the small, medium and large suggestions highlighted in the previous sections.
SCENARIO 2 – DISTRIBUTED CLUSTERS
The following reference design is for a campus such as a university with 285 buildings distributed over a 900 acre site. Each building implements its own 2-tier or 3-tier modular network design that connects to a common campus backbone. The
university has 20,000 faculty, staff and students with IPv4 and/or IPv6 clients. To provide coverage, the university has deployed
3,500 x 802.11ac Wave 2 Access Points (figure 3-22).
Campus Characteristics:
3,500 x 802.11ac Wave 2 Access Points
40,000 x Concurrent Clients (Native IPv4 and/or Dual-Stack)
1 x Datacenter
Figure 49 - Campus Modular Network Design – Distributed Mobility Controller Clusters
Co-located buildings requiring overlapping coverage are serviced by a cluster of mobility controllers strategically deployed in one of the co-located buildings. APs and clients in standalone or isolated buildings are serviced by their own cluster of mobility controllers.
The modular network design and mobility controller cluster placement recommendations for each building in the campus follow the same recommendations provided for the small, medium and large office reference designs. The mobility controller clusters connect to their respective layer depending on the building's size. As with the previous recommendations, a wireless aggregation layer is recommended when the number of wireless and dynamically segmented clients exceeds 4,096.
As the building sizes, number of APs and hosts vary – the mobility controller clusters are customized per building or co-located
buildings to meet the AP, client and throughput requirements. For ease of deployment, troubleshooting and repair, it is
recommended that you standardize on common models of mobility controllers for small, medium and large buildings. Your
design may include specifying two or three models of mobility controllers depending on the range of building sizes you need to
support.
Table 18 provides a summary of these components:
ROAMING DOMAINS
With an ArubaOS 8 architecture, seamless mobility is provided between Access Points (APs) managed by a common cluster.
Each wireless and dynamically segmented client is assigned a primary (UAC) and secondary (S-UAC) cluster member to
provide fast failover in the event of a cluster member failure or live upgrades.
This campus design includes both standalone and co-located buildings. Roaming is provided within each building as well as strategically between co-located buildings where overlapping coverage is provided. Co-located buildings provide indoor/outdoor coverage, permitting roaming as faculty and students move between the co-located buildings (figure 34):
Standalone Buildings – Are each serviced by a cluster of mobility controllers deployed within each building. When necessary, APs in small buildings are serviced by a mobility controller cluster in a neighboring building.
Co-Located Buildings – Are serviced by a cluster of mobility controllers strategically deployed in one of the co-located buildings. Each cluster services APs across two or more buildings.
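Conceptually, each client's primary (UAC) and secondary (S-UAC) are a deterministic mapping of the client onto distinct cluster members. The hash-based sketch below only illustrates that concept; the actual ArubaOS 8 bucket-map algorithm differs:

```python
import hashlib

# Simplified sketch of assigning each client a primary (UAC) and a
# secondary (S-UAC) cluster member. The real ArubaOS 8 bucket-map
# algorithm is more involved; this only illustrates the concept.

def assign_uac(client_mac, members):
    digest = hashlib.sha256(client_mac.encode()).digest()
    primary = digest[0] % len(members)
    secondary = (primary + 1) % len(members)  # always a different member
    return members[primary], members[secondary]

members = ["mc1", "mc2", "mc3"]  # hypothetical cluster members
uac, s_uac = assign_uac("aa:bb:cc:dd:ee:ff", members)
assert uac != s_uac  # fast failover target is never the same member
```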
REDUNDANCY
For this scenario the mobility masters are deployed within the datacenter and connect directly to separate datacenter
aggregation switches. Redundancy within each building is provided by the modular network design and clusters of mobility
controllers. The mobility controllers are deployed following the same recommendations provided for the small, medium and
large office reference designs:
Aruba Mobility Master (MM):
o Two hardware or virtual MMs
o L2 master redundancy (Active / Standby)
Hardware Mobility Controllers (MCs):
o Multiple clusters of hardware MCs
o Minimum of two cluster members
Virtual Mobility Controllers (MCs):
o Multiple clusters of virtual MCs
o Minimum of two cluster members
Access Points
o AP Master pointing to the cluster's VRRP VIP
o Fast failover using cluster built-in redundancy
Additional redundancy between clusters can be achieved if desired by implementing the backup LMS option. This will allow
Access Points (APs) in a building to failover to an alternative designated cluster in the event of an in-building cluster or wireless
aggregation layer failure. Please note that the APs will perform a full bootstrap to failover to the alternate cluster which is user
impacting. The alternate cluster and aggregation layer must also be scaled accordingly to accommodate the AP and client
counts.
SCALABILITY
The primary scaling concern for this scenario is mobility master scaling. For this campus design, not only do you need to
accommodate the total number of Access Points (APs) and clients, but also the total number of mobility controllers which are
distributed between buildings. For this scenario, the 285 buildings will be serviced by 180 clusters – each with a minimum of two
mobility controller members. Clusters in some larger buildings implement three or four cluster members as required.
For this campus design, Aruba recommends implementing the MM-HW-5K or MM-VA-5K mobility master (see platform
suggestions). As the number of distributed mobility controllers is the primary concern, hardware or virtual MMs can be
deployed. The mobility master selected for this design can scale to support 5,000 x APs, 50,000 x clients and 500 x mobility
controllers. This will provide adequate capacity to support the AP, client and mobility controller counts while providing additional
headroom for future growth. If your specific campus design requires more mobility controllers, the MM-HW-10K or MM-VA-10K mobility master can be selected, which can support up to 1,000 x mobility controllers.
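A quick sanity check of the controller count against the MM-*-5K limits quoted above, assuming the minimum of two members per cluster:

```python
# Sketch: check whether the distributed-cluster counts from this scenario
# fit within the MM-*-5K limits quoted above (5,000 APs, 50,000 clients,
# 500 mobility controllers).

MM_5K = {"aps": 5_000, "clients": 50_000, "controllers": 500}

clusters = 180
members_per_cluster = 2          # minimum; some larger buildings use 3-4
controllers = clusters * members_per_cluster

campus = {"aps": 3_500, "clients": 40_000, "controllers": controllers}
fits = all(campus[k] <= MM_5K[k] for k in MM_5K)
print(controllers, fits)  # 360 controllers, within the 500-controller limit
```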
VIRTUAL LANS
For a distributed cluster design, the building core or wireless aggregation layer terminates all the VLANs from the building's wireless module. The wireless and dynamically segmented client VLANs are extended from the mobility controllers to their building's respective access layer switches using 802.1Q trunking.
The wireless module consists of one or more user VLANs depending on the model that is implemented. For a single VLAN
design, all wireless and dynamically segmented clients are assigned to a common VLAN id with roles and policies determining
the level of access each user is provided on the network. The single VLAN is extended from the respective aggregation layer
switches to each physical or virtual mobility controller cluster member. Additional VLANs can be added and extended as
required (figure 35). For example, your mobile first design may require separate VLANs to be assigned to wireless and
dynamically segmented clients for policy compliance.
At a minimum two VLANs are required between the building's core or wireless aggregation layer switches and each mobility controller cluster member. One VLAN is dedicated for management and mobility master communications while the additional VLANs are mapped to clients. The VLANs are common between cluster members to permit seamless mobility within each building.
Each building may implement common VLAN ids or unique VLAN ids as required. As each building is layer 3 separated from
the other buildings in the campus, the VLAN ids can be re-used, simplifying the WLAN and dynamically segmented client
deployment. However, each VLAN will require its own IPv4 and IPv6 subnet assignments.
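Python's stdlib can illustrate the last point: buildings may re-use the same VLAN id, but each building's instance of that VLAN needs its own subnet. The addressing below is hypothetical:

```python
import ipaddress

# Sketch: even when buildings re-use the same VLAN id, each building's
# instance of that VLAN needs its own IPv4 (and IPv6) subnet. Carving
# per-building subnets from a hypothetical campus supernet:

campus = ipaddress.ip_network("10.20.0.0/16")
building_subnets = list(campus.subnets(new_prefix=24))  # 256 x /24

# VLAN 100 in building 1 and building 2 share an id but not a subnet:
vlan100 = {"building-1": building_subnets[0], "building-2": building_subnets[1]}
assert vlan100["building-1"] != vlan100["building-2"]
print(vlan100["building-1"])  # 10.20.0.0/24
```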
VRD CASE STUDY OVERVIEW
In this section of the Campus VRD, we will examine the networking needs of a fictitious company named Dumars Industries.
Dumars Industries has 5 offices within a metropolitan area and is looking to expand to additional remote offices throughout the
country. Dumars Industries has been in business for 18 years and has approximately 25,000 employees. The majority of
employees work in one of the company offices, the primary exception being the field sales teams which consist of
approximately 1,000 total users. Dumars Industries also has an active college intern program and generally hosts 50-100
additional interns each quarter.
The five primary sites are depicted below, along with the connectivity between each site and internet connectivity. Headquarters and Gold River are the two main sites and support the largest user populations. Metro-E circuits have been provisioned as depicted below.
Dumars Industries provides each employee a laptop or tablet device and allows employees to use their own devices as well.
The company has embraced SaaS and has moved approximately 85% of workloads to the cloud. Dumars Industries has
elected to keep building security systems on-premises.
Headquarters Building
The headquarters building is a 14-story facility and has an adjacent parking garage. The following business functions and
associated staff work in the headquarters building:
• Executive Offices
• Client Briefing Center
• Human Resources
• Legal Services
• Finance
• Safety & Security
The company space planner has projected to have approximately 400 users per floor for a total of 5,000 users. 95% of the
users will be connected to the network wirelessly. The majority of switchports will be used for connecting to access points,
building control IoT devices, and security cameras. Executive Offices will provide wired connectivity for IP desk phones while
other users will use soft-phone clients. Each floor also has several small and medium sized conference rooms and two large
conference rooms. All conference rooms will have wired connectivity for conference room phones and audio/visual equipment.
Each floor has three intermediate distribution frames (IDFs) with fiber connectivity back to the main distribution frame (MDF)
located on the 7th floor. The fiber path from each IDF is a ‘home run’ to the MDF. The cable path between floors runs through
the first IDF of each floor. There are 24 strands of OM3 fiber running from the MDF to primary IDF on each floor. There are
also 12 strands of OM3 running from the main IDF on each floor to the second and third IDFs. The fiber plant provides the
ability to provision connections from each IDF to the MDF via intermediate patch panels.
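Since each duplex fiber link consumes two strands, the strand counts above bound the number of uplinks that can be provisioned per IDF; a quick sketch of that arithmetic:

```python
# Quick arithmetic on the fiber plant described above: each duplex link
# consumes two strands, so strand counts bound the number of uplinks
# that can be provisioned from each IDF.

mdf_to_primary_idf = 24    # strands of OM3 from the MDF to the primary IDF
primary_to_secondary = 12  # strands to each of the 2nd and 3rd IDFs

duplex_links_mdf = mdf_to_primary_idf // 2    # possible duplex links to the MDF
duplex_links_idf = primary_to_secondary // 2  # possible duplex links to 2nd/3rd IDFs
print(duplex_links_mdf, duplex_links_idf)     # 12 6
```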
Each IDF will contain a stack of switches to provide both power and connectivity for access points and other devices. To
provide the best user experience, the access switching stacks will support HPE SmartRate to deliver 5 Gbps of connectivity to
each access point. SmartRate will be used for network locations with higher bandwidth requirements. It is anticipated that 25
access points will be needed per floor – although more may be needed in the future. The switches will also provide power and
network connectivity to 5-10 security cameras, 15-20 badge readers, and 15-20 building control IoT devices.
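A rough PoE budget for one floor can be estimated from the device counts above. The per-device wattages below are assumptions for illustration only, not vendor specifications:

```python
# Rough PoE budget sketch for one floor, using the device counts above.
# Per-device draw figures are assumptions for illustration, not specs.

devices = {
    # role: (count, assumed watts each)
    "access_points":   (25, 30),  # 802.3at-class draw assumed
    "cameras":         (10, 15),  # upper end of the 5-10 range
    "badge_readers":   (20, 7),   # upper end of the 15-20 range
    "iot_controllers": (20, 7),   # upper end of the 15-20 range
}

total_w = sum(count * watts for count, watts in devices.values())
print(f"Estimated floor PoE load: {total_w} W")  # 1180 W
```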
The copper cabling plant was recently upgraded in anticipation of supporting 802.3bt Power over Ethernet. Currently,
there are plans being developed to modernize the building infrastructure to take advantage of additional power provided by
802.3bt. The table below provides a summary of the copper cable plant.
The diagram below provides a high-level overview of the fiber plant on each floor and connectivity back to the building MDF
located on the 7th floor.
Figure 55 - Headquarters IDF Connectivity
The Data Center at Headquarters is located on the 7th floor. Dumars Industries has moved the majority of their workload to the
cloud but does maintain a few in-house systems/services. The systems which have not moved to the cloud are:
• Email and Calendaring
• Directory Services
• DHCP and DNS Services
• Security and Building Safety Systems
• ClearPass
• Network Management Applications
• IP Telephony Systems
Of the systems still located in the data center, most are virtualized with the exception of the application(s) to support the IP
camera systems. The data center network connects to the campus core switches via multiple links to provide both sufficient
capacity and redundancy. The data center is designed with a spine and leaf architecture and the building network is a ‘large
leaf’. Other services which are accessible via the data center network include internet access, remote access/VPN services,
and public web pages/content. Internet connectivity is provided by BGP peerings to two ISPs. The internet edge service
block advertises a default route to the Campus network. Default route handling will be detailed in the routing architecture
portion of this document.
The Gold River Datacenter is designed to replicate all of the services provided by the Headquarters Datacenter. The
virtualization environment provides capabilities to move virtual machines/workloads between datacenters and update host
addressing as required. This VRD will document the connectivity from the Campus network to the Data Center network but will
not include technical details of the data center environment.
Gold River
The Gold River building is a pair of adjacent eight story buildings supporting approximately 5,000 users. The buildings are known as ‘Gold River’ and ‘Gold River North’ (often abbreviated GDRN). The following business functions and associated staff
work in the facility:
• Research & Development
• Client Support Services
• Internal and External Training
The company space planner has projected to have approximately 50 users on the first and second floors of the Gold River main
building and 400 users per floor on floors three through eight. Training facilities are only in the Gold River Main building and are
not located in the GDRN building. 90% of the users will be connected to the network wirelessly. The training facilities are
essentially large conference rooms with partitions to support having training sessions for small groups of 10-12 and scaling to
large groups of up to 100. The training facility can support a maximum of 400 students at any given time.
The majority of switchports will be used for connecting to access points, building control IoT devices, and security cameras.
Floors three through six have several small and medium sized conference rooms and two large conference rooms. Conference
rooms will have wired connectivity for conference room phones and audio/visual equipment.
Each floor has three intermediate distribution frames (IDFs) with fiber connectivity back to the main distribution frame (MDF)
located on the 8th floor. There are 24 strands of OM3 fiber running from the MDF to primary IDF on each floor. There are also
12 strands of OM3 running from the main IDF on each floor to the second and third IDFs. The fiber plant provides the ability to
provision connections from each IDF to the MDF via intermediate patch panels.
Each IDF will contain a stack of switches to provide both power and connectivity for access points and other devices. To
provide the best user experience, the access switching stacks will support HPE SmartRate to deliver the capability to provide
more than 1Gbps of connectivity to specific access points deployed in locations requiring high bandwidth. It is anticipated that
30-35 access points will be needed per floor – although more may be needed in the future. The switches will also provide
power and network connectivity to 5-10 security cameras, 15-20 badge readers, and 15-20 building control IoT devices.
The diagram below provides a high-level overview of the fiber plant on each floor and connectivity back to the building MDF
located on the 8th floor.
Figure 56 - Gold River & Gold River North IDF Connectivity
The diagram below shows the links between service blocks in the Gold River and Gold River North facilities. Note that access
layer devices are omitted for clarity. The only layer 3 devices in the Gold River North building are a pair of 8320s.
The Gold River Data center is located on the 8th floor of the GDR facility. This facility serves as a redundant site to the primary
data center at Headquarters. The systems which have been replicated at the GDR DC are:
• Email and Calendaring
• Directory Services
• DHCP and DNS Services
• Security and Building Safety Systems
• ClearPass
• Network Management Applications
• IP Telephony Systems
The connectivity model for the Gold River DC to the Gold River Campus is the same as the Headquarters design. The GDR
DC uses a spine and leaf architecture and the Gold River Campus network is a ‘large leaf’. Aligning to the design of the
Headquarters DC, the Gold River DC provides redundant connectivity. Internet connectivity is provided by a BGP peering to an
additional ISP. The internet edge service block advertises a default route to the Campus network. Default route handling will
be detailed in the routing architecture portion of this document.
Squaw Valley
The Squaw Valley building is a two-story building supporting approximately 1,000 users. The primary business function at this
this site is manufacturing. This facility runs 24x7. The majority of users will be connected via the wireless network.
Manufacturing systems are the primary devices connected to the wired network. Approximately 50% of the wired ports are
used to connect to manufacturing plant/equipment devices. The remaining switch ports will be used for connecting to access
points, building control IoT devices, and security cameras. There are two small and two medium sized conference rooms on
each floor.
Each floor has four intermediate distribution frames (IDFs) with fiber connectivity back to the main distribution frame (MDF)
located on the 1st floor. There are 8 strands of OM3 fiber running from the MDF to each IDF. Each IDF will contain a stack of
switches to provide both power and connectivity for access points and other devices. To provide the best user experience, the access switching stacks will support HPE SmartRate, delivering up to 5 Gbps of connectivity to APs and other devices. It is anticipated that
40-50 access points will be needed per floor. The switches will also provide power and network connectivity to 5-10 security
cameras, 15-20 badge readers, and 15-20 building control IoT devices.
The diagram below provides a high-level overview of the IDF physical connectivity.
Figure 58 – Squaw Valley IDF Connectivity
Kirkwood
The Kirkwood building is a two-story building supporting approximately 1,000 users. The primary business functions at this site
are marketing and sales. The majority of users will be connected via the wireless network. Most switch ports will be used for connecting to access points, building control IoT devices, and security cameras. There are twelve small, eight medium, and two large conference rooms on each floor.
Each floor has four intermediate distribution frames (IDFs) with fiber connectivity back to the main distribution frame (MDF)
located on the 1st floor. There are 8 strands of OM3 fiber running from the MDF to each IDF. Each IDF will contain a stack of
switches to provide both power and connectivity for access points and other devices. It is anticipated that 40-50 access points
will be needed per floor. The switches will also provide power and network connectivity to 5-10 security cameras, 15-20 badge
readers, and 15-20 building control IoT devices.
The diagram below provides a high-level overview of the IDF physical connectivity.
Figure 59 - Kirkwood IDF Connectivity
Mt. Rose
The Mt. Rose facility serves as the primary shipping/receiving location for raw materials and finished goods. The facility also houses a warehouse and the newly-launched materials recovery program, where returned/old/damaged products are disassembled in an effort to recover and re-use as much of the raw material as possible. The building is approximately 250,000 square feet
and provides 12 truck bays. There are approximately 250 users in this facility. The facility has a few offices and two conference rooms, but the majority of the space is used for warehousing.
There are 10 IDFs in the building. Each IDF will have a pair of access switches providing connectivity to approximately 30
access points. The wireless network will also use outdoor access points to provide coverage of outdoor areas where
employees will be working. Outdoor security cameras will be used in this location.
Figure 60 - Mt. Rose IDF Connectivity
DESIGN REQUIREMENTS
Availability Requirements
1. Single device faults in the access layer should not impact more than 30 users
2. Single device faults in the aggregation layer must not impact any users
3. Single device faults in the core layer must not impact any users
4. Device or circuit outages should result in sub-second failover.
5. Devices should be configured with as much interchassis/interstack redundancy as possible
6. A single device failure of a mobility controller cluster member should not result in any perceptible impact to users.
Layer 2 Requirements
1. Spanning-Tree domains must be as small as possible to minimize network convergence events
2. Spanning-Tree should be eliminated from the network provided there are sufficient safeguards in place to mitigate any
looping events.
3. LACP must be used on all link-aggregation configurations.
Routing Requirements
1. BGP will be used to provide connectivity between sites.
2. BGP will be configured to advertise aggregate addressing for each site.
3. Each site will have a unique private ASN.
4. OSPF will be used as the routing protocol within each building/campus.
5. OSPF will be implemented with a single area in each building/campus.
6. Each OSPF speaker should not have more than 12 adjacencies
7. Redistribution between routing protocols will only be configured on WAN edge devices
8. Headquarters and Gold River will provide internet connectivity for the enterprise network.
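Requirement 2 can be sanity-checked programmatically: every internal prefix at a site should fall within that site's advertised aggregate. The addressing below is hypothetical:

```python
import ipaddress

# Sketch of requirement 2: each site advertises only an aggregate that
# covers its internal prefixes. Site addressing below is illustrative.

site_aggregate = ipaddress.ip_network("10.10.0.0/16")  # hypothetical site block
internal = [ipaddress.ip_network(p) for p in
            ("10.10.1.0/24", "10.10.2.0/24", "10.10.100.0/23")]

# Every internal prefix must be covered by the advertised aggregate:
assert all(p.subnet_of(site_aggregate) for p in internal)
```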
Multicast Requirements
1. PIM Sparse Mode will be used in each building
2. PIM BSR will be configured on core device(s) to provide an RP for each building.
3. IGMP and IGMP snooping will be configured on aggregation and access layer devices to optimize multicast traffic
flows.
4. There is no business need to transport multicast data between facilities today.
5. All network devices MUST implement features to optimize multicast traffic flooding to conserve both wired bandwidth
and wireless bandwidth.
End User Experience
1. Wired and wireless devices/users should be profiled/authenticated by the network so that pre-defined security policies
can be applied
2. The network must provide for the application of differing policies for employees using company systems and employee owned devices (BYOD).
3. The network must provide ‘Guest Internet’ for wireless users
4. The network must provide ‘Internet Only’ access for wired ports in training facilities
5. The network must provide for the ability to block systems/hosts/users from different groups (such as employees,
Building Management Systems, IoT Devices, etc) from communicating with other groups
6. End-users and Visitors MUST be able to self-provision BYOD devices using either the guest network or BYOD
services.
7. Wireless roaming in buildings must be implemented such that a user can maintain connectivity when roaming on or
between floors.
8. Optimize RF design for voice and roaming
Security
1. All devices should only allow administrative access from defined networks/hosts.
2. Access layer devices should be configured to prevent attached host systems from influencing/changing the spanning-
tree topology of the network. Any spanning-tree related frames received on ports connected to access-layer hosts
should cause the port to be disabled.
3. All devices should authenticate peer/neighbor adjacencies for routing protocols.
4. All devices should only allow secure communication access methods for administrative access
5. When supported, all control plane protocols should be authenticated and encrypted.
DESIGN OVERVIEW
In crafting a design to meet the identified requirements, as well as provide some ‘planning for the future’, consideration must be given to the service blocks which are the most likely to have additional requirements. The access layer is the most likely service block to require changes to support new business needs. Changes to the aggregation layer are often driven by building expansion or growth in the number of access layer devices/IDFs. In most cases, the overall design of the aggregation layer doesn’t change substantially. The same holds true for the core layer. With ClearPass providing centralized policy definition and the network enforcing the policies, device configurations may not need to change a great deal. Of course, there will be exceptions to this, such as the need to adjust a QoS configuration to support a new application.
An item that is often overlooked in designing and building a Mobile-First network is ensuring that MTUs are properly configured on all devices to support features such as Dynamic Segmentation and OSPF (or other protocols/functions) which require MTUs larger than 1500 bytes. Care should be taken to ensure that the IP path from access devices (switches or APs) can provide an MTU of at least 1564 bytes to the mobility controllers. In a Campus environment this likely doesn’t present a problem, but in a Metro-E network where backup controllers are placed at remote sites, the Metro-E circuits must support the jumbo frames. An IP MTU of 2048 and an Ethernet MTU of 2048 are recommended.
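The arithmetic behind the minimum is straightforward: a full-size client frame tunneled (for example in GRE) to a mobility controller needs extra headroom on every transit link. The overhead figures below are approximate, for illustration only:

```python
# Sketch of the MTU arithmetic behind the guidance above: a 1500-byte
# client payload carried in a GRE tunnel to the controller needs extra
# headroom on every transit link. Overhead figures are approximate.

CLIENT_MTU = 1500
GRE_OVERHEAD = 24  # outer IPv4 header (20) + GRE header (4), approximate

min_ip_mtu = CLIENT_MTU + GRE_OVERHEAD  # 1524
# The VRD calls for at least 1564, leaving margin beyond the bare overhead,
# and recommends 2048 outright.
recommended = 2048
print(min_ip_mtu, recommended)  # 1524 2048
```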
NOTE: Aruba does NOT recommend deploying dynamic segmentation across a WAN where devices are separated over low-speed, higher latency links.
The table below lists the design models and operating systems used within each of the service blocks. All of the sites will feature a common access-layer design leveraging leading practices including device hardening, link aggregation, loop-protection, and dynamic segmentation. The single two-tier site will be designed with ArubaOS-Switch devices for all roles, and the other sites will use ArubaOS-CX in all roles save for the access layer. The Kirkwood site will not require a dedicated Wireless Aggregation service block; the mobility controllers will be connected to the aggregation switches.
The Metro-E devices used in this design are all 8320s with ArubaOS-CX. In this case study, we have elected to use the same
device for this role in all sites to ensure configuration consistency and reduce troubleshooting complexity.
The switches, access points, and mobility controllers planned for each site are listed below.
Headquarters
Access Switches – 126x 2930M – Stacks of 2-4 switches will be deployed in each of the 42 IDFs.
Access Points – 340 Series APs (indoor), 370 Series APs (outdoor) – Quantity to be determined by site survey.
Wireless Aggregation Switches – 2x 8320 – One pair of aggregation switches is used to implement an L3 attached wireless services block to off-load L2 processing from core devices and to address failure-domain sizing.
Mobility Controllers – 3x 7220 – Three mobility controllers are called for to support local HQ users as well as to act as backup devices for Gold River and other remote sites.
Squaw Valley
Core Switches – 2x 8320 – One pair of core switches is used to address port density as well as failure-domain sizing.
Access Switches – 64x 2930M – Stacks of 2-4 switches will be deployed in each of the IDFs.
Access Points – 340 Series APs (indoor), 370 Series APs (outdoor) – Quantity to be determined by site survey.
Wireless Aggregation Switches – 2x 8320 – One pair of aggregation switches is used to implement an L3 attached wireless services block to off-load L2 processing from core devices and to address failure-domain sizing.
Mobility Controllers – 3x 7220 – Three mobility controllers are called for to support local users as well as to act as backup devices for Gold River and other remote sites.
Kirkwood Equipment List
Kirkwood
Core Switches 2x 8320s One pairs of core switches are used to address port
density as well as failure-domain sizing.
Access Switches 64 x 2930Ms Stacks of 2-4 switches will be deployed in each of the
IDFs.
Access Points 340 Series APs (indoor) Quantity to be determined by site survey
370 Series APs (outdoor)
Wireless Aggregation Switches N/A The size of the user population at this site doesn’t
warrant having a dedicated wireless aggregation
switch/service block. Future growth may dictate the
addition of a pair of switches to provide this function.
Mobility Controllers 3x7210s Three mobility controllers are required to support
local users.
The Kirkwood site is unique in that some design compromises could be made without adversely impacting network
performance. For example, the Metro-E edge functions could be collapsed into the core layer. This would, however, reduce
operational agility in that maintenance to the core devices would also impact access to remote sites and cloud-based
applications.
Model & Quantity Notes
Mt. Rose
Collapsed Core & Aggregation 2x3810s The user density at this site allows for using a two-tier
Switches network. ArubaOS-Switch devices were selected to
support future growth (and deployment of a dedicated
core) which would then provide a nearly identical
functional design to other three-tier sites.
Access Switches TBD x 2930Ms Stacks of 2-4 switches will be deployed in each of the
IDFs.
Access Points 340 Series APs (indoor) Product count to be determined by site survey
370 Series APs (outdoor)
Wireless Aggregation Switches N/A The size of the user population at this site doesn’t
warrant having a dedicated wireless aggregation
switch/service block. Future growth may dictate the
addition of a pair of switches to provide this function.
Mobility Controllers 3x7210 Three mobility controllers are required to support
local users and provide HA.
NOTE: Access layer product selection can vary based upon site-specific and customer-specific goals and needs. For
example, a customer can elect to use a chassis-based solution instead of a switch stack without significant impact
to the overall network design.
SWITCHING ARCHITECTURE
The three-tier model used in this design will have layer-two access devices, with layer-three services provided by the
aggregation layer. The aggregation layer will use VSX to provide active/active forwarding for the access layer. VSX requires
that LACP be used on the VSX/MCLAG links. Spanning-tree will be implemented on the access-layer devices, with each
access device/stack being a small spanning-tree domain. Each of the switches/stacks will be configured as the root bridge
of its own domain. Spanning-tree frames (BPDUs) will be disabled on the uplink ports connecting to the aggregation layer
switches. Loop-protect will be enabled on the uplink ports connecting the access switch to the aggregation layer switches.
Aruba 8400 and 8320 switches will be used in the aggregation role in this design.
Loop-protection can also be implemented on the VSX aggregation devices. It is critical to understand the behavior of
loop-protect to decide when and where to include it in network designs. In a VSX configuration with loop-protect enabled, if a
loop is detected by the VSX switch, both links in the VSX/MCLAG bundle will stop forwarding traffic (provided that the
configured action is just to disable forwarding) until the device believes the loop condition is resolved. VSX with loop-protect is
recommended for networks in which there is a potential for loops to be created between access-layer devices. Aruba
recommends using loop-protect in this case, as it will minimize the impact of a loop by disabling one pair of VSX forwarding
interfaces, protecting the other access-layer devices and the VSX pair.
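The access-uplink behavior described above can be sketched in ArubaOS-Switch CLI roughly as follows. This is a minimal, illustrative sketch, not a configuration from the case study: the port numbers, trunk name, and disable-timer value are assumptions and should be adapted to the actual stack layout.

```
! ArubaOS-Switch access stack - illustrative sketch
! enable spanning-tree and make this stack the root of its own small domain
spanning-tree
spanning-tree priority 0
! LACP trunk for the uplink to the VSX aggregation pair
trunk 1/49-1/50,2/49-2/50 trk1 lacp
! do not send or process BPDUs toward the aggregation layer
spanning-tree trk1 bpdu-filter
! guard the uplink against downstream loops
loop-protect trk1
loop-protect disable-timer 30
```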
VSX ISL Link Use a separate, dedicated link/LAG for the ISL. Do not allow
keepalive traffic to traverse this link.
VSX Keepalive Use a dedicated layer 3 LAG/Link(s) between peers. Optionally, this link can
be in a VRF other than the default VRF.
VSX Sync Enable VSX Sync on VLANs and SVIs so that access-lists and other
elements are synchronized from the primary VSX device to the secondary
VSX device.
MTU Ensure that you configure the interface MTU prior to assigning the interface to
a LAG. The MTU should be at least 20 bytes larger than the IP MTU.
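The VSX elements in the table above can be summarized in an ArubaOS-CX sketch along these lines. The LAG number, peer addresses, and keepalive VRF name are illustrative assumptions, and the vsx-sync behavior should be verified against your software release.

```
! ArubaOS-CX aggregation switch - illustrative VSX sketch
vrf KEEPALIVE
vsx
    ! dedicated LAG for the ISL; keepalive traffic stays off this link
    inter-switch-link lag 256
    ! dedicated L3 keepalive link, placed in a non-default VRF
    keepalive peer 192.168.255.2 source 192.168.255.1 vrf KEEPALIVE
    role primary
! per-context sync: push this VLAN interface config to the secondary
interface vlan 20
    vsx-sync
```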
In the three-tier model, the core devices will only provide layer 3 connectivity to other service blocks/devices. Aruba 8400 and
8320 switches will be used in the core role in this design.
The collapsed-core design for this case study will provide layer 2 access devices and a pair of core/aggregation switches
providing layer 2 and layer 3 services. The two-tier model will be implemented using ArubaOS-Switch devices. The collapsed
core/aggregation will be configured with Virtual Stacking Framework (VSF) to present a single logical device to neighboring
devices. Connectivity between devices/stacks will be provisioned with redundant links.
Figure 63 - Two-Tier Design Topology Overview
Feature/Config Element Notes
VSF & VSF MAD VSF and MAD would be configured when using devices which do not support
backplane stacking in the collapsed core/aggregation role. This would be
commonly seen in deployments using the 5400R switches.
Backplane Stacking When using 3810s or other devices which support backplane stacking in the
collapsed core/aggregation role, use backplane stacking to interconnect
devices.
Routing Simple layer three configurations are likely needed and will require routing to
reach the WAN/Metro-E edge.
Spanning-Tree Spanning-tree will be enabled on all devices. The root bridge will be the
collapsed core/aggregation switch.
MTU Ensure that you configure the interface MTU prior to assigning the interface to
a LAG. The MTU should be at least 20 bytes larger than the IP MTU.
ROUTING ARCHITECTURE
The following diagram shows the planned routing protocol configuration using OSPF within each building and BGP for
connectivity to other sites. Each site will use a private BGP ASN with two BGP speakers. Circuits to reach remote sites will be
distributed to each of the Metro-E edge switches. Redistribution will be performed on the Metro-E Edge devices. BGP will
redistribute a default route as well as summary routes for the remote sites into OSPF. BGP will be configured to announce
aggregate addresses for the OSPF prefixes. BGP communities will be applied to prefix advertisements such that a routing
policy can be configured if required. The baseline policy will be to have each site select a preferred and backup internet edge
to achieve some level of load balancing. Note that the internet edge, DMZ, and data center design is out of the scope of this
Campus VRD.
Figure 65 - Enterprise Routing Architecture Overview
The Metro-Ethernet edge switches will be iBGP peered to each other and will form eBGP peerings to other sites mirroring the
circuit topology. There is no IGP used within the Metro-E network, as such, the eBGP peerings are established using the
interface addresses. The iBGP peerings are established using the loopback addresses of the Metro-E switches. The BGP
features and configuration elements used in this design are highlighted in the table below.
Fast External Fallover This feature will drop BGP sessions to eBGP peers when the outgoing
interface used to reach the peer goes down.
Peer-Groups Peer groups are used for eBGP peers to ensure consistently
applied outbound route-policy via route maps and other commands which
should be applied identically to eBGP peers.
BGP Communities Communities will be sent to eBGP peers and will be used to adjust local AS
routing policy.
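A minimal ArubaOS-CX sketch of the Metro-E edge BGP features in the table above might look like the following. The ASNs, neighbor addresses, peer-group name, and route-map name are hypothetical, and the exact placement of address-family commands varies by software release.

```
router bgp 65001
    bgp router-id 10.1.1.5
    ! drop eBGP sessions immediately when the outgoing interface fails
    bgp fast-external-fallover
    ! iBGP peering to the other Metro-E edge switch via loopbacks
    neighbor 10.1.1.6 remote-as 65001
    neighbor 10.1.1.6 update-source loopback 0
    ! peer-group for consistent outbound policy to all eBGP peers
    neighbor METRO-PEERS peer-group
    ! eBGP peer established on the circuit interface address
    neighbor 203.0.113.2 remote-as 65002
    neighbor 203.0.113.2 peer-group METRO-PEERS
    address-family ipv4 unicast
        neighbor 10.1.1.6 activate
        neighbor METRO-PEERS activate
        neighbor METRO-PEERS route-map SITE-OUT out
        neighbor METRO-PEERS send-community
```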
In designing an optimal OSPF network, two key goals are to minimize the number of OSPF adjacencies per device and to
avoid having a large area (or areas) containing devices with widely varying performance capabilities. A high-performing OSPF
network design would have the fewest adjacencies possible, and all OSPF speakers would have similar performance
capabilities. If your network will have OSPF speakers of varying performance capabilities, multiple OSPF areas may be
required. This may also be a driver to use BGP to build connectivity between OSPF domains. In campus networks, where the
links between devices are often more reliable than WAN circuits, the same potential volume and frequency of OSPF events is
less likely. Thus, optimizing OSPF for a campus design can have somewhat less rigid requirements. In the spirit of 'simple is
best', it is still a leading practice to conform your campus OSPF design to the same principles and practices as an OSPF WAN
design. To that end, we recommend a campus OSPF speaker have fewer than 12 adjacencies. In our case study, the
Headquarters site is the largest facility, and the core devices have 11 OSPF adjacencies. The table below lists all of the OSPF
speakers for the HQ facility. Other sites using a three-tier model will have a similar table. Given the size of the HQ facility, two
aggregation device pairs are used. Smaller sites are likely to have fewer aggregation device pairs.
Notes
SWHQ-AGG2A Floors 8-14 Aggregation 3
NOTE: Large networks may treat the Campus as a 'leaf' attached to the Data Center 'spine'. In these types of designs, it
is likely that the WAN Edge block is also a 'leaf'. In this case study, the decision was made to show the WAN
Edge connected to the Core.
Feature/Config Element Notes
max-metric router-lsa on-startup This feature advertises the router's LSAs with the maximum metric at startup,
excluding the device from transit routing via OSPF until a configured time after
system boot.
passive-interface default This feature will stop OSPF from forming adjacencies on OSPF-enabled
interfaces unless the 'no passive-interface' command is used. This is
recommended to reduce the risk of forming unintended adjacencies.
trap-enable This feature will cause OSPF to send traps when various OSPF events such
as establishment/loss of a neighbor occur.
OSPF network types Point-to-point network types will be configured on all layer 3 point-to-point links
to optimize OSPF, as no DR/BDR election is required on point-to-point links.
Neighbor authentication OSPF authentication will be configured to ensure that OSPF adjacencies are
only formed with devices that can properly authenticate each other.
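The OSPF elements in the table above appear together in the lab configurations in Appendix A; consolidated, they form a sketch like this. The startup timer value and the LAG interface are illustrative.

```
router ospf 1
    router-id 10.1.1.1
    ! advertise max-metric until the device has converged after boot
    max-metric router-lsa on-startup 300
    ! no adjacencies unless explicitly enabled per interface
    passive-interface default
    ! send SNMP traps on OSPF events such as neighbor up/down
    trap-enable
    area 0.0.0.0
interface lag 20
    ip ospf 1 area 0.0.0.0
    no ip ospf passive
    ip ospf network point-to-point
    ip ospf authentication message-digest
    ip ospf authentication-key ciphertext <<removed>>
```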
DATACENTER CONNECTIVITY
The diagram below builds upon the Enterprise Routing Architecture to show additional details about connectivity to both ISPs
and provides a high-level overview of the paths to reach compute resources in both the HQ and Gold River Data Centers. The
diagram below depicts the relationship between various service blocks for this case study. Comprehensive Data Center
connectivity is beyond the scope of this document.
MULTICAST ROUTING
Dumars Industries has very few applications and services which use multicast. Each building/site is multicast enabled but there
is not a business need today to transport multicast across the Metro-E network. The security camera system uses both
multicast and unicast packets. The camera archiving system joins specific multicast groups for the cameras and records the
footage. Playback of the footage generates unicast streams to the viewing device. Focusing on the multicast design of each
building, PIM sparse mode will be used along with IGMP to enable multicast forwarding functions. Core switches will be
configured as candidate rendezvous points. The primary core switch will be configured as the ‘best’ candidate RP while the
secondary core switch will be the ‘backup’ candidate RP.
All layer 3 interfaces which face intra-building switches will have PIM enabled. All layer 3 interfaces supporting user/host
VLANS will have both PIM and IGMP enabled. IGMP snooping will be enabled to optimize multicast flows.
The mobility controllers will be configured to support multicast flows and will convert multicast flows to unicast flows for radio
optimization.
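The per-building multicast design described above can be sketched as follows, based on the core-switch PIM configuration in Appendix A. The candidate-RP lines are taken from the lab configuration; the interfaces shown for intra-building links and user/host VLANs are illustrative.

```
! core switch: PIM sparse mode with candidate RP/BSR
router pim
    enable
    rp-candidate source-ip-interface lag1
    rp-candidate group-prefix 224.0.0.0/4
! intra-building L3 link: PIM only
interface lag 20
    ip pim-sparse enable
! user/host VLAN SVI: both PIM and IGMP
interface vlan 20
    ip pim-sparse enable
    ip igmp enable
```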
Quality of Service
To provide the best end-to-end user experience, Quality of Service (QoS) should be configured. QoS configurations are often
unique to each network and are influenced by the applications and services deployed. In this VRD, we will present an end-to-
end QoS model for the network. A QoS design should apply markings to (or remark) packets as close to ingress as possible.
The access layer configuration will classify and mark packets via DSCP. The aggregation layer will trust the DSCP markings of
packets received from access layer switches (via layer-two interfaces) and will trust the DSCP markings of packets received
from core devices (via layer-three interfaces). The mechanism that defines how we prioritize the transmission of packets
is the 'schedule-profile'. The VRD is using the default 8 egress queue model which is common to both ArubaOS-Switch and
ArubaOS-CX devices. In this model, there will be 1 queue used to support "real-time" traffic (primarily VoIP) and 7 queues
used for other application traffic. Often there are differences in the capabilities of the switch platforms used, and QoS
configurations will have minor variations to accommodate these differences. The diagram below highlights the QoS model
used in the MFRA VRD. Note we will trust the DSCP markings at ingress from APs but will remark ingress traffic from other
devices.
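The queue mappings and scheduling for this model appear in the Appendix A configurations; wrapped into named profiles, a sketch looks like the following. The profile names are illustrative, and profile wrapper syntax should be verified for your platform and release.

```
! map local-priority 5 (VoIP) into the strict queue
qos queue-profile CAMPUS-QUEUES
    map queue 7 local-priority 5
    name queue 7 VOICE
qos schedule-profile CAMPUS-SCHED
    ! one strict-priority queue for real-time traffic
    strict queue 7
    ! the remaining seven queues share bandwidth equally
    wfq queue 0 weight 1
    wfq queue 1 weight 1
    wfq queue 2 weight 1
    wfq queue 3 weight 1
    wfq queue 4 weight 1
    wfq queue 5 weight 1
    wfq queue 6 weight 1
apply qos queue-profile CAMPUS-QUEUES schedule-profile CAMPUS-SCHED
```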
IP ADDRESSING
The IP addressing used in this design is crafted from the RFC 1918 address space. This design calls for providing wireless
roaming within each building. Given the distance between facilities, there is not a need to maintain connectivity when roaming
between facilities. The Gold River and Gold River North facilities do have RF overlap and provide seamless roaming between
buildings. The address plan for IPv4 was crafted to be as close to 'future proof' as possible and provides substantially more
address space per facility and function than required. Dumars Industries is exploring moving to IPv6 in the future, when the
need to have end-to-end IPv6 connectivity to cloud-based providers warrants the business decision to invest in moving to an
IPv4/IPv6 dual stack configuration.
The IP addressing is crafted to provide large address blocks so that users (internal and external) and infrastructure systems
and devices have clear boundaries, allowing for the creation of simple filtering via access-lists as well as route summarization
and ease of allocating address space to new facilities. The management address space is divided such that there are separate
address blocks for layer 3 connected devices (core and aggregation switches) and for layer 2 connected devices (access
layer switches and APs). The address blocks are also crafted so that if Dumars Industries migrates to a layer 3 access model,
the management IP addressing will require minimal, if any, changes. The table below provides a summary of the addressing
plan.
Headquarters
10.1.0.0/16 Management for layer 3 attached devices and for /30 links between devices
172.31.0.0/16 Guest Wireless Address Space
Gold River
10.3.0.0/16 Management for layer 3 attached devices and for /30 links between devices
10.4.0.0/16 Management for layer 2 attached devices. Note this address space is split into subnets to
account for 1 subnet per aggregation layer.
Squaw Valley
10.5.0.0/16 Management for layer 3 attached devices and for /30 links between devices
Mt. Rose
10.16.0.0/21 Management for layer 3 attached devices and for /30 links between devices
Kirkwood
VLANs
In crafting a VLAN plan that is applicable to the majority, if not all, sites, it is important to plan for both current and future needs.
This design calls for defining 12 VLANs per building/facility, with the exception of the training facility, where a wired-guest
VLAN is also defined. The HQ facility (and any facility with multiple aggregation blocks) has multiple address blocks assigned
to support these VLANs. The VLANs align with the address blocks allocated to each building. Some network engineers may
elect to reduce the number of VLANs or add other VLANs as dictated by business needs. In this VRD, the following VLAN IDs
are used in all facilities:
VLAN ID Description
1 Not used
10 Network Infrastructure Management for L2 attached devices (access switches and APs)
999 Default VLAN with no connectivity to network resources. Used for initial authentication purposes.
VLAN 999 will be the ‘default vlan’ for each wired switchport and will require users/devices to be profiled and/or authenticated
before they are assigned to either the corporate wired VLAN or a BYOD wired VLAN. This process will be implemented using
dynamic segmentation with ArubaOS-Switch and ClearPass.
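The dynamic segmentation behavior described above relies on ArubaOS-Switch user-based tunneling, which can be sketched roughly as follows. The controller addresses and port range are hypothetical, and ClearPass supplies the user role at authentication time; verify command availability for your switch software version.

```
! ArubaOS-Switch - illustrative user-based tunneling sketch
tunneled-node-server
   controller-ip 10.1.30.11
   backup-controller-ip 10.1.30.12
   mode role-based
! VLAN 999 as the unprivileged default VLAN for all wired switchports
vlan 999
   untagged 1/1-1/48
! 802.1X authentication against ClearPass (RADIUS)
aaa authentication port-access eap-radius
aaa port-access authenticator 1/1-1/48
aaa port-access authenticator active
```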
NOTE: Very granular policies can be crafted and implemented using Dynamic Segmentation. In those types of designs, it
is very likely that additional VLANs would be used, as there is a 1:1 mapping of user roles to VLANs.
The overall mobility design calls for having Mobility Masters at HQ and Gold River, and each site will have a Mobility Cluster
sized to support the planned site device population. HQ and Gold River will also provide failover services should the local site
experience an outage. The HQ and Gold River clusters will be sized to support remote site failure scenarios: cluster sizing will
allow for the failure of two remote sites, while failure of a third site would lead to an extended network outage. The network
doesn't provide layer 2 connectivity between sites. During a failure of all mobility controllers at a site, access points will
reboot and connect to the failover site's mobility cluster. This design limitation can be mitigated by providing either a redundant
set of controllers at each site or by adding additional Metro-E circuits. Dumars Industries is aware of this limitation and is
willing to accept this risk. For a remote site to experience such a failure, both Metro-E switches and/or circuits would need to
fail along with both of the aggregation switches and the entire local mobility controller cluster. The diagram below depicts the
steady-state and failed-state tunnels built between APs and controllers as well as between wireless devices and controllers.
Connectivity to the Failover Site Mobility Cluster will be established once the local APs reboot.
Guest internet access will be provided by configuring a guest SSID which will then be extended from the local mobility
controllers to a mobility controller cluster in the DMZ. All guest traffic is controlled by the DMZ mobility controllers. However,
the APs still terminate onto the local controllers. L2 GRE tunnels between the local controllers and the DMZ master controllers
bridge guest SSID traffic to the DMZ. From there, guest traffic can be handled by the DMZ mobility controllers and be isolated
from the internal network.
Tunnels are configured one direction at a time. The VRRP-VIPs will be used as the tunnel endpoints for both sets of controllers.
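On the local controllers, one direction of the L2 GRE tunnel toward the DMZ cluster is configured along these lines in ArubaOS. The tunnel ID, guest VLAN, and VRRP-VIP addresses below are placeholders; a mirror-image tunnel is configured on the DMZ controllers.

```
! ArubaOS (mobility controller) - illustrative L2 GRE sketch
interface tunnel 100
    description guest-to-dmz
    ! layer-2 GRE: bridge the guest VLAN into the tunnel
    tunnel mode gre 0
    tunnel vlan 900
    ! local VRRP-VIP as source, DMZ VRRP-VIP as destination
    tunnel source 10.1.30.10
    tunnel destination 10.250.1.10
    trusted
```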
Figure 71 – Guest Internet Access Diagram
NOTE: The Aruba Solutions Exchange (ase.arubanetworks.com) has a wizard to generate configurations for the
configuration described above. The solution is called 'L2 GRE to DMZ Controller with Captive Portal SSID'.
The table below documents the mobility cluster active and standby configuration for each site.
HQ Gold River
Kirkwood Headquarters
Each of the Mobility Controllers will be attached via a LAG with two interfaces to the supporting switch, providing a
forwarding capacity of 20G to each controller. The mobility services block switches will be configured as a VSX pair and will
have multiple 40G links to the core switches. Configuration of VSX will be nearly identical to that of a wired aggregation switch,
with the exception of the allowed VLANs on the dot1q trunks. For the Mt. Rose facility, the mobility controllers will be attached
to the collapsed core/aggregation switches, as there is not a mobility services block at this location.
A dot1q trunk will be configured from the Mobility Aggregation switches (configured as a VSX pair, VSF stack, or a
backplane-stacked device pair) to each mobility controller. The dot1q trunk will transport all VLANs including the device
management VLAN for the environment.
Network Services
All network services will be provided by redundant systems at the HQ DC as well as by systems at the Gold River DC. The
services provided by the network are:
• Active Directory
• DNS
• DHCP
• NTP
• ClearPass
• Airwave
• Syslog
To implement a highly-available network, some of these services are hosted on clusters and/or multiple machines.
Service/System Description
Airwave HQ AirWave
syslog-hq HQ Syslog
Figure 72 - Network Services Overview
Description
SNMP All devices will be configured to support SNMP operations including traps.
Logging All devices will be configured to send logs to two syslog hosts
NTP & Timezone All devices will be configured to obtain time from two NTP servers. The timezone will be
locally defined on each device.
Figure 73 - Device & Network Management
Network Instrumentation
All network device templates will have configuration elements to support management functions/services. The table below lists
the configuration elements.
Description
sFlow Aggregation and Core devices will be configured to export sFlow for layer 3 interfaces
which connect to upstream devices/peers. Two sFlow collectors will be configured.
Figure 74 - Network Instrumentation
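The sFlow export described in the table above can be sketched in ArubaOS-CX as follows. The collector addresses, agent IP, and interface are illustrative, and option syntax should be verified for your release.

```
! enable sFlow globally and point at two collectors
sflow
sflow collector 10.254.124.70
sflow collector 10.254.224.70
sflow agent-ip 10.1.1.1
! sample the upstream layer 3 interface
interface lag 20
    sflow
```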
Network Automation
All network device templates will enable access via the REST API to allow automation tools and systems to interact with the
devices. This will also allow NAE agents to interact with other network devices.
Description
Network Analytics Engine (NAE) Core and Aggregation devices using ArubaOS-CX will leverage the Network Analytics
Engine to provide agents to monitor, alert, and interact with the network in an
automated fashion.
Automated Orchestration Frameworks Automation frameworks utilizing Python and REST are fully capable of interacting with
Aruba devices, applications, and tools, such as NAE.
Other Services
All network device templates will have configuration elements for the following services:
Description
TACACS+ All devices will be configured to support AAA via TACACS+ for administrative access.
Authentication, Authorization, and Accounting will be enabled whenever possible.
Device Banners Message of the day and Exec banners will be configured on all devices.
Figure 75 – Other Services
ADAPTING THIS CASE STUDY
The case study presented in this document was conceived to be adaptable in both size and focus. With respect to size, the Mt.
Rose facility is a comparatively ‘small’ facility (compared to the others in this case study). The design concepts and
configurations for this facility could be used for smaller or larger sites. Some of the additional considerations to tailor this
example to specific environments would be to centralize the mobility controllers and/or distribute additional ClearPass
subscribers to additional buildings.
The Metro-E network in this case study could be a layer three MPLS VPN service in which all sites have full-mesh connectivity.
This change would alter the number/location of mobility controllers and the dynamic segmentation design as WAN transport for
dynamic segmentation is not recommended. The Metro-E network might not be part of the design; rather, all buildings might
be on the same campus. The primary difference this change presents is the location and placement of the mobility
controllers.
Incorporating VRFs
If you are planning on incorporating the use of VRFs (VRF-lite) in your design, the sample configurations can be adapted for
this design requirement by moving the layer three functions from Routed Only Ports (ROPs) to SVIs and by associating the
SVIs with the VRFs. One use-case for VRFs would be to establish a management VRF. The management VRF would provide
for the layer 3 segmentation of users and devices attached to the network and the network devices. Syslog, AAA, SNMP, NTP,
SSH, VSX keepalive, WebUI, and other services are VRF aware and can be selectively enabled/disabled on specific VRFs.
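Moving a routed-only port to a VRF-associated SVI, as described above, can be sketched in ArubaOS-CX as follows. The VRF name, VLAN ID, and address are illustrative for a management-VRF use case.

```
! create the management VRF
vrf MGMT
! an SVI replaces the routed-only port and attaches to the VRF
vlan 10
interface vlan 10
    vrf attach MGMT
    ip address 10.1.8.1/24
! enable management services only in the management VRF
ssh server vrf MGMT
https-server vrf MGMT
```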
Dynamic Segmentation
In this case study, all of the traffic save for IP Phones and AV equipment is being tunneled back to the mobility controllers. To
adapt this behavior to your environment, you may choose to perform local switching for some users/systems. This change will
require additional ClearPass configuration for tunneled and non-tunneled devices. This change will also require
implementing additional VLANs and associated IP addressing and services (DHCP).
ClearPass Configuration
Several network features/functions as configured in this case study, including Dynamic Segmentation and 802.1X user and
device authentication, leverage ClearPass. Aruba recommends reviewing the ClearPass design documents to craft
configurations for your environment. The configurations crafted for this case study are simple authentication policies which
should be modified to include proper security controls for your production network. Two configurations were created for this VRD:
• MAC Authentication to support Access Points and other "infrastructure" devices, including phones and physical
security devices.
• Dot1X user authentication for both wired and wireless connected users.
Three roles were created to allow for the assignment of each user (executives, engineering, and general employees) to the
appropriate VLAN.
Adaptation Summary
In summary, this case study was designed to be adaptable and to address broad design requirements. Following the
guidelines presented in the building blocks section of this document will allow for the scaling up or down as needed to best align
the design principles and practices to your specific environment. As always, please reach out on the Airheads community for
any questions.
DOCUMENT CONTRIBUTORS
The table below lists key contributors and reviewers of this document. The core TME team would like to acknowledge their
assistance in preparing, editing, and validating content and configurations in this document.
Name Role
Vincent Giles Wired TME
The team would also like to acknowledge the following Consulting Systems Engineers for their
assistance in reviewing the document:
Name Role
APPENDIX A – LAB DEVICE CONFIGURATIONS
SWHQ-CORE1 Configuration
!Version ArubaOS-CX TL.10.01.0002
hostname SWHQ-CORE1
banner motd !
**************************************************************
* *
* This is a private computer network/device. Unauthorized *
* access is prohibited. All attempts to login/connect *
* to this device/network are logged. Unauthorized users *
* must disconnect now. *
* *
**************************************************************
!
banner exec !
***********************************************************************
* *
* Welcome to SWHQ-CORE1 // 8400 // loopback0 10.1.1.1/32 *
* *
* Headquarters Core Switch 1
* *
***********************************************************************
! Syslog configuration
logging 10.254.120.10 udp severity warning
logging 10.254.224.10 udp severity warning
! enable SNMPv2c
snmp-server vrf default
snmp-server system-description SWHQ-CORE1
snmp-server system-location HQ MDF // Row 6 Rack 6
snmp-server system-contact [email protected]
snmp-server community s3cret!
snmp-server host 10.254.124.65 trap version v2c community s3cret!
snmp-server host 10.254.224.65 trap version v2c community s3cret!
map queue 0 local-priority 0
map queue 1 local-priority 1
map queue 2 local-priority 2
map queue 3 local-priority 3
map queue 4 local-priority 4
map queue 5 local-priority 6
map queue 6 local-priority 7
map queue 7 local-priority 5
name queue 7 VOICE
interface lag 1
description LAG to swhq-core-2
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.1/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable
interface lag 2
description to SWHQ-WAN1
no shutdown
ip mtu 2048
ip address 10.1.252.6/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip ospf network point-to-point
interface lag 3
description to SWHQ-WAN2
no shutdown
ip mtu 2048
ip address 10.1.252.22/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
interface lag 20
description SWHQ-AGG1A
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.25/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable
interface lag 21
description to SWHQ-AGG1B
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.37/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable
interface lag 22
description SWHQ-AGG2A
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.65/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable
interface lag 23
description to SWHQ-AGG2B
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.69/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable
interface lag 24
description to SWHQ-MAGG1
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.45/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable
interface lag 25
description to SWHQ-MAGG2
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.49/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable
interface lag 31
description SWHQ-DC1
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.86/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable
interface lag 32
description to SWHQ-DC2
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.89/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable
interface 1/1/1
description to SWHQ-AGG1B
no shutdown
mtu 2068
lag 21
interface 1/1/2
description to SWHQ-AGG1B
no shutdown
mtu 2068
lag 21
interface 1/1/3
description to SWHQ-AGG1A
no shutdown
mtu 2068
lag 20
interface 1/1/4
description to SWHQ-AGG1A
no shutdown
mtu 2068
lag 20
interface 1/1/5
description to SWHQ-WAN2
no shutdown
mtu 2068
lag 3
interface 1/1/11
description to SWHQ-MAGG1A
no shutdown
mtu 2068
lag 24
interface 1/1/12
description to SWHQ-MAGG1B
no shutdown
mtu 2068
lag 25
interface 1/1/13
description to SWHQ-MAGG1A
no shutdown
mtu 2068
lag 24
interface 1/1/14
description to SWHQ-MAGG1B
no shutdown
mtu 2068
lag 25
interface 1/1/16
description to SWHQ-WAN1
no shutdown
mtu 2068
lag 2
interface 1/1/49
description to SWHQ-CORE2
no shutdown
mtu 2068
lag 1
interface 1/1/50
description to SWHQ-CORE2
no shutdown
mtu 2068
lag 1
interface loopback 0
ip address 10.1.1.1/32
ip ospf 1 area 0.0.0.0
router pim
enable
rp-candidate source-ip-interface lag1
rp-candidate group-prefix 224.0.0.0/4
bsr-candidate source-ip-interface lag1
bsr-candidate priority 1
SWHQ-CORE2 Configuration
* to this device/network are logged. Unauthorized users *
* must disconnect now. *
* *
**************************************************************
!
banner exec !
***********************************************************************
* *
* Welcome to SWHQ-CORE2 // 8400 // loopback0 10.1.1.2/32 *
* *
* Headquarters Core Switch 2 *
* *
***********************************************************************
! Syslog configuration
logging 10.254.120.10 udp severity warning
logging 10.254.224.10 udp severity warning
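The logging lines above export UDP syslog at severity warning and above to two collectors. A collector-side or lab-test sender can be sketched with the Python standard library; the collector address below is taken from the config, and actually delivering messages of course requires network reachability:

```python
import logging
import logging.handlers

def make_syslog_logger(collector: str, port: int = 514) -> logging.Logger:
    # UDP syslog sender mirroring the switch-side severity filter
    logger = logging.getLogger("swhq-test")
    handler = logging.handlers.SysLogHandler(address=(collector, port))
    handler.setLevel(logging.WARNING)  # drop info/debug, as the switches do
    logger.addHandler(handler)
    return logger

logger = make_syslog_logger("10.254.120.10")
```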
! fallback local account if TACACS is not reachable/functioning
user admin group administrators password <<removed>>
! enable SNMPv2c
snmp-server vrf default
snmp-server system-description SWHQ-CORE2
snmp-server system-location HQ MDF // Row 6 Rack 6
snmp-server system-contact [email protected]
snmp-server community s3cret!
snmp-server host 10.254.124.65 trap version v2c community s3cret!
snmp-server host 10.254.224.65 trap version v2c community s3cret!
! define the OSPF router ID to match
! the loopback address
router-id 10.1.1.2
! set the max-metric on start-up to exclude the device
! from routing via OSPF until <check time> seconds after
! system boot
max-metric router-lsa on-startup
! Use passive interfaces by default and only no-passive on
! interfaces which require OSPF adjacencies to be built
passive-interface default
! enable SNMP traps for OSPF events to be sent to trap
! receivers
trap-enable
! define the OSPF area ID
area 0.0.0.0
wfq queue 0 weight 1
wfq queue 1 weight 1
wfq queue 2 weight 1
wfq queue 3 weight 1
wfq queue 4 weight 1
wfq queue 5 weight 1
wfq queue 6 weight 1
strict queue 7
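The scheduling profile above gives queues 0-6 equal WFQ weights while queue 7 is strict priority. A rough sketch of the resulting bandwidth split (link speed and strict-queue load are hypothetical numbers, not from the VRA):

```python
def wfq_shares(weights, strict_demand_bps, link_bps):
    # strict-priority traffic is drained first; the weighted queues
    # split whatever capacity remains in proportion to their weights
    remaining = max(link_bps - strict_demand_bps, 0)
    total = sum(weights)
    return [remaining * w / total for w in weights]

# 10G link, 1G of strict queue-7 traffic: queues 0-6 each get 9G/7
shares = wfq_shares([1] * 7, strict_demand_bps=1e9, link_bps=10e9)
```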
interface lag 1
description LAG to SWHQ-CORE1
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.2/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable
interface lag 2
description to SWHQ-WAN2
no shutdown
ip mtu 2048
ip address 10.1.252.18/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip ospf network point-to-point
interface lag 3
description to SWHQ-WAN1
no shutdown
ip mtu 2048
ip address 10.1.252.10/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
interface lag 20
description SWHQ-AGG1A
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.29/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable
interface lag 21
description to SWHQ-AGG1B
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.41/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable
interface lag 22
description SWHQ-AGG2A
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.73/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable
interface lag 23
description to SWHQ-AGG2B
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.77/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable
interface lag 24
description to SWHQ-MAGG1A
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.53/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable
interface lag 25
description to SWHQ-MAGG1B
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.57/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable
interface lag 31
description SWHQ-DC1
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.93/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable
interface lag 32
description to SWHQ-DC2
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.97/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable
interface 1/1/1
description to SWHQ-AGG1B
no shutdown
mtu 2068
lag 21
interface 1/1/2
description to SWHQ-AGG1B
no shutdown
mtu 2068
lag 21
interface 1/1/3
description to SWHQ-AGG1A
no shutdown
mtu 2068
lag 20
interface 1/1/4
description to SWHQ-AGG1A
no shutdown
mtu 2068
lag 20
interface 1/1/5
description to SWHQ-WAN1
no shutdown
mtu 2068
lag 3
interface 1/1/11
description to SWHQ-MAGG1A
no shutdown
mtu 2068
lag 24
interface 1/1/12
description to SWHQ-MAGG1B
no shutdown
mtu 2068
lag 25
interface 1/1/13
description to SWHQ-MAGG1A
no shutdown
mtu 2068
lag 24
interface 1/1/14
description to SWHQ-MAGG1B
no shutdown
mtu 2068
lag 25
interface 1/1/16
description to SWHQ-WAN2
no shutdown
mtu 2068
lag 2
interface 1/1/49
description to SWHQ-CORE1
no shutdown
mtu 2068
lag 1
interface 1/1/50
description to SWHQ-CORE1
no shutdown
mtu 2068
lag 1
interface loopback 0
ip address 10.1.1.2/32
ip ospf 1 area 0.0.0.0
router pim
enable
rp-candidate source-ip-interface lag1
rp-candidate group-prefix 224.0.0.0/4
bsr-candidate source-ip-interface lag1
bsr-candidate priority 2
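SWHQ-CORE1 is configured as a BSR candidate with priority 1 and SWHQ-CORE2 with priority 2. Per RFC 5059 the candidate with the highest priority wins the BSR election, with the highest BSR address as tie-breaker, so CORE2 is preferred here. A minimal election sketch (the lag 1 addresses are assumed from the /30 point-to-point configs):

```python
import ipaddress

def elect_bsr(candidates):
    # RFC 5059: highest priority wins; ties broken by highest BSR address
    return max(candidates, key=lambda c: (c["priority"],
                                          ipaddress.ip_address(c["addr"])))

winner = elect_bsr([
    {"name": "SWHQ-CORE1", "priority": 1, "addr": "10.1.252.1"},
    {"name": "SWHQ-CORE2", "priority": 2, "addr": "10.1.252.2"},
])
```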
SWHQ-AGG1A Configuration
!Version ArubaOS-CX TL.10.01.0002
hostname SWHQ-AGG1A
banner motd !
**************************************************************
* *
* This is a private computer network/device. Unauthorized *
* access is prohibited. All attempts to login/connect *
* to this device/network are logged. Unauthorized users *
* must disconnect now. *
* *
**************************************************************
!
banner exec !
***********************************************************************
* *
* Welcome to SWHQ-AGG1A // 8320 // loopback0 10.1.2.1/32 *
* *
* Headquarters Bldg Aggregation Block Switch 1 - VSX Pair *
* Pair supports Floors 1-7. *
* *
***********************************************************************
! Syslog configuration
logging 10.254.120.10 udp severity warning
logging 10.254.224.10 udp severity warning
! Sample sFlow configuration exporting to two collectors
sflow
sflow collector 10.254.124.32
sflow collector 10.254.224.32
! define the reporting agent IP to match loopback 0
! interface address
sflow agent-ip 10.1.2.1
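The sFlow stanza above exports samples to two collectors with the agent IP set to loopback 0. On the collector side, sFlow v5 datagrams arrive over UDP port 6343; a minimal receive-socket sketch (a real collector would decode the flow/counter samples rather than just receive bytes):

```python
import socket

def open_sflow_socket(host: str = "0.0.0.0", port: int = 6343) -> socket.socket:
    # sFlow is a simple UDP export protocol; bind and receive datagrams
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    return sock
```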
!
!
!
! enable SNMPv2c
snmp-server vrf default
snmp-server system-description HQSW-AGG1A
snmp-server system-location HQ MDF // Row 6 Rack 8
snmp-server system-contact [email protected]
snmp-server community s3cret!
snmp-server host 10.254.124.65 trap version v2c community s3cret!
snmp-server host 10.254.224.65 trap version v2c community s3cret!
vlan 30
name PHYSEC Devices
vsx-sync
vlan 40
name PHONES-AV Devices
vsx-sync
vlan 999
name NO_ACCESS_VLAN
vsx-sync
vlan 1281
name EXEC_USERS
vsx-sync
vlan 1282
name ENGINEERING_SUPPORT_USERS
vsx-sync
vlan 1283
name DEFAULT_USERS
vsx-sync
dwrr queue 2 weight 1
dwrr queue 3 weight 1
dwrr queue 4 weight 1
dwrr queue 5 weight 1
dwrr queue 6 weight 1
strict queue 7
no ip ospf passive
! define the OSPF network type as p2p to optimize
! as no DR/BDR is needed
ip ospf network point-to-point
! enable OSPF authentication via MD5 and define an
! authentication key
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
! enable PIM sparse mode for multicast forwarding
ip pim-sparse enable
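The message-digest commands above enable RFC 2328 (Appendix D) cryptographic authentication: the 16-byte key is appended to the OSPF packet (with the auth data field zeroed) and MD5 is computed over the result. A sketch of the digest calculation:

```python
import hashlib

def ospf_md5_digest(packet: bytes, key: bytes) -> bytes:
    # keys shorter than 16 bytes are zero-padded; longer ones truncated
    padded = key.ljust(16, b"\x00")[:16]
    return hashlib.md5(packet + padded).digest()
```

Both adjacency endpoints must hold the same key, which is why the same ciphertext key appears on each side of every point-to-point link in this design.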
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable
vsx-sync vlans
description TO swhq-acc-a1-1
! enable the interface to forward frames
no shutdown
! disable routing and make this an L2 lag
no routing
! leave the default VLAN as 1 which is NOT used for
! production traffic
vlan trunk native 1
! allow appropriate VLANS to traverse the dot1q link
vlan trunk allowed 10,20,30,40,138,997-999,1281-1283
! VSX requires LACP active mode
lacp mode active
! Loop protection is enabled
loop-protect vlan 10,20,30,40,100-104,138,997-999,1281-1283
! physical interface configuration
! note the MTU is 20 bytes larger than the IP MTU defined
! on L3 lag interfaces
interface 1/1/1
description to SWHQ-CORE1
no shutdown
mtu 2068
lag 2
interface 1/1/2
description to SWHQ-CORE1
no shutdown
mtu 2068
lag 2
interface 1/1/3
description to SWHQ-CORE2
no shutdown
mtu 2068
lag 3
interface 1/1/4
description to SWHQ-CORE2
no shutdown
mtu 2068
lag 3
interface 1/1/17
description to swhq-acc-a1-1
no shutdown
lag 41
interface 1/1/18
description to swhq-acc-a1-2
no shutdown
lag 42
interface 1/1/19
description to swhq-acc-a1-3
no shutdown
lag 43
interface 1/1/47
description VSX keepalive
no shutdown
lag 11
interface 1/1/48
description VSX keepalive
no shutdown
lag 11
interface 1/1/49
description to SWHQ-AGG1B
no shutdown
lag 1
interface 1/1/50
description to SWHQ-AGG1B
no shutdown
lag 1
interface 1/1/51
description to SWHQ-AGG1B for ISL link
no shutdown
lag 10
interface 1/1/52
description to SWHQ-AGG1B for ISL link
no shutdown
lag 10
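As noted in the comments above, physical ports carry `mtu 2068` while the L3 LAG interfaces use `ip mtu 2048`, a 20-byte margin for L2 framing. The split below (14-byte Ethernet header, 4-byte 802.1Q tag, 2 bytes of headroom) is an assumption; the VRA simply provisions ip mtu + 20:

```python
ETH_HEADER = 14  # dst MAC + src MAC + EtherType
DOT1Q_TAG = 4    # single 802.1Q VLAN tag

def min_port_mtu(ip_mtu: int, headroom: int = 2) -> int:
    # headroom is a hypothetical allocation to reach the VRA's +20 bytes
    return ip_mtu + ETH_HEADER + DOT1Q_TAG + headroom
```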
! note that no L3 is configured for VLAN 1
interface vlan1
! define the VLANS for the design. Note that for the
! production design there are multiple aggregation
! switches supporting some buildings/floors
! vlan numbers are consistent but IP addresses
! change to support the address plan
interface vlan20
vsx-sync active-gateways
description IOT Devices
ip address 172.16.31.253/19
active-gateway ip 172.16.31.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan30
vsx-sync active-gateways
description Physical Security Devices
ip address 172.16.63.253/19
active-gateway ip 172.16.63.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan40
vsx-sync active-gateways
description Phones-AV equipment
ip address 172.16.95.253/19
active-gateway ip 172.16.95.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan138
vsx-sync active-gateways
attach VRF MOBILITY
description Corp BYOD
ip address 10.32.223.253/19
active-gateway ip 10.32.223.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan999
interface vlan1281
vsx-sync active-gateways
description EXEC Corp Users
ip address 10.32.31.253/19
active-gateway ip 10.32.31.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan1282
description Engineering & Support Users
ip address 10.32.95.253/19
active-gateway ip 10.32.95.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan1283
description All other Users
ip address 10.32.159.253/19
active-gateway ip 10.32.159.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
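Each SVI above pairs a per-switch interface address (.252 or .253) with a shared VSX active-gateway VIP (.254) that hosts use as their default gateway. A quick consistency check that the VIP falls inside the SVI's subnet, using addresses from the blocks above:

```python
import ipaddress

def gateway_in_subnet(svi_cidr: str, gateway_ip: str) -> bool:
    # the active-gateway VIP must live in the same subnet as the SVI
    return ipaddress.ip_address(gateway_ip) in ipaddress.ip_interface(svi_cidr).network
```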
! enable VSX
vsx
! define the LAG used for the ISL link
inter-switch-link lag 10
! define the VSX role for this device
role primary
! enable VSX keepalive
keepalive peer 192.168.1.2 source 192.168.1.1 vrf VSX_KEEPALIVE
! enable PIM
router pim
enable
! enable HTTPS for the WebUI and enable RESTAPI access
https-server rest access-mode read-write
https-server vrf default
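With `https-server rest access-mode read-write` enabled as above, the switch exposes a REST API. This is a hedged sketch only: the `/rest/v1` paths below match early ArubaOS-CX 10.x trains (an assumption; the path version differs across firmware releases), and the base address is this switch's loopback from the banner above, with reachability assumed:

```python
BASE = "https://10.1.2.1"  # SWHQ-AGG1A loopback 0 (hypothetical reachability)

def login_url(base: str) -> str:
    return f"{base}/rest/v1/login"

def system_url(base: str) -> str:
    return f"{base}/rest/v1/system"

# Usage against a live switch (needs a session cookie from login_url):
# urllib.request.urlopen(system_url(BASE)) -> JSON system facts
```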
SWHQ-AGG1B Configuration
!Version ArubaOS-CX TL.10.01.0002
hostname SWHQ-AGG1B
banner motd !
**************************************************************
* *
* This is a private computer network/device. Unauthorized *
* access is prohibited. All attempts to login/connect *
* to this device/network are logged. Unauthorized users *
* must disconnect now. *
* *
**************************************************************
!
banner exec !
***********************************************************************
* *
* Welcome to SWHQ-AGG1B // 8320 // loopback0 10.1.2.2/32 *
* *
* Headquarters Bldg Aggregation Block Switch 1 - VSX Pair *
* Pair supports Floors 1-7. *
* *
***********************************************************************
! Syslog configuration
logging 10.254.120.10 udp severity warning
logging 10.254.224.10 udp severity warning
vrf VSX_KEEPALIVE
! enable SNMPv2c
snmp-server vrf default
snmp-server system-description HQSW-AGG1B
snmp-server system-location HQ MDF // Row 6 Rack 8
snmp-server system-contact [email protected]
snmp-server community s3cret!
snmp-server host 10.254.124.65 trap version v2c community s3cret!
snmp-server host 10.254.224.65 trap version v2c community s3cret!
vlan 30
name PHYSEC Devices
vsx-sync
vlan 40
name PHONES-AV Devices
vsx-sync
vlan 999
name NO_ACCESS_VLAN
vsx-sync
vlan 1281
name EXEC_USERS
vsx-sync
vlan 1282
name ENGINEERING_SUPPORT_USERS
vsx-sync
vlan 1283
name DEFAULT_USERS
vsx-sync
dwrr queue 1 weight 1
dwrr queue 2 weight 1
dwrr queue 3 weight 1
dwrr queue 4 weight 1
dwrr queue 5 weight 1
dwrr queue 6 weight 1
strict queue 7
! disable passive to form OSPF adjacencies
no ip ospf passive
! define the OSPF network type as p2p to optimize
! as no DR/BDR is needed
ip ospf network point-to-point
! enable OSPF authentication via MD5 and define an
! authentication key
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
! enable PIM sparse mode for multicast forwarding
ip pim-sparse enable
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable
lacp mode active
! Loop protection is enabled
loop-protect vlan 10,20,30,40,100-104,138,997-999,1281-1283
interface 1/1/1
description to SWHQ-CORE1
no shutdown
mtu 2068
lag 2
interface 1/1/2
description to SWHQ-CORE1
no shutdown
mtu 2068
lag 2
interface 1/1/3
description to SWHQ-CORE2
no shutdown
mtu 2068
lag 3
interface 1/1/4
description to SWHQ-CORE2
no shutdown
mtu 2068
lag 3
interface 1/1/17
description to swhq-acc-a1-1
no shutdown
lag 41
interface 1/1/18
description to swhq-acc-a1-2
no shutdown
lag 42
interface 1/1/19
description to swhq-acc-a1-3
no shutdown
lag 43
interface 1/1/47
description VSX keepalive
no shutdown
lag 11
interface 1/1/48
description VSX keepalive
no shutdown
lag 11
interface 1/1/49
description to SWHQ-AGG1A
no shutdown
lag 1
interface 1/1/50
description to SWHQ-AGG1A
no shutdown
lag 1
interface 1/1/51
description to SWHQ-AGG1A for ISL link
no shutdown
lag 10
interface 1/1/52
description to SWHQ-AGG1A for ISL link
no shutdown
lag 10
! define the VLANS for the design. Note that for the
! production design there are multiple aggregation
! switches supporting some buildings/floors
! vlan numbers are consistent but IP addresses
! change to support the address plan
interface vlan10
! VSX sync the active gateway config to secondary
! VSX peer device
vsx-sync active-gateways
description l2 ACCESS switch & AP & l2 device mgmt
ip address 10.2.127.252/17
! the same virtual MAC is reused in this design
! for all of the VSX interfaces
active-gateway ip 10.2.127.254 mac 00:00:00:10:11:12
! forward DHCP and other broadcast packets
ip helper-address 10.254.34.64
! configure the interface to be included in
! the OSPF process 1 and area 0.0.0.0
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
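The `ip helper-address` entries above make the SVI relay each client broadcast (DHCP DISCOVER and similar) as a unicast copy to every configured helper, stamping giaddr with its own interface address so the DHCP server can select the right scope. A simplified sketch with packets as plain dicts (field names illustrative, not a wire format):

```python
def relay_dhcp(packet: dict, svi_address: str, helpers: list) -> list:
    # one unicast copy per helper, giaddr set to the relaying SVI
    return [dict(packet, giaddr=svi_address, dst=helper) for helper in helpers]

copies = relay_dhcp({"op": "DISCOVER"}, "10.2.127.252", ["10.254.34.64"])
```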
interface vlan20
vsx-sync active-gateways
description IOT Devices
ip address 172.16.31.252/19
active-gateway ip 172.16.31.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan30
vsx-sync active-gateways
description Physical Security Devices
ip address 172.16.63.252/19
active-gateway ip 172.16.63.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan40
vsx-sync active-gateways
description Phones-AV equipment
ip address 172.16.95.252/19
active-gateway ip 172.16.95.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan1281
vsx-sync active-gateways
description EXEC Corp Users
ip address 10.32.31.252/19
active-gateway ip 10.32.31.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan1282
description Engineering & Support Users
ip address 10.32.95.252/19
active-gateway ip 10.32.95.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan1283
description All other Users
ip address 10.32.159.252/19
active-gateway ip 10.32.159.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
! enable VSX
vsx
! define the LAG used for the ISL link
inter-switch-link lag 10
! define the VSX role for this device
role secondary
! enable VSX keepalive
keepalive peer 192.168.1.1 source 192.168.1.2 vrf VSX_KEEPALIVE
! enable PIM
router pim
enable
! enable HTTPS for the WebUI and enable RESTAPI access
https-server rest access-mode read-write
https-server vrf default
SWHQ-MAGG1A Configuration
!Version ArubaOS-CX TL.10.01.0002
hostname SWHQ-MAGG1A
banner motd !
**************************************************************
* *
* This is a private computer network/device. Unauthorized *
* access is prohibited. All attempts to login/connect *
* to this device/network are logged. Unauthorized users *
* must disconnect now. *
* *
**************************************************************
!
banner exec !
***********************************************************************
* *
* Welcome to SWHQ-MAGG1A // 8320 // loopback0 10.1.3.1/32 *
* *
* Headquarters Bldg Mobility Agg Block Switch 1 - VSX Pair *
* *
***********************************************************************
! Syslog configuration
logging 10.254.120.10 udp severity warning
logging 10.254.224.10 udp severity warning
! Sample sFlow configuration exporting to two collectors
sflow
sflow collector 10.254.124.32
sflow collector 10.254.224.32
! define the reporting agent IP to match loopback 0
! interface address
sflow agent-ip 10.1.3.1
!
!
!
! enable SNMPv2c
snmp-server vrf default
snmp-server system-description HQSW-MAGG1A
snmp-server system-location HQ MDF // Row 6 Rack 9
snmp-server system-contact [email protected]
snmp-server community s3cret!
snmp-server host 10.254.124.65 trap version v2c community s3cret!
snmp-server host 10.254.224.65 trap version v2c community s3cret!
vlan 30
name PHYSEC Devices
vsx-sync
vlan 40
name PHONES-AV Devices
vsx-sync
vlan 999
name NO_ACCESS_VLAN
vsx-sync
vlan 1281
name EXEC_USERS
vsx-sync
vlan 1282
name ENGINEERING_SUPPORT_USERS
vsx-sync
vlan 1283
name DEFAULT_USERS
vsx-sync
dwrr queue 2 weight 1
dwrr queue 3 weight 1
dwrr queue 4 weight 1
dwrr queue 5 weight 1
dwrr queue 6 weight 1
strict queue 7
no ip ospf passive
! define the OSPF network type as p2p to optimize
! as no DR/BDR is needed
ip ospf network point-to-point
! enable OSPF authentication via MD5 and define an
! authentication key
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
! enable PIM sparse mode for multicast forwarding
ip pim-sparse enable
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable
! Loop protection is enabled
loop-protect vlan 10,20,30,40,100-104,138,997-999,1281-1283
interface 1/1/1
description to SWHQ-CORE1
no shutdown
mtu 2068
lag 2
interface 1/1/2
description to SWHQ-CORE1
no shutdown
mtu 2068
lag 2
interface 1/1/3
description to SWHQ-CORE2
no shutdown
mtu 2068
lag 3
interface 1/1/4
description to SWHQ-CORE2
no shutdown
mtu 2068
lag 3
interface 1/1/17
description to HQ-MC-1
no shutdown
lag 51
interface 1/1/18
description to HQ-MC-2
no shutdown
lag 52
interface 1/1/19
description to HQ-MC-3
no shutdown
lag 53
interface 1/1/47
description VSX keepalive
no shutdown
lag 11
interface 1/1/48
description VSX keepalive
no shutdown
lag 11
interface 1/1/49
description to SWHQ-MAGG1B
no shutdown
lag 1
interface 1/1/50
description to SWHQ-MAGG1B
no shutdown
lag 1
interface 1/1/51
description to SWHQ-MAGG1B for ISL link
no shutdown
lag 10
interface 1/1/52
description to SWHQ-MAGG1B for ISL link
no shutdown
lag 10
! define the VLANS for the design. Note that for the
! production design there are multiple aggregation
! switches supporting some buildings/floors
! vlan numbers are consistent but IP addresses
! change to support the address plan
interface vlan20
vsx-sync active-gateways
description IOT Devices
ip address 172.16.15.253/19
active-gateway ip 172.16.15.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan30
vsx-sync active-gateways
description Physical Security Devices
ip address 172.16.31.253/19
active-gateway ip 172.16.31.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan40
vsx-sync active-gateways
description Phones-AV equipment
ip address 172.16.47.253/19
active-gateway ip 172.16.47.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan1281
vsx-sync active-gateways
description EXEC Corp Users
ip address 10.32.31.253/19
active-gateway ip 10.32.31.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan1282
description Engineering & Support Users
ip address 10.32.63.253/19
active-gateway ip 10.32.63.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan1283
description All other Users
ip address 10.32.95.253/19
active-gateway ip 10.32.95.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
! enable VSX
vsx
! define the LAG used for the ISL link
inter-switch-link lag 10
! define the VSX role for this device
role primary
! enable VSX keepalive
keepalive peer 192.168.1.2 source 192.168.1.1 vrf VSX_KEEPALIVE
! define the domain name and name servers
! used by this device
ip dns domain-name dumarsinc.com
ip dns server-address 10.254.10.10
ip dns server-address 10.254.130.10
! enable PIM
router pim
enable
! enable HTTPS for the WebUI and enable RESTAPI access
https-server rest access-mode read-write
https-server vrf default
SWHQ-MAGG1B Configuration
!Version ArubaOS-CX TL.10.01.0002
hostname SWHQ-MAGG1B
banner motd !
**************************************************************
* *
* This is a private computer network/device. Unauthorized *
* access is prohibited. All attempts to login/connect *
* to this device/network are logged. Unauthorized users *
* must disconnect now. *
* *
**************************************************************
!
banner exec !
***********************************************************************
* *
* Welcome to SWHQ-MAGG1B // 8320 // loopback0 10.1.3.2/32 *
* *
* Headquarters Bldg Mobility Agg Block Switch 1 - VSX Pair *
* *
***********************************************************************
! Syslog configuration
logging 10.254.120.10 udp severity warning
logging 10.254.224.10 udp severity warning
! Sample sFlow configuration exporting to two collectors
sflow
sflow collector 10.254.124.32
sflow collector 10.254.224.32
! define the reporting agent IP to match loopback 0
! interface address
sflow agent-ip 10.1.3.2
!
!
!
! enable SNMPv2c
snmp-server vrf default
snmp-server system-description HQSW-MAGG1B
snmp-server system-location HQ MDF // Row 6 Rack 9
snmp-server system-contact [email protected]
snmp-server community s3cret!
snmp-server host 10.254.124.65 trap version v2c community s3cret!
snmp-server host 10.254.224.65 trap version v2c community s3cret!
vlan 30
name PHYSEC Devices
vsx-sync
vlan 40
name PHONES-AV Devices
vsx-sync
vlan 999
name NO_ACCESS_VLAN
vsx-sync
vlan 1281
name EXEC_USERS
vsx-sync
vlan 1282
name ENGINEERING_SUPPORT_USERS
vsx-sync
vlan 1283
name DEFAULT_USERS
vsx-sync
dwrr queue 3 weight 1
dwrr queue 4 weight 1
dwrr queue 5 weight 1
dwrr queue 6 weight 1
strict queue 7
ip ospf network point-to-point
! enable OSPF authentication via MD5 and define an
! authentication key
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
! enable PIM sparse mode for multicast forwarding
ip pim-sparse enable
ip pim-sparse enable
vsx-sync vlans
description TO HQ-MC-2
no shutdown
no routing
vlan trunk native 1
vlan trunk allowed 10,20,30,40,138,997-999,1281-1283
lacp mode active
loop-protect vlan 10,20,30,40,100-104,138,997-999,1281-1283
interface 1/1/1
description to SWHQ-CORE1
no shutdown
mtu 2068
lag 2
interface 1/1/2
description to SWHQ-CORE1
no shutdown
mtu 2068
lag 2
interface 1/1/3
description to SWHQ-CORE2
no shutdown
mtu 2068
lag 3
interface 1/1/4
description to SWHQ-CORE2
no shutdown
mtu 2068
lag 3
interface 1/1/17
description to HQ-MC-1
no shutdown
lag 51
interface 1/1/18
description to HQ-MC-2
no shutdown
lag 52
interface 1/1/19
description to HQ-MC-3
no shutdown
lag 53
interface 1/1/47
description VSX keepalive
no shutdown
lag 11
interface 1/1/48
description VSX keepalive
no shutdown
lag 11
interface 1/1/49
description to SWHQ-MAGG1A
no shutdown
lag 1
interface 1/1/50
description to SWHQ-MAGG1A
no shutdown
lag 1
interface 1/1/51
description to SWHQ-MAGG1A for ISL link
no shutdown
lag 10
interface 1/1/52
description to SWHQ-MAGG1A for ISL link
no shutdown
lag 10
! define the VLANS for the design. Note that for the
! production design there are multiple aggregation
! switches supporting some buildings/floors
! vlan numbers are consistent but IP addresses
! change to support the address plan
interface vlan20
vsx-sync active-gateways
description IOT Devices
ip address 172.16.15.252/19
active-gateway ip 172.16.15.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan30
vsx-sync active-gateways
description Physical Security Devices
ip address 172.16.31.252/19
active-gateway ip 172.16.31.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan40
vsx-sync active-gateways
description Phones-AV equipment
ip address 172.16.47.252/19
active-gateway ip 172.16.47.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan138
vsx-sync active-gateways
attach VRF MOBILITY
description Corp BYOD
ip address 10.32.223.252/19
active-gateway ip 10.32.223.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan1281
vsx-sync active-gateways
description EXEC Corp Users
ip address 10.32.31.252/19
active-gateway ip 10.32.31.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan1282
description Engineering & Support Users
ip address 10.32.63.252/19
active-gateway ip 10.32.63.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan1283
description All other Users
ip address 10.32.95.252/19
active-gateway ip 10.32.95.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
! enable VSX
vsx
! define the LAG used for the ISL link
inter-switch-link lag 10
! define the VSX role for this device
role secondary
! enable VSX keepalive
keepalive peer 192.168.1.1 source 192.168.1.2 vrf VSX_KEEPALIVE
! enable PIM
router pim
enable
! enable HTTPS for the WebUI and enable RESTAPI access
https-server rest access-mode read-write
https-server vrf default
SWHQ-WAN1 Configuration
!Version ArubaOS-CX TL.10.01.0002
hostname SWHQ-WAN1
banner motd !
**************************************************************
* *
* This is a private computer network/device. Unauthorized *
* access is prohibited. All attempts to login/connect *
* to this device/network are logged. Unauthorized users *
* must disconnect now. *
* *
**************************************************************
!
banner exec !
***********************************************************************
* *
* Welcome to SWHQ-WAN1 // 8320 // Loopback 0 10.224.224.1 *
* *
* Headquarters WAN Edge to Metro E Network *
* *
***********************************************************************
! Syslog configuration
logging 10.254.120.10 udp severity warning
logging 10.254.224.10 udp severity warning
sflow collector 10.254.124.32
sflow collector 10.254.224.32
! define the reporting agent IP to match loopback 0
! interface address
sflow agent-ip 10.224.224.1
!
!
!
! enable SNMPv2c
snmp-server vrf default
snmp-server system-description SWHQ-WAN1
snmp-server system-location HQ MDF // Row 6 Rack 8
snmp-server system-contact [email protected]
snmp-server community s3cret!
snmp-server host 10.254.124.65 trap version v2c community s3cret!
snmp-server host 10.254.224.65 trap version v2c community s3cret!
! enable SSH from the default VRF
ssh server vrf default
match ip address prefix-list DEFAULT_ONLY
set community 1:1
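The route-map fragment above matches the default route and tags it with standard community 1:1. Per RFC 1997 a standard community is a 32-bit value whose high 16 bits carry the AS number and whose low 16 bits carry an operator-chosen value; a small encoding sketch:

```python
def encode_community(asn: int, value: int) -> int:
    # RFC 1997 standard community: ASN in the high 16 bits
    return (asn << 16) | value

def format_community(word: int) -> str:
    # render the 32-bit word back in the familiar asn:value notation
    return f"{word >> 16}:{word & 0xFFFF}"
```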
max-metric router-lsa on-startup
! Use passive interfaces by default and only no-passive on
! interfaces which require OSPF adjacencies to be built
passive-interface default
! enable SNMP traps for OSPF events to be sent to trap
! receivers
trap-enable
! redistribute BGP and static routes into OSPF
redistribute bgp route-map BGP->OSPF
redistribute static route-map STATIC->OSPF
! define the OSPF area ID
area 0.0.0.0
dwrr queue 3 weight 1
dwrr queue 4 weight 1
dwrr queue 5 weight 1
dwrr queue 6 weight 1
strict queue 7
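The queue profile above combines deficit weighted round robin (DWRR) across the lower queues with strict priority for queue 7 (voice). A minimal sketch of that scheduling behavior, with the weights and strict queue number taken from the configuration; the scheduler itself is an illustrative model, not the switch's hardware implementation:

```python
from collections import deque

def schedule(queues, weights, strict=7):
    """One scheduling pass: drain the strict-priority queue first,
    then serve the remaining queues round-robin by weight."""
    out = []
    # Strict queue is always emptied before any DWRR queue is served
    while queues[strict]:
        out.append(queues[strict].popleft())
    # One DWRR round: each queue may send up to its weight in packets
    for q, w in weights.items():
        for _ in range(w):
            if queues[q]:
                out.append(queues[q].popleft())
    return out

# Queues 3-6 weighted 1 as in the configuration; queue 7 is strict priority
queues = {q: deque() for q in range(8)}
weights = {3: 1, 4: 1, 5: 1, 6: 1}
queues[3].extend(["bulk1", "bulk2"])
queues[7].extend(["voice1", "voice2"])

print(schedule(queues, weights))  # → ['voice1', 'voice2', 'bulk1']
```

Voice always leaves first; the bulk queues share the remaining bandwidth evenly because their weights are equal.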
interface lag 1
description L3 to HQ-WAN2
no shutdown
ip mtu 2048
ip address 10.1.252.13/30
interface lag 2
description to CORE1
no shutdown
ip mtu 2048
ip address 10.1.252.5/30
lacp mode active
interface lag 3
description L3 to HQ Core SW2
no shutdown
ip mtu 2048
ip address 10.1.252.9/30
lacp mode active
interface 1/1/1
description Metro-E SV-SW01
no shutdown
mtu 2068
ip address 10.224.0.62/30
ip mtu 2048
interface 1/1/5
description to SWHQ-CORE1
no shutdown
mtu 2068
lag 3
interface 1/1/48
description to SWHQ-CORE1
no shutdown
mtu 2068
lag 2
interface 1/1/54
description to SWHQ-WAN1
no shutdown
mtu 2068
lag 1
interface loopback 0
ip address 10.224.224.1/32
ip ospf 1 area 0.0.0.0
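Each routed link above sits in its own /30 point-to-point subnet. Python's `ipaddress` module can sanity-check that both ends of a link pair up correctly (the addresses below are lag 1 on SWHQ-WAN1, 10.1.252.13/30, and its peer 10.1.252.14/30 on SWHQ-WAN2, both taken from these configurations):

```python
import ipaddress

def same_p2p_subnet(a: str, b: str) -> bool:
    """True when two interface addresses share one /30 point-to-point net."""
    ia = ipaddress.ip_interface(a)
    ib = ipaddress.ip_interface(b)
    return ia.network == ib.network and ia.network.prefixlen == 30

# lag 1 on SWHQ-WAN1 and its peer on SWHQ-WAN2
print(same_p2p_subnet("10.1.252.13/30", "10.1.252.14/30"))  # → True
# A mismatched pair is caught immediately
print(same_p2p_subnet("10.1.252.13/30", "10.1.252.17/30"))  # → False
```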
! enable BFD for eBGP peers
neighbor EBGP_PEERS fall-over bfd
neighbor 10.224.0.61 remote-as 64514
neighbor 10.224.0.61 peer-group EBGP_PEERS
neighbor 10.224.0.61 password ciphertext <<removed>>
neighbor 10.224.224.2 remote-as 64512
neighbor 10.224.224.2 description swhq-wan2
! for iBGP peerings we use the neighbor's loopback
! so the session remains reachable over any
! available path.
neighbor 10.224.224.2 password ciphertext <<removed>>
neighbor 10.224.224.2 update-source loopback 0
!
https-server rest access-mode read-write
https-server vrf default
SWHQ-WAN2 Configuration
!
banner exec !
***********************************************************************
* *
*
* Welcome to SWHQ-WAN2 // 8320 // Loopback 0 10.224.224.2
* *
* Headquarters WAN Edge to Metro E Network
* *
***********************************************************************
! Syslog configuration
logging 10.254.120.10 udp severity warning
logging 10.254.224.10 udp severity warning
sflow
sflow collector 10.254.124.32
sflow collector 10.254.224.32
! define the reporting agent IP to match loopback 0
! interface address
sflow agent-ip 10.224.224.2
!
!
!
! enable SNMPv2c
snmp-server vrf default
snmp-server system-description SWHQ-WAN2
snmp-server system-location HQ MDF // Row 6 Rack 8
snmp-server system-contact [email protected]
snmp-server community s3cret!
snmp-server host 10.254.124.65 trap version v2c community s3cret!
snmp-server host 10.254.224.65 trap version v2c community s3cret!
! enable OSPF and define a process ID
router ospf 1
! define the OSPF router ID to match
! the loopback address
router-id 10.224.224.2
! set the max-metric on start-up to exclude the device
! from routing via OSPF until <check time> seconds after
! system boot
max-metric router-lsa on-startup
! Use passive interfaces by default and only no-passive on
! interfaces which require OSPF adjacencies to be built
passive-interface default
! enable SNMP traps for OSPF events to be sent to trap
! receivers
trap-enable
! redistribute BGP and static routes into OSPF
redistribute bgp route-map BGP->OSPF
redistribute static route-map STATIC->OSPF
! define the OSPF area ID
area 0.0.0.0
map queue 7 local-priority 5
name queue 7 VOICE
interface lag 1
description L3 to HQ-WAN1
no shutdown
ip mtu 2048
ip address 10.1.252.14/30
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
interface lag 2
description to SWHQ-CORE2
no shutdown
ip mtu 2048
ip address 10.1.252.17/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
interface lag 3
description L3 to SWHQ-CORE1
no shutdown
ip mtu 2048
ip address 10.1.252.21/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
interface 1/1/1
description to SWGDR-WAN1
no shutdown
mtu 2068
ip address 10.224.0.21/30
ip mtu 2048
interface 1/1/5
description to SWHQ-CORE1
no shutdown
mtu 2068
lag 3
interface 1/1/48
description to SWHQ-CORE2
no shutdown
mtu 2068
lag 2
interface 1/1/54
description to SWHQ-WAN1
no shutdown
mtu 2068
lag 1
interface loopback 0
ip address 10.224.224.2/32
ip ospf 1 area 0.0.0.0
neighbor 10.224.0.22 password ciphertext <<removed>>
neighbor 10.224.224.1 remote-as 64512
neighbor 10.224.224.1 description swhq-wan1
neighbor 10.224.224.1 password ciphertext <<removed>>
neighbor 10.224.224.1 update-source loopback 0
https-server rest access-mode read-write
https-server vrf default
SWHQ-ACC-A1-1
NOTE: For the AOS-S device configurations below, comments should be removed before applying the configuration(s) to switches.
stacking
member 1 type "JL321A" mac-address <removed>
member 1 flexible-module A type JL083A
exit
member 2 type "JL321A" mac-address <removed>
member 2 flexible-module A type JL083A
exit
hostname "swhq-acc-a1-1"
10 match udp 0.0.0.0 255.255.255.255 range 50020 50039 0.0.0.0
255.255.255.255 range 50020 50039
exit
class ipv4 "BULK-AF11"
10 remark "OSSV servers - Snap Vault"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
10566
20 match tcp 0.0.0.0 255.255.255.255 eq 10566 0.0.0.0 255.255.255.255 gt
1023
exit
class ipv4 "BULK-AF12"
10 remark "S4B File Transfer"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255
range 42020 42039
20 remark "S4B App/Screen Sharing"
20 match tcp 0.0.0.0 255.255.255.255 range 42000 42019 0.0.0.0
255.255.255.255 range 42000 42019
25 match udp 0.0.0.0 255.255.255.255 range 42000 42019 0.0.0.0
255.255.255.255 range 42000 42019
exit
class ipv4 "BULK-AF13"
10 remark "WFoD"
10 match udp 0.0.0.0 255.255.255.255 eq 5103 0.0.0.0 255.255.255.255 eq
5103
20 match tcp 0.0.0.0 255.255.255.255 eq 5103 0.0.0.0 255.255.255.255 eq
5103
exit
class ipv4 "BUSN-AF21"
10 remark "SVT traffic"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
8500
exit
class ipv4 "CTRL-AF31"
10 remark "TACACS+ traffic"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
49
20 remark "RADIUS authentication traffic"
20 match udp 0.0.0.0 255.255.255.255 eq 1812 0.0.0.0 255.255.255.255 gt
1023
30 match udp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
1812
40 remark "Wireless CAPWAP control traffic"
40 match udp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
5246
50 match udp 0.0.0.0 255.255.255.255 eq 5246 0.0.0.0 255.255.255.255 gt
1023
60 remark "SIP Signalling"
60 match tcp 0.0.0.0 255.255.255.255 range 5060 5069 0.0.0.0
255.255.255.255 range 5060 5069
exit
class ipv4 "VIDEO-AF42"
10 remark "S4B Video"
10 match udp 0.0.0.0 255.255.255.255 range 58000 58019 0.0.0.0
255.255.255.255 range 58000 58019
20 remark "S4B Client Media Port"
20 match udp 0.0.0.0 255.255.255.255 range 5350 5389 0.0.0.0
255.255.255.255 range 5350 5389
exit
class ipv4 "Network Control"
10 remark "CS6 Traffic for Q7 when 8-queues"
10 match udp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 6
exit
! Login banner
*\n***********************************************************\n"
logging 10.254.120.10
logging 10.254.224.10
logging severity warning
timesync ntp
ntp unicast
ntp authentication key-id 1 authentication-mode md5 key-value secret
ntp server 10.254.124.10 iburst
ntp server 10.254.224.10 iburst
ntp enable
no telnet-server
web-management ssl
ip default-gateway 10.2.127.254
ip dns domain-name "dumarsinc.com"
ip dns server-address priority 1 10.254.10.10
ip dns server-address priority 2 10.254.130.10
tunneled-node-server
controller-ip 10.1.254.10
mode role-based
exit
interface 1/A1
name "Link to AGG1A"
exit
interface 2/A1
name "Link to AGG1B"
exit
interface Trk1
qos trust dscp
exit
! Enable TACACS+ authentication for SSH login and enable access, with local
! authentication as backup method
vlan 1
name "DEFAULT_VLAN"
no untagged 1/1-1/48,1/A2-1/A4,2/1-2/48,2/A2-2/A4,Trk1
no ip address
exit
vlan 10
name "management"
tagged Trk1
ip address 10.2.1.11 255.255.128.0
jumbo
service-policy "QOS_IN" in
exit
! VLANs 20, 30, 40 dynamically assigned by user role; IGMP, jumbo frames,
! QoS policy (for inbound packets) enabled
vlan 20
name "IoT - bldg control"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 30
name "physec devices"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 40
name "phones-av devices"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 999
name "Unauth VLAN"
untagged 1/1-1/48,1/A2-1/A4,2/1-2/48,2/A2-2/A4
no ip address
jumbo
exit
! VLANs 1281, 1282, 1283 dynamically assigned by user role; IGMP, jumbo frames,
! QoS policy (for inbound packets) enabled
vlan 1281
name "EXEC_USERS"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 1282
name "ENGINEERING_SUPPORT_USERS"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 1283
name "DEFAULT_USERS"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
spanning-tree
spanning-tree 1/1-1/48,1/A2-1/A4,2/1-2/48,2/A2-2/A4 admin-edge-port
spanning-tree 1/1-1/48,1/A2-1/A4,2/1-2/48,2/A2-2/A4 bpdu-protection
spanning-tree Trk1 priority 4 bpdu-filter pvst-filter
spanning-tree bpdu-protection-timeout 60 priority 0
no tftp server
loop-protect Trk1
no autorun
no dhcp config-file-update
no dhcp image-file-update
no dhcp tr69-acs-url
SWHQ-ACC-A1-2
stacking
member 1 type "JL321A" mac-address <removed>
member 1 flexible-module A type JL083A
exit
member 2 type "JL321A" mac-address <removed>
member 2 flexible-module A type JL083A
exit
hostname "swhq-acc-a1-2"
20 match tcp 0.0.0.0 255.255.255.255 range 42000 42019 0.0.0.0
255.255.255.255 range 42000 42019
25 match udp 0.0.0.0 255.255.255.255 range 42000 42019 0.0.0.0
255.255.255.255 range 42000 42019
exit
class ipv4 "BULK-AF13"
10 remark "WFoD"
10 match udp 0.0.0.0 255.255.255.255 eq 5103 0.0.0.0 255.255.255.255 eq
5103
20 match tcp 0.0.0.0 255.255.255.255 eq 5103 0.0.0.0 255.255.255.255 eq
5103
exit
class ipv4 "BUSN-AF21"
10 remark "SVT traffic"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
8500
exit
class ipv4 "CTRL-AF31"
10 remark "TACACS+ traffic"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
49
20 remark "RADIUS authentication traffic"
20 match udp 0.0.0.0 255.255.255.255 eq 1812 0.0.0.0 255.255.255.255 gt
1023
30 match udp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
1812
40 remark "Wireless CAPWAP control traffic"
40 match udp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
5246
50 match udp 0.0.0.0 255.255.255.255 eq 5246 0.0.0.0 255.255.255.255 gt
1023
60 remark "SIP Signalling"
60 match tcp 0.0.0.0 255.255.255.255 range 5060 5069 0.0.0.0
255.255.255.255 range 5060 5069
exit
class ipv4 "VIDEO-AF42"
10 remark "S4B Video"
10 match udp 0.0.0.0 255.255.255.255 range 58000 58019 0.0.0.0
255.255.255.255 range 58000 58019
20 remark "S4B Client Media Port"
20 match udp 0.0.0.0 255.255.255.255 range 5350 5389 0.0.0.0
255.255.255.255 range 5350 5389
exit
class ipv4 "Network Control"
10 remark "CS6 Traffic for Q7 when 8-queues"
10 match udp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 6
exit
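The class statements above use AOS-S wildcard masks, where a 255 in an octet means "ignore these bits" and a 0 means "must match" (so `0.0.0.0 255.255.255.255` matches any address). A small sketch of that matching logic, for illustration only; the switch evaluates these classes in hardware:

```python
import ipaddress

def wildcard_match(addr: str, base: str, wildcard: str) -> bool:
    """AOS-S style wildcard match: bits set in the wildcard are ignored."""
    a = int(ipaddress.ip_address(addr))
    b = int(ipaddress.ip_address(base))
    w = int(ipaddress.ip_address(wildcard))
    return (a & ~w) == (b & ~w)

# "0.0.0.0 255.255.255.255" matches any address
print(wildcard_match("10.16.8.20", "0.0.0.0", "255.255.255.255"))  # → True
# "10.16.0.0 0.0.255.255" would match only the 10.16.0.0/16 range
print(wildcard_match("10.16.8.20", "10.16.0.0", "0.0.255.255"))    # → True
print(wildcard_match("10.17.0.1", "10.16.0.0", "0.0.255.255"))     # → False
```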
policy qos "QOS_IN"
10 remark "Input QoS Policy"
20 class ipv4 "BULK-AF11" action dscp af11
30 class ipv4 "VOICE-EF" action dscp ef
40 class ipv4 "VIDEO-AF42" action dscp af42
50 class ipv4 "CTRL-AF31" action dscp af31
60 class ipv4 "BUSN-AF21" action dscp af21
70 class ipv4 "BULK-AF12" action dscp af12
80 class ipv4 "BULK-AF13" action dscp af13
90 class ipv4 "Network" action priority 7
100 class ipv4 "Network Control" action priority 6
110 class ipv4 "CS5" action priority 5
120 class ipv4 "default" action dscp default
exit
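The QOS_IN policy above marks each traffic class with a DSCP codepoint. The Assured Forwarding values follow a simple formula, DSCP(AFxy) = 8x + 2y, while EF (voice) is 46; a quick check of the codepoints this policy uses:

```python
def af_dscp(x: int, y: int) -> int:
    """Decimal DSCP value for Assured Forwarding class AFxy."""
    return 8 * x + 2 * y

# The codepoints referenced by the QOS_IN policy
dscp = {f"af{x}{y}": af_dscp(x, y) for x in (1, 2, 3, 4) for y in (1, 2, 3)}
dscp["ef"] = 46       # Expedited Forwarding (voice)
dscp["default"] = 0   # best effort

print(dscp["af11"], dscp["af21"], dscp["af31"], dscp["af42"], dscp["ef"])
# → 10 18 26 36 46
```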
! Login banner
*\n***********************************************************\n"
logging 10.254.120.10
logging 10.254.224.10
logging severity warning
qos type-of-service diff-services
qos traffic-template "MFRA-VRD"
map-traffic-group 1 priority 1
map-traffic-group 1 name "background-tcg"
map-traffic-group 2 priority 2
map-traffic-group 2 name "spare-tcg"
map-traffic-group 3 priority 0
map-traffic-group 3 name "best-effort-tcg"
map-traffic-group 4 priority 3
map-traffic-group 4 name "ex-effort-tcg"
map-traffic-group 5 priority 4
map-traffic-group 5 name "controlled-load-tcg"
map-traffic-group 6 priority 5
map-traffic-group 6 name "video-tcg"
map-traffic-group 7 priority 6
map-traffic-group 7 name "voice-tcg"
map-traffic-group 8 priority 7
map-traffic-group 8 name "control-tcg"
exit
timesync ntp
ntp unicast
ntp authentication key-id 1 authentication-mode md5 key-value secret
ntp server 10.254.124.10 iburst
ntp server 10.254.224.10 iburst
ntp enable
no telnet-server
! Timezone and DST configuration
web-management ssl
ip default-gateway 10.2.127.254
ip dns domain-name "dumarsinc.com"
ip dns server-address priority 1 10.254.10.10
ip dns server-address priority 2 10.254.130.10
tunneled-node-server
controller-ip 10.1.254.10
mode role-based
exit
interface 1/A1
name "Link to AGG1A"
exit
interface 2/A1
name "Link to AGG1B"
exit
interface Trk1
qos trust dscp
exit
aaa authentication login privilege-mode
! Enable TACACS+ authentication for SSH login and enable access, with local
! authentication as backup method
vlan 1
name "DEFAULT_VLAN"
no untagged 1/1-1/48,1/A2-1/A4,2/1-2/48,2/A2-2/A4,Trk1
no ip address
exit
vlan 10
name "management"
tagged Trk1
ip address 10.2.2.11 255.255.128.0
jumbo
service-policy "QOS_IN" in
exit
! VLANs 20, 30, 40 dynamically assigned by user role; IGMP, jumbo frames,
! QoS policy (for inbound packets) enabled
vlan 20
name "IoT - bldg control"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 30
name "physec devices"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 40
name "phones-av devices"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 999
name "Unauth VLAN"
untagged 1/1-1/48,1/A2-1/A4,2/1-2/48,2/A2-2/A4
no ip address
jumbo
exit
! VLANs 1281, 1282, 1283 dynamically assigned by user role; IGMP, jumbo frames,
! QoS policy (for inbound packets) enabled
vlan 1281
name "EXEC_USERS"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 1282
name "ENGINEERING_SUPPORT_USERS"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 1283
name "DEFAULT_USERS"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
spanning-tree
spanning-tree 1/1-1/48,1/A2-1/A4,2/1-2/48,2/A2-2/A4 admin-edge-port
spanning-tree 1/1-1/48,1/A2-1/A4,2/1-2/48,2/A2-2/A4 bpdu-protection
spanning-tree Trk1 priority 4 bpdu-filter pvst-filter
spanning-tree bpdu-protection-timeout 60 priority 0
no tftp server
loop-protect Trk1
no autorun
no dhcp config-file-update
no dhcp image-file-update
no dhcp tr69-acs-url
Mt. Rose Site Configurations
SWMTR-WAN1
!Version ArubaOS-CX TL.10.01.0002
hostname SWMTR-WAN1
banner motd !
**************************************************************
* *
* This is a private computer network/device. Unauthorized *
* access is prohibited. All attempts to login/connect *
* to this device/network are logged. Unauthorized users *
* must disconnect now. *
* *
**************************************************************
!
banner exec !
***********************************************************************
* *
*
* Welcome to SWMTR-WAN1 // 8320 // Loopback 0 10.224.224.36
* *
* Mt. Rose WAN Edge to Metro E Network
* *
***********************************************************************
! Syslog configuration
logging 10.254.120.10 udp severity warning
logging 10.254.224.10 udp severity warning
! Sample sFlow configuration exporting to two collectors
sflow
sflow collector 10.254.124.32
sflow collector 10.254.224.32
! define the reporting agent IP to match loopback 0
! interface address
sflow agent-ip 10.224.224.36
!
!
!
! enable SNMPv2c
snmp-server vrf default
snmp-server system-description SWMTR-WAN1
snmp-server system-location Mt Rose MDF // Rack 1
snmp-server system-contact [email protected]
snmp-server community s3cret!
snmp-server host 10.254.124.65 trap version v2c community s3cret!
snmp-server host 10.254.224.65 trap version v2c community s3cret!
set local-preference 250
max-metric router-lsa on-startup
! Use passive interfaces by default and only no-passive on
! interfaces which require OSPF adjacencies to be built
passive-interface default
! enable SNMP traps for OSPF events to be sent to trap
! receivers
trap-enable
! redistribute BGP and static routes into OSPF
redistribute bgp route-map BGP->OSPF
redistribute static route-map STATIC->OSPF
! define the OSPF area ID
area 0.0.0.0
dwrr queue 3 weight 1
dwrr queue 4 weight 1
dwrr queue 5 weight 1
dwrr queue 6 weight 1
strict queue 7
interface lag 1
description L3 to SWMTR-WAN2
no shutdown
ip mtu 2048
ip address 10.224.0.53/30
interface lag 2
description to SWMTR-CORE
no shutdown
ip mtu 2048
ip address 10.16.252.5/30
lacp mode active
interface 1/1/1
description Metro-E SWSV-WAN2
no shutdown
mtu 2068
ip address 10.224.0.38/30
ip mtu 2048
interface 1/1/5
description to SWMTR-CORE
no shutdown
mtu 2068
lag 2
interface 1/1/6
description to SWMTR-CORE
no shutdown
mtu 2068
lag 2
interface loopback 0
ip address 10.224.224.36/32
ip ospf 1 area 0.0.0.0
aggregate-address 10.16.8.0/21 summary-only
aggregate-address 10.64.0.0/21 summary-only
! define router ID to match the loopback 0 address
bgp router-id 10.224.224.36
! advertise the loopback interface into bgp
network 10.224.224.36/32
! enable bgp fast-external-fallover to tear down bgp
! sessions if the physical interface goes down
bgp fast-external-fallover
bgp log-neighbor-changes
! redistribute OSPF and BGP as per route maps
redistribute ospf route-map OSPF->BGP
redistribute static route-map STATIC->BGP
! create a peer group for common config elements for
! eBGP peers. Note that communities are sent because
! remote sites use them to influence bgp path
! selection.
neighbor EBGP_PEERS peer-group
neighbor EBGP_PEERS route-map BGP_INBOUND_POLICY in
neighbor EBGP_PEERS route-map BGP_OUTBOUND_POLICY out
neighbor EBGP_PEERS fall-over
neighbor EBGP_PEERS send-community standard
! enable BFD for eBGP peers
neighbor EBGP_PEERS fall-over bfd
neighbor 10.224.0.37 remote-as 64514
neighbor 10.224.0.37 peer-group EBGP_PEERS
neighbor 10.224.0.37 password ciphertext <<removed>>
neighbor 10.224.224.37 remote-as 64515
neighbor 10.224.224.37 description SWMTR-WAN2
! for iBGP peerings we use the neighbor's loopback
! so the session remains reachable over any
! available path.
neighbor 10.224.224.37 password ciphertext <<removed>>
neighbor 10.224.224.37 update-source loopback 0
!
https-server rest access-mode read-write
https-server vrf default
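As the peer-group comments above note, standard communities are sent so remote sites can influence BGP path selection, and the inbound route-map fragment earlier in this configuration sets local preference 250. A sketch of that selection logic; the community value 1:1 appears in the route-map excerpts, but the match condition and the sample routes here are invented for illustration:

```python
def apply_inbound_policy(route: dict) -> dict:
    """Illustrative route-map: raise local preference for routes
    tagged with community 1:1 (hypothetical match condition)."""
    if "1:1" in route.get("communities", []):
        route = {**route, "local_pref": 250}
    return route

def best_path(routes):
    """BGP prefers the highest local preference first (default 100)."""
    return max(routes, key=lambda r: r.get("local_pref", 100))

routes = [
    {"peer": "10.224.0.37", "communities": ["1:1"]},   # tagged WAN path
    {"peer": "10.224.224.37", "communities": []},      # untagged iBGP path
]
routes = [apply_inbound_policy(r) for r in routes]
print(best_path(routes)["peer"])  # → 10.224.0.37
```

The tagged route wins because 250 beats the default local preference of 100, which is how the sending site steers return traffic.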
SWMTR-WAN2 Configuration
!
banner exec !
***********************************************************************
* *
*
* Welcome to SWMTR-WAN2 // 8320 // Loopback 0 10.224.224.37
* *
* Mt. Rose WAN Edge Switch 2 to Metro E Network
* *
***********************************************************************
! Syslog configuration
logging 10.254.120.10 udp severity warning
logging 10.254.224.10 udp severity warning
sflow
sflow collector 10.254.124.32
sflow collector 10.254.224.32
! define the reporting agent IP to match loopback 0
! interface address
sflow agent-ip 10.224.224.37
!
!
!
! enable SNMPv2c
snmp-server vrf default
snmp-server system-description SWMTR-WAN2
snmp-server system-location Mt Rose MDF // Rack 1
snmp-server system-contact [email protected]
snmp-server community s3cret!
snmp-server host 10.254.124.65 trap version v2c community s3cret!
snmp-server host 10.254.224.65 trap version v2c community s3cret!
route-map BGP_INBOUND_POLICY permit seq 99
description DO NOTHING
! interfaces which require OSPF adjacencies to be built
passive-interface default
! enable SNMP traps for OSPF events to be sent to trap
! receivers
trap-enable
! redistribute BGP and static routes into OSPF
redistribute bgp route-map BGP->OSPF
redistribute static route-map STATIC->OSPF
! define the OSPF area ID
area 0.0.0.0
dwrr queue 5 weight 1
dwrr queue 6 weight 1
strict queue 7
interface lag 1
description L3 to SWMTR-WAN1
no shutdown
ip mtu 2048
ip address 10.224.0.54/30
interface lag 2
description to SWMTR-CORE
no shutdown
ip mtu 2048
ip address 10.16.252.9/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
interface 1/1/1
description Metro-E to GDRSW-WAN1
no shutdown
mtu 2068
ip address 10.224.0.58/30
ip mtu 2048
interface 1/1/5
description to SWMTR-CORE
no shutdown
mtu 2068
lag 2
interface 1/1/6
description to SWMTR-CORE
no shutdown
mtu 2068
lag 2
interface loopback 0
ip address 10.224.224.37/32
ip ospf 1 area 0.0.0.0
! define router ID to match the loopback 0 address
bgp router-id 10.224.224.37
! advertise the loopback interface into bgp
network 10.224.224.37/32
! enable bgp fast-external-fallover to tear down bgp
! sessions if the physical interface goes down
bgp fast-external-fallover
bgp log-neighbor-changes
! redistribute OSPF and BGP as per route maps
redistribute ospf route-map OSPF->BGP
redistribute static route-map STATIC->BGP
! create a peer group for common config elements for
! eBGP peers. Note that communities are sent because
! remote sites use them to influence bgp path
! selection.
neighbor EBGP_PEERS peer-group
neighbor EBGP_PEERS route-map BGP_INBOUND_POLICY in
neighbor EBGP_PEERS route-map BGP_OUTBOUND_POLICY out
neighbor EBGP_PEERS fall-over
neighbor EBGP_PEERS send-community standard
! enable BFD for eBGP peers
neighbor EBGP_PEERS fall-over bfd
neighbor 10.224.0.57 remote-as 64513
neighbor 10.224.0.57 peer-group EBGP_PEERS
neighbor 10.224.0.57 password ciphertext <<removed>>
neighbor 10.224.224.36 remote-as 64515
neighbor 10.224.224.36 description SWMTR-WAN1
! for iBGP peerings we use the neighbor's loopback
! so the session remains reachable over any
! available path.
neighbor 10.224.224.36 password ciphertext <<removed>>
neighbor 10.224.224.36 update-source loopback 0
!
https-server rest access-mode read-write
https-server vrf default
SWMTR-CORE
hostname "SWMTR-CORE"
vsf
enable domain 1
member 1
type "J9850A" mac-address 941882-cf2900
priority 255
link 1 1/B1-1/B2
link 1 name "I-Link1_1"
exit
member 2
type "J9850A" mac-address 941882-cf2f00
priority 128
link 1 2/B1-2/B2
link 1 name "I-Link2_1"
exit
oobm-mad
port-speed 40g
exit
10 match ip 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255
exit
class ipv4 "VOICE-EF"
10 remark "S4B Audio"
10 match udp 0.0.0.0 255.255.255.255 range 50020 50039 0.0.0.0
255.255.255.255 range 50020 50039
exit
class ipv4 "BULK-AF11"
10 remark "OSSV servers - Snap Vault"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
10566
20 match tcp 0.0.0.0 255.255.255.255 eq 10566 0.0.0.0 255.255.255.255 gt
1023
exit
class ipv4 "BULK-AF12"
10 remark "S4B File Transfer"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255
range 42020 42039
20 remark "S4B App/Screen Sharing"
20 match tcp 0.0.0.0 255.255.255.255 range 42000 42019 0.0.0.0
255.255.255.255 range 42000 42019
25 match udp 0.0.0.0 255.255.255.255 range 42000 42019 0.0.0.0
255.255.255.255 range 42000 42019
exit
class ipv4 "BULK-AF13"
10 remark "WFoD"
10 match udp 0.0.0.0 255.255.255.255 eq 5103 0.0.0.0 255.255.255.255 eq
5103
20 match tcp 0.0.0.0 255.255.255.255 eq 5103 0.0.0.0 255.255.255.255 eq
5103
exit
class ipv4 "BUSN-AF21"
10 remark "SVT traffic"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
8500
exit
class ipv4 "CTRL-AF31"
10 remark "TACACS+ traffic"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
49
20 remark "RADIUS authentication traffic"
20 match udp 0.0.0.0 255.255.255.255 eq 1812 0.0.0.0 255.255.255.255 gt
1023
30 match udp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
1812
40 remark "Wireless CAPWAP control traffic"
40 match udp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
5246
50 match udp 0.0.0.0 255.255.255.255 eq 5246 0.0.0.0 255.255.255.255 gt
1023
60 remark "SIP Signalling"
60 match tcp 0.0.0.0 255.255.255.255 range 5060 5069 0.0.0.0
255.255.255.255 range 5060 5069
exit
class ipv4 "VIDEO-AF42"
10 remark "S4B Video"
10 match udp 0.0.0.0 255.255.255.255 range 58000 58019 0.0.0.0
255.255.255.255 range 58000 58019
20 remark "S4B Client Media Port"
20 match udp 0.0.0.0 255.255.255.255 range 5350 5389 0.0.0.0
255.255.255.255 range 5350 5389
exit
class ipv4 "Network Control"
10 remark "CS6 Traffic for Q7 when 8-queues"
10 match udp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 6
exit
! Login banner
*\n* This is a private computer network/device. Unauthorized *\n* access is prohibited. All attempts to login/connect *\n* to this device/network are logged. Unauthorized users *\n* must disconnect now. *\n* *\n***********************************************************\n"
logging 10.254.120.10
logging 10.254.224.10
logging severity warning
timesync ntp
ntp unicast
ntp authentication key-id 1 authentication-mode md5 key-value secret
ntp server 10.254.124.10 iburst
ntp server 10.254.224.10 iburst
ntp enable
no telnet-server
web-management ssl
ip router-id 10.16.1.8
! Enable IP routing
ip routing
key-chain "secret"
key-chain "secret" key 1 key-string "secret!"
interface loopback 0
ip address 10.16.1.8
ip ospf 10.16.1.8 area backbone
exit
! on privilege level provided by authentication servers
! Enable TACACS+ authentication for SSH login and enable access, with local
! authentication as backup method
! OSPF configuration
router ospf
area backbone
enable
exit
vlan 1
name "DEFAULT_VLAN"
no untagged 1/A1-1/A7,2/A1-2/A7,2/C1-2/C24,Trk1
no ip address
exit
vlan 2
name "TO WAN1"
untagged 2/C1
ip address 10.16.7.2 255.255.255.252
ip ospf 10.16.7.2 area backbone
ip ospf 10.16.7.2 network-type point-to-point
exit
vlan 3
name "TO WAN2"
untagged 2/C2
ip address 10.16.7.6 255.255.255.252
ip ospf 10.16.7.6 area backbone
ip ospf 10.16.7.6 network-type point-to-point
exit
vlan 10
name "Management"
tagged Trk1
ip address 10.16.15.254 255.255.248.0
ip ospf 10.16.15.254 passive
ip ospf 10.16.15.254 area backbone
exit
vlan 20
name "IoT_Building_Control"
ip address 172.19.3.254 255.255.252.0
ip ospf 172.19.3.254 passive
ip ospf 172.19.3.254 area backbone
exit
vlan 30
name "Phy_Sec"
ip address 172.19.7.254 255.255.252.0
ip ospf 172.19.7.254 passive
ip ospf 172.19.7.254 area backbone
exit
vlan 40
name "Phone_AV"
tagged Trk1
ip address 172.19.11.254 255.255.252.0
ip ospf 172.19.11.254 passive
ip ospf 172.19.11.254 area backbone
exit
vlan 999
name "Unauth VLAN"
no untagged
untagged 1/A1-1/A7,2/A1-2/A7,2/C3-2/C24
no ip address
exit
! VLANs 1281, 1282, 1283 are dynamically assigned to devices at access layer by
! user role
vlan 1281
name "EXEC_Corp"
ip address 10.64.1.254 255.255.254.0
ip ospf 10.64.1.254 passive
ip ospf 10.64.1.254 area backbone
exit
vlan 1282
name "Engineering_Support"
ip address 10.64.3.254 255.255.254.0
ip ospf 10.64.3.254 passive
ip ospf 10.64.3.254 area backbone
exit
vlan 1283
name "Other_Users"
ip address 10.64.5.254 255.255.254.0
ip ospf 10.64.5.254 passive
ip ospf 10.64.5.254 area backbone
exit
! Enable MSTP as primary root with priority 0, with root-guard on all non-
uplink
! ports
spanning-tree
spanning-tree 1/A1-1/A7,2/A1-2/A7,2/C3-2/C24 root-guard
spanning-tree Trk1 priority 0 root-guard
spanning-tree root primary priority 0
no allow-v2-modules
password manager
SWMTR-ACC-1A-1
Running configuration:
stacking
member 1 type "JL321A" mac-address <removed>
member 1 flexible-module A type JL083A
exit
member 2 type "JL321A" mac-address <removed>
member 2 flexible-module A type JL083A
exit
hostname "SWMTR-ACC-1A-1"
class ipv4 "CS5"
10 remark "CS5 Traffic for Q6 when 8-queues"
10 match udp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 5
exit
class ipv4 "Network"
10 remark "CS7 Traffic for Q8 when 8-queues"
10 match udp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 10
exit
class ipv4 "default"
10 match ip 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255
exit
class ipv4 "VOICE-EF"
10 remark "S4B Audio"
10 match udp 0.0.0.0 255.255.255.255 range 50020 50039 0.0.0.0
255.255.255.255 range 50020 50039
exit
class ipv4 "BULK-AF11"
10 remark "OSSV servers - Snap Vault"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
10566
20 match tcp 0.0.0.0 255.255.255.255 eq 10566 0.0.0.0 255.255.255.255 gt
1023
exit
class ipv4 "BULK-AF12"
10 remark "S4B File Transfer"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255
range 42020 42039
20 remark "S4B App/Screen Sharing"
20 match tcp 0.0.0.0 255.255.255.255 range 42000 42019 0.0.0.0
255.255.255.255 range 42000 42019
25 match udp 0.0.0.0 255.255.255.255 range 42000 42019 0.0.0.0
255.255.255.255 range 42000 42019
exit
class ipv4 "BULK-AF13"
10 remark "WFoD"
10 match udp 0.0.0.0 255.255.255.255 eq 5103 0.0.0.0 255.255.255.255 eq
5103
20 match tcp 0.0.0.0 255.255.255.255 eq 5103 0.0.0.0 255.255.255.255 eq
5103
exit
class ipv4 "BUSN-AF21"
10 remark "SVT traffic"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
8500
exit
class ipv4 "CTRL-AF31"
10 remark "TACACS+ traffic"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
49
20 remark "RADIUS authentication traffic"
20 match udp 0.0.0.0 255.255.255.255 eq 1812 0.0.0.0 255.255.255.255 gt
1023
30 match udp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
1812
40 remark "Wireless CAPWAP control traffic"
40 match udp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
5246
50 match udp 0.0.0.0 255.255.255.255 eq 5246 0.0.0.0 255.255.255.255 gt
1023
60 remark "SIP Signalling"
60 match tcp 0.0.0.0 255.255.255.255 range 5060 5069 0.0.0.0
255.255.255.255 range 5060 5069
exit
class ipv4 "VIDEO-AF42"
10 remark "S4B Video"
10 match udp 0.0.0.0 255.255.255.255 range 58000 58019 0.0.0.0
255.255.255.255 range 58000 58019
20 remark "S4B Client Media Port"
20 match udp 0.0.0.0 255.255.255.255 range 5350 5389 0.0.0.0
255.255.255.255 range 5350 5389
exit
class ipv4 "Network Control"
10 remark "CS6 Traffic for Q7 when 8-queues"
10 match udp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 6
exit
! Uplink LACP trunk to HQ Aggregation
! Login banner
*\n***********************************************************\n"
logging 10.254.120.10
logging 10.254.224.10
logging severity warning
map-traffic-group 8 priority 7
map-traffic-group 8 name "control-tcg"
exit
timesync ntp
ntp unicast
ntp authentication key-id 1 authentication-mode md5 key-value secret
ntp server 10.254.124.10 iburst
ntp server 10.254.224.10 iburst
ntp enable
no telnet-server
web-management ssl
ip default-gateway 10.16.15.254
tunneled-node-server
controller-ip 10.1.254.10
mode role-based
exit
interface 1/A1
name "Link to AGG1A"
exit
interface 1/A2
name "Link to AGG1B"
exit
interface 1/A3
name "Link 1 to MTR-CORE"
exit
interface 1/A4
name "Link 2 to MTR-CORE"
exit
interface Trk1
qos trust dscp
exit
interface Trk2
qos trust dscp
exit
! Enable TACACS+ authentication for SSH login and enable access, with local
! authentication as backup method
! Enable 802.1x authentication with limit of 5 clients on all ports
vlan 1
name "DEFAULT_VLAN"
no untagged 1/1-1/48,1/A3-1/A4,2/1-2/48,2/A3-2/A4,Trk1-Trk2
no ip address
exit
vlan 10
name "Management"
tagged Trk1-Trk2
ip address 10.16.8.20 255.255.248.0
service-policy "QOS_IN" in
exit
vlan 20
name "IoT_Building_Control"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 30
name "Phy_Sec"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 40
name "Phone_AV"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 999
name "Unauth VLAN"
untagged 1/1-1/48,1/A2-1/A4,2/1-2/48,2/A2-2/A4
no ip address
jumbo
exit
vlan 1281
name "EXEC_USERS"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 1282
name "ENGINEERING_SUPPORT_USERS"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 1283
name "DEFAULT_USERS"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
spanning-tree
spanning-tree 1/1-1/48,1/A3-1/A4,2/1-2/48,2/A3-2/A4 admin-edge-port
spanning-tree 1/1-1/48,1/A3-1/A4,2/1-2/48,2/A3-2/A4 bpdu-protection
spanning-tree Trk1-Trk2 priority 4 bpdu-filter pvst-filter
spanning-tree bpdu-protection-timeout 60 priority 0
no tftp server
loop-protect Trk1-Trk2
no autorun
no dhcp config-file-update
no dhcp image-file-update
no dhcp tr69-acs-url
Mobility Controller Configuration
HQ Site Configuration
ip access-list session apprf-dumars_guest-guest-logon-sacl
!
user-role dumars_guest-guest-logon
access-list session logon-control
access-list session captiveportal
access-list session v6-logon-control
access-list session captiveportal6
!
vlan 10
!
vlan 20
!
vlan 138
!
aaa rfc-3576-server "10.254.33.24"
key 29027974fb07628a8e71f9bdee36eb62
!
aaa authentication dot1x "dumars_byod"
!
aaa authentication dot1x "Dumars_Inc"
!
aaa authentication-server radius "M1RA-CPPM"
host "10.254.33.24"
key 09130061e7d5370d7e1ef27817239880
cppm username "m1ra" password e4bc0458ab31346f76bfbf6c8497f9cd30517e224e291a77
!
aaa server-group "default"
auth-server Internal position 1
!
aaa server-group "dumars_byod"
auth-server M1RA-CPPM position 1
!
aaa server-group "dumars_employee"
auth-server M1RA-CPPM position 1
!
aaa server-group "dumars_engineering"
auth-server M1RA-CPPM position 1
!
aaa server-group "Dumars_Inc"
auth-server M1RA-CPPM position 1
!
aaa server-group "dumarsinc-radius"
auth-server M1RA-CPPM position 1
!
aaa profile "default"
initial-role "authenticated"
dot1x-server-group "default"
download-role
!
aaa profile "dumars_byod"
authentication-dot1x "dumars_byod"
dot1x-default-role "authenticated"
dot1x-server-group "dumars_byod"
!
aaa profile "dumars_guest"
initial-role "dumars_guest-guest-logon"
!
aaa profile "Dumars_Inc"
authentication-dot1x "Dumars_Inc"
dot1x-default-role "authenticated"
dot1x-server-group "Dumars_Inc"
download-role
rfc-3576-server "10.254.33.24"
!
aaa authentication captive-portal "dumars_guest"
no user-logon
redirect-url "https://www.dumarsinc.com"
!
lc-cluster group-profile "M1RA-CLUSTER"
controller 10.1.254.10 priority 128 mcast-vlan 0 vrrp-ip 0.0.0.0 vrrp-vlan 0 group 0
controller 10.1.254.11 priority 128 mcast-vlan 0 vrrp-ip 0.0.0.0 vrrp-vlan 0 group 0
!
ap system-profile "dumars"
lms-ip 10.1.254.12
bkup-lms-ip 10.254.132.66
lms-preemption
ap-console-password ebfca5fad32c7d5fb11aad1d790bae2f4bccf23762ef8e96
!
ap system-profile "gdr"
lms-ip 10.254.132.66
bkup-lms-ip 10.1.254.12
lms-preemption
ap-console-password 1bd85a5b7f6d4f745e0e740a47eb2a03515fdec6428dfca8
!
ap system-profile "hq"
lms-ip 10.1.254.12
bkup-lms-ip 10.254.132.66
lms-preemption
ap-console-password ccd229f8aab8870ecc37f993a5894f902696c6e55592ee77
!
wlan ssid-profile "dumars_byod"
essid "DUMARS_BYOD"
opmode wpa2-aes
!
wlan ssid-profile "dumars_guest"
essid "DUMARS_GUEST"
!
wlan ssid-profile "Dumars_Inc"
essid "DUMARS_INC"
opmode wpa2-aes
!
wlan virtual-ap "dumars_byod"
aaa-profile "dumars_byod"
vlan 138
ssid-profile "dumars_byod"
!
wlan virtual-ap "dumars_guest"
aaa-profile "dumars_guest"
ssid-profile "dumars_guest"
!
wlan virtual-ap "Dumars_Inc"
aaa-profile "Dumars_Inc"
vlan 1281
ssid-profile "Dumars_Inc"
!
ap-group "default"
virtual-ap "dumars_guest"
virtual-ap "dumars_byod"
virtual-ap "Dumars_Inc"
!
ap-group "GDR-APs"
virtual-ap "dumars_guest"
virtual-ap "dumars_byod"
virtual-ap "Dumars_Inc"
ap-system-profile "gdr"
!
ap-group "HQ-APs"
virtual-ap "dumars_guest"
virtual-ap "dumars_byod"
virtual-ap "Dumars_Inc"
ap-system-profile "hq"
!
HQMC1A Controller
trusted
trusted vlan 1-4094
!
interface port-channel 3
trusted
trusted vlan 1-4094
!
interface port-channel 4
trusted
trusted vlan 1-4094
!
interface port-channel 5
trusted
trusted vlan 1-4094
!
interface port-channel 6
trusted
trusted vlan 1-4094
!
interface port-channel 7
trusted
trusted vlan 1-4094
!
interface vlan 104
ip address 10.1.254.10 255.255.255.0
!
interface tunnel 16657
tunnel source controller-ip
tunnel destination 10.254.7.18
tunnel mode gre 10
trusted
tunnel vlan 139
mtu 1400
tunnel keepalive
tunnel keepalive 10 3
!
ip default-gateway 10.1.254.1
uplink wired vlan 104 uplink-id link1
!
mgmt-user admin root c8fd41fc01ba1dde325b58f211a3bd90a84c857a1595e3e8e1
firewall
cp-bandwidth-contract trusted-ucast 65535
cp-bandwidth-contract trusted-mcast 3906
cp-bandwidth-contract untrusted-ucast 9765
cp-bandwidth-contract untrusted-mcast 3906
cp-bandwidth-contract route 976
cp-bandwidth-contract sessmirr 976
cp-bandwidth-contract vrrp 512
cp-bandwidth-contract auth 976
cp-bandwidth-contract arp-traffic 3906
cp-bandwidth-contract l2-other 1953
!
hostname HQMC1A
clock timezone America/Los_Angeles
lc-cluster group-membership M1RA-CLUSTER
lc-cluster exclude-vlan 139,1
country US
vrrp 104
ip address 10.1.254.12
priority 120
vlan 104
no shutdown
!
HQMC1B Controller
trusted vlan 1-4094
!
interface port-channel 2
trusted
trusted vlan 1-4094
!
interface port-channel 3
trusted
trusted vlan 1-4094
!
interface port-channel 4
trusted
trusted vlan 1-4094
!
interface port-channel 5
trusted
trusted vlan 1-4094
!
interface port-channel 6
trusted
trusted vlan 1-4094
!
interface port-channel 7
trusted
trusted vlan 1-4094
!
interface vlan 104
ip address 10.1.254.11 255.255.255.0
!
ip default-gateway 10.1.254.1
uplink wired vlan 104 uplink-id link1
!
mgmt-user admin root 39ad53a5012e232378835922a19925c288d632fefc9100a10c
firewall
cp-bandwidth-contract trusted-ucast 65535
cp-bandwidth-contract trusted-mcast 3906
cp-bandwidth-contract untrusted-ucast 9765
cp-bandwidth-contract untrusted-mcast 3906
cp-bandwidth-contract route 976
cp-bandwidth-contract sessmirr 976
cp-bandwidth-contract vrrp 512
cp-bandwidth-contract auth 976
cp-bandwidth-contract arp-traffic 3906
cp-bandwidth-contract l2-other 1953
!
hostname HQMC1B
clock timezone America/Los_Angeles
lc-cluster group-membership M1RA-CLUSTER
lc-cluster exclude-vlan 139,1
country US
vrrp 104
ip address 10.1.254.12
vlan 104
no shutdown
!
auth-server M1RA-CPPM position 1
!
aaa server-group "dumars_employee"
auth-server M1RA-CPPM position 1
!
aaa server-group "dumars_engineering"
auth-server M1RA-CPPM position 1
!
aaa server-group "Dumars_Inc"
auth-server M1RA-CPPM position 1
!
aaa server-group "dumarsinc-radius"
auth-server M1RA-CPPM position 1
!
aaa profile "default"
initial-role "authenticated"
dot1x-server-group "default"
download-role
!
aaa profile "dumars_byod"
authentication-dot1x "dumars_byod"
dot1x-default-role "authenticated"
dot1x-server-group "dumars_byod"
!
aaa profile "dumars_guest"
initial-role "dumars_guest-guest-logon"
!
aaa profile "Dumars_Inc"
authentication-dot1x "Dumars_Inc"
dot1x-default-role "authenticated"
dot1x-server-group "Dumars_Inc"
download-role
rfc-3576-server "10.254.33.24"
!
aaa authentication captive-portal "dumars_guest"
no user-logon
redirect-url "https://www.dumarsinc.com"
!
lc-cluster group-profile "M1RA-CLUSTER1"
controller 10.254.132.64 priority 128 mcast-vlan 0 vrrp-ip 0.0.0.0 vrrp-vlan 0 group 0
controller 10.254.132.65 priority 128 mcast-vlan 0 vrrp-ip 0.0.0.0 vrrp-vlan 0 group 0
!
ap system-profile "dumars"
lms-ip 10.1.254.12
bkup-lms-ip 10.254.132.66
lms-preemption
ap-console-password cfc843226ca022b0878e726e15d8ccd43054aa10594a07e7
!
ap system-profile "gdr"
lms-ip 10.254.132.66
bkup-lms-ip 10.1.254.12
lms-preemption
ap-console-password 69a98db4f44d3fe6ebf8345bccbd0359936c30269acd36d7
!
ap system-profile "hq"
lms-ip 10.1.254.12
bkup-lms-ip 10.254.132.66
lms-preemption
ap-console-password a3f7fff17c6b8c2640bcf8dcdd333c96780168bf7c7b1579
!
wlan ssid-profile "dumars_byod"
essid "DUMARS_BYOD"
opmode wpa2-aes
!
wlan ssid-profile "dumars_guest"
essid "DUMARS_GUEST"
!
wlan ssid-profile "Dumars_Inc"
essid "DUMARS_INC"
opmode wpa2-aes
!
wlan virtual-ap "dumars_byod"
aaa-profile "dumars_byod"
vlan 138
ssid-profile "dumars_byod"
!
wlan virtual-ap "dumars_guest"
aaa-profile "dumars_guest"
ssid-profile "dumars_guest"
!
wlan virtual-ap "Dumars_Inc"
aaa-profile "Dumars_Inc"
vlan 1281
ssid-profile "Dumars_Inc"
!
ap-group "default"
virtual-ap "dumars_guest"
virtual-ap "dumars_byod"
virtual-ap "Dumars_Inc"
!
ap-group "GDR-APs"
virtual-ap "dumars_guest"
virtual-ap "dumars_byod"
virtual-ap "Dumars_Inc"
ap-system-profile "gdr"
!
ap-group "HQ-APs"
virtual-ap "dumars_guest"
virtual-ap "dumars_byod"
virtual-ap "Dumars_Inc"
ap-system-profile "hq"
!
GDRMC1A Configuration
masterip 10.254.32.10 ipsec 0d57fed1a5cf41901a2e1f04ea4e12c4 interface vlan 1
controller-ip vlan 1
interface mgmt
shutdown
!
vlan 1
!
interface gigabitethernet 0/0/0
description GE0/0/0
switchport mode trunk
no spanning-tree
trusted
trusted vlan 1-4094
!
interface gigabitethernet 0/0/1
shutdown
no spanning-tree
!
interface gigabitethernet 0/0/2
shutdown
no spanning-tree
!
interface port-channel 0
trusted
trusted vlan 1-4094
!
interface port-channel 1
trusted
trusted vlan 1-4094
!
interface port-channel 2
trusted
trusted vlan 1-4094
!
interface port-channel 3
trusted
trusted vlan 1-4094
!
interface port-channel 4
trusted
trusted vlan 1-4094
!
interface port-channel 5
trusted
trusted vlan 1-4094
!
interface port-channel 6
trusted
trusted vlan 1-4094
!
interface port-channel 7
trusted
trusted vlan 1-4094
!
interface vlan 1
ip address 10.254.132.64 255.255.255.0
!
ip default-gateway 10.254.132.1
uplink wired vlan 1 uplink-id link1
!
mgmt-user admin root 7eeb4bee010f514cc0a5db45fe21e74219b8164d130401d3df
firewall
cp-bandwidth-contract trusted-ucast 65535
cp-bandwidth-contract trusted-mcast 1953
cp-bandwidth-contract untrusted-ucast 9765
cp-bandwidth-contract untrusted-mcast 1953
cp-bandwidth-contract route 976
cp-bandwidth-contract sessmirr 976
cp-bandwidth-contract vrrp 512
cp-bandwidth-contract auth 976
cp-bandwidth-contract arp-traffic 976
cp-bandwidth-contract l2-other 976
cp-bandwidth-contract ike 1953
!
hostname GDRMC1A
clock timezone America/Los_Angeles
lc-cluster group-membership M1RA-CLUSTER1
lc-cluster exclude-vlan 139
country US
vrrp 1
ip address 10.254.132.66
priority 120
vlan 1
no shutdown
!
GDRMC1B Configuration
shutdown
no spanning-tree
!
interface port-channel 0
trusted
trusted vlan 1-4094
!
interface port-channel 1
trusted
trusted vlan 1-4094
!
interface port-channel 2
trusted
trusted vlan 1-4094
!
interface port-channel 3
trusted
trusted vlan 1-4094
!
interface port-channel 4
trusted
trusted vlan 1-4094
!
interface port-channel 5
trusted
trusted vlan 1-4094
!
interface port-channel 6
trusted
trusted vlan 1-4094
!
interface port-channel 7
trusted
trusted vlan 1-4094
!
interface vlan 1
ip address 10.254.132.65 255.255.255.0
!
ip default-gateway 10.254.132.1
uplink wired vlan 1 uplink-id link1
!
mgmt-user admin root c04747a10173ab6521bd9ca0cd235f2feedfae0593e5138c22
firewall
cp-bandwidth-contract trusted-ucast 65535
cp-bandwidth-contract trusted-mcast 1953
cp-bandwidth-contract untrusted-ucast 9765
cp-bandwidth-contract untrusted-mcast 1953
cp-bandwidth-contract route 976
cp-bandwidth-contract sessmirr 976
cp-bandwidth-contract vrrp 512
cp-bandwidth-contract auth 976
cp-bandwidth-contract arp-traffic 976
cp-bandwidth-contract l2-other 976
cp-bandwidth-contract ike 1953
!
hostname GDRMC1B
clock timezone America/Los_Angeles
lc-cluster group-membership M1RA-CLUSTER1
lc-cluster exclude-vlan 139
country US
vrrp 1
ip address 10.254.132.66
vlan 1
no shutdown
!
DMZMC1A Configuration
masterip 10.254.32.10 ipsec a57b8c03ce071ed1769674e9fd016898 interface vlan 777
user-role logon
access-list session global-sacl
access-list session apprf-logon-sacl
access-list session ra-guard
access-list session logon-control
access-list session captiveportal
access-list session vpnlogon
access-list session v6-logon-control
access-list session captiveportal6
captive-portal DUMARS_GUEST
!
controller-ip vlan 777
vlan 1
!
vlan 777
!
vlan-name OUTSIDE_FW
vlan OUTSIDE_FW 1
interface gigabitethernet 0/0/0
!
interface gigabitethernet 0/0/1
description GE0/0/1
!
interface gigabitethernet 0/0/2
description GE0/0/2
!
interface gigabitethernet 0/0/3
description GE0/0/3
!
interface gigabitethernet 0/0/4
description GE0/0/4
switchport mode trunk
trusted
trusted vlan 1-4094
!
interface gigabitethernet 0/0/5
description GE0/0/5
!
interface port-channel 0
trusted
trusted vlan 1-4094
!
interface port-channel 1
trusted
trusted vlan 1-4094
!
interface port-channel 2
trusted
trusted vlan 1-4094
!
interface port-channel 3
trusted
trusted vlan 1-4094
!
interface port-channel 4
trusted
trusted vlan 1-4094
!
interface port-channel 5
trusted
trusted vlan 1-4094
!
interface port-channel 6
trusted
trusted vlan 1-4094
!
interface port-channel 7
trusted
trusted vlan 1-4094
!
interface vlan 1
ip address 10.6.8.242 255.255.255.0
!
interface vlan 139
ip address 172.31.0.1 255.255.0.0
mtu 1400
no suppress-arp
!
interface vlan 777
ip address 10.254.7.18 255.255.255.0
!
interface tunnel 1
tunnel source controller-ip
tunnel destination 10.1.254.11
tunnel mode gre 11
tunnel vlan 139
no inter-tunnel-flooding
mtu 1400
tunnel keepalive
tunnel keepalive 10 3
!
interface tunnel 2
tunnel source controller-ip
tunnel destination 10.1.254.10
tunnel mode gre 10
tunnel vlan 139
no inter-tunnel-flooding
mtu 1400
tunnel keepalive
tunnel keepalive 10 3
!
ip route 10.224.0.0 255.255.0.0 10.254.7.1
ip route 10.254.0.0 255.255.0.0 10.254.7.1
ip route 10.1.0.0 255.255.0.0 10.254.7.1
ip default-gateway 10.6.8.1 1
ip default-gateway 10.254.7.1
uplink wired vlan 777 uplink-id link1
!
service dhcp
ip dhcp pool vlan_139
dns-server 8.8.8.8
default-router 172.31.0.1
network 172.31.0.0 255.255.252.0
!
mgmt-user admin root fb3fcc40018dc9111454cb68135d5ce15b0ba2dc282773f1ed
firewall
cp-bandwidth-contract trusted-ucast 65535
cp-bandwidth-contract trusted-mcast 3906
cp-bandwidth-contract untrusted-ucast 9765
cp-bandwidth-contract untrusted-mcast 3906
cp-bandwidth-contract route 976
cp-bandwidth-contract sessmirr 976
cp-bandwidth-contract vrrp 512
cp-bandwidth-contract auth 976
cp-bandwidth-contract arp-traffic 3906
cp-bandwidth-contract l2-other 1953
!
aaa profile "DUMARS_GUEST_OPEN"
!
aaa authentication captive-portal "DUMARS_GUEST"
no user-logon
guest-logon
!
aaa authentication captive-portal "logon_cppm_sg"
no user-logon
guest-logon
!
aaa authentication wired
profile "DUMARS_GUEST_OPEN"
!
hostname DMZ-7205
clock timezone America/Los_Angeles
country US
sync-files /flash/upload/custom/logon_cppm_sg command-index 2
Appendix B - PLATFORM SCALING
CAMPUS SWITCHING
Switch Series                Link Aggregation  Trunk Groups/       Max # Interfaces  Layer 2  VLAN IP
                             Groups (LAGs)     Multi-chassis LAGs  per LAG/MCLAG     VLANs    Interfaces (SVIs)
Aruba 2930 Series (16.06)    60                60                  8                 2048     2048
Aruba 3810 Series (16.06)    144               144                 8                 4094     4094
Aruba 5400R Series (16.06)   144               144                 8                 4094     4094
Aruba 8320 Series (CX 10.1)  32                48                  8                 512      256
Aruba 8400 Series (CX 10.1)  128               128                 8                 256      512
Figure 76 - Switch Interface Scaling
Switch Series                      MAC Entries  IPv4 ARP Entries  IPv6 ND Entries  Dual Stack Clients
                                                                                   (1 IPv4 ARP + 2 IPv6 ND)
Aruba 3810 Series (16.06)          64,000       25,000            25,000           8,333
Aruba 5400R Series (16.06)         64,000       25,000            25,000           8,333
Aruba 8320 Series (CX 10.1.020) ¹  47,000       47,000            44,000           22,000
Aruba 8400 Series (CX 10.1.020)    64,000       64,000            48,000           32,000
¹ Configured in Mobile-First Mode
Table 77 – ARP and ND Scaling
Aruba 5400R Series (16.06) 16 128 16 10,000
Aruba 8320 Series (CX 10.1) 32 32 32
Aruba 8400 Series (CX 10.1) 32 32 32
Figure 79 – OSPFv2 Scaling
WIRELESS
MOBILITY MASTER
HARDWARE
VIRTUAL
Cluster Members  Max. APs / Cluster  Max. Clients / Cluster  AAC / Controller  AAC-S / Controller  UAC / Controller  UAC-S / Controller
2 32 2,048 16 16 1,024 1,024
3 48 3,072 16 16 1,024 1,024
4 64 4,096 16 16 1,024 1,024
Figure 83 - 7024 Cluster Scaling
Cluster Members  Max. APs / Cluster  Max. Clients / Cluster  AAC / Controller  AAC-S / Controller  UAC / Controller  UAC-S / Controller
2 64 4,096 32 32 2,048 2,048
3 96 6,144 32 32 2,048 2,048
4 128 8,192 32 32 2,048 2,048
Figure 84 - 7030 Cluster Scaling
7200 SERIES
7205 Cluster Scaling
Cluster Members  Max. APs / Cluster  Max. Clients / Cluster  AAC / Controller  AAC-S / Controller  UAC / Controller  UAC-S / Controller
2 256 8,192 128 128 4,096 4,096
3 384 12,288 128 128 4,096 4,096
4 512 16,384 128 128 4,096 4,096
5 640 20,480 128 128 4,096 4,096
6 768 24,576 128 128 4,096 4,096
7 896 28,672 128 128 4,096 4,096
8 1,024 32,768 128 128 4,096 4,096
9 1,152 36,864 128 128 4,096 4,096
10 1,280 40,960 128 128 4,096 4,096
11 1,408 45,056 128 128 4,096 4,096
12 1,536 49,152 128 128 4,096 4,096
Figure 85 - 7205 Cluster Scaling
7210 Cluster Scaling
Cluster Members  Max. APs / Cluster  Max. Clients / Cluster  AAC / Controller  AAC-S / Controller  UAC / Controller  UAC-S / Controller
2 512 16,384 256 256 8,192 8,192
3 768 24,576 256 256 8,192 8,192
4 1,024 32,768 256 256 8,192 8,192
5 1,280 40,960 256 256 8,192 8,192
6 1,536 49,152 256 256 8,192 8,192
7 1,792 57,344 256 256 8,192 8,192
8 2,048 65,536 256 256 8,192 8,192
9 2,304 73,728 256 256 8,192 8,192
10 2,560 81,920 256 256 8,192 8,192
11 2,816 90,112 256 256 8,192 8,192
12 3,072 98,304 256 256 8,192 8,192
Figure 86 - 7210 Cluster Scaling
Cluster Members  Max. APs / Cluster  Max. Clients / Cluster  AAC / Controller  AAC-S / Controller  UAC / Controller  UAC-S / Controller
2 1,024 24,576 512 512 12,288 12,288
3 1,536 36,864 512 512 12,288 12,288
4 2,048 49,152 512 512 12,288 12,288
5 2,560 61,440 512 512 12,288 12,288
6 3,072 73,728 512 512 12,288 12,288
7 3,584 86,016 512 512 12,288 12,288
8 4,096 98,304 512 512 12,288 12,288
9 4,608 110,592 ¹ 512 512 12,288 12,288
10 5,120 122,880 ¹ 512 512 12,288 12,288
11 5,632 135,168 ¹ 512 512 12,288 12,288
12 6,144 147,456 ¹ 512 512 12,288 12,288
¹ The number of potential clients supported in the cluster exceeds the 100,000 limit of the Mobility Master.
7240/7240XM/7280 Cluster Scaling
Cluster Members  Max. APs / Cluster  Max. Clients / Cluster  AAC / Controller  AAC-S / Controller  UAC / Controller  UAC-S / Controller
2 2,048 32,768 1,024 1,024 16,384 16,384
3 3,072 49,152 1,024 1,024 16,384 16,384
4 4,096 65,536 1,024 1,024 16,384 16,384
5 5,120 81,920 1,024 1,024 16,384 16,384
6 6,144 98,304 1,024 1,024 16,384 16,384
7 7,168 114,688 ² 1,024 1,024 16,384 16,384
8 8,192 131,072 ² 1,024 1,024 16,384 16,384
9 9,216 147,456 ² 1,024 1,024 16,384 16,384
10 10,240 ¹ 163,840 ² 1,024 1,024 16,384 16,384
11 11,264 ¹ 180,224 ² 1,024 1,024 16,384 16,384
12 12,288 ¹ 196,608 ² 1,024 1,024 16,384 16,384
¹ The number of potential Access Points supported in the cluster exceeds the 10,000 limit of the Mobility Master.
² The number of potential clients supported in the cluster exceeds the 100,000 limit of the Mobility Master.
Figure 88 - 7240/7240XM/7280 Cluster Scaling
VIRTUAL
MC-VA-50 Cluster Scaling
Cluster Members  Max. APs / Cluster  Max. Clients / Cluster  AAC / Controller  AAC-S / Controller  UAC / Controller  UAC-S / Controller
2 50 800 25 25 400 400
3 75 1,200 25 25 400 400
4 100 1,600 25 25 400 400
5 125 2,000 25 25 400 400
6 150 2,400 25 25 400 400
7 175 2,800 25 25 400 400
8 200 3,200 25 25 400 400
9 225 3,600 25 25 400 400
10 250 4,000 25 25 400 400
11 275 4,400 25 25 400 400
12 300 4,800 25 25 400 400
Figure 89 - MC-VA-50 Cluster Scaling
MC-VA-250 Cluster
Cluster Members  Max. APs / Cluster  Max. Clients / Cluster  AAC / Controller  AAC-S / Controller  UAC / Controller  UAC-S / Controller
2 250 4,000 125 125 2,000 2,000
3 375 6,000 125 125 2,000 2,000
4 500 8,000 125 125 2,000 2,000
5 625 10,000 125 125 2,000 2,000
6 750 12,000 125 125 2,000 2,000
7 875 14,000 125 125 2,000 2,000
8 1,000 16,000 125 125 2,000 2,000
9 1,125 18,000 125 125 2,000 2,000
10 1,250 20,000 125 125 2,000 2,000
11 1,375 22,000 125 125 2,000 2,000
12 1,500 24,000 125 125 2,000 2,000
Figure 90 - MC-VA-250 Cluster Scaling
MC-VA-1K Cluster
Cluster Members  Max. APs / Cluster  Max. Clients / Cluster  AAC / Controller  AAC-S / Controller  UAC / Controller  UAC-S / Controller
2 1,000 16,000 500 500 8,000 8,000
3 1,500 24,000 500 500 8,000 8,000
4 2,000 32,000 500 500 8,000 8,000
5 2,500 40,000 500 500 8,000 8,000
6 3,000 48,000 500 500 8,000 8,000
7 3,500 56,000 500 500 8,000 8,000
8 4,000 64,000 500 500 8,000 8,000
9 4,500 72,000 500 500 8,000 8,000
10 5,000 80,000 500 500 8,000 8,000
11 5,500 88,000 500 500 8,000 8,000
12 6,000 96,000 500 500 8,000 8,000
Figure 91 - MC-VA-1K Cluster
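The cluster tables above all follow the same arithmetic: cluster AP and client capacity grow linearly with member count from the per-controller AAC and UAC figures, while the Mobility Master limits of 10,000 APs and 100,000 clients (flagged in the table footnotes) bound what a single Mobility Master can actually manage. A minimal Python sketch of that calculation, with per-controller figures taken from the tables above (the function and dictionary names are illustrative, not part of any Aruba tooling):

```python
# Illustrative sketch: derive cluster-scaling rows from per-controller
# AAC (AP Anchor Controller) and UAC (User Anchor Controller) capacity.
# Platform figures come from the cluster tables above; all names here
# are hypothetical helpers, not Aruba commands or APIs.

MM_MAX_APS = 10_000       # Mobility Master AP limit (see footnotes)
MM_MAX_CLIENTS = 100_000  # Mobility Master client limit (see footnotes)

PLATFORMS = {
    "7205":      {"aac": 128,   "uac": 4_096},
    "7210":      {"aac": 256,   "uac": 8_192},
    "7240":      {"aac": 1_024, "uac": 16_384},
    "MC-VA-50":  {"aac": 25,    "uac": 400},
    "MC-VA-250": {"aac": 125,   "uac": 2_000},
    "MC-VA-1K":  {"aac": 500,   "uac": 8_000},
}

def cluster_capacity(platform: str, members: int) -> dict:
    """Compute cluster AP/client capacity and flag Mobility Master limits."""
    p = PLATFORMS[platform]
    aps = p["aac"] * members          # Max. APs / Cluster
    clients = p["uac"] * members      # Max. Clients / Cluster
    return {
        "max_aps": aps,
        "max_clients": clients,
        "exceeds_mm_ap_limit": aps > MM_MAX_APS,
        "exceeds_mm_client_limit": clients > MM_MAX_CLIENTS,
    }
```

For example, `cluster_capacity("7205", 4)` reproduces the corresponding Figure 85 row (512 APs, 16,384 clients), and `cluster_capacity("7240", 12)` trips both Mobility Master flags, matching the footnotes under Figure 88.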
For more information
http://www.arubanetworks.com/