Tech Field Day: Hugo Riveros
Hugo Riveros
SE Manager – Multi Country Area
Data Center Networking Introduction
What makes a data center network?
What is a network fabric?
A marketing term for a network design that aims to:
– Optimally interconnect 1,000, 10,000, 100,000 or more end points (servers, storage)
– Provide redundancy when any node or any link fails; failure will happen, it's just a question of time
– Minimize the number of hops to reach any other peer in the fabric, limiting the latency impact
– East/West (E/W) traffic vs North/South (N/S) traffic:
– E/W traffic = servers to servers inside the DC
– N/S traffic = clients to servers entering / servers to clients leaving the DC
Data Center Networking Architectures
Enterprise Datacenter Network Architecture Evolution
[Diagram: evolution from the traditional 3-layer design (Core / Aggregation / Access with STP, L2 at the access and L3 at the aggregation), through optimized L2/L3 fabrics (IRF/VSX, MLAG, TRILL/SPB; Spine & Leaf L2 ECMP), to the modern Spine & Leaf L3 ECMP fabric with VXLAN, EVPN and network virtualization (LACP to SW VTEPs, L2 SW & HW VTEPs, L3 HW VTEPs, vSwitches and VMs at the edge).]
– Spine = Multiple individual backbone devices that provide redundant connectivity for each leaf
– Leaf = Switch which connects to every spine switch (can be a VTEP, but not mandatory); provides entry into equidistant networks with no constraints on workload placement
– Core = A single device (logical or physical) that provides centralized connectivity to other devices (servers/switches)
– Aggregation = Aggregates multiple access switches – usually performs L2/L3 services
– Access = Typically connects into a Core or Aggregation device – usually running L2 services
– ToR = Umbrella term, referring to a switch located at the Top-of-Rack
EoR / MoR (End of Row / Middle of Row)
– EoR/MoR refers to physical location of switches where switches are placed in one rack
– Server-to-switch cables stretch from rack to rack, usually requires less equipment than ToR deployment
– Usually lower latency for intra-row traffic because of less hops
– Less problem isolation, less scalability
– EoR/MoR could be spine switches which connect to ToRs within the same Row
– Can be considered as 1 POD, replicate design to scale up multiple PODs
EoR MoR
Consistent Leaf/ToR Designs
[Diagram: spines connecting to a leaf/ToR switch pair in each rack, serving both virtualized servers (vSwitch/VMs) and bare-metal OS servers.]
* When servers and IP storage are located in the same rack, leaf functions are delivered from a single, DC-optimized switch pair per rack.
** Separate service leafs are not needed when the network services are distributed among the server/IP storage racks.
** When the network services are centralized, the border leafs can also serve as service leafs, depending on the scale and failure-zone design within the DC.
Data Center Fabric (Spine-Leaf)
Scaling up the leafs
Question: What determines the number of leafs supported in a spine leaf topology?
[Diagram: two spines with 40/100G links fanning out to Leaf 1 through Leaf 32.]
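The answer implied by the diagram: every leaf needs one uplink to every spine, so the spine's port density caps the number of leafs, and the leaf's uplink count caps the number of spines. A minimal sketch of that sizing math, with illustrative port counts (not slide data):

```python
# Sizing a two-tier spine/leaf fabric. Every leaf connects to every
# spine, so the fabric dimensions fall out of simple port arithmetic.
# The port counts below are illustrative assumptions.

spine_ports = 32    # ports per spine switch (e.g., 32 x 100G)
leaf_uplinks = 2    # uplink ports reserved on each leaf

max_leafs = spine_ports     # one spine port consumed per leaf
max_spines = leaf_uplinks   # one leaf uplink consumed per spine

print(f"Up to {max_spines} spines and {max_leafs} leafs")
```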
Data Center Fabric (Spine-Leaf)
Understanding oversubscription
[Diagram: four spines (Spine-1 through Spine-4) connected to leafs, with the L3 fabric above the L2 boundary at the leafs.]
– Oversubscription at a leaf is the ratio of server-facing (downlink) bandwidth to fabric-facing (uplink) bandwidth; a worked example follows below
– Some customers need to scale beyond the port density supported by a spine/leaf fabric
– Recommendation: multiple spine/leaf L3 fabrics/PODs
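The oversubscription calculation, as a short sketch (the port counts are illustrative assumptions, not slide data):

```python
# Oversubscription at a leaf: server-facing bandwidth divided by
# fabric-facing bandwidth. Port counts are assumptions for illustration.

def oversubscription(downlinks, downlink_gbps, uplinks, uplink_gbps):
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

# 48 x 25G server ports vs 6 x 100G uplinks -> 1200G / 600G = 2.0 (2:1)
print(oversubscription(48, 25, 6, 100))
```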
Aruba Data Center Networks
Benefits of modern DCs
– A stable, low-latency fabric with high availability / performance / density / scalability
– N/S campus/client traffic connectivity achieved via border switches (service leafs) / routers
– L2 extension between racks: essentially driven by VM mobility
– VXLAN as the de-facto solution adopted by many overlay vendors
– Scalable: up to 16M Virtual Network Identifiers (VNIs) to support multi-tenancy (see the sketch below)
– Addresses oversubscription: spine and leaf gives fewer layers and reduced hop count / latency / oversubscription levels
– Designed for E/W application traffic performance (80% of traffic is E/W)
– Addresses MAC address explosion: the DC fabric becomes a big L3 domain (no STP) with L2 processing (encapsulation / de-capsulation) at the edge
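The 16M figure in the list above comes straight from VXLAN's 24-bit VNI field (RFC 7348); a quick check:

```python
# VXLAN's header carries a 24-bit VNI, versus 802.1Q's 12-bit VLAN ID.
VNI_BITS, VLAN_BITS = 24, 12
print(2 ** VNI_BITS)   # 16777216 (~16M) virtual network identifiers
print(2 ** VLAN_BITS)  # 4096 traditional VLANs
```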
Data Center Networking Portfolio
Management and Orchestration
Core, Aggregation and Data Center
Addressable Market for Aruba Switching will Double from CY18
[Chart: switching TAM ($B) growing from roughly $13B today toward $23–25B, with Telco and "DC in the Enterprise" as the segments opening up in 2019+. Near term: leverage the extensive FlexFabric solution, with the long-term FlexFabric portfolio addressing demanding deployment scenarios as the portfolio converges from FlexFabric to Aruba.]
Check For Aruba Fit: Customer Qualification Overview
Similar to Campus Positioning
Foundational questions: Aruba customer? Interest in CX innovations? Now versus later?
Compact, cost effective, 100GbE (small core/spine):
– 1/10 GbE downlinks x 40/100G uplinks
– Low-latency, high-availability connectivity
– Perfect for out-of-band management (iLO) connectivity
Highest density, 25/100GbE flexibility and features:
– 10 or 25 GbE downlinks x 40/100G uplinks
– VXLAN support for network virtualization
– Low-latency, high-availability connectivity
– Enhanced support for network telemetry
Storage/HPC ToR:
– Full data path error detection
– 1/10 GbE downlinks x 100G uplinks
– VXLAN support for network virtualization
– Deep buffers to ensure connectivity
– Flexible port configurations
ARUBA CX SWITCHING
The next generation of switching
CUSTOMER NETWORKING CHALLENGES IN THE EDGE-CLOUD ERA
[Diagram: AOS-CX at the CX core: Network Analytics Engine, Time-Series Database, 100% REST APIs, State Database, micro-services architecture.]
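Since the state database is exposed through those REST APIs, switch state can be pulled with a few HTTP calls. A hedged sketch follows; the hostname, credentials, and the v10.04 path are illustrative assumptions, so check the AOS-CX REST API reference for your release:

```python
# Hedged sketch: reading switch state over the AOS-CX REST API.
# Hostname, credentials, and the v10.04 path are assumptions.
import requests

BASE = "https://switch.example.net/rest/v10.04"

session = requests.Session()
session.verify = False  # lab only; use CA-signed certs in production

# Authenticate; a session cookie is returned on success
session.post(f"{BASE}/login",
             data={"username": "admin", "password": "secret"})

# Query the state database for basic system information
system = session.get(f"{BASE}/system").json()
print(system.get("hostname"))

session.post(f"{BASE}/logout")  # release the session
```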
CX 8400 vs CX 6400
– CX 8400: modular, carrier-class HA, deep buffers, large tables
– CX 6400: modular, core and aggregation, high-density access
7 modular switches | 4 integrated-power switches | 1 operating model
– Future ready: 1/10G to 25/50G uplinks for scale and investment protection
– Flexible growth: VSF stacking for ease of management and collapsed architectures
– Built for Wi-Fi 6: Smart Rate on all ports and 60W always-on PoE
880G capacity | 10-member stacking | 2880W 60W PoE
ARCHITECTURE MATTERS
ARUBA GEN7 ASIC
BUILT ON CLOUD-NATIVE PRINCIPLES
[Diagram: Aruba Network Analytics Engine, Time-Series Database, and a microservices architecture on AOS-CX.]
[Diagram: dynamic segmentation example: Academic Records, BYOD, IoT, Guest, and AirGroup traffic kept separate, with a look-alike malicious domain (n0tma1ware.biz) called out.]
Extended to Access and AOS-CX
– Dual control and data planes with improved performance to bring live upgrades to modular access, plus always-on PoE
– Secure, unified access: segmentation that scales across wired and wireless for users and IoT, enabled by policy-based automation
– Industry-standard and consistent architecture across campus and data center
NAE integrated everywhere in the network (CX Core and CX Access)
Without NAE:
– Needle in the haystack: difficult to recreate and/or identify issues
– Latency and large, unfiltered data sets: delays in data processing and analysis
– Manual correlation and limited actionable insights: resource intensive, with longer MTTR
With NAE:
– Real-time, network-wide visibility with actionable data
– Automated monitoring for rapid detection of issues
– A 24/7 network technician built in to every switch
ARUBA NETWORK ANALYTICS ENGINE
POWERING DISTRIBUTED ANALYTICS, ONE SWITCH AT A TIME
– ITSM-integrated change management with ServiceNow / TopDesk
– Transceiver diagnostics for health and failure root cause
– Predictive fault finder for general network health
– Proactive email notifications for critical events and errors
– VSX health monitor to highlight VSX stability
– VoIP monitoring based on IPSLA transactions
– Auto-config archiving with TFTP on config updates
– Monitor and change route when failure detected
– MAC and ARP count analytics to ensure proper device load
https://community.arubanetworks.com/t5/Developer-Community/Streaming-Telemetry-Real-time-notifications-to-Whatsapp-when-a/td-p/551384
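NAE agents such as those above (and the WhatsApp notification example in the link) are Python scripts running on the switch itself. A rough sketch of the agent skeleton, modeled on published NAE examples; the monitor URI, condition grammar, and action helper are assumptions to verify against the NAE developer documentation:

```python
# Hedged sketch of an NAE agent that raises a syslog alert when a port
# goes down. The NAE framework injects the NAE, Monitor, Rule, and
# ActionSyslog names; URI and condition string are assumptions.

Manifest = {
    'Name': 'port_down_monitor',
    'Description': 'Alert when an interface link state goes down',
    'Version': '1.0',
    'Author': 'example'
}

class Agent(NAE):
    def __init__(self):
        # Monitor link state of all interfaces via the state database
        uri = '/rest/v1/system/interfaces/*?attributes=link_state'
        self.m1 = Monitor(uri, 'Interface link state')

        # Fire when a link transitions from up to down
        self.r1 = Rule('Link went down')
        self.r1.condition('transition {} from "up" to "down"', [self.m1])
        self.r1.action(self.on_link_down)

    def on_link_down(self, event):
        ActionSyslog('NAE: interface link down detected')
```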
ERROR-FREE NETWORK CONFIGURATION
SOME LAT CUSTOMERS WITH CX
AUTOMATE NETWORK LIFECYCLE
Consistency, Conformance, Deployment, Change Validation
– Edit and Validate: highlight inconsistencies and conformance violations to ensure business standards are met
– Deploy and Audit: never blow through a change window again; audit change impact with commit / rollback
– Visibility: network health indicators aligned to your concerns
– Troubleshoot: with the CX Mobile App
THANK YOU