FB DC Network Evolution v0.1

Arista Universal Cloud Network

for VIETTEL NETWORK

Confidential. Copyright © Arista 2020. All rights reserved.
AGENDA
Section01 : FaceBook Data Center Network Evolution

Section02 : Arista Universal Cloud Network for Viettel



FB Data Center
Network Evolution



FB Data Center Networking – from the beginning ...
Pre-2014
- "FatCat" architecture
- Big-box Cisco N7K and C3K switches

2014
- F4 architecture with merchant silicon + FBOSS
- Fully open and disaggregated hardware designed through the Open Compute Project (OCP)
- IP fabric with BGP end-to-end

2019
- Reinvented with the F16 architecture
- Uniform DC network with a new platform: SOC/single-chip 12.8 Tbps switch (Minipack / Arista 7368X4)
- More open, more bandwidth (4x), more scale (6 x F16) and more savings (CapEx and OpEx)

now - 2021
- Keep the F16 architecture
- Upgrade from 100G CWDM4 to 200G FR4
- New ToR/Fabric/Spine platforms with a 25.6 Tbps chipset



FB Overall Network Overview



FB Data Center Network Evolution – 1st Generation Cluster based
Architecture



FB Data Center Network Evolution – 2nd Generation Cluster based
Architecture
• Referred to as the FATCAT or 4-post architecture

Cisco N7018

Cisco N7018

Cisco N3k



Challenges with Cluster Based Architecture
• A CSW failure reduces intra-cluster capacity to 75%; an FC failure reduces inter-cluster
capacity to 75%.

• The cluster size is limited by the size of the CSW; CSW port density limits the scale and
bandwidth of these topologies.

• Very large switches (Eg: Nexus 7018) restrict vendor choice and are produced in smaller
volumes which result in high per-port CapEx and OpEx.

• Large switches tend to have oversubscribed switching fabrics (internally), so all ports
cannot be used simultaneously.

• The proprietary internals of these big switches prevent customization, complicate
management, and extend waits for bug fixes to months or even years.



FB Data Center Network Evolution – F4 (2014)

• Server Pods: racks
• Edge Pods: uplinks
• 4 parallel spine planes
• Up to 1:1 racks:spine (non-blocking); practical so far: 2:1
• Links: start at 40G, upgrade to 100G; fiber: SMF
• Routing: BGP



FB DC F4 - Pod Deployment

• Break Cluster Switch into small identical units


• Server Pod: 48 racks
• 4 x 100G per rack (400G)
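The pod arithmetic above can be checked with a quick sketch (plain Python; all figures taken from this slide):

```python
RACKS_PER_POD = 48
UPLINKS_PER_RACK = 4   # one uplink per spine plane
LINK_GBPS = 100        # after the 40G -> 100G upgrade

rack_uplink_gbps = UPLINKS_PER_RACK * LINK_GBPS          # 400G per rack, as above
pod_uplink_tbps = RACKS_PER_POD * rack_uplink_gbps / 1000
print(rack_uplink_gbps, pod_uplink_tbps)  # 400 19.2
```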



FB DC F4 – 4 Spine Planes

• Scalability – without large boxes
• Capacity – load balanced between and within the planes
• Reliability – contained failure domains and large-scale ops
• Flexibility – independent planes



FB DC F4 – 4 Spine Planes

All three pictures show exactly THE SAME topology



FB DC F4 – Path between Servers



FB DC F4 - Data Center Region

Traffic Trend in FB DC

• Fabric Aggregation (FA): inter-building fabric of fabrics
• Up to 3 large buildings (fabrics)
• 100 Tbps-level of regional uplink capacity per fabric (max)



Facebook datacenter physical topology



Hardware for F4 – Facebook Platforms

128x40G FSW (6-pack) / 128x100G FSW (Backpack)
16x40G RSW (Wedge) / 32x100G RSW (Wedge 100)

• From 6-pack to Backpack, from Wedge to Wedge 100, from 10/40G to 25/100G, from Trident2 to Tomahawk 2



Hardware for F4 – Facebook Platforms



Hardware for F4 – Arista 7308X

Arista 7308X

Max Zavyalov, Network Engineer in Edge & Network Services team


https://www.facebook.com/zuck/posts/10103136694875121
2016 Facebook Luleå Datacenter F4 Design
What is the current
FB DC network ...

Common DC Network: A system with Many Parameters

• Bandwidth and capacity
• Scale and scalability
• Topology and routing
• Regional composition
• Lifecycle: deployment and retrofits
• Automation and management
• Servers and services
• Switch ASICs
• Optics and link speed
• Power and cooling
• Fiber infrastructure
• Physical space

Timelines: need-by vs. technology availability and development



FaceBook Drivers

• Subscribers: 1.35B to 2.6B, from different regions → drives scaling, capacity, region, bandwidth
• Applications: video, realtime → drives scaling, capacity, service and server
• Regions: 18 DCs around the world → drives scaling, region, topology and routing
• Optics: CWDM4-OCP, DR/FR → drives ASICs, optics, fiber infrastructure
• Power & Cooling: physical constraints, supply → drives ASICs, power and cooling, physical space



Rethink and Transform Facebook Data Center Network:
Growing Pressure
• More regions, bigger regions: expanding mega-regions (5-6 buildings)
= accelerated fabric-to-fabric east-west demand (F4 supports up to 3 buildings)
• Higher per-rack speeds: compute-storage and AI disaggregation need more inter-rack
bandwidth; NIC technology is easily capable of driving 1.6T and more per rack
• Both require larger fabric Spine capacity (by 2-4x)
• DC networks – a system with many parameters
- Main goals:
≫ Bandwidth capacity
≫ Scale and scalability
- Main concerns:
≫ Optics and link speeds: optics availability for 400G @ scale
≫ Power and cooling: the power in a region is a fixed resource; a 128-port
switch is the best fit for FB based on their DC floor plan and number of servers
- ….



What’s Next?

• Today's fabric: 4 x 128p multi-chip 400G fabric switches

• How to achieve the next 2-4x after 1.6T?
- Adding more fabric planes on multi-chip hardware = too much power...
- Increasing link speeds = would need 800G or 1600G optics in 2-3 years…



Rethink and Transform Facebook Data Center Network:
Optics
• Concerns: 400G availability @ scale
• We start large – no time for new tech to ramp-up
• Risky dependency on bleeding-edge tech
• High cost of early adoption
• Interop for upgrade / retrofit paths
• Large-scale ISP and OSP structured fiber plants

100G CWDM4-OCP



Rethink and Transform Facebook Data Center Network:
Power & Efficiency
• Node radix-128 – best fit at Facebook scale
• Achieved by building intra-node topologies from radix-32 sub-switches (ASIC + uServer)
• 12 small-radix subsystems – OK @ 100G
• At higher speed + growing scale, the efficiency starts declining

Backpack Fabric Switch (FSW): a Clos of 12 sub-switches – 4 internal spine ASICs (fabric cards),
4 down toward rack switches, 4 up toward spine switches; Ethernet + BGP inside the box, with uServer control



Rethink and Transform Facebook Data Center Network:
Power & Efficiency
• This is 48 FSW ASICs per pod
• Also, multi-chip Spine-tier nodes
• + optics dependency for every next generation

[Diagram: FSW1–FSW4, each built from 12 ASICs]



Rethink and Transform Facebook Data Center Network
• In the network:
- Developing F16, a next-generation data center fabric design
- 4x the capacity of the previous design
- F16 is also more scalable and simpler to operate and evolve
- Uses mature, readily available 100G CWDM4-OCP optics
≫ the same desired 4x capacity increase as 400G link speeds, but with 100G optics
• Brand-new building-block switch – Minipack
- Consumes 50 percent less power and space
- Modular and flexible
≫ can serve multiple roles in these new topologies and support the ongoing evolution of the network
• HGRID as the evolution of the Fabric Aggregator to handle the doubling of buildings per region
• FBOSS is still the software
- Changed to ensure a single code image and the same overall systems
- Supports multiple generations of data center topologies
- Supports an increasing number of hardware platforms, especially the new modular Minipack platform



Introducing F16 fabric

• From 4 x 128p multi-chip 400G (51.2 Tbps) fabric switches

Fabric 400G

• To 16 x 128p single-chip 100G (12.8 Tbps) fabric switches

Fabric 100G
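A back-of-the-envelope check of why the two designs are equivalent per rack (Python sketch; figures from this slide):

```python
f4_next_gbps = 4 * 400   # hypothetical F4 upgrade: 4 planes of 400G per rack
f16_gbps = 16 * 100      # F16: 16 planes of mature 100G CWDM4 per rack
assert f4_next_gbps == f16_gbps == 1600   # both reach 1.6T per rack
# The fabric boxes differ: the 400G path needs 51.2 Tbps multi-chip switches,
# while F16 uses single-chip 12.8 Tbps (128 x 100G) switches.
print(f16_gbps / 1000, "Tbps per rack")  # 1.6 Tbps per rack
```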



Introducing F16 fabric

• Broadcom Tomahawk-3
• Same rack uplink bandwidth capacity as 4 x 400G: up to 1.6T per ToR
• 3x+ fewer chips and control planes = TCO and ops efficiency
• 2x+ less power per Gbps than 100G F4 fabrics
• Mature and available optics, instead of a high-volume bleeding-edge ramp-up: OCP 100G CWDM4
• Realistic next-step scalability:
- optimized for power in current and future generations
- 200G or 400G optics as the way to achieve the next 2x or 4x
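The "3x+ fewer chips" claim follows from the ASIC counts given earlier (48 FSW ASICs per pod in F4); a quick sketch:

```python
f4_fsw_asics_per_pod = 4 * 12   # 4 Backpack FSWs, each a Clos of 12 sub-switches
f16_fsw_asics_per_pod = 16 * 1  # 16 single-chip Tomahawk-3 fabric switches
print(f4_fsw_asics_per_pod, f16_fsw_asics_per_pod)    # 48 16
print(f4_fsw_asics_per_pod // f16_fsw_asics_per_pod)  # 3 -> the "3x+ fewer chips" claim
```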



F16 Fabric Design

• Up to 16-plane architecture: achieving 4x capacity with 100G links
• Up to 1.6T capacity per rack
• Single-chip radix-128 building blocks
• Spine scale locked at 1.33:1 from the start (36 FSW–Spine uplinks for 48 racks/pod)
• Each spine plane can connect up to 96 pods
• No Edge Pods – replaced with direct Spine uplinks to the new large-scale disaggregated FA
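The 1.33:1 spine scale is just the down/up port ratio on each fabric switch; a minimal check:

```python
racks_per_pod = 48           # FSW downlink ports (one per rack, per plane)
spine_uplinks_per_fsw = 36   # FSW uplink ports into its spine plane
ratio = racks_per_pod / spine_uplinks_per_fsw
print(round(ratio, 2))  # 1.33 -> the locked 1.33:1 spine scale
```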



F16.8P: 8-plane variant

• Physical infra and fiber designed and built for the full F16
• Starting number of parallel planes: 8
• 800G capacity per rack (8 x 100G)



F16 Region Evolution: HGRID

• Edge Pods → direct Spine–FA uplinks
• No single device is big enough to mesh F16 fabrics (36 x 16 = 576 100G ports) – a disaggregated solution is required
• Goal: mega-region – beyond 3 fabrics
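The 576-port figure comes from multiplying per-plane spine uplinks by the number of planes; a one-line check:

```python
spine_uplinks_per_plane = 36  # spine-to-FA uplinks per plane
planes = 16
mesh_ports = spine_uplinks_per_plane * planes
print(mesh_ports)  # 576 x 100G ports per fabric -- beyond any single chassis
```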



F16 Region Evolution: HGRID

• HGRID – connecting slices of matching Spine switches across F16s
• Partial mesh = additional routing and reachability considerations



F16 region evolution: HGRID

• HGRID entity composition:



F16 mega-region (6 buildings)

• Sample 6-building region with full-size F16 fabrics
• Petabit-level regional uplink capacity, per fabric
• Evolution of the Fabric Aggregator architecture with new building blocks
• BGP routing end-to-end, designed for reliability, fast convergence, and FIB fit



FB Data Center - High Level
[Diagram: DR and EBB tiers above 16 Fabric Aggregators (FA01–FA16); spine planes S001–S016 in each of six F16 fabrics (buildings); pods 01–96 per fabric.]

TOTALS: 4,608 racks per F16 fabric; 6 x 4,608 = 27,648 racks per region
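The rack totals follow directly from the pod and building counts; a quick check:

```python
pods_per_fabric = 96
racks_per_pod = 48
buildings = 6
racks_per_fabric = pods_per_fabric * racks_per_pod
region_racks = buildings * racks_per_fabric
print(racks_per_fabric, region_racks)  # 4608 27648
```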



Hardware for F16

• Facebook Minipack: 128 x 100G, 4RU, TH-3, FBOSS

• Arista 7368X4 (Glacier): 128 x 100G, 4RU, TH-3, FBOSS or EOS

• Where to use Minipack/Arista 7368x4?


-FSW – Fabric Switch
-SSW – Spine Switch
-FA – Fabric Aggregator



Hardware for F16: Minipack/Arista 7368X4

• 12.8 Tbps throughput
• Modular design, instead of a fixed "pizza box" design
- 8 slots with 4x400G (PIM-4DD / QSFP-DD) or 16x100G (PIM-16Q / QSFP28)
≫ with 4 reverse gearboxes on the PIM-16Q card
- 8x200G, 16x40G/100G
- 100G CWDM4-OCP optics with reverse gearbox
≫ supports mix-and-match
- Redundant power
- Redundant fans
• Multiple roles: leaf (Fabric Switch), Spine Switch, regional Fabric Aggregator (FA)
• Switch-on-a-chip (SOC) rack switches: Wedge-100S



From Backpack/Inyo to Minipack/Arista 7368x4 (Glacier)



Simpler and Flatter

[Diagram: tier and hop-count comparison – 8 vs 6, 24 vs 12]



Minipack – Next-Generation 128x100G Switch
• Switch ASIC: Broadcom Tomahawk-3
• Size: 4RU (vs. 8RU Backpack)
• Power: ~1.4 kW budgetary (at full line rate, fully populated with 128x QSFP28 CWDM4 optics)
• Interfaces: 128x100G or 32x400G
• Radix: 128
• ½ the size and power of Backpack



Minipack Functional Block Diagram



MiniPack with PIM-16Q & PIM-4DD

Front-view with 128x100GE Front-view with 32x400GE

Rear-view 4 PSU 8 FAN



Switch Main Board (SMB) PIM-16Q PIM-4DD



Arista 7368X4: Hyper-scale Cloud – Cost- and Power-Efficient Bandwidth
• Demand for more bandwidth in the cloud
- High network-radix modular system
- High-performance 12.8 Tbps switch
• Choice of port module configurations
- 16 x 100G QSFP
- 8 x 200G QSFP56
- 4 x 400G (OSFP and QSFP-DD)
• 4U system optimized for cloud networks
• Improved power efficiency per bandwidth
• Upgradeable to next generation

7368X Series: 128 x 100G ports / 64 x 200G ports / 32 x 400G ports

Already released and shipping in volume



7368X Front – Management and I/O Modules

Management Module | 4 x 400G QSFP-DD (or 4 x 400G OSFP) | 16 x 100G QSFP (8 x 200G mode)



7368X Rear – Switch Card and common equipment
Single switch card / module (removable)
2U fan modules
AC PSU (1900W); up to 4 PSUs (2 by default)
High Level Schematic: 7368X - System

[Block diagram: 4 PSUs and 5 fan modules; management card; single switch card with one Tomahawk-3 (128 x 100G / 32 x 400G); 8 gearboxes (GB) feeding 16 x 100G QSFP100 and 4 x 400G OSFP port modules]



7368X – Ease of Maintenance

•All components field removable:


-Switch Card – remove from rear without cable changes
-Management Module – removes from front
-100G and 400G Modules – hot swap
-Power Supplies – rear accessible and hot swap
-Fan Modules – individually removable and hot swap



Arista 7368X4 – Joint Development Between Facebook and Arista



F16 Summary

• F16 fabric: achieving 4x bandwidth at scale, without 4x-faster links
• 100G links: not forced to adopt next-gen optics from day 1
• 8 planes, 16 planes: a new dimension of scaling
• Power savings: both now and in future iterations
• Next steps: a clear path to the next 2-4x – on specific tiers or all around
• Simpler: single-chip large-radix systems improve efficiency
• Flatter: 3x+ fewer ASICs, 2.25x+ fewer tiers, 2-3x fewer hops between servers
• Minipack: one flexible and efficient building block for all roles in the fabric
• HGRID: disaggregated aggregation – scaling multi-fabric regions in both bandwidth and size



Features
● First IPv6!
● BGP v6 + v4
○ Complex route-maps
○ 96-way ECMP
○ High churn
○ Neighbor scale
○ Route scale
● BFD
○ v4 + v6
○ RFC 7130
○ Echo mode (explore)
● ALPM
○ 200K v6 + 180K v4 (high scale)
○ Fast programming
● Convergence
○ Link flaps
○ Neighbor flaps
○ OIR (Sup, SW, LC, PWR, FAN)
○ System failure
○ Agent crash/restart
● Time to …
○ SSH to system
○ Linecard & all ports up
○ BGP neighbors up
○ FIB programmed
○ Everything up & working
● Link speed
○ 100G (CWDM4-100G)
○ 40G (LR4/CWDM4)
○ 200G (2-speed / QSFP-56)
○ Mix & match
● Visibility into platform commands
● Usage & headroom visibility for resources
● Writing to SSD
● Interoperate with 7500R (J/J+), Firewheel, Acacia & Juniper PTX & FBOSS
● CLI & eAPI response time
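The 96-way ECMP and multi-plane load balancing above rely on deterministic flow hashing. The sketch below is illustrative only (not FBOSS's or EOS's actual hash function): it shows how a 5-tuple hash pins each flow to one equal-cost path so packets within a flow are not reordered:

```python
import hashlib

def pick_next_hop(flow, num_paths):
    """Hash a (src, dst, proto, sport, dport) tuple onto one of num_paths paths."""
    digest = hashlib.sha256(repr(flow).encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_paths

flow = ("10.128.225.10", "10.129.241.20", 6, 40000, 443)
plane = pick_next_hop(flow, 16)           # choose one of 16 fabric planes
assert pick_next_hop(flow, 16) == plane   # deterministic: same flow -> same path
print(plane, pick_next_hop(flow, 96))     # plane index and 96-way ECMP index
```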
Platform updates in 2021…



Hardware for F16: platform updates

- Keep the same F8/F16 architecture, but upgrade each link from 100G CWDM4 to 200G FR4
- Upgrade to the new ToR Wedge 400 and Minipack2 / Arista 7388X4

Wedge 400 vs Wedge 400C – Next-Gen ToR
- Based on a 12.8 Tbps chipset: Tomahawk-3 (Wedge 400) or Cisco Silicon One Q200L (Wedge 400C)
- Uplinks: 16 x 100/200/400G
- Downlinks: 32 x 100/200G
- 4x the capacity of Wedge 100 (3.2 Tbps → 12.8 Tbps)
- 200G FR4 deployed at scale



Hardware for F16: platform updates
Next FSW / Spine / FA with Minipack2 and Arista 7388X4
- Based on a 25.6 Tbps chipset: Tomahawk-4
- Radix 128 at 200G: 128 x 200G or 64 x 400G
- 2x the capacity of Minipack (12.8 Tbps → 25.6 Tbps)

FB Minipack2 / Arista 7388X4
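The upgrade multiples quoted on these platform slides can be tabulated (Python sketch; all figures from the slides):

```python
# Per-switch bandwidth figures quoted on these slides (Tbps).
platforms_tbps = {
    "Wedge 100 (ToR)":        3.2,
    "Wedge 400 / 400C (ToR)": 12.8,
    "Minipack / 7368X4":      12.8,
    "Minipack2 / 7388X4":     25.6,
}
tor_gain = platforms_tbps["Wedge 400 / 400C (ToR)"] / platforms_tbps["Wedge 100 (ToR)"]
fsw_gain = platforms_tbps["Minipack2 / 7388X4"] / platforms_tbps["Minipack / 7368X4"]
print(tor_gain, fsw_gain)  # 4.0 2.0
```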



WE WORK TOGETHER!!!



Reference

• Introducing data center fabric, the next-generation Facebook data center network
• Reinventing Facebook's data center network
• Fabric Aggregator: A flexible solution to our traffic demand
• Data Center Network @Facebook
• Facebook Datacenter Network Architecture
• OCP 2019
• OCP Summit 2021



Q&A



Facebook Test Topology and Combinations



FB Glacier A Test Topology
[Diagram: two pods under two planes, each pod with 4 ToRs; FA tier built from "Any Switch" / 2 x 7280R]
- Cables, UU–DU: 4 x 2 x 2 (per side)
- FSW to SSW: 48 links x 2 planes x 2 pods
- SSW to FA: 8 (per DU) x 2 DUs x 4 SSWs x 2 planes
- IXIA: 1 x 100G at each FA; 8 x 100G to test the FA; 4 x 100G + 1 x 100G at each group of 4 ToRs



FB Glacier Solution TB-A Topology
[Detailed lab topology diagram. Highlights: HNS (AS10001) and IXIA injecting 6K IPv4 + 12K IPv6 routes above DR01 (AS32934) and EB-01/EB-02 (AS64562); two FAs with UU nodes (AS65501 / AS8001–AS8002) and DU nodes (AS7001–AS7004); SSW-1…SSW-4 per fabric (AS65074, AS65401–AS65402); FSW-1/FSW-2 pairs (AS6001–AS6002); four RSWs per pod (AS2001–AS2004), each advertising /32 and /128 loopbacks plus /24 and /64 rack prefixes; IXIA injects 6K IPv4 / 10K IPv6 routes per pod; inter-tier link counts of 4, 8, 12, 18 and 48.]