
Dell EMC Networking SmartFabric Services

Deployment with VxRail 7.0.1


Deployment Guide

Abstract
In this guide, SmartFabric Services (SFS) is used to deploy a new leaf-spine fabric for a
new VxRail cluster. SFS automatically reconfigures the fabric with user-specified VLANs
during VxRail cluster deployment. The SFS-enabled leaf-spine topology is connected to
the data center's existing network using Layer 2 or Layer 3 uplinks.

Dell Technologies Networking Infrastructure Solutions

Part Number: H18618


December 2020
Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2020 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other
trademarks may be trademarks of their respective owners.
Contents

Chapter 1: Introduction................................................................................................................. 5
Purpose of this guide..........................................................................................................................................................5
Dell Technologies................................................................................................................................................................. 5
VxRail......................................................................................................................................................................................5
SmartFabric Services..........................................................................................................................................................5
SmartFabric Services with VxRail....................................................................................................................................7
OpenManage Network Integration.................................................................................................................................. 7
Typographical conventions................................................................................................................................................7

Chapter 2: Hardware Overview...................................................................................................... 8


Supported switches............................................................................................................................................................ 8
Hardware used in this guide..............................................................................................................................................8

Chapter 3: Topology.................................................................................................................... 10
Overview.............................................................................................................................................................................. 10
Production topology with SmartFabric Services........................................................................................................10
Production topology connection details........................................................................................................................11
OOB management topology............................................................................................................................................ 12
OOB management connection details...........................................................................................................................13

Chapter 4: Deployment Planning.................................................................................................. 15


Minimum requirements..................................................................................................................................................... 15
Unsupported environments............................................................................................................................................. 15
Unsupported features....................................................................................................................................................... 15
Production topology deployment options.................................................................................................................... 15
Uplink options......................................................................................................................................................................16
External switches...............................................................................................................................................................16
VLANs and IP addresses...................................................................................................................................................17
VxRail deployment settings............................................................................................................................................. 18
DNS server records........................................................................................................................................................... 19

Chapter 5: Configure the First Leaf Switch Pair...........................................................................21


Cabling.................................................................................................................................................................................. 21
Configure leaf switch OOB management interfaces.................................................................................................21
Enable SmartFabric...........................................................................................................................................................22
Connect to the SmartFabric UI......................................................................................................................................22
Update fabric and switch names................................................................................................................................... 24
Configure L2 uplinks to the external network............................................................................................................ 24
Configure L3 routed uplinks to the external network...............................................................................................32
Configure a jump host port............................................................................................................................................. 49

Chapter 6: Deploy VxRail............................................................................................................. 54


Initial VxRail cluster deployment steps.........................................................................................................................54
Additional configuration steps for L3 uplinks............................................................................................................. 59

Validate and build VxRail cluster....................................................................................................................................62

Chapter 7: Expand to Multirack................................................................................................... 67


Expand SmartFabric and VxRail cluster to multirack................................................................................................67
Verify preferred master setting before fabric expansion........................................................................................ 68
Configure management settings for new switches.................................................................................................. 69
Add switches to SmartFabric......................................................................................................................................... 70
Connect to the SmartFabric UI.......................................................................................................................................71
Configure additional rack and switch names............................................................................................................... 71
Configure leaf switch addresses for L3 uplinks......................................................................................................... 72
Add a VxRail node to the cluster................................................................................................................................... 74

Chapter 8: Deploy and Configure OMNI....................................................................................... 86


Deploy OMNI VM.............................................................................................................................................................. 86
OMNI console configuration...........................................................................................................................................90
OMNI web UI configuration............................................................................................................................................ 95
Register OMNI with vCenter.......................................................................................................................................... 97

Appendix A: Validated Components............................................................................................ 101


General................................................................................................................................................................................101
Dell EMC PowerSwitch systems.................................................................................................................................. 101
VxRail E560F nodes.........................................................................................................................................................101
VxRail appliance software..............................................................................................................................................102
OMNI software.................................................................................................................................................................102

Appendix B: CLI Commands....................................................................................................... 103


Switch CLI validation commands................................................................................................................................. 103
Return to Full Switch mode.......................................................................................................................................... 108

Appendix C: Cisco Nexus External Switch Configuration Example...............................................110


Configure external Nexus switches for L3 routed connections............................................................................110
Configure external Nexus switches for L2 connections......................................................................................... 118
Validated Nexus switches..............................................................................................................................................124

Appendix D: Support and Feedback............................................................................................ 125


Technical resources........................................................................................................................................................ 125
Fabric Design Center...................................................................................................................................................... 125
Feedback and technical support.................................................................................................................................. 125

Chapter 1: Introduction

Purpose of this guide


This guide demonstrates the deployment of a leaf-spine fabric using SmartFabric Services and shows how SmartFabric Services
simplifies the deployment of a new VxRail cluster. This guide also covers connecting the leaf-spine topology to the existing data
center network, and expanding the SmartFabric and VxRail cluster from a single rack to multiple racks.

Dell Technologies
Our vision at Dell Technologies is to be the essential technology company for the data era. Dell ensures modernization for
today’s applications and for the emerging cloud-native world.
Dell is committed to disrupting the fundamental economics of the market with an open strategy that gives you the freedom of
choice for networking operating systems and top-tier merchant silicon. Our strategy enables business transformations that
maximize the benefits of collaborative software and standards-based hardware, including lowered costs, flexibility, freedom, and
security. Dell provides further customer enablement through validated deployment guides that demonstrate these benefits while
maintaining a high standard of quality, consistency, and support.

VxRail
VxRail is at the forefront of a fundamental shift in IT infrastructure consumption – away from application-specific, “build-your-
own” infrastructure and toward virtualized, general-purpose, engineered systems. Dell Technologies and VMware have
embraced this shift with the VxRail hyperconverged appliance. VxRail has a simple, scale-out architecture that uses VMware
vSphere and VMware vSAN to provide server virtualization and software-defined storage.

SmartFabric Services
Dell EMC SmartFabric OS10 includes SmartFabric Services (SFS). With SFS, customers can quickly and easily deploy and
automate data center networking fabrics.
There are two types of SFS:
● SFS for Leaf and Spine — supported on selected Dell EMC PowerSwitch S and Z series switches
● SFS for PowerEdge MX — supported on selected modular switches, not applicable to this guide
SFS for Leaf and Spine has two personalities:
● VxRail Layer 2 (L2) Single Rack personality — This is the original (legacy) SFS personality that automates configuration
of a single pair of ToR (or leaf) switches for VxRail clusters.
● Layer 3 (L3) Fabric personality — This is the new SFS personality available as of OS10.5.0.5 that automates
configuration of a leaf-spine fabric.
VxRail L2 Single Rack personality
NOTE: For new single rack and multirack SFS deployments, Dell requires using the L3 Fabric personality instead of the
VxRail L2 Single Rack personality.
The VxRail L2 Single Rack personality is the original SFS personality. It is enabled by running a Python script in the OS10 Linux
shell.
This personality is limited to a single rack and cannot be expanded to a multirack deployment. If switches with this personality
enabled are upgraded, they will continue to operate with the VxRail L2 Single Rack personality.

NOTE: The VxRail L2 Single Rack personality is not covered in this deployment guide. It is covered in the VMware
Integration for VxRail Fabric Automation SmartFabric User Guide, Release 1.1.
L3 Fabric personality
NOTE: Dell requires using the L3 Fabric personality for new SFS deployments. All examples in this guide use this
personality. Unless otherwise specified, statements in this guide regarding SmartFabric behavior and features are applicable
to the L3 Fabric personality only.

NOTE: The L3 personality provides the option of deploying a VxRail cluster in a single rack or multirack environment.

The L3 Fabric personality allows users to deploy SmartFabric Services in a single rack and expand to multirack as business needs
evolve.
The SFS L3 Fabric personality automatically builds an L3 leaf-spine fabric. This enables faster time to production for
hyperconverged and private cloud environments while being fully interoperable with existing data center infrastructure.

Figure 1. From a network of boxes to a networked fabric

The SFS L3 fabric build process is as follows:


1. Rack and cable leaf and spine switches.
2. Enable SFS from the OS10 CLI with the smartfabric l3fabric enable role [options] command.
a. Specify the role (leaf or spine).
b. Specify Virtual Link Trunking interconnect (VLTi) ports on leafs.
3. Switches boot in SmartFabric mode.
4. Switches discover each other using LLDP.
5. Switches elect one of the leaf nodes as the SmartFabric master.
6. Leaf and spine connections are established using private IP addresses and external Border Gateway Protocol (eBGP).
7. Leaf nodes are configured as hardware VTEPs for the infrastructure network overlay using BGP EVPN.
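
For reference, the following is a minimal sketch of the enable command in step 2 for each role. The leaf example matches the
command used later in this guide; the spine form is assumed to follow the same syntax without the VLTi option, since VLTi
ports apply only to leafs.

On each leaf:

OS10# configure terminal
OS10(config)# smartfabric l3fabric enable role LEAF vlti ethernet 1/1/49-1/1/52
Reboot to change the personality? [yes/no]:y

On each spine:

OS10# configure terminal
OS10(config)# smartfabric l3fabric enable role SPINE
Reboot to change the personality? [yes/no]:y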

Figure 2. SFS Layer 3 leaf-spine fabric

SmartFabric Services with VxRail


With SmartFabric Services, switches are automatically configured during VxRail deployment. When additional VxRail nodes are
connected to the SmartFabric, the fabric identifies them as VxRail nodes and automatically onboards them to the required
networks.

OpenManage Network Integration


OpenManage Network Integration (OMNI) enables configuration and management of SFS-enabled Dell EMC PowerSwitch
systems. OMNI is accessed through a vCenter plugin or directly using the web UI. With OMNI, networks created in vCenter are
automatically configured in the fabric.
The following tasks are done in OMNI:
● View the leaf-spine topology
● View switch status
● Configure server-facing interfaces and port channels
● Configure uplinks to external networks
● Create networks
● Configure routing
● Upgrade SmartFabric OS10
● Fabric backup and restore
● Replace a switch in the fabric

Typographical conventions
Monospace text: CLI examples
Underlined monospace text: CLI examples that wrap the page, or to highlight information in CLI output
Italic monospace text: Variables in CLI examples
Bold text: UI fields and information that is entered in the UI

Chapter 2: Hardware Overview

Supported switches
Only the Dell EMC PowerSwitch systems listed in Table 1 are supported with SFS in leaf or spine roles. SFS does not run on
other Dell EMC PowerSwitch models or third-party switches.
To use the SFS features detailed in this guide, switches must be running SmartFabric OS10.5.2.2 or a later version specified in
the SmartFabric OS10 Solutions (HCI, Storage, MX) Support Matrix.

Table 1. Supported switches


Dell EMC PowerSwitch model Typical role VxRail node connectivity options
S4112F/T-ON, S4128F/T-ON, S4148F/T-ON Leaf 10 GbE
S5212F-ON, S5224F-ON, S5248F-ON, S5296F-ON Leaf 10/25 GbE
S5232F-ON, Z9264F-ON Spine See Note below

NOTE: The roles shown are recommended, with the exception that Z9264F-ON is supported as a spine only. S5232F-ON
may be used as a leaf with ports connected to VxRail nodes broken out to 10 GbE or 25 GbE. VxRail nodes do not currently
support 100 GbE NICs for VxRail system traffic.
Any combination of the leaf and spine switches listed in Table 1 may be used with the exception that leaf switches must be
deployed in pairs. Each leaf switch in the pair must be the same model due to VLT requirements.
SFS supports up to 20 switches and eight racks in the fabric.

Hardware used in this guide


This section briefly describes the hardware used to validate the deployment examples in this guide. Appendix A contains a
detailed listing of hardware and software versions used. All supported leaf and spine switches are listed in Table 1.

Dell EMC PowerSwitch S5248F-ON


The Dell EMC PowerSwitch S5248F-ON is a 1-Rack Unit (1U), multilayer switch with 48x 25 GbE, 4x 100 GbE, and 2x 200 GbE
ports. This guide uses two S5248F-ON switches in each rack as leaf switches.

Figure 3. Dell EMC PowerSwitch S5248F-ON

Dell EMC PowerSwitch S5232F-ON


The Dell EMC PowerSwitch S5232F-ON is a 1U aggregation/spine switch with 32x 100 GbE ports. This guide uses two S5232F-
ON switches as spine switches.

Figure 4. Dell EMC PowerSwitch S5232F-ON

Dell EMC PowerSwitch S3048-ON


The Dell EMC PowerSwitch S3048-ON is a 1U switch with 48x 1 GbE BASE-T ports and 4x 10 GbE SFP+ ports. This guide uses
one S3048-ON switch in each rack for out-of-band (OOB) management traffic. This includes connections to VxRail node iDRAC
ports and dedicated switch management ports.

Figure 5. Dell EMC PowerSwitch S3048-ON

Dell EMC VxRail nodes


Dell EMC VxRail P, V, S, E, and G Series nodes are built on current PowerEdge servers. The deployment example in this guide
uses a cluster of four VxRail E560F nodes, with three nodes in Rack 1, and one node in Rack 2.

Figure 6. Dell EMC VxRail 1U E series node

NOTE: VxRail supports cluster sizes up to 64 nodes. With SFS, VxRail clusters must have a minimum of three nodes. Two-
node VxRail clusters are not currently supported.

VxRail node network adapters


VxRail nodes support various combinations of network adapters. See the Dell EMC VxRail Network Planning Guide for network
connectivity options by node type and the Dell EMC VxRail Support Matrix (account required) for supported network adapters.
NOTE: For VxRail node connections to the leaf switches listed in Table 1, use supported 10 GbE or 25 GbE network
adapters only. 1 GbE network adapters are not supported for VxRail node to SFS-enabled leaf switch connections.
Each VxRail node also includes an integrated Dell Remote Access Card (iDRAC) for out-of-band management.

Chapter 3: Topology

Overview
The topology is divided into two major parts:
● Production
● Out-of-band (OOB) management
The production topology contains redundant components and is used for all mission-critical and end-user network traffic. The
OOB management network is an isolated network for remote management of hardware.

Production topology with SmartFabric Services


The production topology uses a leaf-spine fabric for performance and scalability. SmartFabric Services (SFS) automates the
deployment of this fabric.

Figure 7. SmartFabric topology with connections to VxRail nodes and external network

NOTE: The deployment examples in this guide use two network adapter ports per VxRail node, as shown in Figure 7. See
the Dell EMC VxRail Network Planning Guide for VxRail node connectivity options.
With SFS, two leaf switches are used in each rack for redundancy and performance. A Virtual Link Trunking interconnect (VLTi)
connects each pair of leaf switches. Every leaf switch has an L3 uplink to every spine switch. Equal-cost multipath routing
(ECMP) is leveraged to use all available bandwidth on the leaf-spine connections.
SFS uses BGP-EVPN to stretch L2 networks across the L3 leaf-spine fabric. This configuration allows for the scalability of L3
networks with the VM mobility benefits of an L2 network. For example, a VM can be migrated from one rack to another without
the need to change its IP address and gateway information.
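
Once the fabric is built, the underlay and overlay can be spot-checked from any leaf CLI. Below is a minimal sketch, commands
only, with output omitted because it varies by release; Appendix B covers switch CLI validation commands in more detail:

S5248F-Leaf1A# show ip bgp summary
S5248F-Leaf1A# show evpn evi
S5248F-Leaf1A# show ip route bgp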
The example in this guide builds the SmartFabric shown in Figure 7 in two stages:
1. The first stage is a single rack deployment. Leaf switches 1A and 1B are deployed in Rack 1 without spine switches, and a
two-leaf fabric is created using SFS. The fabric is connected to the external network using either L2 or L3 uplinks. The
external network is typically a preexisting network in the data center. Three VxRail nodes are connected to the two leaf
switches, and a three-node VxRail cluster is deployed.
2. In the second stage, two spine switches are added and connected to leaf switches 1A and 1B. Leaf switches 2A and 2B are
added in Rack 2 and are also connected to the spine switches. The fabric is expanded to include the two spines and two
additional leafs using SFS. A fourth VxRail node is added in Rack 2 and joined to the existing VxRail cluster.
NOTE: Single and multirack deployment options are discussed in Chapter 4.

Production topology connection details


Production network connections for this deployment example are shown in Figure 8. Each leaf switch has one connection to
each spine switch, and each VxRail node has one connection to each leaf. For the switches used in this deployment example,
connections from leafs to spines are 100 GbE, connections from leafs to VxRail nodes are 25 GbE, and VLTi connections
between leafs are 200 GbE.
NOTE: If Dell EMC PowerSwitch S4100 series leaf switches are used (not shown), connections to VxRail nodes are 10 GbE.
The S4100 series switches have 100 GbE ports available for connections to spines.

Figure 8. Production network connection details

NOTE: In this example, the two QSFP28-DD double density ports (2x 100 GbE interfaces per physical port), available on
S5248F-ON switches, are used to create a 400 GbE VLTi. This requires QSFP28-DD DAC cables or optics. On switches
without QSFP28-DD ports, QSFP28 (100 GbE) or QSFP+ (40 GbE) ports are typically used for VLTi connections. The VLTi
synchronizes L2 and L3 control-plane information across the two nodes. The VLTi is used for data traffic only when there is
a link failure that requires the VLTi to reach the destination. Dell Technologies recommends using at least two physical ports
on each switch for the VLTi for redundancy and to provide additional bandwidth if there is a failure.

OOB management topology


The out-of-band (OOB) management network is an isolated network for remote management of hardware. This includes VxRail
nodes, servers, switches, storage arrays, and rack power distribution units (PDUs) using their dedicated management ports.

For OOB management network connections, one S3048-ON switch is installed in each rack, as shown in the figure below.

Figure 9. OOB management network connections

The OOB management network enables connections to the PowerSwitch SFS web UI. It also enables switch console access
using SSH, and VxRail node console access using the iDRAC. This network is also used to carry heartbeat messages between
switches configured as VLT peers, and for OpenManage Network Integration (OMNI) to communicate with the SFS master
switch.

NOTE: OOB management switches are not part of the SmartFabric.

NOTE: This guide covers the equipment shown in Racks 1 and 2. Other devices and racks shown in the figure above are for
demonstration purposes only.
Four 10 GbE SFP+ ports are available on each S3048-ON for use as uplinks to the OOB management network core.
1 GbE BASE-T ports on each S3048-ON are connected downstream to hardware management ports on each device in the rack.
This includes the VxRail node iDRAC ports and switch management ports. Management ports on other devices, such as
PowerEdge server iDRAC ports, storage array management ports, and rack PDU management ports, are also connected to this
network.
OOB management switch configuration is not detailed in this guide. The S3048-ON can function as an OOB management switch
with its OS10 factory default configuration. By default, all ports are in switchport mode, in VLAN 1, administratively up, and rapid
per-VLAN spanning tree plus (RPVST+) is enabled.
NOTE: At a minimum, Dell Technologies recommends changing the admin password to a complex password during the first
login.
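
On OS10, this can be done with a single command; a sketch follows, with a placeholder password:

OS10(config)# username admin password <new-complex-password> role sysadmin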

NOTE: For reference, devices on the OOB Management network in this guide use the 100.67.0.0/16 IP address block.
These addresses are examples only. Use IP addresses that are suitable for your environment.

OOB management connection details


Figure 10 shows how switch management ports and VxRail node iDRAC ports connect to the OOB management switch in each
rack.

Figure 10. OOB management network connection details

Chapter 4: Deployment Planning

Minimum requirements
Minimum requirements for VxRail 7.0.1 deployments with SFS include:
● Three VxRail nodes running VxRail appliance software version 7.0.100 or a later version as specified in the SmartFabric OS10
Solutions (HCI, Storage, MX) Support Matrix.
● VxRail nodes must meet the hardware and software requirements listed in the Dell EMC VxRail Support Matrix.
● On-board NICs in VxRail nodes must be 10 GbE or 25 GbE.
● Two Dell EMC PowerSwitch units as listed in Table 1 must be deployed as leaf switches. Each leaf switch in the pair must be
the same model due to VLT requirements.
● Dell EMC PowerSwitch units must be running SmartFabric OS10.5.2.2 or a later version as specified in the SmartFabric OS10
Solutions (HCI, Storage, MX) Support Matrix.
● One 1 GbE BASE-T, also referred to as 1000BASE-T, switch for OOB management connections. Dell Technologies
recommends using one PowerSwitch S3048-ON per rack.
● One DNS server, which can be an existing DNS server that is reachable on the network, with host records added for this
  deployment. The example DNS host records used in this guide are shown in Table 5.

Unsupported environments
SFS does not currently support the following environments:
● vSAN stretched clusters
● VMware Cloud Foundation (VCF)
● NSX-V
● VxRail L3 Everywhere

Unsupported features
SFS does not currently support the following features:
● Multiple VRF tenants
● Route policies or Access Control Lists (ACLs)
● OSPF or routing protocols other than eBGP
● Multicast routing protocols
● Networking features not covered in the SmartFabric Services for OpenManage Network Integration User Guide, Release
2.0. This guide is available on the Dell EMC OpenManage Network Integration for VMware vCenter web site.

Production topology deployment options


Options for deploying SFS for VxRail include:
● Single rack deployment—A two-leaf SmartFabric is deployed in a single rack. All VxRail nodes are in the same rack, are
connected to the two leaf switches, and a VxRail cluster is built. This is covered in Chapter 5 and Chapter 6 of this guide.
● Expand single rack deployment to multirack—The two-leaf SmartFabric is expanded to multirack by adding spine
switches to connect the racks and two leaf switches per rack. VxRail nodes in the additional racks are connected to the
additional SmartFabric leaf switches and are joined to the existing VxRail cluster. This is covered in Chapter 7 of this guide.

● Multirack deployment—A multirack SmartFabric with spines and two leaf switches per rack is deployed. VxRail nodes are
installed in multiple racks and connected to the SmartFabric leaf switches in each rack. A VxRail cluster is built using VxRail
nodes in multiple racks.

Uplink options
SFS uplink options to external network switches include:
● L2 uplinks from a leaf pair
● L3 uplinks from a leaf pair
● L3 uplinks from spines
NOTE: Dell Technologies recommends using uplinks from a leaf pair as a best practice. Leaf switches with uplinks to an
external network are referred to as border leafs. VxRail nodes and other servers in the rack may be connected to border
leafs in the same manner as other leafs in the SmartFabric.

NOTE: If uplinks from spines are used, they must be L3.

L2 uplink planning
If an L2 uplink is used, determine the VLAN ID to use for VxRail external management, and whether ports in the uplink will be
tagged or untagged. Typically, this is the same VLAN used for DNS and NTP services on the existing network, as shown in the
example in this guide. Optionally, traffic may be routed from the external switch to the DNS/NTP servers.
The L2 uplink may be an LACP or static LAG. If L2 uplinks connect to a pair of Dell EMC PowerSwitch systems, Dell
Technologies recommends using LACP with VLT per the example in this guide.
L2 uplink configuration is covered in detail in the Configure L2 uplinks to the external network section of this guide.

NOTE: With L2 uplinks, all routing into and out of the SmartFabric is done on external switches.

L3 uplink planning
SFS supports using L3 routed or L3 VLAN uplinks.
With L3 routed uplinks, each physical link is a point-to-point IP network. With L3 VLAN, all uplinks are in a LAG, and an IP
address is assigned to the VLAN containing the LAG. This guide provides examples using L3 routed uplinks. L3 VLAN examples
are beyond the scope of this guide.
Point-to-point IP networks and addresses must be planned for each physical link in the L3 uplink.
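
For illustration only (these addresses are hypothetical and are not the values used later in this guide), a border leaf pair with
two uplinks per leaf could be planned as four /31 point-to-point networks:

Leaf1A ethernet1/1/53 (192.168.100.0/31) to External-A (192.168.100.1/31)
Leaf1A ethernet1/1/54 (192.168.100.2/31) to External-B (192.168.100.3/31)
Leaf1B ethernet1/1/53 (192.168.100.4/31) to External-A (192.168.100.5/31)
Leaf1B ethernet1/1/54 (192.168.100.6/31) to External-B (192.168.100.7/31)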
Each leaf switch in the SmartFabric needs an IP address on the External Management VLAN. An anycast gateway address on
the same VLAN is also specified. This is the virtual router/anycast gateway address shared by all leafs in the SmartFabric.
SmartFabric supports routing using eBGP or static routes. eBGP and static routing examples are both provided in this guide.

NOTE: SFS does not support other routing protocols.

If eBGP is used, ASNs and router IDs must be determined for the external switches. ASNs and router IDs for the switches in
the SmartFabric are configured automatically.
NOTE: SFS uses ASN 65011 for leafs and ASN 65012 for spines. If these ASNs conflict with your environment, they may be
changed in the SFS UI under 5. Edit Default Fabric Settings.
L3 uplink configuration is covered in detail in the Configure L3 routed uplinks to the external network section of this guide.

External switches
External switches must have available ports for connections from the existing network to the SFS border leafs (or spines if
applicable). For redundancy, Dell Technologies recommends two external switches with at least two links per switch to the
SmartFabric. Use enough connections to provide sufficient bandwidth for the traffic anticipated across these links. If using Dell

EMC PowerSwitch systems as external switches, Dell Technologies recommends configuring them as VLT peers, as shown in
the examples in this guide.
NOTE: This guide provides external switch configuration examples for Dell EMC PowerSwitch systems. Cisco Nexus switch
configuration examples are provided in Appendix C.

VLANs and IP addresses


VLANs and IP addresses used for VxRail node traffic must be planned before VxRail deployment can begin. VxRail node traffic is
divided into six or more VLANs, as shown in the following table:

Table 2. VLANs used by VxRail


VLAN                        Purpose
VxRail Cluster Build        Used to build the VxRail cluster. SFS automatically creates this VLAN and names it SFS Client Management.
VxRail Internal Management  Used for VxRail node discovery. SFS automatically creates this VLAN and names it SFS Client Control.
VxRail External Management  User-specified VLAN for VxRail Manager, ESXi, vCenter Server, NTP, DNS, and vRealize Log Insight traffic
vMotion                     User-specified VLAN for virtual machine (VM) migration traffic
vSAN                        User-specified VLAN for distributed storage traffic
VM networks                 User-specified VLAN(s) as required for VM data traffic

NOTE: All VLANs in Table 2 share the physical connections shown in Figure 8 in this deployment.

VLAN IDs and network addresses planned for this deployment example are shown in the following table.

Table 3. VLAN IDs and network addresses


VLAN ID  Description                                    Network
4091     SFS Client Management/VxRail cluster build     192.168.10.0/24 (default)
3939     SFS Client Control/VxRail Internal Management  IPv6 multicast
1811     External Management                            172.18.11.0/24
1812     vMotion                                        172.18.12.0/24
1813     vSAN                                           172.18.13.0/24
1814     VM Network A                                   172.18.14.0/24
1815     VM Network B                                   172.18.15.0/24

NOTE: SFS automatically creates VLANs 4091 and 3939. VLANs 1811 through 1815 and their network IP addresses are
user-defined and are examples only. In SmartFabric mode, VLANs 1 through 3999, excluding 3939, are available for use.
VLANs 4091 and 3939 may be changed from their defaults in the SFS UI under 5. Edit Default Fabric Settings. VLAN
3939 is also a VxRail default VLAN. If VLAN 3939 is changed in the SFS UI, you must also change it to match in VxRail per
the VxRail documentation.

NOTE: VLANs 4000 through 4094 are reserved for SFS. For more information about the reserved VLANs, see the
SmartFabric Services for OpenManage Network Integration User Guide, Release 2.0. The guide is available on the Dell EMC
OpenManage Network Integration for VMware vCenter website.

NOTE: SFS uses the 172.16.0.0/16 and 172.30.0.0/16 IP address blocks internally for the leaf-spine network configuration.
If these networks conflict with your environment, these default IP address blocks may be changed in the SFS UI under 5.
Edit Default Fabric Settings.
In SmartFabric mode, each VLAN in Table 3 is automatically placed in a VXLAN virtual network with a Virtual Network Identifier
(VNI) that matches the VLAN ID. VLAN 4091 is in virtual network 4091, VLAN 1811 is in virtual network 1811, and so on.
The show virtual-network command is used to view virtual networks, VLANs, and port-VLAN assignments. This command
is covered in more detail later in this guide.

VxRail deployment settings


The values used in the Dell EMC VxRail Deployment Wizard for the examples in this guide are shown in the right column of the
table below. The IP addresses and VLAN IDs are from Table 3. For more information on the Deployment Wizard options, see the
VxRail Appliance Installation Procedures that are available on Dell Technologies SolVe Online (account required).

Table 4. VxRail deployment settings

Category                  Section                    Description                              Values used in this guide
Global Settings           General                    Top Level Domain                         dell.lab
                                                     vCenter Server                           Use the VxRail vCenter Server
                                                     DNS Server                               External (a)
                                                     DNS Server IP Address(es)                172.18.11.50 (in L2 uplink example), 172.19.11.50 (in L3 uplink example) (b)
                                                     NTP Server(s)                            ntp.dell.lab (c)
                                                     Syslog Server IP Address(es)             blank
                                                     NIC Configuration                        2x 10 GbE or 2x 25 GbE
vCenter Server Settings   vCenter Server             vCenter Server Hostname                  vcenter01
                                                     vCenter Server IP Address                172.18.11.62
                                                     Join an existing vCenter SSO Domain      No
                                                     Same Password for all Accounts           Yes
                                                     vCenter Server Management Username       management
                                                     vCenter Server Password                  password
Host Settings             ESXi Hosts                 Host Configuration Method                Autofill
                                                     ESXi Hostname                            vxrail
                                                     ESXi Starting IP Address                 172.18.11.101
                                                     Same Credentials for all Hosts           Yes
                                                     ESXi Management Username                 management
                                                     ESXi Management Password                 password
                                                     ESXi Root Password                       password
                          ESXi Host Location         Same Rack For All Hosts                  Yes
                                                     Rack Name                                blank
                                                     Rack Position                            blank
VxRail Manager Settings   VxRail Manager             VxRail Manager Hostname                  vxmgr01
                                                     VxRail Manager IP Address                172.18.11.72
                                                     VxRail Manager Root Password             password
                                                     VxRail Manager Service Account Password  password
Virtual Network Settings  VxRail Management Network  Management Subnet Mask                   255.255.255.0
                                                     Management Gateway                       172.18.11.254
                                                     Management VLAN ID                       1811
                          vSAN                       vSAN Configuration Method                Autofill
                                                     vSAN Starting IP Address                 172.18.13.101
                                                     vSAN Subnet Mask                         255.255.255.0
                                                     vSAN VLAN ID                             1813
                          vSphere vMotion            vMotion Configuration Method             Autofill
                                                     vMotion Starting IP Address              172.18.12.101
                                                     vMotion Subnet Mask                      255.255.255.0
                                                     vMotion VLAN ID                          1812
                          VM Guest Networks          VM Guest Network Name                    VM_Network_A
                                                     VLAN ID                                  1814
                                                     VM Guest Network Name                    VM_Network_B
                                                     VLAN ID                                  1815
                          System VM Network          Port Binding                             Ephemeral Binding

a. The VxRail Deployment Wizard now includes an option to use an Internal (VxRail Manager Service) DNS server. To use this
feature, see your VxRail documentation. The deployment example in this guide uses an external DNS server.
b. In the L2 uplink example in this guide, the DNS/NTP servers on the existing network are on the same External
Management VLAN, 1811, as the VxRail nodes. IP addresses on this network use the 172.18.11.0/24 address block. In the L3
uplink example, the DNS/NTP servers are on a different VLAN, 1911, with IP addresses in the 172.19.11.0/24 address block.
VLAN 1911 represents a pre-existing management VLAN and is used only on the external switches in the L3 uplink example.
c. If an NTP server is not provided, VxRail uses the time that is set on VxRail node 1.

DNS server records


VxRail nodes must be able to reach a correctly configured DNS server during and after VxRail deployment. The DNS server must
include forward and reverse lookup entries for the ESXi hosts and VxRail Manager.
Add forward and reverse lookup records on the DNS server using the hostnames and IP addresses used in your deployment. The
DNS entries for the deployment examples in this guide are listed in the following table.

Table 5. DNS hostnames and IP addresses


Hostname IP address
vxrail01.dell.lab 172.18.11.101
vxrail02.dell.lab 172.18.11.102
vxrail03.dell.lab 172.18.11.103

vxrail04.dell.lab 172.18.11.104
vcenter01.dell.lab 172.18.11.62
vxmgr01.dell.lab 172.18.11.72
omni.dell.lab 172.18.11.56
ntp.dell.lab In L2 uplink example - 172.18.11.51
In L3 uplink example - 172.19.11.51

In the L2 uplink example in this guide, the DNS server address is 172.18.11.50. In the L3 uplink example, the DNS server address
is 172.19.11.50.
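
As an illustration, on a BIND-style DNS server (an assumption; any DNS server with matching forward and reverse records will
work), the forward and reverse entries for the first ESXi host in the L2 uplink example might look like the following:

vxrail01.dell.lab.            IN A      172.18.11.101
101.11.18.172.in-addr.arpa.   IN PTR    vxrail01.dell.lab.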
NOTE: The VxRail Deployment Wizard now includes an option to use an Internal (VxRail Manager Service) DNS server. To
use this feature, see your VxRail documentation. The deployment example in this guide uses an external DNS server.

Chapter 5: Configure the First Leaf Switch Pair

Cabling
Cable the switches and VxRail nodes, as shown in the figure below, and power on all devices.

Figure 11. VxRail node and leaf switch connections

For connection details, see Figure 8. Also, make OOB management connections, as shown in Figure 10.

Configure leaf switch OOB management interfaces


An IP address is configured on the OOB management interface of each switch. This interface is used to access the SFS web UI,
and it is also used as the VLT backup link. Additionally, it enables console access using SSH as an option to the serial console.
A management route is also configured if routing is used on the OOB management network.
NOTE: Configure a unique OOB management IP address on each switch. The IP addresses shown are examples only. Use IP
addresses suitable for your environment. The management route should not be 0.0.0.0/0, or this may interfere with the
data network’s default route. Use a specific destination prefix, as shown.
This is done on each switch as follows:

OS10# configure terminal
OS10(config)# interface mgmt 1/1/1
OS10(conf-if-ma-1/1/1)# no ip address dhcp
OS10(conf-if-ma-1/1/1)# ip address 100.67.76.30/24
OS10(conf-if-ma-1/1/1)# no shutdown
OS10(conf-if-ma-1/1/1)# exit
OS10(config)# management route 100.67.0.0/16 100.67.76.254
OS10(config)# end
OS10# write memory



NOTE: If % Error: ZTD is in progress(configuration is locked) prevents entry into configuration
mode, enter the command ztd cancel to proceed.

Other global settings may also be configured here, such as ip name-server and ntp server if used by the switch. These
settings are not required for the deployment example in this guide. The hostname of the switch may be configured at the CLI or
in the SFS UI. In this guide, the SFS UI is used.
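
For example, using the DNS and NTP server addresses from the L2 uplink example in this guide (adjust for your environment),
these optional settings would be:

OS10(config)# ip name-server 172.18.11.50
OS10(config)# ntp server 172.18.11.51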

Enable SmartFabric

Figure 12. First pair of leaf switches in SmartFabric mode

CAUTION: The following commands delete the existing switch configuration. Switch management settings such
as management IP address, management route, hostname, NTP server, and IP name server are retained.
Ensure the physical VLTi connections are made between leaf pairs before proceeding.
NOTE: This example uses the two QSFP28-DD (2x 100 GbE) ports, interfaces 1/1/49-1/1/52, for the VLTi connections on each
S5248F-ON leaf.
To put the first pair of leaf switches in SmartFabric mode and configure them as VLT peers, run the following commands on
each switch:

OS10# configure terminal
OS10(config)# smartfabric l3fabric enable role LEAF vlti ethernet 1/1/49-1/1/52

Reboot to change the personality? [yes/no]:y

The configuration is applied, and the switches reload.


To verify switches are in SmartFabric mode, run the following command on each switch:

OS10# show switch-operating-mode


Switch-Operating-Mode : Smart Fabric Mode

NOTE: For more information, see SmartFabric Services for OpenManage Network Integration User Guide, Release 2.0. The
guide is available on the Dell EMC OpenManage Network Integration for VMware vCenter website. For additional
SmartFabric CLI commands, see the SmartFabric Services chapter of the Dell EMC SmartFabric OS10 User Guide Release
10.5.2.

Connect to the SmartFabric UI


1. From a workstation with access to the OOB management network, use a browser to connect to the management IP address
of either leaf switch by navigating to https://switch_mgmt_ip_address.
2. Log in as admin.
NOTE: After reloading the switches, it takes about two minutes after the login prompt displays at the switch CLI for
SFS to come up and for the UI to be fully functional.

NOTE: The SFS UI supports Chrome, Firefox, and Edge browsers. Languages other than English are not supported at
this time.
All web UI configuration is done on the SFS master switch. If you connect to an SFS switch that is not the master, a link to
the master is provided, as outlined in red in the figure below.



Figure 13. Connected to switch that is not the master
3. If applicable, click on the link provided to go to the master switch, and log in as admin.
NOTE: The IPv4 address of the SFS master may also be determined by running show smartfabric cluster from
the CLI of any switch in the SmartFabric. The master is always a leaf switch, never a spine. Only one leaf switch in the
SmartFabric will have ROLE set to MASTER. The remaining leafs will have ROLE set to BACKUP.

When connected to the SFS master switch, the UI appears, as shown in the figure below.

Figure 14. Connected to SFS master switch


4. Optionally, you can hover over each switch (or node) and the VLTi link to view additional information, as shown in Figure 15
and Figure 16.

Figure 15. Node details



Figure 16. VLTi link details

Update fabric and switch names


The fabric, rack, and switch names may be changed from their default settings as follows:
1. On the SFS UI Home page, click 1. Update Default Fabric, Switch Names and Descriptions to open the Set Fabric and
Switch Name window.
2. On the Network Fabric page, update the fabric Name (optional) and Description (optional) and click NEXT.
NOTE: The Network Fabric ID is automatically set to 100 and cannot be changed. All directly connected switches in
SmartFabric mode join this fabric.
3. On the Racks page, update the Name (recommended) and Description (optional) of the rack. In this example, the rack
name is set to Rack 1, as shown in Figure 17.

Figure 17. Rack name changed to Rack 1


4. Click NEXT.
5. On the Switches page, update the Name (recommended, if not previously configured from the CLI) and Description
(optional) of the switches. Hostnames are set to S5248F-Leaf1A and S5248F-Leaf1B, as shown in the figure below:

Figure 18. Switch name configuration page


6. Click FINISH to apply the settings.

Configure L2 uplinks to the external network


Uplinks to the existing network may be configured as L2, L3 routed, or L3 VLAN. This section covers L2 uplinks.

NOTE: If L3 uplinks are used, proceed to the Configure L3 routed uplinks to the external network section.



The switches are cabled as shown in Figure 19. When L2 uplink configuration is complete, Leaf1A and Leaf1B will connect with a
VLT port channel to a switch pair named External-A and External-B. In this example, an existing DNS/NTP server also connects
to the external switches using a VLT port channel. All VLT port channels use LACP in this guide.

NOTE: DNS and NTP server(s) do not have to connect in this manner as long as they are reachable on the network.

All ports on the four switches shown in Figure 19 are in the External Management VLAN, 1811.

Figure 19. L2 uplinks to the external network

Configure L2 uplinks in SFS


NOTE: Any ports available on the leaf switches may be used as uplinks, provided they are compatible with the
corresponding ports on the external switches. If leaf switch uplink ports will not use their native speeds, the interfaces must
be first broken out to the correct speed before the uplinks are created. This is done using the Breakout Switch Ports
option on the SFS UI home page. A breakout example is shown in the Change the port-group speed in the SFS UI section of
this guide.
L2 uplinks to the external network are configured as follows:
1. On the SFS UI home page, click 2. Create Uplink for External Network Connectivity.
2. On the Uplink Details page, select Layer 2. Enter a Name and optionally a Description.



Figure 20. Uplink details
3. Click NEXT.
4. On the Port Configuration page, select the uplink ports used on each leaf switch and set the LAG Mode to LACP or
Static. In this example, 100 GbE ports 1/1/53-1/1/54 are used on each switch, and the LAG mode is set to LACP.
NOTE: Be sure to configure the corresponding ports on the external switches with the same LAG mode. External switch
configuration examples using LACP are provided in the Configure external switches for L2 connections section of this
guide.

Figure 21. Uplink port configuration


5. Click NEXT.
6. VxRail Manager must be able to contact a DNS server to resolve hostnames during deployment. The External Management
VLAN is created to enable this, and the uplinks are added to it as follows: On the Network Configuration page, click ADD
NETWORK.

Figure 22. Network configuration page


7. In the dialog box that opens, provide a Name, Description (optional), and a VLAN ID for the External Management
network. In this example, VLAN ID 1811 from Table 3 is used.



Figure 23. Network details
8. Click OK.
9. Next to Tagged Networks, select the External Management VLAN created above, ExtMgmt-1811. Use the arrow button to
move it to the box on the right, as outlined in red in Figure 24. This makes the uplinks tagged members of the External
Management VLAN.

Figure 24. Uplink ports tagged in the External Management Network


10. Leave the box next to UnTagged Network set to None.
11. If networks automatically created through vCenter integration are to be extended on this uplink, select Yes. Otherwise,
select No. Yes is used in this example.
NOTE: Networks created through vCenter integration include the External Management, VSAN, vMotion, and VM
Networks created during VxRail deployment. It also includes networks added through OMNI post-deployment.
12. Click FINISH to apply the settings.
After uplink configuration, the SFS UI Home page appears, as shown in Figure 25.



Figure 25. SFS Home page after uplinks configured

Optionally, enter the show smartfabric uplinks command at the leaf switch CLI to view configured interfaces and
networks on the uplink.

NOTE: The following command output is from Leaf1A. The output for Leaf1B is the same.

S5248F-Leaf1A# show smartfabric uplinks


----------------------------------------------------------
Name : L2-to-external-network
Description :
ID : 8ca32653-854c-4347-af94-e6afaa136c3a
Media Type : ETHERNET
Native Vlan : 0
Untagged-network :
Networks : network-1811
Configured-Interfaces : D86ZZP2:ethernet1/1/54, D86ZZP2:ethernet1/1/53,
76K00Q2:ethernet1/1/54, 76K00Q2:ethernet1/1/53

Configure external switches for L2 connections


This section shows example configurations for both external switches for L2 connections to the SmartFabric.
NOTE: The external switches used in this example are Dell EMC PowerSwitch systems. If the external switches are Cisco
Nexus, see Appendix C.

NOTE: This is only an example. Modify your external switch configuration as needed for your network.

General settings
Configure the hostname, OOB management IP address, and OOB management route as shown.

External-A:

configure terminal
hostname External-A
interface mgmt1/1/1
no ip address
ip address 100.67.76.41/24
no shutdown
management route 100.67.0.0/16 100.67.76.254

External-B:

configure terminal
hostname External-B
interface mgmt1/1/1
no ip address
ip address 100.67.76.40/24
no shutdown
management route 100.67.0.0/16 100.67.76.254

Configure VLANs
Create the External Management VLAN. If traffic will be routed from the external switches to other external networks, assign a
unique IP address on each switch and configure VRRP to provide gateway redundancy. Set the VRRP priority. The switch with
the highest priority value becomes the master VRRP router. Assign the same virtual address to both switches.

External-A:

interface vlan1811
description External_Mgmt
ip address 172.18.11.252/24
vrrp-group 11
priority 150
virtual-address 172.18.11.254
no shutdown

External-B:

interface vlan1811
description External_Mgmt
ip address 172.18.11.253/24
vrrp-group 11
priority 100
virtual-address 172.18.11.254
no shutdown

Configure interfaces
Configure the interfaces for connections to the SFS leaf switches. Interfaces 1/1/13 and 1/1/14 are configured in VLT port
channel 100 in this example. Port-channel 100 is set as an LACP port channel with the channel-group 100 mode
active command.
Use the switchport mode trunk command to enable the port channel to carry traffic for multiple VLANs. Configure the
port channel as tagged on VLAN 1811 (the External Management VLAN).
Optionally, allow the forwarding of jumbo frames with the mtu 9216 command.
In this example, interface 1/1/1 on each external switch is configured in VLT port channel 1 for connections to the DNS/NTP
server. Port-channel 1 is set as an LACP port channel with the channel-group 1 mode active command.
Configure ports directly connected to nodes, servers, or other endpoints as STP edge ports. As a best practice, flow control
settings remain at their factory defaults as shown.

External-A and External-B (commands are identical on both switches):

interface port-channel100
description "To Leaf1A/1B"
no shutdown
switchport mode trunk
switchport trunk allowed vlan 1811
mtu 9216
vlt-port-channel 100

interface range ethernet1/1/13-1/1/14
description "To Leaf1A/1B"
no switchport
channel-group 100 mode active
mtu 9216
no shutdown

interface port-channel1
description "To DNS/NTP"
no shutdown
switchport access vlan 1811
vlt-port-channel 1
spanning-tree port type edge

interface ethernet1/1/1
description "To DNS/NTP"
no switchport
channel-group 1 mode active
no shutdown
flowcontrol receive on
flowcontrol transmit off

Configure VLT
This example uses interfaces 1/1/11 and 1/1/12 for the VLTi. Remove each interface from L2 mode with the no switchport
command.
Create the VLT domain. The backup destination is the OOB management IP address of the VLT peer switch. Configure the
interfaces used as the VLTi with the discovery-interface command.
As a best practice, use the vlt-mac command to manually configure the same VLT MAC address on both the VLT peer
switches. This improves VLT convergence time when a switch is reloaded.

CAUTION: Be sure the VLT MAC address is the same on both switches to avoid any unpredictable behavior.

If you do not configure a VLT MAC address, the MAC address of the primary peer is used as the VLT MAC address on both
switches.
NOTE: For more information about VLT, see the Dell EMC SmartFabric OS10 User Guide on the Dell EMC Networking OS10
Info Hub.
When the configuration is complete, exit configuration mode and save the configuration with the end and write memory
commands.

External-A:

interface range ethernet1/1/11-1/1/12
description VLTi
no switchport
no shutdown
flowcontrol receive on
flowcontrol transmit off

vlt-domain 255
backup destination 100.67.76.40
discovery-interface ethernet1/1/11-1/1/12
vlt-mac 00:00:01:02:03:20

end
write memory

External-B:

interface range ethernet1/1/11-1/1/12
description VLTi
no switchport
no shutdown
flowcontrol receive on
flowcontrol transmit off

vlt-domain 255
backup destination 100.67.76.41
discovery-interface ethernet1/1/11-1/1/12
vlt-mac 00:00:01:02:03:20

end
write memory

Validation
Once the uplink interfaces have been configured on the external switches and in the SFS UI, additional validation is done using
the switch CLI.
Show command output on External-A
NOTE: The command output shown in the following commands is for the External-A switch. The output for External-B is
similar.
Run the show vlan command to verify ports are correctly assigned to the External Management VLAN. Port channel 100
connects to the SFS leaf switches and is a tagged member of the same VLAN configured on the SmartFabric uplinks (VLAN
1811). It is tagged because it is also tagged on the SmartFabric leaf switches. The DNS/NTP server is connected on port channel
1, which is an access member of VLAN 1811 in this example.

External-A# show vlan


Codes: * - Default VLAN, M - Management VLAN, R - Remote Port Mirroring VLANs,
@ – Attached to Virtual Network, P - Primary, C - Community, I - Isolated
Q: A - Access (Untagged), T - Tagged
NUM Status Description Q Ports
* 1 Active A Eth1/1/2-1/1/10,1/1/15
A Po100,1000
1811 Active External_Mgmt T Po100,1000
A Po1
4094 Active T Po1000

The show port-channel summary command confirms that port channel 100, connected to the leaf switches, is up and active.
Port channel 1000 is the VLTi, and port channel 1 is connected to the DNS/NTP server.

External-A# show port-channel summary


Flags: D - Down I - member up but inactive P - member up and active
U - Up (port-channel) F - Fallback Activated
--------------------------------------------------------------------------------
Group Port-Channel Type Protocol Member Ports
--------------------------------------------------------------------------------
1 port-channel1 (U) Eth DYNAMIC 1/1/1(P)
100 port-channel100 (U) Eth DYNAMIC 1/1/13(P) 1/1/14(P)
1000 port-channel1000 (U) Eth STATIC 1/1/11(P) 1/1/12(P)

Show command output on Leaf1A

NOTE: The command output shown in the following commands is for Leaf1A. The output for Leaf1B is similar.

With SFS, port channel numbers are automatically assigned as they are created. Port channel 1 is the uplink connected to the
external switches and is up and active. Port channel 1000 is reserved for the VLTi.

Leaf1A# show port-channel summary

Flags: D - Down I - member up but inactive P - member up and active


U - Up (port-channel) F - Fallback Activated
--------------------------------------------------------------------------------
Group Port-Channel Type Protocol Member Ports
--------------------------------------------------------------------------------
1 port-channel1 (U) Eth DYNAMIC 1/1/53(P) 1/1/54(P)
1000 port-channel1000 (U) Eth STATIC 1/1/49(P) 1/1/50(P) 1/1/51(P) 1/1/52(P)

The L2 uplink, port channel 1 in this example, is added as a tagged member of VLAN 1811. This is verified at the CLI using the
show virtual-network command as follows:

Leaf1A# show virtual-network


Codes: DP - MAC-learn Dataplane, CP - MAC-learn Controlplane, UUD - Unknown-Unicast-Drop
Un-tagged VLAN: 4080
Virtual Network: 1811
VLTi-VLAN: 1811
Members:
VLAN 1811: port-channel1, port-channel1000
VxLAN Virtual Network Identifier: 1811
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Virtual Network: 3939


Description: In-band SmartFabric Services discovery network
VLTi-VLAN: 3939
Members:
VLAN 3939: port-channel1000, ethernet1/1/1, ethernet1/1/2, ethernet1/1/3
VxLAN Virtual Network Identifier: 3939
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Virtual Network: 4091


Description: Default untagged network for client onboarding
VLTi-VLAN: 4091
Members:
Untagged: ethernet1/1/1, ethernet1/1/2, ethernet1/1/3
VLAN 4091: port-channel1000
VxLAN Virtual Network Identifier: 4091
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

NOTE: Ethernet ports 1/1/1-1/1/3 are connected to the VxRail nodes. SFS automatically puts VxRail node ports in virtual
networks 3939 and 4091.

Configure L3 routed uplinks to the external network


Uplinks to the existing network may be configured as L2, L3 routed, or L3 VLAN. This section covers L3 routed uplinks.
NOTE: If L2 uplinks were configured in the preceding section, skip this section and go to the Configure a jump host port
section.

NOTE: L3 VLAN uplink configuration is beyond the scope of this guide.

Connections, port numbers, and networks used for external management are shown in the figure below. The External
Management VLAN is VLAN 1911 on the external switches and is VLAN 1811 on the SmartFabric switches.

Figure 26. L3 routed uplinks to the external network

Point-to-point IP networks
The point-to-point links used in this deployment are labeled A-E in Figure 27.

Figure 27. Point-to-point connections

Each L3 uplink is a separate, point-to-point IP network. Table 6 details the links labeled in Figure 27. The IP addresses in the
table below are used in the switch configuration examples.

Table 6. L3 routed uplink IP addresses


Link label  Source switch  Source IP address  Destination switch  Destination IP address  Network
A External-A 192.168.1.0 Leaf1A 192.168.1.1 192.168.1.0/31
B External-A 192.168.1.2 Leaf1B 192.168.1.3 192.168.1.2/31
C External-B 192.168.2.0 Leaf1A 192.168.2.1 192.168.2.0/31
D External-B 192.168.2.2 Leaf1B 192.168.2.3 192.168.2.2/31
E External-A 192.168.3.20 External-B 192.168.3.21 192.168.3.20/31
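
Once both ends of a link are configured (as shown later in this chapter), each /31 peer should answer a ping from its neighbor. As a minimal sanity check from External-A, assuming the addressing for link A in Table 6:

External-A# ping 192.168.1.1

A reply confirms that the point-to-point link is cabled and addressed correctly.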

BGP example
This section covers the L3 routed uplink configuration with BGP.

NOTE: If BGP is not used, go to the Static route example section.

BGP ASNs and router IDs


Figure 28 shows the autonomous system numbers (ASNs) and router IDs used for the external switches and SFS leaf switches
in this example. External switches share a common ASN, and all SFS leaf switches share a common ASN.

Figure 28. BGP ASNs and router IDs

NOTE: Using private ASNs in the data center is a best practice. Private, 2-byte ASNs range from 64512 through 65534.

In this example, ASN 65101 is used on both external switches. SFS leaf switches use ASN 65011 by default for all leafs in the
fabric.

NOTE: If L3 uplinks are connected from SFS spine switches, the spine switches use ASN 65012 by default.

The IP addresses shown on the external network switches in Figure 28 are loopback addresses used as BGP router IDs. On the
SmartFabric switches, BGP router IDs are automatically configured from the SFS default private subnet address block,
172.16.0.0/16.
NOTE: SFS default ASNs and IP address blocks may be changed by going to 5. Edit Default Fabric Settings in the SFS
web UI.
Configure L3 routed uplinks with BGP in SFS
The following table shows the values entered in the SFS web UI to configure the L3 uplinks for this example. The steps below
the table are run once for each uplink using the values in the table.

Table 7. L3 uplink configuration details with BGP


Field name Leaf1A-to-External-A Leaf1A-to-External-B Leaf1B-to-External-A Leaf1B-to-External-B
Uplink Type L3 Routed L3 Routed L3 Routed L3 Routed
Uplink Name Leaf1A-to-External-A Leaf1A-to-External-B Leaf1B-to-External-A Leaf1B-to-External-B
Switch Group Leaf Leaf Leaf Leaf
Rack Rack 1 Rack 1 Rack 1 Rack 1
Leaf Switch Leaf1A Leaf1A Leaf1B Leaf1B
Interface Ethernet 1/1/53 Ethernet 1/1/54 Ethernet 1/1/53 Ethernet 1/1/54
Network Name Leaf1A-to-ExtA Leaf1A-to-ExtB Leaf1B-to-ExtA Leaf1B-to-ExtB
IPv4 Address 192.168.1.1 192.168.2.1 192.168.1.3 192.168.2.3
Prefix Length 31 31 31 31
Routing Protocol eBGP eBGP eBGP eBGP
Profile Name eBGP-Leaf1A-to-ExtA eBGP-Leaf1A-to-ExtB eBGP-Leaf1B-to-ExtA eBGP-Leaf1B-to-ExtB
Peer IPv4 Address 192.168.1.0 192.168.2.0 192.168.1.2 192.168.2.2

Remote ASN 65101 65101 65101 65101

NOTE: Any ports available on the leaf switches may be used as uplinks, provided they are compatible with the
corresponding ports on the external switches. If leaf switch uplink ports will not use their native speeds, the interfaces must
first be broken out to the correct speed before the uplinks are created. This is done using the 1. Breakout Switch Ports
option on the SFS web UI home page. A breakout example is shown in the Change the port-group speed in the SFS web UI
section of this guide.
To configure L3 routed uplinks with BGP, do the following using the data from Table 7:
1. In the SFS web UI, select 2. Create Uplink for External Network Connectivity.

Figure 29. SFS web UI Home page


2. On the Uplink Details page:
a. Set Uplink Connectivity to Layer 3.
b. Leave Network Type set to L3 Routed.
c. Enter a unique Name and, optionally, a Description.
3. Click NEXT.
4. On the Port Configuration page:
a. Leave Switch Group set to Leaf.
b. Next to Rack, select the rack that contains the switches with the uplinks. In this example, Rack 1 is selected.
c. Next to Leaf Switch, select the first leaf, Leaf1A in this example.
d. Next to Configured Interface, select the first interface. In this example, 100 GbE interface 1/1/53 is selected.

Figure 30. Port configuration
5. Click NEXT.
6. On the Network Configuration page:
a. Enter a unique Name and, optionally, a Description.
b. Enter the Interface IP Address and Prefix length.
c. Select the Routing Protocol, eBGP.
d. Enter a unique Profile Name.
e. Enter the Peer Interface IP Address and Remote ASN.

Figure 31. Network Configuration page with BGP
7. Click FINISH. Repeat the steps in this section for the remaining three uplinks using the data from Table 7.
After uplink configuration, the SFS web UI Home page displays as shown in the figure below:

Figure 32. SFS Home page after uplinks configured

Individual uplinks created are visible on the Uplinks tab of the SFS web UI, as shown in the figure below:

Figure 33. SFS L3 uplinks created

Static route example


This section shows L3 routed uplink configuration with a static route.
NOTE: If BGP is used instead of a static route, continue to the Configure external switches for L3 connections section.

NOTE: Currently, only one static route per L3 uplink is allowed. If multiple routes are needed, use a default route, 0.0.0.0/0,
as the destination network, or add additional uplinks for specific networks. Support for multiple static routes per L3 uplink is
planned for a future release.
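
For example, to use a default route in place of the specific destination network shown later in Table 8, the static route fields on the Network Configuration page could be entered as follows (a hypothetical illustration using the Leaf1A-to-External-A next hop from this guide):

Network Address: 0.0.0.0
Prefix Length: 0
Next Hop IP Address: 192.168.1.0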

Configure L3 uplinks with a static route in SFS


The following table shows the values entered in the SFS web UI to configure the L3 uplinks for this example. The steps below
the table are run once for each uplink using the values from the table.

Table 8. L3 uplink configuration details with a static route


Field name Leaf1A-to-External-A Leaf1A-to-External-B Leaf1B-to-External-A Leaf1B-to-External-B
Uplink Type L3 Routed L3 Routed L3 Routed L3 Routed
Uplink Name Leaf1A-to-External-A Leaf1A-to-External-B Leaf1B-to-External-A Leaf1B-to-External-B
Switch Group Leaf Leaf Leaf Leaf
Rack Rack 1 Rack 1 Rack 1 Rack 1
Leaf Switch Leaf1A Leaf1A Leaf1B Leaf1B
Interface Ethernet 1/1/53 Ethernet 1/1/54 Ethernet 1/1/53 Ethernet 1/1/54
Network Name Leaf1A-to-ExtA Leaf1A-to-ExtB Leaf1B-to-ExtA Leaf1B-to-ExtB
IPv4 Address 192.168.1.1 192.168.2.1 192.168.1.3 192.168.2.3
Prefix Length 31 31 31 31
Routing Protocol Static Route Static Route Static Route Static Route
Policy Name Leaf1A-to-ExtA Leaf1A-to-ExtB Leaf1B-to-ExtA Leaf1B-to-ExtB
Network Address 172.19.11.0 172.19.11.0 172.19.11.0 172.19.11.0
Prefix Length 24 24 24 24

Next Hop IP Address 192.168.1.0 192.168.2.0 192.168.1.2 192.168.2.2

NOTE: Any ports available on the leaf switches may be used as uplinks, provided they are compatible with the
corresponding ports on the external switches. If leaf switch uplink ports will not use their native speeds, the interfaces must
first be broken out to the correct speed before the uplinks are created. This is done using the 1. Breakout Switch Ports
option on the SFS web UI home page. A breakout example is shown in the Change the port-group speed in the SFS web UI
section of this guide.
To configure L3 routed uplinks with a static route, perform the following steps:
1. In the SFS web UI, select 2. Create Uplink for External Network Connectivity.

Figure 34. SFS web UI Home page


2. On the Uplink Details page:
a. Set Uplink Connectivity to Layer 3.
b. Leave Network Type set to L3 Routed.
c. Enter a unique Name and, optionally, a Description.

Figure 35. Uplink Details screen


3. Click NEXT.

4. On the Port Configuration page:
a. Leave Switch Group set to Leaf.
b. Next to Racks, select the rack that contains the uplink switches. In this example, Rack 1 is selected.
c. Next to Leaf Switches, select the first leaf, Leaf1A in this example.
d. Next to Configured Interfaces, select the first interface. In this example, 100 GbE interface 1/1/53 is selected.

Figure 36. Port Configuration


5. Click NEXT.
6. On the Network Configuration page:
a. Enter a unique Name and, optionally, a Description.
b. Enter the Interface IP Address and Prefix length.
c. Leave the Routing Protocol set to Static Route.
d. Enter a unique Policy Name.
e. Enter the destination Network Address and Prefix Length. This is the external management network, 172.19.11.0/24,
in this example.
f. Enter the Next Hop IP Address. This is the IP address of the connected interface on the external switch.

Figure 37. Network Configuration page with a static route
7. Click FINISH.
Repeat the steps in this section for the remaining three uplinks using the data from Table 8. After the uplink configuration, the
SFS web UI Home page displays.

Figure 38. SFS Home page after uplink configuration

Individual uplinks created are visible on the Uplinks tab of the SFS web UI as shown.

Figure 39. SFS L3 uplinks created

Configure external switches for L3 connections


This section shows example configurations for both external switches for L3 routed connections to the SmartFabric.
NOTE: The external switches used in this example are Dell EMC PowerSwitch systems. If the external switches are Cisco
Nexus, see Appendix C.

NOTE: This is only an example. Modify your external switch configuration as needed for your network.

General settings
Configure the hostname, OOB management IP address, and management route.

External-A External-B

configure terminal configure terminal

hostname External-A hostname External-B

interface mgmt1/1/1 interface mgmt1/1/1


no ip address no ip address
ip address 100.67.76.41/24 ip address 100.67.76.40/24
no shutdown no shutdown

management route 100.67.0.0/16 100.67.76.254 management route 100.67.0.0/16 100.67.76.254

Configure VLANs
VLAN 1911 represents a preexisting management VLAN on the external network. DNS and NTP services are located on this
VLAN. Assign a unique IP address to the VLAN on each switch.
Configure VRRP to provide gateway redundancy. Set the VRRP priority. The switch with the highest priority value becomes the
master VRRP router. Assign the same virtual address to both switches.

External-A External-B

interface vlan1911 interface vlan1911


no shutdown no shutdown
ip address 172.19.11.252/24 ip address 172.19.11.253/24

vrrp-group 19 vrrp-group 19
priority 150 priority 100
virtual-address 172.19.11.254 virtual-address 172.19.11.254

Configure interfaces

Configure the interfaces for connections to the SFS switches. Ports 1/1/13 and 1/1/14 are configured as L3 interfaces. The IP
addresses used are from Table 6. Optionally, allow the forwarding of jumbo frames with the mtu 9216 command. As a best
practice, flow control settings remain at their factory defaults as shown.
In this example, VLT port channel 1 connects to the DNS/NTP server. It is on VLAN 1911, which represents the preexisting
management VLAN, and the port channel is configured as a spanning tree edge port.
Interface 1/1/1 on each external switch is configured in VLT port channel 1 for connections to the DNS/NTP server. Port-
channel 1 is set as an LACP port channel with the channel-group 1 mode active command.

External-A External-B

interface ethernet1/1/13 interface ethernet1/1/13


description Leaf1A description Leaf1A
no shutdown no shutdown
no switchport no switchport
mtu 9216 mtu 9216
ip address 192.168.1.0/31 ip address 192.168.2.0/31
flowcontrol receive on flowcontrol receive on
flowcontrol transmit off flowcontrol transmit off

interface ethernet1/1/14 interface ethernet1/1/14


description Leaf1B description Leaf1B
no shutdown no shutdown
no switchport no switchport
mtu 9216 mtu 9216
ip address 192.168.1.2/31 ip address 192.168.2.2/31
flowcontrol receive on flowcontrol receive on
flowcontrol transmit off flowcontrol transmit off

interface port-channel1 interface port-channel1


description "To DNS/NTP" description "To DNS/NTP"
no shutdown no shutdown
switchport access vlan 1911 switchport access vlan 1911
vlt-port-channel 1 vlt-port-channel 1
spanning-tree port type edge spanning-tree port type edge

interface ethernet1/1/1 interface ethernet1/1/1


description "To DNS/NTP" description "To DNS/NTP"
no switchport no switchport
channel-group 1 mode active channel-group 1 mode active
no shutdown no shutdown
flowcontrol receive on flowcontrol receive on
flowcontrol transmit off flowcontrol transmit off

Configure VLT
This example uses interfaces 1/1/11 and 1/1/12 for the VLTi. Remove each interface from L2 mode with the no switchport
command. As a best practice, flow control settings remain at their factory defaults, as shown.
Create the VLT domain. The backup destination is the OOB management IP address of the VLT peer switch. Configure the
interfaces used as the VLTi with the discovery-interface command.
As a best practice, use the vlt-mac command to manually configure the same VLT MAC address on both the VLT peer
switches. This improves VLT convergence time when a switch is reloaded.
CAUTION: Be sure the VLT MAC address is the same on both switches to avoid any unpredictable behavior.

If you do not configure a VLT MAC address, the MAC address of the primary peer is used as the VLT MAC address on both
switches.
NOTE: For more information about VLT, see the Dell EMC SmartFabric OS10 User Guide on the Dell EMC Networking OS10
Info Hub.

External-A External-B

interface range ethernet1/1/11-1/1/12 interface range ethernet1/1/11-1/1/12


description VLTi description VLTi
no shutdown no shutdown

no switchport no switchport
flowcontrol receive on flowcontrol receive on
flowcontrol transmit off flowcontrol transmit off

vlt-domain 255 vlt-domain 255


backup destination 100.67.76.40 backup destination 100.67.76.41
discovery-interface ethernet1/1/11-1/1/12 discovery-interface ethernet1/1/11-1/1/12
vlt-mac 00:00:01:02:03:20 vlt-mac 00:00:01:02:03:20

Configure BGP
NOTE: If BGP is not used, go to the Configure static routes section.

Configure a loopback interface to use for the BGP router ID.


Configure the BGP ASN with the router bgp command. The external switches share the same ASN. Use the address that
was set for interface loopback0 as the router ID.
Use the address-family ipv4 unicast and redistribute connected commands to redistribute IPv4 routes from
physically connected interfaces.
Configure the neighbor IP addresses and ASNs.
VLAN 4000 is used for the iBGP connection between the external switches. VLAN 4000 IP addresses are configured per Table 6.
When the configuration is complete, exit configuration mode and save the configuration with the end and write memory
commands.

External-A External-B

interface loopback0 interface loopback0


description router_ID description router_ID
no shutdown no shutdown
ip address 10.0.2.1/32 ip address 10.0.2.2/32

router bgp 65101 router bgp 65101


router-id 10.0.2.1 router-id 10.0.2.2

address-family ipv4 unicast address-family ipv4 unicast


redistribute connected redistribute connected

neighbor 192.168.1.1 neighbor 192.168.2.1


remote-as 65011 remote-as 65011
no shutdown no shutdown

neighbor 192.168.1.3 neighbor 192.168.2.3


remote-as 65011 remote-as 65011
no shutdown no shutdown

neighbor 192.168.3.21 neighbor 192.168.3.20


remote-as 65101 remote-as 65101
no shutdown no shutdown

interface vlan4000 interface vlan4000


description iBGP description iBGP
no shutdown no shutdown
ip address 192.168.3.20/31 ip address 192.168.3.21/31

end end
write memory write memory

Configure static routes
NOTE: If BGP is used, skip this section and go to the Validate BGP example section.

Configure two routes to the external management network, 172.18.11.0/24: one via the connected IP address of Leaf1A, and
one via Leaf1B.
When the configuration is complete, exit configuration mode and save the configuration with the end and write memory
commands.

External-A External-B

ip route 172.18.11.0/24 192.168.1.1 ip route 172.18.11.0/24 192.168.2.1

ip route 172.18.11.0/24 192.168.1.3 ip route 172.18.11.0/24 192.168.2.3

end end
write memory write memory

Validate BGP example


NOTE: This section shows validation commands for the BGP example. If static routes are used, skip this section and go to
the Validate static route example section.
Now that the uplink interfaces are configured on the external switches and on the SFS leaf switches, connectivity can be
verified using the switch CLI.
Show command output on External-A (BGP example)
NOTE: The command output shown in the following commands is for the External-A switch. The output for External-B is
similar.
Run the show ip bgp summary command to verify that BGP is up for each neighbor. When BGP is up, uptime is shown in
the Up/Down column. The neighbors for External-A shown in the output below are Leaf1A, Leaf1B, and External-B.

External-A# show ip bgp summary


BGP router identifier 10.0.2.1 local AS number 65101
Neighbor AS MsgRcvd MsgSent Up/Down State/Pfx
192.168.1.1 65011 1327 1316 19:09:00 4
192.168.1.3 65011 1325 1324 19:09:00 4
192.168.3.21 65101 1319 1315 19:01:18 5

Run the show ip interface brief command to verify connected interfaces are up and IP addresses are configured correctly. In
the output below, interface 1/1/1 and port channel 1 connect to the DNS/NTP server. Interfaces 1/1/13-1/1/14 are the links to the SFS leaf
switches, and 1/1/11-1/1/12 are the VLTi links. VLAN 1911 is the external management VLAN that contains the DNS/NTP server.
VLAN 4094 and port channel 1000 are automatically configured for the VLTi.
NOTE: Unused interfaces have been removed from the output for brevity.

External-A# show ip interface brief


Interface Name IP-Address OK Method Status Protocol
================================================================================
Ethernet 1/1/1 unassigned YES unset up up
Ethernet 1/1/11 unassigned YES unset up up
Ethernet 1/1/12 unassigned YES unset up up
Ethernet 1/1/13 192.168.1.0/31 YES manual up up
Ethernet 1/1/14 192.168.1.2/31 YES manual up up
Management 1/1/1 100.67.76.41/24 YES manual up up
Vlan 1 unassigned YES unset up up
Vlan 1911 172.19.11.252/24 YES manual up up
Vlan 4000 192.168.3.20/31 YES manual up up
Vlan 4094 unassigned YES unset up up
Port-channel 1 unassigned YES unset up up
Port-channel 1000 unassigned YES unset up up
Loopback 0 10.0.2.1/32 YES manual up up

The show ip route command output for the External-A switch appears as shown. No BGP routes from the SFS fabric are
learned at this stage of deployment. Interfaces 1/1/13 and 1/1/14 are connected to the SFS leaf switches.

External-A# show ip route


Codes: C - connected
S - static
B - BGP, IN - internal BGP, EX - external BGP
O - OSPF, IA - OSPF inter area, N1 - OSPF NSSA external type 1,
N2 - OSPF NSSA external type 2, E1 - OSPF external type 1,
E2 - OSPF external type 2, * - candidate default,
+ - summary route, > - non-active route
Gateway of last resort is not set
Destination Gateway Dist Last Change
----------------------------------------------------------------------------------
C 10.0.2.1/32 via 10.0.2.1 loopback0 0/0 00:39:19
B IN 10.0.2.2/32 via 192.168.3.21 200/0 00:31:38
C 172.19.11.0/24 via 172.19.11.252 vlan1911 0/0 00:44:00
C 192.168.1.0/31 via 192.168.1.0 ethernet1/1/13 0/0 01:44:44
C 192.168.1.2/31 via 192.168.1.2 ethernet1/1/14 0/0 01:40:50
B IN 192.168.2.0/31 via 192.168.3.21 200/0 00:31:38
B IN 192.168.2.2/31 via 192.168.3.21 200/0 00:31:38
C 192.168.3.20/31 via 192.168.3.20 vlan4000 0/0 00:31:51

Show command output on Leaf1A (BGP example)


NOTE: The command output shown in the following commands is for Leaf1A. The output for Leaf1B is similar.

Run the show ip bgp summary command to verify that BGP is up for each neighbor. When BGP is up, uptime is shown in
the Up/Down column. The neighbors for Leaf1A shown in the output below are Leaf1B, External-A, and External-B.

Leaf1A# show ip bgp summary


BGP router identifier 172.16.128.0 local AS number 65011
Neighbor AS MsgRcvd MsgSent Up/Down State/Pfx
172.16.0.0 65011 13 16 00:06:59 8
192.168.1.0 65101 12 14 00:07:30 8
192.168.2.0 65101 8 9 00:04:14 8

Run the show ip interface brief command to verify connected interfaces are up, and IP addresses are configured
correctly.
In the output below, interfaces 1/1/1-1/1/3 are connected to the VxRail nodes, 1/1/49-1/1/52 are the VLTi, and 1/1/53-1/1/54
are the uplinks to the external switches. SFS uses VLAN 4000-4090, Loopback 1, and Loopback 2 internally. VLAN 4094 and
port channel 1000 are automatically configured for the VLTi.
NOTE: Unused interfaces have been removed from the output for brevity.

Leaf1A# show ip interface brief


Interface Name IP-Address OK Method Status Protocol
================================================================================
Ethernet 1/1/1 unassigned YES unset up up
Ethernet 1/1/2 unassigned YES unset up up
Ethernet 1/1/3 unassigned YES unset up up
Ethernet 1/1/49 unassigned YES unset up up
Ethernet 1/1/50 unassigned YES unset up up
Ethernet 1/1/51 unassigned YES unset up up
Ethernet 1/1/52 unassigned YES unset up up
Ethernet 1/1/53 192.168.1.1/31 YES manual up up
Ethernet 1/1/54 192.168.2.1/31 YES manual up up
Management 1/1/1 100.67.76.30/24 YES manual up up
Vlan 4000 unassigned YES unset up up
Vlan 4089 unassigned YES unset up up
Vlan 4090 172.16.0.1/31 YES manual up up
Vlan 4094 unassigned YES unset up up
Port-channel 1000 unassigned YES unset up up
Loopback 1 172.16.128.0/32 YES manual up up
Loopback 2 172.30.0.0/32 YES manual up up
Virtual-network 3939 unassigned YES unset up up

Run the show ip route command to verify routes to the external management VLAN, 172.19.11.0/24, have been learned
using BGP from the external switches. In this example, two routes to 172.19.11.0/24 are learned, one using each external switch.

Leaf1A# show ip route


Codes: C - connected
S - static
B - BGP, IN - internal BGP, EX - external BGP
O - OSPF, IA - OSPF inter area, N1 - OSPF NSSA external type 1,
N2 - OSPF NSSA external type 2, E1 - OSPF external type 1,
E2 - OSPF external type 2, * - candidate default,
+ - summary route, > - non-active route
Gateway of last resort is not set
Destination Gateway Dist Last Change
----------------------------------------------------------------------------------
B EX 10.0.2.1/32 via 192.168.1.0 20/0 00:43:16
via 192.168.2.0
B EX 10.0.2.2/32 via 192.168.1.0 20/0 00:43:16
via 192.168.2.0
C 172.16.0.0/31 via 172.16.0.1 vlan4090 0/0 02:19:46
C 172.16.128.0/32 via 172.16.128.0 loopback1 0/0 02:20:07
B IN 172.16.128.1/32 via 172.16.0.0 200/0 02:19:44
B EX 172.19.11.0/24 via 192.168.1.0 20/0 00:43:32
via 192.168.2.0
C 172.30.0.0/32 via 172.30.0.0 loopback2 0/0 02:20:07
C 192.168.1.0/31 via 192.168.1.1 ethernet1/1/53 0/0 01:12:49
B IN 192.168.1.2/31 via 172.16.0.0 200/0 01:09:12
C 192.168.2.0/31 via 192.168.2.1 ethernet1/1/54 0/0 01:10:18
B IN 192.168.2.2/31 via 172.16.0.0 200/0 01:07:51
B EX 192.168.3.20/31 via 192.168.1.0 20/0 00:43:21
via 192.168.2.0

Validate static route example


NOTE: This section shows validation commands for the static route example. If BGP was used, skip this section and go to
the Configure a jump host port section.
Once the uplink interfaces have been configured on the external switches and in the SFS web UI, connectivity can be verified
using the switch CLI.
Show command output on External-A (static route example)
NOTE: The command output shown in the following commands is for the External-A switch. The output for External-B is
similar.
Run the show ip interface brief command to verify connected interfaces are up, and IP addresses are configured
correctly. In the output below, interface 1/1/1 and port channel 1 connect to the DNS/NTP server. 1/1/13-1/1/14 are the links to
the SFS leaf switches, and 1/1/11-1/1/12 are the VLTi links.
VLAN 1911 is the external management VLAN that contains the DNS/NTP server. VLAN 4094 and port channel 1000 are
automatically configured for the VLTi.
NOTE: Unused interfaces have been removed from the output for brevity.

External-A# show ip interface brief


Interface Name IP-Address OK Method Status Protocol
================================================================================
Ethernet 1/1/1 unassigned YES unset up up
Ethernet 1/1/11 unassigned YES unset up up
Ethernet 1/1/12 unassigned YES unset up up
Ethernet 1/1/13 192.168.1.0/31 YES manual up up
Ethernet 1/1/14 192.168.1.2/31 YES manual up up
Management 1/1/1 100.67.76.41/24 YES manual up up
Vlan 1 unassigned YES unset up up
Vlan 1911 172.19.11.252/24 YES manual up up
Vlan 4094 unassigned YES unset up up
Port-channel 1 unassigned YES unset up up
Port-channel 1000 unassigned YES unset up up

Run the show ip route command to verify static routes to the external management VLAN, 172.18.11.0/24, are properly
configured.

External-A# show ip route


Codes: C - connected
S - static
B - BGP, IN - internal BGP, EX - external BGP
O - OSPF, IA - OSPF inter area, N1 - OSPF NSSA external type 1,
N2 - OSPF NSSA external type 2, E1 - OSPF external type 1,
E2 - OSPF external type 2, * - candidate default,
+ - summary route, > - non-active route
Gateway of last resort is not set
Destination Gateway Dist Last Change
----------------------------------------------------------------------------------
S 172.18.11.0/24 via 192.168.1.1 ethernet1/1/13 1/0 3 days 23:35:18
via 192.168.1.3 ethernet1/1/14
C 172.19.11.0/24 via 172.19.11.252 vlan1911 0/0 3 days 23:26:55
C 192.168.1.0/31 via 192.168.1.0 ethernet1/1/13 0/0 21:58:31
C 192.168.1.2/31 via 192.168.1.2 ethernet1/1/14 0/0 21:58:33

Show command output on Leaf1A (static route example)


NOTE: The command output shown in the following commands is for Leaf1A. The output for Leaf1B is similar.

Run the show ip interface brief command to verify connected interfaces are up, and IP addresses are configured
correctly.
In the output below, interfaces 1/1/1-1/1/3 are connected to the VxRail nodes, 1/1/49-1/1/52 are the VLTi links, and
1/1/53-1/1/54 are the uplinks to the external switches.

NOTE: Unused interfaces have been removed from the output for brevity.

Leaf1A# show ip interface brief


Interface Name IP-Address OK Method Status Protocol
================================================================================
Ethernet 1/1/1 unassigned YES unset up up
Ethernet 1/1/2 unassigned YES unset up up
Ethernet 1/1/3 unassigned YES unset up up
Ethernet 1/1/49 unassigned YES unset up up
Ethernet 1/1/50 unassigned YES unset up up
Ethernet 1/1/51 unassigned YES unset up up
Ethernet 1/1/52 unassigned YES unset up up
Ethernet 1/1/53 192.168.1.1/31 YES manual up up
Ethernet 1/1/54 192.168.2.1/31 YES manual up up
Management 1/1/1 100.67.76.30/24 YES manual up up
Vlan 4000 unassigned YES unset up up
Vlan 4090 172.16.0.1/31 YES manual up up
Vlan 4094 unassigned YES unset up up
Port-channel 1000 unassigned YES unset up up
Loopback 1 172.16.128.0/32 YES manual up up
Loopback 2 172.30.0.0/32 YES manual up up
Virtual-network 3939 unassigned YES unset up up

Run the show ip route command to verify static routes to the external management VLAN, 172.19.11.0/24, are correctly
configured.
NOTE: Since BGP is used by SFS to exchange routes within the fabric, some BGP routes appear in the output.

Leaf1A# show ip route


Codes: C - connected
S - static
B - BGP, IN - internal BGP, EX - external BGP
O - OSPF, IA - OSPF inter area, N1 - OSPF NSSA external type 1,
N2 - OSPF NSSA external type 2, E1 - OSPF external type 1,
E2 - OSPF external type 2, * - candidate default,
+ - summary route, > - non-active route
Gateway of last resort is not set
Destination Gateway Dist Last Change
----------------------------------------------------------------------------------
C 172.16.0.0/31 via 172.16.0.1 vlan4090 0/0 00:40:43
C 172.16.128.0/32 via 172.16.128.0 loopback1 0/0 00:40:50
B IN 172.16.128.1/32 via 172.16.0.0 200/0 00:40:42
S 172.19.11.0/24 via 192.168.1.0 ethernet1/1/53 1/0 00:37:51
via 192.168.2.0 ethernet1/1/54
C 172.30.0.0/32 via 172.30.0.0 loopback2 0/0 00:40:50
C 192.168.1.0/31 via 192.168.1.1 ethernet1/1/53 0/0 00:37:56
B IN 192.168.1.2/31 via 172.16.0.0 200/0 00:35:52
C 192.168.2.0/31 via 192.168.2.1 ethernet1/1/54 0/0 00:36:57
B IN 192.168.2.2/31 via 172.16.0.0 200/0 00:34:51

Configure a jump host port


VxRail Manager is used for VxRail deployments. The VxRail Manager VM automatically runs on the master VxRail node, which is
the node with the lowest VxRail serial number.
NOTE: Before VxRail deployment, VxRail Manager is accessible on an untagged port on the SFS Client Management VLAN
(VLAN 4091 by default). The default IP address is 192.168.10.200.
VxRail Manager is accessed by connecting a laptop computer or a jump host directly to any available leaf switch port, as shown
in the following figure.

Figure 40. Jump host connected leaf switch for VxRail deployment

This section covers the configuration of a leaf switch port for connection to a jump host or laptop computer (referred to only as
a jump host for the remainder of this guide).

Change native port speed on S5200 series switches


If the jump host has a 1 GbE or 10 GbE NIC, and it is connected to a 25 GbE port on an S5200 series switch, the switch port
used must be changed from its native 25 GbE speed to 10 GbE for the port to come up.
If the jump host has a 1 GbE or 10 GbE NIC, and is connected to a 10 GbE port on an S4100 series leaf switch, or has a 25 GbE
NIC and connects to an S5200 series leaf switch, leave the port at its native speed, skip this section, and go to the Configure
the jump host interface section.
NOTE: When in 10 GbE mode, an S5200 series switch port will autonegotiate to 1 GbE when connected to a 1 GbE NIC. To
connect a jump host with a 1 GbE BASE-T or 10 GbE BASE-T port to an SFP port on the leaf switch, use a Dell EMC
supported SFP-1G-T or SFP-10G-T adapter.

Determine the port group
Determine the port group containing the port that the jump host will use. This is done by referencing the table below for S5200
series switches, or by running the show port-group command from the leaf switch CLI. For example, if the jump host is
connected to port 1/1/9, it is in port group 1/1/3.

Native physical interface name  Port group number  Native speed  Non-native speed  Non-native logical interface name  Applicable switches

Eth 1/1/1-1/1/4 1/1/1 25g-4x 10g-4x Eth 1/1/x:1 S5212/S5224/S5248/S5296
Eth 1/1/5-1/1/8 1/1/2 25g-4x 10g-4x Eth 1/1/x:1 S5212/S5224/S5248/S5296
Eth 1/1/9-1/1/12 1/1/3 25g-4x 10g-4x Eth 1/1/x:1 S5212/S5224/S5248/S5296
Eth 1/1/13-1/1/16 1/1/4 25g-4x 10g-4x Eth 1/1/x:1 S5224/S5248/S5296
Eth 1/1/17-1/1/20 1/1/5 25g-4x 10g-4x Eth 1/1/x:1 S5224/S5248/S5296
Eth 1/1/21-1/1/24 1/1/6 25g-4x 10g-4x Eth 1/1/x:1 S5224/S5248/S5296
Eth 1/1/25-1/1/28 1/1/7 25g-4x 10g-4x Eth 1/1/x:1 S5248/S5296
Eth 1/1/29-1/1/32 1/1/8 25g-4x 10g-4x Eth 1/1/x:1 S5248/S5296
Eth 1/1/33-1/1/36 1/1/9 25g-4x 10g-4x Eth 1/1/x:1 S5248/S5296
Eth 1/1/37-1/1/40 1/1/10 25g-4x 10g-4x Eth 1/1/x:1 S5248/S5296
Eth 1/1/41-1/1/44 1/1/11 25g-4x 10g-4x Eth 1/1/x:1 S5248/S5296
Eth 1/1/45-1/1/48 1/1/12 25g-4x 10g-4x Eth 1/1/x:1 S5248/S5296
Eth 1/1/49-1/1/52 1/1/13 25g-4x 10g-4x Eth 1/1/x:1 S5296
Eth 1/1/53-1/1/56 1/1/14 25g-4x 10g-4x Eth 1/1/x:1 S5296
Eth 1/1/57-1/1/60 1/1/15 25g-4x 10g-4x Eth 1/1/x:1 S5296
Eth 1/1/61-1/1/64 1/1/16 25g-4x 10g-4x Eth 1/1/x:1 S5296
Eth 1/1/65-1/1/68 1/1/17 25g-4x 10g-4x Eth 1/1/x:1 S5296
Eth 1/1/69-1/1/72 1/1/18 25g-4x 10g-4x Eth 1/1/x:1 S5296
Eth 1/1/73-1/1/76 1/1/19 25g-4x 10g-4x Eth 1/1/x:1 S5296
Eth 1/1/77-1/1/80 1/1/20 25g-4x 10g-4x Eth 1/1/x:1 S5296
Eth 1/1/81-1/1/84 1/1/21 25g-4x 10g-4x Eth 1/1/x:1 S5296
Eth 1/1/85-1/1/88 1/1/22 25g-4x 10g-4x Eth 1/1/x:1 S5296
Eth 1/1/89-1/1/92 1/1/23 25g-4x 10g-4x Eth 1/1/x:1 S5296
Eth 1/1/93-1/1/96 1/1/24 25g-4x 10g-4x Eth 1/1/x:1 S5296

NOTE: Changing the speed affects all ports in the port group. In this example, setting port group 1/1/3 to 10g-4x
changes ports 1/1/9-1/1/12 to 10 GbE, and the ports are renamed 1/1/9:1-1/1/12:1.

Change the port-group speed in the SFS web UI


In this section, port group 1/1/3 is changed from its 4x25 G native speed to 4x10 G to accommodate a jump host connected to
port 1/1/9 on Leaf1A.
1. On the SFS web UI Home page, click 1. Breakout Switch Ports.
2. Select the Rack, Switch, and Port group. Set Breakout Capabilities to 4X10GEFixedFormFactor, as shown in the
following figure.

Figure 41. Configure switch ports
3. Click OK to apply the setting.
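
Optionally, the breakout can be confirmed from the leaf switch CLI. A minimal check, assuming the setting has been applied, is to list the renamed interface used by the jump host:

Leaf1A# show interface status | grep 1/1/9

The renamed 10 GbE interface 1/1/9:1 should appear in the output in place of the original 25 GbE port 1/1/9.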

Configure the jump host interface


The jump host interface is configured as follows:
1. On the SFS web UI Home page, click 2. Configure Jump Host.
2. In the Configure Jump Host window:
a. Enter a Name, such as Jump Host 1.
b. (Optional) Enter a Description.
c. Select the Rack, for example, Rack 1.
d. Select the Switch, for example, Leaf1A.
e. Select the Configured Interface that the jump host uses. In this example, it is 1/1/9:1.
NOTE: Port 1/1/9 was automatically renamed to 1/1/9:1 when its port group was changed from its native setting of
4x 25 GbE to 4x 10 GbE.
f. Next to Untagged Network, leave the network set to Client_Management_Network, as shown in the figure below.

Figure 42. Configure jump host
3. Click OK to apply the settings. The jump host interface is added as an untagged member of the
Client_Management_Network (VLAN 4091 by default). Optionally, from the CLI, run the show virtual-network
command to validate the settings.
NOTE: The output below is with an L3 uplink. If an L2 uplink to the external network is configured, the External
Management Network, VLAN 1811 in this guide, also appears in the command output along with the L2 uplink port channel
as a member. An output example with an L2 uplink is shown in the Show command output on Leaf1A listing in the Validation
section.
VLAN 3939 contains port channel 1000 (the VLTi) and the three interfaces connected to the VxRail nodes, 1/1/1-1/1/3. VLAN
4091 contains port channel 1000, the three interfaces connected to the VxRail nodes, and the jump host port, ethernet 1/1/9:1.

NOTE: The jump host port is only configured on one of the leaf switches.

Leaf1A# show virtual-network


Codes: DP - MAC-learn Dataplane, CP - MAC-learn Controlplane, UUD - Unknown-Unicast-Drop
Un-tagged VLAN: 4080
Virtual Network: 3939
Description: In-band SmartFabric Services discovery network
VLTi-VLAN: 3939
Members:
VLAN 3939: port-channel1000, ethernet1/1/1, ethernet1/1/2, ethernet1/1/3
VxLAN Virtual Network Identifier: 3939
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Virtual Network: 4091


Description: Default untagged network for client onboarding
VLTi-VLAN: 4091
Members:
Untagged: ethernet1/1/1, ethernet1/1/2, ethernet1/1/3, ethernet1/1/9:1
VLAN 4091: port-channel1000
VxLAN Virtual Network Identifier: 4091
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Configure the jump host IP addresses


By default, the initial VxRail Manager IP address is 192.168.10.200/24, and it is in VLAN 4091. After the initial configuration, the
VxRail Manager address changes to its new address on the External Management VLAN (VLAN 1811 in this example). The new
VxRail Manager address used in this guide is 172.18.11.72/24 per the planning data in Table 4.
During installation, the jump host must be able to reach both the initial and new VxRail Manager addresses, so two addresses are
configured on its network adapter, one for each network.
The IP addresses are configured on the jump host NIC in this example as follows:
● 192.168.10.201/24, to communicate with the initial VxRail Manager address, 192.168.10.200/24
● 172.18.11.201/24, to communicate with the new VxRail Manager address, 172.18.11.72/24
NOTE: Both addresses may be configured simultaneously if the network adapter supports it, or in sequence if required.
During VxRail deployment, the jump host port on the switch is automatically moved from VLAN 4091 to the External
Management VLAN, along with VxRail Manager. Since the jump host port on the switch is untagged in VLAN 4091, and will
also be untagged in the External Management VLAN, no VLAN information is configured on the jump host NIC.
Once the jump host has been configured with an IP address on the 192.168.10.0/24 network, verify the jump host can
communicate with VxRail Manager by pinging 192.168.10.200 from the jump host.

Figure 43. Jump host successfully pings VxRail Manager
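
For example, on a Linux jump host, both addresses can be added to a single NIC and the initial connectivity test run as follows (a sketch; the interface name eth0 is an assumption, and the commands require root privileges):

ip addr add 192.168.10.201/24 dev eth0
ip addr add 172.18.11.201/24 dev eth0
ping 192.168.10.200

On a Windows jump host, the second address can be added under the advanced TCP/IP settings of the network adapter instead.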


6
Deploy VxRail

Initial VxRail cluster deployment steps


In this chapter, the three VxRail nodes in Rack 1 are deployed as a three-node cluster. VLANs specified during VxRail
deployment are automatically configured in the SmartFabric.
1. In a browser on the jump host, go to https://192.168.10.200 to connect to VxRail Manager. The Welcome to VxRail screen
displays, as shown in the figure below.

Figure 44. VxRail Welcome screen


2. From the Welcome screen, click GET STARTED.
3. On the EULA page, review the terms provided, and if you agree, check the ACCEPT box and then click NEXT.
4. On the Cluster Type page, select the Standard Cluster (3 or more hosts) option and click NEXT.

Figure 45. VxRail Cluster type

NOTE: 2-node VxRail clusters are not currently supported with SFS.
5. On the Discover Resources page, wait for all the VxRail hosts in the rack and the SmartFabric switch cluster (I3_Fabric) to
be discovered.

NOTE: Discovery may take about 5 minutes. If necessary, click the Refresh icon to refresh the Hosts section or
the Top-of-Rack Switch section.

Figure 46. Hosts and SmartFabric discovered

The three VxRail nodes and the SmartFabric switch cluster are discovered.
6. Click NEXT.
7. On the Configuration Method page, select your preferred configuration method, Step-by-step user input, or Upload a
configuration file. Either configuration method may be used.
NOTE: A JSON-formatted configuration file may be used if you have saved one from a previous installation using the
same versions of VxRail and SmartFabric OS10, or if you have been provided one from your sales representative. If you
do not have a configuration file, select Step-by-step user input.
8. Click NEXT.
9. The values entered for screens 6 through 10 (Global Settings through Virtual Network Settings) of the deployment
wizard are listed in Table 4 in the Deployment Planning chapter.
NOTE: Step-by-step VxRail configuration screens are not in this guide, but are provided in the VxRail Appliance
Installation Procedures that are available on Dell Technologies SolVe Online (account required).
10. From the Configure Switch screen, enter the default REST_USER password, admin, and click CONFIGURE SWITCH.
NOTE: The REST_USER account is used by VxRail and OMNI to configure the switches.

Figure 47. Configure Switch screen

The REST_USER Change Default Password screen displays.


11. Enter and confirm a new password then click OK. The switch configuration is updated, and a Success message displays.

Figure 48. Switch configuration Credentials confirmation screen


12. Click NEXT. The Validate Configuration screen displays.

CAUTION: Do not click the VALIDATE CONFIGURATION button at this time.

Figure 49. Validate Configuration screen


NOTE: Before proceeding, be sure to keep the browser window that displays the Validate Configuration screen above
open on the jump host.
Optionally, the switch configuration can be verified at the CLI with the show virtual-network command.
The command output below shows the External Management, vMotion, vSAN, and VM network VLANs (VLANs 1811 through
1815 from Table 4) specified on the VxRail deployment screens have been automatically configured on the leaf switches. The
VxRail node ports, interfaces 1/1/1-1/1/3, and the VLTi port channel, port channel 1000, are members of all VLANs. The jump
host port, interface 1/1/9:1, is still a member of VLAN 4091.
NOTE: The output below is with an L3 uplink. If an L2 uplink is configured, the uplink port channel also appears as a
member of VLANs 1811 through 1815.

Leaf1A# show virtual-network


Codes: DP - MAC-learn Dataplane, CP - MAC-learn Controlplane, UUD - Unknown-Unicast-Drop
Un-tagged VLAN: 4080
Virtual Network: 1811
VLTi-VLAN: 1811
Members:
VLAN 1811: port-channel1000, ethernet1/1/1, ethernet1/1/2, ethernet1/1/3
VxLAN Virtual Network Identifier: 1811
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Virtual Network: 1812


VLTi-VLAN: 1812
Members:
VLAN 1812: port-channel1000, ethernet1/1/1, ethernet1/1/2, ethernet1/1/3
VxLAN Virtual Network Identifier: 1812
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Virtual Network: 1813


VLTi-VLAN: 1813
Members:
VLAN 1813: port-channel1000, ethernet1/1/1, ethernet1/1/2, ethernet1/1/3
VxLAN Virtual Network Identifier: 1813
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Virtual Network: 1814


VLTi-VLAN: 1814
Members:
VLAN 1814: port-channel1000, ethernet1/1/1, ethernet1/1/2, ethernet1/1/3
VxLAN Virtual Network Identifier: 1814
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Virtual Network: 1815
VLTi-VLAN: 1815
Members:
VLAN 1815: port-channel1000, ethernet1/1/1, ethernet1/1/2, ethernet1/1/3
VxLAN Virtual Network Identifier: 1815
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Virtual Network: 3939


Description: In-band SmartFabric Services discovery network
VLTi-VLAN: 3939
Members:
VLAN 3939: port-channel1000, ethernet1/1/1, ethernet1/1/2, ethernet1/1/3
VxLAN Virtual Network Identifier: 3939
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Virtual Network: 4091


Description: Default untagged network for client onboarding
VLTi-VLAN: 4091
Members:
Untagged: ethernet1/1/1, ethernet1/1/2, ethernet1/1/3, ethernet1/1/9:1
VLAN 4091: port-channel1000
VxLAN Virtual Network Identifier: 4091
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Additional configuration steps for L3 uplinks


NOTE: If using L2 uplinks, skip this section and go to Validate and build VxRail cluster.

Traffic on the External Management network, VLAN 1811, must be able to reach the DNS server on the external network during
VxRail deployment. To accomplish this with L3 uplinks, an IP address is assigned to each leaf switch on virtual network 1811. An
anycast gateway address shared by all leafs is also configured on the same network.
Since this is on virtual network 1811, available IP addresses in the 172.18.11.0/24 address block are used per the planning data in
Table 3.

Table 9. Leaf switch External Management network IP addresses and anycast gateway
Item IP address/prefix
Leaf1A IP address 172.18.11.253/24
Leaf1B IP address 172.18.11.252/24
Gateway IP address 172.18.11.254/24

NOTE: If present, additional leaf switches in the fabric will also need one IP address per leaf on this network.

The IP addresses and gateway are configured as follows:


1. From a workstation, launch the SFS UI.
2. On the SFS UI Home page, select 3. Update Network Configuration. The Update Network Configuration window
opens.

Figure 50. Update network configuration window

a. Next to Network, select the External Management network, Management Network 1811, from the drop-down list.
b. Next to Enable IP Address, select IPv4.
NOTE: When IPv4 is selected, additional fields display, as shown in the figure below.
c. Next to Interface IP Addresses, enter an interface IP address for each leaf switch in the SmartFabric, as shown in
Table 9. Click the blue Add button to add IP address entry fields.
NOTE: If you plan to expand the fabric, additional leaf switches will also need IP addresses on this network, with one
IP address per leaf. This is covered in Expand SmartFabric and VxRail cluster to multirack.
d. Enter the Prefix Length for the IP addresses and a Gateway IP Address. These values are from Table 9.
When complete, the Update Network Configuration window shows the following configuration options:

Figure 51. Update network configuration window
3. Click OK.
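
Optionally, once the addresses are applied, basic reachability from the fabric toward the external network can be spot-checked from a leaf switch CLI. A minimal sketch, assuming the L3 uplinks are up and using the VRRP gateway address from the Configure VLANs section (172.19.11.254):

Leaf1A# ping 172.19.11.254

A reply indicates the leaf switches can route to the external management network where the DNS and NTP services reside.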

BGP Validation
NOTE: If static routes are used, go to the Validate and build VxRail cluster section. (Static route validation was done earlier
in the Validate static route example section of this guide).
If BGP is used on the uplinks, ensure the external switches have learned the routes to the VxRail External Management network,
172.18.11.0/24 in this example, to reach the VxRail nodes and VxRail Manager. This is done with the show ip route
command. The BGP-discovered route to 172.18.11.0/24 is shown in bold in the output below.
NOTE: The command output shown is for the External-A switch. The output for External-B is similar. BGP verification from
the leaf switches was done in Show command output on Leaf1A (BGP example).

NOTE: Command output from a Cisco Nexus switch is shown in Appendix C: BGP validation on N9K-External-A during
VxRail deployment.

External-A# show ip route


Codes: C - connected
S - static
B - BGP, IN - internal BGP, EX - external BGP
O - OSPF, IA - OSPF inter area, N1 - OSPF NSSA external type 1,
N2 - OSPF NSSA external type 2, E1 - OSPF external type 1,
E2 - OSPF external type 2, * - candidate default,
+ - summary route, > - non-active route
Gateway of last resort is not set
Destination Gateway Dist/Metric Change
----------------------------------------------------------------------------------
C 10.0.2.1/32 via 10.0.2.1 loopback0 0/0 18:36:55
B IN 10.0.2.2/32 via 192.168.3.21 200/0 18:16:18
C 172.19.11.0/24 via 172.19.11.252 vlan1911 0/0 18:29:16
B EX 172.18.11.0/24 via 192.168.1.1 20/0 16:02:33
via 192.168.1.3
C 192.168.1.0/31 via 192.168.1.0 ethernet1/1/13 0/0 21:10:53
C 192.168.1.2/31 via 192.168.1.2 ethernet1/1/14 0/0 18:36:56
B IN 192.168.2.0/31 via 192.168.3.21 200/0 21:10:51
B IN 192.168.2.2/31 via 192.168.3.21 200/0 18:16:18
C 192.168.3.20/31 via 192.168.3.20 vlan4000 0/0 18:29:12

Validate and build VxRail cluster


1. Return to the Validate Configuration screen in the VxRail deployment wizard on the jump host.

Figure 52. Validate Configuration screen


2. Click VALIDATE CONFIGURATION.
NOTE: VxRail Manager must be able to reach the DNS server for validation to succeed. Expand Show history to view
the validation status. Validation may take 5 to 10 minutes.
When validation is complete, a message indicating whether the configuration has passed or failed displays, as shown in the
figure below.
NOTE: If validation failed, address the items that failed and validate again.

Figure 53. Validation successful

NOTE: Once validation passes, Dell Technologies recommends clicking the DOWNLOAD CONFIGURATION FILE
button to save a JSON file with your VxRail settings.
3. Click NEXT.
4. Click APPLY CONFIGURATION.

Figure 54. Apply Configuration screen

The deployment begins.

Figure 55. VxRail deployment starts


5. Expand the Show Details section to view the deployment details.
6. When the deployment is about 27 percent complete, the Redirected to new address prompt displays.

NOTE: Ensure the jump host NIC has an IP address on the new network, 172.18.11.0/24 in this example, before
proceeding with the next step. The jump host port on the leaf switch is untagged, so do not configure a VLAN ID on the
jump host NIC.
7. Click YES. You are automatically redirected to the new VxRail Manager IP address in the browser, and VxRail deployment
continues, as shown in the figure below.
NOTE: If the Redirected to new address prompt does not appear when deployment is about 27 percent complete,
and the screen has not updated for at least 5 minutes, perform the following steps:
a. At the CLI of the leaf switch that the jump host is connected to, run the show virtual-network command.
b. Make sure the port that the jump host is connected to (Leaf1A, port 1/1/9:1 in this example) has automatically been
moved from VLAN 4091 to the external management VLAN (1811 in this example).
c. Once the jump host port has moved to the external management VLAN, manually change the IP address in the
browser's address bar from 192.168.10.200 to the new VxRail Manager address, 172.18.11.72. The address is shown in
the figure below. Leave the rest of the URL as-is. The browser connects to the new address and the deployment
continues as shown in the figure below.

Figure 56. Deployment continues using new VxRail Manager address

The switch port connected to the jump host is automatically moved from VLAN 4091 to the External Management VLAN,
VLAN 1811, on the leaf switch to enable it to reach VxRail Manager on the new network.
(Optional) To verify the change, run the show virtual-network command on the leaf switch that the jump host is
connected to. In the output below, the jump host port 1/1/9:1 is now untagged in VLAN 1811 and is no longer in VLAN 4091.
NOTE: Virtual networks 1812 through 1815 and 3939 have been removed from the output below for brevity. The output
below is with an L3 uplink. If an L2 uplink is configured, the uplink port channel also appears as a member of VLAN 1811.

Leaf1A# show virtual-network


Codes: DP - MAC-learn Dataplane, CP - MAC-learn Controlplane, UUD - Unknown-Unicast-
Drop
Un-tagged VLAN: 4080
Virtual Network: 1811
VLTi-VLAN: 1811
Members:
Untagged: ethernet1/1/9:1
VLAN 1811: port-channel1000, ethernet1/1/1, ethernet1/1/2, ethernet1/1/3
VxLAN Virtual Network Identifier: 1811
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Virtual Network: 4091


Description: Default untagged network for client onboarding
VLTi-VLAN: 4091
Members:
Untagged: ethernet1/1/1, ethernet1/1/2, ethernet1/1/3
VLAN 4091: port-channel1000
VxLAN Virtual Network Identifier: 4091
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Deployment takes about one hour for a four-node cluster. When VxRail is successfully deployed, the VxRail Cluster
Successfully Configured message displays, as shown.

Figure 57. VxRail Successfully Configured screen


8. Click LAUNCH VCENTER to manage the cluster.
NOTE: The jump host or workstation used must be able to reach the DNS server to resolve the hostname of the
vCenter server for the connection to succeed.

NOTE: If prompted, click the LAUNCH VSPHERE CLIENT (HTML5) button. The older Flash-based vSphere Web
Client (Flex) is deprecated and is not used in this guide.
9. Log in using your vCenter credentials. In this example, the username is administrator@vsphere.local.
The Hosts and Clusters page of the vSphere Client appears, as shown in the figure below.

Figure 58. Newly created VxRail cluster

CAUTION: Review any warnings that may appear in the vSphere Client.

Chapter 7: Expand to Multirack

Expand SmartFabric and VxRail cluster to multirack


CAUTION: Do not connect any new switches in SmartFabric mode to the fabric until the preferred master
settings are validated on the existing and new switches. This is covered in the Verify preferred master setting
before fabric expansion section of this chapter.
In this chapter, the SmartFabric is expanded to a second rack with the addition of Spine1, Spine2, Leaf2A, and Leaf2B. After the
fabric is expanded, VxRail node 4 in Rack 2 is added to the existing VxRail cluster.
NOTE: SFS supports up to 20 switches and eight racks in the fabric. Any combination of leafs and spines may be used, with
the exceptions that leaf switches must be deployed in pairs and at least two spines must be used. VxRail does not support
single-spine deployments.

Figure 59. SmartFabric and VxRail cluster expansion

Verify preferred master setting before fabric expansion
During fabric expansion, the newly added switches may come up and form a fabric among themselves and elect a master before
they are connected to the existing fabric. When the new fabric merges with the running fabric, it is possible for the master
switch from the new leaf switches to overwrite the configuration in the existing fabric. It is critical to ensure a pair of leaf
switches in the existing fabric is configured to be the "preferred master" before expanding the fabric.
When you create an uplink to the external network using the SFS UI or OMNI, the preferred master is automatically set on all
leaf switches in the fabric at that time.

NOTE: Spine switches are never elected SmartFabric master or preferred master switches.

Check preferred master status on existing leafs


Before connecting additional leaf switches to the fabric, verify the preferred master is set on at least one pair of leaf switches in
the SmartFabric. This is done by running the show smartfabric cluster command on each leaf in the existing fabric.
The output for at least one pair of leaf switches in the SmartFabric must show PREFERRED-MASTER is set to true. The
following commands and output are from Leaf1A and Leaf1B:

Leaf1A# show smartfabric cluster


----------------------------------------------------------
CLUSTER DOMAIN ID : 100
VIP : fde2:53ba:e9a0:cccc:0:5eff:fe00:1100
ROLE : BACKUP
SERVICE-TAG : 690ZZP2
MASTER-IPV4 : 100.67.76.29
PREFERRED-MASTER : true
----------------------------------------------------------

Leaf1B# show smartfabric cluster


----------------------------------------------------------
CLUSTER DOMAIN ID : 100
VIP : fde2:53ba:e9a0:cccc:0:5eff:fe00:1100
ROLE : MASTER
SERVICE-TAG : 68X00Q2
MASTER-IPV4 : 100.67.76.29
PREFERRED-MASTER : true
----------------------------------------------------------

For the example in this guide, there are only two leaf switches in the SmartFabric at this stage of deployment. However, if there
are additional leaf switches in the SmartFabric when the uplink is created, they will also show PREFERRED-MASTER set to
true.
If the fabric was previously expanded after the uplink was created, the added leafs will not have PREFERRED-MASTER set to
true. This is acceptable as long as PREFERRED-MASTER is set to true on at least one pair of leaf switches in the SmartFabric.

Create an uplink if needed


NOTE: This section only applies if there are no leaf switch pairs in the SmartFabric with PREFERRED-MASTER set to
true. Otherwise, go to the Check preferred master status on new leafs section.

If leaf switch pairs in the existing SmartFabric do not show PREFERRED-MASTER set to true, create an uplink by following the
instructions in the Configure L2 uplinks to the external network or Configure L3 routed uplinks to the external network sections.
After the uplink is created, return to the preceding section and check the preferred master setting again.
If you are using a demo or lab environment that does not require an uplink, create a temporary uplink to set all the leaf switches
in the SmartFabric as the preferred master.

NOTE: Physical port connections are not required to create this temporary uplink.

Create a temporary uplink as follows:

1. On the SFS UI Home page, select 2. Create Uplink for External Network Connectivity.
2. On the Uplink Details page:
a. Next to Uplink Connectivity, leave Layer 2 selected.
b. Enter a Name, such as temp.
3. Click NEXT.
4. On the Port Configuration page:
a. Next to Racks, select any rack.
b. Next to Configured Interfaces, select an available interface on either switch.
NOTE: You cannot use this interface for other purposes until you delete the uplink.
c. Leave the LAG mode set to LACP.
5. Click NEXT > FINISH.
After the uplink is created, verify all leaf switches in the SmartFabric show PREFERRED-MASTER is set to true.
To make the interface used in the temporary uplink available for other purposes, you can delete the uplink without affecting the
preferred master setting by performing the following steps:
1. On the SFS UI Uplinks page, select the uplink by name, temp in this example.
2. Click DELETE > OK.
The port used for the temporary uplink is now available.
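(Optional) To confirm that the preferred master setting survives the deletion, check each leaf again. A minimal spot-check, assuming Leaf1A from the earlier examples and trimming the output with the CLI grep filter:

Leaf1A# show smartfabric cluster | grep PREFERRED-MASTER
PREFERRED-MASTER : true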

Check preferred master status on new leafs


Run the show smartfabric cluster command on each leaf switch to be added to the SmartFabric and ensure none show
PREFERRED-MASTER set to true. These switches should still be in their factory default mode, Full Switch mode, and the
command and output should appear as follows on each switch:

OS10# show smartfabric cluster

----------------------------------------------------------
CLUSTER DOMAIN ID :
VIP : unknown
ROLE : unknown
SERVICE-TAG : unknown
MASTER-IPV4 :
PREFERRED-MASTER :
----------------------------------------------------------

If any leaf switch to be added to the SmartFabric shows PREFERRED-MASTER is set to true, the switch configuration should
be cleared. This is done by taking each affected leaf switch out of SmartFabric mode and returning to Full Switch mode with the
following commands:

OS10# configure terminal


OS10(config)# no smartfabric l3fabric
Reboot to change the personality? [yes/no]:y

After the switch reloads, run show smartfabric cluster again on each affected leaf switch to confirm PREFERRED-
MASTER is no longer set to true.

NOTE: New switches will be placed in SmartFabric mode in the Add switches to SmartFabric section of this chapter.

Configure management settings for new switches


A unique IP address is configured on the OOB management interface of each switch to be added to the SmartFabric. A
management route is also configured if routing is used on the OOB management network.
NOTE: Configure a unique OOB management IP address for each switch. The management route should not be 0.0.0.0/0,
as it may interfere with the data network’s default route. Use a specific destination prefix, as shown in the example below.

Run the following command on each switch to be added to the SmartFabric:

OS10# configure terminal


OS10(config)# interface mgmt 1/1/1
OS10(conf-if-ma-1/1/1)# no ip address dhcp
OS10(conf-if-ma-1/1/1)# ip address 100.67.127.26/24
OS10(conf-if-ma-1/1/1)# no shutdown
OS10(conf-if-ma-1/1/1)# exit
OS10(config)# management route 100.67.0.0/16 100.67.76.254
OS10(config)# end
OS10# write memory

NOTE: If the message % Error: ZTD is in progress(configuration is locked) prevents entry into configuration
mode, enter the ztd cancel command to proceed.

Other global settings may also be configured here, such as ip name-server and ntp server if used by the switch. These
settings are not required for the deployment example in this guide. The hostname of the switch may be configured at the CLI or
in the SFS UI. In this guide, the SFS UI is used.
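For reference, a minimal sketch of those optional settings, assuming the L3 uplink example addressing in this guide (DNS and NTP services at 172.19.11.50); substitute values appropriate for your environment:

OS10# configure terminal
OS10(config)# ip name-server 172.19.11.50
OS10(config)# ntp server 172.19.11.50
OS10(config)# end
OS10# write memory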

Add switches to SmartFabric


CAUTION: Do not connect any new switches in SmartFabric mode to the fabric until the preferred master
settings are validated on the existing and new leaf switches. This was covered earlier in the Verify preferred
master setting before fabric expansion section of this guide.
In this section, the two spine switches and two leaf switches are added to the SmartFabric. These are Spine1, Spine2, Leaf2A,
and Leaf2B, as shown in Figure 59.
Cable the switches as shown in Figure 59. Connection details are shown in Figure 8. Also, make OOB management connections,
as shown in Figure 10.
CAUTION: The following commands delete the existing switch configuration. Switch management settings such
as management IP address, management route, hostname, NTP server, and IP name server are retained.

Spines
The following commands are run on Spine1 and Spine2. This puts the switches in SmartFabric mode as spines.

OS10# configure terminal


OS10(config)# smartfabric l3fabric enable role SPINE

Reboot to change the personality? [yes/no]:y

The configuration is applied, and the switch reloads. Repeat on the second spine switch.

Leafs
The following commands are run on Leaf2A and Leaf2B. This puts the switches in SmartFabric mode as leafs and configures
them as VLT peers.
NOTE: This example uses the two QSFP28 2x100 Gb DD ports, Ethernet 1/1/49-1/1/52, for the VLTi connections on each
leaf.

OS10# configure terminal


OS10(config)# smartfabric l3fabric enable role LEAF vlti ethernet 1/1/49-1/1/52

Reboot to change the personality? [yes/no]:y

The configuration is applied, and the switch reloads. Repeat on the second leaf switch.

Optionally, run the following command to verify that a leaf or spine switch is in SmartFabric mode:

OS10# show switch-operating-mode


Switch-Operating-Mode : Smart Fabric Mode

Connect to the SmartFabric UI


After reloading a switch in SmartFabric mode, it takes about 2 minutes after the login prompt displays at the switch CLI for SFS
to come up and for the web UI to be fully functional. The IPv4 address of the master may be determined by running show
smartfabric cluster from the CLI of any switch in the SmartFabric.
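For example, to pull just the master address from any switch already in the fabric (output abbreviated and representative):

Leaf1A# show smartfabric cluster | grep MASTER-IPV4
MASTER-IPV4 : 100.67.76.29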
1. From a workstation with access to the OOB management network, use a browser to connect to the management IP address
of the master switch, https://&lt;switch_mgmt_ip_address&gt;.
NOTE: If you connect to a switch in the fabric that is not the master, a link to the master is provided in the web UI.
2. Log in as admin.
Once logged into the master, the SFS UI Home page shows the leaf-spine topology and configured uplinks. For the example
used in this guide, it appears as shown in the figure below.

Figure 60. SFS UI Home page

NOTE: Since hostnames have not been configured on the four additional switches, each appears with its default hostname,
OS10. Hostnames for the additional switches are configured in the next section.

Configure additional rack and switch names


On the SFS UI Home page, click 1. Update Default Fabric, Switch Names and Descriptions. This opens the Set Fabric
and Switch Name window.
1. On the Network Fabric page, update the Name (optional) and Description (optional) of the fabric and click NEXT.

NOTE: The Network Fabric ID is automatically set to 100 and cannot be changed. All directly connected switches in
SmartFabric mode join this fabric.
2. On the Racks page, the second rack appears. Update the Name (recommended) and Description (optional) of the second
rack, as shown in the following figure.

Figure 61. Rack renamed to Rack 2


3. Click NEXT.
4. On the Switches page, the additional switches appear along with their service tags, roles, and models. Update the Names
(recommended) and Descriptions (optional) of the newly added switches. The four additional switches with updated names
are outlined in red in the figure below.

Figure 62. Switch name configuration page


5. Click FINISH to apply the settings.

Configure leaf switch addresses for L3 uplinks


NOTE: If using L2 uplinks to the external network, skip this section and go to the Add a VxRail node to the cluster section.

Traffic on the External Management network, VLAN 1811, must be able to reach the external network. To accomplish this with
L3 uplinks, an IP address on virtual network 1811 is assigned to each leaf switch in the SmartFabric.
IP addresses are configured for the new leafs added to the SmartFabric, Leaf2A and Leaf2B. The examples used in this guide
are shown in the table below.
NOTE: Existing leaf IP addresses and the gateway IP address were configured during VxRail cluster deployment in the
Additional configuration steps for L3 uplinks section of this guide.

Table 10. Leaf switch External Management network IP addresses and anycast gateway
Item                 IP address or prefix   Status
Leaf1A IP address    172.18.11.253/24       Previously configured
Leaf1B IP address    172.18.11.252/24       Previously configured
Leaf2A IP address    172.18.11.251/24       To be configured
Leaf2B IP address    172.18.11.250/24       To be configured
Gateway IP address   172.18.11.254/24       Previously configured

This is done as follows:


1. At a workstation, launch the SFS UI.
2. On the SFS UI Home page, select 3. Update Network Configuration. The window opens, as shown in the following figure.

Figure 63. Update network configuration window


3. Next to Networks, select the External Management network, Management Network 1811, from the drop-down list.
NOTE: When Management Network 1811 is selected, additional fields appear, as shown in the figure below.

a. Next to Interface IP Addresses, the two IP addresses configured earlier are listed for the existing leaf switches, Leaf1A and
Leaf1B. Use the blue Add button to add an IP address for each leaf switch added to the SmartFabric. The additional
addresses are for Leaf2A and Leaf2B, as shown in the table above.
b. Leave the Prefix Length and Gateway IP Address at the settings previously configured.
When complete, the Update Network Configuration window appears, as shown in the following figure.

Figure 64. Update network configuration window
4. Click OK to apply the settings.
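(Optional) To confirm that the new addresses were applied, check the External Management virtual network on each newly added leaf from the CLI. A minimal check, assuming Leaf2A and the addressing in Table 10 (output abbreviated and representative):

Leaf2A# show ip interface brief | grep 1811
Virtual-network 1811 172.18.11.251/24 YES manual up up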

Add a VxRail node to the cluster


For this example, VxRail node 4 is connected to the fabric. It is in Rack 2 and connected to Leaf2A and Leaf2B, as shown in
Figure 59.
Ensure forward and reverse lookup records have been added to the DNS server for each new node to be added to the cluster. In
this example, the node FQDN is vxrail04.dell.lab, and its IP address is 172.18.11.104. The figure below shows validation commands
run from a Microsoft Windows-based system with connectivity to the DNS server.

Figure 65. Forward and reverse lookup commands
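The checks in the figure can be reproduced from a command prompt on any workstation that uses the same DNS server. A minimal sketch, assuming the example FQDN and address above (server information lines omitted for brevity):

C:\> nslookup vxrail04.dell.lab
Name: vxrail04.dell.lab
Address: 172.18.11.104

C:\> nslookup 172.18.11.104
Name: vxrail04.dell.lab
Address: 172.18.11.104

Both the forward and reverse lookups must succeed before adding the node.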

Use the vSphere Client to add the VxRail node as follows:


1. Connect to the vCenter in a browser and launch the vSphere Client.
2. In the vSphere Client, right-click the cluster and select VxRail > Add VxRail Hosts.
3. After a brief scan, the fourth VxRail node is discovered, outlined in red as shown in the figure below.

Figure 66. VxRail node discovered


4. Click ADD HOST.
5. In the Discovered Hosts window, check the box next to the service tag, as shown in the following figure.

Figure 67. Discovered Hosts window
6. Click NEXT.
7. In the User Credentials window, enter the vCenter and switch REST_USER credentials, as shown in the figure below.

Figure 68. User Credentials window
8. Click NEXT.
9. In the NIC Configuration window, the default values are used, as shown.

Figure 69. NIC Configuration window
10. Click NEXT.
11. In the Host Settings window, the hostname, IP address, and credentials for the new host are specified.

Figure 70. Host Settings window
12. Click NEXT.
13. (Optional) In the Host Location window, the Rack Name and Rack Position may be entered. These fields are left blank in
this example.

Figure 71. Host Location window
14. Click NEXT.
15. In the Network Settings window, provide the vSAN and vMotion IP addresses for the host, as shown in the following
figure.

Figure 72. Network Settings window
16. Click NEXT.
17. Review the settings in the Validate window. If no changes are needed, click VALIDATE. Validation may take 2 to 5 minutes.

Figure 73. Validate screen

The Success message displays when the validation passes.

Figure 74. Successful validation notification screen
18. An option to put the added host in maintenance mode is provided. In this example, this option is kept in the default No
setting.
19. Click FINISH.
On the Add VxRail Hosts page, the Host expansion is in progress. Health monitoring is currently disabled during
this task message displays and a progress bar displays under Status. Both are outlined in red in the figure below.

NOTE: The host expansion process may take 10 to 15 minutes to complete.

Figure 75. Host expansion in progress


20. When complete, the Add VxRail Hosts page temporarily shows Host expansion complete.
NOTE: This message may only appear on the screen for about one minute.

Figure 76. Host expansion complete message
21. The fourth VxRail node displays in the VxRail cluster as shown in the figure below.

Figure 77. Fourth VxRail node added to the cluster

CAUTION: Review any warnings that may appear in the vSphere Client.

(Optional) Verify the interface connected to the new VxRail node has been automatically added to the VxRail networks on the
leaf switches in Rack 2. To verify the connection, run the show virtual-network command. In this example, the new VxRail
node is connected to interface 1/1/1 on Leaf2A and Leaf2B.
The output below confirms the leaf switch interface connected to the new VxRail node, ethernet1/1/1, has been automatically
placed in all VxRail virtual networks/VLANs.

NOTE: The command output shown is for Leaf2A. The output for Leaf2B is the same.

Leaf2A# show virtual-network


Codes: DP - MAC-learn Dataplane, CP - MAC-learn Controlplane, UUD - Unknown-Unicast-Drop
Un-tagged VLAN: 4080
Virtual Network: 1811
VLTi-VLAN: 1811
Members:

VLAN 1811: port-channel1000, ethernet1/1/1
VxLAN Virtual Network Identifier: 1811
Source Interface: loopback2(172.30.0.1)
Remote-VTEPs (flood-list): 172.30.0.0(CP)

Virtual Network: 1812


VLTi-VLAN: 1812
Members:
VLAN 1812: port-channel1000, ethernet1/1/1
VxLAN Virtual Network Identifier: 1812
Source Interface: loopback2(172.30.0.1)
Remote-VTEPs (flood-list): 172.30.0.0(CP)

Virtual Network: 1813


VLTi-VLAN: 1813
Members:
VLAN 1813: port-channel1000, ethernet1/1/1
VxLAN Virtual Network Identifier: 1813
Source Interface: loopback2(172.30.0.1)
Remote-VTEPs (flood-list): 172.30.0.0(CP)

Virtual Network: 1814


VLTi-VLAN: 1814
Members:
VLAN 1814: port-channel1000, ethernet1/1/1
VxLAN Virtual Network Identifier: 1814
Source Interface: loopback2(172.30.0.1)
Remote-VTEPs (flood-list): 172.30.0.0(CP)

Virtual Network: 1815


VLTi-VLAN: 1815
Members:
VLAN 1815: port-channel1000, ethernet1/1/1
VxLAN Virtual Network Identifier: 1815
Source Interface: loopback2(172.30.0.1)
Remote-VTEPs (flood-list): 172.30.0.0(CP)

Virtual Network: 3939


Description: In-band SmartFabric Services discovery network
VLTi-VLAN: 3939
Members:
VLAN 3939: port-channel1000, ethernet1/1/1
VxLAN Virtual Network Identifier: 3939
Source Interface: loopback2(172.30.0.1)
Remote-VTEPs (flood-list): 172.30.0.0(CP)

Virtual Network: 4091


Description: Default untagged network for client onboarding
VLTi-VLAN: 4091
Members:
Untagged: ethernet1/1/1
VLAN 4091: port-channel1000
VxLAN Virtual Network Identifier: 4091
Source Interface: loopback2(172.30.0.1)
Remote-VTEPs (flood-list): 172.30.0.0(CP)

Chapter 8: Deploy and Configure OMNI

Deploy OMNI VM
The OMNI VM is available for download from the Dell EMC OpenManage Network Integration for VMware vCenter website.
Download OMNI-version#.zip and extract the OMNI-version#.ova file to a location accessible from the vSphere client.
NOTE: VxRail 7.0.1 supports OMNI 2.0 or a later version specified in the SmartFabric OS10 Solutions (HCI, Storage, MX)
Support Matrix.
1. To deploy the OMNI VM, launch the vSphere Client and go to Hosts and Clusters.
2. Right-click the VxRail cluster and select Deploy OVF Template.

Figure 78. Deploy OVF template


3. On the Select an OVF template page, point to the location of the OMNI .ova file and click NEXT.
4. Enter a Virtual machine name and select a location for the OMNI VM. The default location is used in this example, as
shown in the following figure.



Figure 79. OMNI VM name and folder
5. Click NEXT.
6. On the Select a compute resource page, ensure the VxRail cluster is selected, and Compatibility checks succeeded
displays at the bottom of the page.

Figure 80. VxRail cluster selected


7. Click NEXT. After the validation is complete, the Review details page displays.



Figure 81. Review details page
8. Click NEXT.
9. On the License agreements page, review the terms provided. If you agree, check the I accept all license agreements
box and click NEXT.
10. On the Select storage page, select the VxRail vSAN datastore. Ensure that the Compatibility checks succeeded
message displays at the bottom of the page.

Figure 82. VxRail vSAN datastore selected


11. Click NEXT.
12. On the Select networks page, change the two destination networks to match the two source networks, as shown in the
figure below.



Figure 83. Select networks
13. Click NEXT.
14. On the Ready to complete page, review the settings.

Figure 84. Review settings


15. Click FINISH to deploy the OMNI VM.
NOTE: OMNI VM installation may take 2 to 3 minutes.

When complete, the OMNI VM appears under the VxRail cluster, as shown below.



Figure 85. OMNI VM deployed

OMNI console configuration


NOTE: Before proceeding, determine an IP address on the External Management VLAN and a hostname for the OMNI VM.
Add this information to the DNS server. In this example, the hostname is omni.dell.lab, and the address is 172.18.11.56/24.
1. In the vSphere Client, power on the OMNI VM and launch the web console for the VM.
2. Log in with the default OMNI VM username, admin, and password, admin. The first time you log in, follow the prompts to
change the admin password.
3. The OMNI Menu displays.

Figure 86. OMNI menu


4. Select 0. Full setup. The Network Manager terminal user interface (TUI) opens.



Figure 87. Network Manager TUI

NOTE: In the TUI, use the Tab and Arrow keys to navigate and the Enter key to select.
5. Select Edit a connection > Wired connection 1.

Figure 88. Wired connection 1 selected


6. In the Edit Connection window, make the following settings:
a. Change the profile name to external management.
NOTE: The Device field is automatically populated with a MAC address and (ens160).
b. Set the IPv4 Configuration to Manual.
c. Next to IPv4 Configuration, select Show to expand the additional fields.
d. Enter the OMNI VM IP Address/prefix, Gateway, DNS server, and Search domain. The values used for this example
are shown in the figure below.
NOTE: The example shown in the figure below is with L2 uplinks, so the DNS server has an IP address on the same
network as the OMNI VM, 172.18.11.50 in this example. If L3 uplinks are used, the DNS server will be on a different
network. For the L3 uplink examples in this guide, the DNS server IP address is 172.19.11.50.
e. Set the IPv6 Configuration to Ignore.
Default values are used for the remaining settings. When complete, the Edit Connection window appears as shown in the
figure below.



Figure 89. External management connection settings
7. Select OK. The Ethernet connections list displays.
8. Select Wired connection 2.

Figure 90. Wired connection 2 selected



9. In the Edit Connection window, make the following settings:
a. Change the profile name to internal management.
NOTE: The Device field is automatically populated with a MAC address and (ens192).
b. Set IPv4 Configuration to Disabled.
c. Set IPv6 Configuration to Link-Local.
d. Next to IPv6 Configuration, select Show to expand the IPv6 configuration fields.
e. Next to Routing, select Edit > Add.
f. Set the Destination/Prefix to fde1:53ba:e9a0:cccc::/64. Leave the remaining values at their default settings.
The Edit Connection window displays.

Figure 91. Internal management connection settings

NOTE: Only part of the Destination/Prefix field is visible on the screen. Be sure it is set to
fde1:53ba:e9a0:cccc::/64.
10. Select OK > OK > Back to return to the Network Manager TUI menu.
11. On the Network Manager TUI menu, select Activate a connection. The connection activation window displays.



Figure 92. Activate or deactivate connections window

NOTE: When active, connection names have an asterisk (*) next to them.
12. Deactivate both connections as follows:
a. Select external management > Deactivate.
b. Select internal management > Deactivate.
13. Activate both connections as follows:
a. Select external management > Activate.
b. Select internal management > Activate.
14. Select Back to return to the Network Manager TUI menu.
15. On the Network Manager TUI menu, select Set system hostname, as shown.

Figure 93. Set system hostname selected


16. Change the Hostname from its default setting to the new hostname, omni.dell.lab, as shown in this example.



Figure 94. Set hostname to omni.dell.lab
17. Click OK > OK > Quit.
18. After services are started, the NTP Server IP/Hostname: prompt displays. At the prompt, specify the NTP server,
ntp.dell.lab in this example, as shown in the figure below.
NOTE: If an NTP server is not used, press Enter to skip.
19. At the Install SSL Certificates from remote server [y]? prompt, enter n.
NOTE: OMNI certificate installation is outside the scope of this guide. Certificates can be imported later by selecting 5.
Password/SSL configuration menu from the OMNI Menu. Follow the instructions provided in SmartFabric Services
for OpenManage Network Integration User Guide, Release 2.0. The guide is available on the Dell EMC OpenManage
Network Integration for VMware vCenter website.

Figure 95. Specify NTP server


20. Press Enter to return to the OMNI menu.
21. On the OMNI menu, select 8. Logout.
This completes the OMNI VM console configuration.
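(Optional) Before moving to the web UI, confirm the OMNI VM is reachable on its external management address from a workstation. A quick sketch, assuming the example hostname and address above:

C:\> ping omni.dell.lab

A reply from 172.18.11.56 confirms both DNS resolution and network reachability.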

OMNI web UI configuration


1. In a browser, connect to the OMNI web UI, https://omni.dell.lab in this example.
2. Select Launch OMNI Fabric Management Portal.
3. Enter the user name admin and the OMNI admin password configured earlier.
4. Click SIGN IN.
The OMNI Address not configured message displays.



Figure 96. OMNI Address not configured message
5. Click SET OMNI FQDN.
6. Enter the OMNI FQDN or IP Address, such as omni.dell.lab, as shown.

Figure 97. OMNI FQDN entered


7. Click SUBMIT.
8. On the OMNI Home > SmartFabric tab, click the radio button next to the SmartFabric instance as shown.

Figure 98. SmartFabric instance selected


9. Click EDIT.
10. In the Edit a SmartFabric window, enter the REST_USER Password that is configured on the SmartFabric switches and
check the Register for SFS Events box.
NOTE: The REST_USER password was set during VxRail deployment.



Figure 99. Edit SmartFabric instance
11. Click SUBMIT.
The Success message displays.

Figure 100. SmartFabric instance configured

Register OMNI with vCenter


1. In the OMNI web UI, select OMNI Home in the left pane. In the right pane, click the vCenter Instance tab.
2. Click +ADD.
3. Enter the vCenter IP/FQDN and vCenter credentials. Verify that the Registered and Automation radio buttons are both
set to True.



Figure 101. Add a vCenter instance
4. Click ADD.
The Success message displays.

Figure 102. vCenter registration Success notification

OMNI registration with vCenter also installs a plug-in to the vSphere Client.
If you are logged into the vSphere Client when the plug-in is installed, a banner appears at the top of the screen, outlined in
red in the figure below. Click the REFRESH BROWSER button that appears in the banner.



Figure 103. OMNI extension notification

NOTE: If there are other messages present, such as a license warning, the message shown in the figure above may be
located behind the other messages. When there are multiple messages, there are < and > icons present to the left of the
banner to cycle through the messages.
To launch OMNI in the vSphere Client, select Menu > OpenManage Network Integration.



Figure 104. Launch OMNI in the vSphere Client

You may use either the vSphere Client or a direct browser connection to connect to the OMNI web UI.
NOTE: After OMNI is deployed, use OMNI for switch configuration instead of the SFS web UI. The SFS web UI is intended
for initial deployment only.

NOTE: For more information, see the SmartFabric Services for OpenManage Network Integration User Guide, Release 2.0.
The guide is available on the Dell EMC OpenManage Network Integration for VMware vCenter website.



Appendix A: Validated Components

General
The following tables include the hardware, software, and firmware that were used to configure and validate the examples in this
guide.
NOTE: For more information about supported components and versions, see the Dell EMC VxRail Support Matrix (account
required).

NOTE: Switches validated for the Cisco Nexus examples are in Appendix C.

Dell EMC PowerSwitch systems


Table 11. Dell EMC PowerSwitch systems and operating systems
Quantity   Item                                                       Operating system version
2          Dell EMC PowerSwitch S5232F-ON spine switches              10.5.2.2
4          Dell EMC PowerSwitch S5248F-ON leaf switches               10.5.2.2
2          Dell EMC PowerSwitch S3048-ON OOB management switches      10.5.2.2
2          Dell EMC PowerSwitch S5212F-ON series external switches    10.5.0.7P3

VxRail E560F nodes


A cluster of four VxRail E560F nodes was used to validate the examples in this guide. The nodes were each configured per the
table below.

Table 12. VxRail E560F node components
Qty per node   Item                                                   Firmware version
2              Intel Xeon Silver 4216 CPU @ 2.10 GHz, 16 cores
4              32 GB DDR4 DIMMs (128 GB total)
1              1788.5 GB SATA SSD
1              745.21 GB SAS SSD
2              223.57 GB SATA SSD
1              Dell HBA330 Mini Storage Controller                    16.17.01.00
1              Boot Optimized Storage Solution (BOSS-S1) Controller   2.5.13.3024
1              Mellanox ConnectX-4 LX rNDC - 2x25GbE SFP28 ports      14.27.61.22
-              BIOS                                                   2.8.2
-              CPLD                                                   1.0.6
-              iDRAC with Lifecycle Controller                        4.22.00.201
-              Backplane Expander                                     2.46

VxRail appliance software


Table 13. VxRail appliance software
Item Version
VxRail 7.0.100-26719865
Includes:

● ESXi 7.0.1-16850804
● vCenter Server 7.0.1-16858589

OMNI software
OMNI software used in this guide is as follows:

Table 14. OMNI software


Item Version
Dell OpenManage Network Integration 2.0



Appendix B: CLI Commands

Switch CLI validation commands


This section provides a list of the most common commands and their output for the examples used in this guide.

General commands
show version
Leaf and spine switches must be running a supported version of SmartFabric OS10. Run the show version command to check
the operating system version. SmartFabric OS10 is available on Dell Digital Locker (account required).

OS10# show version


Dell EMC Networking OS10 Enterprise
Copyright (c) 1999-2020 by Dell Inc. All Rights Reserved.
OS Version: 10.5.2.2
Build Version: 10.5.2.2

NOTE: See the SmartFabric OS10 release notes for upgrade instructions.

show license status


Run the command show license status to verify license installation. The License Type: field should indicate
PERPETUAL. If an evaluation license is installed, licenses purchased from Dell are available for download on Dell Digital Locker.
Installation instructions are provided in the Dell EMC SmartFabric OS10 User Guide on Dell EMC Networking OS10 Info Hub.

OS10# show license status


System Information
---------------------------------------------------------
Vendor Name : Dell EMC
Product Name : S5248F-ON
Hardware Version : A01
Platform Name : x86_64-dellemc_s5248f_c3538-r0
PPID : CN046MRJCES0089K0015
Service Tag : 690ZZP2
Product Base :
Product Serial Number:
Product Part Number :
License Details
----------------
Software : OS10-Enterprise
Version : 10.5.2.2
License Type : PERPETUAL
License Duration: Unlimited
License Status : Active
License location: /mnt/license/690ZZP2.lic
---------------------------------------------------------

NOTE: If SmartFabric OS10 was factory installed, a perpetual license is already on the switch.

show interface status



The show interface status | grep up command is used to verify required interfaces are up, and links are established
at their appropriate speeds. In this example, ports 1/1/1-1/1/3 are VxRail nodes, port 1/1/9:1 is a jump host, ports 1/1/49-1/1/52
are the VLTi, ports 1/1/53-1/1/54 are uplinks to the external network, and ports 1/1/55-1/1/56 are uplinks to the spines.

Leaf1A# show interface status | grep up


Port Description Status Speed Duplex Mode Vlan Tagged-Vlans
Eth 1/1/1 up 25G full T -
Eth 1/1/2 up 25G full T -
Eth 1/1/3 up 25G full T -
Eth 1/1/9:1 up 10G full T -
Eth 1/1/49 up 100G full -
Eth 1/1/50 up 100G full -
Eth 1/1/51 up 100G full -
Eth 1/1/52 up 100G full -
Eth 1/1/53 up 100G full -
Eth 1/1/54 up 100G full -
Eth 1/1/55 up 100G full -
Eth 1/1/56 up 100G full -

show port-channel summary


The show port-channel summary command is used to view port channel numbers, member interfaces, and status. Port
channel 1 (number may vary) is the L2 uplink. Port channels 96 and 97 (numbers may vary) are automatically created spine
uplinks. The VLTi is automatically configured as a static LAG using port channel 1000. Ports 1/1/53 and 1/1/54 are port channel
1 members, and (P) indicates each is up and active.

Leaf1A# show port-channel summary


Flags: D - Down I - member up but inactive P - member up and active
U - Up (port-channel) F - Fallback Activated
--------------------------------------------------------------------------------
Group Port-Channel Type Protocol Member Ports
--------------------------------------------------------------------------------
1 port-channel1 (U) Eth DYNAMIC 1/1/53(P) 1/1/54(P)
96 port-channel96 (U) Eth STATIC 1/1/55(P)
97 port-channel97 (U) Eth STATIC 1/1/56(P)
1000 port-channel1000 (U) Eth STATIC 1/1/49(P) 1/1/50(P) 1/1/51(P) 1/1/52(P)

show virtual-network


The show virtual-network command is used to view virtual networks, VLANs, and interfaces assigned to each VLAN. Port
channel 1 (number may vary) is the L2 uplink if configured. Port channel 1000 is the VLTi. Interfaces 1/1/1-1/1/3 are connected
to VxRail nodes. Interface 1/1/9:1, shown under VLAN 1811, is the jump host port.

NOTE: The jump host port was automatically moved from VLAN 4091 to VLAN 1811 during VxRail deployment.

Leaf1A# show virtual-network


Codes: DP - MAC-learn Dataplane, CP - MAC-learn Controlplane, UUD - Unknown-Unicast-Drop
Un-tagged VLAN: 4080
Virtual Network: 1811
VLTi-VLAN: 1811
Members:
Untagged: ethernet1/1/9:1
VLAN 1811: port-channel1, port-channel1000, ethernet1/1/1, ethernet1/1/2,
ethernet1/1/3
VxLAN Virtual Network Identifier: 1811
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Virtual Network: 1812


VLTi-VLAN: 1812
Members:
VLAN 1812: port-channel1, port-channel1000, ethernet1/1/1, ethernet1/1/2,
ethernet1/1/3
VxLAN Virtual Network Identifier: 1812
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Virtual Network: 1813


VLTi-VLAN: 1813
Members:
VLAN 1813: port-channel1, port-channel1000, ethernet1/1/1, ethernet1/1/2,



ethernet1/1/3
VxLAN Virtual Network Identifier: 1813
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Virtual Network: 1814


VLTi-VLAN: 1814
Members:
VLAN 1814: port-channel1, port-channel1000, ethernet1/1/1, ethernet1/1/2,
ethernet1/1/3
VxLAN Virtual Network Identifier: 1814
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Virtual Network: 1815


VLTi-VLAN: 1815
Members:
VLAN 1815: port-channel1, port-channel1000, ethernet1/1/1, ethernet1/1/2,
ethernet1/1/3
VxLAN Virtual Network Identifier: 1815
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Virtual Network: 3939


Description: In-band SmartFabric Services discovery network
VLTi-VLAN: 3939
Members:
VLAN 3939: port-channel1000, ethernet1/1/1, ethernet1/1/2, ethernet1/1/3
VxLAN Virtual Network Identifier: 3939
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Virtual Network: 4091


Description: Default untagged network for client onboarding
VLTi-VLAN: 4091
Members:
Untagged: ethernet1/1/1, ethernet1/1/2, ethernet1/1/3
VLAN 4091: port-channel1000
VxLAN Virtual Network Identifier: 4091
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

show lldp neighbors


The show lldp neighbors command is useful for identifying connected equipment. Interfaces 1/1/1-3 are connected to the
VxRail nodes, and Interface 1/1/9:1 is connected to the jump host. Interfaces 1/1/53-54 are connected to the external network,
and 1/1/55-56 are connected to the spines. Mgmt1/1/1 is connected to the OOB management switch.

Leaf1A# show lldp neighbors


Loc PortID Rem Host Name Rem Port Id Rem Chassis Id
--------------------------------------------------------------------------------------
ethernet1/1/1 Not Advertised 1c:34:da:5e:d9:fc 1c:34:da:5e:d9:fe
ethernet1/1/1 Not Advertised 1c:34:da:5e:d9:fc 1c:34:da:5e:d9:fc
ethernet1/1/1 vxrail-01.dell.lab 1c:34:da:5e:d9:fc vmnic0
ethernet1/1/2 Not Advertised 1c:34:da:5e:da:04 1c:34:da:5e:da:06
ethernet1/1/2 Not Advertised 1c:34:da:5e:da:04 1c:34:da:5e:da:04
ethernet1/1/2 vxrail-02.dell.lab 1c:34:da:5e:da:04 vmnic0
ethernet1/1/3 Not Advertised 1c:34:da:60:3b:ec 1c:34:da:60:3b:ee
ethernet1/1/3 Not Advertised 1c:34:da:60:3b:ec 1c:34:da:60:3b:ec
ethernet1/1/3 vxrail-03.dell.lab 1c:34:da:60:3b:ec vmnic0
ethernet1/1/9:1 Not Advertised 24:6e:96:3a:13:48 24:6e:96:3a:13:48
ethernet1/1/49 Leaf1B ethernet1/1/49 3c:2c:30:22:79:80
ethernet1/1/50 Leaf1B ethernet1/1/50 3c:2c:30:22:79:80
ethernet1/1/51 Leaf1B ethernet1/1/51 3c:2c:30:22:79:80
ethernet1/1/52 Leaf1B ethernet1/1/52 3c:2c:30:22:79:80
ethernet1/1/53 External-A ethernet1/1/13 3c:2c:30:20:f1:00
ethernet1/1/54 External-B ethernet1/1/13 3c:2c:30:20:ef:00
ethernet1/1/55 Spine1 ethernet1/1/1 8c:04:ba:b7:d6:40
ethernet1/1/56 Spine2 ethernet1/1/1 8c:04:ba:b7:9d:40
mgmt1/1/1 OOB-Mgmt-1 ethernet1/1/1 88:6f:d4:b0:78:e2



NOTE: If an entry is not shown for the jump host, port 1/1/9:1 in the above example, the jump host may be connected to
the other leaf switch, or the NIC on the jump host does not have LLDP enabled. This is not required and may be ignored.
show ip interface brief
Run the show ip interface brief command to verify connected interfaces are up, and IP addresses are configured
correctly.
In the output below, interfaces 1/1/1-1/1/3 are connected to the VxRail nodes, 1/1/49-1/1/52 are the VLTi, and 1/1/53-1/1/54
are L3 uplinks to the external switches. VLAN 4090, Loopback 1, and Loopback 2 are automatically configured by SFS. VLAN
4090 is used for iBGP, Loopback 1 is the router ID, and Loopback 2 is the VTEP IP address.

NOTE: Unused interfaces have been removed from the output for brevity.

Leaf1A# show ip interface brief


Ethernet 1/1/1 unassigned YES unset up up
Ethernet 1/1/2 unassigned YES unset up up
Ethernet 1/1/3 unassigned YES unset up up
Ethernet 1/1/49 unassigned YES unset up up
Ethernet 1/1/50 unassigned YES unset up up
Ethernet 1/1/51 unassigned YES unset up up
Ethernet 1/1/52 unassigned YES unset up up
Ethernet 1/1/53 192.168.1.1/31 YES manual up up
Ethernet 1/1/54 192.168.2.1/31 YES manual up up
Management 1/1/1 100.67.76.30/24 YES manual up up
Vlan 4000 unassigned YES unset up up
Vlan 4090 172.16.0.1/31 YES manual up up
Vlan 4094 unassigned YES unset up up
Port-channel 1000 unassigned YES unset up up
Loopback 1 172.16.128.0/32 YES manual up up
Loopback 2 172.30.0.0/32 YES manual up up
Virtual-network 3939 unassigned YES unset up up

show ip bgp summary


Run the show ip bgp summary command to verify that BGP is up for each neighbor. When BGP is up, uptime is shown in
the Up/Down column. The neighbors for Leaf1A shown in the output below are Leaf1B, External-A, and External-B.

Leaf1A# show ip bgp summary


BGP router identifier 172.16.128.0 local AS number 65011
Neighbor AS MsgRcvd MsgSent Up/Down State/Pfx
172.16.0.0 65011 13 16 00:46:59 8
172.16.0.5 65012 56 55 00:38:19 15
172.16.0.15 65012 50 51 00:37:57 15
192.168.1.0 65101 12 14 00:07:30 8
192.168.2.0 65101 8 9 00:04:14 8

show ip route
With L3 uplinks, the show ip route command is used to ensure the leaf switches have routes to the external network,
172.19.11.0/24 in this example, to reach the DNS server. This BGP-discovered route is shown in the output below.

Leaf1A# show ip route


Codes: C - connected
S - static
B - BGP, IN - internal BGP, EX - external BGP
O - OSPF, IA - OSPF inter area, N1 - OSPF NSSA external type 1,
N2 - OSPF NSSA external type 2, E1 - OSPF external type 1,
E2 - OSPF external type 2, * - candidate default,
+ - summary route, > - non-active route
Gateway of last resort is not set
Destination Gateway Dist/Metric Change
----------------------------------------------------------------------------------
B EX 10.0.2.1/32 via 192.168.1.0 20/0 21:07:08
via 192.168.2.0
B EX 10.0.2.2/32 via 192.168.1.0 20/0 21:07:08
via 192.168.2.0
C 172.16.0.0/31 via 172.16.0.1 vlan4090 0/0 21:10:36
C 172.16.128.0/32 via 172.16.128.0 loopback1 0/0 21:10:39
B IN 172.16.128.1/32 via 172.16.0.0 200/0 21:09:53
B EX 172.19.11.0/24 via 192.168.1.0 20/0 21:07:08



via 192.168.2.0
C 172.18.11.0/24 via 172.18.11.250 virtual-network1811 0/0 16:02:54
C 172.30.0.0/32 via 172.30.0.0 loopback2 0/0 21:10:39
C 192.168.1.0/31 via 192.168.1.1 ethernet1/1/53 0/0 21:10:41
B IN 192.168.1.2/31 via 172.16.0.0 200/0 21:09:53
C 192.168.2.0/31 via 192.168.2.1 ethernet1/1/54 0/0 21:07:14
B IN 192.168.2.2/31 via 172.16.0.0 200/0 21:09:53
B EX 192.168.3.20/31 via 192.168.1.0 20/0 21:07:08
via 192.168.2.0

VLT commands
show vlt domain_id
This command is used to validate the VLT configuration status. In SmartFabric mode, the VLT domain ID is 255. The Role for
one switch in the VLT pair is primary, and its peer switch (not shown) is assigned the secondary role. The VLTi Link
Status and VLT Peer Status must both be up.

Leaf1A# show vlt 255


Domain ID : 255
Unit ID : 2
Role : primary
Version : 2.3
Local System MAC address : 3c:2c:30:10:36:00
Role priority : 32768
VLT MAC address : 3c:2c:30:10:36:00
IP address : fda5:74c8:b79e:1::2
Delay-Restore timer : 90 seconds
Peer-Routing : Disabled
Peer-Routing-Timeout timer : 0 seconds
VLTi Link Status
port-channel1000 : up

VLT Peer Unit ID System MAC Address Status IP Address Version


----------------------------------------------------------------------------------
1 3c:2c:30:10:41:00 up fda5:74c8:b79e:1::1 2.3

show vlt domain_id backup-link


This command is used to verify VLT peers are communicating on the backup link over the OOB management network. The
Destination is the management IP address of the peer. The Peer Heartbeat status must be Up.

Leaf1A# show vlt 255 backup-link


VLT Backup Link
------------------------
Destination : 100.67.76.29
Peer Heartbeat status : Up
Heartbeat interval : 30
Heartbeat timeout : 90
Destination VRF : default

show vlt domain_id mismatch


This command highlights any potential configuration issues between VLT peers. All items must indicate No mismatch.

Leaf1A# show vlt 255 mismatch


VLT-MAC mismatch:
No mismatch

Peer-routing mismatch:
No mismatch

VLAN mismatch:
No mismatch

VLT VLAN mismatch:


No mismatch

VLT Virtual Network Mismatch:



Virtual Network Name Mismatch:
No mismatch

Virtual Network VLTi-VLAN Mismatch:


No mismatch

Virtual Network Mode Mismatch:


No mismatch

Virtual Network Tagged Interfaces Mismatch:


No mismatch

Virtual Network Untagged Interfaces Mismatch:


No mismatch

Virtual Network VNI Mismatch:


No mismatch

Virtual Network Remote-VTEP Mismatch:


No mismatch

Virtual Network anycast ip Mismatch:


No mismatch

Virtual Network anycast mac Mismatch:


No mismatch

EVPN Mismatch:
EVPN Mode Mismatch:
No mismatch

EVPN EVI Mismatch:


No mismatch

NVE Mismatch:
No mismatch

DHCP Snooping Mismatch:

Global Snooping Configuration Mismatch


------------------------------------------------------------
Codes: SE - Static Entry Mismatch
DT - DAI Trust Mismatch
ST - Snooping Trust Mismatch
SAV - Source-Address-Validation Mismatch
ARP - ARP Inspection Mismatch
VS - VLAN Snooping Mismatch
Interface Interface Snooping Configuration Mismatch
---------------------------------------------------------------------
Multicast routing mismatches:
Global status:
Parameter VRF Local Peer
---------------------------------------------------------------------
No mismatch

Vlan status IPv4 IPv6


VlanId Local Peer Local Peer
---------------------------------------------------------------------
No mismatch

Return to Full Switch mode


CAUTION: The following command deletes the existing switch configuration.



To delete the existing switch configuration and go from SmartFabric mode back to Full Switch mode, run the following
commands on each switch:

Leaf1A# configure terminal


S5248F-Leaf1A(config)# no smartfabric l3fabric
Reboot to change the personality? [yes/no]:y

The switch reboots into Full Switch mode. The mode can be verified with the following command:

OS10# show switch-operating-mode


Switch-Operating-Mode : Full Switch Mode



Appendix C: Cisco Nexus External Switch Configuration Example

Configure external Nexus switches for L3 routed connections
SmartFabric uplinks may be connected to external Cisco Nexus switches. This appendix includes a Cisco Nexus 9000 switch
configuration example for L3 routed connections to SmartFabric leaf switches.
NOTE: L3 routed uplinks on the SmartFabric leaf switches are configured per the Configure L3 routed uplinks with BGP in
SFS section of this guide.
Connections, port numbers, and networks used for external management in this example are shown in the following figure. The
External Management VLAN is VLAN 1911 on the external Nexus switches, and is VLAN 1811 on the SmartFabric switches.

Figure 105. L3 routed uplinks from SmartFabric to external Nexus switches

In this example, an existing DNS/NTP server connects to the Nexus switches using a vPC in VLAN 1911.



NOTE: DNS and NTP servers do not have to connect in the manner shown if they are reachable on the network.

Point-to-point IP networks
The L3 point-to-point links used in this example are labeled A-D in the figure below.

Figure 106. Point-to-point connections

Each L3 uplink is a separate, point-to-point IP network. The following table details the links labeled in the figure above.

NOTE: The IP addresses in the table are used in the switch configuration examples.

Table 15. L3 routed uplink IP addresses
Link label   Source switch    Source IP address   Destination switch   Destination IP address   Network
A            N9K-External-A   192.168.1.0         Leaf1A               192.168.1.1              192.168.1.0/31
B            N9K-External-A   192.168.1.2         Leaf1B               192.168.1.3              192.168.1.2/31
C            N9K-External-B   192.168.2.0         Leaf1A               192.168.2.1              192.168.2.0/31
D            N9K-External-B   192.168.2.2         Leaf1B               192.168.2.3              192.168.2.2/31

BGP ASNs and router IDs


The following figure shows the ASNs and router IDs used for the external Nexus switches and SFS leaf switches in this example.
External switches share a common ASN, and all SFS leaf switches share a common ASN.



Figure 107. BGP ASNs and router IDs

In this example, ASN 65101 is used on both Nexus external switches. SFS leaf switches use ASN 65011 by default for all leafs in
the fabric.

NOTE: If L3 uplinks are connected from SFS spine switches, the spine switches use ASN 65012 by default.

The IP addresses shown on the external network switches in the figure above are loopback addresses used as BGP router IDs.
On the SmartFabric switches, BGP router IDs are automatically configured from the SFS default private subnet address block,
172.16.0.0/16.
NOTE: SFS default ASNs and IP address blocks may be changed by going to 5. Edit Default Fabric Settings in the SFS
UI.

NOTE: All of the Nexus switch configuration commands used to validate this topology are shown in the sections that
follow. The Nexus switches were reset to their default configuration settings using the write erase command before
running the configuration commands below. This is only an example. Modify your external switch configuration as needed
for your environment.

General settings
Enable the following features: interface-vlan, lacp, vrrp, vpc, bgp, and lldp. Configure the hostname, OOB
management IP address on VRF management, and the VRF management route as shown.

N9K-External-A:

configure terminal
feature interface-vlan
feature lacp
feature vrrp
feature vpc
feature bgp
feature lldp
hostname N9K-External-A
interface mgmt 0
  ip address 100.67.127.30/24
  vrf member management
  no shutdown
vrf context management
  ip route 100.67.0.0/16 100.67.76.254

N9K-External-B:

configure terminal
feature interface-vlan
feature lacp
feature vrrp
feature vpc
feature bgp
feature lldp
hostname N9K-External-B
interface mgmt 0
  ip address 100.67.127.29/24
  vrf member management
  no shutdown
vrf context management
  ip route 100.67.0.0/16 100.67.76.254

Configure the External Management VLAN


VLAN 1911 represents a preexisting management VLAN on the external network. DNS and NTP services are located on this
VLAN. Optionally, enable jumbo frames with the mtu 9216 command. Assign a unique IP address to the VLAN on each switch.
Configure VRRP to provide gateway redundancy and assign the same virtual address to both switches.

N9K-External-A:

vlan 1911
  name ExtMgmt
  no shutdown
interface Vlan1911
  description ExtMgmt
  no shutdown
  mtu 9216
  ip address 172.19.11.252/24
  vrrp 11
    address 172.19.11.254
    no shutdown

N9K-External-B:

vlan 1911
  name ExtMgmt
  no shutdown
interface Vlan1911
  description ExtMgmt
  no shutdown
  mtu 9216
  ip address 172.19.11.253/24
  vrrp 11
    address 172.19.11.254
    no shutdown
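(Optional) Once both switches are configured, VRRP state can be checked with the show vrrp command on either switch; one switch should report the Master state for group 11 and its peer the Backup state. This is a suggested sanity check rather than part of the validated example:

N9K-External-A# show vrrp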

Configure the vPC domain and peer link


Create the vPC domain. The peer-keepalive destination is the OOB management IP address of the vPC peer switch.
Configure a port channel to use as the vPC peer link. Put the port channel in trunk mode and allow the default and External
Management VLANs, 1 and 1911 respectively.
Configure the interfaces to use in the vPC peer link. Put the interfaces in trunk mode and allow the default and External
Management VLANs, 1 and 1911 respectively. Add the interfaces to the peer link port channel.

N9K-External-A:

vpc domain 129
  role priority 1
  peer-keepalive destination 100.67.127.29
interface port-channel 1000
  description "Peer-Link to External-B"
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 1,1911
  vpc peer-link
  no shutdown
interface ethernet 1/51-52
  description "Link to External-B"
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 1,1911
  channel-group 1000 mode active
  no shutdown

N9K-External-B:

vpc domain 129
  role priority 65535
  peer-keepalive destination 100.67.127.30
interface port-channel 1000
  description "Peer-Link to External-A"
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 1,1911
  vpc peer-link
  no shutdown
interface ethernet 1/51-52
  description "Link to External-A"
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 1,1911
  channel-group 1000 mode active
  no shutdown
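(Optional) After both switches are configured, the vPC peering can be checked from either switch with the show vpc brief command; the peer status should read peer adjacency formed ok, and the peer link should be up. This is a suggested sanity check rather than part of the validated example:

N9K-External-A# show vpc brief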



Configure interfaces
Configure the interfaces for connections to the SFS switches. Ports 1/49 and 1/50 are configured as L3 interfaces. The IP
addresses used are from Table 15 above. Optionally, allow the forwarding of jumbo frames using the mtu 9216 command.
Create port channel 1. In this example, port channel 1 connects to the DNS/NTP server. It is on VLAN 1911, which represents the
preexisting management VLAN. Add the port channel to vPC 1.
Interface 1/1 on each external switch is connected to the DNS/NTP server. Each interface is added to VLAN 1911 and port-
channel 1. Port-channel 1 is set as an LACP port-channel with the channel-group 1 mode active command.

N9K-External-A:

interface ethernet 1/49
  description Leaf1A
  no shutdown
  no switchport
  mtu 9216
  ip address 192.168.1.0/31
interface ethernet 1/50
  description Leaf1B
  no shutdown
  no switchport
  mtu 9216
  ip address 192.168.1.2/31
interface port-channel 1
  description "vPC to DNS/NTP"
  switchport
  switchport mode access
  switchport access vlan 1911
  vpc 1
  no shutdown
interface ethernet 1/1
  description "Link to DNS/NTP"
  switchport
  switchport mode access
  switchport access vlan 1911
  channel-group 1 mode active
  no shutdown

N9K-External-B:

interface ethernet 1/49
  description Leaf1A
  no shutdown
  no switchport
  mtu 9216
  ip address 192.168.2.0/31
interface ethernet 1/50
  description Leaf1B
  no shutdown
  no switchport
  mtu 9216
  ip address 192.168.2.2/31
interface port-channel 1
  description "vPC to DNS/NTP"
  switchport
  switchport mode access
  switchport access vlan 1911
  vpc 1
  no shutdown
interface ethernet 1/1
  description "Link to DNS/NTP"
  switchport
  switchport mode access
  switchport access vlan 1911
  channel-group 1 mode active
  no shutdown
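(Optional) The state of the DNS/NTP port channel can be confirmed with the show port-channel summary command on either switch; port-channel1 should be up with its member port flagged (P). A suggested check, assuming the example above:

N9K-External-A# show port-channel summary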

Configure BGP
Configure a loopback interface to use for the BGP router ID.
Allow BGP to distribute routes with the route-map allow permit command.
Configure the BGP ASN with the router bgp command. The external switches share the same ASN. Use the address that
was set for interface loopback0 as the router ID.
Use the address-family ipv4 unicast and redistribute direct route-map allow commands to redistribute
IPv4 routes from physically connected interfaces.
Use the maximum-paths 2 command to configure the maximum number of paths that BGP adds to the route table for equal-
cost multipath load balancing.
Specify the neighbor IP addresses and ASNs. Configure an address family for each neighbor.
When the configuration is complete, exit configuration mode and save the configuration with the end and copy
running-config startup-config commands.

External-A:

interface loopback0
  description router_ID
  no shutdown
  ip address 10.0.2.1/32

route-map allow permit 10

router bgp 65101
  router-id 10.0.2.1
  address-family ipv4 unicast
    redistribute direct route-map allow
    maximum-paths 2
  neighbor 192.168.1.1 remote-as 65011
    address-family ipv4 unicast
    no shutdown
  neighbor 192.168.1.3 remote-as 65011
    address-family ipv4 unicast
    no shutdown
end
copy running-config startup-config

External-B:

interface loopback0
  description router_ID
  no shutdown
  ip address 10.0.2.2/32

route-map allow permit 10

router bgp 65101
  router-id 10.0.2.2
  address-family ipv4 unicast
    redistribute direct route-map allow
    maximum-paths 2
  neighbor 192.168.2.1 remote-as 65011
    address-family ipv4 unicast
    no shutdown
  neighbor 192.168.2.3 remote-as 65011
    address-family ipv4 unicast
    no shutdown
end
copy running-config startup-config

Validate L3 connections to Cisco Nexus switches


After the uplink interfaces are configured on the Nexus external switches and on the SFS leaf switches, connectivity can be
verified using the switch CLI.
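Before checking routing, a simple reachability test of each point-to-point link helps rule out cabling or addressing problems. A minimal sketch from N9K-External-A, assuming link A from Table 15; repeat for each link in the table:

N9K-External-A# ping 192.168.1.1 count 3

Replies from the leaf confirm the directly connected /31 link is functional.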
Show command output on N9K-External-A
NOTE: The output shown for the following commands is from the N9K-External-A switch. The output for N9K-External-B
is similar.
Run the show ip bgp summary command to verify that BGP is up for each neighbor. When BGP is up, uptime is shown in
the Up/Down column. The neighbors for N9K-External-A shown in the output below are Leaf1A and Leaf1B.

N9K-External-A# show ip bgp summary


BGP summary information for VRF default, address family IPv4 Unicast
BGP router identifier 10.0.2.1, local AS number 65101
BGP table version is 15, IPv4 Unicast config peers 2, capable peers 2
7 network entries and 14 paths using 2296 bytes of memory
BGP attribute entries [2/312], BGP AS path entries [1/6]
BGP community entries [0/0], BGP clusterlist entries [0/0]

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd


192.168.1.1 4 65011 2912 2529 15 0 0 1d18h 5
192.168.1.3 4 65011 2907 2529 15 0 0 1d18h 5
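
If a BGP session does not come up, per-neighbor details such as negotiated capabilities, hold timers, and the last reset reason can be viewed with the show ip bgp neighbors command. This is an optional troubleshooting step and is not part of the validated example; output is omitted here.

N9K-External-A# show ip bgp neighbors 192.168.1.1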

Run the show ip interface brief command to verify IP addresses are configured correctly. VLAN 1911 is the external
management VLAN that contains the DNS/NTP server. Loopback 0 is the router ID, and interfaces 1/49-1/50 are connected to
the SFS leaf switches.

N9K-External-A# show ip interface brief


IP Interface Status for VRF "default"(1)
Interface IP Address Interface Status
Vlan1911 172.19.11.252 protocol-up/link-up/admin-up
Lo0 10.0.2.1 protocol-up/link-up/admin-up
Eth1/49 192.168.1.0 protocol-up/link-up/admin-up
Eth1/50 192.168.1.2 protocol-up/link-up/admin-up

The show ip route command output for the N9K-External-A switch appears as shown.

NOTE: The 172.18.11.0/24 External Management network has not yet been configured on the SFS fabric, so it is not learned
using BGP at this stage of deployment.

N9K-External-A# show ip route


IP Route Table for VRF "default"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

10.0.2.1/32, ubest/mbest: 2/0, attached


*via 10.0.2.1, Lo0, [0/0], 18:53:33, local
*via 10.0.2.1, Lo0, [0/0], 18:53:33, direct
172.19.11.0/24, ubest/mbest: 1/0, attached
*via 172.19.11.252, Vlan1911, [0/0], 18:52:51, direct
172.19.11.252/32, ubest/mbest: 1/0, attached
*via 172.19.11.252, Vlan1911, [0/0], 18:52:51, local
172.19.11.254/32, ubest/mbest: 1/0, attached
*via 172.19.11.254, Vlan1911, [0/0], 18:52:51, vrrp_engine
192.168.1.0/31, ubest/mbest: 1/0, attached
*via 192.168.1.0, Eth1/49, [0/0], 00:00:09, direct
192.168.1.0/32, ubest/mbest: 1/0, attached
*via 192.168.1.0, Eth1/49, [0/0], 00:00:09, local
192.168.1.2/31, ubest/mbest: 1/0, attached
*via 192.168.1.2, Eth1/50, [0/0], 18:53:35, direct
192.168.1.2/32, ubest/mbest: 1/0, attached
*via 192.168.1.2, Eth1/50, [0/0], 18:53:35, local
192.168.2.0/31, ubest/mbest: 2/0
*via 192.168.1.1, [20/0], 00:00:05, bgp-65101, external, tag 65011
*via 192.168.1.3, [20/0], 00:01:31, bgp-65101, external, tag 65011
192.168.2.2/31, ubest/mbest: 2/0
*via 192.168.1.1, [20/0], 00:00:05, bgp-65101, external, tag 65011
*via 192.168.1.3, [20/0], 00:01:31, bgp-65101, external, tag 65011
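
The two *via entries under each 192.168.2.x prefix show that both BGP paths are installed, as expected with maximum-paths 2 configured. To inspect a single prefix directly, the destination can be appended to the same command. This is an optional check; output is omitted here.

N9K-External-A# show ip route 192.168.2.0/31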

Show command output on Leaf1A

NOTE: The command output shown in the following examples is for Leaf1A. The output for Leaf1B is similar.

Run the show ip bgp summary command to verify that BGP is up for each neighbor. When BGP is up, uptime is shown in
the Up/Down column. The neighbors for Leaf1A shown in the output below are Leaf1B, N9K-External-A, and N9K-External-B.

Leaf1A# show ip bgp summary


BGP router identifier 172.16.128.0 local AS number 65011
Neighbor AS MsgRcvd MsgSent Up/Down State/Pfx
172.16.0.1 65011 3222 3240 1d:22:14:58 8
192.168.1.0 65101 2794 3231 1d:18:29:11 4
192.168.2.0 65101 2795 3226 1d:18:26:04 4

Run the show ip interface brief command to verify that connected interfaces are up and that IP addresses are configured correctly.
In the output below, interfaces 1/1/1-1/1/3 are connected to the VxRail nodes, 1/1/49-1/1/52 are the VLTi, and 1/1/53-1/1/54
are the uplinks to the external switches. VLAN 4090, Loopback 1, and Loopback 2 are used internally by SFS. VLAN 4094 and
port channel 1000 are automatically configured for the VLTi.

NOTE: Unused interfaces have been removed from the output for brevity.

Leaf1A# show ip interface brief


Interface Name IP-Address OK Method Status Protocol
================================================================================
Ethernet 1/1/1 unassigned YES unset up up
Ethernet 1/1/2 unassigned YES unset up up
Ethernet 1/1/3 unassigned YES unset up up
Ethernet 1/1/49 unassigned YES unset up up
Ethernet 1/1/50 unassigned YES unset up up
Ethernet 1/1/51 unassigned YES unset up up
Ethernet 1/1/52 unassigned YES unset up up
Ethernet 1/1/53 192.168.1.1/31 YES manual up up
Ethernet 1/1/54 192.168.2.1/31 YES manual up up
Management 1/1/1 100.67.76.30/24 YES manual up up
Vlan 4000 unassigned YES unset up up
Vlan 4090 172.16.0.1/31 YES manual up up
Vlan 4094 unassigned YES unset up up
Port-channel 1000 unassigned YES unset up up
Loopback 1 172.16.128.0/32 YES manual up up
Loopback 2 172.30.0.0/32 YES manual up up
Virtual-network 3939 unassigned YES unset up up

Run the show ip route command to verify that routes to the External Management VLAN, 172.19.11.0/24, have been learned using BGP from the Nexus switches. In this example, two routes to 172.19.11.0/24 are learned, one through each Nexus switch. The routes are shown in the output below.

Leaf1A# show ip route


Codes: C - connected
S - static
B - BGP, IN - internal BGP, EX - external BGP
O - OSPF, IA - OSPF inter area, N1 - OSPF NSSA external type 1,
N2 - OSPF NSSA external type 2, E1 - OSPF external type 1,
E2 - OSPF external type 2, * - candidate default,
+ - summary route, > - non-active route
Gateway of last resort is not set
Destination Gateway Dist Last Change
----------------------------------------------------------------------------------
B EX 10.0.2.1/32 via 192.168.1.0 20/0 00:43:16
via 192.168.2.0
B EX 10.0.2.2/32 via 192.168.1.0 20/0 00:43:16
via 192.168.2.0
C 172.16.0.0/31 via 172.16.0.1 vlan4090 0/0 02:19:46
C 172.16.128.0/32 via 172.16.128.0 loopback1 0/0 02:20:07
B IN 172.16.128.1/32 via 172.16.0.0 200/0 02:19:44
B EX 172.19.11.0/24 via 192.168.1.0 20/0 00:43:32
via 192.168.2.0
C 172.30.0.0/32 via 172.30.0.0 loopback2 0/0 02:20:07
C 192.168.1.0/31 via 192.168.1.1 ethernet1/1/53 0/0 01:12:49
B IN 192.168.1.2/31 via 172.16.0.0 200/0 01:09:12
C 192.168.2.0/31 via 192.168.2.1 ethernet1/1/54 0/0 01:10:18
B IN 192.168.2.2/31 via 172.16.0.0 200/0 01:07:51
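
To limit the display to BGP-learned routes only, OS10 accepts a protocol keyword on the same command. This is an optional check; output is omitted here.

Leaf1A# show ip route bgp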

To continue deployment, go to the Configure a jump host port section of this guide.
BGP validation on N9K-External-A during VxRail deployment
During VxRail deployment, virtual networks are automatically configured on the SmartFabric leaf switches. IP addresses are then
manually assigned to each leaf switch on the External Management network, 172.18.11.0/24 in this guide, as shown in the
Additional configuration steps for L3 uplinks section.
Once these steps are complete, run the show ip route command on the external Nexus switches to verify that routes to the
External Management network, 172.18.11.0/24, have been learned using BGP from the SmartFabric leaf switches. These routes are
shown in the output below.

NOTE: The following command output is for the N9K-External-A switch. The output for N9K-External-B is similar.

N9K-External-A# show ip route


IP Route Table for VRF "default"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

10.0.2.1/32, ubest/mbest: 2/0, attached


*via 10.0.2.1, Lo0, [0/0], 18:53:33, local
*via 10.0.2.1, Lo0, [0/0], 18:53:33, direct
172.19.11.0/24, ubest/mbest: 1/0, attached
*via 172.19.11.252, Vlan1911, [0/0], 18:52:51, direct
172.19.11.252/32, ubest/mbest: 1/0, attached
*via 172.19.11.252, Vlan1911, [0/0], 18:52:51, local
172.19.11.254/32, ubest/mbest: 1/0, attached
*via 172.19.11.254, Vlan1911, [0/0], 18:52:51, vrrp_engine
172.18.11.0/24, ubest/mbest: 2/0
*via 192.168.1.1, [20/0], 00:00:05, bgp-65101, external, tag 65011
*via 192.168.1.3, [20/0], 00:01:31, bgp-65101, external, tag 65011
192.168.1.0/31, ubest/mbest: 1/0, attached
*via 192.168.1.0, Eth1/49, [0/0], 00:00:09, direct
192.168.1.0/32, ubest/mbest: 1/0, attached
*via 192.168.1.0, Eth1/49, [0/0], 00:00:09, local
192.168.1.2/31, ubest/mbest: 1/0, attached
*via 192.168.1.2, Eth1/50, [0/0], 18:53:35, direct
192.168.1.2/32, ubest/mbest: 1/0, attached
*via 192.168.1.2, Eth1/50, [0/0], 18:53:35, local
192.168.2.0/31, ubest/mbest: 2/0
*via 192.168.1.1, [20/0], 00:00:05, bgp-65101, external, tag 65011
*via 192.168.1.3, [20/0], 00:01:31, bgp-65101, external, tag 65011
192.168.2.2/31, ubest/mbest: 2/0
*via 192.168.1.1, [20/0], 00:00:05, bgp-65101, external, tag 65011
*via 192.168.1.3, [20/0], 00:01:31, bgp-65101, external, tag 65011
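
As an optional end-to-end check, the leaf switch addresses on the External Management network can be pinged from either external switch. The addresses shown below are placeholders; substitute the addresses assigned to the leaf switches in the Additional configuration steps for L3 uplinks section.

N9K-External-A# ping 172.18.11.1
N9K-External-A# ping 172.18.11.2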

To continue deployment, go to the Validate and build VxRail cluster section of this guide.

Configure external Nexus switches for L2 connections


The external Nexus and SmartFabric leaf switches are cabled as shown in the following figure and are powered on. When L2
uplink configuration is complete, Leaf1A and Leaf1B connect with a VLT port channel to a virtual port channel (vPC) on the
external Nexus switches. In this example, an existing DNS/NTP server also connects to the Nexus switches using a vPC.

Figure 108. L2 uplinks to external Nexus 9000 switches

NOTE: DNS and NTP servers do not have to connect in this manner if they are reachable on the network.

All ports on the four switches shown in the figure above are in the External Management VLAN, 1811, in this example.
NOTE: All Nexus switch configuration commands used to validate this topology are shown in the sections that follow.
These are only examples. Modify your Nexus external switch configuration as needed for your environment.

General settings
Enable the following features: interface-vlan, lacp, vrrp, vpc, and lldp. Configure the hostname, OOB
management IP address on VRF management, and the VRF management route as shown.
NOTE: Nexus spanning tree settings are at their factory defaults in this example. You may configure spanning tree on the
Nexus switches as needed for your environment. On Dell leaf switches in SmartFabric mode, spanning tree is disabled on L2
uplinks. See Dell EMC Networking SmartFabric Services Deployment with VxRail for more information.
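
To confirm the spanning tree state on the Nexus switches before or after bringing up the L2 uplink, the standard NX-OS summary command can be run on either switch. This is an optional check and is not required for deployment; output is omitted here.

N9K-External-A# show spanning-tree summary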

N9K-External-A N9K-External-B

configure terminal configure terminal

feature interface-vlan feature interface-vlan


feature lacp feature lacp
feature vrrp feature vrrp
feature vpc feature vpc
feature lldp feature lldp

hostname N9K-External-A hostname N9K-External-B

interface mgmt 0 interface mgmt 0


ip address 100.67.127.30/24 ip address 100.67.127.29/24
vrf member management vrf member management
no shutdown no shutdown

vrf context management vrf context management


ip route 100.67.0.0/16 100.67.76.254 ip route 100.67.0.0/16 100.67.76.254

Configure the External Management VLAN


VLAN 1811 represents a preexisting management VLAN on the external network. DNS and NTP services are located on this
VLAN. Optionally, enable jumbo frames with the mtu 9216 command.
If traffic will be routed from the external switches to other external networks, assign a unique IP address on each switch and
configure VRRP to provide gateway redundancy. Assign the same virtual address to both switches.

N9K-External-A N9K-External-B

vlan 1811 vlan 1811


name ExtMgmt name ExtMgmt
no shutdown no shutdown

interface Vlan1811 interface Vlan1811


description ExtMgmt description ExtMgmt
no shutdown no shutdown
mtu 9216 mtu 9216
ip address 172.18.11.252/24 ip address 172.18.11.253/24
vrrp 11 vrrp 11
address 172.18.11.254 address 172.18.11.254
no shutdown no shutdown
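
With VRRP configured as shown above, the group state and the current master can be verified on either switch. This is an optional check; output is omitted here.

N9K-External-A# show vrrp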

Configure the vPC domain and peer link


Create the vPC domain. The peer-keepalive destination is the OOB management IP address of the vPC peer switch.
Configure a port channel to use as the vPC peer link. Put the port channel in trunk mode and allow the default and External
Management VLANs, 1 and 1811 respectively.
Configure the interfaces to use in the vPC peer link. Put the interfaces in trunk mode and allow the default and External
Management VLANs, 1 and 1811 respectively. Add the interfaces to the peer link port channel. Port-channel 1000 is set as an
LACP port-channel with the channel-group 1000 mode active command.

N9K-External-A N9K-External-B

vpc domain 129 vpc domain 129


role priority 1 role priority 65535
peer-keepalive destination 100.67.127.29 peer-keepalive destination 100.67.127.30

interface port-channel 1000 interface port-channel 1000


description "Peer-Link to External-B" description "Peer-Link to External-A"
switchport switchport
switchport mode trunk switchport mode trunk
switchport trunk allowed vlan 1,1811 switchport trunk allowed vlan 1,1811
vpc peer-link vpc peer-link
no shutdown no shutdown

interface ethernet 1/51-52 interface ethernet 1/51-52


description "Link to External-B" description "Link to External-A"
switchport switchport
switchport mode trunk switchport mode trunk
switchport trunk allowed vlan 1,1811 switchport trunk allowed vlan 1,1811
channel-group 1000 mode active channel-group 1000 mode active
no shutdown no shutdown

Configure interfaces
Configure the interfaces for connections to the SFS leaf switches. Interfaces 1/49 and 1/50 are configured in vPC 100 in this
example. Port-channel 100 is set as an LACP port-channel with the channel-group 100 mode active command.
Use the switchport mode trunk command to enable the port-channel to carry traffic for multiple VLANs. Allow VLAN 1811
(the External Management VLAN).
Optionally, allow the forwarding of jumbo frames with the mtu 9216 command.
In this example, interface 1/1 on each external switch is configured in vPC 1 for connections to the DNS/NTP server. Port-
channel 1 is set as an LACP port-channel with the channel-group 1 mode active command.
When the configuration is complete, exit configuration mode and save the configuration with the end and copy running-
config startup-config commands.

N9K-External-A N9K-External-B

interface port-channel 100 interface port-channel 100


description "vPC to Leaf1A/1B" description "vPC to Leaf1A/1B"
switchport switchport
switchport mode trunk switchport mode trunk
switchport trunk allowed vlan 1811 switchport trunk allowed vlan 1811
vpc 100 vpc 100
mtu 9216 mtu 9216
no shutdown no shutdown

interface ethernet 1/49-50 interface ethernet 1/49-50


description "Link to Leaf1A/1B" description "Link to Leaf1A/1B"
switchport switchport
switchport mode trunk switchport mode trunk
switchport trunk allowed vlan 1811 switchport trunk allowed vlan 1811
mtu 9216 mtu 9216
channel-group 100 mode active channel-group 100 mode active
no shutdown no shutdown

interface port-channel 1 interface port-channel 1


description "vPC to DNS/NTP" description "vPC to DNS/NTP"
switchport switchport
switchport mode access switchport mode access
switchport access vlan 1811 switchport access vlan 1811
vpc 1 vpc 1
no shutdown no shutdown

interface ethernet 1/1 interface ethernet 1/1

description "Link to DNS/NTP" description "Link to DNS/NTP"


switchport switchport
switchport mode access switchport mode access
switchport access vlan 1811 switchport access vlan 1811
channel-group 1 mode active channel-group 1 mode active
no shutdown no shutdown

end end
copy running-config startup-config copy running-config startup-config

Validation
Once the uplink interfaces have been configured in the SFS UI and on the external Nexus switches, connectivity can be verified
using the switch CLI.

Show command output on Leaf1A


NOTE: The command output shown in the following examples is for Leaf1A. The output for Leaf1B is similar.

With SFS, port channel numbers are automatically assigned as they are created. In this example, port channel 1 is the uplink
connected to the Nexus switches. It has two members that are both up and active. Port channel 1000 is reserved for the VLTi.

Leaf1A# show port-channel summary

Flags: D - Down I - member up but inactive P - member up and active


U - Up (port-channel) F - Fallback Activated
--------------------------------------------------------------------------------
Group Port-Channel Type Protocol Member Ports
--------------------------------------------------------------------------------
1 port-channel1 (U) Eth DYNAMIC 1/1/53(P) 1/1/54(P)
1000 port-channel1000 (U) Eth STATIC 1/1/49(P) 1/1/50(P) 1/1/51(P)
1/1/52(P)

The L2 uplink, port channel 1 in this example, is a tagged member of VLAN 1811. This is verified at the CLI using the show
virtual-network command as follows:

Leaf1A# show virtual-network


Codes: DP - MAC-learn Dataplane, CP - MAC-learn Controlplane, UUD - Unknown-Unicast-Drop
Un-tagged VLAN: 4080
Virtual Network: 1811
VLTi-VLAN: 1811
Members:
Untagged: ethernet1/1/9:1
VLAN 1811: port-channel1, port-channel1000, ethernet1/1/1, ethernet1/1/2,
ethernet1/1/3
VxLAN Virtual Network Identifier: 1811
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Virtual Network: 1812


VLTi-VLAN: 1812
Members:
VLAN 1812: port-channel1, port-channel1000, ethernet1/1/1, ethernet1/1/2,
ethernet1/1/3
VxLAN Virtual Network Identifier: 1812
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Virtual Network: 1813


VLTi-VLAN: 1813
Members:
VLAN 1813: port-channel1, port-channel1000, ethernet1/1/1, ethernet1/1/2,
ethernet1/1/3
VxLAN Virtual Network Identifier: 1813
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Virtual Network: 1814


VLTi-VLAN: 1814
Members:
VLAN 1814: port-channel1, port-channel1000, ethernet1/1/1, ethernet1/1/2,
ethernet1/1/3
VxLAN Virtual Network Identifier: 1814
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Virtual Network: 1815


VLTi-VLAN: 1815
Members:
VLAN 1815: port-channel1, port-channel1000, ethernet1/1/1, ethernet1/1/2,
ethernet1/1/3
VxLAN Virtual Network Identifier: 1815
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Virtual Network: 3939


Description: In-band SmartFabric Services discovery network
VLTi-VLAN: 3939
Members:
VLAN 3939: port-channel1000, ethernet1/1/1, ethernet1/1/2, ethernet1/1/3
VxLAN Virtual Network Identifier: 3939
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Virtual Network: 4091


Description: Default untagged network for client onboarding
VLTi-VLAN: 4091
Members:
Untagged: ethernet1/1/1, ethernet1/1/2, ethernet1/1/3
VLAN 4091: port-channel1000
VxLAN Virtual Network Identifier: 4091
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):

Use the show vlt 255 vlt-port-detail command to verify the status of VLT ports. Port channel 1 is the L2 uplink to
the Nexus switches. The output shows information for both VLT peer switches. An asterisk (*) denotes the local switch. In this
case, Leaf1A is VLT unit 1, and Leaf1B is VLT unit 2.

Leaf1A# show vlt 255 vlt-port-detail


vlt-port-channel ID : 1
VLT Unit ID Port-Channel Status Configured ports Active ports
-------------------------------------------------------------------------------
* 1 port-channel1 up 2 2
2 port-channel1 up 2 2

Show command output on N9K-External-A


NOTE: The command output shown in the following examples is for the N9K-External-A switch. The output for N9K-External-B is similar.
The show port-channel summary command confirms port channels are up. Po1 connects to the DNS/NTP server, Po100
connects to the SFS leaf switches, and Po1000 is the peer link.

N9K-External-A# show port-channel summary


Flags: D - Down P - Up in port-channel (members)
I - Individual H - Hot-standby (LACP only)
s - Suspended r - Module-removed
S - Switched R - Routed
U - Up (port-channel)
p - Up in delay-lacp mode (member)
M - Not in use. Min-links not met
--------------------------------------------------------------------------------
Group Port- Type Protocol Member Ports
Channel
--------------------------------------------------------------------------------
1 Po1(SU) Eth LACP Eth1/1(P)
100 Po100(SU) Eth LACP Eth1/49(P) Eth1/50(P)
1000 Po1000(SU) Eth LACP Eth1/51(P) Eth1/52(P)

Run the show vlan command to verify ports are correctly assigned to the External Management VLAN (VLAN 1811). Po1
connects to the DNS/NTP server, Po100 connects to the SFS leaf switches, and Po1000 is the peer link.

N9K-External-A# show vlan

VLAN Name Status Ports


---- -------------------------------- --------- -------------------------------
1 default active Po1000, Eth1/51, Eth1/52
1811 ExtMgmt active Po1, Po100, Po1000, Eth1/49
Eth1/50, Eth1/51, Eth1/52

VLAN Type Vlan-mode


---- ----- ----------
1 enet CE
1811 enet CE

Remote SPAN VLANs


-------------------------------------------------------------------------------

Primary Secondary Type Ports


------- --------- --------------- -------------------------------------------

Run the show vpc command to verify that all vPC connections are up. In this example, Po1000 is the peer link, Po1 connects to the
DNS/NTP server, and Po100 connects to the SFS leaf switches.

N9K-External-A# show vpc


Legend:
(*) - local vPC is down, forwarding via vPC peer-link

vPC domain id : 129


Peer status : peer adjacency formed ok
vPC keep-alive status : peer is alive
Configuration consistency status : success
Per-vlan consistency status : success
Type-2 consistency status : success
vPC role : primary
Number of vPCs configured : 2
Peer Gateway : Disabled
Dual-active excluded VLANs : -
Graceful Consistency Check : Enabled
Auto-recovery status : Disabled
Delay-restore status : Timer is off.(timeout = 30s)
Delay-restore SVI status : Timer is off.(timeout = 10s)

vPC Peer-link status


---------------------------------------------------------------------
id Port Status Active vlans
-- ---- ------ --------------------------------------------------
1 Po1000 up 1,1811

vPC status
----------------------------------------------------------------------
id Port Status Consistency Reason Active vlans
-- ---- ------ ----------- ------ ------------
1 Po1 up success success 1811

100 Po100 up success success 1811
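
If any vPC reports a consistency failure instead of success, the parameters compared by the vPC consistency check can be listed for the domain or for a specific port channel. This is an optional troubleshooting step; output is omitted here.

N9K-External-A# show vpc consistency-parameters global
N9K-External-A# show vpc consistency-parameters interface port-channel 100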

NOTE: To continue deployment, go to the Configure a jump host port section.

Validated Nexus switches
The Cisco Nexus switches in the table below were used to validate the example in this appendix.

Table 16. External Cisco Nexus switches


Quantity Item Operating system version
2 Cisco Nexus 93180YC-EX switches 7.0(3)I4(2)

NOTE: Other validated components are listed in Appendix A.

Appendix D: Support and Feedback

Technical resources
Dell EMC Networking Info Hub
Dell EMC Networking OS10 Info Hub
Dell EMC SmartFabric OS10 User Guide Release 10.5.2
SmartFabric OS10 Solutions (HCI, Storage, MX) Support Matrix
Dell EMC PowerSwitch S3048-ON Documentation
Dell EMC PowerSwitch S5248F-ON Documentation
Dell EMC PowerSwitch S5232F-ON Documentation
Dell EMC Networking Transceivers and Cables
Dell EMC OpenManage Network Integration for VMware vCenter
NOTE: This site includes OMNI software and the SmartFabric Services for OpenManage Network Integration User Guide,
Release 2.0
Dell EMC OS10 SmartFabric Services FAQ
Dell EMC VxRail Network Planning Guide
Dell EMC VxRail 7.x Support Matrix (account required)
Dell Technologies SolVe Online (account required)
Dell EMC VxRail support and documentation (account required)
VxRail Documentation Quick Reference List (account required)
Dell EMC Networking SmartFabric Services Deployment with VxRail 4.7
Dell EMC Networking SmartFabric Services Deployment with VxRail 7.0

Fabric Design Center


The Dell EMC Fabric Design Center (FDC) is a cloud-based application that automates the planning, design, and deployment of
network fabrics that power Dell EMC compute, storage, and hyperconverged infrastructure solutions. The FDC is ideal for
turnkey solutions and automation based on validated deployment guides.
FDC allows design customization and flexibility to go beyond validated deployment guides. For additional information, go to the
Dell EMC Fabric Design Center.

Feedback and technical support


We encourage readers to provide feedback on the quality and usefulness of this publication by sending an email to
[email protected].
For technical support, go to http://www.dell.com/support.
