Dell EMC Networking SmartFabric Services Deployment with VxRail 7.0.1
Abstract
In this guide, SmartFabric Services (SFS) is used to deploy a new leaf-spine fabric for a
new VxRail cluster. SFS automatically reconfigures the fabric with user-specified VLANs
during VxRail cluster deployment. The SFS-enabled leaf-spine topology is connected to
the data center's existing network using Layer 2 or Layer 3 uplinks.
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2020 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other
trademarks may be trademarks of their respective owners.
Contents
Chapter 1: Introduction................................................................................................................. 5
Purpose of this guide..........................................................................................................................................................5
Dell Technologies................................................................................................................................................................. 5
VxRail......................................................................................................................................................................................5
SmartFabric Services..........................................................................................................................................................5
SmartFabric Services with VxRail....................................................................................................................................7
OpenManage Network Integration.................................................................................................................................. 7
Typographical conventions................................................................................................................................................7
Chapter 3: Topology.................................................................................................................... 10
Overview.............................................................................................................................................................................. 10
Production topology with SmartFabric Services........................................................................................................10
Production topology connection details........................................................................................................................11
OOB management topology............................................................................................................................................ 12
OOB management connection details...........................................................................................................................13
Chapter 1: Introduction
Dell Technologies
Our vision at Dell Technologies is to be the essential technology company for the data era. Dell ensures modernization for
today’s applications and for the emerging cloud-native world.
Dell is committed to disrupting the fundamental economics of the market with an open strategy that gives you the freedom of
choice for networking operating systems and top-tier merchant silicon. Our strategy enables business transformations that
maximize the benefits of collaborative software and standards-based hardware, including lowered costs, flexibility, freedom, and
security. Dell provides further customer enablement through validated deployment guides that demonstrate these benefits while
maintaining a high standard of quality, consistency, and support.
VxRail
VxRail is at the forefront of a fundamental shift in IT infrastructure consumption – away from application-specific, “build-your-own” infrastructure and toward virtualized, general-purpose, engineered systems. Dell Technologies and VMware have embraced this shift with the VxRail hyperconverged appliance. VxRail has a simple, scale-out architecture that uses VMware vSphere and VMware vSAN to provide server virtualization and software-defined storage.
SmartFabric Services
Dell EMC SmartFabric OS10 includes SmartFabric Services (SFS). With SFS, customers can quickly and easily deploy and
automate data center networking fabrics.
There are two types of SFS:
● SFS for Leaf and Spine — supported on selected Dell EMC PowerSwitch S and Z series switches
● SFS for PowerEdge MX — supported on selected modular switches, not applicable to this guide
SFS for Leaf and Spine has two personalities:
● VxRail Layer 2 (L2) Single Rack personality — This is the original (legacy) SFS personality that automates configuration
of a single pair of ToR (or leaf) switches for VxRail clusters.
● Layer 3 (L3) Fabric personality — This is the new SFS personality available as of OS10.5.0.5 that automates
configuration of a leaf-spine fabric.
VxRail L2 Single Rack personality
NOTE: For new single rack and multirack SFS deployments, Dell requires using the L3 Fabric personality instead of the
VxRail L2 Single Rack personality.
The VxRail L2 Single Rack personality is the original SFS personality. It is enabled by running a Python script in the OS10 Linux
shell.
This personality is limited to a single rack and cannot be expanded to a multirack deployment. If switches with this personality
enabled are upgraded, they will continue to operate with the VxRail L2 Single Rack personality.
NOTE: The VxRail L2 Single Rack personality is not covered in this deployment guide. It is covered in the VMware
Integration for VxRail Fabric Automation SmartFabric User Guide, Release 1.1.
L3 Fabric personality
NOTE: Dell requires using the L3 Fabric personality for new SFS deployments. All examples in this guide use this
personality. Unless otherwise specified, statements in this guide regarding SmartFabric behavior and features are applicable
to the L3 Fabric personality only.
NOTE: The L3 Fabric personality provides the option of deploying a VxRail cluster in a single rack or multirack environment.
The L3 Fabric personality allows users to deploy SmartFabric Services in a single rack and expand to multirack as business needs evolve.
The SFS L3 Fabric personality automatically builds an L3 leaf-spine fabric. This enables faster time to production for
hyperconverged and private cloud environments while being fully interoperable with existing data center infrastructure.
Figure 2. SFS Layer 3 leaf-spine fabric
Typographical conventions

Monospace text              CLI examples
Underlined monospace text   CLI examples that wrap the page, or highlighted information in CLI output
Italic monospace text       Variables in CLI examples
Bold text                   UI fields and information that is entered in the UI
Chapter 2: Hardware Overview
Supported switches
Only the Dell EMC PowerSwitch systems listed in Table 1 are supported with SFS in leaf or spine roles. SFS does not run on
other Dell EMC PowerSwitch models or third-party switches.
To use the SFS features detailed in this guide, switches must be running SmartFabric OS10.5.2.2 or a later version specified in
the SmartFabric OS10 Solutions (HCI, Storage, MX) Support Matrix.
NOTE: The roles shown are recommended, with the exception that Z9264F-ON is supported as a spine only. S5232F-ON
may be used as a leaf with ports connected to VxRail nodes broken out to 10 GbE or 25 GbE. VxRail nodes do not currently
support 100 GbE NICs for VxRail system traffic.
Any combination of the leaf and spine switches listed in Table 1 may be used with the exception that leaf switches must be
deployed in pairs. Each leaf switch in the pair must be the same model due to VLT requirements.
SFS supports up to 20 switches and eight racks in the fabric.
Figure 4. Dell EMC PowerSwitch S5232F-ON
NOTE: VxRail supports cluster sizes up to 64 nodes. With SFS, VxRail clusters must have a minimum of three nodes. Two-node VxRail clusters are not currently supported.
Chapter 3: Topology
Overview
The topology is divided into two major parts:
● Production
● Out-of-band (OOB) management
The production topology contains redundant components and is used for all mission-critical and end-user network traffic. The
OOB management network is an isolated network for remote management of hardware.
Figure 7. SmartFabric topology with connections to VxRail nodes and external network
NOTE: The deployment examples in this guide use two network adapter ports per VxRail node, as shown in Figure 7. See
the Dell EMC VxRail Network Planning Guide for VxRail node connectivity options.
With SFS, two leaf switches are used in each rack for redundancy and performance. A Virtual Link Trunking interconnect (VLTi)
connects each pair of leaf switches. Every leaf switch has an L3 uplink to every spine switch. Equal-cost multi-path routing
(ECMP) is leveraged to use all available bandwidth on the leaf-spine connections.
SFS uses BGP-EVPN to stretch L2 networks across the L3 leaf-spine fabric. This configuration allows for the scalability of L3
networks with the VM mobility benefits of an L2 network. For example, a VM can be migrated from one rack to another without
the need to change its IP address and gateway information.
The example in this guide builds the SmartFabric shown in Figure 7 in two stages:
1. The first stage is a single rack deployment. Leaf switches 1A and 1B are deployed in Rack 1 without spine switches, and a
two-leaf fabric is created using SFS. The fabric is connected to the external network using either L2 or L3 uplinks. The
external network is typically a preexisting network in the data center. Three VxRail nodes are connected to the two leaf
switches, and a three-node VxRail cluster is deployed.
2. In the second stage, two spine switches are added and connected to leaf switches 1A and 1B. Leaf switches 2A and 2B are
added in Rack 2 and are also connected to the spine switches. The fabric is expanded to include the two spines and two
additional leafs using SFS. A fourth VxRail node is added in Rack 2 and joined to the existing VxRail cluster.
NOTE: Single and multirack deployment options are discussed in Chapter 4.
Figure 8. Production network connection details
NOTE: In this example, the two QSFP28-DD double density ports (2x 100 GbE interfaces per physical port), available on
S5248F-ON switches, are used to create a 400 GbE VLTi. This requires QSFP28-DD DAC cables or optics. On switches
without QSFP28-DD ports, QSFP28 (100 GbE) or QSFP+ (40 GbE) ports are typically used for VLTi connections. The VLTi
synchronizes L2 and L3 control-plane information across the two nodes. The VLTi is used for data traffic only when there is
a link failure that requires the VLTi to reach the destination. Dell Technologies recommends using at least two physical ports
on each switch for the VLTi for redundancy and to provide additional bandwidth if there is a failure.
For OOB management network connections, one S3048-ON switch is installed in each rack, as shown in the figure below.
The OOB management network enables connections to the PowerSwitch SFS web UI. It also enables switch console access
using SSH, and VxRail node console access using the iDRAC. This network is also used to carry heartbeat messages between
switches configured as VLT peers, and for OpenManage Network Integration (OMNI) to communicate with the SFS master
switch.
NOTE: This guide covers the equipment shown in Racks 1 and 2. Other devices and racks shown in the figure above are for
demonstration purposes only.
Four 10 GbE SFP+ ports are available on each S3048-ON for use as uplinks to the OOB management network core.
1 GbE BASE-T ports on each S3048-ON are connected downstream to hardware management ports on each device in the rack.
This includes the VxRail node iDRAC ports and switch management ports. Management ports on other devices, such as
PowerEdge server iDRAC ports, storage array management ports, and rack PDU management ports, are also connected to this
network.
OOB management switch configuration is not detailed in this guide. The S3048-ON can function as an OOB management switch
with its OS10 factory default configuration. By default, all ports are in switchport mode, in VLAN 1, administratively up, and rapid
per-VLAN spanning tree plus (RPVST+) is enabled.
NOTE: At a minimum, Dell Technologies recommends changing the admin password to a complex password during the first
login.
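A minimal sketch of changing the admin password at the OS10 CLI; the password string is a placeholder:

OS10# configure terminal
OS10(config)# username admin password <new-complex-password> role sysadmin
OS10(config)# exit
OS10# write memory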
NOTE: For reference, devices on the OOB Management network in this guide use the 100.67.0.0/16 IP address block.
These addresses are examples only. Use IP addresses that are suitable for your environment.
Figure 10. OOB management network connection details
Chapter 4: Deployment Planning
Minimum requirements
Minimum requirements for VxRail 7.0.1 deployments with SFS include:
● Three VxRail nodes running VxRail appliance software version 7.0.100 or a later version as specified in the SmartFabric OS10
Solutions (HCI, Storage, MX) Support Matrix.
● VxRail nodes must meet the hardware and software requirements listed in the Dell EMC VxRail Support Matrix.
● On-board NICs in VxRail nodes must be 10 GbE or 25 GbE.
● Two Dell EMC PowerSwitch units as listed in Table 1 must be deployed as leaf switches. Each leaf switch in the pair must be
the same model due to VLT requirements.
● Dell EMC PowerSwitch units must be running SmartFabric OS10.5.2.2 or a later version as specified in the SmartFabric OS10
Solutions (HCI, Storage, MX) Support Matrix.
● One 1 GbE BASE-T, also referred to as 1000BASE-T, switch for OOB management connections. Dell Technologies
recommends using one PowerSwitch S3048-ON per rack.
● One DNS server, which can be an existing DNS server reachable on the network, with host records added for this deployment. The example DNS host records used in this guide are shown in Table 5.
Unsupported environments
SFS does not currently support the following environments:
● vSAN stretched clusters
● VMware Cloud Foundation (VCF)
● NSX-V
● VxRail L3 Everywhere
Unsupported features
SFS does not currently support the following features:
● Multiple VRF tenants
● Route policies or Access Control Lists (ACLs)
● OSPF or routing protocols other than eBGP
● Multicast routing protocols
● Networking features not covered in the SmartFabric Services for OpenManage Network Integration User Guide, Release 2.0. This guide is available on the Dell EMC OpenManage Network Integration for VMware vCenter website.
● Multirack deployment—A multirack SmartFabric with spines and two leaf switches per rack is deployed. VxRail nodes are
installed in multiple racks and connected to the SmartFabric leaf switches in each rack. A VxRail cluster is built using VxRail
nodes in multiple racks.
Uplink options
SFS uplink options to external network switches include:
● L2 uplinks from a leaf pair
● L3 uplinks from a leaf pair
● L3 uplinks from spines
NOTE: Dell Technologies recommends using uplinks from a leaf pair as a best practice. Leaf switches with uplinks to an
external network are referred to as border leafs. VxRail nodes and other servers in the rack may be connected to border
leafs in the same manner as other leafs in the SmartFabric.
L2 uplink planning
If an L2 uplink is used, determine the VLAN ID to use for VxRail external management and whether ports in the uplink will be tagged or untagged. Typically, this is the same VLAN used for DNS and NTP services on the existing network, as shown in the example in this guide. Optionally, traffic may be routed from the external switch to the DNS/NTP servers.
The L2 uplink may be an LACP or static LAG. If L2 uplinks connect to a pair of Dell EMC PowerSwitch systems, Dell
Technologies recommends using LACP with VLT per the example in this guide.
L2 uplink configuration is covered in detail in the Configure L2 uplinks to the external network section of this guide.
NOTE: With L2 uplinks, all routing into and out of the SmartFabric is done on external switches.
L3 uplink planning
SFS supports using L3 routed or L3 VLAN uplinks.
With L3 routed uplinks, each physical link is a point-to-point IP network. With L3 VLAN, all uplinks are in a LAG, and an IP
address is assigned to the VLAN containing the LAG. This guide provides examples using L3 routed uplinks. L3 VLAN examples
are beyond the scope of this guide.
Point-to-point IP networks and addresses must be planned for each physical link in the L3 uplink.
Each leaf switch in the SmartFabric needs an IP address on the External Management VLAN. An anycast gateway address on
the same VLAN is also specified. This is the virtual router/anycast gateway address shared by all leafs in the SmartFabric.
SmartFabric supports routing using eBGP or static routes. eBGP and static routing examples are both provided in this guide.
If eBGP is used, ASNs and router IDs must be determined for the external switches. ASNs and router IDs for the switches in the SmartFabric are configured automatically.
NOTE: SFS uses ASN 65011 for leafs and ASN 65012 for spines. If these ASNs conflict with your environment, they may be
changed in the SFS UI under 5. Edit Default Fabric Settings.
L3 uplink configuration is covered in detail in the Configure L3 routed uplinks to the external network section of this guide.
External switches
External switches must have available ports for connections from the existing network to the SFS border leafs (or spines if
applicable). For redundancy, Dell Technologies recommends two external switches with at least two links per switch to the
SmartFabric. Use enough connections to provide sufficient bandwidth for the traffic anticipated across these links. If using Dell
EMC PowerSwitch systems as external switches, Dell Technologies recommends configuring them as VLT peers, as shown in
the examples in this guide.
NOTE: This guide provides external switch configuration examples for Dell EMC PowerSwitch systems. Cisco Nexus switch
configuration examples are provided in Appendix C.
NOTE: All VLANs in Table 2 share the physical connections shown in Figure 8 in this deployment.
VLAN IDs and network addresses planned for this deployment example are shown in the following table.
NOTE: SFS automatically creates VLANs 4091 and 3939. VLANs 1811 through 1815 and their network IP addresses are
user-defined and are examples only. In SmartFabric mode, VLANs 1 through 3999, excluding 3939, are available for use.
VLANs 4091 and 3939 may be changed from their defaults in the SFS UI under 5. Edit Default Fabric Settings. VLAN
3939 is also a VxRail default VLAN. If VLAN 3939 is changed in the SFS UI, you must also change it to match in VxRail per
the VxRail documentation.
NOTE: VLANs 4000 through 4094 are reserved for SFS. For more information about the reserved VLANs, see the
SmartFabric Services for OpenManage Network Integration User Guide, Release 2.0. The guide is available on the Dell EMC
OpenManage Network Integration for VMware vCenter website.
NOTE: SFS uses the 172.16.0.0/16 and 172.30.0.0/16 IP address blocks internally for the leaf-spine network configuration. If these networks conflict with your environment, the default IP address blocks may be changed in the SFS UI under 5. Edit Default Fabric Settings.
In SmartFabric mode, each VLAN in Table 3 is automatically placed in a VXLAN virtual network with a Virtual Network Identifier
(VNI) that matches the VLAN ID. VLAN 4091 is in virtual network 4091, VLAN 1811 is in virtual network 1811, and so on.
The show virtual-network command is used to view virtual networks, VLANs, and port-VLAN assignments. This command
is covered in more detail later in this guide.
Table 4. VxRail deployment settings (continued)

VxRail Manager Settings, VxRail Manager:
  VxRail Manager Hostname: vxmgr01
  VxRail Manager IP Address: 172.18.11.72
  VxRail Manager Root Password: (not shown)
  VxRail Manager Service Account Password: (not shown)

Virtual Network Settings, VxRail Management Network:
  Management Subnet Mask: 255.255.255.0
  Management Gateway: 172.18.11.254
  Management VLAN ID: 1811

Virtual Network Settings, vSAN:
  vSAN Configuration Method: Autofill
  vSAN Starting IP Address: 172.18.13.101
  vSAN Subnet Mask: 255.255.255.0
  vSAN VLAN ID: 1813

Virtual Network Settings, vSphere vMotion:
  vMotion Configuration Method: Autofill
  vMotion Starting IP Address: 172.18.12.101
  vMotion Subnet Mask: 255.255.255.0
  vMotion VLAN ID: 1812

Virtual Network Settings, VM Guest Networks:
  VM Guest Network Name: VM_Network_A, VLAN ID: 1814
  VM Guest Network Name: VM_Network_B, VLAN ID: 1815

Virtual Network Settings, System VM Network:
  Port Binding: Ephemeral Binding
a. The VxRail Deployment Wizard now includes an option to use an Internal (VxRail Manager Service) DNS server. To use this
feature, see your VxRail documentation. The deployment example in this guide uses an external DNS server.
b. In the L2 uplink example in this guide, the DNS/NTP servers on the existing network are on the same External
Management VLAN, 1811, as the VxRail nodes. IP addresses on this network use the 172.18.11.0/24 address block. In the L3
uplink example, the DNS/NTP servers are on a different VLAN, 1911, with IP addresses in the 172.19.11.0/24 address block.
VLAN 1911 represents a pre-existing management VLAN and is used only on the external switches in the L3 uplink example.
c. If an NTP server is not provided, VxRail uses the time that is set on VxRail node 1.
Table 5. DNS hostnames and IP addresses (continued)

Hostname             IP address
vxrail04.dell.lab    172.18.11.104
vcenter01.dell.lab   172.18.11.62
vxmgr01.dell.lab     172.18.11.72
omni.dell.lab        172.18.11.56
ntp.dell.lab         172.18.11.51 (L2 uplink example); 172.19.11.51 (L3 uplink example)

In the L2 uplink example in this guide, the DNS server address is 172.18.11.50. In the L3 uplink example, the DNS server address is 172.19.11.50.
NOTE: The VxRail Deployment Wizard now includes an option to use an Internal (VxRail Manager Service) DNS server. To
use this feature, see your VxRail documentation. The deployment example in this guide uses an external DNS server.
Chapter 5: Configure the First Leaf Switch Pair
Cabling
Cable the switches and VxRail nodes, as shown in the figure below, and power on all devices.
For connection details, see Figure 8. Also, make OOB management connections, as shown in Figure 10.
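A minimal sketch of the typical OOB management configuration on each switch, using placeholder addresses from the example 100.67.0.0/16 management block (adjust per switch):

OS10# configure terminal
OS10(config)# interface mgmt 1/1/1
OS10(conf-if-ma-1/1/1)# no ip address dhcp
OS10(conf-if-ma-1/1/1)# ip address 100.67.11.1/24
OS10(conf-if-ma-1/1/1)# no shutdown
OS10(conf-if-ma-1/1/1)# exit
OS10(config)# management route 0.0.0.0/0 100.67.11.254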
Other global settings may also be configured here, such as ip name-server and ntp server if used by the switch. These
settings are not required for the deployment example in this guide. The hostname of the switch may be configured at the CLI or
in the SFS UI. In this guide, the SFS UI is used.
Enable SmartFabric
CAUTION: The following commands delete the existing switch configuration. Switch management settings such
as management IP address, management route, hostname, NTP server, and IP name server are retained.
Ensure the physical VLTi connections are made between leaf pairs before proceeding.
NOTE: This example uses the two QSFP28-DD ports (2x 100 GbE interfaces per physical port), interfaces 1/1/49-1/1/52, for the VLTi connections on each S5248F-ON leaf.
To put the first pair of leaf switches in SmartFabric mode and configure them as VLT peers, run the following commands on
each switch:
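A minimal sketch of the expected command sequence, based on the SmartFabric OS10 L3 fabric CLI and the VLTi interfaces in the note below; the confirmation prompt wording may differ, so verify the exact syntax against the SmartFabric OS10 User Guide for your release:

OS10# configure terminal
OS10(config)# smartfabric l3fabric enable role LEAF vlti ethernet1/1/49-1/1/52
Reboot to change the personality? [yes/no]: yes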
NOTE: For more information, see SmartFabric Services for OpenManage Network Integration User Guide, Release 2.0. The
guide is available on the Dell EMC OpenManage Network Integration for VMware vCenter website. For additional
SmartFabric CLI commands, see the SmartFabric Services chapter of the Dell EMC SmartFabric OS10 User Guide Release
10.5.2.
NOTE: The SFS UI supports Chrome, Firefox, and Edge browsers. Languages other than English are not supported at
this time.
All web UI configuration is done on the SFS master switch. If you connect to an SFS switch that is not the master, a link to the master is provided, outlined in red in the figure below.
When connected to the SFS master switch, the UI appears, as shown in the figure below.
NOTE: If L3 uplinks are used, proceed to the Configure L3 routed uplinks to the external network section.
NOTE: DNS and NTP servers do not have to be connected in this manner, as long as they are reachable on the network.
All ports on the four switches shown in Figure 19 are in the External Management VLAN, 1811.
Optionally, enter the show smartfabric uplinks command at the leaf switch CLI to view configured interfaces and
networks on the uplink.
NOTE: The command output shown below is for Leaf1A. The output for Leaf1B is the same.
NOTE: This is only an example. Modify your external switch configuration as needed for your network.
General settings
Configure the hostname, OOB management IP address, and OOB management route as shown.
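A minimal sketch for External-A, with placeholder OOB addresses (External-B is identical except for its hostname and addresses):

External-A:

configure terminal
hostname External-A
interface mgmt 1/1/1
 no ip address dhcp
 ip address 100.67.11.21/24
 no shutdown
 exit
management route 0.0.0.0/0 100.67.11.254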
Configure VLANs
Create the External Management VLAN. If traffic will be routed from the external switches to other external networks, assign a
unique IP address on each switch and configure VRRP to provide gateway redundancy. Set the VRRP priority. The switch with
the highest priority value becomes the master VRRP router. Assign the same virtual address to both switches.
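A minimal sketch, assuming switch addresses 172.18.11.1 and 172.18.11.2 (placeholders) and VRRP group 18; the virtual address 172.18.11.254 matches the Management Gateway in Table 4:

External-A:

interface vlan 1811
 ip address 172.18.11.1/24
 vrrp-group 18
  priority 150
  virtual-address 172.18.11.254

External-B:

interface vlan 1811
 ip address 172.18.11.2/24
 vrrp-group 18
  priority 100
  virtual-address 172.18.11.254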
Configure interfaces
Configure the interfaces for connections to the SFS leaf switches. Interfaces 1/1/13 and 1/1/14 are configured in VLT port
channel 100 in this example. Port-channel 100 is set as an LACP port channel with the channel-group 100 mode
active command.
Use the switchport mode trunk command to enable the port channel to carry traffic for multiple VLANs. Configure the
port channel as tagged on VLAN 1811 (the External Management VLAN).
Optionally, allow the forwarding of jumbo frames with the mtu 9216 command.
In this example, interface 1/1/1 on each external switch is configured in VLT port channel 1 for connections to the DNS/NTP
server. Port-channel 1 is set as an LACP port channel with the channel-group 1 mode active command.
Configure ports directly connected to nodes, servers, or other endpoints as STP edge ports. As a best practice, flow control
settings remain at their factory defaults as shown.
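A minimal sketch for both switches, consistent with the description above; the tagging of port channel 1 toward the DNS/NTP server is an assumption:

External-A and External-B:

interface port-channel 100
 switchport mode trunk
 switchport trunk allowed vlan 1811
 mtu 9216
 vlt-port-channel 100
 exit
interface range ethernet 1/1/13-1/1/14
 channel-group 100 mode active
 no shutdown
 exit
interface port-channel 1
 switchport mode trunk
 switchport trunk allowed vlan 1811
 vlt-port-channel 1
 spanning-tree port type edge
 exit
interface ethernet 1/1/1
 channel-group 1 mode active
 no shutdown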
Configure VLT
This example uses interfaces 1/1/11 and 1/1/12 for the VLTi. Remove each interface from L2 mode with the no switchport
command.
Create the VLT domain. The backup destination is the OOB management IP address of the VLT peer switch. Configure the
interfaces used as the VLTi with the discovery-interface command.
As a best practice, use the vlt-mac command to manually configure the same VLT MAC address on both the VLT peer
switches. This improves VLT convergence time when a switch is reloaded.
CAUTION: Be sure the VLT MAC address is the same on both switches to avoid any unpredictable behavior.
If you do not configure a VLT MAC address, the MAC address of the primary peer is used as the VLT MAC address on both
switches.
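A minimal sketch for External-A; the domain ID, backup destination (the peer's OOB address), and VLT MAC are placeholders:

External-A:

interface range ethernet 1/1/11-1/1/12
 no switchport
 exit
vlt-domain 100
 backup destination 100.67.11.22
 discovery-interface ethernet1/1/11-1/1/12
 vlt-mac 00:00:01:02:03:04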
NOTE: For more information about VLT, see the Dell EMC SmartFabric OS10 User Guide on the Dell EMC Networking OS10
Info Hub.
When the configuration is complete, exit configuration mode and save the configuration with the end and write memory
commands.
External-A and External-B:

end
write memory
Validation
Once the uplink interfaces have been configured on the external switches and in the SFS UI, additional validation is done using
the switch CLI.
Show command output on External-A
NOTE: The command output shown in the following commands is for the External-A switch. The output for External-B is
similar.
Run the show vlan command to verify ports are correctly assigned to the External Management VLAN. Port channel 100 connects to the SFS leaf switches and is a tagged member of the same VLAN configured on the SmartFabric uplinks (VLAN 1811).
The show port-channel summary command confirms that port channel 100, connected to the leaf switches, is up and active. Port channel 1000 is the VLTi, and port channel 1 is connected to the DNS/NTP server.
NOTE: The command output shown in the following commands is for Leaf1A. The output for Leaf1B is similar.
With SFS, port channel numbers are automatically assigned as they are created. Port channel 1 is the uplink connected to the
external switches and is up and active. Port channel 1000 is reserved for the VLTi.
The L2 uplink, port channel 1 in this example, is added as a tagged member of VLAN 1811. This is verified at the CLI using the
show virtual-network command as follows:
NOTE: Ethernet ports 1/1/1-1/1/3 are connected to the VxRail nodes. SFS automatically puts VxRail node ports in virtual
networks 3939 and 4091.
Connections, port numbers, and networks used for external management are shown in the figure below. The External Management VLAN is VLAN 1911 on the external switches and VLAN 1811 on the SmartFabric switches.
Point-to-point IP networks
The point-to-point links used in this deployment are labeled A-E in Figure 27.
Each L3 uplink is a separate, point-to-point IP network. Table 6 details the links labeled in Figure 27. The IP addresses in the
table below are used in the switch configuration examples.
BGP example
This section covers the L3 routed uplink configuration with BGP.
NOTE: Using private ASNs in the data center is a best practice. Private, 2-byte ASNs range from 64512 through 65534.
In this example, ASN 65101 is used on both external switches. SFS leaf switches use ASN 65011 by default for all leafs in the
fabric.
NOTE: If L3 uplinks are connected from SFS spine switches, the spine switches use ASN 65012 by default.
The IP addresses shown on the external network switches in Figure 28 are loopback addresses used as BGP router IDs. On the
SmartFabric switches, BGP router IDs are automatically configured from the SFS default private subnet address block,
172.16.0.0/16.
NOTE: SFS default ASNs and IP address blocks may be changed by going to 5. Edit Default Fabric Settings in the SFS
web UI.
Configure L3 routed uplinks with BGP in SFS
The following table shows the values entered in the SFS web UI to configure the L3 uplinks for this example. The steps below
the table are run once for each uplink using the values in the table.
NOTE: Any ports available on the leaf switches may be used as uplinks, provided they are compatible with the
corresponding ports on the external switches. If leaf switch uplink ports will not use their native speeds, the interfaces must
be first broken out to the correct speed before the uplinks are created. This is done using the 1. Breakout Switch Ports
option on the SFS web UI home page. A breakout example is shown in the Change the port-group speed in the SFS web UI
section of this guide.
To configure L3 routed uplinks with BGP, do the following using the data from Table 7:
1. In the SFS web UI, select 2. Create Uplink for External Network Connectivity.
Individual uplinks created are visible on the Uplinks tab of the SFS web UI, as shown in the figure below:
NOTE: Currently, only one static route per L3 uplink is allowed. If multiple routes are needed, use a default route, 0.0.0.0/0,
as the destination network, or add additional uplinks for specific networks. Support for multiple static routes per L3 uplink is
planned for a future release.
NOTE: Any ports available on the leaf switches may be used as uplinks, provided they are compatible with the
corresponding ports on the external switches. If leaf switch uplink ports will not use their native speeds, the interfaces must
be first broken out to the correct speed before the uplinks are created. This is done using the 1. Breakout Switch Ports
option on the SFS web UI home page. A breakout example is shown in the Change the port-group speed in the SFS web UI
section of this guide.
To configure L3 routed uplinks with a static route, perform the following steps:
1. In the SFS web UI, select 2. Create Uplink for External Network Connectivity.
Individual uplinks created are visible on the Uplinks tab of the SFS web UI as shown.
NOTE: This is only an example. Modify your external switch configuration as needed for your network.
General settings
Configure the hostname, OOB management IP address, and OOB management route on each external switch, following the same pattern as in the L2 uplink example.
Configure VLANs
VLAN 1911 represents a preexisting management VLAN on the external network. DNS and NTP services are located on this
VLAN. Assign a unique IP address to the VLAN on each switch.
Configure VRRP to provide gateway redundancy. Set the VRRP priority. The switch with the highest priority value becomes the
master VRRP router. Assign the same virtual address to both switches.
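A minimal sketch; External-A's address (172.19.11.252) matches the route table output later in this guide, while External-B's address is an assumption. The VRRP groups below are entered under this VLAN interface:

External-A:

interface vlan 1911
 ip address 172.19.11.252/24

External-B:

interface vlan 1911
 ip address 172.19.11.253/24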
External-A:

vrrp-group 19
 priority 150
 virtual-address 172.19.11.254

External-B:

vrrp-group 19
 priority 100
 virtual-address 172.19.11.254
Configure interfaces
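A minimal sketch for External-A, using the point-to-point /31 addresses that appear in the External-A route table later in this guide; External-B follows the same pattern with its own link addresses:

External-A:

interface ethernet 1/1/13
 no switchport
 ip address 192.168.1.0/31
 no shutdown
interface ethernet 1/1/14
 no switchport
 ip address 192.168.1.2/31
 no shutdown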
Configure VLT
This example uses interfaces 1/1/11 and 1/1/12 for the VLTi. Remove each interface from L2 mode with the no switchport
command. As a best practice, flow control settings remain at their factory defaults, as shown.
Create the VLT domain. The backup destination is the OOB management IP address of the VLT peer switch. Configure the
interfaces used as the VLTi with the discovery-interface command.
As a best practice, use the vlt-mac command to manually configure the same VLT MAC address on both the VLT peer
switches. This improves VLT convergence time when a switch is reloaded.
CAUTION: Be sure the VLT MAC address is the same on both switches to avoid any unpredictable behavior.
If you do not configure a VLT MAC address, the MAC address of the primary peer is used as the VLT MAC address on both
switches.
NOTE: For more information about VLT, see the Dell EMC SmartFabric OS10 User Guide on the Dell EMC Networking OS10
Info Hub.
External-A and External-B:

no switchport
flowcontrol receive on
flowcontrol transmit off
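The vlt-domain configuration completing this section mirrors the L2 uplink example; a minimal sketch for External-A, with placeholder domain ID, peer OOB address, and MAC:

External-A:

vlt-domain 100
 backup destination 100.67.11.22
 discovery-interface ethernet1/1/11-1/1/12
 vlt-mac 00:00:01:02:03:04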
Configure BGP
NOTE: If BGP is not used, go to the Configure static routes section.
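A minimal sketch for External-A, using ASN 65101, router ID 10.0.2.1, and leaf neighbor addresses 192.168.1.1 (Leaf1A) and 192.168.1.3 (Leaf1B) from this guide's addressing; External-B follows the same pattern with its own router ID and neighbor addresses:

External-A:

interface loopback 0
 ip address 10.0.2.1/32
router bgp 65101
 router-id 10.0.2.1
 neighbor 192.168.1.1
  remote-as 65011
  no shutdown
 neighbor 192.168.1.3
  remote-as 65011
  no shutdown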
External-A and External-B:

end
write memory
Configure static routes

Configure two static routes to the external management network, 172.18.11.0/24: one via the connected IP address of Leaf1A, and one via Leaf1B.
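A minimal sketch for External-A; the next hops 192.168.1.1 and 192.168.1.3 are the leaf ends of the point-to-point links (External-B uses its own leaf-facing next hops):

External-A:

ip route 172.18.11.0/24 192.168.1.1
ip route 172.18.11.0/24 192.168.1.3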
When the configuration is complete, exit configuration mode and save the configuration with the end and write memory
commands.
External-A and External-B:

end
write memory
Run the show ip interface brief command to verify connected interfaces are up and IP addresses are configured correctly. In the output below, interface 1/1/1 and port channel 1 connect to the DNS/NTP server, 1/1/13-1/1/14 are the links to the SFS leaf switches, and 1/1/11-1/1/12 are the VLTi links. VLAN 1911 is the external management VLAN that contains the DNS/NTP server. VLAN 4094 and port channel 1000 are automatically configured for the VLTi.
NOTE: Unused interfaces have been removed from the output for brevity.
The show ip route command output for the External-A switch appears as shown. No BGP routes from the SFS fabric are
learned at this stage of deployment. Interfaces 1/1/13 and 1/1/14 are connected to the SFS leaf switches.
Run the show ip bgp summary command to verify that BGP is up for each neighbor. When BGP is up, uptime is shown in
the Up/Down column. The neighbors for Leaf1A shown in the output below are Leaf1B, External-A, and External-B.
Run the show ip interface brief command to verify connected interfaces are up, and IP addresses are configured
correctly.
In the output below, interfaces 1/1/1-1/1/3 are connected to the VxRail nodes, 1/1/49-1/1/52 are the VLTi, and 1/1/53-1/1/54
are the uplinks to the external switches. SFS uses VLANs 4000-4090, Loopback 1, and Loopback 2 internally. VLAN 4094 and
port channel 1000 are automatically configured for the VLTi.
NOTE: Unused interfaces have been removed from the output for brevity.
Run the show ip interface brief command to verify connected interfaces are up, and IP addresses are configured
correctly.
In the output below, interfaces 1/1/1-1/1/3 are connected to the VxRail nodes, 1/1/49-1/1/52 are the VLTi links, and
1/1/53-1/1/54 are the uplinks to the external switches.
NOTE: Unused interfaces have been removed from the output for brevity.
Run the show ip route command to verify static routes to the external management VLAN, 172.19.11.0/24, are correctly
configured.
NOTE: Since BGP is used by SFS to exchange routes within the fabric, some BGP routes appear in the output.
Figure 40. Jump host connected leaf switch for VxRail deployment
This section covers the configuration of a leaf switch port for connection to a jump host or laptop computer (referred to only as
a jump host for the remainder of this guide).
NOTE: Changing the speed is done for all ports in the port group. In this example, setting port group 1/1/3 to 10g-4x
changes ports 1/1/9-1/1/12 to 10 GbE, and the ports are renamed 1/1/9:1-1/1/12:1.
NOTE: The jump host port is only configured on one of the leaf switches.
Figure 45. VxRail Cluster type
NOTE: 2-node VxRail clusters are not currently supported with SFS.
5. On the Discover Resources page, wait for all the VxRail hosts in the rack and the SmartFabric switch cluster (L3_Fabric) to
be discovered.
NOTE: Discovery may take about 5 minutes. If necessary, click the Refresh icon to refresh the Hosts section or
the Top-of-Rack Switch section as needed.
Figure 46. Hosts and SmartFabric discovered
The three VxRail nodes and the SmartFabric switch cluster are discovered.
6. Click NEXT.
7. On the Configuration Method page, select your preferred configuration method, Step-by-step user input, or Upload a
configuration file. Either configuration method may be used.
NOTE: A JSON-formatted configuration file may be used if you have saved one from a previous installation using the
same versions of VxRail and SmartFabric OS10, or if you have been provided one from your sales representative. If you
do not have a configuration file, select Step-by-step user input.
8. Click NEXT.
9. The values entered for screens 6 through 10 (Global Settings through Virtual Network Settings) of the deployment wizard are listed in Table 4 in the Deployment Planning chapter.
NOTE: Step-by-step VxRail configuration screens are not in this guide, but are provided in the VxRail Appliance
Installation Procedures that are available on Dell Technologies SolVe Online (account required).
10. From the Configure Switch screen, enter the default REST_USER password, admin, and click CONFIGURE SWITCH.
NOTE: The REST_USER account is used by VxRail and OMNI to configure the switches.
Figure 47. Configure Switch screen
CAUTION: Do not click the VALIDATE CONFIGURATION button at this time.
Virtual Network: 1815
VLTi-VLAN: 1815
Members:
VLAN 1815: port-channel1000, ethernet1/1/1, ethernet1/1/2, ethernet1/1/3
VxLAN Virtual Network Identifier: 1815
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):
Traffic on the External Management network, VLAN 1811, must be able to reach the DNS server on the external network during
VxRail deployment. To accomplish this with L3 uplinks, an IP address is assigned to each leaf switch on virtual network 1811. An
anycast gateway address shared by all leafs is also configured on the same network.
Since this is on virtual network 1811, available IP addresses in the 172.18.11.0/24 address block are used per the planning data in
Table 3.
Table 9. Leaf switch External Management network IP addresses and anycast gateway

Item                  IP address/prefix
Leaf1A IP address     172.18.11.253/24
Leaf1B IP address     172.18.11.252/24
Gateway IP address    172.18.11.254/24
NOTE: If present, additional leaf switches in the fabric will also need one IP address per leaf on this network.
Figure 50. Update network configuration window
a. Next to Network, select the External Management network, Management Network 1811, from the drop-down list.
b. Next to Enable IP Address, select IPv4.
NOTE: When IPv4 is selected, additional fields display, as shown in the figure below.
c. Next to Interface IP Addresses, enter an interface IP address for each leaf switch in the SmartFabric as shown in Table 9. Click the blue Add button to add IP address entry fields.
NOTE: If you plan to expand the fabric, additional leaf switches will also need IP addresses on this network, with one
IP address per leaf. This is covered in Expand SmartFabric and VxRail cluster to multirack.
d. Enter the Prefix Length for the IP addresses and a Gateway IP Address. These values are from Table 9.
When complete, the Update Network Configuration window shows the following configuration options:
Figure 51. Update network configuration window
3. Click OK.
BGP Validation
NOTE: If static routes are used, go to the Validate and build VxRail cluster section. (Static route validation was done earlier
in the Validate static route example section of this guide).
If BGP is used on the uplinks, ensure the external switches have learned the routes to the VxRail External Management network,
172.18.11.0/24 in this example, to reach the VxRail nodes and VxRail Manager. This is done with the show ip route
command. The BGP-discovered route to 172.18.11.0/24 is shown in bold in the output below.
NOTE: The command output shown is for the External-A switch. The output for External-B is similar. BGP verification from
the leaf switches was done in Show command output on Leaf1A (BGP example).
NOTE: Command output from a Cisco Nexus switch is shown in Appendix C: BGP validation on N9K-External-A during
VxRail deployment.
N2 - OSPF NSSA external type 2, E1 - OSPF external type 1,
E2 - OSPF external type 2, * - candidate default,
+ - summary route, > - non-active route
Gateway of last resort is not set
Destination Gateway Dist/Metric Change
----------------------------------------------------------------------------------
C 10.0.2.1/32 via 10.0.2.1 loopback0 0/0 18:36:55
B IN 10.0.2.2/32 via 192.168.3.21 200/0 18:16:18
C 172.19.11.0/24 via 172.19.11.252 vlan1911 0/0 18:29:16
B EX 172.18.11.0/24 via 192.168.1.1 20/0 16:02:33
via 192.168.1.3
C 192.168.1.0/31 via 192.168.1.0 ethernet1/1/13 0/0 21:10:53
C 192.168.1.2/31 via 192.168.1.2 ethernet1/1/14 0/0 18:36:56
B IN 192.168.2.0/31 via 192.168.3.21 200/0 21:10:51
B IN 192.168.2.2/31 via 192.168.3.21 200/0 18:16:18
C 192.168.3.20/31 via 192.168.3.20 vlan4000 0/0 18:29:12
NOTE: Once validation passes, Dell Technologies recommends clicking the DOWNLOAD CONFIGURATION FILE
button to save a JSON file with your VxRail settings.
3. Click NEXT.
4. Click APPLY CONFIGURATION.
NOTE: Ensure the jump host NIC has an IP address on the new network, 172.18.11.0/24 in this example, before
proceeding with the next step. The jump host port on the leaf switch is untagged, so do not configure a VLAN ID on the
jump host NIC.
7. Click YES. You are automatically redirected to the new VxRail Manager IP address in the browser, and VxRail deployment
continues, as shown in the figure below.
NOTE: If the Redirected to new address prompt does not appear when deployment is about 27 percent complete, and the screen has not updated for at least 5 minutes, perform the following steps:
a. At the CLI of the leaf switch that the jump host is connected to, run the show virtual-network command.
b. Make sure the port that the jump host is connected to (Leaf1A, port 1/1/9:1 in this example) has automatically been
moved from VLAN 4091 to the external management VLAN (1811 in this example).
c. Once the jump host port has moved to the external management VLAN, manually change the IP address in the
browser's address bar from 192.168.10.200 to the new VxRail Manager address, 172.18.11.72. The address is shown in
the figure below. Leave the rest of the URL as-is. The browser connects to the new address and the deployment
continues as shown in the figure below.
The switch port connected to the jump host is automatically moved from VLAN 4091 to the External Management VLAN,
VLAN 1811, on the leaf switch to enable it to reach VxRail Manager on the new network.
(Optional) To verify the change, run the show virtual-network command on the leaf switch that the jump host is
connected to. In the output below, the jump host port 1/1/9:1 is now untagged in VLAN 1811 and is no longer in VLAN 4091.
NOTE: Virtual networks 1812 through 1815 and 3939 have been removed from the output below for brevity. The output
below is with an L3 uplink. If an L2 uplink is configured, the uplink port channel also appears as a member of VLAN 1811.
Source Interface: loopback2(172.30.0.0)
Remote-VTEPs (flood-list):
Deployment takes about one hour for a four-node cluster. When VxRail is successfully deployed, the VxRail Cluster
Successfully Configured message displays, as shown.
NOTE: If prompted, click the LAUNCH VSPHERE CLIENT (HTML5) button. The older Flash-based vSphere Web Client (Flex) is deprecated and is not used in this guide.
9. Log in using your vCenter credentials. In this example, the username is [email protected].
The Hosts and Clusters page of the vSphere Client appears, as shown in the figure below.
Figure 58. Newly created VxRail cluster
CAUTION: Review any warnings that may appear in the vSphere Client.
Chapter 7: Expand to Multirack
Verify preferred master setting before fabric expansion

During fabric expansion, the newly added switches may come up, form a fabric among themselves, and elect a master before they are connected to the existing fabric. When the new fabric merges with the running fabric, it is possible for the master switch from the new leaf switches to overwrite the configuration in the existing fabric. It is critical to ensure that a pair of leaf switches in the existing fabric is configured as the "preferred master" before expanding the fabric.
When you create an uplink to the external network using the SFS UI or OMNI, the preferred master is automatically set on all
leaf switches in the fabric at that time.
NOTE: Spine switches are never elected SmartFabric master or preferred master switches.
For the example in this guide, there are only two leaf switches in the SmartFabric at this stage of deployment. However, if there
are additional leaf switches in the SmartFabric when the uplink is created, they will show that PREFERRED-MASTER is set to
true.
If the fabric was previously expanded after the uplink was created, the added leafs will not have PREFERRED-MASTER set to true. This is allowed if PREFERRED-MASTER is set to true on at least one pair of leaf switches in the SmartFabric.
If leaf switch pairs in the existing SmartFabric do not show PREFERRED-MASTER set to true, create an uplink by following the
instructions in the Configure L2 uplinks to the external network or Configure L3 routed uplinks to the external network sections.
After the uplink is created, return to the preceding section and check the preferred master setting again.
If you are using a demo or lab environment without the need for an uplink, create a temporary uplink to set all the leaf switches in the SmartFabric as the preferred master.
NOTE: Physical port connections are not required to create this temporary uplink.
1. On the SFS UI Home page, select 2. Create Uplink for External Network Connectivity.
2. On the Uplink Details page:
a. Next to Uplink Connectivity, leave Layer 2 selected.
b. Enter a Name, such as temp.
3. Click NEXT.
4. On the Port Configuration page:
a. Next to Racks, select any rack.
b. Next to Configured Interfaces, select an available interface on either switch.
NOTE: You cannot use this interface for other purposes until you delete the uplink.
c. Leave the LAG mode set to LACP.
5. Click NEXT > FINISH.
After the uplink is created, verify all leaf switches in the SmartFabric show PREFERRED-MASTER is set to true.
To make the interface used in the temporary uplink available for other purposes, you can delete the uplink without affecting the
preferred master setting by performing the following steps:
1. On the SFS UI Uplinks page, select the uplink by name, temp in this example.
2. Click DELETE > OK.
The port used for the temporary uplink is now available.
----------------------------------------------------------
CLUSTER DOMAIN ID :
VIP : unknown
ROLE : unknown
SERVICE-TAG : unknown
MASTER-IPV4 :
PREFERRED-MASTER :
----------------------------------------------------------
If any leaf switch to be added to the SmartFabric shows PREFERRED-MASTER is set to true, the switch configuration should
be cleared. This is done by taking each affected leaf switch out of SmartFabric mode and returning to Full Switch mode with the
following commands:
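A minimal sketch, based on the SmartFabric OS10 CLI; the switch reloads into Full Switch mode:

OS10# configure terminal
OS10(config)# smartfabric l3fabric disable
Reboot to change the personality? [yes/no]: yes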
After the switch reloads, run show smartfabric cluster again on each affected leaf switch to confirm PREFERRED-MASTER is no longer set to true.
NOTE: New switches will be placed in SmartFabric mode in the Add switches to SmartFabric section of this chapter.
Run the following command on each switch to be added to the SmartFabric:
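As in Chapter 5, a minimal sketch of the OOB management configuration, with placeholder addressing (unique per switch):

OS10# configure terminal
OS10(config)# interface mgmt 1/1/1
OS10(conf-if-ma-1/1/1)# no ip address dhcp
OS10(conf-if-ma-1/1/1)# ip address 100.67.12.1/24
OS10(conf-if-ma-1/1/1)# no shutdown
OS10(conf-if-ma-1/1/1)# exit
OS10(config)# management route 0.0.0.0/0 100.67.12.254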
Other global settings may also be configured here, such as ip name-server and ntp server if used by the switch. These
settings are not required for the deployment example in this guide. The hostname of the switch may be configured at the CLI or
in the SFS UI. In this guide, the SFS UI is used.
Spines
The following commands are run on Spine1 and Spine2. This puts the switches in SmartFabric mode as spines.
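A minimal sketch, based on the SFS L3 fabric CLI; verify the syntax against the SmartFabric OS10 User Guide:

OS10# configure terminal
OS10(config)# smartfabric l3fabric enable role SPINE
Reboot to change the personality? [yes/no]: yes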
The configuration is applied, and the switch reloads. Repeat on the second spine switch.
Leafs
The following commands are run on Leaf2A and Leaf2B. This puts the switches in SmartFabric mode as leafs and configures
them as VLT peers.
NOTE: This example uses the two QSFP28-DD ports (2x 100 GbE interfaces per physical port), interfaces 1/1/49-1/1/52, for the VLTi connections on each leaf.
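A minimal sketch, matching the leaf-enable sequence from Chapter 5:

OS10# configure terminal
OS10(config)# smartfabric l3fabric enable role LEAF vlti ethernet1/1/49-1/1/52
Reboot to change the personality? [yes/no]: yes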
The configuration is applied, and the switch reloads. Repeat on the second leaf switch.
Optionally, run the following command to verify that a leaf or spine switch is in SmartFabric mode:
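For example, show switch-operating-mode reports the current mode (illustrative output below), and show smartfabric cluster member lists the fabric members with their hostnames:

OS10# show switch-operating-mode
Switch-Operating-Mode : Smart Fabric Mode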
NOTE: Since hostnames have not been configured on the four additional switches, each appears with its default hostname,
OS10. Hostnames for the additional switches are configured in the next section.
NOTE: The Network Fabric ID is automatically set to 100 and cannot be changed. All directly connected switches in
SmartFabric mode join this fabric.
2. On the Racks page, the second rack appears. Update the Name (recommended) and Description (optional) of the second
rack, as shown in the following figure.
Traffic on the External Management network, VLAN 1811, must be able to reach the external network. To accomplish this with
L3 uplinks, an IP address on virtual network 1811 is assigned to each leaf switch in the SmartFabric.
IP addresses are configured for the new leafs added to the SmartFabric, Leaf2A, and Leaf2B. The examples used in this guide
are shown in the table below.
NOTE: Existing leaf IP addresses and the gateway IP address were configured during VxRail cluster deployment in the
Additional configuration steps for L3 uplinks section of this guide.
Table 10. Leaf switch External Management network IP addresses and anycast gateway

Item                  IP address or prefix    Status
Leaf1A IP address     172.18.11.253/24        Previously configured
Leaf1B IP address     172.18.11.252/24        Previously configured
Leaf2A IP address     172.18.11.251/24        To be configured
Leaf2B IP address     172.18.11.250/24        To be configured
Gateway IP address    172.18.11.254/24        Previously configured
a. Next to Interface IP Addresses, two IP addresses configured earlier are listed for the existing leaf switches, Leaf1A and
Leaf1B. Use the blue Add button to add an IP address for each leaf switch added to the SmartFabric. The
additional addresses are for Leaf2A and Leaf2B as shown in the table above.
b. Leave the Prefix Length and Gateway IP Address at the settings previously configured.
When complete, the Update Network Configuration window appears, as shown in the following figure.
Figure 64. Update network configuration window
4. Click OK to apply the settings.
Figure 65. Forward and reverse lookup commands
Figure 67. Discovered Hosts window
6. Click NEXT.
7. In the User Credentials window, enter the vCenter and switch REST_USER credentials, as shown in the figure below.
Figure 68. User Credentials window
8. Click NEXT.
9. In the NIC Configuration window, the default values are used, as shown.
Figure 69. NIC Configuration window
10. Click NEXT.
11. In the Host Settings window, the hostname, IP address, and credentials for the new host are specified.
Figure 70. Host Settings window
12. Click NEXT.
13. (Optional) In the Host Location window, the Rack Name and Rack Position may be entered. These fields are left blank in
this example.
Figure 71. Host Location window
14. Click NEXT.
15. In the Network Settings window, provide the vSAN and vMotion IP addresses for the host, as shown in the following
figure.
Figure 72. Network Settings window
16. Click NEXT.
17. Review the settings in the Validate window. If no changes are needed, click VALIDATE. Validation may take 2 to 5 minutes.
Figure 73. Validate screen
Figure 74. Successful validation notification screen
18. An option to put the added host in maintenance mode is provided. In this example, the option is left at the default No setting.
19. Click FINISH.
On the Add VxRail Hosts page, the message Host expansion is in progress. Health monitoring is currently disabled during this task displays, and a progress bar displays under Status. Both are outlined in red in the figure below.
Figure 76. Host expansion complete message
21. The fourth VxRail node displays in the VxRail cluster as shown in the figure below.
CAUTION: Review any warnings that may appear in the vSphere Client.
(Optional) Verify the interface connected to the new VxRail node has been automatically added to the VxRail networks on the leaf switches in Rack 2. To verify the connection, run the show virtual-network command. In this example, the new VxRail node is connected to interface 1/1/1 on Leaf2A and Leaf2B.
The output below confirms the leaf switch interface connected to the new VxRail node, ethernet1/1/1, has been automatically
placed in all VxRail virtual networks/VLANs.
NOTE: The command output shown is for Leaf2A. The output for Leaf2B is the same.
VLAN 1811: port-channel1000, ethernet1/1/1
VxLAN Virtual Network Identifier: 1811
Source Interface: loopback2(172.30.0.1)
Remote-VTEPs (flood-list): 172.30.0.0(CP)
Chapter 8: Deploy and Configure OMNI
Deploy OMNI VM
The OMNI VM is available for download from the Dell EMC OpenManage Network Integration for VMware vCenter website.
Download OMNI-version#.zip and extract the OMNI-version#.ova file to a location accessible from the vSphere client.
NOTE: VxRail 7.0.1 supports OMNI 2.0 or a later version specified in the SmartFabric OS10 Solutions (HCI, Storage, MX)
Support Matrix.
1. To deploy the OMNI VM, launch the vSphere Client and go to Hosts and Clusters.
2. Right-click the VxRail cluster and select Deploy OVF Template.
When complete, the OMNI VM appears under the VxRail cluster, as shown below.
NOTE: In the TUI, use the Tab and Arrow keys to navigate and the Enter key to select.
5. Select Edit a connection > Wired connection 1.
NOTE: Only part of the Destination/Prefix field is visible on the screen. Be sure it is set to
fde1:53ba:e9a0:cccc::/64.
10. Select OK > OK > Back to return to the Network Manager TUI menu.
11. On the Network Manager TUI menu, select Activate a connection. The connection activation window displays.
NOTE: When active, connection names have an asterisk (*) next to them.
12. Deactivate both connections as follows:
a. Select external management > Deactivate.
b. Select internal management > Deactivate.
13. Activate both connections as follows:
a. Select external management > Activate.
b. Select internal management > Activate.
14. Select Back to return to the Network Manager TUI menu.
15. On the Network Manager TUI menu, select Set system hostname, as shown.
OMNI registration with vCenter also installs a plug-in to the vSphere Client.
If you are logged into the vSphere Client when the plug-in is installed, a banner appears at the top of the screen, outlined in
red in the figure below. Click the REFRESH BROWSER button that appears in the banner.
NOTE: If there are other messages present, such as a license warning, the message shown in the figure above may be
located behind the other messages. When there are multiple messages, there are < and > icons present to the left of the
banner to cycle through the messages.
To launch OMNI in the vSphere Client, select Menu > OpenManage Network Integration.
You may use either the vSphere Client or a direct browser connection to access the OMNI web UI.
NOTE: After OMNI is deployed, use OMNI for switch configuration instead of the SFS web UI. The SFS web UI is intended
for initial deployment only.
NOTE: For more information, see the SmartFabric Services for OpenManage Network Integration User Guide, Release 2.0.
The guide is available on the Dell EMC OpenManage Network Integration for VMware vCenter website.
General
The following tables list the hardware, software, and firmware that were used to configure and validate the examples in this guide.
NOTE: For more information about supported components and versions, see the Dell EMC VxRail Support Matrix (account
required).
NOTE: Switches validated for the Cisco Nexus examples are in Appendix C.
VxRail software used in this guide is as follows:
● ESXi 7.0.1-16850804
● vCenter Server 7.0.1-16858589
OMNI software
OMNI software used in this guide is as follows:
General commands
show version
Leaf and spine switches must be running a supported version of SmartFabric OS10. Run the show version command to check
the operating system version. SmartFabric OS10 is available on Dell Digital Locker (account required).
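For reference, an abbreviated example of the command and its output follows. The version and build strings shown are illustrative only; confirm the required version against the support matrix.
OS10# show version
Dell EMC Networking OS10 Enterprise
Copyright (c) 1999-2020 by Dell Inc. All Rights Reserved.
OS Version: 10.5.2.3
Build Version: 10.5.2.3.180
System Type: S5248F-ON
(output truncated; values are illustrative)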
NOTE: See the SmartFabric OS10 release notes for upgrade instructions.
NOTE: If SmartFabric OS10 was factory installed, a perpetual license is already on the switch.
NOTE: The jump host port was automatically moved from VLAN 4091 to VLAN 1811 during VxRail deployment.
NOTE: Unused interfaces have been removed from the output for brevity.
show ip route
With L3 uplinks, the show ip route command is used to ensure the leaf switches have routes to the external network,
172.19.11.0/24 in this example, to reach the DNS server. This BGP-discovered route is shown in the output below.
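As an illustration only, a BGP-learned route in the show ip route output resembles the lines below. The next-hop addresses and egress interfaces are placeholders that depend on the point-to-point addressing in your environment.
OS10# show ip route
  Destination                Gateway                          Dist/Metric   Last Change
  B  EX 172.19.11.0/24       via 192.168.1.0 ethernet1/1/53   20/0          00:45:12
                             via 192.168.2.0 ethernet1/1/54
(placeholder next-hop addresses; other routes omitted)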
VLT commands
show vlt domain_id
This command is used to validate the VLT configuration status. In SmartFabric mode, the VLT domain ID is 255. The Role for one switch in the VLT pair is primary, and its peer switch (not shown) is assigned the secondary role. The VLTi Link Status and VLT Peer Status must both be up.
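A trimmed, illustrative view of the fields described above follows. The exact layout and the additional fields present in the real output vary by OS10 release.
OS10# show vlt 255
Domain ID                : 255
Unit ID                  : 1
Role                     : primary
VLTi Link Status
    port-channel1000     : up
VLT Peer Status          : up
(output trimmed to the fields discussed above)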
The related show vlt 255 mismatch output must report no mismatches between the VLT peers, as shown:
Peer-routing mismatch:
No mismatch
VLAN mismatch:
No mismatch
EVPN Mismatch:
EVPN Mode Mismatch:
No mismatch
NVE Mismatch:
No mismatch
The switch reboots into Full Switch mode. The mode can be verified with the following command:
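OS10# show switch-operating-mode
Switch-Operating-Mode : Full Switch Mode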
In this example, an existing DNS/NTP server connects to the Nexus switches using a vPC in VLAN 1911.
Point-to-point IP networks
The L3 point-to-point links used in this example are labeled A-D in the figure below.
Each L3 uplink is a separate, point-to-point IP network. The following table details the links labeled in the figure above.
NOTE: The IP addresses in the table are used in the switch configuration examples.
In this example, ASN 65101 is used on both Nexus external switches. By default, SFS uses ASN 65011 for all leaf switches in the fabric.
NOTE: If L3 uplinks are connected from SFS spine switches, the spine switches use ASN 65012 by default.
The IP addresses shown on the external network switches in the figure above are loopback addresses used as BGP router IDs.
On the SmartFabric switches, BGP router IDs are automatically configured from the SFS default private subnet address block,
172.16.0.0/16.
NOTE: SFS default ASNs and IP address blocks may be changed by going to 5. Edit Default Fabric Settings in the SFS
UI.
NOTE: All of the Nexus switch configuration commands used to validate this topology are shown in the sections that
follow. The Nexus switches were reset to their default configuration settings using the write erase command before
running the configuration commands below. This is only an example. Modify your external switch configuration as needed
for your environment.
General settings
Enable the following features: interface-vlan, lacp, vrrp, vpc, bgp, and lldp. Configure the hostname, OOB
management IP address on VRF management, and the VRF management route as shown.
N9K-External-A N9K-External-B
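As a sketch only, the N9K-External-A side of the tables above might look like the following. The OOB management address and gateway are placeholders; N9K-External-B is configured the same way with its own hostname and addresses.
configure terminal
hostname N9K-External-A
feature interface-vlan
feature lacp
feature vrrp
feature vpc
feature bgp
feature lldp
! OOB management address and default route on VRF management (placeholder addresses)
interface mgmt0
  vrf member management
  ip address 100.67.10.1/24
vrf context management
  ip route 0.0.0.0/0 100.67.10.254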
Configure BGP
Configure a loopback interface to use for the BGP router ID.
Allow BGP to distribute routes with the route-map allow permit command.
Configure the BGP ASN with the router bgp command. The external switches share the same ASN. Use the address that
was set for interface loopback0 as the router ID.
Use the address-family ipv4 unicast and redistribute direct route-map allow commands to redistribute
IPv4 routes from physically connected interfaces.
Use the maximum-paths 2 command to configure the maximum number of paths that BGP adds to the route table for equal-cost multipath load balancing.
Specify the neighbor IP addresses and ASNs. Configure an address family for each neighbor.
When the configuration is complete, exit configuration mode and save the configuration with the end and copy running-config startup-config commands.
External-A External-B
no shutdown no shutdown
ip address 10.0.2.1/32 ip address 10.0.2.2/32
end end
copy running-config startup-config copy running-config startup-config
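Combining the steps above, a sketch of the complete N9K-External-A BGP configuration follows. The neighbor addresses 192.168.1.1 and 192.168.2.1 are placeholders for the point-to-point addresses of the SFS leaf switches; N9K-External-B is configured the same way with loopback address 10.0.2.2 and its own neighbor addresses.
configure terminal
interface loopback0
  ip address 10.0.2.1/32
  no shutdown
route-map allow permit 10
router bgp 65101
  router-id 10.0.2.1
  address-family ipv4 unicast
    redistribute direct route-map allow
    maximum-paths 2
  neighbor 192.168.1.1
    remote-as 65011
    address-family ipv4 unicast
  neighbor 192.168.2.1
    remote-as 65011
    address-family ipv4 unicast
end
copy running-config startup-config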
Run the show ip interface brief command to verify IP addresses are configured correctly. VLAN 1911 is the external
management VLAN that contains the DNS/NTP server. Loopback 0 is the router ID, and interfaces 1/49-1/50 are connected to
the SFS leaf switches.
The show ip route command output for the N9K-External-A switch appears as shown.
NOTE: The command output shown in the following sections is for Leaf1A. The output for Leaf1B is similar.
Run the show ip bgp summary command to verify that BGP is up for each neighbor. When BGP is up, uptime is shown in
the Up/Down column. The neighbors for Leaf1A shown in the output below are Leaf1B, N9K-External-A, and N9K-External-B.
Run the show ip interface brief command to verify connected interfaces are up, and IP addresses are configured
correctly.
In the output below, interfaces 1/1/1-1/1/3 are connected to the VxRail nodes, 1/1/49-1/1/52 are the VLTi, and 1/1/53-1/1/54
are the uplinks to the external switches. VLAN 4090, Loopback 1, and Loopback 2 are used internally by SFS. VLAN 4094 and
port channel 1000 are automatically configured for the VLTi.
NOTE: Unused interfaces have been removed from the output for brevity.
Run the show ip route command to verify that routes to the External Management VLAN, 172.19.11.0/24, have been learned using BGP from the Nexus switches. In this example, two routes to 172.19.11.0/24 are learned, one using each Nexus switch. These routes are shown in the output below.
To continue deployment, go to the Configure a jump host port section of this guide.
BGP validation on N9K-External-A during VxRail deployment
During VxRail deployment, virtual networks are automatically configured on the SmartFabric leaf switches. IP addresses are then
manually assigned to each leaf switch on the External Management network, 172.18.11.0/24 in this guide, as shown in the
Additional configuration steps for L3 uplinks section.
Once these steps are complete, run the show ip route command on the external Nexus switches to verify that routes to the External Management network, 172.18.11.0/24, have been learned using BGP from the SmartFabric leaf switches. These routes are shown in the output below.
NOTE: The following command output is for the N9K-External-A switch. The output for N9K-External-B is similar.
To continue deployment, go to the Validate and build VxRail cluster section of this guide.
NOTE: DNS and NTP servers do not have to connect in this manner if they are reachable on the network.
All ports on the four switches shown in the figure above are in the External Management VLAN, 1811, in this example.
NOTE: All Nexus switch configuration commands used to validate this topology are shown in the sections that follow.
These are only examples. Modify your Nexus external switch configuration as needed for your environment.
N9K-External-A N9K-External-B
Configure interfaces
Configure the interfaces for connections to the SFS leaf switches. Interfaces 1/49 and 1/50 are configured in vPC 100 in this
example. Port-channel 100 is set as an LACP port-channel with the channel-group 100 mode active command.
Use the switchport mode trunk command to enable the port-channel to carry traffic for multiple VLANs. Allow VLAN 1811
(the External Management VLAN).
Optionally, allow the forwarding of jumbo frames with the mtu 9216 command.
In this example, interface 1/1 on each external switch is configured in vPC 1 for connections to the DNS/NTP server. Port-
channel 1 is set as an LACP port-channel with the channel-group 1 mode active command.
When the configuration is complete, exit configuration mode and save the configuration with the end and copy running-config startup-config commands.
N9K-External-A N9K-External-B
end end
copy running-config startup-config copy running-config startup-config
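As a sketch, the interface configuration described above for N9K-External-A follows; N9K-External-B is identical. The DNS/NTP server ports are shown as access ports in VLAN 1811, which is an assumption; adjust the server-facing configuration if your server tags VLANs.
configure terminal
! vPC 100 - LACP uplink to the SFS leaf switches
interface port-channel100
  switchport mode trunk
  switchport trunk allowed vlan 1811
  mtu 9216
  vpc 100
interface ethernet1/49-50
  switchport mode trunk
  switchport trunk allowed vlan 1811
  mtu 9216
  channel-group 100 mode active
  no shutdown
! vPC 1 - LACP connection to the DNS/NTP server (access ports assumed)
interface port-channel1
  switchport access vlan 1811
  vpc 1
interface ethernet1/1
  switchport access vlan 1811
  channel-group 1 mode active
  no shutdown
end
copy running-config startup-config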
Validation
Once the uplink interfaces have been configured in the SFS UI and on the external Nexus switches, connectivity can be verified
using the switch CLI.
With SFS, port channel numbers are automatically assigned as they are created. In this example, port channel 1 is the uplink
connected to the Nexus switches. It has two members that are both up and active. Port channel 1000 is reserved for the VLTi.
The L2 uplink, port channel 1 in this example, is a tagged member of VLAN 1811. This is verified at the CLI using the show
virtual-network command as follows:
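Illustratively, following the virtual-network output format shown earlier in this guide, the VLAN 1811 entry lists the uplink port channel along with the VLTi port channel and the VxRail node ports (the member interfaces shown match this example):
OS10# show virtual-network
VLAN 1811: port-channel1, port-channel1000, ethernet1/1/1, ethernet1/1/2, ethernet1/1/3
(output truncated)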
Use the show vlt 255 vlt-port-detail command to verify the status of VLT ports. Port channel 1 is the L2 uplink to
the Nexus switches. The output shows information for both VLT peer switches. An asterisk (*) denotes the local switch. In this
case, Leaf1A is VLT unit 1, and Leaf1B is VLT unit 2.
Run the show vlan command to verify ports are correctly assigned to the External Management VLAN (VLAN 1811). Po1
connects to the DNS/NTP server, Po100 connects to the SFS leaf switches, and Po1000 is the peer link.
Run the show vpc command to verify all vpc connections are up. In this example, Po1000 is the peer link, Po1 connects to the
DNS/NTP server, and Po100 connects to the SFS leaf switches.
vPC status
----------------------------------------------------------------------
id Port Status Consistency Reason Active vlans
-- ---- ------ ----------- ------ ------------
1 Po1 up success success 1811
Technical resources
Dell EMC Networking Info Hub
Dell EMC Networking OS10 Info Hub
Dell EMC SmartFabric OS10 User Guide Release 10.5.2
SmartFabric OS10 Solutions (HCI, Storage, MX) Support Matrix
Dell EMC PowerSwitch S3048-ON Documentation
Dell EMC PowerSwitch S5248F-ON Documentation
Dell EMC PowerSwitch S5232F-ON Documentation
Dell EMC Networking Transceivers and Cables
Dell EMC OpenManage Network Integration for VMware vCenter
NOTE: This site includes OMNI software and the SmartFabric Services for OpenManage Network Integration User Guide,
Release 2.0
Dell EMC OS10 SmartFabric Services FAQ
Dell EMC VxRail Network Planning Guide
Dell EMC VxRail 7.x Support Matrix (account required)
Dell Technologies SolVe Online (account required)
Dell EMC VxRail support and documentation (account required)
VxRail Documentation Quick Reference List (account required)
Dell EMC Networking SmartFabric Services Deployment with VxRail 4.7
Dell EMC Networking SmartFabric Services Deployment with VxRail 7.0