Configure The Network For VxRail
Table Of Contents
Introduction
Configure the network switch for VxRail connectivity
Configure the upstream network for VxRail connectivity
Configure the network to support RoCE
Confirm your data center network
Confirm your data center environment
Configure the Network for VxRail
Introduction
Configure the adjacent top-of-rack switches and upstream network before you plug
in the VxRail nodes and power them on. That way the VxRail initialization process
can pass validation and build the cluster.
This section provides guidance on the tasks that must be undertaken on the data
center network to prepare for the VxRail initial implementation. You can use the
information in Appendix C: VxRail Cluster Setup Checklist for guidance. Be sure to
follow your vendor documentation for specific switch configuration activities and for
best practices for performance and availability.
Note: You can skip this section if you plan to enable Dell SmartFabric Services and
extend VxRail automation to the TOR switch layer.
Follow the steps in this section for the configuration settings required for VxRail
networking.
The VMware Loudmouth service depends on multicasting, which is required for the
VxRail internal management network. The network switch ports that connect to
VxRail nodes must allow for pass-through of multicast traffic on the VxRail Internal
Management VLAN. Multicast is not required on your entire network. It is only
required on the ports that are connected to VxRail nodes.
VxRail generates only a small amount of multicast traffic for auto-discovery and device management. Furthermore, the network traffic for the Internal Management network is restricted to a VLAN. If MLD Snooping and MLD Querier are supported on your switches, you can enable them on the VLAN.
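The commands for enabling MLD Snooping and an MLD Querier are vendor-specific. As an illustration only, using Cisco IOS-style syntax (your platform's commands and per-VLAN scoping may differ; consult your vendor documentation):

```
! Illustration only: enable MLD Snooping and an MLD Querier globally.
! Consult your vendor documentation for per-VLAN scoping of these
! settings on the VxRail Internal Management VLAN.
ipv6 mld snooping
ipv6 mld snooping querier
```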
For early versions of VxRail, multicast was required for the vSAN VLAN. One or more network switches that connected to VxRail had to allow for the pass-through of multicast traffic on the vSAN VLAN. Starting with VxRail v4.5, all vSAN traffic uses unicast instead of multicast. This change reduces network configuration complexity and simplifies switch configuration, because unicast traffic is forwarded by default on most enterprise Ethernet switches.
If you are required to configure multicast, VxRail multicast traffic for vSAN is limited to the broadcast domain of each vSAN VLAN. There is minimal impact on network overhead because the management traffic is nominal. You can limit multicast traffic further by enabling Internet Group Management Protocol (IGMP) Snooping and IGMP Querier. If your switches support both IGMP Snooping and IGMP Querier, Dell Technologies recommends enabling them.
IGMP Querier sends out IGMP group membership queries on a timed interval, retrieves IGMP membership reports from active members, and allows updates to the group membership tables. By default, most switches enable IGMP Snooping but disable IGMP Querier. If that is the case on your switches, change the settings so that both are enabled.
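As with the Internal Management VLAN, the exact commands are vendor-specific. A hedged Cisco IOS-style sketch, where VLAN ID 101 is a placeholder for your vSAN VLAN:

```
! Illustration only: VLAN ID 101 is a placeholder.
ip igmp snooping              ! enable IGMP Snooping globally
ip igmp snooping vlan 101     ! ...and on the vSAN VLAN
ip igmp snooping querier      ! enable the IGMP Querier function
```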
Configure VLANs for the VxRail networks
Configure the VLANs on the switches depending on the VxRail version being
deployed and the type of cluster being deployed. The VLANs are assigned to the
switch ports as a later task.
Configure the additional VxRail Witness Traffic Separation VLAN to manage traffic between the VxRail cluster and the witness. This VLAN is needed only if you are deploying a VxRail stretched cluster or a 2-Node cluster.
Figure 54. VxRail version 4.7 and later logical networks
Figure 55. VxRail 2-Node cluster logical networks with witness
Perform the following steps using Appendix A: VxRail Network Configuration Table:
2. For VxRail version 4.7 and later, configure the Internal Management VLAN
(Row 2) on the switches.
3. Allow multicast on the Internal Management network to support device
discovery.
4. Configure a vSphere vMotion VLAN (Row 3) on the switches.
5. Configure a vSAN VLAN (Row 4) on the switches. Unicast is required for
VxRail clusters that are built with version 4.5 and later.
6. Configure the VLANs for your VM Networks (Row 6) on the switches. These networks can be added after the initial cluster build is complete.
7. If you choose to create a separate subnet for the vCenter Server Network,
configure the vCenter Server Network VLAN (Row 7). Then configure the
VLAN on the switches.
8. Configure the optional VxRail Witness Traffic Separation VLAN (Row 74) on the switch ports if required.
9. Configure the switch uplinks to allow the External Management VLAN (Row
1) and VM Network VLANs (Row 6) to pass through.
a. Optionally configure the vSphere vMotion VLAN (Row 3), vSAN VLAN (Row 4), and vCenter Server Network VLAN (Row 7) for pass-through as well.
b. If a vSAN witness is required for the VxRail cluster, include the
VxRail Witness Traffic Separation VLAN (Row 74) on the uplinks.
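The VLAN definitions from the steps above might look like the following on a switch. This is a sketch in Cisco IOS-style syntax; every VLAN ID and name is a placeholder, and the row comments refer to Appendix A: VxRail Network Configuration Table.

```
! Illustration only: all VLAN IDs and names are placeholders.
vlan 100
 name VxRail-External-Mgmt   ! Row 1
vlan 3939
 name VxRail-Internal-Mgmt   ! Row 2
vlan 102
 name VxRail-vMotion         ! Row 3
vlan 103
 name VxRail-vSAN            ! Row 4
vlan 110
 name VM-Network             ! Row 6
```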
Configure the port mode on your switch based on the plan for the VxRail logical
networks, and whether VLANs will be used to segment VxRail network traffic. Ports
on a switch operate in one of the following modes:
Access mode—The port accepts untagged packets only and assigns them to the access VLAN configured on that port. This mode is typically the default for all ports. Use this mode only for VxRail clusters in test environments or for temporary usage.
Trunk mode—When this port receives a tagged packet, it passes the packet to the VLAN specified in the tag. To accept untagged packets on a trunk port, you must first configure a single VLAN as a “Native VLAN.” The native VLAN is the one VLAN that carries all untagged traffic on the port.
Tagged-access mode—The port accepts tagged packets only.
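For example, a node-facing switch port in trunk mode, with a native VLAN for untagged traffic, could be sketched as follows (Cisco IOS-style syntax; the interface name and all VLAN IDs are placeholders):

```
! Illustration only: interface name and VLAN IDs are placeholders.
interface TenGigabitEthernet1/0/1
 description VxRail-node-01-port-1
 switchport mode trunk
 switchport trunk native vlan 3939          ! untagged traffic
 switchport trunk allowed vlan 100,102,103,110,3939
```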
Link aggregation is supported for the VxRail initial implementation process only if:
If either of these conditions is not met, do not enable link aggregation on any switch ports that are connected to VxRail node ports before initial implementation. This limitation includes protocols such as Link Aggregation Control Protocol (LACP) and EtherChannel.
During the VxRail initial build process, either two or four ports are selected on each node. These ports support the VxRail management networks and any guest networks configured at that time. The VxRail initial build process configures a virtual distributed switch in the cluster, and then configures a port group on that virtual distributed switch for each VxRail management network.
Figure 56. Unused VxRail node ports configured for non-VxRail network traffic
When the initial implementation process completes, you can configure link aggregation on the operational VxRail cluster, as described in Configure link aggregation on VxRail networks. Your requirements may include using, for other use cases, any spare network ports on the VxRail nodes that were not configured for VxRail network traffic. You can configure link aggregation to support that network traffic. These ports can include any unused ports on the NDC-OCP or on the optional PCIe adapter cards.
Network traffic must be allowed uninterrupted passage between the physical switch ports and the VxRail nodes. Certain Spanning Tree states can place restrictions on network traffic and can force the port into an unexpected timeout mode. These Spanning Tree conditions can disrupt normal VxRail operations and impact performance.
If Spanning Tree is enabled in your network, ensure that the physical switch ports that are connected to VxRail nodes are configured with a setting such as “Portfast,” or are configured as edge ports. These settings place the port directly into the forwarding state, so no disruption occurs. vSphere virtual switches do not support STP. If you enable Spanning Tree to avoid loops within the physical switch network, you must configure every physical switch port that is connected to an ESXi host with such a setting.
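A hedged sketch of such a setting, in Cisco IOS-style syntax (the interface name is a placeholder, and the exact edge-port command varies by vendor and platform):

```
! Illustration only: place the node-facing trunk port directly into
! the forwarding state when the link comes up.
interface TenGigabitEthernet1/0/1
 spanning-tree portfast trunk
```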
The absence of flow control on a congested network can result in increased error rates and force network bandwidth to be consumed for error recovery. The flow control settings can be adjusted depending on network conditions, but Dell Technologies recommends setting flow control to “receive on” and “transmit off.”
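As an illustration only, in Cisco IOS-style syntax (the interface name is a placeholder; on some platforms “transmit off” is the default and a send-direction command is not available):

```
! Illustration only: accept pause frames from the node, do not send them.
interface TenGigabitEthernet1/0/1
 flowcontrol receive on
 flowcontrol send off
```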
Now that the switch base settings are complete, the next step is to configure the switch ports.
Perform the following steps for each switch port that is connected to a VxRail node:
The upstream network from the VxRail cluster must be configured to allow passage
for VxRail networks that require external access. Use Appendix A: VxRail Network
Configuration Table for reference. Upstream passage is required for:
Optionally, the vSphere vMotion VLAN (Row 3) and vSAN VLAN (Row 4)
can be configured for upstream passage.
If you plan to expand the VxRail cluster beyond a single rack, configure the VxRail
network VLANs for either:
If your Layer 2 or Layer 3 boundary is at the lowest network tier (top-of-rack switch),
perform the following tasks:
upstream passage.
If your Layer 2 or Layer 3 boundary is upstream from the lowest network tier (top-of-rack switch), perform the following tasks:
Connect ports on the adjacent upstream switch to the uplinks on the top-of-
rack switches.
Configure logical pairings of the ports on the adjacent upstream switch and
the top-of-rack switch.
Configure the logical port pairings, commonly known as “port channels” or
“EtherChannels,” to allow upstream passage of external VxRail networks.
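A hedged sketch of such a pairing, in Cisco IOS-style syntax (interface names, the port-channel number, and VLAN IDs are all placeholders):

```
! Illustration only: bundle two uplinks with LACP and allow the
! external VxRail networks to pass upstream.
interface range TenGigabitEthernet1/0/49-50
 channel-group 10 mode active    ! LACP
interface Port-channel10
 switchport mode trunk
 switchport trunk allowed vlan 100,110   ! External Mgmt, VM Networks
```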
Note: This section is only relevant if you are planning to deploy a VxRail cluster with
vSAN using RoCE-compliant Ethernet adapters.
The VxRail nodes that are targeted for deployment may be configured with Ethernet adapters that support Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE). If so, the switches and supporting network must be configured to enable a “lossless” network for the vSAN traffic.
Converting a cluster to enable RoCE on the vSAN datastore is performed after the
VxRail initial build. Consult the technical reference guides from your switch vendor
for the specific steps to configure on the physical network. The basic steps to ensure
a “lossless” network are as follows:
vSAN with RDMA supports NIC failover but does not support link
aggregation or NIC teaming based on IP hash.
Data Center Bridging must be enabled on the switches supporting the
VxRail cluster.
Control traffic flow on the switches using a mechanism such as Priority Flow
Control (PFC). Set the RoCE network to a higher priority as outlined in your
vendor documentation.
Configure RoCE on vSAN on a priority-enabled VLAN.
If the vSAN traffic is on a routed Layer 3 network, the “lossless” network settings must be preserved when the traffic is routed across network devices. This is done using a feature such as the Differentiated Services Code Point (DSCP) QoS setting.
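The “lossless” configuration is highly vendor-specific. As a minimal, hedged illustration in Cisco NX-OS-style syntax (the interface name is a placeholder, and the no-drop class and DCB policy configuration is omitted):

```
! Illustration only: enable Priority Flow Control on a node-facing
! port. Consult your vendor documentation for the no-drop traffic
! class and Data Center Bridging policy that must accompany it.
interface Ethernet1/1
 priority-flow-control mode on
```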
Upon completion of the switch configuration, there should be unobstructed network
paths between the switch ports and the ports on the VxRail nodes. The VxRail
management network and VM network should have unobstructed passage to your
data center network. Before forming the VxRail cluster, the VxRail initialization
process performs several verification steps, including:
Certain data center environment and network configuration errors can cause the
validation to fail, and the VxRail cluster will not form. When validation fails, the data
center settings and switch configurations must undergo troubleshooting to resolve
the problems reported.
Confirm the settings on the switch, using the switch vendor instructions for guidance:
5. If you plan to use a customer-supplied vCenter, verify open communication
between the vCenter instance and the VxRail managed hosts.
6. If you plan to use a third-party syslog server instead of Log Insight, verify that
open communication exists between the syslog server and the VxRail
management components.
7. If you plan to deploy a separate network for ESXi host management (iDRAC),
verify that your IT administrators can communicate with the iDRAC network.
8. If you plan to use an external Secure Remote Services (SRS) gateway in your data center instead of SRS-VE deployed in the VxRail cluster, verify open communication between VxRail management and the SRS gateway.
See Appendix D: VxRail Open Ports Requirements for information about VxRail port requirements.
Perform the steps in this section to confirm your data center environment:
1. Confirm that you cannot ping any IP address that is reserved for VxRail
management components.
2. Confirm that your DNS servers are reachable from the VxRail external
management network.
3. Confirm the forward and reverse DNS entries for the VxRail management
components.
4. Confirm that your management gateway IP address is accessible.
5. Confirm that the vCenter Server management gateway IP is accessible, if
configured.
6. If you decide to use the vMotion TCP/IP stack instead of the default TCP/IP stack, confirm that your vMotion gateway IP address is accessible.
7. If you have configured NTP servers, confirm that you can reach them from your
configured VxRail external management network.
8. If you have configured a third-party syslog server for logging, confirm that you
can reach it from the network supporting VxRail Manager.
9. If you plan to use a customer-supplied vCenter, confirm that it is accessible
from the network supporting VxRail Manager.
10. If you plan to use a local certificate authority for certificate renewal on VxRail, verify that it is accessible from the network supporting VxRail Manager.
11. If you plan to use Secure Connect Gateways to enable connectivity to the back-end Customer Support centers, verify that the gateways are accessible from the network supporting VxRail Manager.
12. You may plan to deploy a witness at a remote site to monitor vSAN, and plan to enable Witness Traffic Separation. If so, confirm that there is a routable path between the witness and this network.
13. You may plan to install the VxRail nodes in more than one rack, and to
terminate the VxRail networks at the ToR switches. If so, verify that routing
services have been configured upstream for the VxRail networks.
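Several of the checks above can be scripted. The following Python sketch illustrates the DNS and reachability checks; every hostname and IP address in it is a hypothetical placeholder, to be replaced with the values recorded in Appendix A: VxRail Network Configuration Table.

```python
#!/usr/bin/env python3
"""Sketch of pre-deployment checks: forward/reverse DNS lookups and a
reserved-address ping probe. All hostnames and IPs are placeholders."""
import socket
import subprocess

def forward_dns(hostname):
    """Return the IPv4 address for a hostname, or None if the lookup fails."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

def reverse_dns(ip):
    """Return the PTR hostname for an IP address, or None if the lookup fails."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror):
        return None

def answers_ping(ip, timeout_s=2):
    """True if the host answers one ICMP echo request (Linux ping syntax).

    Addresses reserved for VxRail management components should NOT answer
    before deployment (step 1 above).
    """
    try:
        result = subprocess.run(
            ["ping", "-c", "1", "-W", str(timeout_s), ip],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    except FileNotFoundError:  # ping binary not available on this system
        return False
    return result.returncode == 0

if __name__ == "__main__":
    # Forward and reverse entries should agree for each management component.
    for host in ["vxrail-manager.example.local", "vcenter.example.local"]:
        ip = forward_dns(host)
        rev = reverse_dns(ip) if ip else None
        print(f"{host}: forward={ip}, reverse={rev}")
    # Example: answers_ping("192.168.10.50") should be False before build.
```

Run from a workstation on the VxRail external management network so that the lookups exercise the same DNS servers the cluster will use.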