Configure the Network for VxRail

(Dell VxRail Network Planning Guide)

Apr 2022

The document provides guidance on configuring a network to support a Dell VxRail cluster. It describes configuring network switches by enabling multicast for device discovery, configuring VLANs for the different VxRail networks, setting up inter-switch links, and configuring switch ports in the appropriate mode. The goal is to prepare the network infrastructure before deploying and powering on the VxRail nodes.
Table of Contents

Introduction
Configure the network switch for VxRail connectivity
Configure the upstream network for VxRail connectivity
Configure the network to support RoCE
Confirm your data center network
Confirm your data center environment

Configure the Network for VxRail
Introduction

Configure the adjacent top-of-rack switches and upstream network before you plug
in the VxRail nodes and power them on. That way the VxRail initialization process
can pass validation and build the cluster.

This section provides guidance on the tasks that must be undertaken on the data
center network to prepare for the VxRail initial implementation. You can use the
information in Appendix C: VxRail Cluster Setup Checklist for guidance. Be sure to
follow your vendor documentation for specific switch configuration activities and for
best practices for performance and availability.

Note: You can skip this section if you plan to enable Dell SmartFabric Services and
extend VxRail automation to the TOR switch layer.

Configure the network switch for VxRail connectivity

Follow the steps in this section for the configuration settings required for VxRail
networking.

Configure multicast for the VxRail Internal Management network

Note: You can skip this task if multicast restrictions prevent you from using the auto-discovery method and you instead plan to select nodes manually for the cluster build operation.

VxRail clusters have no backplane, so communication between the nodes is facilitated through the network switch. This node-to-node communication for device discovery purposes uses the VMware Loudmouth capabilities, which are based on the RFC-recognized “Zero Network Configuration” protocol. New VxRail nodes advertise themselves on the network using the VMware Loudmouth service, and VxRail Manager discovers them with the same service.

The VMware Loudmouth service depends on multicasting, which is required for the
VxRail internal management network. The network switch ports that connect to
VxRail nodes must allow for pass-through of multicast traffic on the VxRail Internal
Management VLAN. Multicast is not required on your entire network. It is only
required on the ports that are connected to VxRail nodes.

VxRail generates little multicast traffic for auto-discovery and device management. Furthermore, the network traffic for the Internal Management network is restricted to a VLAN. If MLD Snooping and MLD Querier are supported on your switches, you can enable them on this VLAN.

If MLD Snooping is enabled, MLD Querier must be enabled. If MLD Snooping is disabled, MLD Querier must be disabled.

Configure unicast for the VxRail vSAN network


Note: You can skip this task if you do not plan to use vSAN as the primary storage
resource on the VxRail cluster.

For early versions of VxRail, multicast was required on the vSAN VLAN: one or more network switches that connected to VxRail had to allow the pass-through of multicast traffic on that VLAN. Starting with VxRail v4.5, all vSAN traffic uses unicast instead of multicast. This change reduces network configuration complexity and simplifies switch configuration. Unicast is enabled by default on most enterprise Ethernet switches.

If you are required to configure multicast, VxRail multicast traffic for vSAN is limited to the broadcast domain of each vSAN VLAN. There is minimal impact on network overhead because management traffic is nominal. You can limit multicast traffic by enabling Internet Group Management Protocol (IGMP) Snooping and IGMP Querier. If your switch supports both IGMP Snooping and IGMP Querier, Dell Technologies recommends enabling them.

IGMP Snooping software examines IGMP protocol messages within a VLAN to discover which interfaces are connected to hosts or other devices that are interested in receiving this traffic. Using the interface information, IGMP Snooping can reduce bandwidth consumption in a multi-access LAN environment to avoid flooding an entire VLAN. IGMP Snooping tracks ports that are attached to multicast-capable routers to help manage IGMP membership report forwarding. It also responds to topology change notifications.

IGMP Querier sends out IGMP group membership queries on a timed interval, retrieves IGMP membership reports from active members, and allows updates to group membership tables. By default, most switches enable IGMP Snooping but disable IGMP Querier. If this is the case on your switch, change the settings.

If IGMP Snooping is enabled, IGMP Querier must be enabled. If IGMP Snooping is disabled, IGMP Querier must be disabled.
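The pairing rule above (Snooping and Querier both on, or both off) lends itself to a quick pre-deployment sanity check. The following sketch assumes a hypothetical dictionary representation of your per-VLAN switch settings; it is not a real switch API, just an illustration of the rule.

```python
# Sketch of a pre-deployment check for the pairing rule above: IGMP (or
# MLD) Snooping and Querier must be both enabled or both disabled on a
# VLAN. The vlan_settings dict is a hypothetical representation of your
# switch configuration, not a real switch API.

def igmp_settings_consistent(vlan_settings: dict) -> bool:
    """Return True if Snooping and Querier are either both enabled or
    both disabled in the given per-VLAN settings."""
    snooping = vlan_settings.get("igmp_snooping", False)
    querier = vlan_settings.get("igmp_querier", False)
    return snooping == querier

# Example: many switches ship with Snooping on and Querier off, which
# violates the pairing rule and should be corrected.
factory_default = {"igmp_snooping": True, "igmp_querier": False}
recommended = {"igmp_snooping": True, "igmp_querier": True}

print(igmp_settings_consistent(factory_default))  # False
print(igmp_settings_consistent(recommended))      # True
```

The same check applies unchanged to the MLD Snooping/Querier rule for the Internal Management VLAN.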

Configure VLANs for the VxRail networks
Configure the VLANs on the switches depending on the VxRail version being
deployed and the type of cluster being deployed. The VLANs are assigned to the
switch ports as a later task.

For VxRail clusters using version 4.7 or later:

- VxRail External Management VLAN—The default is untagged/native.
- VxRail vCenter Server Management VLAN—If different from the VxRail External Management VLAN.
- VxRail Internal Management VLAN—Ensure that multicast is enabled on this VLAN if enabling node discovery.

For VxRail clusters using versions earlier than 4.7:

- VxRail Management VLAN—Ensure that multicast is enabled on this VLAN. The default is untagged/native.

For all VxRail clusters:

- vSAN VLAN—In cases where vSAN is the primary storage resource. Ensure that unicast is enabled.
- vSphere vMotion VLAN
- VM Networks VLAN—This VLAN can be configured after the VxRail initial deployment.
- VxRail Witness Traffic Separation VLAN—Manages traffic between the VxRail cluster and the witness. This VLAN is only needed when deploying a VxRail stretched cluster or 2-Node cluster.

Figure 54. VxRail version 4.7 and later logical networks

Figure 55. VxRail 2-Node cluster logical networks with witness

Perform the following steps using Appendix A: VxRail Network Configuration Table:

1. Configure the External Management VLAN (Row 1) on the switches.
a. If you entered “Native VLAN,” set the ports on the switch to accept untagged traffic and tag it to the native management VLAN ID. Untagged management traffic is the default management VLAN setting on VxRail.

2. For VxRail version 4.7 and later, configure the Internal Management VLAN
(Row 2) on the switches.
3. Allow multicast on the Internal Management network to support device
discovery.
4. Configure a vSphere vMotion VLAN (Row 3) on the switches.
5. Configure a vSAN VLAN (Row 4) on the switches. Unicast is required for
VxRail clusters that are built with version 4.5 and later.
6. Configure the VLANs for your VM Networks (Row 6) on the switches. These
networks can be added after the cluster initial build is complete.
7. If you choose to create a separate subnet for the vCenter Server Network,
configure the vCenter Server Network VLAN (Row 7). Then configure the
VLAN on the switches.
8. Configure the optional VxRail Witness Traffic Separation VLAN (Row 74) on
the switches ports if required.
9. Configure the switch uplinks to allow the External Management VLAN (Row
1) and VM Network VLANs (Row 6) to pass through.
a. Optionally configure the vSphere vMotion VLAN (Row 3), vSAN
VLAN (Row 4), and vCenter Server Network VLAN (Row 7) for
pass-through as well.
b. If a vSAN witness is required for the VxRail cluster, include the
VxRail Witness Traffic Separation VLAN (Row 74) on the uplinks.
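Before touching the switches, the VLAN assignments from the steps above can be captured in a small plan structure and used to derive the uplink allow-list. In this sketch the row numbers follow Appendix A, but the VLAN IDs are hypothetical placeholders (except 3939, the documented Internal Management default), as are the key names.

```python
# Hypothetical VLAN plan mirroring the steps above. Row numbers refer to
# Appendix A: VxRail Network Configuration Table; VLAN IDs are
# illustrative placeholders, except 3939, which is the documented
# default for the Internal Management VLAN.
vlan_plan = {
    "external_management": {"row": 1, "vlan_id": 100, "uplink": True},
    "internal_management": {"row": 2, "vlan_id": 3939, "uplink": False},  # block upstream
    "vmotion":             {"row": 3, "vlan_id": 101, "uplink": False},   # optional upstream
    "vsan":                {"row": 4, "vlan_id": 102, "uplink": False},   # optional upstream
    "vm_networks":         {"row": 6, "vlan_id": 200, "uplink": True},
    "vcenter_server":      {"row": 7, "vlan_id": 103, "uplink": False},   # optional upstream
}

def uplink_vlans(plan: dict) -> list:
    """VLAN IDs that must be allowed to pass through the switch uplinks
    (step 9 above)."""
    return sorted(v["vlan_id"] for v in plan.values() if v["uplink"])

print(uplink_vlans(vlan_plan))  # [100, 200]
```

Keeping the plan in one place makes it easy to reuse for the switch-port and uplink configuration tasks later in this section.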

Configure the interswitch links

If more than one top-of-rack switch is being deployed to support the VxRail cluster, configure interswitch links between the switches. Configure the interswitch links to allow all VLANs to pass through.

Configure switch ports

Perform the steps in this section to configure the switch ports.

Determine switch port mode

Configure the port mode on your switch based on the plan for the VxRail logical
networks, and whether VLANs will be used to segment VxRail network traffic. Ports
on a switch operate in one of the following modes:

- Access mode—The port accepts untagged packets only and distributes the untagged packets to all VLANs on that port. This mode is typically the default for all ports. Use this mode only for VxRail clusters in test environments or for temporary usage.
- Trunk mode—When this port receives a tagged packet, it passes the packet to the VLAN specified in the tag. To configure the acceptance of untagged packets on a trunk port, you must first configure a single VLAN as a “Native VLAN”—one VLAN designated to carry all untagged traffic.
- Tagged-access mode—The port accepts tagged packets only.
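The three modes above can be summarized as a small decision function: given the port mode and whether a frame arrives tagged, which VLAN does the frame land on, if any? This is a conceptual model of the behavior described above, not a description of any specific switch implementation.

```python
# Conceptual model of the three port modes above. Returns the VLAN a
# frame is placed on, or None if the port drops it. Not a description
# of any specific switch's forwarding logic.

def classify_frame(mode: str, tagged_vlan=None, native_vlan=None):
    if mode == "access":
        # Access ports accept untagged frames only.
        return "access-vlan" if tagged_vlan is None else None
    if mode == "trunk":
        if tagged_vlan is not None:
            return tagged_vlan      # forwarded to the VLAN named in the tag
        return native_vlan          # untagged traffic lands on the native VLAN
    if mode == "tagged-access":
        return tagged_vlan          # tagged frames only; untagged returns None
    raise ValueError(f"unknown mode: {mode}")

print(classify_frame("trunk", tagged_vlan=3939))  # 3939
print(classify_frame("trunk", native_vlan=100))   # 100
print(classify_frame("access", tagged_vlan=200))  # None (dropped)
```

The second call illustrates why a native VLAN must be configured on a trunk port before untagged management traffic can pass.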

Disable link aggregation on switch ports supporting VxRail networks

Link aggregation is supported for the VxRail initial implementation process only if:

- The VxRail version on the nodes is 7.0.130.
- You correctly follow the guidance to deploy the virtual-distributed switches in your external vCenter with the proper link aggregation settings.

If either of these conditions is not met, do not enable link aggregation on any switch ports that are connected to VxRail node ports before initial implementation. This limitation includes protocols such as Link Aggregation Control Protocol (LACP) and EtherChannel.

During the VxRail initial build process, either two or four ports are selected on each node. These ports support the VxRail management networks and any guest networks that are configured at that time. The VxRail initial build process configures a virtual-distributed switch in the cluster, and then configures a port group on that virtual-distributed switch for each VxRail management network.

Figure 56. Unused VxRail node ports configured for non-VxRail network traffic

When the initial implementation process completes, you can configure link aggregation on the operational VxRail cluster, as described in Configure link aggregation on VxRail networks. You may want to use any spare network ports on the VxRail nodes that were not configured for VxRail network traffic for other use cases, and you can configure link aggregation to support that traffic. These ports can include any unused ports on the NDC-OCP or on the optional PCIe adapter cards.

Updates can be configured on the virtual-distributed switch that is deployed during VxRail initial build to support the new networks, or you can configure a new virtual-distributed switch. The initial virtual-distributed switch is under the management and control of VxRail, so the best practice is to configure a separate virtual-distributed switch in the vCenter instance to support these networking use cases.

Limit spanning tree protocol on VxRail switch ports

Network traffic must be allowed uninterrupted passage between the physical switch ports and the VxRail nodes. Certain Spanning Tree states can place restrictions on network traffic and can force the port into an unexpected timeout mode. These conditions can disrupt normal VxRail operations and impact performance.

If Spanning Tree is enabled in your network, ensure that the physical switch ports
that are connected to VxRail nodes are configured with a setting such as “Portfast.”
Or you can configure the port as an edge port. These settings set the port to
forwarding state, so no disruption occurs. vSphere virtual switches do not support
STP. You can enable Spanning Tree to avoid loops within the physical switch
network. If you do, you must configure physical switch ports that are connected to an
ESXi host with a setting such as “Portfast.”

Enable flow control

Network instability or congestion contributes to low performance in VxRail and has a negative effect on vSAN datastore I/O operations. VxRail recommends enabling flow control on the switch to assure reliability on a congested network. Flow control is a switch feature that helps manage the rate of data transfer to avoid buffer overrun. During periods of high congestion and bandwidth consumption, the receiving network temporarily injects pause frames to the sender network to slow transmission. Inserting pause frames helps to avoid buffer overrun.

The absence of flow control on a congested network can result in increased error rates and force network bandwidth to be consumed for error recovery. The flow control settings can be adjusted depending on network conditions, but VxRail recommends that flow control be set to “receive on” and “transmit off.”

Configure ports on your switches

Now that the switch base settings are complete, the next step is the switch ports.
Perform the following steps for each switch port that is connected to a VxRail node:

1. Configure the MTU size if using jumbo frames.
2. Set the port to the appropriate speed or to autonegotiate speed.
3. Set spanning tree mode to disable transition to a blocking state, which can
cause a timeout condition.
4. Enable flow control receive mode and disable flow control transmit mode.
5. Configure the External Management VLAN (Row 1) on the switch ports. If you
entered “Native VLAN,” set the ports on the switch to accept untagged traffic
and tag it to the native management VLAN ID. Untagged management traffic is
the default management VLAN setting on VxRail.
6. For VxRail version 4.7 and later, configure the Internal Management VLAN
(Row 2) on the switch ports.
7. If required, allow multicast on the VxRail switch ports to support the Internal
Management network.
8. Configure a vSphere vMotion VLAN (Row 3) on the switch ports.
9. Configure a vSAN VLAN (Row 4) on the switch ports. Allow unicast traffic on
this VLAN.
10. Configure the VLANs for your VM Networks (Row 6) on the switch ports.
11. Configure the optional vCenter Server Network VLAN (Row 7) on the switch
ports.
12. Configure the optional VxRail Witness Traffic Separation VLAN (Row 74) on
the switch ports, if required.
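The per-port steps above can be templated so that every node-facing port receives an identical configuration. The following sketch renders an illustrative, loosely IOS-like stanza covering MTU, spanning-tree edge behavior, flow control, the native VLAN, and the tagged VxRail VLANs; the command syntax is hypothetical, and you should consult your switch vendor's documentation for the real commands.

```python
def render_port_config(interface: str, native_vlan: int, tagged_vlans: list,
                       mtu: int = 9216) -> str:
    """Render an illustrative trunk-port stanza for the steps above.
    The syntax is loosely IOS-like and hypothetical; use your switch
    vendor's documentation for the actual commands."""
    vlan_list = ",".join(str(v) for v in sorted(tagged_vlans))
    lines = [
        f"interface {interface}",
        f" mtu {mtu}",                                    # step 1: jumbo frames
        " switchport mode trunk",
        f" switchport trunk native vlan {native_vlan}",   # step 5: untagged mgmt
        f" switchport trunk allowed vlan {native_vlan},{vlan_list}",
        " spanning-tree portfast",                        # step 3: stay forwarding
        " flowcontrol receive on",                        # step 4
        " flowcontrol send off",
    ]
    return "\n".join(lines)

# Hypothetical VLAN IDs for illustration only.
print(render_port_config("ethernet1/1", native_vlan=100,
                         tagged_vlans=[3939, 101, 102, 200]))
```

Generating the stanza from the VLAN plan keeps the node-facing ports consistent and makes review against the Appendix A table straightforward.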

Configure the upstream network for VxRail connectivity

The upstream network from the VxRail cluster must be configured to allow passage
for VxRail networks that require external access. Use Appendix A: VxRail Network
Configuration Table for reference. Upstream passage is required for:

- The External Management VLAN (Row 1)
- Any VM Network VLANs (Row 6)
- The optional vCenter Server Network VLAN (Row 7)
- The VxRail Witness Traffic Separation VLAN (Row 74), if a vSAN witness is required for the VxRail cluster

The VxRail Internal Management VLAN (Row 2) must be blocked from outbound upstream passage.

Optionally, the vSphere vMotion VLAN (Row 3) and vSAN VLAN (Row 4)
can be configured for upstream passage.

If you plan to expand the VxRail cluster beyond a single rack, configure the VxRail
network VLANs for either:

- Stretched Layer 2 networks across racks
- Passage upstream to routing services, if new subnets will be assigned in expansion racks

Figure 57. Logical networks connecting to upstream elements

If your Layer 2 or Layer 3 boundary is at the lowest network tier (top-of-rack switch),
perform the following tasks:

- Configure point-to-point links with the adjacent upstream switches.
- Terminate the VLANs requiring upstream access on the top-of-rack switches.
- Enable and configure routing services for the VxRail networks requiring upstream passage.

If your Layer 2 or Layer 3 boundary is upstream from the lowest network tier (top-of-rack switch), perform the following tasks:

- Connect ports on the adjacent upstream switch to the uplinks on the top-of-rack switches.
- Configure logical pairings of the ports on the adjacent upstream switch and the top-of-rack switch.
- Configure the logical port pairings, commonly known as “port channels” or “EtherChannels,” to allow upstream passage of external VxRail networks.

Configure the network to support RoCE

Note: This section is only relevant if you are planning to deploy a VxRail cluster with
vSAN using RoCE-compliant Ethernet adapters.

The VxRail nodes that are targeted for deployment may be configured with Ethernet adapters that support Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE). If so, the switches and supporting network must be configured to enable a “lossless” network for the vSAN traffic.

Converting a cluster to enable RoCE on the vSAN datastore is performed after the
VxRail initial build. Consult the technical reference guides from your switch vendor
for the specific steps to configure on the physical network. The basic steps to ensure
a “lossless” network are as follows:

- vSAN with RDMA supports NIC failover but does not support link aggregation or NIC teaming based on IP hash.
- Data Center Bridging must be enabled on the switches supporting the VxRail cluster.
- Control traffic flow on the switches using a mechanism such as Priority Flow Control (PFC). Set the RoCE network to a higher priority as outlined in your vendor documentation.
- Configure RoCE on vSAN on a priority-enabled VLAN.
- If the vSAN traffic is on a routed Layer 3 network, the “lossless” network settings must be preserved when routed across network devices, using a feature such as the Differentiated Services Code Point (DSCP) QoS setting.

Confirm your data center network

Upon completion of the switch configuration, there should be unobstructed network
paths between the switch ports and the ports on the VxRail nodes. The VxRail
management network and VM network should have unobstructed passage to your
data center network. Before forming the VxRail cluster, the VxRail initialization
process performs several verification steps, including:

- Verifying switch and data center environment supportability
- Verifying passage of VxRail logical networks
- Verifying accessibility of required data center applications
- Verifying compatibility with the planned VxRail implementation

Certain data center environment and network configuration errors can cause the
validation to fail, and the VxRail cluster will not form. When validation fails, the data
center settings and switch configurations must undergo troubleshooting to resolve
the problems reported.

Confirm the settings on the switch, using the switch vendor instructions for guidance:

1. External management traffic is untagged on the native VLAN by default. If a tagged VLAN is used instead, the switches must be customized with the new VLAN.
2. Internal device discovery network traffic uses the default VLAN of 3939. If this VLAN has changed, all ESXi hosts must be customized with the new VLAN, or device discovery will not work.
3. Confirm that the switch ports that attach to VxRail nodes allow passage of all
VxRail network VLANs.
4. Confirm that the switch uplinks allow passage of external VxRail networks.
5. If you have two or more switches, confirm that an interswitch link is configured
between them to support passage of the VxRail network VLANs.
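The VLAN-passage confirmations in items 3 through 5 reduce to a set comparison: the VLANs a node-facing port, uplink, or interswitch link allows must be a superset of the VxRail VLAN set. The sketch below uses hypothetical VLAN IDs to illustrate the check.

```python
def missing_vlans(required: set, allowed: set) -> set:
    """VLANs from the required VxRail set that a port, uplink, or
    interswitch link does not currently allow (checklist items 3-5
    above)."""
    return required - allowed

# Hypothetical example: the interswitch link is missing the internal
# management VLAN (3939), which would break node discovery across
# switches. All VLAN IDs here are placeholders.
vxrail_vlans = {100, 3939, 101, 102, 200}
isl_allowed = {100, 101, 102, 200}
print(sorted(missing_vlans(vxrail_vlans, isl_allowed)))  # [3939]
```

An empty result for every node-facing port, every uplink, and the interswitch link confirms items 3 through 5.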

Confirm your firewall settings

You may have positioned a firewall between the switches that are planned for VxRail and the rest of your data center network. If so, be sure that the required firewall ports are open for VxRail network traffic.

1. Verify that VxRail can communicate with your DNS server.
2. Verify that VxRail can communicate with your NTP server.
3. Verify that VxRail can communicate with your syslog server if you plan to
capture logging.
4. Verify that your IT administrators can communicate with the VxRail
management system.

5. If you plan to use a customer-supplied vCenter, verify open communication
between the vCenter instance and the VxRail managed hosts.
6. If you plan to use a third-party syslog server instead of Log Insight, verify that
open communication exists between the syslog server and the VxRail
management components.
7. If you plan to deploy a separate network for ESXi host management (iDRAC),
verify that your IT administrators can communicate with the iDRAC network.
8. You may plan to use an external Secure Remote Services (SRS) gateway in
your data center instead of SRS-VE deployed in the VxRail cluster. If so, verify
the open communications between VxRail management and the SRS gateway.

See Appendix D: VxRail Open Ports Requirements for information about VxRail port requirements.
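Several of the firewall verification steps above amount to confirming that a TCP connection can be established through the firewall. The following sketch is a minimal probe; the host names and ports are placeholders for values from your environment, and a False result can mean a firewall block, a closed port, or a routing problem, so treat it as a prompt to investigate rather than a diagnosis.

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection to (host, port); True on success.
    Failure may indicate a firewall block, closed port, DNS failure,
    or routing problem."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical checks mirroring the firewall steps above; replace the
# host names and ports with the values from your environment. Note that
# NTP is normally UDP; a TCP probe is shown here only for brevity.
checks = [
    ("dns.example.invalid", 53),      # DNS server
    ("vcenter.example.invalid", 443), # customer-supplied vCenter
    ("syslog.example.invalid", 514),  # third-party syslog server
]
for host, port in checks:
    print(host, port, tcp_reachable(host, port, timeout=1.0))
```

Consult Appendix D for the authoritative list of ports that must be open for each VxRail service.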

Confirm your data center environment

Perform the steps in this section to confirm your data center environment:

1. Confirm that you cannot ping any IP address that is reserved for VxRail
management components.
2. Confirm that your DNS servers are reachable from the VxRail external
management network.
3. Confirm the forward and reverse DNS entries for the VxRail management
components.
4. Confirm that your management gateway IP address is accessible.
5. Confirm that the vCenter Server management gateway IP is accessible, if
configured.
6. If you decide to use the vMotion TCP/IP stack instead of the default TCP/IP stack, confirm that your vMotion gateway IP address is accessible.
7. If you have configured NTP servers, confirm that you can reach them from your
configured VxRail external management network.
8. If you have configured a third-party syslog server for logging, confirm that you
can reach it from the network supporting VxRail Manager.
9. If you plan to use a customer-supplied vCenter, confirm that it is accessible
from the network supporting VxRail Manager.
10. If you plan to use a local certificate authority for certificate renewal on VxRail, verify that it is accessible from the network supporting VxRail Manager.
11. If you plan to use Secure Connect Gateways to enable connectivity to the back-end Customer Support centers, verify that the gateways are accessible from the network supporting VxRail Manager.
12. You may plan to deploy a witness at a remote site to monitor vSAN, and plan to enable Witness Traffic Separation. If so, confirm that there is a routable path between the witness and this network.
13. You may plan to install the VxRail nodes in more than one rack, and to terminate the VxRail networks at the ToR switches. If so, verify that routing services have been configured upstream for the VxRail networks.
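Step 3 above (forward and reverse DNS entries) can be checked programmatically. The sketch below uses injectable resolver functions so the logic can be exercised without a live DNS server; by default it falls back to the standard library resolver. The host names and addresses in the example are hypothetical.

```python
import socket

def dns_entries_consistent(hostname: str, expected_ip: str,
                           forward=socket.gethostbyname,
                           reverse=lambda ip: socket.gethostbyaddr(ip)[0]) -> bool:
    """Check step 3 above: hostname must resolve to expected_ip, and the
    reverse record for expected_ip must point back at the hostname.
    forward/reverse are injectable so the logic is testable offline;
    the defaults use the standard socket resolver."""
    try:
        if forward(hostname) != expected_ip:
            return False
        return reverse(expected_ip).rstrip(".").lower() == hostname.lower()
    except OSError:
        return False

# Hypothetical VxRail management entries; substitute your own records.
management_hosts = {
    "vxrail-manager.example.local": "192.0.2.10",
    "vcenter.example.local": "192.0.2.11",
}
# Live check (results depend on your DNS environment):
# for name, ip in management_hosts.items():
#     print(name, dns_entries_consistent(name, ip))
```

Running this for every VxRail management component before the cluster build catches missing or mismatched forward/reverse records early, a common cause of validation failures.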
