Design and Best Practices


Either 10 GbE or 25 GbE networks can be used for vSAN traffic.

A 1 GbE network can only be used for a VxRail hybrid cluster with single-processor nodes.

All VxRail nodes in a cluster must run at the same base network speed, that is, 25 GbE, 10 GbE, or 1 GbE.

The mixing of different VxRail series in the same cluster is supported; however, the processors must be from the same vendor.

The mixing of All-Flash nodes and NVMe nodes in the same cluster is supported.

VxRail nodes and VxRail dynamic nodes cannot be mixed in the same cluster.

The VxRail G-Series platform can be partially populated.

The maximum number of VxRail nodes per cluster is 64.

A two-node vSAN configuration requires a witness virtual appliance; scale-out is not supported on a two-node vSAN
cluster (see the sketch after this list).
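
These cluster-level rules lend themselves to simple pre-checks before an expansion is attempted. The following Python sketch is illustrative only; the node attributes it assumes (network_speed_gbe, cpu_vendor, and node_type) are hypothetical and do not come from any Dell or VMware tool.

```python
# Minimal sketch of cluster-level scale-out pre-checks. The node attributes
# (network_speed_gbe, cpu_vendor, node_type) are illustrative assumptions.

MAX_NODES_PER_CLUSTER = 64

def check_cluster_rules(nodes):
    """Return a list of rule violations for a proposed cluster membership."""
    problems = []

    if len(nodes) > MAX_NODES_PER_CLUSTER:
        problems.append(f"Cluster exceeds {MAX_NODES_PER_CLUSTER} nodes")

    # All nodes must run at the same base network speed (25, 10, or 1 GbE).
    speeds = {n["network_speed_gbe"] for n in nodes}
    if len(speeds) > 1:
        problems.append(f"Mixed network speeds: {sorted(speeds)}")

    # Mixing VxRail series is allowed, but CPUs must come from one vendor.
    vendors = {n["cpu_vendor"] for n in nodes}
    if len(vendors) > 1:
        problems.append(f"Mixed CPU vendors: {sorted(vendors)}")

    # Standard VxRail nodes and VxRail dynamic nodes cannot share a cluster.
    if {"vsan", "dynamic"} <= {n["node_type"] for n in nodes}:
        problems.append("VxRail nodes and dynamic nodes in the same cluster")

    return problems

def can_scale_out(current_node_count):
    """Scale-out is not supported on a two-node vSAN cluster."""
    return current_node_count != 2
```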

The next section will discuss two scenarios for VxRail scale-out.

Scenario 1
In Figure 5.1, there is a VxRail All-Flash cluster with three nodes (VxRail E660F A/B/C). Each
node is installed with eight 3.8 TB capacity SSDs and two 800 GB cache SSDs. In this cluster, we
configured two vSAN disk groups: the first consists of disk slots 0 to 3 (capacity tier) and disk slot 8
(cache tier), and the other consists of disk slots 4 to 7 (capacity tier) and disk slot 9 (cache tier). This
cluster runs VxRail software 7.0.240 and has a vSAN storage policy applied with FTM set to RAID-1
and FTT set to 1:

Figure 5.1 – VxRail All-Flash cluster with three nodes

We will add a new VxRail E660F node to this VxRail cluster. According to the VxRail scale-out
rules, the new node must fulfill the following requirements:
The new node is the E660F All-Flash model.

The number and speed of the network uplinks must be the same as in the existing VxRail cluster.

The VxRail software version on the new node must be the same as that of the existing VxRail cluster.

The CPU model of the new node must be the same as that of the existing VxRail nodes.

The storage devices of the new node must be the same as those of the existing VxRail nodes, that is, the same number and type
of capacity and cache devices (a minimal sketch of these checks follows this list).
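
As a rough illustration of the per-node requirements above, the sketch below compares a candidate node's profile against a reference node from the existing cluster. The profile fields and sample values are hypothetical and do not correspond to a real VxRail Manager API.

```python
# Illustrative per-node compatibility check. The profile fields and sample
# values are hypothetical, not VxRail Manager API objects.

REQUIRED_MATCH = (
    "model",             # for example, "E660F"
    "uplink_count",      # number of network uplinks
    "uplink_speed_gbe",  # uplink speed
    "vxrail_version",    # for example, "7.0.240"
    "cpu_model",
    "capacity_disks",    # (count, size_tb, type)
    "cache_disks",       # (count, size_gb, type)
)

def is_compatible(new_node, existing_node):
    """True if the new node matches the existing node on every required field."""
    return all(new_node[field] == existing_node[field] for field in REQUIRED_MATCH)

existing = {
    "model": "E660F", "uplink_count": 2, "uplink_speed_gbe": 25,
    "vxrail_version": "7.0.240", "cpu_model": "example-cpu-model",
    "capacity_disks": (8, 3.8, "SSD"), "cache_disks": (2, 800, "SSD"),
}
candidate = dict(existing)  # node D is ordered to the same specification
print(is_compatible(candidate, existing))  # True
```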

In Figure 5.2, the total resources of this VxRail cluster are increased automatically after the new
node (VxRail E660F D) is added; the new node's resources join the existing cluster and the overall
capacity grows. For both scale-up and scale-out activities, all existing storage policies remain
compatible with the future state of the cluster:

Figure 5.2 – VxRail All-Flash cluster with four nodes
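
To make the capacity increase concrete, here is a back-of-the-envelope estimate of raw and usable capacity before and after adding node D, assuming RAID-1 with FTT set to 1 (two full copies of every object). It deliberately ignores vSAN overheads such as slack space and metadata, so the figures are approximate.

```python
# Rough capacity estimate for the Scenario 1 cluster (approximate; ignores
# vSAN slack space, metadata, and other overheads).

CAPACITY_SSD_TB = 3.8      # per capacity SSD
CAPACITY_SSDS_PER_NODE = 8

def raw_capacity_tb(node_count):
    return node_count * CAPACITY_SSDS_PER_NODE * CAPACITY_SSD_TB

def usable_raid1_ftt1_tb(node_count):
    # RAID-1 with FTT=1 keeps two full copies of every object.
    return raw_capacity_tb(node_count) / 2

for n in (3, 4):
    print(f"{n} nodes: raw ~{raw_capacity_tb(n):.1f} TB, "
          f"usable ~{usable_raid1_ftt1_tb(n):.1f} TB with RAID-1/FTT=1")
# 3 nodes: raw ~91.2 TB, usable ~45.6 TB with RAID-1/FTT=1
# 4 nodes: raw ~121.6 TB, usable ~60.8 TB with RAID-1/FTT=1
```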

IMPORTANT NOTE
The VMware vSphere and vSAN license edition of the new VxRail node must be the same as that of the existing VxRail nodes.

This scenario is an example of a standard VxRail scale-out operation. We will now discuss another
scenario of VxRail scale-out in the next section.

Scenario 2
In Figure 5.3, there is a VxRail All-Flash cluster with three nodes (VxRail E660F A/B/C). Each
node is installed with four 3.8 TB capacity SSDs and two 800 GB cache SSDs. This cluster is configured
with one vSAN disk group: disk slots 0 to 3 (capacity tier) and disk slot 8 (cache tier). The cluster
runs VxRail software 7.0.240 and has a vSAN storage policy applied with FTM set to RAID-1 and
FTT set to 1:
Figure 5.3 – VxRail All-Flash cluster with three nodes

In this scenario, we will add two new VxRail E660F nodes to the VxRail cluster. You can change the
FTM setting at any time, as long as the storage resources can fulfill the requirements of the new FTM
setting. This operation does not interrupt the VMs running in the VxRail cluster, which is one of
the key features of the VxRail system.

In Figure 5.4, the total resources of this VxRail cluster, including CPU, memory, and storage, are
increased automatically after the new nodes (VxRail E660F D/E) are added to the existing VxRail
cluster. The vSAN storage policy with FTM set to RAID-1 and FTT set to 1 is still valid in this
cluster. Once the cluster expansion is successfully completed, you can change the FTM to RAID-5:

Figure 5.4 – VxRail All-Flash cluster with five nodes
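
The motivation for changing the FTM after the expansion is space efficiency: with FTT set to 1, RAID-1 mirroring needs at least three hosts and roughly doubles the raw footprint, whereas RAID-5 erasure coding needs at least four hosts and adds only about one third of overhead. The sketch below illustrates this comparison using rule-of-thumb multipliers rather than an exact vSAN sizing calculation.

```python
# Approximate space multipliers and minimum host counts for vSAN with FTT=1
# (rule-of-thumb figures; real usable capacity also depends on slack space).

FTM_RULES = {
    "RAID-1": {"min_hosts": 3, "raw_per_usable": 2.0},    # two full mirrors
    "RAID-5": {"min_hosts": 4, "raw_per_usable": 4 / 3},  # 3+1 erasure coding
}

def usable_tb(raw_tb, ftm, hosts):
    rule = FTM_RULES[ftm]
    if hosts < rule["min_hosts"]:
        raise ValueError(f"{ftm} with FTT=1 needs at least {rule['min_hosts']} hosts")
    return raw_tb / rule["raw_per_usable"]

raw = 5 * 4 * 3.8  # five nodes, each with four 3.8 TB capacity SSDs = 76 TB raw
print(usable_tb(raw, "RAID-1", hosts=5))  # ~38.0 TB
print(usable_tb(raw, "RAID-5", hosts=5))  # ~57.0 TB
```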

You can see from the examples in Figure 5.2 and Figure 5.4 that the VxRail scale-out operation is
very simple and flexible. The following sections will discuss the disk group design of each VxRail
model.

Design of disk groups on VxRail E-Series


The cache and capacity disks are assigned to specific disk slots in the Dell factory before the VxRail
system is delivered to the customer; these slot locations are fixed and cannot be changed. Figure 5.5
shows the front view of the VxRail E660, E660F, and E660N. There are 10 disk slots on these three
VxRail E-Series models; disk slots 0 to 7 are used for the capacity tier and disk slots 8 and 9 are used
for the cache tier. The capacity tier supports SAS/SATA HDDs and SSDs, and the cache tier supports
SSDs and NVMe SSDs:

Figure 5.5 – Front view of VxRail E660/F/N

If you scale up the VxRail cluster, you need to consider the following VxRail scale-up rules:
Mixing SAS, SATA, and NVMe SSDs in the same disk group is not supported.

Mixing capacity HDDs with capacity SSDs in the same disk group is not supported.

All the capacity disks in the same disk group must be the same size.

Having different capacity disk sizes and types in the different disk groups in a VxRail node is supported.

Having different cache disk sizes and types in the different disk groups in a VxRail node is supported.

In VxRail software 7.0.200 or later, a larger-capacity disk can be used when replacing a disk in a disk group.

In VxRail software 7.0.200 or later, a larger-capacity disk can be used when expanding a disk group (see the sketch after this list).
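
The sketch below shows how the per-disk-group rules above might be validated before any hardware is ordered; the device descriptors are assumptions for illustration, not VxRail Manager objects.

```python
# Illustrative checks for the disk group scale-up rules above. The device
# descriptors are assumptions, not VxRail Manager objects.

def check_disk_group(capacity_disks):
    """capacity_disks: list of dicts such as {"type": "SAS SSD", "size_tb": 3.8}."""
    problems = []

    # No mixing of SAS/SATA/NVMe SSDs, and no mixing of HDDs with SSDs,
    # within a single disk group.
    types = {disk["type"] for disk in capacity_disks}
    if len(types) > 1:
        problems.append(f"Mixed device types in one disk group: {sorted(types)}")

    # All capacity disks in a disk group must be the same size.
    sizes = {disk["size_tb"] for disk in capacity_disks}
    if len(sizes) > 1:
        problems.append(f"Mixed capacity sizes in one disk group: {sorted(sizes)}")

    return problems

# Different sizes and types across *different* disk groups in a node are
# supported, so each disk group is validated independently.
group1 = [{"type": "SAS SSD", "size_tb": 3.8}] * 4
group2 = [{"type": "NVMe SSD", "size_tb": 7.6}] * 4
print(check_disk_group(group1), check_disk_group(group2))  # [] []
```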

In VxRail 7.0.201 and later releases, the VxRail E-Series supports two disk group options:
Option 1: VxRail E-Series supports two vSAN disk groups, each containing one cache disk and up to four capacity disks. Table 5.1
shows the two-disk-group configuration for each disk slot in VxRail E660/F/N:

Disk Groups     Cache Tier    Capacity Tier
Disk Group 1    Slot 8        Slots 0, 1, 2, 3
Disk Group 2    Slot 9        Slots 4, 5, 6, 7

Table 5.1 – Two disk groups configuration in VxRail E660/F/N

Option 2: VxRail E-Series supports one vSAN disk group that contains one cache disk and up to seven capacity disks. Table 5.2
shows the one-disk-group configuration for each disk slot in VxRail E660/F/N:

Disk Groups     Cache Tier    Capacity Tier
Disk Group 1    Slot 8        Slots 0, 1, 2, 3, 4, 5, 6

Table 5.2 – One disk group configuration in VxRail E660/F/N

If you choose the one-disk-group configuration in VxRail E-Series, two disk slots (slot 7 and slot 9)
are left unused (refer to Figure 5.6 for details):
Figure 5.6 – Disk layout for one disk group configuration in VxRail E660/F/N
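
The slot layouts in Table 5.1 and Table 5.2 can be captured in a small data structure, which also makes it easy to confirm which slots are left unused by the one-disk-group option. This is a descriptive sketch of the tables above, not configuration code.

```python
# Slot layouts for VxRail E660/F/N taken from Tables 5.1 and 5.2
# (descriptive sketch only).

ALL_SLOTS = set(range(10))  # disk slots 0 to 9 on E660/F/N

TWO_DISK_GROUPS = {
    "Disk Group 1": {"cache": {8}, "capacity": {0, 1, 2, 3}},
    "Disk Group 2": {"cache": {9}, "capacity": {4, 5, 6, 7}},
}

ONE_DISK_GROUP = {
    "Disk Group 1": {"cache": {8}, "capacity": {0, 1, 2, 3, 4, 5, 6}},
}

def unused_slots(layout):
    used = set()
    for disk_group in layout.values():
        used |= disk_group["cache"] | disk_group["capacity"]
    return sorted(ALL_SLOTS - used)

print(unused_slots(TWO_DISK_GROUPS))  # []
print(unused_slots(ONE_DISK_GROUP))   # [7, 9]
```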

In VxRail E660 and E660F, you can choose SAS or SATA devices or SSDs for the capacity tier; for the
cache tier, you can choose SSDs only. VxRail E660N supports only NVMe devices for both the capacity
tier and the cache tier. Dell's 15th-generation PowerEdge servers introduce new VxRail E-Series
models: E665, E665F, and E665N. The disk group configuration of the E665N is the same as that of the
E660N. The disk group configuration of the E665 and E665F differs from the E660 and E660F because
the E665 and E665F have only eight disk slots. Both models also support two disk groups, each with
one cache disk and up to three capacity disks (refer to Figure 5.7):

Figure 5.7 – Front view of VxRail E665/F

Table 5.3 shows the two-disk-group configuration for each disk slot in VxRail E665/F:

Disk Groups     Cache Tier    Capacity Tier
Disk Group 1    Slot 6        Slots 0, 1, 2
Disk Group 2    Slot 7        Slots 3, 4, 5

Table 5.3 – Two disk groups configuration in VxRail E665/F
