VMware Virtual SAN 6.1 Stretched Cluster Guide
Contents
INTRODUCTION
SUPPORT STATEMENTS
VSPHERE VERSIONS
VSPHERE & VIRTUAL SAN
HYBRID AND ALL-FLASH SUPPORT
ON-DISK FORMATS
WITNESS HOST AS AN ESXI VM
FEATURES SUPPORTED ON VSAN BUT NOT VSAN STRETCHED CLUSTERS
FEATURES SUPPORTED ON VMSC BUT NOT VSAN STRETCHED CLUSTERS
NEW CONCEPTS IN VIRTUAL SAN - STRETCHED CLUSTER
VIRTUAL SAN STRETCHED CLUSTERS VERSUS FAULT DOMAINS
THE WITNESS HOST
READ LOCALITY IN VIRTUAL SAN STRETCHED CLUSTER
REQUIREMENTS
VMWARE VCENTER SERVER
A WITNESS HOST
NETWORKING AND LATENCY REQUIREMENTS
Layer 2 and Layer 3 support
Supported geographical distances
Data site to data site network latency
Data site to data site bandwidth
Data site to witness network latency
Data site to witness network bandwidth
Inter-site MTU consistency
CONFIGURATION MINIMUMS AND MAXIMUMS
VIRTUAL MACHINES PER HOST
HOSTS PER CLUSTER
WITNESS HOST
NUMBER OF FAILURES TO TOLERATE
FAULT DOMAINS
DESIGN CONSIDERATIONS
WITNESS HOST SIZING - COMPUTE
WITNESS HOST SIZING - MAGNETIC DISK
WITNESS HOST SIZING - FLASH DEVICE
CLUSTER COMPUTE RESOURCE UTILIZATION
NETWORKING DESIGN CONSIDERATIONS
Connectivity
Type of networks
Considerations related to single default gateway on ESXi hosts
Caution when implementing static routes
Dedicated/Custom TCPIP stacks for VSAN Traffic
L2 design versus L3 design
Why not L3 between data sites?
Introduction
VMware Virtual SAN 6.1, shipping with vSphere 6.0 Update 1, introduces a new
feature called VMware Virtual SAN Stretched Cluster. Virtual SAN Stretched
Cluster is a specific configuration implemented in environments where
disaster/downtime avoidance is a key requirement. This guide was developed
to provide additional insight and information for installation, configuration and
operation of a Virtual SAN Stretched Cluster infrastructure in conjunction with
VMware vSphere. This guide will explain how vSphere handles specific failure
scenarios and discuss various design considerations and operational procedures.
Virtual SAN Stretched Clusters with Witness Host refers to a deployment where
a user sets up a Virtual SAN cluster with 2 active/active sites with an identical
number of ESXi hosts distributed evenly between the two sites. The sites are
connected via a high bandwidth/low latency link.
The third site hosting the Virtual SAN Witness Host is connected to both of the
active/active data-sites. This connectivity can be via low bandwidth/high
latency links.
Each site is configured as a Virtual SAN Fault Domain. The nomenclature used
to describe a Virtual SAN Stretched Cluster configuration is X+Y+Z, where X is
the number of ESXi hosts at data site A, Y is the number of ESXi hosts at data
site B, and Z is the number of witness hosts at site C. Data sites are where virtual
machines are deployed. The minimum supported configuration is 1+1+1 (3
nodes). The maximum configuration is 15+15+1 (31 nodes).
In Virtual SAN Stretched Clusters, there is only one witness host in any
configuration.
A virtual machine deployed on a Virtual SAN Stretched Cluster will have one
copy of its data on site A, a second copy of its data on site B and any witness
components placed on the witness host in site C. This configuration is achieved
through fault domains alongside hosts and VM groups, and affinity rules. In the
event of a complete site failure, there will be a full copy of the virtual machine
data as well as greater than 50% of the components available. This will allow the
virtual machine to remain available on the Virtual SAN datastore. If the virtual
machine needs to be restarted on the other site, vSphere HA will handle this
task.
Support Statements
vSphere versions
Virtual SAN Stretched Cluster configurations require vSphere 6.0 Update 1 (U1).
This implies both vCenter Server 6.0 U1 and ESXi 6.0 U1. This version of vSphere
includes Virtual SAN version 6.1. This is the minimum version required for Virtual
SAN Stretched Cluster support.
On-disk formats
VMware supports Virtual SAN Stretched Cluster with the v2 on-disk format only.
The v1 on-disk format is based on VMFS and is the original on-disk format used
for Virtual SAN. The v2 on-disk format is the version which comes by default
with Virtual SAN version 6.x. Customers that upgraded from the original Virtual
SAN 5.5 to Virtual SAN 6.0 may not have upgraded the on-disk format for v1 to
v2, and are thus still using v1. VMware recommends upgrading the on-disk
format to v2 for improved performance and scalability, as well as stretched
cluster support.
Both physical ESXi hosts and virtual ESXi hosts (nested ESXi) are supported for
the witness host. VMware provides a Witness Appliance for those customers
who wish to use the ESXi VM. A witness host/VM cannot be shared between
multiple Virtual SAN Stretched Clusters.
A common question is how stretched cluster differs from Fault Domains, which
is a Virtual SAN feature that was introduced with Virtual SAN version 6.0. Fault
domains enable what might be termed “rack awareness” where the components
of virtual machines could be distributed amongst multiple hosts in multiple
racks, and should a rack failure event occur, the virtual machine would continue
to be available. However, these racks would typically be hosted in the same data
center, and if there was a data center-wide event, fault domains would not be
able to assist with virtual machine availability.
Stretched clusters essentially build on what fault domains did, and now provide
what might be termed “data center awareness”. Virtual SAN Stretched Clusters
can now provide availability for virtual machines even if a data center suffers a
catastrophic outage.
Requirements
In addition to Virtual SAN hosts, the following is a list of requirements for
implementing Virtual SAN Stretched Cluster.
A witness host
In a Virtual SAN Stretched Cluster, the witness components are only ever placed
on the witness host. Either a physical ESXi host or a special witness appliance
provided by VMware, can be used as the witness host.
If a witness appliance is used for the witness host, it will not consume any of the
customer’s vSphere licenses. A physical ESXi host that is used as a witness host
will need to be licensed accordingly, as this can still be used to provision virtual
machines should a customer choose to do so.
It is important that the witness host is not added to the VSAN cluster. The witness
host is selected during the creation of a Virtual SAN Stretched Cluster.
The witness appliance will have a unique identifier in the vSphere web client UI
to assist with identifying that a host is in fact a witness appliance (ESXi in a VM).
It is shown as a “blue” host, as highlighted below:
Note that this is only visible when the ESXi witness appliance is deployed. If a
physical host is used as the witness, then it does not change its appearance in
the web client. A witness host is dedicated for each stretched cluster.
When Virtual SAN is deployed in a stretched cluster across multiple sites using
fault domains, there are certain networking requirements that must be adhered
to.
Both Layer 2 (same subnet) and Layer 3 (routed) configurations are used in a
recommended Virtual SAN Stretched Cluster deployment.
• VMware recommends that Virtual SAN communication between the
data sites be over stretched L2.
• VMware recommends that Virtual SAN communication between the
data sites and the witness site is over L3.
Note: A common question is whether L2 for Virtual SAN traffic across all sites
is supported. There are some considerations with the use of a stretched L2
domain between the data sites and the witness site, and these are discussed in
further detail in the design considerations section of this guide. Another
common question is whether L3 for VSAN traffic across all sites is supported.
While this should work, it is not the VMware recommended network topology
for Virtual SAN Stretched Clusters at this time.
Virtual SAN traffic between data sites is multicast. Witness traffic between a
data site and the witness site is unicast.
For VMware Virtual SAN Stretched Clusters, geographical distances are not a
support concern. The key requirement is the actual latency numbers between
sites.
Data site to data site network refers to the communication between non-witness
sites, in other words, sites that run virtual machines and hold virtual machine
data. Latency or RTT (Round Trip Time) between sites hosting virtual machine
objects should not be greater than 5msec (< 2.5msec one-way).
This refers to the communication between non-witness sites and the witness
site.
In most Virtual SAN Stretched Cluster configurations, latency or RTT (Round
Trip Time) between sites hosting VM objects and the witness nodes should not
be greater than 200msec (100msec one-way).
In typical 2 Node configurations, such as Remote Office/Branch Office
deployments, this latency or RTT is supported up to 500msec (250msec one-way).
The latency to the witness is dependent on the number of objects in the cluster.
VMware recommends that on Virtual SAN Stretched Cluster configurations up
to 10+10+1, a latency of less than or equal to 200 milliseconds is acceptable,
although if possible, a latency of less than or equal to 100 milliseconds is
preferred. For configurations that are greater than 10+10+1, VMware requires a
latency of less than or equal to 100 milliseconds.
Bandwidth between sites hosting VM objects and the witness nodes is
dependent on the number of objects residing on Virtual SAN. It is important to
size data site to witness bandwidth appropriately for both availability and
growth. A standard rule of thumb is 2Mbps for every 1000 objects on Virtual
SAN.
Please refer to the Design Considerations section of this guide for further
details on how to determine bandwidth requirements.
It is important to maintain a consistent MTU size between data nodes and the
witness in a Stretched Cluster configuration. Ensuring that each VMkernel
interface designated for Virtual SAN traffic is set to the same MTU size will
prevent traffic fragmentation. The Virtual SAN Health Check checks for a
uniform MTU size across the Virtual SAN data network, and reports on any
inconsistencies.
The maximum number of virtual machines per ESXi host is unaffected by the
Virtual SAN Stretched Cluster configuration. The maximum is the same as for
normal VSAN deployments.
VMware recommends that customers run their hosts at 50% of the
maximum number of virtual machines supported in a standard Virtual SAN
cluster to accommodate a full site failure. In the event of full site failures, the
virtual machines on the failed site can be restarted on the hosts in the surviving
site.
Witness host
There is a maximum of 1 witness host per Virtual SAN Stretched Cluster. The
witness host requirements are discussed in the design considerations section of
this guide. VMware provides a fully supported witness virtual appliance, in Open
Virtual Appliance (OVA) format, for customers who do not wish to dedicate a
physical ESXi host as the witness. This OVA is essentially a pre-licensed ESXi
host running in a virtual machine, and can be deployed on a physical ESXi host
on the third site.
Fault Domains
Fault domains play an important role in Virtual SAN Stretched Cluster. Similar
to the NumberOfFailuresToTolerate (FTT) policy setting discussed previously,
the maximum number of fault domains in a Virtual SAN Stretched Cluster is 3.
The first FD is the “preferred” data site, the second FD is the “secondary” data
site and the third FD is the witness host site.
Design Considerations
Witness host sizing - compute
When dealing with a physical server, the minimum ESXi host requirements will
meet the needs of a witness host. The witness host must be capable of running
the same version of ESXi as Virtual SAN data nodes.
When using a witness appliance (ESXi in a VM), the size is dependent on the
configurations and this is decided during the deployment process. The witness
appliance, irrespective of the configuration, uses at least two vCPUs. The
physical host that the witness appliance runs on must be at least vSphere 5.5 or
greater.
VMware recommends that the flash device (e.g. an SSD) on the witness host
be approximately 10GB in size to support the maximum number of 45,000
components. In the witness appliance, one of the VMDKs is tagged as a flash
device. There is no requirement for an actual flash device.
Note that this witness host sizing is for component maximums. Smaller
configurations that do not need the maximum number of components can run
with fewer resources. Here are the three different sizes for the witness appliance.
Note: When a physical host is used for the witness host, VMware will also
support the tagging of magnetic disks as SSDs, implying that there is no need
to purchase a flash device for physical witness hosts. This tagging can be done
from the vSphere web client UI.
Connectivity
Type of networks
VMware recommends the following network types for Virtual SAN Stretched
Cluster:
The major consideration with implementing this configuration is that each ESXi
host comes with a default TCPIP stack, and as a result, only has a single default
gateway. The default route is typically associated with the management
network TCPIP stack. Now consider the situation where, for isolation and
security reasons, the management network and the Virtual SAN network are
completely isolated from one another. The management network might be
using vmk0 on physical NIC 0, and the VSAN network might be using vmk2 on
physical NIC 1, i.e. completely distinct network adapters and two distinct TCPIP
stacks. This implies that the Virtual SAN network has no default gateway.
Consider also that the Virtual SAN network is stretched over data sites 1 and 2
on an L2 broadcast domain, e.g. 172.10.0.0, while the VSAN network of the
witness on site 3 is on another broadcast domain, e.g. 172.30.0.0. If a VMkernel
adapter on the VSAN network on data site 1 or 2 tries to initiate a connection
to the VSAN network on the witness site (site 3), the connection will fail, since
there is only one default gateway and it is associated with the management
network. The traffic will be routed through the default gateway on the ESXi
host, and thus onto the management network, and there is no route from the
management network to the VSAN network.
One solution to this issue is to use static routes. This allows an administrator to
define a new routing entry indicating which path should be followed to reach a
particular network. In the case of the Virtual SAN network on a Virtual SAN
Stretched Cluster, static routes could be added as follows, using the above
example IP addresses:
1. Hosts on data site 1 have a static route added so that requests to reach the
172.30.0.0 witness network on site 3 are routed via the 172.10.0.0 interface
2. Hosts on data site 2 have a static route added so that requests to reach the
172.30.0.0 witness network on site 3 are routed via the 172.10.0.0 interface
3. The witness host on site 3 has a static route added so that requests to
reach the 172.10.0.0 data site 1 and data site 2 network are routed via
the 172.30.0.0 interface
Static routes are added via the esxcli network ip route or esxcfg-route
commands. Refer to the appropriate vSphere Command Line Guide for more
information.
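As a minimal sketch, and assuming the example networks above with hypothetical /24 netmasks and hypothetical site gateways of 172.10.0.253 (data sites) and 172.30.0.253 (witness site), the static routes could be added as follows. On each data-site host:

esxcfg-route -a 172.30.0.0/24 172.10.0.253

or, equivalently:

esxcli network ip route ipv4 add --network 172.30.0.0/24 --gateway 172.10.0.253

On the witness host:

esxcfg-route -a 172.10.0.0/24 172.30.0.253

These values are illustrative only; the actual networks, prefix lengths and gateways depend on the customer environment.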
Using static routes requires administrator intervention. Any new ESXi hosts that
are added to the cluster at either site 1 or site 2 need to have static routes
manually added before they can successfully communicate to the witness, and
the other data site. Any replacement of the witness host will also require the
static routes to be updated to facilitate communication to the data sites.
At this time, the Virtual SAN traffic does not have its own dedicated TCPIP stack.
Custom TCPIP stacks are also not applicable for Virtual SAN traffic.
Consider a design where the Virtual SAN Stretched Cluster is configured in one
large L2 design as follows, where Site 1 and Site 2 are where the virtual machines
are deployed. The Witness site contains the witness host:
In the event that the link between Switch 1 and Switch 2 (the link between
Site 1 and Site 2) is broken, network traffic will now route from Site 1 to Site 2
via Site 3. Considering that VMware will support a much lower bandwidth for the
witness host, customers may see a decrease in performance if network traffic is
routed through a lower specification Site 3.
If there are situations where routing traffic between data sites through the
witness site does not impact latency of applications, and bandwidth is
acceptable, a stretched L2 configuration between sites is supported. However,
in most cases, VMware feels that such a configuration is not feasible for the
majority of customers.
To avoid the situation previously outlined, and to ensure that data traffic is not
routed through the witness site, VMware recommends the following network
topology:
• Between Site 1 and Site 2, implement either a stretched L2 (same
subnet) or a L3 (routed) configuration.
• Between Site 1 and Witness Site 3, implement a L3 (routed)
configuration.
• Between Site 2 and Witness Site 3, implement a L3 (routed)
configuration.
• In the event of a failure on either of the data site networks, this
configuration will prevent any traffic from Site 1 being routed to Site 2
via Witness Site 3, and thus avoid any performance degradation.
It is also important to consider that having different subnets at the data sites is
going to be painful for any virtual machines that fail over to the other site since
there is no easy, automated way to re-IP the guest OS to the network on the
other data site.
In this first configuration, the data sites are connected over a stretched L2
network. This is also true for the data sites’ management network, VSAN
network, vMotion network and virtual machine network. The physical network
router in this network infrastructure does not automatically route traffic from
the hosts in the data sites (Site 1 and Site 2) to the host in Site 3. In order for
the Virtual SAN Stretched Cluster to be successfully configured, all hosts in the
cluster must communicate. How can a stretched cluster be deployed in this
environment?
The solution is to use static routes configured on the ESXi hosts so that the
Virtual SAN traffic from Site 1 and Site 2 is able to reach the witness host in Site
3, and vice versa. While this is not a preferred configuration option, this setup
can be very useful for proof-of-concept design where there may be some issues
with getting the required network changes implemented at a customer site.
In the case of the ESXi hosts on the data sites, a static route must be added to
the Virtual SAN VMkernel interface which will redirect traffic for the witness host
on the witness site via a default gateway for that network. In the case of the
witness host, the Virtual SAN interface must have a static route added which
redirects Virtual SAN traffic destined for the data sites’ hosts. Adding static
routes is achieved using the esxcfg-route -a command on the ESXi hosts. This
will have to be repeated on all ESXi hosts in the stretched cluster.
For this to work, the network switches need to be IP routing enabled between
the Virtual SAN network VLANs, in this example VLANs 11 and 21. Once requests
arrive for a remote host (either witness -> data or data -> witness), the switch
will route the packet appropriately. This communication is essential for
Virtual SAN Stretched Cluster to work properly.
Note that we have not mentioned the ESXi management network here. The
vCenter server will still be required to manage both the ESXi hosts at the data
sites and the ESXi witness. In many cases, this is not an issue for customers.
However, in the case of stretched clusters, it might be necessary to add a static
route from the vCenter server to reach the management network of the witness
ESXi host if it is not routable, and similarly a static route may need to be added
to the ESXi witness management network to reach the vCenter server. This is
because the vCenter server will route all traffic via the default gateway.
As long as there is direct connectivity from the witness host to vCenter (without
NAT’ing), there should be no additional concerns regarding the management
network.
Requirements: Since the virtual ESXi witness is a virtual machine that will be
deployed on a physical ESXi host when deployed on-premises, the underlying
physical ESXi host will need to have a minimum of one VM network pre-
configured. This VM network will need to reach both the management network
and the VSAN network shared by the ESXi hosts on the data sites. An alternative
option that might be simpler to implement is to have two preconfigured VM
networks on the underlying physical ESXi host, one for the management
network and one for the VSAN network. When the virtual ESXi witness is
deployed on this physical ESXi host, the network will need to be
attached/configured accordingly.
Once the virtual ESXi witness has been successfully deployed, the static route
configuration must be completed.
As before, the data sites are connected over a stretched L2 network. This is also
true for the data sites’ management network, VSAN network, vMotion network and
virtual machine network. Once again, the physical network router in this
environment does not automatically route traffic from the hosts in the Preferred
and Secondary data sites to the host in the witness site. In order for the Virtual
SAN Stretched Cluster to be successfully configured, all hosts in the cluster
require static routes added so that the VSAN traffic from the Preferred and
Secondary sites is able to reach the witness host in the witness site, and vice
versa. As mentioned before, this is not a preferred configuration option, but this
setup can be very useful for proof-of-concept design where there may be some
issues with getting the required network changes implemented at a customer
site.
Once again, the static routes are added using the esxcfg-route -a command on
the ESXi hosts. This will have to be repeated on all ESXi hosts in the cluster,
both on the data sites and on the witness host.
Note that once again we have not mentioned the management network here.
As mentioned before, vCenter needs to manage the remote ESXi witness and
the hosts on the data sites. If necessary, a static route should be added to the
vCenter server to reach the management network of the witness ESXi host, and
similarly a static route should be added to the ESXi witness to reach the vCenter
server.
Also note, as before, that there is no need to configure a vMotion network
or a VM network, or to add any static routes for these networks, in the context of a
Virtual SAN Stretched Cluster. This is because there will never be a migration or
deployment of virtual machines to the VSAN witness. Its purpose is to maintain
witness objects only, and does not require either of these networks for this task.
Management traffic for the data nodes is typically automatically routed to the
vCenter server at the central datacenter. Routing for the VSAN network, as
shown in previous scenarios, will require static routes between the VSAN
interfaces on each data node and the witness VM running in the central
datacenter.
Because they reside in the same physical location, networking between data
nodes is consistent with that of a traditional Virtual SAN cluster. Data nodes still
require a static route to the Witness VM residing in the central datacenter. The
witness VM’s secondary interface, designated for Virtual SAN traffic, will also
require a static route to each data node’s VSAN-enabled VMkernel
interface.
The management VMkernel for the witness VM, in the central datacenter, can
easily reside on the same management VLAN in the central datacenter, not
requiring any static routing.
The VSAN network in each site must also have routing to the respective witness
VM VSAN interface. Because the VMkernel interface with VSAN traffic enabled
uses the same default gateway as the management network, static routes will be
required to and from the data nodes to the witness VM. Remember that the witness VM will never run any VM workloads,
and therefore the only traffic requirements are for management and VSAN
witness traffic, because its purpose is to maintain witness objects only.
For remote site VMs to communicate with central datacenter VMs, appropriate
routing for the VM Network will also be required.
Bandwidth calculation
As stated in the requirements section, the bandwidth requirement between the
two main sites is dependent on workload and in particular the number of write
operations per ESXi host. Other factors, such as read locality not being in operation
(where the virtual machine resides on one site but reads data from the other
site) and rebuild traffic, may also need to be factored in.
Reads are not included in the calculation as we are assuming read locality, which
means that there should be no inter-site read traffic. The required bandwidth
between the two data sites (B) is equal to the Write bandwidth (Wb) * data
multiplier (md) * resynchronization multiplier (mr):
B = Wb * md * mr
The data multiplier is comprised of overhead for Virtual SAN metadata traffic
and miscellaneous related operations. VMware recommends a data multiplier
of 1.4.
The resynchronization multiplier is included to account for resynchronizing
events. It is recommended to allocate bandwidth capacity on top of required
bandwidth capacity for resynchronization events.
Including the Virtual SAN network requirements, the required bandwidth would be
560Mbps.
B = 320 Mbps * 1.4 * 1.25 = 560 Mbps.
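For illustration, the 320 Mbps write bandwidth above could correspond to roughly 10,000 write IOPS of 4KB each; this workload figure is an assumption used purely to show the arithmetic:

Wb = 10,000 IOPS * 4 KB * 8 bits ≈ 320 Mbps
B = 320 Mbps * 1.4 * 1.25 = 560 Mbps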
Including the Virtual SAN network requirements, the required bandwidth would be
approximately 4Gbps.
Using the above formula, a Virtual SAN Stretched Cluster with a dedicated
10Gbps inter-site link can accommodate approximately 170,000 4KB write
IOPS. Customers will need to evaluate their I/O requirements but VMware feels
that 10Gbps will meet most design requirements.
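Working the formula backwards shows roughly where this figure comes from; the arithmetic below is a sketch:

Wb = 10 Gbps / (1.4 * 1.25) ≈ 5.7 Gbps of write bandwidth
5.7 Gbps / (4 KB * 8 bits per byte ≈ 32,768 bits per write) ≈ 174,000, or approximately 170,000 4KB write IOPS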
Above this configuration, customers would need to consider multiple 10Gb NICs
teamed, or a 40Gb network.
While it might be possible to use 1Gbps connectivity for very small Virtual SAN
Stretched Cluster implementations, the majority of implementations will require
10Gbps connectivity between sites. Therefore, VMware recommends a
minimum of 10Gbps network connectivity between sites for optimal
performance and for possible future expansion of the cluster.
Note that the previous calculations are only for regular Stretched Cluster traffic
with read locality. If there is a device failure, read operations also have to
traverse the inter-site network. This is because the mirrored copy of data is on
the alternate site when using NumberOfFailuresToTolerate=1.
The same equation for every 4K read IO of the objects in a degraded state
would be added on top of the above calculations. The expected read IO would
be used to calculate the additional bandwidth requirement.
In an example of a single failed disk, with objects from 5 VMs residing on the
failed disk, and with 10,000 (4KB) read IOPS, an additional 40 MB/sec, or 320 Mbps,
would be required in addition to the above Stretched Cluster requirements to
provide sufficient read IO bandwidth during peak write IO and resync
operations.
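The additional figure above follows from the same arithmetic as the write bandwidth calculation:

10,000 read IOPS * 4 KB * 8 bits ≈ 320 Mbps (40 MB/sec)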
Witness bandwidth isn’t calculated in the same way as bandwidth between data
sites. Because hosts designated as a witness do not maintain any VM data, but
rather only component metadata, the requirements are much smaller.
Virtual Machines on Virtual SAN are comprised of many objects, which can
potentially be split into multiple components, depending on factors like policy
and size. The number of components on Virtual SAN has a direct impact on
the bandwidth requirement between the data sites and the witness.
The required bandwidth between the Witness and each site is:
B ≈ 1138 B x Number of Components / 5s
The 1138 B value comes from operations that occur when the Preferred Site goes
offline, and the Secondary Site takes ownership of all of the components.
When the primary site goes offline, the secondary site becomes the master. The
Witness sends updates to the new master, followed by the new master replying
to the Witness as ownership is updated.
In the event of a Preferred Site failure, the link must be large enough to allow
for the cluster ownership to change, as well as ownership of all of the components,
within 5 seconds.
Approximately 166 VMs with the above configuration would require the Witness
to contain 996 components.
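Plugging those 996 components into the witness bandwidth formula gives approximately:

B = 1138 B * 8 bits * 996 / 5s ≈ 1.8 Mbps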
With the 10% buffer included, a rule of thumb can be stated that for every 1,000
components, 2 Mbps is appropriate.
Workload 2
With a VM being comprised of
o 3 objects (VM namespace, vmdk (under 255GB), and vmSwap)
o Failure to Tolerate of 1 (FTT=1)
o Stripe Width of 2
Approximately 1,500 VMs with the above configuration would require 18,000
components to be stored on the Witness.
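Applying the same formula, or the 2 Mbps per 1,000 components rule of thumb, to this workload:

B = 1138 B * 8 bits * 18,000 / 5s ≈ 33 Mbps
With the 10% buffer included, approximately 36 Mbps is required between the witness and each data site.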
vSphere HA: Turn on
Host Monitoring: Enabled
Host Hardware Monitoring – VM Component Protection ("Protect against Storage Connectivity Loss"): Disabled (default)
Virtual Machine Monitoring: Customer preference – Disabled by default
Admission Control: Set to 50%
Host Isolation Response: Power off and restart VMs
Datastore Heartbeats: "Use datastores only from the specified list", but do not
select any datastores from the list. This disables Datastore Heartbeats.
Advanced Settings:
das.usedefaultisolationaddress: False
das.isolationaddress0: IP address on VSAN network on site 1
das.isolationaddress1: IP address on VSAN network on site 2
Turn on vSphere HA
To turn on vSphere HA, select the cluster object in the vCenter
inventory, Manage, then vSphere HA. From here, vSphere HA can be
turned on and off via a check box.
Host Monitoring
Host monitoring should be enabled on Virtual SAN Stretched Cluster
configurations. This feature uses network heartbeats to determine the status of
hosts participating in the cluster, and whether corrective action is required, such as
restarting virtual machines on other nodes in the cluster.
Admission Control
Datastore heartbeats are now disabled on the cluster. Note that this may give
rise to a notification in the summary tab of the host, stating that the number of
vSphere HA heartbeat datastores for this host is 0, which is less than the required 2.
This message may be removed by following KB Article 2004739 which details
how to add the advanced setting das.ignoreInsufficientHbDatastore = true.
If you have a heartbeat datastore and only the VSAN traffic network fails,
vSphere HA does not restart the virtual machines on another host in the cluster.
When you restore the link, the virtual machines will continue to run. If virtual
machine availability is your utmost concern, keeping in mind that a virtual
machine restart is necessary in the event of a host isolation event, then you
should not set up a heartbeat datastore. Any time the VSAN network causes a
host to get isolated, vSphere HA will power on the virtual machine on another
host in the cluster.
Of course, with a restart the in-memory state of the applications is lost, but the virtual
machine has minimal downtime. If you do not want a virtual machine to fail over
when there is a VSAN traffic network glitch, then a heartbeat datastore should be configured.
Advanced Options
In a Virtual SAN Stretched Cluster, one of the isolation addresses should reside
in the site 1 datacenter and the other should reside in the site 2 datacenter. This
would enable vSphere HA to validate complete network isolation in the case of
a connection failure between sites.
VMware recommends enabling host isolation response and specifying
isolation response addresses that are on the VSAN network rather than the
management network. The vSphere HA advanced setting
das.usedefaultisolationaddress should be set to false. VMware recommends
specifying two additional isolation response addresses, and each of these
addresses should be site specific. In other words, select an isolation response IP
address from the preferred Virtual SAN Stretched Cluster site and another
isolation response IP address from the secondary Virtual SAN Stretched Cluster
site. The vSphere HA advanced setting used for setting the first isolation
response IP address is das.isolationaddress0 and it should be set to an IP
address on the VSAN network which resides on the first site. The vSphere HA
advanced setting used for adding a second isolation response IP address is
das.isolationaddress1 and this should be an IP address on the VSAN network
that resides on the second site.
For further details on how to configure this setting, information can be found in
KB Article 1002117.
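As a minimal sketch, assuming the stretched 172.10.0.0 VSAN network from the earlier networking example and purely hypothetical addresses, the resulting advanced settings might look like:

das.usedefaultisolationaddress = false
das.isolationaddress0 = 172.10.0.11 (an IP address on the VSAN network at the preferred site)
das.isolationaddress1 = 172.10.0.12 (an IP address on the VSAN network at the secondary site)

The actual addresses must be pingable IP addresses on the VSAN network at each respective data site.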
Host groups
VM Groups
Two VM groups should also be created; one to hold the virtual machines placed
on site 1 and the other to hold the virtual machines placed on site 2. Whenever
a virtual machine is created and before it is powered on, assuming a
NumberOfFailuresToTolerate policy setting of 1, the virtual machine should be
added to the correct host affinity group. This will then ensure that a virtual machine
always remains on the same site, reading from the same replica, unless a site
critical event occurs necessitating the VM being failed over to the secondary
site.
Note that to correctly use VM groups, first of all create the VM, but do not power
it on. Next, edit the VM groups and add the new VM to the desired group. Once
added, and saved, the virtual machine can now be powered on. With DRS
enabled on the cluster, the virtual machine will be checked to see if it is on the
correct site according to the VM/Host Rules (discussed next) and if not, it is
automatically migrated to the appropriate site, either “preferred” or
“secondary”.
VM/Host Rules
When deploying virtual machines on a Virtual SAN Stretched Cluster, for the
majority of cases, we wish the virtual machine to reside on the set of hosts in
the selected host group. However, in the event of a full site failure, we wish the
virtual machines to be restarted on the surviving site.
To achieve this, VMware recommends implementing “should respect rules” in
the VM/Host Rules configuration section. These rules may be violated by
vSphere HA in the case of a full site outage. If “must rules” were implemented,
vSphere HA does not violate the rule-set, and this could potentially lead to
service outages. vSphere HA will not restart the virtual machines in this case, as
they will not have the required affinity to start on the hosts in the other site.
Thus, the recommendation to implement “should rules” will allow vSphere HA
to restart the virtual machines in the other site.
The vSphere HA Rule Settings are found in the VM/Host Rules section. This
allows administrators to decide which virtual machines (that are part of a VM
Group) are allowed to run on which hosts (that are part of a Host Group). It also
allows an administrator to decide on how strictly “VM to Host affinity rules” are
enforced.
As stated above, the VM to Host affinity rules should be set to “should respect”
to allow the virtual machines on one site to be started on the hosts on the other
site in the event of a complete site failure. The “should rules” are implemented
by clicking on the “Edit” button in the vSphere HA Rule Settings at the bottom
of the VM/Host Rules view, and setting VM to Host affinity rules to “vSphere HA
should respect rules during failover”.
vSphere DRS communicates these rules to vSphere HA, and these are stored in
a “compatibility list” governing allowed startup behavior. Note once again that
with a full site failure, vSphere HA will be able to restart the virtual machines on
hosts that violate the rules. Availability takes preference in this scenario.
Installation
The installation of Virtual SAN Stretched Cluster is almost identical to how Fault
Domains were implemented in earlier VSAN versions, with a couple of additional
steps. This part of the guide will walk the reader through a stretched cluster
configuration.
Since virtual machines deployed on Virtual SAN Stretched Cluster will have
compute on one site, but a copy of the data on both sites, VSAN will use a read
locality algorithm to read 100% from the data copy on the local site, i.e. same
site where the compute resides. This is not the regular VSAN algorithm, which
reads in a round-robin fashion across all replica copies of the data.
This new algorithm for Virtual SAN Stretched Clusters will reduce the latency
incurred on read operations.
If latency is less than 5ms and there is enough bandwidth between the sites,
read locality could be disabled. However please note that disabling read locality
means that the read algorithm reverts to the round robin mechanism, and for
Virtual SAN Stretched Clusters, 50% of the read requests will be sent to the
remote site. This is a significant consideration for sizing of the network
bandwidth. Please refer to the sizing of the network bandwidth between the
two main sites for more details.
The advanced setting that controls read locality is not visible in the Advanced
System Settings view of the vSphere web client. It is only available via the CLI.
Caution: Read locality is enabled by default when Virtual SAN Stretched Cluster
is configured – it should only be disabled under the guidance of VMware’s Global
Support Services organization, and only when extremely low latency is available
across all sites.
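For reference, the read locality behavior is controlled through an advanced VSAN option on each host; the option name shown below is our assumption, as it is not named in this guide, and it should be confirmed with VMware Global Support Services before any change is made:

esxcfg-advcfg -g /VSAN/DOMOwnerForceWarmCache (query the current value)
esxcfg-advcfg -s 1 /VSAN/DOMOwnerForceWarmCache (disable read locality, only under GSS guidance)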
When configuring your Virtual SAN Stretched Cluster, only the data hosts should be
added to the cluster object in vCenter. The witness host must remain outside of the
cluster, and must not be added to the cluster at any point. Thus for a 1+1+1
configuration, where there is one host at each site and one physical ESXi witness
host, the configuration will look similar to the following:
Note that the witness host is not shaded in blue in this case. The witness host
only appears shaded in blue when a witness appliance (OVA) is deployed.
Physical hosts that are used as witness hosts are not shaded in blue.
Virtual SAN 6.1, shipped with vSphere 6.0U1, has a health check feature built in.
This functionality was first available for Virtual SAN 6.0. The updated 6.1 version
of the health check for Virtual SAN has enhancements specifically for Virtual
SAN stretched cluster.
Once the ESXi hosts have been upgraded or installed with ESXi version 6.0U1,
there are no additional requirements for enabling the VSAN health check. Note
that ESXi version 6.0U1 is a requirement for Virtual SAN Stretched Cluster.
Similarly, once the vCenter Server has been upgraded to version 6.0U1, the
VSAN Health Check plugin components are also upgraded automatically,
provided vSphere DRS is licensed, and DRS Automation is set to Fully
Automated. If vSphere DRS is not licensed, or not set to Fully Automated, then
hosts will have to be evacuated and the Health Check vSphere Installable Bundle
(vib) will have to be installed manually.
Please refer to the 6.1 Health Check Guide for additional information. The
location is available in the appendix of this guide.
As mentioned, there are new health checks for Virtual SAN Stretched Cluster.
Select the Cluster object in the vCenter inventory, click on Monitor > Virtual SAN
> Health. Ensure the stretched cluster health checks pass when the cluster is
configured.
Note that the stretched cluster checks will not be visible until the stretch cluster
configuration is completed.
If the MAC address of the virtual machine network adapter matches the MAC
address of the nested ESXi vmnic, no packets are dropped. Because the witness ESXi
virtual machine OVA has been configured so that the MAC addresses match,
promiscuous mode is not needed.
Examine the details. Note that it states that this is the VMware Virtual SAN
Witness Appliance, version 6.1.
Give the witness a name (e.g. witness-01), and select a folder to deploy it to.
At this point a decision needs to be made regarding the expected size of the
stretched cluster configuration. There are three options offered. If you expect
the number of VMs deployed on the Virtual SAN Stretched Cluster to be 10 or
fewer, select the Tiny configuration. If you expect to deploy more than 10 VMs,
but less than 500 VMs, then the Normal (default option) should be chosen. For
more than 500 VMs, choose the Large option. On selecting a particular
configuration, the resources consumed by the appliance are displayed in the
wizard (CPU, Memory and Disk):
Select a datastore for the witness ESXi VM. This will be one of the datastores
available to the underlying physical host. You should consider whether the witness
is deployed thick or thin, as thin VMs may grow over time, so ensure there is
enough capacity on the selected datastore.
Select a network for the management network. This gets associated with both
network interfaces (management and VSAN) at deployment, so later on the
VSAN network configuration will need updating.
At this point, the witness appliance (ESXi VM) is ready to be deployed. You can
choose to power it on after deployment by selecting the checkbox below, or
power it on manually via the vSphere web client UI later:
Once the witness appliance is deployed and powered on, select it in the
vSphere web client UI and begin the next steps in the configuration process.
At this point, the console of the witness ESXi virtual machine should be accessed
to add the correct networking information, such as IP address and DNS, for the
management network.
On launching the console, unless you have a DHCP server on the management
network, it is very likely that the landing page of the DCUI will look something
similar to the following:
Use the <F2> key to customize the system. The root login and password will
need to be provided at this point. This is the root password that was added
during the OVA deployment earlier.
Select the Network Adapters view. There will be two network adapters, each
corresponding to a network adapter on the virtual machine. You should note
that the MAC addresses of the network adapters in the DCUI view match the
MAC addresses of the network adapters in the virtual machine view. Because
these match, there is no need to use promiscuous mode on the network, as
discussed earlier.
Select vmnic0, and if you wish to view further information, press the <D> key
to see more details.
Navigate to the IPv4 Configuration section. This will be using DHCP by default.
Select the static option as shown below and add the appropriate IP address,
subnet mask and default gateway for this witness ESXi’s management network.
The next step is to configure DNS. A primary DNS server should be added and
an optional alternate DNS server can also be added. The FQDN, fully qualified
domain name, of the host should also be added at this point.
When all the tests have passed, and the FQDN is resolvable, administrators can
move onto the next step of the configuration, which is adding the witness ESXi
to the vCenter server.
Provide the appropriate credentials, in this example root user and password:
There should be no virtual machines on the witness appliance. Note that it can
never run VMs in a Virtual SAN Stretched Cluster configuration. Note also the
model: VMware Virtual Platform. Note also that the build number may differ from the
one shown here.
The witness appliance also comes with its own license. You do not need to
consume vSphere licenses for the witness appliance:
The next step is to choose a location for VMs. This will not matter for the witness
appliance, as it will never host virtual machines of its own:
Click finish when ready to complete the addition of the witness to the vCenter
server:
One final item of note is the appearance of the witness appliance in the vCenter
inventory. It has a light blue shading, to differentiate it from standard ESXi hosts.
It might be a little difficult to see in the screen shot below, but should be clearly
visible in your infrastructure. (Note: the “No datastores have been configured”
message is because the nested ESXi host has no VMFS datastore. This can be
ignored, or if necessary a small 2GB disk can be added to the host and a VMFS
volume can be built on it to remove the message completely).
One final recommendation is to verify that the settings of the witness appliance
match the Tiny, Normal or Large configuration selected during deployment.
For example, the Normal deployment should have an 8GB HDD for boot, a 10GB
flash device that will be configured later on as a cache device and another 350GB HDD
that will also be configured later on as a capacity device.
Once confirmed, you can proceed to the next step of configuring the VSAN
network for the witness appliance.
Select the witnessPg portgroup (which has a VMkernel adapter), and then select
the option to edit it. Tag the VMkernel port for VSAN traffic, as shown below:
In the NIC settings, ensure the MTU is set to the same value as the Stretched
Cluster hosts’ VSAN VMkernel interface.
In the IPV4 settings, a default IP address has been allocated. Modify it for the
VSAN traffic network.
Once the VMkernel has been tagged for VSAN traffic, and has a valid IP, click
OK.
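As an alternative to the web client steps above, the same configuration can be applied from the ESXi shell of the witness appliance; the VMkernel name, IP address and netmask below are hypothetical examples:

esxcli vsan network ip add -i vmk1 (tag vmk1 for VSAN traffic)
esxcli network ip interface set -i vmk1 -m 9000 (only if the data hosts use jumbo frames)
esxcli network ip interface ipv4 set -i vmk1 -t static -I 172.30.0.15 -N 255.255.255.0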
Note once again that the VSAN network is a stretched L2 broadcast domain
between the data sites as per VMware recommendations, but L3 is required to
reach the VSAN network of the witness appliance. Therefore, static routes are
needed between the data hosts and the witness host for the VSAN network, but
they are not required for the data hosts on different sites to communicate to
each other over the VSAN network.
Other useful commands are esxcfg-route -n, which will display the network
neighbors on various interfaces, and esxcli network ip route ipv4 list, to display
gateways for various networks. Make sure this step is repeated for all hosts.
The final test is a ping test to ensure the remote VSAN network can now be
reached, in both directions. Once this succeeds, the Virtual SAN Stretched Cluster
can be configured.
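As a reference, this ping test can be run from the VSAN VMkernel interface of each data host using vmkping; the interface name and witness VSAN IP address below are hypothetical:

vmkping -I vmk2 172.30.0.15

The same test should be repeated from the witness host back to the VSAN IP address of each data host.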
When the witness is selected, a flash device and a magnetic disk need to be
chosen to create a disk group. These are already available in the witness
appliance.
When the stretched cluster has completed configuration, which can take a
number of seconds, verify that the fault domain view is as expected:
Navigate to cluster > Manage > VM/Host Groups. Select the option to “add” a
group. Give the group a name, and ensure the group type is “Host Group” as
opposed to “VM Group”. Next, click on the “Add” button to select the hosts that
should be in the host group. Select the hosts from site A.
Once the hosts have been added to the Host Group, click OK. Review the
settings of the host group, and click OK once more to create it:
This step will need to be repeated for the secondary site. Create a host group
for the secondary site and add the ESXi hosts from the secondary site to the
host group. When host groups for both data sites have been created, the next
step is to create VM groups. However before you can do this, virtual machines
should be created on the cluster.
Once the host groups are created, the initial set of virtual machines should now
be created. Do not power on the virtual machines just yet. Once the virtual
machines are in the inventory, you can now proceed with the creation of the VM
Groups. First create the VM Group for the preferred site. Select the virtual
machines that you want for the preferred site.
In the same way that a second host group had to be created previously for the
secondary site, a secondary VM Group must be created for the virtual machines
that should reside on the secondary site.
Now that the host groups and VM groups are created, it is time to associate VM
groups with host groups and ensure that particular VMs run on a particular site.
Navigate to the VM/Host rules to associate a VM group with a host group. In
the example shown below, I am associating the VMs in the sec-vms VM group
with the host group called sec, which will run the virtual machines in that group
on the hosts in the secondary site.
One item highlighted above is that this is a “should” rule. We use a “should” rule
as it allows vSphere HA to start the virtual machines on the other side of the
stretched cluster in the event of a site failure.
Another VM/Host rule must be created for the primary site. Again this should
be a “should” rule. Please note that DRS will be required to enforce the VM/Host
Rules. Without DRS enabled, the soft “should” rules have no effect on placement
behavior in the cluster.
There is one final setting that needs to be placed on the VM/Host Rules. This
setting once again defines how vSphere HA will behave when there is a
complete site failure. In the screenshot below, there is a section in the VM/Host
rules called vSphere HA Rule Settings. One of the settings is for VM to Host
Affinity rules. A final step is to edit this from the default of “ignore” and change
it to “vSphere HA should respect VM/Host affinity rules” as shown below:
That completes the setup of the Virtual SAN Stretched Cluster. The final steps
are to power up the virtual machines created earlier, and examine the
component layout. When NumberOfFailuresToTolerate = 1 is chosen, a copy of
the data should go to both sites, and the witness should be placed on the
witness host.
In the example below, esx01-sitea and esx02-sitea reside on site 1, whilst
esx01-siteb and esx02-siteb reside on site 2. The host witness-01 is the witness. The
layout shows that the VM has been deployed correctly.
As we can clearly see, one copy of the data resides on storage in site1, a second
copy of the data resides on storage in site2 and the witness component resides
on the witness host and storage on the witness site. Everything is working as
expected.
The fault domains are persisted, but VSAN does not know which FD is the
preferred one. Therefore, under Fault Domains, the secondary FD will need to
be moved to the secondary column as part of the reconfiguration.
Failure Scenarios
In this section, we will discuss the behavior of the Virtual SAN Stretched Cluster
when various failures occur. In this example, there is a 1+1+1 Stretched VSAN
deployment. This means that there is a single data host at site 1, a single data
host at site 2 and a witness host at a third site.
A single VM has also been deployed. When the Physical Disk Placement is
examined, we can see that the replicas are placed on the preferred and
secondary data site respectively, and the witness component is placed on the
witness site:
The next step is to introduce some failures and examine how Virtual SAN
handles such events. Before beginning these tests, please ensure that the Virtual
SAN Health Check Plugin is working correctly, and that all VSAN Health Checks
have passed.
The health check plugin should be referred to regularly during failure scenario
testing. Note that alarms are now raised in version 6.1 for any health check that
fails. Alarms may also be referenced at the cluster level throughout this testing.
Finally, when the term site is used in the failure scenarios, it implies a fault
domain.
When the virtual machine starts on the other site, either as part of a vMotion
operation or a power on from vSphere HA restarting it, Virtual SAN instantiates
the in-memory state for all the objects of said virtual machine on the host where
it moved. That includes the “owner” (coordinator) logic for each object. The
owner checks if the cluster is setup in a “stretch cluster” mode, and if so, which
fault domain it is running in. It then uses the different read protocol — instead
of the default round-robin protocol across replicas (at the granularity of 1MB),
it sends 100% of the reads to the replica that is on the same site (but not
necessarily the same host) as the virtual machine.
In the first part of this test, the secondary host will be rebooted, simulating a
temporary outage.
There will be some power and HA events related to the secondary host visible
in the vSphere web client UI. Change to the Physical Disk Placement view of the
virtual machine. After a few moments, the components that were on the
secondary host will go “Absent”, as shown below:
Since the ESXi host which holds the compute of the virtual machine is
unaffected by this failure, there is no reason for vSphere HA to take action.
At this point, the VSAN Health Check plugin can be examined. There will be
quite a number of failures due to the fact that the secondary host is no longer
available, as one might expect.
Further testing should not be initiated until the secondary host has completed
a reboot and has successfully rejoined the cluster. All “Failed” health check tests
should show OK before another test is started. Also confirm that there are no
“Absent” components on the VMs objects, and that all components are once
again Active.
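One simple way to confirm that the rebooted host has rejoined is to run the following generic Virtual SAN command on that host and check that it reports a healthy cluster state and the expected Sub-Cluster Member Count (data hosts plus witness):
esxcli vsan cluster get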
A reboot can now be initiated on the preferred host. There will be a number of
vSphere HA related events. As before, the components that were on the
preferred host will show up as “Absent”:
Since the host on which the virtual machine’s compute resides is no longer
available, vSphere HA will restart the virtual machine on another host in the
cluster. This verifies that the vSphere HA affinity rules are “should” rules and
not “must” rules. “Must” rules would prevent vSphere HA from restarting the
virtual machine on the other site, so it is important that this test behaves as
expected. “Should” rules allow vSphere HA to restart the virtual machine on
hosts that are not in the VM/Host affinity rules when no other hosts are
available.
Note that if there were more than one host on each site, the virtual machine
would be restarted on another host on the same site. However, since this is a
test on a 1+1+1 configuration, there are no additional hosts available on the
preferred site. Therefore the virtual machine is restarted on a host on the
secondary site after a few moments. If you are testing this behavior on a
configuration with more than one host per site, expect the restart to take place
within the preferred site instead.
As before, wait for all issues to be resolved before attempting another test.
Remember: test one thing at a time. Allow time for the preferred site host to
reboot and verify that all absent components are active, and that all health
check tests pass before continuing.
The next test is to reboot the witness host. This should have no impact on the
run state of the virtual machine, but the witness components residing on the
witness host should show up as “Absent”.
First, verify which host is the witness host from the fault domains
configuration. In this setup, it is host cs-ie-dell03.ie.local. It should be labeled
“External witness host for Virtual SAN Stretched Cluster”.
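The witness host can also be confirmed from RVC using the witness_info command documented later in this guide; the cluster index below is the one reported by ls:
/localhost/Site-A/computers> vsan.stretchedcluster.witness_info 0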
After verifying that there are no absent components on the virtual machine,
and that all health checks have passed, reboot the witness host:
After a short period of time, the witness component of the virtual machine will
appear as “Absent”:
To test a failure of the Virtual SAN network between the data sites, there are
various ways to cause it. One could simply unplug the VSAN network from the
host or indeed the switch. Alternatively, the physical adapter(s) used for VSAN
traffic can be moved from active to “unused” for the VSAN VMkernel port on the
host running the virtual machine. This can be done by editing the “Teaming and
failover” properties of the VSAN traffic port group on a per-host basis. In this
case, the operation is done on a host on the “preferred” site. This results in two
components of the virtual machine object getting marked as absent, since the
host can no longer communicate with the other data site where the other copy
of the data resides, nor can it communicate with the witness.
Note: Simply disabling the VSAN network service will not lead to an isolation
event since vSphere HA will still be able to use the network for
communication.
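Before running this test, the VMkernel interface carrying Virtual SAN traffic on each host can be identified with the following command, so that the correct port group and uplinks are modified:
esxcli vsan network list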
This isolation state of the host is a trigger for vSphere HA to implement the
isolation response action, which has previously been configured to “Power off
VMs and restart”. The virtual machine should then power up on the other site. If
you navigate to the policies view after the virtual machine has been restarted
on the other host, and click on the icon to check compliance, it should show that
two out of the three components are now available, and since there is a full copy
of the data, and more than 50% of the components available, the virtual machine
is accessible. Launch the console to verify.
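The same component availability can also be inspected from RVC with the vsan.vm_object_info command; the cluster and VM paths below are placeholders and need to be adjusted for the environment:
/localhost/Site-A/computers> vsan.vm_object_info ../vms/<vm-name>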
Note: It would be worth including a check at this point to ensure that the virtual
machine is accessible on the VM network on the new site. There is little point in
having the virtual machine fail over to the remaining site if it cannot be reached
on the network.
Remember that this is a simple 1+1+1 configuration of Virtual SAN Stretched
Cluster. If there were additional hosts on each site, the virtual machine should
be restarted on hosts on the same site, adhering to the VM/Host affinity rules
defined earlier. Because the rules are “should” rules and not “must” rules, the
virtual machine can be restarted on the other site when there are no hosts
available on the site to which the virtual machine has affinity.
Once the correct behavior has been observed, repair the network.
Note that the VM/Host affinity rules will trigger a move of the virtual machine(s)
back to hosts on the preferred site. Run a VSAN Health Check test before
continuing to test anything else. Remember with NumberOfFailuresToTolerate
= 1, test one thing at a time. Verify that all absent components are active and
that all health check tests pass before continuing.
If there is more than one host at each site, you could try setting the uplinks for
the VSAN network to “unused” on each host on one site. What you should
observe is that the virtual machine(s) is restarted on another host on the same
site to adhere to the configured VM/Host affinity rules. Only when there is no
remaining host on the site should the virtual machine be restarted on the other
site.
Data network test on host that contains virtual machine data only
If the network is disabled on the ESXi host that does not run the virtual machine
but contains a copy of the data, then the virtual machines on the primary site
will only see one absent component. In this case the virtual machine remains
accessible and is not impacted by this failure. However, if there are any virtual
machines running on the VSAN datastore on the secondary host, these will
suffer the same issues seen in the previous test.
After the test, repair the network, and note that the VM/Host affinity rules will
trigger a move of the virtual machine(s) back to hosts on the preferred site. Run
a VSAN Health Check test before continuing to test anything else. Remember
with NumberOfFailuresToTolerate = 1, test one thing at a time. Verify that all
absent components are active and that all health check tests pass before
continuing.
As per the previous test, for physical witness hosts, the VSAN network can be
physically removed from either the host or the network switch. Alternatively, the
uplinks that are used for the VSAN network can be set to an “unused” state in
the “Teaming and failover” properties of the VSAN network port group.
If the witness host is an ESXi VM, then the network connection used by VSAN
can simply be disconnected from the virtual machine.
The expectation is that this will not impact the running virtual machine, since
one full copy of the data is still available, and more than 50% of the components
that make up the object are available.
Once the behavior has been verified, repair the network, and run a VSAN Health
Check test before continuing with further tests. Test one thing at a time. Verify
that all absent components are active and that all health check tests pass before
continuing.
When one site is down and there is a need to provision virtual machines, the
ForceProvision capability is used to provision the VM. This means that the
virtual machine is provisioned with a NumberOfFailuresToTolerate = 0,
meaning that there is no redundancy. Administrators will need to rectify the
issues on the failing site and bring it back online. When this is done, Virtual SAN
will automatically update the virtual machine configuration to
NumberOfFailuresToTolerate = 1, creating a second copy of the data and any
required witness components.
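For reference, force provisioning is a capability in the object’s Virtual SAN storage policy. One place it can be seen is in the default per-object-class policies, which can be viewed with the following generic Virtual SAN command; where force provisioning is enabled, the forceProvisioning rule appears in the policy expression:
esxcli vsan policy getdefault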
To replace a failed witness, the failed witness host first needs to be removed from
the configuration. Navigate to Cluster > Manage > Virtual SAN > Fault Domains.
For this particular test, a 2+2+1 configuration is used, implying two ESXi hosts in
the “preferred” data site, two ESXi hosts in the “secondary” data site and a single
witness host. The failing witness host can be removed from the Virtual SAN
Stretched Cluster via the UI (red X in the fault domains view).
The next step is to rebuild the VSAN stretched cluster, selecting the new witness
host. In the same view, click on the “configure stretched cluster” icon. Align
hosts to the preferred and secondary sites as before. This is quite simple to do
since the hosts are still in their original fault domains: simply select the
secondary fault domain and move all of its hosts over in a single click:
Create the disk group and complete the Virtual SAN Stretched Cluster
creation.
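Alternatively, the witness replacement can be performed from RVC using the stretched cluster commands documented in the next section; the cluster, host and fault domain names below are placeholders:
/localhost/Site-A/computers> vsan.stretchedcluster.remove_witness <cluster>
/localhost/Site-A/computers> vsan.stretchedcluster.config_witness <cluster> <new-witness-host> <preferred-fault-domain>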
On completion, verify that the health check failures have been resolved. Note that
the Virtual SAN Object health test will continue to fail, as the witness component
of the VM still remains “Absent”. When the clomd repair delay timer expires, after
a default of 60 minutes, the witness components will be rebuilt on the new witness
host. Rerun the health check tests and they should all pass at this point, and all
witness components should show as active.
https://my.vmware.com/web/vmware/info/slug/datacenter_cloud_infrastructure/vmware_virtual_san/6_0#drivers_tools
ESXCLI
New ESXCLI commands for Virtual SAN Stretched Cluster.
esxcli vsan cluster preferredfaultdomain
Available Commands:
get Get the preferred fault domain for a stretched cluster.
set Set the preferred fault domain for a stretched cluster.
esxcli vsan cluster unicastagent
Available Commands:
add Add a unicast agent to the Virtual SAN cluster configuration.
list List all unicast agents in the Virtual SAN cluster configuration.
remove Remove a unicast agent from the Virtual SAN cluster configuration.
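For example, to confirm that a data host is configured with the witness as its unicast agent, the following can be run on that host; the output should list the witness’s Virtual SAN IP address, similar to the Unicast Agent Address shown in the RVC output later in this guide:
esxcli vsan cluster unicastagent list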
RVC (Ruby vSphere Console)
New RVC commands for Virtual SAN Stretched Cluster.
vsan.stretchedcluster.config_witness
Configure a witness host. The name of the cluster, the witness host and the
preferred fault domain must all be provided as arguments.
/localhost/Site-A/computers> vsan.stretchedcluster.config_witness -h
usage: config_witness cluster witness_host preferred_fault_domain
Configure witness host to form a Virtual SAN Stretched Cluster
cluster: A cluster with virtual SAN enabled
witness_host: Witness host for the stretched cluster
preferred_fault_domain: preferred fault domain for witness host
--help, -h: Show this message
/localhost/Site-A/computers>
vsan.stretchedcluster.remove_witness
/localhost/Site-A/computers> vsan.stretchedcluster.remove_witness -h
usage: remove_witness cluster
Remove witness host from a Virtual SAN Stretched Cluster
cluster: A cluster with virtual SAN stretched cluster enabled
--help, -h: Show this message
vsan.stretchedcluster.witness_info
Display information about the witness host for a Virtual SAN Stretched Cluster.
/localhost/Site-A/computers> ls
0 Site-A (cluster): cpu 100 GHz, memory 241 GB
1 cs-ie-dell04.ie.local (standalone): cpu 33 GHz, memory 81 GB
/localhost/Site-A/computers> vsan.stretchedcluster.witness_info 0
Found witness host for Virtual SAN stretched cluster.
+------------------------+--------------------------------------+
| Stretched Cluster | Site-A |
+------------------------+--------------------------------------+
| Witness Host Name | cs-ie-dell04.ie.local |
| Witness Host UUID | 55684ccd-4ea7-002d-c3a9-ecf4bbd59370 |
| Preferred Fault Domain | Preferred |
| Unicast Agent Address | 172.3.0.16 |
+------------------------+--------------------------------------+