vSphere Availability
Update 2
Modified on 13 AUG 2020
VMware vSphere 6.5
VMware ESXi 6.5
vCenter Server 6.5
You can find the most up-to-date technical documentation on the VMware website at:
https://docs.vmware.com/
VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com
© Copyright 2009-2020 VMware, Inc. All rights reserved. Copyright and trademark information.
Contents
Updated Information 7
About vSphere Availability
vSphere Availability describes solutions that provide business continuity, including how to establish vSphere® High Availability (HA) and vSphere Fault Tolerance.
Intended Audience
This information is for anyone who wants to provide business continuity through the vSphere
HA and Fault Tolerance solutions. The information in this book is for experienced Windows or
Linux system administrators who are familiar with virtual machine technology and data center
operations.
Task instructions in this guide are based on the vSphere Web Client. You can also perform most
of the tasks in this guide by using the new vSphere Client. The new vSphere Client user interface
terminology, topology, and workflow are closely aligned with the same aspects and elements of
the vSphere Web Client user interface. You can apply the vSphere Web Client instructions to the
new vSphere Client unless otherwise instructed.
Note Not all functionality in the vSphere Web Client has been implemented for the vSphere Client
in the vSphere 6.5 release. For an up-to-date list of unsupported functionality, see Functionality
Updates for the vSphere Client Guide at http://www.vmware.com/info?id=1413.
Updated Information
This vSphere Availability guide is updated with each release of the product or when necessary.
Revision       Description
10 AUG 2020    At VMware, we value inclusion. To foster this principle within our customer, partner, and internal community, we are replacing some of the terminology in our content. We have updated this guide to remove instances of non-inclusive language.
01 JUN 2018    Added updated information for configuring MSCS for High Availability. See Configure MSCS for High Availability.
1 Business Continuity and Minimizing Downtime
Downtime, whether planned or unplanned, brings considerable costs. However, solutions that
ensure higher levels of availability have traditionally been costly, hard to implement, and difficult to
manage.
VMware software makes it simpler and less expensive to provide higher levels of availability for important applications. With vSphere, you can increase the baseline level of availability provided for all applications and provide higher levels of availability more easily and cost effectively. vSphere makes it possible to reduce planned downtime, prevent unplanned downtime, and recover rapidly from outages.
vSphere makes it possible for organizations to dramatically reduce planned downtime. Because workloads in a vSphere environment can be dynamically moved to different physical servers without downtime or service interruption, server maintenance can be performed without requiring application and service downtime.
These vSphere capabilities are part of virtual infrastructure and are transparent to the operating
system and applications running in virtual machines. These features can be configured and utilized
by all the virtual machines on a physical system, reducing the cost and complexity of providing
higher availability. Key availability capabilities are built into vSphere:
n Shared storage. Eliminate single points of failure by storing virtual machine files on shared storage, such as Fibre Channel or iSCSI SAN, or NAS. SAN mirroring and replication features can be used to keep updated copies of virtual disks at disaster recovery sites.
In addition to these capabilities, the vSphere HA and Fault Tolerance features can minimize
or eliminate unplanned downtime by providing rapid recovery from outages and continuous
availability, respectively.
n It protects against a server failure by restarting the virtual machines on other hosts within the
cluster.
n It protects virtual machines against network isolation by restarting them if their host becomes
isolated on the management or vSAN network. This protection is provided even if the network
has become partitioned.
Unlike other clustering solutions, vSphere HA uses the virtual infrastructure itself to protect all workloads:
n You do not need to install special software within the application or virtual machine. All
workloads are protected by vSphere HA. After vSphere HA is configured, no actions are
required to protect new virtual machines. They are automatically protected.
n You can combine vSphere HA with vSphere Distributed Resource Scheduler (DRS) to protect
against failures and to provide load balancing across the hosts within a cluster.
Minimal setup
After a vSphere HA cluster is set up, all virtual machines in the cluster get failover support
without additional configuration.
The virtual machine acts as a portable container for the applications and it can be moved
among hosts. Administrators avoid duplicate configurations on multiple machines. When you
use vSphere HA, you must have sufficient resources to fail over the number of hosts you
want to protect with vSphere HA. However, the VMware vCenter Server® system automatically
manages resources and configures clusters.
Any application running inside a virtual machine has access to increased availability. Because
the virtual machine can recover from hardware failure, all applications that start at boot
have increased availability without increased computing needs, even if the application is not
itself a clustered application. By monitoring and responding to VMware Tools heartbeats and
restarting nonresponsive virtual machines, it protects against guest operating system crashes.
If a host fails and virtual machines are restarted on other hosts, DRS can provide migration
recommendations or migrate virtual machines for balanced resource allocation. If one or both
of the source and destination hosts of a migration fail, vSphere HA can help recover from that
failure.
Fault Tolerance provides continuous availability by ensuring that the states of the Primary and
Secondary VMs are identical at any point in the instruction execution of the virtual machine.
If either the host running the Primary VM or the host running the Secondary VM fails, an
immediate and transparent failover occurs. The functioning ESXi host seamlessly becomes the
Primary VM host without losing network connections or in-progress transactions. With transparent
failover, there is no data loss and network connections are maintained. After a transparent failover
occurs, a new Secondary VM is respawned and redundancy is re-established. The entire process is
transparent and fully automated and occurs even if vCenter Server is unavailable.
n Deploy an Active node with an embedded Platform Services Controller. As part of the cloning process, the Platform Services Controller and all its services are cloned as well. As part of synchronization from the Active node to the Passive node, the Platform Services Controller on the Passive node is updated.
When failover from the Active node to the Passive node occurs, the Platform Services Controller on the Passive node is available and the complete environment is available.
n Deploy at least two Platform Services Controller instances and place them behind a load
balancer.
When failover from the Active node to the Passive node occurs, the Passive node continues to
point to the load balancer. When one of the Platform Services Controller instances becomes
unavailable, the load balancer directs requests to the second Platform Services Controller
instance.
Option Description
Basic The Basic option clones the Active node to the Passive node and witness node, and configures the nodes for
you.
If your environment meets one of the following requirements, you can use this option.
n Either the vCenter Server Appliance that becomes the Active node is managing its own ESXi host and its
own virtual machine. This configuration is sometimes called a self-managed vCenter Server.
n Or the vCenter Server Appliance is managed by another vCenter Server (management vCenter Server) and both vCenter Server instances are in the same vCenter Single Sign-On domain. That means they both use an external Platform Services Controller and both are running vSphere 6.5.
See Configure vCenter HA With the Basic Option.
Advanced The Advanced option offers more flexibility. You can use this option provided that your environment meets
hardware and software requirements.
If you select this option, you are responsible for cloning the Active node to the Passive node and the Witness
node. You must also perform some networking configuration.
See Configure vCenter HA With the Advanced Option.
If a vCenter service fails, VMware Service Lifecycle Manager restarts it. VMware Service Lifecycle Manager monitors the health of services and takes preconfigured remediation action when it detects a failure. A service is not restarted if multiple attempts to remediate it fail.
2 Creating and Using vSphere HA Clusters
vSphere HA clusters enable a collection of ESXi hosts to work together so that, as a group,
they provide higher levels of availability for virtual machines than each ESXi host can provide
individually. When you plan the creation and usage of a new vSphere HA cluster, the options you
select affect the way that cluster responds to failures of hosts or virtual machines.
Before you create a vSphere HA cluster, you should know how vSphere HA identifies host failures
and isolation and how it responds to these situations. You also should know how admission control
works so that you can choose the policy that fits your failover needs. After you establish a cluster,
you can customize its behavior with advanced options and optimize its performance by following
recommended best practices.
Note You might get an error message when you try to use vSphere HA. For information about error messages related to vSphere HA, see the VMware knowledge base article at http://kb.vmware.com/kb/1033634.
n vSphere HA Interoperability
When you create a vSphere HA cluster, a single host is automatically elected as the primary
host. The primary host communicates with vCenter Server and monitors the state of all protected
virtual machines and of the secondary hosts. Different types of host failures are possible, and
the primary host must detect and appropriately deal with the failure. The primary host must
distinguish between a failed host and one that is in a network partition or that has become network
isolated. The primary host uses network and datastore heartbeating to determine the type of
failure.
When vSphere HA is enabled for a cluster, all active hosts (those not in standby or maintenance mode, and not disconnected) participate in an election to choose the cluster's primary host. The host that mounts the greatest number of datastores has an advantage in the election. Only one primary host typically exists per cluster and all other hosts are secondary hosts. If the primary host fails, is shut down or put in standby mode, or is removed from the cluster, a new election is held.
n Monitoring the state of secondary hosts. If a secondary host fails or becomes unreachable, the
primary host identifies which virtual machines must be restarted.
n Monitoring the power state of all protected virtual machines. If one virtual machine fails, the
primary host ensures that it is restarted. Using a local placement engine, the primary host also
determines where the restart takes place.
n Acting as the vCenter Server management interface to the cluster and reporting the cluster
health state.
The secondary hosts primarily contribute to the cluster by running virtual machines locally,
monitoring their runtime states, and reporting state updates to the primary host. A primary host
can also run and monitor virtual machines. Both secondary hosts and primary hosts implement the
VM and Application Monitoring features.
One of the functions performed by the primary host is to orchestrate restarts of protected virtual
machines. A virtual machine is protected by a primary host after vCenter Server observes that the
virtual machine's power state has changed from powered off to powered on in response to a user
action. The primary host persists the list of protected virtual machines in the cluster's datastores. A
newly elected primary host uses this information to determine which virtual machines to protect.
Note If you disconnect a host from a cluster, the virtual machines registered to that host are
unprotected by vSphere HA.
The primary host monitors the liveness of the secondary hosts in the cluster. This communication
happens through the exchange of network heartbeats every second. When the primary host stops
receiving these heartbeats from a secondary host, it checks for host liveness before declaring
the host failed. The liveness check that the primary host performs is to determine whether the
secondary host is exchanging heartbeats with one of the datastores. See Datastore Heartbeating .
Also, the primary host checks whether the host responds to ICMP pings sent to its management IP
addresses.
If the primary host cannot communicate directly with the agent on a secondary host, the secondary host does not respond to ICMP pings, and the agent is not issuing heartbeats, the secondary host is viewed as failed.
The host's virtual machines are restarted on alternate hosts. If such a secondary host is exchanging
heartbeats with a datastore, the primary host assumes that the secondary host is in a network
partition or is network isolated. So, the primary host continues to monitor the host and its virtual
machines. See Network Partitions .
Host network isolation occurs when a host is still running, but it can no longer observe traffic from
vSphere HA agents on the management network. If a host stops observing this traffic, it attempts
to ping the cluster isolation addresses. If this pinging also fails, the host declares that it is isolated
from the network.
The primary host monitors the virtual machines that are running on an isolated host. If the primary
host observes that the VMs power off, and the primary host is responsible for the VMs, it restarts
them.
Note If you ensure that the network infrastructure is sufficiently redundant and that at least one
network path is always available, host network isolation is less likely to occur.
Proactive HA Failures
A Proactive HA failure occurs when a host component fails, which results in a loss of redundancy
or a noncatastrophic failure. However, the functional behavior of the VMs residing on the host is
not yet affected. For example, if a power supply on the host fails, but other power supplies are
available, that is a Proactive HA failure.
If a Proactive HA failure occurs, you can automate the remediation action taken in the vSphere
Availability section of the vSphere Web Client. The VMs on the affected host can be evacuated to
other hosts and the host is either placed in Quarantine mode or Maintenance mode.
Note Your cluster must use vSphere DRS for the Proactive HA failure monitoring to work.
The following settings apply to all virtual machines in the cluster in the case of a host failure
or isolation. You can also configure exceptions for specific virtual machines. See Customize an
Individual Virtual Machine .
Note If a virtual machine has a restart priority setting of Disabled, no host isolation response is
made.
To use the Shutdown and restart VMs setting, you must install VMware Tools in the guest
operating system of the virtual machine. Shutting down the virtual machine provides the
advantage of preserving its state. Shutting down is better than powering off the virtual machine,
which does not flush most recent changes to disk or commit transactions. Virtual machines that
are in the process of shutting down take longer to fail over while the shutdown completes. Virtual
machines that have not shut down in 300 seconds, or the time specified in the advanced option
das.isolationshutdowntimeout, are powered off.
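For administrators who manage clusters programmatically, an advanced option such as das.isolationshutdowntimeout can also be applied through the vSphere API. The following Python sketch uses the open-source pyVmomi SDK and assumes you already have a connected session and a vim.ClusterComputeResource object named cluster; it is an illustration, not a procedure from this guide.

    from pyVmomi import vim

    def set_isolation_shutdown_timeout(cluster, seconds=300):
        # Build a reconfiguration spec that only touches the vSphere HA
        # (das) advanced options; modify=True merges the change into the
        # existing cluster configuration instead of replacing it.
        option = vim.OptionValue(key="das.isolationshutdowntimeout",
                                 value=str(seconds))
        das_config = vim.cluster.DasConfigInfo(option=[option])
        spec = vim.cluster.ConfigSpecEx(dasConfig=das_config)
        return cluster.ReconfigureComputeResource_Task(spec, modify=True)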
After you create a vSphere HA cluster, you can override the default cluster settings for Restart
Priority and Isolation Response for specific virtual machines. Such overrides are useful for virtual
machines that are used for special tasks. For example, virtual machines that provide infrastructure
services like DNS or DHCP might need to be powered on before other virtual machines in the
cluster.
A virtual machine "split-brain" condition can occur when a host becomes isolated or partitioned
from a primary host and the primary host cannot communicate with it using heartbeat datastores.
In this situation, the primary host cannot determine that the host is alive and so declares it dead.
The primary host then attempts to restart the virtual machines that are running on the isolated
or partitioned host. This attempt succeeds if the virtual machines remain running on the isolated/
partitioned host and that host lost access to the virtual machines' datastores when it became
isolated or partitioned. A split-brain condition then exists because there are two instances of the
virtual machine. However, only one instance is able to read or write the virtual machine's virtual
disks. VM Component Protection can be used to prevent this split-brain condition. When you
activate VMCP with the aggressive setting, it monitors the datastore accessibility of powered-on
virtual machines, and shuts down those that lose access to their datastores.
To recover from this situation, when the host comes out of isolation and cannot reacquire the disk locks, ESXi generates a question on the virtual machine that has lost them. vSphere HA automatically answers this question, allowing the virtual machine instance that has lost the disk locks to power off, leaving just the instance that has the disk locks.
File accessibility
Before a virtual machine can be started, its files must be accessible from one of the active cluster hosts that the primary host can communicate with over the network.
If there are accessible hosts, the virtual machine must be compatible with at least one of them.
The compatibility set for a virtual machine includes the effect of any required VM-Host affinity
rules. For example, if a rule only permits a virtual machine to run on two hosts, it is considered
for placement on those two hosts.
Resource reservations
Of the hosts that the virtual machine can run on, at least one must have sufficient unreserved
capacity to meet the memory overhead of the virtual machine and any resource reservations.
Four types of reservations are considered: CPU, Memory, vNIC, and Virtual flash. Also,
sufficient network ports must be available to power on the virtual machine.
Host limits
In addition to resource reservations, a virtual machine can only be placed on a host if doing
so does not violate the maximum number of allowed virtual machines or the number of in-use
vCPUs.
Feature constraints
If the advanced option that requires vSphere HA to enforce VM-VM anti-affinity rules during failover has been set, vSphere HA does not violate such a rule. Also, vSphere HA does not violate any configured per-host limits for fault tolerant virtual machines.
If no hosts satisfy the preceding considerations, the primary host issues an event stating that
there are not enough resources for vSphere HA to start the VM and tries again when the cluster
conditions have changed. For example, if the virtual machine is not accessible, the primary host
tries again after a change in file accessibility.
When you enable VM Monitoring, the VM Monitoring service (using VMware Tools) evaluates
whether each virtual machine in the cluster is running by checking for regular heartbeats and I/O
activity from the VMware Tools process running inside the guest. If no heartbeats or I/O activity
are received, this is most likely because the guest operating system has failed or VMware Tools
is not being allocated any time to complete tasks. In such a case, the VM Monitoring service
determines that the virtual machine has failed and the virtual machine is rebooted to restore
service.
Occasionally, virtual machines or applications that are still functioning properly stop sending
heartbeats. To avoid unnecessary resets, the VM Monitoring service also monitors a virtual
machine's I/O activity. If no heartbeats are received within the failure interval, the I/O stats interval
(a cluster-level attribute) is checked. The I/O stats interval determines if any disk or network
activity has occurred for the virtual machine during the previous two minutes (120 seconds). If not,
the virtual machine is reset. This default value (120 seconds) can be changed using the advanced
option das.iostatsinterval.
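The relationship between the failure interval and the I/O stats interval can be summarized as a simple decision rule. The following Python sketch illustrates the logic described above; the function and parameter names are illustrative and are not part of any VMware API.

    def should_reset_vm(seconds_since_last_heartbeat, failure_interval,
                        io_activity_in_stats_interval):
        # Heartbeats are still arriving within the failure interval:
        # the guest is considered healthy.
        if seconds_since_last_heartbeat < failure_interval:
            return False
        # No heartbeats, but disk or network activity was observed during
        # the I/O stats interval (120 seconds by default, configurable
        # with das.iostatsinterval): do not reset.
        if io_activity_in_stats_interval:
            return False
        # No heartbeats and no I/O activity: declare the VM failed.
        return True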
To enable Application Monitoring, you must first obtain the appropriate SDK (or be using an
application that supports VMware Application Monitoring) and use it to set up customized
heartbeats for the applications you want to monitor. After you have done this, Application
Monitoring works much the same way that VM Monitoring does. If the heartbeats for an
application are not received for a specified time, its virtual machine is restarted.
You can configure the level of monitoring sensitivity. Highly sensitive monitoring results in a more
rapid conclusion that a failure has occurred. While unlikely, highly sensitive monitoring might lead
to falsely identifying failures when the virtual machine or application in question is actually still
working, but heartbeats have not been received due to factors such as resource constraints. Low
sensitivity monitoring results in longer interruptions in service between actual failures and virtual
machines being reset. Select an option that is an effective compromise for your needs.
You can also specify custom values for both monitoring sensitivity and the I/O stats interval by
selecting the Custom checkbox.
Setting    Failure Interval (seconds)    Reset Period
High       30                            1 hour
Medium     60                            24 hours
After failures are detected, vSphere HA resets virtual machines. The reset ensures that services
remain available. To avoid resetting virtual machines repeatedly for nontransient errors, by
default, virtual machines will be reset only three times during a certain configurable time interval.
After virtual machines have been reset three times, vSphere HA makes no further attempts to
reset the virtual machines after subsequent failures until after the specified time has elapsed. You
can configure the number of resets using the Maximum per-VM resets custom setting.
Note The reset statistics are cleared when a virtual machine is powered off then back on, or when
it is migrated using vMotion to another host. This causes the guest operating system to reboot, but
is not the same as a 'restart' in which the power state of the virtual machine is changed.
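The reset throttling described above can be expressed as a small rule as well. This Python sketch is illustrative only; the default of three resets and the reset window correspond to the Maximum per-VM resets setting and the reset period of the chosen sensitivity level.

    def allow_reset(previous_reset_times, now, max_resets=3, window_seconds=3600):
        # Count only the resets that happened inside the configured window;
        # once the limit is reached, further failures are left alone until
        # the window has elapsed.
        recent = [t for t in previous_reset_times if now - t <= window_seconds]
        return len(recent) < max_resets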
VM Component Protection
If VM Component Protection (VMCP) is activated, vSphere HA can detect datastore accessibility
failures and provide automated recovery for affected virtual machines.
VMCP provides protection against datastore accessibility failures that can affect a virtual machine
running on a host in a vSphere HA cluster. When a datastore accessibility failure occurs, the
affected host can no longer access the storage path for a specific datastore. You can determine
the response that vSphere HA will make to such a failure, ranging from the creation of event
alarms to virtual machine restarts on other hosts.
Note When you use the VM Component Protection feature, your ESXi hosts must be version 6.0
or higher.
Types of Failure
There are two types of datastore accessibility failure:
PDL
PDL (Permanent Device Loss) is an unrecoverable loss of accessibility that occurs when a
storage device reports the datastore is no longer accessible by the host. This condition cannot
be reverted without powering off virtual machines.
APD
APD (All Paths Down) represents a transient or unknown accessibility loss or any other
unidentified delay in I/O processing. This type of accessibility issue is recoverable.
Configuring VMCP
VM Component Protection is configured in the vSphere Web Client. Go to the Configure tab and
click vSphere Availability and Edit. Under Failures and Responses you can select Datastore with
PDL or Datastore with APD. The storage protection levels you can choose and the virtual machine
remediation actions available differ depending on the type of datastore accessibility failure.
PDL Failures
Under Datastore with PDL, you can select Issue events or Power off and restart VMs.
APD Failures
The response to APD events is more complex and accordingly the configuration is more
fine-grained. You can select Issue events, Power off and restart VMs--conservative restart policy, or Power off and restart VMs--aggressive restart policy.
Note If either the Host Monitoring or VM Restart Priority settings are deactivated, VMCP cannot
perform virtual machine restarts. Storage health can still be monitored and events can be issued,
however.
Network Partitions
When a management network failure occurs for a vSphere HA cluster, a subset of the cluster's
hosts might be unable to communicate over the management network with the other hosts.
Multiple partitions can occur in a cluster.
A partitioned cluster leads to degraded virtual machine protection and cluster management
functionality. Correct the partitioned cluster as soon as possible.
n Virtual machine protection. vCenter Server allows a virtual machine to be powered on, but
it can be protected only if it is running in the same partition as the primary host that is
responsible for it. The primary host must be communicating with vCenter Server. A primary
host is responsible for a virtual machine if it has exclusively locked a system-defined file on the
datastore that contains the virtual machine's configuration file.
n Cluster management. vCenter Server can communicate with the primary host, but only a
subset of the secondary hosts. As a result, changes in configuration that affect vSphere HA
might not take effect until after the partition is resolved. This failure could result in one of the
partitions operating under the old configuration, while another uses the new settings.
Datastore Heartbeating
When the primary host in a VMware vSphere® High Availability cluster cannot communicate with
a secondary host over the management network, the primary host uses datastore heartbeating to
determine whether the secondary host has failed, is in a network partition, or is network isolated.
If the secondary host has stopped datastore heartbeating, it is considered to have failed and its
virtual machines are restarted elsewhere.
VMware vCenter Server® selects a preferred set of datastores for heartbeating. This selection is
made to maximize the number of hosts that have access to a heartbeating datastore and minimize
the likelihood that the datastores are backed by the same LUN or NFS server.
You can use the advanced option das.heartbeatdsperhost to change the number of heartbeat
datastores selected by vCenter Server for each host. The default is two and the maximum valid
value is five.
vSphere HA creates a directory at the root of each datastore that is used for both datastore
heartbeating and for persisting the set of protected virtual machines. The name of the directory
is .vSphere-HA. Do not delete or modify the files stored in this directory, because this can have
an impact on operations. Because more than one cluster might use a datastore, subdirectories for
this directory are created for each cluster. Root owns these directories and files and only root can
read and write to them. The disk space used by vSphere HA depends on several factors including
which VMFS version is in use and the number of hosts that use the datastore for heartbeating.
With vmfs3, the maximum usage is 2GB and the typical usage is 3MB. With vmfs5, the maximum
and typical usage is 3MB. vSphere HA use of the datastores adds negligible overhead and has no
performance impact on other datastore operations.
vSphere HA limits the number of virtual machines that can have configuration files on a single
datastore. See Configuration Maximums for updated limits. If you place more than this number of
virtual machines on a datastore and power them on, vSphere HA protects virtual machines only up
to the limit.
Note A vSAN datastore cannot be used for datastore heartbeating. Therefore, if no other shared
storage is accessible to all hosts in the cluster, there can be no heartbeat datastores in use.
However, if you have storage that is accessible by an alternate network path independent of the
vSAN network, you can use it to set up a heartbeat datastore.
vSphere HA Security
vSphere HA is enhanced by several security features.
vSphere HA uses TCP and UDP port 8182 for agent-to-agent communication. The firewall
ports open and close automatically to ensure they are open only when needed.
Detailed logging
The location where vSphere HA places log files depends on the version of host.
n For ESXi 5.x hosts, vSphere HA writes to syslog only by default, so logs are placed where
syslog is configured to put them. The log file names for vSphere HA are prepended with
fdm, fault domain manager, which is a service of vSphere HA.
n For legacy ESXi 4.x hosts, vSphere HA writes to /var/log/vmware/fdm on local disk, as
well as syslog if it is configured.
vSphere HA logs onto the vSphere HA agents using a user account, vpxuser, created
by vCenter Server. This account is the same account used by vCenter Server to
manage the host. vCenter Server creates a random password for this account and
changes the password periodically. The time period is set by the vCenter Server
VirtualCenter.VimPasswordExpirationInDays setting. Users with administrative privileges on
the root folder of the host can log in to the agent.
Secure communication
All communication between vCenter Server and the vSphere HA agent is done over SSL.
Agent-to-agent communication also uses SSL except for election messages, which occur over
UDP. Election messages are verified over SSL so that a rogue agent can prevent only the
host on which the agent is running from being elected as a primary host. In this case, a
configuration issue for the cluster is issued so the user is aware of the problem.
vSphere HA requires that each host have a verified SSL certificate. Each host generates
a self-signed certificate when it is booted for the first time. This certificate can then be
regenerated or replaced with one issued by an authority. If the certificate is replaced, vSphere
HA needs to be reconfigured on the host. If a host becomes disconnected from vCenter
Server after its certificate is updated and the ESXi or ESX Host agent is restarted, then
vSphere HA is automatically reconfigured when the host is reconnected to vCenter Server.
If the disconnection does not occur because vCenter Server host SSL certificate verification is
deactivated at the time, verify the new certificate and reconfigure vSphere HA on the host.
Admission control imposes constraints on resource usage. Any action that might violate these
constraints is not permitted. Actions that might be disallowed include the following examples:
The basis for vSphere HA admission control is how many host failures your cluster is allowed to
tolerate and still guarantee failover. The host failover capacity can be set in three ways:
n Cluster resource percentage
n Slot policy (powered-on VMs)
n Dedicated failover hosts
Note vSphere HA admission control can be deactivated. However, without it you have no
assurance that the expected number of virtual machines can be restarted after a failure. Do not
permanently deactivate admission control.
Regardless of the admission control option chosen, a VM resource reduction threshold also exists.
You use this setting to specify the percentage of resource degradation to tolerate, but it is not
available unless vSphere DRS is activated.
The resource reduction calculation is checked for both CPU and memory. It considers a virtual machine's reserved memory and memory overhead to decide whether to permit it to power on, migrate, or have reservation changes. The actual memory consumed by the virtual machine is not considered in the calculation because the memory reservation does not always correlate with the actual memory usage of the virtual machine. If the actual usage is more than the reserved memory, insufficient failover capacity might be available, resulting in performance degradation on failover.
Setting a performance reduction threshold determines when a configuration issue is raised, as the following examples (and the sketch after them) show:
n If you reduce the threshold to 0%, a warning is generated when cluster usage exceeds the
available capacity.
n If you reduce the threshold to 20%, the performance reduction that can be tolerated is
calculated as performance reduction = current utilization * 20%. When the current
usage minus the performance reduction exceeds the available capacity, a configuration notice
is issued.
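A minimal sketch of the threshold rule, in Python. The names are illustrative; only the formula performance reduction = current utilization * threshold comes from this guide.

    def configuration_notice_needed(current_utilization, available_capacity,
                                    threshold_pct):
        # Tolerated performance reduction, as defined above.
        tolerated_reduction = current_utilization * (threshold_pct / 100.0)
        # A configuration notice is issued when the remaining demand still
        # exceeds the available capacity.
        return (current_utilization - tolerated_reduction) > available_capacity

    # With a 0% threshold, any utilization above the available capacity
    # raises the issue; with 20%, one fifth of the load may be shed first.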
With this type of admission control, vSphere HA ensures that a specified percentage of aggregate
CPU and memory resources are reserved for failover.
With the cluster resources percentage option, vSphere HA enforces admission control as follows:
1 Calculates the total resource requirements for all powered-on virtual machines in the cluster.
2 Calculates the total host resources available for virtual machines in the cluster.
3 Calculates the Current CPU Failover Capacity and Current Memory Failover Capacity for the cluster.
4 Determines if either the Current CPU Failover Capacity or Current Memory Failover Capacity is less than the corresponding Configured Failover Capacity (provided by the user).
vSphere HA uses the actual reservations of the virtual machines. If a virtual machine does not
have reservations, meaning that the reservation is 0, a default of 0MB memory and 32MHz CPU is
applied.
Note The cluster resources percentage option for admission control also checks that there are at
least two vSphere HA-enabled hosts in the cluster (excluding hosts that are entering maintenance
mode). If there is only one vSphere HA-enabled host, an operation is not allowed, even if there is
a sufficient percentage of resources available. The reason for this extra check is that vSphere HA
cannot perform failover if there is only a single host in the cluster.
n The CPU component by summing the CPU reservations of the powered-on virtual machines. If
you have not specified a CPU reservation for a virtual machine, it is assigned a default value of
32MHz (this value can be changed using the das.vmcpuminmhz advanced option).
n The memory component by summing the memory reservation (plus memory overhead) of
each powered-on virtual machine.
The total host resources available for virtual machines is calculated by adding the hosts' CPU
and memory resources. These amounts are those contained in the host's root resource pool, not
the total physical resources of the host. Resources being used for virtualization purposes are not
included. Only hosts that are connected, not in maintenance mode, and have no vSphere HA
errors are considered.
The Current CPU Failover Capacity is computed by subtracting the total CPU resource
requirements from the total host CPU resources and dividing the result by the total host CPU
resources. The Current Memory Failover Capacity is calculated similarly.
n The cluster is comprised of three hosts, each with a different amount of available CPU and
memory resources. The first host (H1) has 9GHz of available CPU resources and 9GB of
available memory, while Host 2 (H2) has 9GHz and 6GB and Host 3 (H3) has 6GHz and 6GB.
n There are five powered-on virtual machines in the cluster with differing CPU and memory
requirements. VM1 needs 2GHz of CPU resources and 1GB of memory, while VM2 needs 2GHz
and 1GB, VM3 needs 1GHz and 2GB, VM4 needs 1GHz and 1GB, and VM5 needs 1GHz and 1GB.
n The Configured Failover Capacity for CPU and Memory are both set to 25%.
Figure 2-1. Admission Control Example with Percentage of Cluster Resources Reserved Policy
VM1: 2GHz, 1GB; VM2: 2GHz, 1GB; VM3: 1GHz, 2GB; VM4: 1GHz, 1GB; VM5: 1GHz, 1GB
Total resource requirements: 7GHz, 6GB
H1: 9GHz, 9GB; H2: 9GHz, 6GB; H3: 6GHz, 6GB
The total resource requirements for the powered-on virtual machines are 7GHz and 6GB. The total
host resources available for virtual machines is 24GHz and 21GB. Based on this, the Current CPU
Failover Capacity is 70% ((24GHz - 7GHz)/24GHz). Similarly, the Current Memory Failover Capacity
is 71% ((21GB-6GB)/21GB).
Because the cluster's Configured Failover Capacity is set to 25%, 45% of the cluster's total CPU
resources and 46% of the cluster's memory resources are still available to power on additional
virtual machines.
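The arithmetic of this example can be reproduced directly. The following Python sketch uses the host and virtual machine figures from Figure 2-1; it is only a restatement of the calculation above.

    # (CPU GHz, memory GB) reservations of the powered-on virtual machines
    vms = [(2, 1), (2, 1), (1, 2), (1, 1), (1, 1)]
    # (CPU GHz, memory GB) available in each host's root resource pool
    hosts = [(9, 9), (9, 6), (6, 6)]

    vm_cpu = sum(cpu for cpu, _ in vms)        # 7 GHz
    vm_mem = sum(mem for _, mem in vms)        # 6 GB
    host_cpu = sum(cpu for cpu, _ in hosts)    # 24 GHz
    host_mem = sum(mem for _, mem in hosts)    # 21 GB

    cpu_failover_capacity = (host_cpu - vm_cpu) / host_cpu   # ~0.70 (70%)
    mem_failover_capacity = (host_mem - vm_mem) / host_mem   # ~0.71 (71%)

    configured_failover_capacity = 0.25
    spare_cpu = cpu_failover_capacity - configured_failover_capacity  # ~0.45
    spare_mem = mem_failover_capacity - configured_failover_capacity  # ~0.46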
Using the slot policy, vSphere HA performs admission control in the following way:
1 Calculates the slot size.
A slot is a logical representation of memory and CPU resources. By default, it is sized to satisfy the requirements for any powered-on virtual machine in the cluster.
2 Determines how many slots each host in the cluster can hold.
3 Determines the Current Failover Capacity of the cluster.
This is the number of hosts that can fail and still leave enough slots to satisfy all of the powered-on virtual machines.
4 Determines whether the Current Failover Capacity is less than the Configured Failover Capacity (provided by the user).
Note You can set a specific slot size for both CPU and memory in the admission control section of
the vSphere HA settings in the vSphere Web Client.
n vSphere HA calculates the CPU component by obtaining the CPU reservation of each
powered-on virtual machine and selecting the largest value. If you have not specified a CPU
reservation for a virtual machine, it is assigned a default value of 32MHz. You can change this
value by using the das.vmcpuminmhz advanced option.
n vSphere HA calculates the memory component by obtaining the memory reservation, plus
memory overhead, of each powered-on virtual machine and selecting the largest value. There
is no default value for the memory reservation.
If your cluster contains any virtual machines that have much larger reservations than the others,
they will distort slot size calculation. To avoid this, you can specify an upper bound for the CPU or
memory component of the slot size by using the das.slotcpuinmhz or das.slotmeminmb advanced
options, respectively. See vSphere HA Advanced Options.
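The effect of the das.slotcpuinmhz and das.slotmeminmb bounds can be sketched as follows. The function name and inputs are illustrative; the reservations passed in are assumed to already include the 32MHz CPU default and the per-VM memory overhead described above.

    def effective_slot_size(cpu_reservations_mhz, mem_reservations_mb,
                            slot_cpu_cap_mhz=None, slot_mem_cap_mb=None):
        # The largest reservation defines each slot component...
        slot_cpu = max(cpu_reservations_mhz)
        slot_mem = max(mem_reservations_mb)
        # ...optionally bounded by the advanced options, so a single large
        # virtual machine cannot distort the slot size for the whole cluster.
        if slot_cpu_cap_mhz is not None:
            slot_cpu = min(slot_cpu, slot_cpu_cap_mhz)
        if slot_mem_cap_mb is not None:
            slot_mem = min(slot_mem, slot_mem_cap_mb)
        return slot_cpu, slot_mem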
You can also determine the risk of resource fragmentation in your cluster by viewing the number
of virtual machines that require multiple slots. This can be calculated in the admission control
section of the vSphere HA settings in the vSphere Web Client. Virtual machines might require
multiple slots if you have specified a fixed slot size or a maximum slot size using advanced options.
The maximum number of slots that each host can support is then determined. To do this, the
host’s CPU resource amount is divided by the CPU component of the slot size and the result is
rounded down. The same calculation is made for the host's memory resource amount. These two
numbers are compared and the smaller number is the number of slots that the host can support.
The Current Failover Capacity is computed by determining how many hosts (starting from the
largest) can fail and still leave enough slots to satisfy the requirements of all powered-on virtual
machines.
n The cluster is comprised of three hosts, each with a different amount of available CPU and
memory resources. The first host (H1) has 9GHz of available CPU resources and 9GB of
available memory, while Host 2 (H2) has 9GHz and 6GB and Host 3 (H3) has 6GHz and 6GB.
n There are five powered-on virtual machines in the cluster with differing CPU and memory
requirements. VM1 needs 2GHz of CPU resources and 1GB of memory, while VM2 needs 2GHz
and 1GB, VM3 needs 1GHz and 2GB, VM4 needs 1GHz and 1GB, and VM5 needs 1GHz and 1GB.
Figure 2-2. Admission Control Example with Host Failures Cluster Tolerates Policy
VM1: 2GHz, 1GB; VM2: 2GHz, 1GB; VM3: 1GHz, 2GB; VM4: 1GHz, 1GB; VM5: 1GHz, 1GB
Slot size: 2GHz, 2GB
H1: 9GHz, 9GB; H2: 9GHz, 6GB; H3: 6GHz, 6GB
Six slots remain if H1 fails.
1 Slot size is calculated by comparing both the CPU and memory requirements of the virtual
machines and selecting the largest.
The largest CPU requirement (shared by VM1 and VM2) is 2GHz, while the largest memory
requirement (for VM3) is 2GB. Based on this, the slot size is 2GHz CPU and 2GB memory.
H1 can support four slots. H2 can support three slots (which is the smaller of 9GHz/2GHz and
6GB/2GB) and H3 can also support three slots.
The largest host is H1 and if it fails, six slots remain in the cluster, which is sufficient for all five
of the powered-on virtual machines. If both H1 and H2 fail, only three slots remain, which is
insufficient. Therefore, the Current Failover Capacity is one.
The cluster has one available slot (the six slots on H2 and H3 minus the five used slots).
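The slot calculation of this example can also be reproduced step by step. The following Python sketch mirrors the figures from Figure 2-2 and is only a restatement of the reasoning above.

    vms = [(2, 1), (2, 1), (1, 2), (1, 1), (1, 1)]   # (CPU GHz, memory GB)
    hosts = [(9, 9), (9, 6), (6, 6)]                 # (CPU GHz, memory GB)

    # Slot size: the largest CPU and the largest memory requirement.
    slot_cpu = max(cpu for cpu, _ in vms)            # 2 GHz
    slot_mem = max(mem for _, mem in vms)            # 2 GB

    # Slots per host: the smaller of the CPU-based and memory-based counts.
    slots_per_host = [min(h_cpu // slot_cpu, h_mem // slot_mem)
                      for h_cpu, h_mem in hosts]     # [4, 3, 3]

    # Current Failover Capacity: how many of the largest hosts can fail
    # while enough slots remain for all powered-on virtual machines.
    needed_slots = len(vms)                          # 5
    remaining = sorted(slots_per_host)               # smallest first
    failover_capacity = 0
    while remaining and sum(remaining[:-1]) >= needed_slots:
        remaining.pop()                              # remove the largest host
        failover_capacity += 1
    # failover_capacity is 1, and 6 - 5 = 1 slot remains available.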
With dedicated failover hosts admission control, when a host fails, vSphere HA attempts to restart
its virtual machines on any of the specified failover hosts. If restarting the virtual machines is not
possible, for example the failover hosts have failed or have insufficient resources, then vSphere HA
attempts to restart those virtual machines on other hosts in the cluster.
To ensure that spare capacity is available on a failover host, you are prevented from powering on
virtual machines or using vMotion to migrate virtual machines to a failover host. Also, DRS does
not use a failover host for load balancing.
Note If you use dedicated failover hosts admission control and designate multiple failover hosts,
DRS does not attempt to enforce VM-VM affinity rules for virtual machines that are running on
failover hosts.
vSphere HA Interoperability
vSphere HA can interoperate with many other features, such as DRS and vSAN.
Before configuring vSphere HA, you should be aware of the limitations of its interoperability with
these other features or products.
To use vSphere HA with vSAN, you must be aware of certain considerations and limitations for the
interoperability of these two features.
Networking Differences
vSAN has its own network. If vSAN and vSphere HA are activated for the same cluster, the HA
interagent traffic flows over this storage network rather than the management network. vSphere
HA uses the management network only if vSAN is deactivated. vCenter Server chooses the
appropriate network if vSphere HA is configured on a host.
If you change the vSAN network configuration, the vSphere HA agents do not automatically pick
up the new network settings. To make changes to the vSAN network, you must take the following
steps in the vSphere Web Client:
3 Right-click all hosts in the cluster and select Reconfigure for vSphere HA.
Table 2-2. vSphere HA Networking Differences shows the differences in vSphere HA networking whether or not vSAN is used.
Heartbeat datastores: with vSAN enabled, any datastore mounted to more than one host, but not vSAN datastores; with vSAN disabled, any datastore mounted to more than one host.
Host declared isolated: with vSAN enabled, isolation addresses not pingable and the vSAN storage network inaccessible; with vSAN disabled, isolation addresses not pingable and the management network inaccessible.
For example, if the vSAN rule set allows for only two failures, the vSphere HA admission control
policy must reserve capacity that is equivalent to only one or two host failures. If you are using
the Percentage of Cluster Resources Reserved policy for a cluster that has eight hosts, you must
not reserve more than 25% of the cluster resources. In the same cluster, with the Host Failures
Cluster Tolerates policy, the setting must not be higher than two hosts. If vSphere HA reserves less
capacity, failover activity might be unpredictable. Reserving too much capacity overly constrains
the powering on of virtual machines and intercluster vSphere vMotion migrations.
When vSphere HA performs failover and restarts virtual machines on different hosts, its first
priority is the immediate availability of all virtual machines. After the virtual machines have
been restarted, those hosts on which they were powered on might be heavily loaded, while
other hosts are comparatively lightly loaded. vSphere HA uses the virtual machine's CPU and
memory reservation and overhead memory to determine if a host has enough spare capacity to
accommodate the virtual machine.
In a cluster using DRS and vSphere HA with admission control turned on, virtual machines might
not be evacuated from hosts entering maintenance mode. This behavior occurs because of the
resources reserved for restarting virtual machines in the event of a failure. You must manually
migrate the virtual machines off of the hosts using vMotion.
In some scenarios, vSphere HA might not be able to fail over virtual machines because of resource
constraints. This can occur for several reasons.
n VM-Host affinity (required) rules might limit the hosts on which certain virtual machines can be
placed.
n There might be sufficient aggregate resources but these can be fragmented across multiple
hosts so that they cannot be used by virtual machines for failover.
In such cases, vSphere HA can use DRS to try to adjust the cluster (for example, by bringing hosts
out of standby mode or migrating virtual machines to defragment the cluster resources) so that HA
can perform the failovers.
If vSphere Distributed Power Management (DPM) is in manual mode, you might need to confirm host power-on recommendations. Similarly, if DRS is in manual mode, you might need to confirm migration recommendations.
If you are using VM-Host affinity rules that are required, be aware that these rules cannot be
violated. vSphere HA does not perform a failover if doing so would violate such a rule.
For more information about DRS, see the vSphere Resource Management documentation.
The two types of rules for which you can specify vSphere HA failover behavior are the following:
n VM anti-affinity rules force specified virtual machines to remain apart during failover actions.
n VM-Host affinity rules place specified virtual machines on a particular host or a member of a
defined group of hosts during failover actions.
When you edit a DRS affinity rule, you must use vSphere HA advanced options to enforce the
desired failover behavior for vSphere HA.
n HA must respect VM anti-affinity rules during failover -- When the advanced option for VM
anti-affinity rules is set, vSphere HA does not fail over a virtual machine if doing so violates a
rule. Instead, vSphere HA issues an event reporting there are insufficient resources to perform
the failover.
n HA should respect VM to Host affinity rules during failover -- vSphere HA attempts to place VMs with this rule on the specified hosts if at all possible.
VM Component Protection
VM Component Protection (VMCP) has the following interoperability issues and limitations:
n VMCP does not support vSphere Fault Tolerance. If VMCP is activated for a cluster using Fault
Tolerance, the affected FT virtual machines will automatically receive overrides that deactivate
VMCP.
n VMCP does not detect or respond to accessibility issues for files located on vSAN datastores. If
a virtual machine's configuration and VMDK files are located only on vSAN datastores, they are
not protected by VMCP.
n VMCP does not detect or respond to accessibility issues for files located on Virtual Volume
datastores. If a virtual machine's configuration and VMDK files are located only on Virtual
Volume datastores, they are not protected by VMCP.
n VMCP does not protect against inaccessible Raw Device Mappings (RDMs).
IPv6
vSphere HA can be used with IPv6 network configurations, which are fully supported if the
following considerations are observed:
n The management network for all hosts in the cluster must be configured with the same IP
version, either IPv6 or IPv4. vSphere HA clusters cannot contain both types of networking
configuration.
n The network isolation addresses used by vSphere HA must match the IP version used by the
cluster for its management network.
In addition to the previous restrictions, the following IPv6 address types are not supported for use with the vSphere HA isolation address or management network: link-local, ORCHID, and link-local with zone indices. Also, the loopback address type cannot be used for the management network.
Note To upgrade an existing IPv4 deployment to IPv6, you must first deactivate vSphere HA.
When you create a vSphere HA cluster, you must configure a number of settings that determine
how the feature works. Before you do this, identify your cluster's nodes. These nodes are the ESXi
hosts that will provide the resources to support virtual machines and that vSphere HA will use
for failover protection. You should then determine how those nodes are to be connected to one
another and to the shared storage where your virtual machine data resides. After that networking
architecture is in place, you can add the hosts to the cluster and finish configuring vSphere HA.
You can activate and configure vSphere HA before you add host nodes to the cluster. However,
until the hosts are added, your cluster is not fully operational and some of the cluster settings are
unavailable. For example, the Specify a Failover Host admission control policy is unavailable until
there is a host that can be designated as the failover host.
Note The Virtual Machine Startup and Shutdown (automatic startup) feature is deactivated for
all virtual machines residing on hosts that are in (or moved into) a vSphere HA cluster. Automatic
startup is not supported when used with vSphere HA.
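vSphere HA can also be enabled on an existing cluster through the vSphere API, in addition to the vSphere Web Client procedures described later in this chapter. The Python sketch below uses the open-source pyVmomi SDK and assumes you already hold a vim.ClusterComputeResource object named cluster; the chosen monitoring values are examples, not recommendations from this guide.

    from pyVmomi import vim

    def enable_vsphere_ha(cluster):
        # Turn on vSphere HA with host monitoring and VM monitoring;
        # other cluster settings are left untouched because modify=True
        # merges this spec into the existing configuration.
        das_config = vim.cluster.DasConfigInfo(
            enabled=True,
            hostMonitoring="enabled",
            vmMonitoring="vmMonitoringOnly",
        )
        spec = vim.cluster.ConfigSpecEx(dasConfig=das_config)
        return cluster.ReconfigureComputeResource_Task(spec, modify=True)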
vSphere HA Checklist
The vSphere HA checklist contains requirements that you must be aware of before creating and
using a vSphere HA cluster.
Review this list before you set up a vSphere HA cluster. For more information, follow the
appropriate cross reference.
n All hosts must be configured with static IP addresses. If you are using DHCP, you must ensure
that the address for each host persists across reboots.
n All hosts must have at least one management network in common. The best practice is to have
at least two management networks in common. You should use the VMkernel network with the
Management traffic checkbox enabled. The networks must be accessible to each other and
vCenter Server and the hosts must be accessible to each other on the management networks.
See Best Practices for Networking.
n To ensure that any virtual machine can run on any host in the cluster, all hosts must have
access to the same virtual machine networks and datastores. Similarly, virtual machines must
be located on shared, not local, storage otherwise they cannot be failed over in the case of a
host failure.
n For VM Monitoring to work, VMware Tools must be installed. See VM and Application
Monitoring.
n vSphere HA supports both IPv4 and IPv6. See Other vSphere HA Interoperability Issues for
considerations when using IPv6.
n For VM Component Protection to work, hosts must have the All Paths Down (APD) Timeout
feature enabled.
n To use VM Component Protection, clusters must contain ESXi 6.0 hosts or later.
n Only vSphere HA clusters that contain ESXi 6.0 or later hosts can be used to enable VMCP.
Clusters that contain hosts from an earlier release cannot enable VMCP, and such hosts cannot
be added to a VMCP-enabled cluster.
n If your cluster uses Virtual Volume datastores, when vSphere HA is enabled a configuration
Virtual Volume is created on each datastore by vCenter Server. In these containers, vSphere
HA stores the files it uses to protect virtual machines. vSphere HA does not function correctly
if you delete these containers. Only one container is created per Virtual Volume datastore.
Prerequisites
n Verify that all virtual machines and their configuration files reside on shared storage.
n Verify that the hosts are configured to access the shared storage so that you can power on the
virtual machines by using different hosts in the cluster.
n Verify that hosts are configured to have access to the virtual machine network.
n Verify that you are using redundant management network connections for vSphere HA. For
information about setting up network redundancy, see Best Practices for Networking.
n Verify that you have configured hosts with at least two datastores to provide redundancy for
vSphere HA datastore heartbeating.
n Connect vSphere Web Client to vCenter Server by using an account with cluster administrator
permissions.
Procedure
1 In the vSphere Web Client, browse to the data center where you want the cluster to reside and
click Create a Cluster.
4 Based on your plan for the resources and networking architecture of the cluster, use the
vSphere Web Client to add hosts to the cluster.
d Select Turn ON Proactive HA to allow proactive migrations of VMs from hosts on which a
provider has notified a health degradation.
With Host Monitoring enabled, hosts in the cluster can exchange network heartbeats and
vSphere HA can take action when it detects failures. Host Monitoring is required for the
vSphere Fault Tolerance recovery process to work properly.
Select VM Monitoring Only to restart individual virtual machines if their heartbeats are not
received within a set time. You can also select VM and Application Monitoring to enable
application monitoring.
8 Click OK.
Results
What to do next
n Admission Control
n Heartbeat Datastores
n Advanced Options
In the vSphere Web Client, you can configure the following vSphere HA settings:
Failures and Responses
Provide settings here for host failure responses, host isolation, VM monitoring, and VM Component Protection.
Proactive HA Failures and Responses
Provide settings for how Proactive HA responds when a provider has notified its health degradation to vCenter, indicating a partial failure of that host.
Admission Control
Activate or deactivate admission control for the vSphere HA cluster and choose a policy for
how it is enforced.
Heartbeat Datastores
Specify preferences for the datastores that vSphere HA uses for datastore heartbeating.
Advanced Options
In this part of the vSphere Web Client, you can determine the specific responses the vSphere
HA cluster has for host failures and isolation. You can also configure VM Component Protection
(VMCP) actions when Permanent Device Loss (PDL) and All Paths Down (APD) situations occur
and you can enable VM monitoring.
Procedure
4 Enable VM Monitoring
You can turn on VM and Application Monitoring and also set the monitoring sensitivity for
your vSphere HA cluster.
Procedure
4 Click Failures and Responses and then expand Host Failure Response.
Option Description
Failure Response If you select Disabled, this setting turns off host monitoring and VMs are not
restarted when host failures occur. If Restart VMs is selected, VMs are failed
over based on their restart priority when a host fails.
Default VM Restart Priority The restart priority determines the order in which virtual machines are
restarted when the host fails. Higher priority virtual machines are started first.
If multiple hosts fail, all virtual machines are migrated from the first host in
order of priority, then all virtual machines from the second host in order of
priority, and so on.
VM Dependency Restart Condition A specific condition must be selected as well as a delay after that condition
has been met, before vSphere HA is allowed to continue to the next VM
restart priority.
6 Click OK.
Results
Procedure
4 Click Failures and Responses and expand Response for Host Isolation.
5 To configure the host isolation response, select Disabled, Shut down and restart VMs, or
Power off and restart VMs.
6 Click OK.
Results
Procedure
4 Click Failures and Responses, and expand either Datastore with PDL or Datastore with APD.
5 If you clicked Datastore with PDL, you can set the VMCP failure response for this type of
issue, either Disabled, Issue Events, or Power off and restart VMs.
6 If you clicked Datastore with APD, you can set the VMCP failure response for this type
of issue, either Disabled, Issue Events, Power off and restart VMs--Conservative restart
policy, or Power off and restart VMs--Aggressive restart policy. You can also set Response
recovery, which is the number of minutes that VMCP waits before taking action.
7 Click OK.
Results
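A comparable pyVmomi sketch for the VMCP defaults, again assuming the cluster object from the earlier connection example; the property names follow the vSphere 6.x API's VM Component Protection settings object, and the values approximate the Power off and restart VMs responses described above.

# Sketch: cluster-wide VM Component Protection defaults for PDL and APD.
vmcp = vim.cluster.VmComponentProtectionSettings(
    vmStorageProtectionForPDL='restartAggressive',    # Power off and restart VMs on PDL
    vmStorageProtectionForAPD='restartConservative',  # conservative restart policy on APD
    vmTerminateDelayForAPDSec=180)                    # response recovery delay in seconds

das = vim.cluster.DasConfigInfo()
das.vmComponentProtecting = 'enabled'
das.defaultVmSettings = vim.cluster.DasVmSettings(
    vmComponentProtectionSettings=vmcp)

cluster.ReconfigureComputeResource_Task(
    vim.cluster.ConfigSpecEx(dasConfig=das), modify=True)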
Enable VM Monitoring
You can turn on VM and Application Monitoring and also set the monitoring sensitivity for your
vSphere HA cluster.
Procedure
These settings turn on VMware Tools heartbeats and application heartbeats, respectively.
6 To set the heartbeat monitoring sensitivity, move the slider between Low and High or select
Custom to provide custom settings.
7 Click OK.
Results
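As a rough programmatic equivalent, the following pyVmomi sketch turns on VM and Application Monitoring with custom sensitivity values; the numbers shown approximate the High preset and are assumptions to adjust as needed.

# Sketch: VM and Application Monitoring with custom sensitivity values.
das = vim.cluster.DasConfigInfo()
das.vmMonitoring = 'vmAndAppMonitoring'   # or 'vmMonitoringOnly' / 'vmMonitoringDisabled'
das.defaultVmSettings = vim.cluster.DasVmSettings(
    vmToolsMonitoringSettings=vim.cluster.VmToolsMonitoringSettings(
        vmMonitoring='vmAndAppMonitoring',
        failureInterval=30,      # seconds without heartbeats before a VM is reset
        minUpTime=120,           # seconds of uptime before monitoring starts
        maxFailures=3,           # resets allowed within the failure window
        maxFailureWindow=3600))  # failure window in seconds; -1 means no window

cluster.ReconfigureComputeResource_Task(
    vim.cluster.ConfigSpecEx(dasConfig=das), modify=True)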
Configure Proactive HA
You can configure how Proactive HA responds when a provider has notified its health degradation
to vCenter, indicating a partial failure of that host.
Procedure
Automation Level: Determine whether host quarantine or maintenance mode and VM migrations are recommendations or automatic.
n Manual. vCenter Server suggests migration recommendations for virtual machines.
n Automated. Virtual machines are migrated to healthy hosts and degraded hosts are entered into quarantine or maintenance mode depending on the configured Proactive HA automation level.
To enable Proactive HA providers for this cluster, select the check boxes. Providers appear
when their corresponding vSphere Web Client plugin has been installed and the providers
monitor every host in the cluster. To view or edit the failure conditions supported by the
provider, click the edit link.
7 Click OK.
The Admission Control page appears only if you activated vSphere HA.
Procedure
5 Select a number for the Host failures cluster tolerates. This is the maximum number of host
failures that the cluster can recover from or guarantees failover for.
Cluster resource percentage: Specify a percentage of the cluster's CPU and memory resources to reserve as spare capacity to support failovers.
Slot Policy (powered-on VMs): Select a slot size policy that covers all powered on VMs or is a fixed size. You can also calculate how many VMs require multiple slots.
Dedicated failover hosts: Select hosts to use for failover actions. Failovers can still occur on other hosts in the cluster if a default failover host does not have enough resources.
Disabled: Select this option to deactivate admission control and allow virtual machine power ons that violate availability constraints.
This setting determines what percentage of performance degradation the VMs in the cluster
are allowed to tolerate during a failure.
8 Click OK.
Results
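For automation, the Cluster resource percentage policy maps to a pyVmomi admission control policy object, as in this sketch; the 25 percent reservations and the performance degradation value are placeholders, and the sketch assumes the cluster object from the earlier connection example.

# Sketch: admission control using the cluster resource percentage policy.
policy = vim.cluster.FailoverResourcesAdmissionControlPolicy(
    cpuFailoverResourcesPercent=25,
    memoryFailoverResourcesPercent=25,
    resourceReductionToToleratePercent=100)  # performance degradation VMs tolerate

das = vim.cluster.DasConfigInfo()
das.admissionControlEnabled = True
das.admissionControlPolicy = policy

cluster.ReconfigureComputeResource_Task(
    vim.cluster.ConfigSpecEx(dasConfig=das), modify=True)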
You can specify the datastores that you want to be used for datastore heartbeating.
Procedure
4 Click Heartbeat Datastores to display the configuration options for datastore heartbeating.
5 To instruct vSphere HA about how to select the datastores and how to treat your preferences,
select from the following options.
Table 2-3.
Use datastores from the specified list and complement automatically if needed
6 In the Available heartbeat datastores pane, select the datastores that you want to use for
heartbeating.
The listed datastores are shared by more than one host in the vSphere HA cluster. When a
datastore is selected, the lower pane displays all the hosts in the vSphere HA cluster that can
access it.
7 Click OK.
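A pyVmomi sketch of the same preference, assuming the cluster object from the earlier connection example; the datastore names are placeholders, and the policy string shown corresponds to using the specified list and complementing it automatically if needed.

# Sketch: prefer two named shared datastores for heartbeating and let
# vSphere HA add more automatically if needed.
preferred_names = ('shared-ds-01', 'shared-ds-02')   # placeholders
seen, heartbeat = set(), []
for h in cluster.host:
    for ds in h.datastore:
        if ds.name in preferred_names and ds.name not in seen:
            seen.add(ds.name)
            heartbeat.append(ds)

das = vim.cluster.DasConfigInfo()
das.hBDatastoreCandidatePolicy = 'allFeasibleDsWithUserPreference'
das.heartbeatDatastore = heartbeat

cluster.ReconfigureComputeResource_Task(
    vim.cluster.ConfigSpecEx(dasConfig=das), modify=True)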
Prerequisites
Note Because these options affect the functioning of vSphere HA, change them with caution.
Procedure
5 Click Add and type the name of the advanced option in the text box.
You can set the value of the option in the text box in the Value column.
6 Repeat step 5 for each new option that you want to add and click OK.
Results
What to do next
Once you have set an advanced vSphere HA option, it persists until you do one of the following:
n Using the vSphere Web Client, reset its value to the default value.
n Manually edit or delete the option from the fdm.cfg file on all hosts in the cluster.
das.isolationshutdowntimeout: The period of time the system waits for a virtual machine to shut down before powering it off. This only applies if the host's isolation response is Shut down VM. Default value is 300 seconds.
das.slotcpuinmhz: Defines the maximum bound on the CPU slot size. If this option is used, the slot size is the smaller of this value or the maximum CPU reservation of any powered-on virtual machine in the cluster.
Note Once this limit is changed, you must run the Reconfigure HA task on all hosts in the cluster. Also, when a new host is added to the cluster or an existing host is rebooted, this task should be performed on those hosts in order to update this memory setting.
Note If you change the value of any of the following advanced options, you must deactivate and
then re-activate vSphere HA before your changes take effect.
n das.isolationaddress[...]
n das.usedefaultisolationaddress
n das.isolationshutdowntimeout
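The following pyVmomi sketch illustrates that sequence for an additional isolation address; the address is a placeholder, and the helper assumes the cluster object from the earlier connection example.

# Sketch: deactivate vSphere HA, set isolation-address advanced options,
# then reactivate vSphere HA so the change takes effect.
from pyVim.task import WaitForTask

def reconfigure(das):
    WaitForTask(cluster.ReconfigureComputeResource_Task(
        vim.cluster.ConfigSpecEx(dasConfig=das), modify=True))

reconfigure(vim.cluster.DasConfigInfo(enabled=False))         # deactivate HA
reconfigure(vim.cluster.DasConfigInfo(option=[
    vim.option.OptionValue(key='das.isolationaddress0', value='192.168.1.1'),
    vim.option.OptionValue(key='das.usedefaultisolationaddress', value='false')]))
reconfigure(vim.cluster.DasConfigInfo(enabled=True))          # reactivate HA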
Procedure
4 Use the + button to select virtual machines to which to apply the overrides.
5 Click OK.
6 (Optional) You can change other settings, such as the Automation level, VM restart priority, Response for Host Isolation, VMCP settings, VM Monitoring, or VM monitoring sensitivity settings.
Note You can view the cluster defaults for these settings by first expanding Relevant Cluster
Settings and then expanding vSphere HA.
7 Click OK.
Results
The virtual machine’s behavior now differs from the cluster defaults for each setting that you
changed.
You can also refer to the vSphere High Availability Deployment Best Practices publication for
further discussion.
n When changing the networks that your clustered ESXi hosts are on, suspend the Host
Monitoring feature. Changing your network hardware or networking settings can interrupt
the heartbeats that vSphere HA uses to detect host failures, which might result in unwanted
attempts to fail over virtual machines.
n When you change the networking configuration on the ESXi hosts themselves, for example,
adding port groups, or removing vSwitches, suspend Host Monitoring. After you have made
the networking configuration changes, you must reconfigure vSphere HA on all hosts in
the cluster, which causes the network information to be reinspected. Then re-enable Host
Monitoring.
Note Because networking is a vital component of vSphere HA, inform the vSphere HA administrator if network maintenance must be performed.
n On legacy ESX hosts in the cluster, vSphere HA communications travel over all networks that
are designated as service console networks. VMkernel networks are not used by these hosts
for vSphere HA communications. To contain vSphere HA traffic to a subset of the ESX console
networks, use the allowedNetworks advanced option.
n On ESXi hosts in the cluster, vSphere HA communications, by default, travel over VMkernel
networks. With an ESXi host, if you want to use a network other than the one vCenter
Server uses to communicate with the host for vSphere HA, you must explicitly enable the
Management traffic check box.
To keep vSphere HA agent traffic on the networks you have specified, configure hosts so vmkNICs
used by vSphere HA do not share subnets with vmkNICs used for other purposes. vSphere HA
agents send packets using any pNIC that is associated with a given subnet when there is also at
least one vmkNIC configured for vSphere HA management traffic. Therefore, to ensure network
flow separation, the vmkNICs used by vSphere HA and by other features must be on different
subnets.
By default, the network isolation address is the default gateway for the host. Only one default
gateway is specified, regardless of how many management networks have been defined. Use the
das.isolationaddress[...] advanced option to add isolation addresses for additional networks.
See vSphere HA Advanced Options.
The first way you can implement network redundancy is at the NIC level with NIC teaming. Using a team of two NICs connected to separate physical switches improves the reliability of a management network. Because servers connected through two NICs (and through separate switches) have two independent paths for sending and receiving heartbeats, the cluster is more resilient. To configure a NIC team for the management network, configure the vNICs in the vSwitch configuration for Active or Standby operation. The recommended parameter settings for the vNICs are:
n Failback = No
After you have added a NIC to a host in your vSphere HA cluster, you must reconfigure vSphere
HA on that host.
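If you script such changes, the per-host reconfiguration can be triggered through the API as well. A minimal pyVmomi sketch, assuming the cluster object from the earlier connection example and a placeholder host name:

# Sketch: re-run the Reconfigure vSphere HA task on one host after a NIC or
# port group change so the HA agent reinspects the network information.
from pyVim.task import WaitForTask

host = next(h for h in cluster.host if h.name == 'esx01.example.com')  # placeholder
WaitForTask(host.ReconfigureHostForDAS_Task())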
Note Configure the fewest possible number of hardware segments between the servers in a cluster. The goal is to limit single points of failure. Also, routes with too many hops can cause networking packet delays for heartbeats and increase the possible points of failure.
3 Activate vSAN.
Note The default alarms include the feature name, vSphere HA.
Providing Fault Tolerance for
Virtual Machines 3
You can use vSphere Fault Tolerance for your virtual machines to ensure continuity with higher
levels of availability and data protection.
Fault Tolerance is built on the ESXi host platform, and it provides availability by having identical
virtual machines run on separate hosts.
To obtain optimal results from Fault Tolerance, you must be familiar with how it works, how to enable it for your cluster and virtual machines, and the best practices for its usage.
The protected virtual machine is called the Primary VM. The duplicate virtual machine, the
Secondary VM, is created and runs on another host. The Secondary VM's execution is identical
to that of the Primary VM and it can take over at any point without interruption, thereby providing
fault tolerant protection.
The Primary and Secondary VMs continuously monitor the status of one another to ensure that
Fault Tolerance is maintained. A transparent failover occurs if the host running the Primary VM
fails, in which case the Secondary VM is immediately activated to replace the Primary VM. A new
Secondary VM is started and Fault Tolerance redundancy is reestablished automatically. If the host
running the Secondary VM fails, it is also immediately replaced. In either case, users experience no
interruption in service and no loss of data.
A fault tolerant virtual machine and its secondary copy are not allowed to run on the same host.
This restriction ensures that a host failure cannot result in the loss of both VMs.
Note You can also use VM-Host affinity rules to dictate which hosts designated virtual machines
can run on. If you use these rules, be aware that for any Primary VM that is affected by such a rule,
its associated Secondary VM is also affected by that rule. For more information about affinity rules,
see the vSphere Resource Management documentation.
Fault Tolerance avoids "split-brain" situations, which can lead to two active copies of a virtual
machine after recovery from a failure. Atomic file locking on shared storage is used to coordinate
failover so that only one side continues running as the Primary VM and a new Secondary VM is
respawned automatically.
vSphere Fault Tolerance can accommodate symmetric multiprocessor (SMP) virtual machines with
up to four vCPUs. Earlier versions of vSphere used a different technology for Fault Tolerance
(now known as legacy FT), with different requirements and characteristics (including a limitation of
single vCPUs for legacy FT VMs). If compatibility with these earlier requirements is necessary, you
can instead use legacy FT. However, this involves the setting of an advanced option for each VM.
See Legacy Fault Tolerance for more information.
Fault Tolerance provides a higher level of business continuity than vSphere HA. When a
Secondary VM is called upon to replace its Primary VM counterpart, the Secondary VM
immediately takes over the Primary VM’s role with the entire state of the virtual machine
preserved. Applications are already running, and data stored in memory does not need to be
reentered or reloaded. Failover provided by vSphere HA restarts the virtual machines affected by
a failure.
This higher level of continuity and the added protection of state information and data informs the
scenarios when you might want to deploy Fault Tolerance.
n Applications which must always be available, especially applications that have long-lasting
client connections that users want to maintain during hardware failure.
n Cases where high availability might be provided through custom clustering solutions, which
are too complicated to configure and maintain.
Another key use case for protecting a virtual machine with Fault Tolerance can be described as
On-Demand Fault Tolerance. In this case, a virtual machine is adequately protected with vSphere
HA during normal operation. During certain critical periods, you might want to enhance the
protection of the virtual machine. For example, you might be running a quarter-end report which,
if interrupted, might delay the availability of critical information. With vSphere Fault Tolerance,
you can protect this virtual machine before running this report and then turn off or suspend
Fault Tolerance after the report has been produced. You can use On-Demand Fault Tolerance to
protect the virtual machine during a critical time period and return the resources to normal during
non-critical operation.
Requirements
The following CPU and networking requirements apply to FT.
CPUs that are used in host machines for fault tolerant VMs must be compatible with vSphere
vMotion or improved with Enhanced vMotion Compatibility. Also, CPUs that support Hardware
MMU virtualization (Intel EPT or AMD RVI) are required. The following CPUs are supported.
Use a 10-Gbit logging network for FT and verify that the network is low latency. A dedicated FT
network is highly recommended.
Limits
In a cluster configured to use Fault Tolerance, two limits are enforced independently.
das.maxftvmsperhost
The maximum number of fault tolerant VMs allowed on a host in the cluster. Both Primary VMs
and Secondary VMs count toward this limit. The default value is 4.
das.maxftvcpusperhost
The maximum number of vCPUs aggregated across all fault tolerant VMs on a host. vCPUs
from both Primary VMs and Secondary VMs count toward this limit. The default value is 8.
Licensing
The number of vCPUs supported by a single fault tolerant VM is limited by the level of licensing
that you have purchased for vSphere. Fault Tolerance is supported as follows:
Note FT and legacy FT are not supported in vSphere Essentials and vSphere Essentials Plus.
Before configuring vSphere Fault Tolerance, you must be aware of the features and products Fault
Tolerance cannot interoperate with.
The following vSphere features are not supported for fault tolerant virtual machines.
n Snapshots. Snapshots must be removed or committed before Fault Tolerance can be enabled
on a virtual machine. In addition, it is not possible to take snapshots of virtual machines on
which Fault Tolerance is enabled.
Note Disk-only snapshots created for vStorage APIs - Data Protection (VADP) backups are
supported with Fault Tolerance. However, legacy FT does not support VADP.
n Storage vMotion. You cannot invoke Storage vMotion for virtual machines with Fault Tolerance
turned on. To migrate the storage, you should temporarily turn off Fault Tolerance, and
perform the storage vMotion action. When this is complete, you can turn Fault Tolerance back
on.
n Linked clones. You cannot use Fault Tolerance on a virtual machine that is a linked clone, nor
can you create a linked clone from an FT-enabled virtual machine.
n VM Component Protection (VMCP). If your cluster has VMCP enabled, overrides are created
for fault tolerant virtual machines that turn this feature off.
n I/O filters.
For a virtual machine to be compatible with Fault Tolerance, the Virtual Machine must not use the
following features or devices.
Table 3-1. Features and Devices Incompatible with Fault Tolerance and Corrective Actions
Physical Raw Disk mapping (RDM): With legacy FT you can reconfigure virtual machines with physical RDM-backed virtual devices to use virtual RDMs instead.
CD-ROM or floppy virtual devices backed by a physical or remote device: Remove the CD-ROM or floppy virtual device or reconfigure the backing with an ISO installed on shared storage.
USB and sound devices: Remove these devices from the virtual machine.
N_Port ID Virtualization (NPIV): Deactivate the NPIV configuration of the virtual machine.
Hot-plugging devices: The hot plug feature is automatically deactivated for fault tolerant virtual machines. To hot plug devices (either adding or removing), you must momentarily turn off Fault Tolerance, perform the hot plug, and then turn on Fault Tolerance.
Serial or parallel ports: Remove these devices from the virtual machine.
Video devices that have 3D activated: Fault Tolerance does not support video devices that have 3D activated.
Virtual EFI firmware: Ensure that the virtual machine is configured to use BIOS firmware before installing the guest operating system.
When a cluster has EVC activated, DRS makes the initial placement recommendations for fault
tolerant virtual machines and allows you to assign a DRS automation level to Primary VMs (the
Secondary VM always assumes the same setting as its associated Primary VM.)
When vSphere Fault Tolerance is used for virtual machines in a cluster that has EVC deactivated,
the fault tolerant virtual machines are given DRS automation levels of "disabled". In such a cluster,
each Primary VM is powered on only on its registered host and its Secondary VM is automatically
placed.
If you use affinity rules with a pair of fault tolerant virtual machines, a VM-VM affinity rule applies
to the Primary VM only, while a VM-Host affinity rule applies to both the Primary VM and its
Secondary VM. If a VM-VM affinity rule is set for a Primary VM, DRS attempts to correct any
violations that occur after a failover (that is, after the Primary VM effectively moves to a new host).
n If you use multi-CPU vSphere FT on virtual machines, Site Recovery Manager does not
deactivate vSphere FT on the recovered virtual machines and powering on those virtual
machines fails. You must manually deactivate vSphere FT on the recovered virtual machines by
removing FT properties and running the recovery plan again.
n If you use uni-processor vSphere FT on virtual machines, you must configure the virtual
machines on the protected site so that Site Recovery Manager can deactivate vSphere FT
after a recovery. For information about how to configure virtual machines for uni-processor
vSphere FT on the protected site, see http://kb.vmware.com/kb/2109813.
The tasks you should complete before attempting to set up Fault Tolerance for your cluster include
the following:
n Ensure that your cluster, hosts, and virtual machines meet the requirements outlined in the
Fault Tolerance checklist.
After your cluster and hosts are prepared for Fault Tolerance, you are ready to turn on Fault
Tolerance for your virtual machines. See Turn On Fault Tolerance.
Note The failover of fault tolerant virtual machines is independent of vCenter Server, but you
must use vCenter Server to set up your Fault Tolerance clusters.
n Fault Tolerance logging and VMotion networking configured. See Configure Networking for
Host Machines.
n vSphere HA cluster created and enabled. See Creating a vSphere HA Cluster. vSphere HA
must be enabled before you can power on fault tolerant virtual machines or add a host to a
cluster that already supports fault tolerant virtual machines.
n The configuration for each host must have Hardware Virtualization (HV) enabled in the BIOS.
Note VMware recommends that the hosts you use to support FT VMs have their BIOS power
management settings turned to "Maximum performance" or "OS-managed performance".
To confirm the compatibility of the hosts in the cluster to support Fault Tolerance, you can also run
profile compliance checks as described in Create Cluster and Check Compliance.
n No unsupported devices attached to the virtual machine. See Fault Tolerance Interoperability.
n Incompatible features must not be running with the fault tolerant virtual machines. See Fault
Tolerance Interoperability.
n Virtual machine files (except for the VMDK files) must be stored on shared storage. Acceptable
shared storage solutions include Fibre Channel, (hardware and software) iSCSI, NFS, and NAS.
n If you are using NFS to access shared storage, use dedicated NAS hardware with at least a
1Gbit NIC to obtain the network performance required for Fault Tolerance to work properly.
n The memory reservation of a fault tolerant virtual machine is set to the VM's memory size when
Fault Tolerance is turned on. Ensure that a resource pool containing fault tolerant VMs has
memory resources above the memory size of the virtual machines. Without this excess in the
resource pool, there might not be any memory available to use as overhead memory.
n To ensure redundancy and maximum Fault Tolerance protection, you should have a minimum
of three hosts in the cluster. In a failover situation, this provides a host that can accommodate
the new Secondary VM that is created.
To set up Fault Tolerance for a host, you must complete this procedure for each port group
option (vMotion and FT logging) to ensure that sufficient bandwidth is available for Fault Tolerance
logging. Select one option, finish this procedure, and repeat the procedure a second time,
selecting the other port group option.
Prerequisites
Multiple gigabit Network Interface Cards (NICs) are required. For each host supporting Fault
Tolerance, a minimum of two physical NICs is recommended. For example, you need one
dedicated to Fault Tolerance logging and one dedicated to vMotion. Use three or more NICs
to ensure availability.
Note The vMotion and FT logging NICs must be on different subnets. If you are using legacy FT,
IPv6 is not supported on the FT logging NIC.
Procedure
6 Click Finish.
Results
After you create both a vMotion and Fault Tolerance logging virtual switch, you can create other
virtual switches, as needed. Add the host to the cluster and complete any steps needed to turn on
Fault Tolerance.
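If you prefer to script this step, VMkernel adapters can be tagged for vMotion and FT logging through the host's virtual NIC manager. A minimal pyVmomi sketch, assuming the cluster object from the earlier connection example; the host name and the vmk1/vmk2 device names are placeholders for the adapters you created on the two port groups:

# Sketch: designate existing VMkernel adapters for vMotion and for Fault
# Tolerance logging traffic on one host. The two adapters must be on
# different subnets, as noted above.
host = next(h for h in cluster.host if h.name == 'esx01.example.com')  # placeholder
nic_mgr = host.configManager.virtualNicManager

nic_mgr.SelectVnicForNicType('vmotion', 'vmk1')
nic_mgr.SelectVnicForNicType('faultToleranceLogging', 'vmk2')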
What to do next
Note If you configure networking to support FT but subsequently suspend the Fault Tolerance
logging port, pairs of fault tolerant virtual machines that are powered on remain powered on. If a
failover situation occurs, when the Primary VM is replaced by its Secondary VM a new Secondary
VM is not started, causing the new Primary VM to run in a Not Protected state.
Procedure
Results
The results of the compliance test appear, and the compliance or noncompliance of each host is
shown.
Before Fault Tolerance can be turned on, validation checks are performed on a virtual machine.
After these checks are passed and you turn on vSphere Fault Tolerance for a virtual machine, new
options are added to the Fault Tolerance section of its context menu. These include turning off or
deactivating Fault Tolerance, migrating the Secondary VM, testing failover, and testing restart of
the Secondary VM.
Several validation checks are performed on a virtual machine before Fault Tolerance can be turned
on.
n The host must be in a vSphere HA cluster or a mixed vSphere HA and DRS cluster.
n The host must have ESXi 6.x or greater installed (ESX/ESXi 4.x or greater for legacy FT).
n The virtual machine must not have a video device with 3D activated.
n The BIOS of the hosts where the fault tolerant virtual machines reside must have Hardware
Virtualization (HV) activated.
n The host that supports the Primary VM must have a processor that supports Fault Tolerance.
n Your hardware should be certified as compatible with Fault Tolerance. To confirm that it
is, use the VMware Compatibility Guide at http://www.vmware.com/resources/compatibility/
search.php and select Search by Fault Tolerant Compatible Sets.
n The configuration of the virtual machine must be valid for use with Fault Tolerance (for
example, it must not contain any unsupported devices).
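As a quick pre-check before you rely on the full validation, a pyVmomi sketch such as the following can report the ESXi version and the fault tolerance capability flag for each host in the cluster; it assumes the cluster object from the earlier connection example and does not replace the checks listed above.

# Sketch: report per-host ESXi version and fault tolerance support.
for host in cluster.host:
    print('{:<30} ESXi {:<10} FT supported: {}'.format(
        host.name, host.config.product.version, host.capability.ftSupported))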
Secondary VM Placement
When your effort to turn on Fault Tolerance for a virtual machine passes the validation checks, the
Secondary VM is created. The placement and immediate status of the Secondary VM depends
upon whether the Primary VM was powered-on or powered-off when you turned on Fault
Tolerance.
n The entire state of the Primary VM is copied and the Secondary VM is created, placed on a
separate compatible host, and powered on if it passes admission control.
n The Fault Tolerance Status displayed for the virtual machine is Protected.
n The Secondary VM is immediately created and registered to a host in the cluster (it might be
re-registered to a more appropriate host when it is powered on.)
n The Secondary VM is not powered on until after the Primary VM is powered on.
n The Fault Tolerance Status displayed for the virtual machine is Not Protected, VM not
Running.
n When you attempt to power on the Primary VM after Fault Tolerance has been turned on, the
additional validation checks listed above are performed.
After these checks are passed, the Primary and Secondary VMs are powered on and placed
on separate, compatible hosts. The virtual machine's Fault Tolerance Status is tagged as
Protected.
When Fault Tolerance is turned on, vCenter Server resets the virtual machine's memory limit and
sets the memory reservation to the memory size of the virtual machine. While Fault Tolerance
remains turned on, you cannot change the memory reservation, size, limit, number of vCPUs, or
shares. You also cannot add or remove disks for the VM. When Fault Tolerance is turned off, any
parameters that were changed are not reverted to their original values.
Connect vSphere Web Client to vCenter Server using an account with cluster administrator
permissions.
Prerequisites
The option to turn on Fault Tolerance is unavailable (dimmed) if any of these conditions apply:
n The virtual machine resides on a host that does not have a license for the feature.
n The virtual machine resides on a host that is in maintenance mode or standby mode.
n The virtual machine is disconnected or orphaned (its .vmx file cannot be accessed).
n The user does not have permission to turn the feature on.
Procedure
1 In the vSphere Web Client, browse to the virtual machine for which you want to turn on Fault
Tolerance.
2 Right-click the virtual machine and select Fault Tolerance > Turn On Fault Tolerance.
3 Click Yes.
4 Select a datastore on which to place the Secondary VM configuration files. Then click Next.
5 Select a host on which to place the Secondary VM. Then click Next.
Results
The specified virtual machine is designated as a Primary VM, and a Secondary VM is established
on another host. The Primary VM is now fault tolerant.
Use the Turn Off Fault Tolerance option if you do not plan to reenable the feature. Otherwise, use
the Suspend Fault Tolerance option.
Note If the Secondary VM resides on a host that is in maintenance mode, disconnected, or not
responding, you cannot use the Turn Off Fault Tolerance option. In this case, you should suspend
and resume Fault Tolerance instead.
Procedure
1 In the vSphere Web Client, browse to the virtual machine for which you want to turn off Fault
Tolerance.
2 Right-click the virtual machine and select Fault Tolerance > Turn Off Fault Tolerance.
3 Click Yes.
Results
Fault Tolerance is turned off for the selected virtual machine. The history and the secondary virtual
machine for the selected virtual machine are deleted.
Procedure
1 In the vSphere Web Client, browse to the virtual machine for which you want to suspend Fault
Tolerance.
2 Right-click the virtual machine and select Fault Tolerance > Suspend Fault Tolerance.
3 Click Yes.
Results
Fault Tolerance is suspended for the selected virtual machine. Any history and the Secondary VM
for the selected virtual machine are preserved and will be used if the feature is resumed.
What to do next
After you suspend Fault Tolerance, to resume the feature select Resume Fault Tolerance.
Migrate Secondary
After vSphere Fault Tolerance is turned on for a Primary VM, you can migrate its associated
Secondary VM.
Procedure
1 In the vSphere Web Client, browse to the Primary VM for which you want to migrate its
Secondary VM.
2 Right-click the virtual machine and select Fault Tolerance > Migrate Secondary.
3 Complete the options in the Migrate dialog box and confirm the changes that you made.
Results
The Secondary VM associated with the selected fault tolerant virtual machine is migrated to the
specified host.
Test Failover
You can induce a failover situation for a selected Primary VM to test your Fault Tolerance
protection.
Procedure
1 In the vSphere Web Client, browse to the Primary VM for which you want to test failover.
2 Right-click the virtual machine and select Fault Tolerance > Test Failover.
Results
This task induces failure of the Primary VM to ensure that the Secondary VM replaces it. A new
Secondary VM is also started placing the Primary VM back in a Protected state.
Procedure
1 In the vSphere Web Client, browse to the Primary VM for which you want to conduct the test.
2 Right-click the virtual machine and select Fault Tolerance > Test Restart Secondary.
Results
This task results in the termination of the Secondary VM that provided Fault Tolerance protection
for the selected Primary VM. A new Secondary VM is started, placing the Primary VM back in a
Protected state.
Prerequisites
Verify that you have sets of four or more ESXi hosts that are hosting fault tolerant virtual machines
that are powered on. If the virtual machines are powered off, the Primary and Secondary VMs can
be relocated to hosts with different builds.
Note This upgrade procedure is for a minimum four-node cluster. The same instructions can be
followed for a smaller cluster, though the unprotected interval will be slightly longer.
Procedure
1 Using vMotion, migrate the fault tolerant virtual machines off of two hosts.
4 Using vMotion, move the Primary VM for which Fault Tolerance has been suspended to one of
the upgraded hosts.
6 Repeat Step 1 to Step 5 for as many fault tolerant virtual machine pairs as can be
accommodated on the upgraded hosts.
Results
The following recommendations for host and networking configuration can help improve the
stability and performance of your cluster.
Host Configuration
Hosts running the Primary and Secondary VMs should operate at approximately the same
processor frequencies, otherwise the Secondary VM might be restarted more frequently. Platform
power management features that do not adjust based on workload (for example, power
capping and enforced low frequency modes to save power) can cause processor frequencies
to vary greatly. If Secondary VMs are being restarted on a regular basis, deactivate all power
management modes on the hosts running fault tolerant virtual machines or ensure that all hosts
are running in the same power management modes.
n Distribute each NIC team over two physical switches ensuring L2 domain continuity for each
VLAN between the two physical switches.
n Use deterministic teaming policies to ensure particular traffic types have an affinity to a
particular NIC (active/standby) or set of NICs (for example, originating virtual port-id).
n Where active/standby policies are used, pair traffic types to minimize impact in a failover
situation where both traffic types will share a vmnic.
n Where active/standby policies are used, configure all the active adapters for a particular traffic
type (for example, FT Logging) to the same physical switch. This minimizes the number of
network hops and lessens the possibility of oversubscribing the switch to switch links.
Note FT logging traffic between Primary and Secondary VMs is unencrypted and contains guest
network and storage I/O data, as well as the memory contents of the guest operating system.
This traffic can include sensitive data such as passwords in plaintext. To avoid such data being
divulged, ensure that this network is secured, especially to avoid 'man-in-the-middle' attacks. For
example, you could use a private network for FT logging traffic.
Homogeneous Clusters
vSphere Fault Tolerance can function in clusters with nonuniform hosts, but it works best in
clusters with compatible nodes. When constructing your cluster, all hosts should have the
following configuration:
n The same BIOS settings (power management and hyperthreading) for all hosts.
Performance
To increase the bandwidth available for the logging traffic between Primary and Secondary VMs
use a 10Gbit NIC, and activate the use of jumbo frames.
You can select multiple NICs for the FT logging network. By selecting multiple NICs, you can take
advantage of the bandwidth from multiple NICs even if all of the NICs are not dedicated to running
FT.
For virtual machines with Fault Tolerance activated, you might use ISO images that are accessible
only to the Primary VM. In such a case, the Primary VM can access the ISO, but if a failover
occurs, the CD-ROM reports errors as if there is no media. This situation might be acceptable if the
CD-ROM is being used for a temporary, noncritical operation such as a patch.
In a partitioned vSphere HA cluster using Fault Tolerance, the Primary VM (or its Secondary VM)
could end up in a partition managed by a primary host that is not responsible for the virtual
machine. When a failover is needed, a Secondary VM is restarted only if the Primary VM was in a
partition managed by the primary host responsible for it.
To ensure that your management network is less likely to have a failure that leads to a network
partition, follow the recommendations in Best Practices for Networking.
n A mix of vSAN and other types of datastores is not supported for both Primary VMs and
Secondary VMs.
To increase performance and reliability when using FT with vSAN, the following conditions are also
recommended.
To use legacy Fault Tolerance, you must configure an advanced option for the virtual machine.
After you complete this configuration, the legacy FT VM is different in some ways from other
vSphere FT VMs.
IPv6
n Legacy FT: Not supported for legacy FT logging NICs.
n vSphere FT: Supported for vSphere FT logging NICs.
Eager-zeroed thick .vmdk disk files
n Legacy FT: Required.
n vSphere FT: Not required because vSphere FT supports all disk file types, including thick and thin.
.vmdk redundancy
n Legacy FT: Only a single copy.
n vSphere FT: Primary VMs and Secondary VMs always maintain independent copies, which can be placed on different datastores to increase redundancy.
NIC bandwidth
n Legacy FT: Dedicated 1-Gb NIC recommended.
n vSphere FT: Dedicated 10-Gb NIC recommended.
CPU and host compatibility
n Legacy FT: Requires identical CPU model and family and nearly identical versions of vSphere on hosts.
n vSphere FT: CPUs must be compatible with vSphere vMotion or EVC. Versions of vSphere on hosts must be compatible with vSphere vMotion.
Storage vMotion
n Legacy FT: Supported only on powered-off VMs. vCenter Server automatically turns off FT before performing a Storage vMotion action and turns on FT again after the Storage vMotion action completes.
n vSphere FT: Not supported. User must turn off vSphere FT for the VM before performing the Storage vMotion action and turn on vSphere FT again.
n ESXi hosts must have access to the same virtual machine datastores and networks.
n Virtual machines must be stored in virtual RDM or virtual machine disk (VMDK) files that are
thick provisioned. If a virtual machine is stored in a VMDK file that is thin provisioned and an
attempt is made to use fault tolerance, a message appears. It indicates that the VMDK file must
be converted. To perform the conversion, you must power off the virtual machine.
n Hosts must have processors from the vSphere FT-compatible processor group. Verify that the
hosts' processors are compatible with one another.
n The host that supports the Secondary VM must have a processor that supports fault tolerance
and is the same CPU family or model as the host that supports the Primary VM.
n When upgrading hosts that contain fault tolerant VMs, verify that the Primary and Secondary
VMs continue to run on hosts with the same FT version number or host build number. This
requirement applies to hosts before ESX/ESXi 4.1.
Note If you designated a VM to use legacy FT before you upgraded the hosts in the cluster,
that VM continues to use legacy FT after the host upgrade.
vCenter Server version 6.5 or later can manage existing legacy FT VMs, but you cannot create
legacy FT VMs, even on hosts with a version earlier than version 6.5. The following vSphere FT
operations can be performed in this scenario:
n Suspend or resume FT
n Test failover
n Restart secondary
n Migrate secondary
n Turn off FT
Note Legacy FT VMs can exist only on ESXi hosts that are running on vSphere versions earlier
than 6.5.
vCenter High Availability
4
vCenter High Availability (vCenter HA) protects vCenter Server Appliance against host and
hardware failures. The active-passive architecture of the solution can also help you reduce
downtime significantly when you patch vCenter Server Appliance.
After some network configuration, you create a three-node cluster that contains Active, Passive,
and Witness nodes. Different configuration paths are available. What you select depends on your
existing configuration.
Procedure
Deploying each of the nodes on a different ESXi instance protects against hardware failure.
Adding the three ESXi hosts to a DRS cluster can further protect your environment.
When vCenter HA configuration is complete, only the Active node has an active management
interface (public IP). The three nodes communicate over a private network called vCenter HA
network that is set up as part of configuration. The Active node and the Passive node are
continuously replicating data.
(Figure: vCenter HA cluster topology, showing the management interface, the vCenter HA interfaces of the Active and Passive nodes, the vCenter HA network, and the Witness node.)
All three nodes are necessary for the functioning of this feature. Compare the node
responsibilities.
Node Description
Component Requirements
Management vCenter Server (if used):
n Your environment can include a management vCenter Server system, or you can set up your vCenter Server Appliance to manage the ESXi host on which it runs (self-managed vCenter Server).
n vCenter Server 5.5 or later is required.
Network connectivity:
n vCenter HA network latency between Active, Passive, and Witness nodes must be less than 10 ms.
n The vCenter HA network must be on a different subnet than the management network.
Licensing required for vCenter HA:
n vCenter HA requires a single vCenter Server license.
n vCenter HA requires a Standard license.
(Figure: vCenter HA deployment with an embedded Platform Services Controller, showing the management interface, the vCenter HA interfaces, the vCenter HA network, the Witness node, and the Platform Services Controller.)
1 The user provisions the vCenter Server Appliance with an embedded Platform Services
Controller.
2 Cloning of the vCenter Server Appliance to a Passive and a Witness node occurs.
3 As part of the clone process, Platform Services Controller and all its services are cloned as
well.
4 When configuration is complete, vCenter HA performs replication to ensure that the Passive
node is synchronized with the Active node. The Active node to Passive node replication
includes Platform Services Controller data.
5 When configuration is complete, the vCenter Server Appliance is protected by vCenter HA. In
case of failover, Platform Services Controller and all its services are available on the Passive
node.
Set up of the external Platform Services Controller is discussed in the following VMware
Knowledge Base articles.
n 2147014: Configuring Netscaler Load Balancer for use with vSphere Platform Services
Controller (PSC) 6.5
n 2147038 Configuring F5 BIG-IP Load Balancer for use with vSphere Platform Services
Controller (PSC) 6.5
n 2147046 Configuring NSX Edge Load Balancer for use with vSphere Platform Services
Controller (PSC) 6.5
(Figure: vCenter HA deployment with an external Platform Services Controller behind a load balancer, showing the load balancer, the management interface, the vCenter HA interfaces, the vCenter HA network, and the Witness node.)
1 The user sets up at least two external Platform Services Controller instances. These
instances replicate vCenter Single Sign-On information and other Platform Services Controller
information, for example, licensing.
2 During provisioning of the vCenter Server Appliance, the user selects an external Platform
Services Controller.
3 The user sets up the vCenter Server Appliance to point to a load balancer that provides high
availability for Platform Services Controller.
4 The user or the Basic configuration clones the first vCenter Server Appliance to create the
Passive node and Witness node.
5 As part of the clone process, the information about the external Platform Services Controller
and the load balancer is cloned as well.
6 When configuration is complete, the vCenter Server Appliance is protected by vCenter HA.
7 If the Platform Services Controller instance becomes unavailable, the load balancer redirects
requests for authentication or other services to the second Platform Services Controller
instance.
The configuration option that you select depends on your environment. The Basic configuration
requirements are stricter, but more of the configuration is automated. The Advanced configuration
is possible if your environment meets hardware and software requirements, and it offers more
flexibility. However, Advanced configuration requires that you create and configure the clones of
the Active node.
n Either the vCenter Server Appliance that will become the Active node is managing its own
ESXi host and its own virtual machine. This configuration is sometimes called a self-managed
vCenter Server.
n Or the vCenter Server Appliance is managed by another vCenter Server (management vCenter
Server) and both vCenter Server instances are in the same vCenter Single Sign-On domain.
That means they both use an external Platform Services Controller and both are running
vSphere 6.5.
1 The user deploys the first vCenter Server Appliance, which will become the Active node.
2 The user adds a second network (port group) for vCenter HA traffic on each ESXi host.
3 The user starts the vCenter HA configuration, selects Basic and supplies the IP addresses, the
target ESXi host or cluster, and the datastore for each clone.
4 The system clones the Active node and creates a Passive node with precisely the same
settings, including the same host name.
5 The system clones the Active node again and creates a more light-weight Witness node.
6 The system sets up the vCenter HA network on which the three nodes communicate, for
example, by exchanging heartbeats and other information.
For step-by-step instructions, see Configure vCenter HA With the Basic Option.
1 The user deploys the first vCenter Server Appliance, which will become the Active node.
2 The user adds a second network (port group) for vCenter HA traffic on each ESXi host.
3 The user adds a second network adapter (NIC) to the Active node.
4 The user logs in to the vCenter Server Appliance (Active node) with the vSphere Web Client.
5 The user starts the vCenter HA configuration, selects Advanced, and supplies IP address and
subnet information for the Passive and Witness nodes. Optionally, the user can override the
failover management IP addresses.
6 The user logs in to the management vCenter Server and creates two clones of the vCenter
Server Appliance (Active node).
7 The user returns to the configuration wizard on the vCenter Server Appliance and completes
the configuration process.
8 The system sets up the vCenter HA network on which the three nodes exchange heartbeats
and replication information.
After configuration is complete, the vCenter HA cluster has two networks, the management
network on the first virtual NIC and the vCenter HA network on the second virtual NIC.
Management network
The management network serves client requests (public IP). The management network IP
addresses must be static.
vCenter HA network
The vCenter HA network connects the Active, Passive, and Witness nodes and replicates the
appliance state. It also monitors heartbeats.
n The vCenter HA network IP addresses for the Active, Passive, and Witness nodes must be
static.
n The vCenter HA network must be on a different subnet than the management network.
The three nodes can be on the same subnet or on different subnets.
n Network latency between the Active, Passive, and Witness nodes must be less than 10
milliseconds.
n You must not add a default gateway entry for the cluster network.
Prerequisites
n The vCenter Server Appliance that later becomes the Active node, is deployed.
n You can access and have privileges to modify that vCenter Server Appliance and the ESXi host
on which it runs.
n During network setup, you need static IP addresses for the management network. The management and cluster network addresses must be IPv4 or IPv6. They cannot be mixed.
Procedure
1 Log in to the management vCenter Server and find the ESXi host on which the Active node is
running.
This port group can be on an existing virtual switch or, for improved network isolation, you can
create a new virtual switch. It must be on a different subnet than the management network on
Eth0.
3 If your environment includes the recommended three ESXi hosts, add the port group to each
of the hosts.
What to do next
n With a Basic configuration, the wizard creates the vCenter HA virtual NIC on each clone and
sets up the vCenter HA network. When configuration completes, the vCenter HA network is
available for replication and heartbeat traffic.
n You have to first create and configure a second NIC on the Active node. See Create and
Configure a Second NIC on the vCenter Server Appliance.
n When you perform the configuration, the wizard prompts for the IP addresses for the
Passive and Witness nodes.
n The wizard prompts you to clone the Active node. As part of the clone process, you
perform additional network configuration.
Prerequisites
n Deploy vCenter Server Appliance that you want to use as the initial Active node.
n Either the vCenter Server Appliance that will become the Active node is managing its
own ESXi host and its own virtual machine. This configuration is sometimes called a self-
managed vCenter Server.
If your environment does not meet one of these requirements, perform an Advanced
configuration. See Configure vCenter HA With the Advanced Option.
n Set up the infrastructure for the vCenter HA network. See Configure the Network.
n Determine which static IP addresses to use for the two vCenter Server Appliance nodes that
will become the Passive node and Witness node.
Procedure
2 Right-click the vCenter Server object in the inventory and select vCenter HA Settings.
3 Click Configure.
This option is available only if your environment meets prerequisites for the Basic option.
5 Specify the IP address, subnet mask for the Active node and the port group to connect to the
vCenter HA network and click Next.
6 Provide the vCenter HA network IP address and subnet mask for the Passive node and the
Witness node and click Next.
The configuration wizard needs the addresses to create the vCenter HA network and to
connect the three nodes.
7 (Optional) Click Advanced if you want to override the failover management IP address for the
Passive node.
8 Review the information for the Passive and Witness nodes, click Edit to make changes, and
click Next.
If you are not using a DRS cluster, select different hosts and datastores for the Passive and
Witness nodes if possible.
9 Click Finish.
Results
The Passive and Witness nodes are created. When vCenter HA configuration is complete, vCenter
Server Appliance has high availability protection.
What to do next
See Manage the vCenter HA Configuration for a list of cluster management tasks.
Procedure
Prerequisites
n Set up the infrastructure for the vCenter HA network. See Configure the Network.
n Deploy vCenter Server Appliance that you want to use as initial Active node.
n The vCenter Server Appliance must have a static IP address mapped to an FQDN.
Procedure
1 Log in to the management vCenter Server with the vSphere Web Client.
2 Select the vCenter Server Appliance virtual machine (Active node), add a second network
adapter, and attach it to the vCenter HA portgroup that you created.
3 Log in to the vCenter Server Appliance that will initially become the Active node, directly.
Interface Action
Prerequisites
n Deploy vCenter Server Appliance that you want to use as initial Active node.
n The vCenter Server Appliance must have a static IP address mapped to an FQDN.
n Determine which static IP addresses to use for the two vCenter Server Appliance nodes that
will become the Passive node and Witness node.
Procedure
2 Right-click the vCenter Server object in the inventory and select vCenter HA Settings.
3 Click Configure.
5 Provide the IP address and subnet mask for the Passive and Witness nodes and click Next.
You have to specify these IP addresses now even though the nodes do not exist yet. You can
no longer change these IP addresses after you click Next.
6 (Optional) Click Advanced if you want to override the failover management IP address for the
Passive node.
7 Leave the wizard window open and perform the cloning tasks.
What to do next
Procedure
1 Log in to the management vCenter Server, right-click the vCenter Server Appliance virtual
machine (Active node), and select Clone > Clone to Virtual Machine.
2 For the first clone, which will become the Passive node, enter the following values.
New Virtual Machine Name: Name of the Passive node. For example, use vcsa-peer.
Select Compute Resource and Select Storage: Use a different target host and datastore than for the Active node if possible.
Clone Options: Select the Customize the operating system and Power on virtual machine after creation check boxes and click the New Customization Spec icon on the next page.
In the New Customization Spec wizard that appears, specify the following.
a Use the same host name as the Active node.
b Ensure the timezone is consistent with the Active node. Keep the same AreaCode/Location with UTC as the Active node. If you have not specified AreaCode/Location with UTC while configuring the Active node, keep London in AreaCode/Location during cloning. London has 0.00 offset, so it keeps the clock to UTC without any offset.
c On the Configure Network page, specify the IP settings for NIC1 and NIC2, which map to the management interface and the vCenter HA interface. Leave the NIC2 Default Gateway blank.
3 After the first clone has been created, clone the Active node again for the Witness node.
New Virtual Machine Name: Name of the Witness node. For example, use vcsa-witness.
Select Compute Resource and Select Storage: Use a different target host and datastore than for the Active and Passive nodes if possible.
Clone Options: Select the Customize the operating system and Power on virtual machine after creation check boxes and click the New Customization Spec icon on the next page.
In the New Customization Spec wizard that appears, specify the following.
a Use the host name of your choice.
b Ensure the timezone is consistent with the Active node. Keep the same AreaCode/Location with UTC as the Active node. If you have not specified AreaCode/Location with UTC while configuring the Active node, keep London in AreaCode/Location during cloning. London has 0.00 offset, so it keeps the clock to UTC without any offset.
c On the Configure Network page, specify the IP settings for NIC2, which maps to the vCenter HA interface. Leave the NIC2 Default Gateway blank.
4 Ensure that the clone process completes and the virtual machines are powered on.
What to do next
Return to the vCenter HA wizard on the Active node to complete the setup. See Complete the
vCenter HA Advanced Configuration.
Prerequisites
Complete the process of cloning the Active node to a Passive node and a Witness node.
Procedure
You can edit the cluster configuration to deactivate or activate vCenter HA, enter maintenance mode, and remove the cluster configuration.
Set up SNMP traps for the Active node and the Passive node. You tell the agent where to send related traps by adding a target entry to the snmpd configuration.
Procedure
1 Log in to the Active node by using the Virtual Machine Console or SSH.
vicfg-snmp -t 10.160.1.1@1166/public
In this example, 10.160.1.1 is the client listening address, 1166 is the client listening port,
and public is the community string.
vicfg-snmp -e
What to do next
n To view the complete help for the command, run vicfg-snmp -h.
If possible, replace certificates in the vCenter Server Appliance that will become the Active node
before you clone the node.
Procedure
3 On the Active node, which is now a standalone vCenter Server Appliance, replace the machine
SSL Certificate with a custom certificate.
Procedure
2 Log in to the Active node by using the Virtual Machine Console or SSH.
bash
4 Run the following command to generate new SSH keys on the Active node.
/usr/lib/vmware-vcha/scripts/resetSshKeys.py
5 Use SCP to copy the keys to the Passive node and Witness node.
scp /vcha/.ssh/*
6 Edit the cluster configuration and set the vCenter HA cluster to Enabled.
Automatic failover
The Passive node attempts to take over the active role in case of an Active node failure.
Manual failover
The user can force a Passive node to take over the active role by using the Initiate Failover
action.
Procedure
1 Log in to the Active node vCenter Server Appliance with the vSphere Web Client and click
Configure.
A dialog offers you the option to force a failover without synchronization. In most cases,
performing synchronization first is best.
4 After the failover, you can verify that the Passive node has the role of the Active node in the
vSphere Web Client.
The operating mode of a vCenter Server Appliance controls the failover capabilities and state
replication in a vCenter HA cluster.
Note If the cluster is operating in either Maintenance or Disabled mode, an Active node can
continue serving client requests even if the Passive and Witness nodes are lost or unreachable.
Prerequisites
Verify that the vCenter HA cluster is deployed and contains the Active, Passive, and Witness
nodes.
Procedure
1 Log in to the Active node vCenter Server Appliance with the vSphere Web Client and click
Configure.
Enable vCenter HA: Activates replication between the Active and Passive nodes. If the cluster is in a healthy state, your Active node is protected by automatic failover from the Passive node.
Maintenance Mode: In maintenance mode, replication still occurs between the Active and Passive nodes. However, automatic failover is deactivated.
Disable vCenter HA: Deactivates replication and failover. Keeps the configuration of the cluster. You can later activate vCenter HA again.
Remove vCenter HA cluster: Removes the cluster. Replication and failover no longer are provided. The Active node continues to operate as a standalone vCenter Server Appliance. See Remove a vCenter HA Configuration for details.
4 Click OK.
Note Remove the cluster configuration before you restore the Active node. Results are
unpredictable if you restore the Active node and the Passive node is still running or other cluster
configuration is still in place.
Prerequisites
Verify the interoperability of vCenter HA and the backup and restore solution. One solution is
vCenter Server Appliance file-based restore.
Procedure
2 Before you restore the cluster, power off and delete all vCenter HA nodes.
Procedure
1 Log in to the Active node vCenter Server Appliance and click Configure.
n The vCenter HA cluster's configuration is removed from the Active, Passive, and Witness
nodes.
n You cannot reuse the Passive and Witness nodes in a new vCenter HA configuration.
n If you performed configuration using the Advanced options, or if the Passive and Witness
nodes are not discoverable, you must delete these nodes explicitly.
n Even if the second virtual NIC was added by the configuration process, the removal
process does not remove the virtual NIC.
Procedure
n Passive node
n Active node
n Witness node
3 Verify that all nodes join the cluster successfully, and that the previous Active node resumes
that role.
If you want to change the environment, you must delete the Passive node virtual machine before you change the configuration.
Procedure
1 Log in to the Active node with the vSphere Web Client, edit the cluster configuration, and
select Disable.
3 Change the vCenter Server Appliance configuration for the Active node, for example, from a
Small environment to a Medium environment.
When you collect a support bundle from the Active node in a vCenter HA cluster, the system
proceeds as follows.
n Collects support bundles from Passive and Witness nodes and places them in the commands
directory on the Active node support bundle.
Note The collection of support bundles from the Passive and Witness nodes is a best effort and
happens if the nodes are reachable.
Problem
Cause
Look for the clone exception. It might indicate one of the following problems.
Solution
Problem
You start a vCenter HA cluster configuration, and it fails with an error. The error might show the
cause of the problem, for example, you might see an SSH Connection Failed message.
Solution
1 Verify that the Passive and Witness nodes can be reached from the Active node.
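For example, a quick reachability check from the Active node might look like the following sketch; the addresses are placeholders for the vCenter HA network addresses of the Passive and Witness nodes.

# From the Active node, confirm that the peer nodes respond on the vCenter HA network
ping -c 3 <passive-node-ha-address>
ping -c 3 <witness-node-ha-address>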
Problem
If the cluster is in a degraded state, failover cannot occur. For information about failure scenarios
while the cluster is in a degraded state, see Resolving Failover Failures.
Cause
n If the Active node fails, a failover of the Active node to the Passive node occurs
automatically. After the failover, the Passive node becomes the Active node.
At this point, the cluster is in a degraded state because the original Active node is
unavailable.
After the failed node is repaired or comes online, it becomes the new Passive node and the
cluster returns to a healthy state after the Active and Passive nodes synchronize.
n If the Passive node fails, the Active node continues to function, but no failover is possible
and the cluster is in a degraded state.
If the Passive node is repaired or comes online, it automatically rejoins the cluster and the
cluster state is healthy after the Active and Passive nodes synchronize.
n If the Witness node fails, the Active node continues to function and replication between
Active and Passive node continues, but no failover can occur.
If the Witness node is repaired or comes online, it automatically rejoins the cluster and the
cluster state is healthy.
If replication fails between the Active and Passive nodes, the cluster is considered degraded. The Active node continues to attempt synchronization with the Passive node. If synchronization succeeds, the cluster returns to a healthy state. This state can result from network bandwidth problems or other resource shortages.
If configuration files are not properly replicated between the Active and Passive nodes, the
cluster is in a degraded state. The Active node continues to attempt synchronization with
the Passive node. This state can result from network bandwidth problems or other resource
shortages.
Solution
How you recover depends on the cause of the degraded cluster state. If the cluster is in a
degraded state, events, alarms, and SNMP traps show errors.
If one of the nodes is down, check for hardware failure or network isolation. Check whether the
failed node is powered on.
In case of replication failures, check if the vCenter HA network has sufficient bandwidth and
ensure network latency is 10 ms or less.
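As a rough check of the latency requirement, you can measure the round-trip time on the vCenter HA network from one node to its peer. The peer address below is a placeholder.

# Measure round-trip latency to the peer node on the vCenter HA network
# The reported average should be 10 ms or less
ping -c 20 <peer-node-ha-address>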
Problem
Solution
1 Attempt to resolve the connectivity problem. If you can restore connectivity, isolated nodes
rejoin the cluster automatically and the Active node starts serving client requests.
2 If you cannot resolve the connectivity problem, you have to log in to the Active node's console directly.
a Power off and delete the Passive node and the Witness node virtual machines.
b Log in to the Active node by using SSH or through the Virtual Machine Console.
c Run the following command.
destroy-vcha -f
Problem
The Passive node fails while trying to assume the role of the Active node.
Cause
n The Witness node becomes unavailable while the Passive node is trying to assume the role of
the Active node.
Solution
1 If the Active node recovers from the failure, it becomes the Active node again.
2 If the Witness node recovers from the failure, follow these steps.
vcha-reset-primary
3 If neither the Active node nor the Witness node can recover, you can force the Passive node to become a standalone vCenter Server Appliance.
destroy-vcha
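A minimal sketch of this last-resort step, run on the Passive node:

# On the Passive node, remove the vCenter HA configuration so that the node
# becomes a standalone vCenter Server Appliance
destroy-vcha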
Problem
Table 4-4. The following events raise the VCHA health alarm in vpxd:
Table 4-5. The following events raise the PSC HA health alarm in vpxd:
Node {nodeName} left the cluster. Description: One node left the cluster. Event ID: com.vmware.vcha.node.left. Severity: warning.
For more information, see Patch a vCenter High Availability Environment in vSphere Upgrade.
Using Microsoft Clustering Service
for vCenter Server on Windows
High Availability
5
When you deploy vCenter Server, you must build a highly available architecture that can handle
workloads of all sizes.
Availability is critical for solutions that require continuous connectivity to vCenter Server. To avoid
extended periods of downtime, you can achieve continuous connectivity for vCenter Server by
using a Microsoft Cluster Service (MSCS) cluster.
Multiple instances of vCenter Server are in an MSCS cluster, but only one instance is active at a time. Use this solution to perform maintenance, such as operating system patching or upgrades (excluding vCenter Server patching or upgrades). You perform maintenance on one node in the cluster without shutting down the vCenter Server database.
Another potential benefit of this approach is that MSCS uses a type of "shared-nothing" cluster
architecture. The cluster does not involve concurrent disk accesses from multiple nodes. In other
words, the cluster does not require a distributed lock manager. MSCS clusters typically include
only two nodes and they use a shared SCSI connection between the nodes. Only one server needs
the disks at any given time. No concurrent data access occurs. This sharing minimizes the impact if
a node fails.
Unlike the vSphere HA cluster option, the MSCS option works only for Windows virtual machines.
The MSCS option does not support vCenter Server Appliance.
Note This configuration is supported only when vCenter Server is running as a VM, not on a
physical host.
vCenter Server 6.0.x has 18 services, assuming that the PSC server is running on a different host.
vCenter Server 6.5 has 3 services and the names have changed. An MSCS cluster configuration
created to set up high availability for vCenter Server 6.0 becomes invalid after an upgrade to
vCenter Server 6.5.
The process for vCenter Server high availability in an MSCS environment is as follows.
Prerequisites
n Verify that you are not deleting the primary node VM.
n Verify that all the services of vCenter Server 6.0 are running on the primary node.
n Verify that the Platform Services Controller node upgrade is finished and running vCenter
Server 6.5.
Procedure
1 Power off the secondary node and wait for all the vCenter Server services to be started on the
primary node.
3 Destroy the MSCS cluster. Bring the RDM disks online again before changing the startup type.
4 Open the Service Management view and change the startup type for vCenter Server services
from manual to automatic.
5 Before upgrading to vCenter Server 6.5, change the IP and host name to the IP and host name
used for the role.
You must restart the host and ensure that vCenter Server is accessible.
6 Mount the vCenter Server 6.5 ISO and start the installation.
7 After the installation finishes, open the Service Management view and verify that the new
services are installed and running.
8 Set up the MSCS cluster configuration again and set the startup type of all vCenter Server
services to manual.
9 Shut down the primary node and detach the RDM disks, but do not delete them from the
datastore.
10 After the reconfiguration is complete, select VM > Clone > Clone to Template, clone the
secondary node, and change its IP and host name.
11 Keep the secondary node powered off and add both RDM disks to the primary node. Then
power on the primary node and change its IP and host name.
12 Add both RDM disks to the secondary node. Then power on the secondary node.
What to do next
When configuring the MSCS cluster, you must add vCenter Server services such as the VMware
AFD service and the VMware vCenter Configuration service to the role as resources.
Prerequisites
n Create a virtual machine (VM) with one of the following guest operating systems:
n Add two raw device mapping (RDM) disks to this VM. These disks must be mounted when they
are added and the RDM disks must also be independent and persistent.
n Create a separate SCSI controller with the bus sharing option set to physical.
Note Since this configuration uses a SCSI controller with the Bus Sharing option set to
Physical, backup and restore is not supported. You must use a host-based agent for backup or restore.
n Open the MSCS drive and create two folders: one for VC data and another for VC installation.
n Install a Platform Services Controller instance before you install vCenter Server and provide its
FQDN during the installation.
Figure: MSCS Cluster. Two vCenter Server management nodes (VM1 and VM2) and SQL Server databases (VM1 and VM2) form a two-node MSCS cluster (Node1 and Node2).
Note MSCS as an availability solution for vCenter Server is provided only for management nodes
of vCenter Server (M node). For infrastructure nodes, customers must deploy multiple N nodes for
high availability. You cannot have M and N nodes on the same VM for MSCS protection.
Procedure
2 Format the two RDM disks, assign them drive letters, and convert them to MBR.
4 Install vCenter Server on one of the RDM disks and set the start option to manual.
Detaching the RDM disks is not a permanent deletion. Do not select Delete from disk and do
not delete the vmdk files.
7 Clone the VM. Do not select the Customize the operating system option.
Do not use the default or custom sysprep file, so that the clone has the same SID.
Note Generalization by sysprep is not available when you create a clone VM as the secondary
node of a cluster. If you use generalization by sysprep, failover of services to secondary node
might fail. Duplicate SIDs do not cause problems when hosts are part of a domain and only
domain user accounts are used. We do not recommend installing third-party software other than vCenter Server on the cluster node.
8 Attach the shared RDMs to both VMs and power them on.
Note the original IP address and host name that were used at the time of the installation of
vCenter Server on VM1. This information is used to assign a cluster role IP.
11 To create an MSCS cluster on VM1, include both nodes in the cluster. Also select the validation
option for the new cluster.
13 Select VMware Service Lifecycle Manager from the listed services and click Next.
14 Enter the host name and IP address used for VM1. Then assign the RDM to the role.
16 Using Add Resource, add the VMware AFD and VMware vCenter Configuration services to the
role.
Results
You have created an MSCS cluster that can support vCenter Server availability.
What to do next
After you have created the MSCS cluster, verify that failover is occurring by powering off the VM hosting vCenter Server (VM1). In a few minutes, verify that the services are running on the other VM (VM2).