Web Console Guide
Prism 6.10
January 21, 2025
Contents
Help Organization..............................................................................................8
Cluster Management....................................................................................... 38
Prism Element Web Console Overview....................................................................................................38
Logging Into the Prism Element Web Console..............................................................................39
Logging Out of the Prism Element Web Console.......................................................................... 42
Main Menu...................................................................................................................................... 43
Settings Menu.................................................................................................................................45
Home Dashboard............................................................................................................................48
Monitoring Disk Rebuild Progress..................................................................................................53
Monitoring Node Rebuild Progress................................................................................................ 54
Understanding Displayed Statistics................................................................................................ 55
Modifying Cluster Details................................................................................................................56
Modifying UI Settings..................................................................................................................... 58
Finding the AHV Version on Prism Element..................................................................................60
Finding the AOS Version Using Prism Element............................................................................ 61
Prism Central Licensing............................................................................................................................ 61
Software and Firmware Upgrades............................................................................................................ 61
Nutanix Cluster Check (NCC)........................................................................................................ 61
Use Upgrade Software in the Prism Element Web Console (Legacy 1-Click Upgrade).................63
View Task Status............................................................................................................................71
Cluster Resiliency Preference................................................................................................................... 75
Setting Cluster Resiliency Preference (nCLI).................................................................................76
Life Cycle Management (LCM)................................................................................................................. 76
Multi-Cluster Management........................................................................................................................ 76
Installing Prism Central Using 1-Click Method...............................................................................77
Registering or Unregistering a Cluster with Prism Central............................................................ 82
Restoring Prism Central (1-Click Recovery).................................................................................. 86
Single-node Clusters................................................................................................................................. 88
Prerequisites and Requirements.................................................................................................... 90
Read-Only Mode.............................................................................................................................91
Overriding Read-Only Mode...........................................................................................................92
Two-Node Clusters....................................................................................................................................92
Two-Node Cluster Guidelines........................................................................................................ 93
Witness for Two-node Clusters...................................................................................................... 95
Failure and Recovery Scenarios.................................................................................................... 95
Increasing the Cluster Fault Tolerance Level........................................................................................... 97
Replication Factor 1 Overview................................................................................................................ 101
Replication Factor 1 Recommendations and Limitations............................................................. 102
Enabling Replication Factor 1...................................................................................................... 104
Creating a Storage Container with Replication Factor 1..............................................................105
Disabling Replication Factor 1..................................................................................................... 107
CVM Memory Configuration.................................................................................................................... 107
Increasing the Controller VM Memory Size................................................................................. 108
Resource Requirements Supporting Snapshot Frequency (Asynchronous, NearSync, and Metro).......................................................................................................109
Rebooting an AHV or ESXi Node in a Nutanix Cluster..........................................................................110
Viewing Space Used by the Recycle Bin.................................................................................... 152
Clearing Storage Space Used by the Recycle Bin...................................................................... 153
Supported Hardware Platforms for Optimized Database Solution............................................... 248
Networking Configurations for Compute-Only Nodes in Optimized Database Solution................248
Deployment of Compute-Only and Storage-Only nodes in Optimized Database Solution........... 249
Creating a Guest VM Cluster by Directly Attaching a Volume Group (AHV Only)....................... 331
System Management.....................................................................................351
Configuring a Filesystem Whitelist..........................................................................................................351
Configuring Name Servers......................................................................................................................352
Cluster Time Synchronization................................................................................................................. 352
Recommendations for Time Synchronization...............................................................................352
Configuring NTP Servers............................................................................................................. 353
Configuring an SMTP Server.................................................................................................................. 354
Configuring SNMP...................................................................................................................................354
Nutanix MIB.................................................................................................................................. 358
Configuring a Banner Page.................................................................................................................... 364
Registering a Cluster to vCenter Server.................................................................................................365
Unregistering a Cluster from the vCenter Server.........................................................................366
Managing vCenter Server Registration Changes................................................................................... 366
In-Place Hypervisor Conversion..............................................................................................................367
Requirements and Limitations for In-Place Hypervisor Conversion............................................. 368
In-Place Hypervisor Conversion Process.....................................................................................371
Converting Cluster (ESXi to AHV)............................................................................................... 371
Converting Cluster (AHV to ESXi)............................................................................................... 372
Stopping Cluster Conversion........................................................................................................373
Internationalization (i18n)........................................................................................................................ 374
Localization (L10n).................................................................................................................................. 374
Changing the Language Settings.................................................................................................375
Hyper-V Setup......................................................................................................................................... 375
Adding the Cluster and Hosts to a Domain................................................................................. 375
Creating a Failover Cluster for Hyper-V.......................................................................................376
Manually Creating a Failover Cluster (SCVMM User Interface).................................................. 377
Enabling Kerberos for Hyper-V.................................................................................................... 379
Configuring Remote Connection Using CLI................................................................................. 392
Controlling Remote Connections..................................................................................................392
Configuring HTTP Proxy......................................................................................................................... 393
Accessing the Nutanix Support Portal.................................................................................................... 394
Nutanix REST API...................................................................................................................................396
Accessing the REST API Explorer...............................................................................................396
Determining Compatibility Between Hardware and Supported Products................................................397
Copyright........................................................................................................400
HELP ORGANIZATION
This documentation is organized as follows:
Hosts read and write data in shared Nutanix datastores as if they were connected to a SAN. From the perspective of a
hypervisor host, the only difference is the improved performance that results from data not traveling across a network.
VM data is stored locally, and replicated on other nodes for protection against hardware failure.
When a guest VM submits a write request through the hypervisor, that request is sent to the Controller VM on the
host. To provide a rapid response to the guest VM, this data is first stored on the metadata drive, within a subset of
storage called the oplog. This cache is rapidly distributed across the 10 GbE network to other metadata drives in
the cluster. Oplog data is periodically transferred to persistent storage within the cluster. Data is written locally for
performance and replicated on multiple nodes for high availability.
When the guest VM sends a read request through the hypervisor, the Controller VM reads from the local copy first,
if present. If the host does not contain a local copy, then the Controller VM reads across the network from a host that
does contain a copy. As remote data is accessed, the remote data is migrated to storage devices on the current host, so
that future read requests can be local.
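This read path can be summarized in pseudocode. The following Python sketch is purely illustrative (it is not Nutanix code): it models the decision of serving an extent from a local replica when one exists, and otherwise reading it from a remote replica and then migrating it so that subsequent reads are local.

def read_extent(extent_id, local_store, remote_stores):
    # Illustrative model of the read path described above; not actual product code.
    if extent_id in local_store:
        # Optimal path: the requested data already has a replica on this host.
        return local_store[extent_id]
    for store in remote_stores:
        if extent_id in store:
            data = store[extent_id]          # read once across the network
            local_store[extent_id] = data    # migrate the extent to local storage
            return data                      # future reads are now served locally
    raise KeyError("extent %s not found on any node" % extent_id)

# Example: the first read is remote, the second is local.
local, remote = {}, [{"e1": b"guest vm data"}]
read_extent("e1", local, remote)
assert "e1" in local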
Live Migration
Live migration of VMs, whether it is initiated manually or through an automatic process like vSphere DRS, is fully
supported by the Nutanix Enterprise Cloud Computing Platform. All hosts within the cluster have visibility into
shared Nutanix datastores through the Controller VMs. Guest VM data is written locally, and is also replicated on
other nodes for high availability.
If a VM is migrated to another host, future read requests are sent to a local copy of the data, if it exists. Otherwise,
the request is sent across the network to a host that does contain the requested data. As remote data is accessed, the
remote data is migrated to storage devices on the current host, so that future read requests can be local.
High Availability
The built-in data redundancy in a Nutanix cluster supports high availability provided by the hypervisor. If a node
fails, all HA-protected VMs can be automatically restarted on other nodes in the cluster. The hypervisor management
system, such as vCenter, selects a new host for the VMs, which may or may not contain a copy of the VM data.
If the data is stored on a node other than the VM's new host, then read requests are sent across the network. As remote
data is accessed, the remote data is migrated to storage devices on the current host, so that future read requests can
be local. Write requests are sent to the local storage and are also replicated on a different host. During this interaction, the
Nutanix software also creates new copies of pre-existing data, to protect against future node or disk failures.
The Nutanix cluster automatically selects the optimal path between a hypervisor host and its guest VM data. The
Controller VM has multiple redundant paths available, which makes the cluster more resilient to failures.
When available, the optimal path is through the local Controller VM to local storage devices. In some situations, the
data is not available on local storage, such as when a guest VM was recently migrated to another host. In those cases,
the Controller VM directs the read request across the network to storage on another host through the Controller VM
of that host.
Data Path Redundancy also responds when a local Controller VM is unavailable. To maintain the storage path, the
cluster automatically redirects the host to another Controller VM. When the local Controller VM comes back online,
the data path is returned to this VM.
The Nutanix cluster has a distributed architecture, which means that each node in the cluster shares in the
management of cluster resources and responsibilities. Within each node, there are software components that perform
specific tasks during cluster operation.
All components run on multiple nodes in the cluster, and depend on connectivity between their peers that also run the
component. Most components also depend on other components for information.
Zeus
A key element of a distributed system is a method for all nodes to store and update the cluster's configuration. This
configuration includes details about the physical components in the cluster, such as hosts and disks, and logical
components, like storage containers. The state of these components, including their IP addresses, capacities, and data
replication rules, are also stored in the cluster configuration.
Zeus is the Nutanix library that all other components use to access the cluster configuration, which is currently
implemented using Apache Zookeeper.
Zookeeper
Zookeeper runs on either three or five nodes, depending on the redundancy factor that is applied to the cluster. Using
multiple nodes prevents stale data from being returned to other components, while having an odd number provides a
method for breaking ties if two nodes have different information.
One of these Zookeeper nodes is elected as the leader. The leader receives all requests for information and confers with the follower nodes. If the leader stops responding, a new leader is elected automatically.
Zookeeper has no dependencies, meaning that it can start without any other cluster components running.
Cassandra
Cassandra is a distributed, high-performance, scalable database that stores all metadata about the guest VM data
stored in a Nutanix datastore. In the case of NFS datastores, Cassandra also holds small files saved in the datastore.
When a file reaches 512K in size, the cluster creates a vDisk to hold the data.
Cassandra runs on all nodes of the cluster. These nodes communicate with each other once a second using the Gossip
protocol, ensuring that the state of the database is current on all nodes.
Cassandra depends on Zeus to gather information about the cluster configuration.
Stargate
A distributed system that presents storage to other systems (such as a hypervisor) needs a unified component for receiving and processing the data that is sent to it. The Nutanix cluster has a large software component called Stargate that manages this responsibility.
From the perspective of the hypervisor, Stargate is the main point of contact for the Nutanix cluster. All read and
write requests are sent across vSwitchNutanix to the Stargate process running on that node.
Stargate depends on Medusa to gather metadata and Zeus to gather cluster configuration data.
Tip: If Stargate cannot reach Medusa, the log files include an HTTP timeout. Zeus communication issues can include a
Zookeeper timeout.
Curator
In a distributed system, it is important to have a component that watches over the entire process. Otherwise, metadata
that points to unused blocks of data could pile up, or data could become unbalanced, either across nodes, or across
disk tiers.
In the Nutanix cluster, each node runs a Curator process that handles these responsibilities. A Curator master node periodically scans the metadata database and identifies cleanup and optimization tasks that Stargate or other components should perform. The metadata analysis is shared across the other Curator nodes using a MapReduce algorithm.
Curator depends on Zeus to learn which nodes are available, and Medusa to gather metadata. Based on that analysis,
it sends commands to Stargate.
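Conceptually, a Curator scan behaves like a MapReduce job over the metadata: a map phase emits a record for each replica of an extent, and a reduce phase counts the replicas and flags extents that fall below the desired replication factor. The Python sketch below is a simplified illustration of that idea only; it is not the Curator implementation, and the metadata layout and task format shown here are invented for the example.

from collections import defaultdict

def map_phase(metadata_rows):
    # Emit (extent_id, 1) for every replica recorded in the metadata.
    for row in metadata_rows:
        for _node in row["replica_nodes"]:
            yield row["extent_id"], 1

def reduce_phase(pairs, desired_rf=2):
    # Count replicas per extent and flag anything that is under-replicated.
    counts = defaultdict(int)
    for extent_id, one in pairs:
        counts[extent_id] += one
    return [{"task": "replicate", "extent": e, "have": c, "want": desired_rf}
            for e, c in counts.items() if c < desired_rf]

metadata = [
    {"extent_id": "e1", "replica_nodes": ["A", "B"]},
    {"extent_id": "e2", "replica_nodes": ["C"]},  # lost a replica, for example after a node failure
]
print(reduce_phase(map_phase(metadata)))  # one "replicate" task for extent e2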
Prism
A distributed system is worthless if users can't access it. Prism provides a management gateway for administrators to
configure and monitor the Nutanix cluster. This includes the nCLI and Prism Element web console.
Prism runs on every node in the cluster, and like some other components, it elects a leader. All requests are forwarded
from followers to the leader using Linux iptables. This allows administrators to access Prism using any Controller
VM IP address. If the Prism leader fails, a new leader is elected.
Prism communicates with Zeus for cluster configuration data and Cassandra for statistics to present to the user. It also
communicates with the ESXi hosts for VM status and related information.
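Because every Controller VM forwards Prism requests to the current leader, API clients can target any Controller VM IP address or the cluster virtual IP. The Python sketch below illustrates this with the v2.0 REST API; the endpoint path, the placeholder address and credentials, and the response fields shown are assumptions to adapt for your environment, and certificate verification should be enabled once a trusted SSL certificate is installed.

import requests
from requests.auth import HTTPBasicAuth

CVM_IP = "10.0.0.10"  # any Controller VM IP address or the cluster virtual IP (placeholder)
AUTH = HTTPBasicAuth("admin", "password")  # placeholder credentials

# The request lands on whichever CVM is addressed and is forwarded to the Prism leader.
resp = requests.get(
    "https://%s:9440/api/nutanix/v2.0/cluster" % CVM_IP,
    auth=AUTH,
    verify=False,  # only while the default self-signed certificate is in use
    timeout=30,
)
resp.raise_for_status()
cluster = resp.json()
print(cluster.get("name"), cluster.get("version"))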
Node Failure
A Nutanix node consists of a physical host and a Controller VM. Either component can fail without impacting the rest of the cluster.
Controller VM Failure
The Nutanix cluster monitors the status of Controller VMs in the cluster. If any Stargate process fails to respond two
or more times in a 30-second period, another Controller VM redirects the storage path on the related host to another
Stargate. Reads and writes occur over the 10 GbE network until the missing Stargate comes back online.
To prevent constant switching between Stargates, the data path is not restored until the original Stargate has been
stable for 30 seconds.
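The timing rules described above (redirect after repeated missed responses, restore only after 30 seconds of stability) can be pictured as a small state machine. The Python sketch below is a conceptual model only; the real monitoring logic, intervals, and thresholds are internal to AOS and are not exposed in this form.

class StargatePathMonitor:
    # Conceptual model of the failover/failback timing; not product code.
    def __init__(self, miss_threshold=2, stable_seconds=30):
        self.miss_threshold = miss_threshold
        self.stable_seconds = stable_seconds
        self.misses = 0
        self.redirected = False
        self.healthy_since = None

    def record_health_check(self, responded, now):
        if not responded:
            self.misses += 1
            self.healthy_since = None
            if self.misses >= self.miss_threshold:
                self.redirected = True  # storage path moves to another Stargate
        else:
            self.misses = 0
            if self.redirected:
                if self.healthy_since is None:
                    self.healthy_since = now
                elif now - self.healthy_since >= self.stable_seconds:
                    self.redirected = False  # original Stargate has been stable long enough
        return self.redirected

monitor = StargatePathMonitor()
monitor.record_health_check(False, now=0)
print(monitor.record_health_check(False, now=5))   # True: path redirected after two misses
monitor.record_health_check(True, now=10)
print(monitor.record_health_check(True, now=45))   # False: restored after 30+ seconds of stability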
A CVM failure may include a user powering down the CVM, a CVM rolling upgrade, or any event that might bring down the CVM. In any of these cases, the storage traffic is served by another CVM in the cluster. The hypervisor and CVM communicate using a private network on a dedicated virtual switch. This means that the entire storage traffic for the host is routed over the external network to another CVM in the cluster until the local CVM becomes available again.
Host Failure
The built-in data redundancy in a Nutanix cluster supports high availability provided by the hypervisor. If a node
fails, all HA-protected VMs can be automatically restarted on other nodes in the cluster.
Curator and Stargate respond to two issues that arise from the host failure. First, when the guest VM begins reading across the network, Stargate begins migrating those extents to the new host. This improves performance for the guest VM. Second, Curator notices that a replica of those extents is missing and instructs Stargate to begin creating a second replica.
Drive Failures
Drives in a Nutanix node store four primary types of data: persistent data (hot-tier and cold-tier), storage metadata,
oplog, and Controller VM boot files. Cold-tier persistent data is stored in the hard-disk drives of the node. Storage
metadata, oplog, hot-tier persistent data, and Controller VM boot files are kept in the SATA-SSD in drive bay one.
In a dual-SSD system, the SSDs are used for storage metadata, oplog, and hot-tier persistent data according to the replication factor of the system, and in a RAID-1 configuration for Controller VM files. In all-flash nodes, data of all types is stored in the SATA-SSDs.
Note: On hardware platforms that contain PCIe-SSD drives, the SATA-SSD holds only the Controller VM boot files. Storage metadata, oplog, and hot-tier persistent data reside on the PCIe-SSD.
Note: The Controller VM might restart under certain rare conditions on dual SSD nodes if a boot drive fails, or if you
unmount a boot drive without marking the drive for removal and the data has not successfully migrated.
Note: The Controller VM restarts if a data drive fails, or if you remove a data drive without marking the drive for
removal and the data has not successfully migrated.
Note:
• If a sufficient number of working racks is not present after a failure, the data is rebuilt in a non-rack-aware manner to bring the node fault tolerance level back to one. The rack fault tolerance level stays at 0.
Note: Prism Central workflows are not applicable for rack fault tolerance using Hyper-V because Hyper-V support is not available for Prism Central.
• You must have information on the actual physical mapping of racks and blocks in the datacenter.
• Minimum cluster requirements:
Procedure
Note: Directory List (AD and OpenLDAP) users can view and configure Rack Awareness only if a service account
is configured for the directory service. For more information about how to configure a service account for the
directory service, see Configuring Authentication in the Security Guide.
4. Click + Add New Rack, and enter the name of the rack in the Rack Name field.
Note:
• Once rack fault tolerance is enabled, create, edit, and delete operations are not allowed. To make any new modifications, you must re-configure the fault tolerance level as Node. For more information, see Configuring Node Fault Tolerance on page 28.
• Enabling rack fault tolerance may trigger Cassandra ring change operations and Curator scans to redistribute replicas for rack fault tolerance. The time taken for these operations depends on the existing data in the cluster, the workload on the cluster, and the configuration.
The Data Resiliency widget on the Prism dashboard shows the current state of the fault tolerance.
Note:
• OK: This state indicates that the fault tolerance domain is highly resilient to safely handle a node or
a disk (in single or two node clusters) failure.
• Warning: This state indicates that the fault tolerance level is approaching 0. The Warning state is displayed if the cluster is not fault tolerant at the configured domain, but is fault tolerant at a lower domain. For example, if rack is the configured domain and the cluster can no longer handle any rack failures but can still handle node (lower domain) failures, then the fault tolerance state is displayed as Warning.
• Critical: This state indicates that the fault tolerance level is 0, and the fault tolerance domain cannot
handle a node or a disk (in single or two node clusters) failure.
• Computing: This state indicates that the new fault tolerance level is being calculated. This state is
displayed soon after a node or disk failure, before rebuild is initiated.
Nutanix uses a data resiliency factor also known as replication factor (RF) to ensure data redundancy and availability,
which is based upon the cluster fault tolerance (FT) level (FT1 and FT2). The following table provides the metadata
and data RF values for the corresponding FT level:
Disk | 1* | 1 | 1 | 1 | 1 | None
Disk | 2 | 2 | 1 | 1 | 1 | 1 disk failure
Node | 3 | Not applicable | 5 | 2 | 1 | 2 nodes or 2 disk failures
Block | 3 | Not applicable | 5 | 5 | 1 | 2 nodes or 2 blocks or 2 disk failures
Rack | 2 | Not applicable | 3 | 3 | 3 | 1 node or 1 block or 1 rack or 1 disk failure
Rack | 3 | Not applicable | 5 | 5 | 5 | 2 nodes or 2 blocks or 2 racks or 2 disk failures
* Before using RF1, see KB-11291 for RF1 guidelines and limitations.
Node | 3 | 6* | 2 | 1 | 2 nodes or 2 disk failures
Block | 2 | 4 | 4* | 1 | 1 node or 1 block or 1 disk failure
Block | 3 | 6 | 6* | 1 | 2 nodes or 2 blocks or 2 disk failures
Rack | 2 | 4 | 4 | 4* | 1 node or 1 block or 1 rack or 1 disk failure
Rack | 3 | 6 | 6 | 6* | 2 nodes or 2 blocks or 2 racks or 2 disk failures
* Minimums that are required to enable erasure coding on new containers in rack aware clusters.
Caution: If rack awareness is enabled on a cluster with erasure coding (EC) functionality enabled, the cluster might
need to reduce the EC strip size to comply with the new fault tolerance configuration. This leads to an increased usage
of space in the cluster temporarily, due to the unpacking of EC strips. After the re-packing of EC strips is complete,
some amount of savings will be lost (depending on initial and final EC strip sizes).
For example, an EC-enabled 12-node cluster has an EC strip size of 4/2 (4 data blocks and 2 parity blocks). If rack awareness is enabled on this cluster with 6 racks and 2 nodes in each rack, the EC strip size changes to 2/2 (2 data blocks and 2 parity blocks), resulting in lower savings to accommodate the new fault tolerance configuration.
For more information about EC strip sizes based on cluster size, see the Nutanix Erasure Coding
solutions documentation.
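The space impact in the example above follows from simple arithmetic: an erasure-coded strip with d data blocks and p parity blocks consumes (d + p) / d units of physical space per unit of logical data, whereas replication factor n consumes n units. The short Python calculation below is only an illustration of that arithmetic.

def ec_overhead(data_blocks, parity_blocks):
    # Physical space consumed per unit of logical data for an erasure-coded strip.
    return (data_blocks + parity_blocks) / data_blocks

print(ec_overhead(4, 2))  # 1.5x, roughly 25% savings compared to RF2 (2.0x)
print(ec_overhead(2, 2))  # 2.0x, the same overhead as RF2, so the EC savings are gone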
Procedure
4. Optionally, click Add New Rack to specify rack and block mapping.
• Every storage tier in the cluster contains at least one drive on each block.
• Every storage container in the cluster has replication factor of at least two.
• For replication factor 2, there are a minimum of three blocks in the cluster.
• There is enough free space in all the tiers, in at least as many blocks as the replication factor. For example, if the replication factor of storage containers in the cluster is two, then at least two blocks require free space.
• A minimum of four blocks for RF2 or six blocks for RF3 is required to maintain block awareness if erasure coding
is enabled on any storage container. (If the cluster has fewer blocks, block awareness is lost when erasure coding
is enabled.)
Note: This is not applicable for single-node replication target clusters. For more information about how single-node
replication target clusters handle failures, see Single-Node Replication Target Clusters in the Data Protection
and Recovery with Prism Element guide.
Once block fault tolerance conditions are met, the cluster can tolerate a specific number of block failures:
• A replication factor two or replication factor three cluster with three or more blocks can tolerate a
maximum failure of one block.
• A replication factor three cluster with five or more blocks can tolerate a maximum failure of two blocks.
Block fault tolerance is one part of a resiliency strategy. It does not remove other constraints such as the availability
of disk space and CPU/memory resources in situations where a significant proportion of the infrastructure is
unavailable.
Replication factor 2
• 3 blocks
• 1 node per block
Replication factor 3
• 5 blocks
• 1 node per block
Table 4: Additional Requirements when Adding Nodes to an Existing Block Aware Cluster
Note: Be sure your cluster has met the previous minimum cluster requirements.
Replication factor 2
Requirement: There must be at least 3 blocks populated with a specific number of nodes to maintain block fault tolerance. To calculate the number of nodes required to maintain block fault tolerance when the cluster RF=2, you need twice the number of nodes in the remaining blocks as there are in the block with the most or maximum number of nodes.
Example: If a block contains 4 nodes, you need 8 nodes distributed across the remaining (non-failing) blocks to maintain block fault tolerance for that cluster. X = number of nodes in the block with the most nodes; in this case, 4 nodes in a block. 2X = 8 nodes in the remaining blocks.
Replication factor 3
Requirement: There must be at least 5 blocks populated with a specific number of nodes to maintain block fault tolerance. To calculate the number of nodes required to maintain block fault tolerance when the cluster replication factor is 3, you need four times the number of nodes in the remaining blocks as there are in the block with the most or maximum number of nodes.
Example: If a block contains 4 nodes, you need 16 nodes distributed across the remaining (non-failing) blocks to maintain block fault tolerance for that cluster. X = number of nodes in the block with the most nodes; in this case, 4 nodes in a block. 4X = 16 nodes in the remaining blocks.
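The 2X and 4X rules in the table above can be expressed as a small helper. The Python sketch below only restates the arithmetic from the table; the function name and structure are invented for illustration.

def nodes_required_in_remaining_blocks(max_nodes_in_a_block, replication_factor):
    # Minimum nodes needed outside the largest block to maintain block fault tolerance.
    multiplier = {2: 2, 3: 4}[replication_factor]  # RF2 -> 2X, RF3 -> 4X
    return multiplier * max_nodes_in_a_block

print(nodes_required_in_remaining_blocks(4, 2))  # 8, matching the replication factor 2 example
print(nodes_required_in_remaining_blocks(4, 3))  # 16, matching the replication factor 3 example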
Guest VM Data
Redundant copies of guest VM data are written to nodes in blocks other than the block that contains the node where
the VM is running. The cluster keeps two copies of each write stored in the oplog.
In the case of a block failure, the under-replicated guest VM data is copied to other blocks in the cluster, and one copy
of the oplog contents is available.
Metadata
The Nutanix Medusa component uses Cassandra to store metadata. Cassandra uses a ring-like structure where data is
copied to peers within the ring to ensure data consistency and availability. The cluster keeps at least three redundant
copies of the metadata, at least half of which must be available to ensure consistency.
With block fault tolerance, the Cassandra peers are distributed among the blocks to ensure that no two peers are on the same block. In the event of a block failure, at least two copies of the metadata are present in the cluster.
Configuration Data
The Nutanix Zeus component uses Zookeeper to store essential configuration data for the cluster. The Zookeeper role
is distributed across blocks to ensure availability in the case of a block failure.
The following table shows the level of data resiliency (simultaneous failure) provided for the following
combinations of replication factor, minimum number of nodes, and minimum number of blocks.
The state of block fault tolerance is available for viewing through the Prism Element web console and
Nutanix CLI. Although administrators must set up the storage tiers or storage containers, they cannot
determine where data is migrated. AOS determines where data is migrated.
Prism Element web console: Data Resiliency Status view on the Home screen
nCLI: ncli> cluster get-domain-fault-tolerance-status type="rackable_unit"
Procedure
4. Optionally, click Add New Rack to specify rack and block mapping.
Nutanix recommends configuring rack and block mapping if you want to enable rack awareness at a later stage.
Redundancy Factor 3
Redundancy factor 3 is a configurable option that allows a Nutanix cluster to withstand the failure of two
nodes or drives in different blocks.
By default, Nutanix clusters have redundancy factor 2, which means they can tolerate the failure of a single node
or drive. The larger the cluster, the more likely it is to experience multiple failures. Without redundancy factor 3,
multiple failures cause cluster unavailability until the failures are repaired.
Redundancy factor 3 has the following requirements:
• A cluster must have at least five nodes, blocks, or racks for redundancy factor 3 to be enabled.
• For guest VMs to tolerate the simultaneous failure of two nodes or drives in different blocks, the data must be
stored on storage containers with replication factor 3.
The state of fault tolerance is available to view through the management interfaces.
Prism Element web console: Data Resiliency Status view on the Home screen
nCLI: ncli> cluster get-redundancy-state
Guest VM Data
For storage containers with replication factor 3, the cluster stores three redundant copies of guest VM data and the
oplog.
Redundant copies of the guest VM data are stored on different nodes in the cluster.
In the case of two nodes failing, at least one copy of all guest VM data, including the oplog, is available. Under-
replicated VM data is copied to other nodes.
Metadata
At least half of the redundant copies of metadata must be available to ensure consistency. Without redundancy factor
3, the cluster keeps three copies of metadata. With redundancy factor 3, the cluster keeps five copies of metadata so
that if two nodes fail, at least three copies are available.
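The copy counts follow from the quorum rule described earlier: at least half of the redundant metadata copies must remain available to ensure consistency. The Python lines below only restate that arithmetic as a strict majority.

def metadata_copies_required(total_copies):
    # Strict majority of the redundant metadata copies.
    return total_copies // 2 + 1

print(metadata_copies_required(3))  # 2 copies must remain available without redundancy factor 3
print(metadata_copies_required(5))  # 3 copies must remain available with redundancy factor 3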
Configuration Data
The Nutanix Zeus component uses Zookeeper to store essential configuration data for the cluster. Without
redundancy factor 3, the cluster keeps three copies of configuration data. With redundancy factor 3, the cluster keeps
five copies of configuration data so that if two nodes fail, at least three copies are available.
Degraded Node
A degraded node contains a CVM that is in a partially unresponsive state and cannot reliably perform cluster
operations at the same performance level as normal clusters.
Nutanix clusters are designed to provide fault tolerance using container settings such as replication factor. A fault tolerance setting such as replication factor does not by itself guarantee protection from partial failures or degraded nodes. For example, a single node containing a disk that exhibits high latency (but is partially responsive) can equally produce downtime for workloads running in clusters with RF2 and RF3 settings.
Degraded node events can occur because physical hardware can fail in innumerable ways. Diagnostics can capture
only some of these events in software-defined failure cases. Degradation can lead to downtime of some or all
production workloads running on the cluster.
The following events could contribute to node degradation:
Degradation Effects
Nutanix clusters are designed to distribute the cluster services across the Controller VMs on all the constituent nodes
to offer the best possible performance and resource allocation for the workloads. This distribution of services can
cause a degraded (or partially available) node to adversely affect the performance of a cluster until you resolve the
degradation.
Hades Service: Tests any drive that is marked offline by Stargate to check its S.M.A.R.T. health status and whether it is mountable.
Failure behaviors that contribute to degradation include panic or halt, core dump, restart or hang, and logging errors.
Note:
* Local handlers could have other dependencies on the cluster. Their failure-handling abilities may not be
available during events where more than one node is impacted by a particular failure.
**Response logic depends on the features available in the specific software or firmware versions deployed and may be subject to change. Nutanix recommends that you consider upgrading to the latest AOS maintenance release for the latest fault-tolerance capabilities.
• The cluster was Resilient prior to the event and could tolerate a node failure.
• The partial failure occurs only on one node in the cluster.
• The adverse effects of the partial failure persist for at least 5 minutes.
Detection
Degraded Node Detection works as follows:
1. When a degraded node is detected, alerts are generated.
See the alerts in the Alerts page. For more information on these alerts, see KB 3827 and KB 9132.
The alerts trigger log collection to aid root cause analysis. See the Log Collection for Alert XXXX on the
Tasks page. Click the Succeeded link to download the logs.
2. If the degraded node had leadership of critical services including Prism, the leadership is revoked.
3. Insights Data Fabric (IDF) services do not run on the degraded node.
Note: Do not manually change the degraded status of the node (using the Mark as Fixed button in Prism)
if you have not confirmed that the issue causing the node or CVM degradation has been fixed. Manually marking a
degraded node as fixed can lead to production impacting issues including downtime of the cluster. Contact Nutanix
Support for assistance.
Procedure
2. Click the gear icon in the main menu and then select Degraded Node Settings in the Settings page.
The Degraded Node Settings dialog box appears.
3. Set the degraded node detection status, Enable Degraded Node Detection.
a. Resolve: Select this option after you have fixed the underlying issue.
Note: Selecting Resolve does not resolve the degraded node issue. It merely marks the alert status as
Resolved, indicating you or Nutanix Support have resolved the underlying issue that triggered the alert.
b. Acknowledge: Select this option to leave the degraded node in its current state, keep the alert unresolved,
and mark the alert as acknowledged.
4. To run an NCC check for the degraded node, go to the command line interface and enter
nutanix@cvm$ ncc health_checks system_checks degraded_node_check
The Prism Element web console does not remove the warning sign for the degraded node.
5. Caution: Do not unmark the degraded node if you have not confirmed that the issue causing the node or CVM
degradation has been fixed. Unmarking a node can lead to production impacting issues in the cluster. Contact
Nutanix Support for assistance.
To unmark the degraded node, in the Summary view, select the fixed node and click Mark as Fixed.
• If you have a pure hypervisor cluster (cluster with only one type of hypervisor), you must adhere to the maximum
number of hosts allowed by Nutanix Configuration Maximums for AHV or Nutanix Configuration Maximums
for ESXi or Nutanix Configuration Maximums for Hyper-V.
• If your cluster consists of multiple hypervisors (mixed hypervisor cluster), you must adhere to the minimum
number of hosts specified for any of the hypervisors in your cluster. For example, if your cluster consists of AHV
and ESXi, then you are allowed to have the number of hosts specified for either AHV or ESXi, whichever is less.
For more information, see Nutanix Configuration Maximums for AHV, Nutanix Configuration Maximums for
ESXi, and Nutanix Configuration Maximums for Hyper-V.
• In a break-fix or generational upgrade (for example moving from NX G5 to G8 platform model) scenario, if you
must exceed the number of hosts beyond the configuration maximum, contact Nutanix Support to temporarily
allow the cluster extension beyond the limit. Clusters must always adhere to the size limits to ensure optimal
performance and stability and you are allowed to exceed the limits only temporarily for these scenarios as an
exception. You are expected to follow the size limits once the node is healthy or upgrades are complete. For more
information, see KB 12681.
Nutanix clusters are also subject to the vSphere maximum values documented by VMware. For more information
about the list of the vSphere maximums, see Configuration Maximums for the version of vSphere you are running.
• The Prism Element web console is a graphical user interface (GUI) that allows you to monitor cluster operations
and perform a variety of configuration tasks. For more information, see Prism Element Web Console
Overview on page 38.
• Nutanix employs a license-based system to enable your entitled Nutanix features, and you can install or regenerate
a license through the Prism Element web console. For more information, see Prism Central Licensing on
page 61.
• You can upgrade a cluster when a new AOS release is available through the Prism Element web console. For more
information, see the Life Cycle Manager Guide.
• If you have multiple clusters, you can manage them all through a single web interface. For more information, see
Multi-Cluster Management on page 76.
Note: You can perform most administrative actions using either the Prism Element web console or nCLI. However,
some tasks are only supported in the nCLI either because a new feature has not yet been incorporated into the
Prism Element web console or the task is part of an advanced feature that most administrators do not need to use.
For more information about how to use the nCLI, see Command Reference. For information about platform
configuration and hypervisor-specific tasks that are not performed through the Prism Element web console, see AHV
Administration Guide and hypervisor-specific guides.
Display Features
The Prism Element web console screens are divided into the following sections:
• Main menu bar. The main menu bar appears at the top of every screen in the Prism Element web console. The
cluster name appears on the far left of the main menu bar. To the right of the cluster name, you can select an entity
from the pull-down list (Home, Health, VM, Storage, Network, Hardware, File Server, Data Protection, Analysis,
Alerts, Tasks, LCM, Settings) to display information about that entity. You can also search for specific topics or select various tasks from the pull-down lists on the right side of the main menu bar. In addition, the main menu bar includes status icons for quick access to health, alert, and event information. For more information, see Main Menu on page 43.
• Entity views. There is a dashboard view for each entity. Some entities (VM, Storage, Hardware, and Data
Protection) include additional views such as a diagram or table view that you can select from the dashboard of that
entity.
• Screen menu bar. Some entity dashboards include another menu bar below the main menu that provides options
specific to that screen. In the following example from the Storage dashboard, three view tabs (Overview,
Diagram, and Table) and three task buttons (+ Storage Container, + Volume Group, and + Storage
Pool) appear on this menu bar.
• Usage and performance/health statistics. Most views include fields that provide usage and either performance or health (or both) statistics. The usage and performance/health statistics vary based on the entity that you are viewing. For example, virtual machine usage statistics are displayed in terms of CPU and memory, while disk usage statistics are displayed in terms of storage capacity.
Procedure
Note:
Prism Element web console supports the latest version, and the two preceding major versions of Firefox,
Chrome, Safari, and Microsoft Edge browsers.
2. Enter http://management_ip_addr in the address field and press Enter. Replace management_ip_addr with the cluster virtual IP address (if configured) or the IP address of any Nutanix Controller VM in the cluster.
Note: If you are logging into Prism Central, enter the Prism Central VM IP address.
The browser redirects to the encrypted port (9440) and may display an SSL certificate warning. Acknowledge
the warning and proceed to the site. If user authentication is enabled and the browser does not have the correct
certificate, a denied access message may appear. For more information, see Configuring Authentication.
3. If a welcome screen appears, read the message, and then click the Accept terms and conditions bar at the bottom.
For more information on the welcome screen, see Configuring a Banner Page on page 364.
Note: If you are using LDAP authentication, enter the user name in the samAccountName@domain format; the domain\user format is not supported. (Authentication does not use the user principal name [UPN]; user@domain is simply a concatenation of the user and domain names specified in Configuring Authentication.)
Note: The login page includes background animation that is enabled by default. Click the Freeze space time
continuum! link at the bottom right of the login screen to disable the animation (or the Engage the warp
drive! link to enable the animation). For information on how to permanently disable (or enable) the animation, see
Modifying UI Settings on page 58.
5. If you are logging in as an administrator (admin user name and password) for the first time, which requires that
the default password (Nutanix/4u) be changed, enter a new password in the password and re-type password
fields and then press Enter or click the right arrow icon.
The password must meet the following complexity requirements:
Note:
• You are prompted to change the password when logging in as the admin user for the first time after
upgrading AOS. If the first login after upgrade is to the Controller VM through SSH (instead of
Prism), you must log in using the default admin user password (Nutanix/4u) and then change the
password when prompted.
• The default password expiration age for the admin user is 60 days. You can configure the minimum
and maximum password expiration days based on your security requirement.
• When you change the admin user password, update any applications and scripts using the admin
user credentials for authentication. Nutanix recommends that you create a user assigned with the
admin role instead of using the admin user for authentication.
7. If a Pulse will be Enabled screen appears (typically on the first login or after an upgrade), read the statement and
then do one of the following. This screen refers to the Pulse feature that alerts Nutanix customer support regarding
the health of the cluster. For more information, see Pulse Health Monitoring on page 383.
Caution: If Pulse is not enabled, alerting Nutanix customer support when problems occur is disabled.
Pulse provides Nutanix customer support with analytic information that allows them to dynamically monitor
your cluster and identify potential issues before they become problems. For more information, see Remote
Diagnostics on page 384. Enabling Pulse is recommended unless providing cluster information to Nutanix
customer support violates your security policy.
8. If a screen about enhanced cluster health monitoring appears (typically either after enabling Pulse in the previous
step or after upgrading a cluster), read the statement and then do one of the following:
» Click the Yes (recommended) button to enable enhanced cluster health monitoring.
» Click the Not Now button to disable this feature.
The enhanced (on top of standard Pulse) cluster health monitoring provides Nutanix customer support with
more detailed (but more transparent) information that allows them to better monitor the health of your cluster.
9. [2-node clusters only] If a screen about registering with a Witness appears, see Witness Option in the Data
Protection and Recovery with Prism Element guide.
Procedure
To log out from the Prism Element web console, click the user icon in the main menu and then select the
Sign Out option from the dropdown list. You are logged out immediately after selecting the option (no prompt or
message).
Cluster Information
The cluster name appears on the far left of the main menu bar. Clicking the cluster name opens the Cluster Details
window. This window displays the cluster ID number, cluster name, and cluster virtual IP address (if set). You
can modify the name or virtual IP address at any time. For more information, see Modifying Cluster Details on
page 56.
View Options
Selecting a view (entity name) from the pull-down list on the left displays information about that entity. Select from
the following options:
Note: These views reflect that Prism Element retains alerts and events, and raw metric values for 90 days.
• A health (heart) icon appears on the left of the main menu. It can be green (healthy), yellow (warning), or red (unhealthy), indicating the current health status. Clicking the icon displays the health details view. For more information, see Health Dashboard on page 254.
• An alerts (bell) icon appears on the left of the main menu when critical (red), warning (yellow), or informational
(gray) alert messages have been generated and have not been marked as resolved. The number of unresolved alerts
is displayed in the icon. Click the icon to display a drop-down list of the most recent unresolved alerts. Click
an alert or the right arrow link to open the alerts view. For more information, see Alerts Dashboard in Prism
Element Alerts and Events Reference Guide.
• A tasks (circle) icon appears to the right of the alerts icon. The number of active tasks (running or completed within the last 48 hours) is displayed in the icon. Click the icon to display a drop-down list of the active tasks.
Help Menu
A question mark icon appears on the right side of the main menu. Clicking the question mark displays a list of help
resource options that you can select. The following table describes each option in the pull-down list.
Name Description
Help with this page Opens the online help at the page that describes this screen. For more
information, see Accessing Online Help on page 398.
Health Tutorial Opens the Health dashboard tutorial that takes you through a guided
tour of the health analysis features. For more information, see Health
Dashboard on page 254.
Support Portal Opens a new browser tab (or window) at the Nutanix support portal login
page. For more information, see Accessing the Nutanix Support Portal on
page 394.
Nutanix Next Community Opens a new browser tab (or window) at the Nutanix Next Community
entry page. This is an online community site for customers and partners
to exchange ideas, tips, and information about Nutanix technologies and
related datacenter topics. For more information, see Accessing the Nutanix
Next Community on page 399.
User Menu
A user icon appears on the far right side of the main menu with the current user login name. Clicking the user
icon displays a list of options to update your user account, log out from the Prism Element web console, and other
miscellaneous tasks. The following table describes each option in the dropdown list.
Name Description
Update Profile Opens the Update Profile window to update your user name and email
address. For more information, see Updating My Account.
Change Password Opens the Change Password window to update your password. For more
information, see Updating My Account.
REST API Explorer Opens a new browser tab (or window) at the Nutanix REST API Explorer
web page. For more information, see Accessing the REST API Explorer on
page 396.
Download nCLI Downloads the Nutanix command line interface (nCLI) as a zip file to
your local system. The download occurs immediately after clicking this
option (no additional prompts). For more information about installing the
nCLI locally and for nCLI command descriptions, see Nutanix Command
Reference.
Download Cmdlets Installer Downloads the PowerShell installer for the Nutanix cmdlets. For more
information about the cmdlets, see PowerShell Cmdlets Reference.
Download Prism Central Opens a new browser tab (or window) at the Support Tools page of the
Nutanix support portal from which you can download the files to install
Prism Central. If a login page appears, enter your Nutanix support portal
credentials to access the portal. For more information, see Prism Central
Infrastructure Guide.
About Nutanix Opens a window that displays Nutanix operating system (AOS) and other
version information. For more information, see Finding the AOS Version
Using Prism Element on page 61.
Nothing To Do? Opens a game that is strictly for entertainment. To quit the game, click the
X at the upper right of the screen.
Sign Out Logs out the user from the Prism Element web console. For more
information, see Logging Out of the Prism Element Web Console on
page 42.
Settings Menu
The Prism Element web console includes a Settings page from which you can configure a variety of
system services.
You can access the Settings page by doing either of the following:
Name Description
General
Cluster Details Opens the Cluster Details window to view or modify certain cluster
parameters. For more information, see Modifying Cluster Details on
page 56.
Configure CVM Opens the Configure CVM window to increase the Controller VM memory
size. For more information, see Increasing the Controller VM Memory Size
on page 108.
Convert Cluster Opens the Convert Cluster window to convert the cluster from ESXi to
AHV and then from AHV to ESXi. For more information, see In-Place
Hypervisor Conversion on page 367.
Expand Cluster Opens the Expand Cluster window to add new nodes to the cluster. For more information, see Expanding a Cluster on page 199.
Image Configuration [AHV only] Opens the Image Configuration window to import and manage image files that can be used to create VMs. For more information, see Configuring Images on page 293.
Licensing Opens the Licensing window to install or update the cluster license that
enables entitled Nutanix features. For more information, see Prism Central
Licensing on page 61.
Reboot [AHV] Opens the Request Reboot window to gracefully restart the nodes in the
cluster one after the other. You can select the nodes you want to restart.
For more information, see Rebooting an AHV or ESXI Node in a Nutanix
Cluster on page 110.
Remote Support Opens the Remote Support Services window, which enables (or
disables) Nutanix remote support access. For more information, see
Controlling Remote Connections on page 392.
Upgrade Software Opens the Upgrade Software window to upgrade the cluster to a newer
AOS version, or update other upgradeable components. For more
information, see Software and Firmware Upgrades on page 61.
vCenter Registration [ESXi only] Opens the vCenter Registration window to register (or unregister) the cluster with the vCenter instance. For more information, see Registering a Cluster to vCenter Server on page 365.
Setup
Connect to Citrix Cloud [AHV and XenServer only] Opens the Connect to Citrix Cloud window to connect to the Citrix Workspace Cloud. For more information, see Connect to Citrix Cloud on page 329.
Prism Central Registration Opens the Prism Central Registration window to add the cluster into
a central registration for multicluster connection and support. For more
information, see Registering or Unregistering a Cluster with Prism Central
in Prism Central Infrastructure Guide.
Pulse Opens the Pulse window to enable the sending of cluster information to
Nutanix customer support for analysis. For more information, see Pulse
Configuration on page 385.
Rack Configuration Opens the Rack Configuration page to configure the fault tolerant domain
for node, block, and rack awareness. For more information, see Rack Fault
Tolerance on page 17.
Network
HTTP Proxy Opens the HTTP Proxy window to configure an HTTP proxy to which
the Nutanix software can connect. For more information, see Configuring
HTTP Proxy on page 393.
Name Servers Opens the Name Servers window to configure name servers for
the cluster. For more information, see Configuring Name Servers on
page 352.
Network Configuration [AHV only] Opens the Network Configuration window to configure network connections for the cluster. For more information, see Network Configuration for VM Interfaces on page 165.
Network Switch Opens the Network Switch Configuration window to configure network
switch information needed for collecting network traffic statistics. For more
information, see Configuring Network Switch Information on page 170.
This option does not appear when running a hypervisor that does not
support this feature.
NTP Servers Opens the NTP Servers window to specify which NTP servers to access.
For more information, see Configuring NTP Servers on page 353.
Security
Cluster Lockdown Opens the Cluster Lockdown window, which allows you to delete (or add)
public authentication keys used for SSH access into the cluster. Removing
all public keys locks down the cluster from external access. For more
information, see Controlling Cluster Access.
Data at Rest Encryption [SEDs only] Opens the Data-at-Rest Encryption screen to configure key management for self encrypting drives (SEDs) and enable data encryption across the cluster. This menu option appears only when the data drives in the cluster are SEDs. For more information, see Data-at-Rest Encryption.
Filesystem Whitelists Opens the Filesystem Whitelist window to specify whitelist addresses.
For more information, see Configuring a Filesystem Whitelist on
page 351.
SSL Certificate Opens the SSL Certificates window to create a self-signed certificate. For
more information, see Certificate Management.
Users and Roles
Local User Management Opens the Local User Management window. This window lists current
users and allows you to add, update, and delete user accounts. For more
information, see User Management.
Role Mapping Opens the Role Mapping window to configure role mappings that apply
in the user authentication process. For more information, see Configuring
Authentication.
Alert Email Configuration Opens the Alert Email Configuration window, which enables (or
disables) the e-mailing of alerts. For more information, see Configuring
Alert Emails in Prism Element Alerts and Events Reference Guide.
Alert Policies Opens the Alert Policies window, which allows you to specify what events
should generate an alert and how frequently the system should check for
each event type. For more information, see Configuring Alert Policies in
Prism Element Alerts and Events Reference Guide.
SMTP Server Opens the SMTP Server Settings window to configure an SMTP server.
For more information, see Configuring an SMTP Server on page 354.
Data Resiliency
Configure Witness [ESXi and AHV only] Opens the Configure Witness window to add a witness VM for metro
availability and two-node clusters. For more information, see Witness
Option in Data Protection and Recovery with Prism Element Guide.
Degraded Node Settings Opens the Degraded Node Settings window to enable or disable
Degraded Node Detection. For more information, see Degraded Node on
page 30.
Manage VM High Availability [AHV only] Opens the Manage VM High Availability window to enable high
availability for guest VMs in the cluster. For more information, see Enabling
High Availability for the Cluster on page 303.
Redundancy State Opens the Redundancy Factor Readiness window to configure the
redundancy factor of the cluster. For more information, see Increasing the
Cluster Fault Tolerance Level on page 97.
Appearance
Language Settings Opens the Languages window, which allows you to select the language
of the Prism Element web console. For more information, see Localization
(L10n) on page 374.
Welcome Banner Opens the Edit Welcome Banner window to create a welcome banner
message that appears before users log in to the Prism Element web
console. For more information, see Configuring a Banner Page on
page 364.
Home Dashboard
The Home dashboard is the opening screen that appears after logging into the Prism Element web
console. It provides a dynamic summary view of the cluster status. To view the Home dashboard at any
time, select Home from the pull-down list on the far left of the main menu.
Menu Options
The Home dashboard does not include menu options other than the options available from the main menu. For more
information, see Main Menu on page 43.
Note: These fields reflect the fact that Prism Element retains alerts, events, and raw metric values for 90 days.
For information about how the statistics are derived, see Understanding Displayed Statistics on
page 55.
Name Description
Hypervisor Summary Displays the name and version number of the hypervisor.
Prism Central Displays whether you have registered the cluster to a Prism Central
instance or not. Click Register to register the cluster to a Prism Central
instance. If you have already registered, you can click Launch to launch
the Prism Central instance in a new tab of your browser.
Storage Summary Displays information about the physical storage space utilization (in GiB or TiB)
and resilient capacity of the cluster.
Placing the cursor anywhere on the horizontal axis displays a breakdown view of
the storage capacity usage.
The View Details link displays the resiliency status and storage information of
all the individual nodes in the cluster. For more information, see Storage Details
Page on page 123.
You can also configure a threshold warning for the resilient capacity utilization
in the cluster by clicking the gear icon. For more information, see Configuring a
Warning Threshold for Resilient Capacity on page 140.
VM Summary Displays the total number of VMs in the cluster broken down by on, off,
and suspended states.
Hardware Summary Displays the number of hosts and blocks in the cluster, plus one or more
Nutanix block model numbers.
Cluster-wide Controller IOPS Displays I/O operations per second (IOPS) in the cluster. The displayed
time period is a rolling interval that can vary from one to several hours
depending on activity moving from right to left. Placing the cursor
anywhere on the horizontal axis displays the value at that time. (These
display features also apply to the I/O bandwidth and I/O latency monitors.)
For more in-depth analysis, you can add this chart (and any other charts
on the page) to the analysis page by clicking the blue link in the upper right
of the chart. For more information, see Analysis Dashboard on page 333.
Cluster-wide Controller IO Bandwidth Displays I/O bandwidth used per second in the cluster. The value is
displayed in an appropriate metric (MBps, KBps, and so on) depending on
traffic volume.
Cluster-wide Controller Latency Displays the average I/O latency (in milliseconds) in the cluster.
Cluster CPU Usage Displays the current CPU utilization percentage along with the total
available capacity (in GHz).
Cluster Memory Usage Displays the current memory utilization percentage along with the total
available capacity (in GB).
Health Displays health status for the cluster as a whole (good, warning, critical)
and summary health status for the VMs, hosts, and disks in the cluster.
Clicking the VMs, hosts, or disks line displays detailed information
about that object in the Health page. For more information, see Health
Dashboard on page 254.
Data Resiliency Status Displays information indicating whether the cluster is currently protected
from potential data loss due to a component failure. Clicking anywhere
in this field displays the Data Resiliency Status dialog box. For more
information about the Data Resiliency Status, see the following Data
Resiliency Status section.
• Resiliency Status. Indicates whether the cluster can safely handle a node
failure, that is, whether a copy of all the data on any node exists elsewhere in
the cluster. If the status is not OK, the Data Resiliency Status window includes a
message about the problem.
Note: The resiliency status for single-node backup cluster is at the disk
level and not at the node level. For more information, see Single-Node
Replication Target Clusters in Data Protection and Recovery with
Prism Element Guide.
• Rebuild Progress. Indicates data rebuild status for offline nodes and disks
marked for removal. For more information about the Rebuild Progress
indicator, see the following Rebuild Progress section.
• Rebuild Capacity Available. Indicates whether there is sufficient unused
storage in the cluster to rebuild a data copy after a node is lost. If the status
is not Yes, the Data Resiliency Status window includes a message about the
problem.
Note: This option does not appear for single-node replication target clusters.
Critical Alerts Displays the most recent unresolved critical alert messages. Click a
message to open the Alert screen at that message. For more information,
see Alerts Dashboard in Prism Element Alerts and Events Reference Guide.
Warning Alerts Displays the most recent unresolved warning alert messages. Click a
message to open the Alert screen at that message.
Note: When a node goes down, extent group (egroup) fault tolerance status remains unchanged as the node is assumed
(initially) to be unavailable just temporarily. However, Stargate fault tolerance goes down by one until all data has been
migrated off that node. The egroup fault tolerance status goes down only when a physical copy of the egroup replica is
permanently bad. For more information on Stargate, see Cluster Components on page 12.
Note: Data Resiliency Status displays only a single generation data rebuild at any point in time.
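You can also inspect the current fault tolerance status from the command line of any Controller VM. The following nCLI invocations are a minimal sketch; the type values shown (node and rackable_unit, that is, block) are the commonly documented ones, but verify the available options against the nCLI reference for your AOS version.
nutanix@cvm$ ncli cluster get-domain-fault-tolerance-status type=node
nutanix@cvm$ ncli cluster get-domain-fault-tolerance-status type=rackable_unit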
Rebuild Phases
A data rebuild has the following phases.
• Rebuilding Data. Indicates the data rebuild progress up to 100%. This includes the time to rebuild the data and also
to validate that all data has actually been rebuilt.
• Data Rebuild Complete. Indicates the success of the data rebuild operation.
• Aborted. Indicates an aborted rebuild operation. The rebuild operation aborts when a node that went offline
comes back online.
Data Rebuilt
The following data rebuilds when a rebuild scenario triggers.
Note: The data rebuild percentage does not account for metadata rebuild. For example, if the metadata rebuild is at
10% and the data rebuild is 100%, the Data Resiliency Status widget displays the rebuild percentage as 100%. The
metadata rebuild progress currently appears on the Tasks page of the Prism Element web console.
Note: The progress percentage of the extent store data rebuild is capped at 95% until the rebuild completes for oplog and
NearSync LWS episodes.
m Main Menu
s Settings Menu
f or / Spotlight (search bar)
u User Menu
a Alerts menu
h Help menu (? menu)
p Recent tasks
o Overview View
d Diagram View
t Table View
Procedure
3. Hover over the (?) symbol to view the rebuild completion ETA for the current generation.
Figure 17: Overall Estimated Time and Progress Percentage (Data Rebuild)
4. Monitor the Rebuild Progress progress bar to view the rebuild completion percentage.
6. You can verify the rebuild event status through the Tasks option.
Note: The offline node progress appears only if the Stargate health is down.
Procedure
3. Hover over the (?) symbol to view the rebuild completion ETA for the current generation.
4. Monitor the Rebuild Progress progress bar to view the rebuild completion percentage.
5. Click anywhere on the Rebuild Progress field to view the following rebuild metrics for the individual entities.
Note: Most displayed statistics appear in 30-second intervals. The values in the tables represent the most recent data
point within the last 30 seconds. Prism Central collects the statistical data from each registered cluster, so the process of
collecting that data could result in a longer lag time for some statistics displayed in Prism Central.
1. Hypervisor: The hypervisor provides usage statistics only. These usage statistics are available only from the ESXi
hypervisor, not from Hyper-V or AHV. If the cluster runs the Hyper-V or AHV hypervisor, the controller provides
the usage statistics.
Note: Consider the usage statistics reported by the ESXi hypervisor in Prism Central and Prism Element only when
they match the usage statistics in vCenter.
2. Controller (Stargate): When hypervisor statistics are unavailable or inappropriate, the Controller VM (CVM)
provides the statistics from Stargate. For more information about Stargate, see Nutanix Bible. The Controller-
reported statistics might differ from those reported by the hypervisor for the following reasons:
• An NFS client might break up large I/O requests into smaller I/O units before issuing them to the NFS server,
thus increasing the number of operations reported by the controller.
• The hypervisor might read I/O operations from the cache in the hypervisor which are not counted by the
controller.
3. Disk (Stargate): Stargate can provide statistics from both the controller and the disk perspective. The controller
perspective includes both I/O operations served from memory and disk I/O operations, whereas the disk perspective
includes only disk I/O operations.
Note: The difference in statistics derived from the Hypervisor, Controller, and Disk sources applies only to storage-
related statistics such as IOPS, latency, and bandwidth.
The following field naming conventions are used in Prism Central to identify the information source:
• A field name with the word Controller indicates that the statistic is derived from the controller (for example,
Controller IOPS).
• A field name with the word Disk indicates that the statistic is derived from the disk (for example, Disk IOPS).
• A field name without the word Controller or Disk indicates that the statistic is derived from the hypervisor (for
example, IOPS).
For VM statistics in a mixed ESXi/AHV cluster, the statistics source depends on the type of hypervisor that hosts the
VM.
Note:
• The overview, VM, and storage statistics are derived from either the hypervisor or controller.
• Hardware statistics are derived from disk.
• Metrics in the analysis page are derived from any of the sources: hypervisor, controller, or disk, based
on the type of metric.
The statistics sources for each hypervisor are as follows:
• ESXi: overview, VM, and storage statistics come from the hypervisor; hardware statistics come from disk.
• Hyper-V: overview, VM, and storage statistics come from the controller; hardware statistics come from disk.
• AHV: overview, VM, and storage statistics come from the controller; hardware statistics come from disk.
• Citrix Hypervisor: overview, VM, and storage statistics come from the controller; hardware statistics come from disk.
• Mixed (ESXi + AHV): overview, VM, and storage statistics come from the hypervisor; hardware statistics come from disk.
Procedure
2. In the main menu, either click the cluster name at the far left or click the gear icon in the main menu and then
select Cluster Details in the Settings page.
The Cluster Details window appears. It displays read-only fields for the cluster UUID (universally unique
identifier), ID, incarnation ID, subnet, and encryption status values. It also contains editable fields for cluster
name, FQDN, virtual IP configuration, and iSCSI data services IP address. The cluster ID remains the same
for the life of the cluster, but the incarnation ID is reset (typically to the wall time) each time the cluster is re-
initialized.
3. In the Cluster Name field, enter (or update) a name for the cluster.
The default name is simply Unnamed. Providing a custom name is optional but recommended.
4. In the FQDN field, enter the fully qualified domain name (FQDN) for the cluster.
This requires an administrator to configure the domain name in the DNS server to resolve to all the external IPs
of the Nutanix Controller VMs.
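As a quick sanity check, you can confirm that the FQDN resolves to the Controller VM external IP addresses before saving it. The following is an illustrative lookup run from any workstation with standard DNS tools; cluster01.example.com is a placeholder FQDN, not a value from this guide.
$ nslookup cluster01.example.com
The response should list the external IP address of every Controller VM in the cluster.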
Caution: All the features that use the cluster virtual IP address will be impacted if you change that address. For
more information, see Virtual IP Address Impact on page 57.
Note: You are not required to configure either a virtual IP or a domain name, but configuring both provides
better addressing flexibility.
6. In the Virtual IPv6 field, enter an IPv6 address that will be used as a virtual IPv6 address for the cluster.
A Controller VM runs on each node and has its own IPv6 address, but this field sets a logical IPv6 address
that always connects to one of the active Controller VMs in the cluster (assuming at least one is active), which
removes the need to enter the address of a specific Controller VM. The virtual IPv6 address is normally set
during the initial cluster configuration, but you can update the address at any time through this field. For more
information, see Field Installation Guide.
7. In the ISCSI Data Services IP Address field, enter (or update) an IP address for use with Nutanix Volumes
and other data services applications.
Caution: For certain features, changing the external data services IP address can result in unavailable storage
or other issues. The features in question include Volumes, Calm, Leap, Karbon, Objects, and Files. For more
information, see KB 8216 and iSCSI Data Services IP Address Impact on page 58.
8. To disable the Recycle Bin, clear (unselect) Retain Deleted VMs for 24h.
When you disable the Recycle Bin, AOS automatically deletes any entities in the Recycle Bin after 24 hours.
Any entities you delete after disabling the Recycle Bin are marked for deletion as soon as possible and are not
stored in the storage container Recycle Bin folder. For more information, see Recycle Bin on page 151.
9. To enable the Recycle Bin, select Retain Deleted VMs for 24h.
If you enable the Recycle Bin and then delete a guest VM or volume group vDisk, it retains its contents (deleted
vDisks and configuration information) for up to 24 hours, unless the cluster free storage space reaches critical
thresholds.
10. When the field entries are correct, click the Save button to save the values and close the window.
• You can no longer manage a Nutanix cluster (Hyper-V) through the System Center Virtual Machine Manager
(SCVMM).
• Nutanix data protection service might lose connection if you configured the remote site using the virtual IP
address of that cluster.
• All the VMs running the NGT (Nutanix guest tools) instance will be affected. For information about NGT
reconfiguration, see Reconfiguring NGT on page 319.
• External machines mounting shares from Nutanix might fail if the virtual IP address is used for HA functionality
(as recommended).
• Should be in the same subnet as the cluster Controller VM IP eth0 network interface addresses
• Helps load balance storage requests
• Enables path optimization in the cluster, preventing bottlenecks
• Eliminates the need for configuring a multipathing service such as Microsoft multipath I/O (MPIO)
Nutanix recommends setting the iSCSI data services IP address once for each cluster, but you can change it if needed
through the Cluster Details window in the Prism Element web console. For more information, see Modifying
Cluster Details on page 56.
If you change the iSCSI data services IP address, you will need to reconfigure any clients to use the new IP address.
Caution: For certain features, changing the external data services IP address can result in unavailable storage or other
issues. The features in question include Volumes, Calm, Leap, Karbon, Objects, and Files. For more information, see
KB 8216.
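If you prefer the command line, the iSCSI data services IP address can also be set through nCLI. The following one-liner is a sketch based on commonly documented nCLI syntax; verify the exact parameter name against the nCLI reference for your AOS version, and replace 10.10.10.100 with an address from your environment.
nutanix@cvm$ ncli cluster edit-params external-data-services-ip-address=10.10.10.100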
Modifying UI Settings
Procedure
2. Click the gear icon in the main menu and then select UI Settings under Appearance in the Settings page.
The UI Settings dialog box appears.
Important:
• The Prism themes feature is currently in technical preview. You may encounter visual anomalies
in a few settings or views where the Prism themes are not applied (for example, Licensing Settings).
Nutanix recommends that you register a case on the Support portal to report any anomalies.
• You are required to save your selection and refresh any open Prism tabs after changing a background
theme.
• The Prism themes setting is not a universal feature between Prism Central and Prism Element web
console. The background themes need to be configured in Prism Central and Prism Element web
console respectively.
• Select Light Theme for a light background with a high-contrast view. This is the default Prism background
theme.
• Select Dark Theme for a dark background with a high-contrast view. A pop-up appears prompting you to click
Continue to proceed.
• Select Auto (OS defined) to apply background themes defined in the OS. A pop-up appears prompting you
to click Continue to proceed. For example, if you have defined Dark mode in the OS setting, the Prism UI
honors the setting and sets a dark background theme.
4. To disable the logon page background animation, clear the Enable animated background particles
checkbox (or select it to enable).
Clearing the Enable animated background particles checkbox in the Prism UI Settings dialog box disables
the creation or drawing of particles entirely. This action stops the drawing of the particles on the Prism Element
logon page.
Note: This setting is not persistent. In other words, if the Prism service restarts, this setting is lost and must be
disabled again.
Disabling the particles allows you to conserve critical CPU resources that are used in creating and maintaining the
particles.
Note: Disabling or enabling this setting in Prism Element web console does not propagate to Prism Central or vice
versa. The setting must be disabled in Prism Element web console and Prism Central UI separately.
You can disable the particle animation from the logon page by clicking Freeze space-time continuum! at the
right bottom of the logon page. This action stores a setting in the local browser to stop the animation. However,
this action does not stop creation or drawing of the particles itself.
Note: You can enable the particle animation by clicking Engage the warp drive! .
• Click the UI Settings dialog box and simultaneously press the Option key on a Mac system or the Alt key on a
Windows system. Options for customizing the theme, title text, and blurb text are displayed.
• Select the theme from the options displayed for Theme. You can change the HEX codes to create your own
custom gradient background color for the logon page.
• In the Title Text field, enter the text to create your custom title.
• In the Blurb Text field, enter the text to create your custom blurb text. This text is displayed below the
password field.
Figure 18: UI Settings Window for customizing the theme, title text, and blurb text
• Select the session timeout for the current user from the Session Timeout For Current User drop-down
list.
• Select the default session timeout for all users (except an administrator) from the DEFAULT SESSION
TIMEOUT FOR NON-ADMIN USERS drop-down list.
• Select the appropriate option from the SESSION TIMEOUT OVERRIDE FOR NON-ADMIN USERS
drop-down list to override the session timeout.
Note: The timeout interval for an administrator cannot be set for longer than 1 hour.
7. Select the Disable 2048 game checkbox to disable the 2048 game.
Procedure
2. The Hypervisor Summary widget on the top left side of the Home page displays the AHV version.
Procedure
2. Click the dropdown next to the username in the top right corner.
Note: Starting with LCM 2.7, Foundation upgrades are exclusively performed through LCM. The one-click upgrade
for Foundation has been disabled; the Foundation tab under Settings > Upgrade Software is not present anymore.
• For Prism Element clusters, run the NCC checks from the Health dashboard of the Prism Element
web console. You can select to run all the checks at once, the checks that have failed or displayed some
warning, or even specific checks of your choice. You can also log on to a Controller VM and run NCC
from the command line.
• If you are running checks by using the Prism Element web console, you cannot collect the logs at the
same time.
• You can also log on to a Controller VM and run NCC from the ncc command line.
Run NCC on Prism Central
For Prism Central clusters, log on to the Prism Central VM through a secure shell session (SSH) and run the
NCC checks using the ncc command line. You cannot run NCC from the Prism Central web console. For
more information, see Prism Central Infrastructure Guide.
a. In the Health dashboard, from the Actions drop-down menu, select Run Checks.
b. Select the checks that you want to run for the cluster.
• All checks. Select this option to run all the checks at once.
• Only Failed and Warning Checks. Select this option to run only the checks that failed or triggered a
warning during the health check runs.
• Specific Checks. Select this option and type the name of the check or checks that you want to run in the text
box that appears.
This field is auto-populated once you start typing the name of a check. The Added Checks box lists all the
checks that you have selected for this run.
c. Select the Send the cluster check report in the email option to receive the report after the cluster check.
To receive the email, ensure that you have configured email settings for alerts. For more information, see the
Prism Element Web Console Guide.
The status of the run (succeeded or aborted) is available in the Tasks dashboard. By default, all the event-triggered
checks are passed. Also, the Summary page of the Health dashboard updates with the status according to the health
check runs.
Procedure
Do these steps to run NCC by using the ncc command.
If the check reports a status other than INFO or PASS, resolve the reported issues before proceeding. If you are
unable to resolve the issues, contact Nutanix Support for assistance.
Procedure
• Show top-level help about a specific available health check category. For example, hypervisor_checks.
nutanix@cvm$ ncc health_checks hypervisor_checks
• Show all NCC flags to set your NCC configuration. Use these flags under the direction of Nutanix Support.
nutanix@cvm$ ncc -help
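For example, to run the complete set of health checks from a Controller VM command line, you can use the run_all plugin; runtime varies with cluster size.
nutanix@cvm$ ncc health_checks run_all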
• File
• File Server
• VMware ESXi or Microsoft Hyper-V hypervisors (use LCM to upgrade AHV)
Note: You must allow access to the upgrade URLs in your firewall, listed in Firewall Best Practices in the
Security Guide.
Nutanix recommends performing most upgrades through Life Cycle Manager (LCM) in the Prism Element web
console. If you choose to upgrade individual cluster software components through Settings > Upgrade Software
in the Prism Element web console, follow the recommended upgrade order as described in Recommended Upgrade
Order for Dark Site Method section in Life Cycle Manager Guide.
You can choose how to obtain the latest versions of AOS or other software that Nutanix makes available on the
Nutanix Support portal. You might need to download software from hypervisor vendor release web pages.
• File Server and File: On demand. The Prism Element web console regularly checks for new versions and notifies
you through Upgrade Software that a new version is available. Click Download to retrieve and install the software
package. The Nutanix Support Portal lists binaries, metadata, and checksums when available; you can upload the
binary file and metadata to upgrade clusters.
• Hypervisor:
• On demand (AHV): For most AOS versions, upgrade AHV by using LCM, linked from Upgrade Software in the
Prism Element web console. Older AOS versions support AHV upgrades that you apply through Settings >
Upgrade Software in the Prism Element web console.
• On-demand upgrade is not available for ESXi. You can check the Nutanix Support portal for metadata JSONs
for Nutanix-qualified ESXi versions. The Nutanix Support Portal lists binaries, metadata, and checksums when
available; you can upload the binary and metadata files to upgrade clusters.
• On-demand upgrade is not available for Microsoft Hyper-V. Other hypervisor metadata files and checksums
required to install hypervisor software are listed when available.
• Hypervisor vendor web site: Hypervisor vendors such as VMware provide upgrade packages. For example, you
can download the metadata file from Nutanix and the hypervisor binary package from VMware, and then upload
the files through Upgrade Software to upgrade cluster hosts. See Upgrading ESXi Hosts by Uploading Binary
and Metadata Files on page 66 or Upgrading Hyper-V Hosts by Uploading Binary and Metadata Files on
page 70 for more information.
AOS Upgrade
To upgrade your cluster to AOS 6.8 or later versions, you must use LCM. However, you must upgrade LCM at
your site to the latest version before you upgrade AOS. For more information, see LCM Updates in the Life Cycle
Manager Guide.
Note: Starting with LCM 3.0, 1-click upgrade for AOS is deprecated.
ESXi Upgrade
These topics describe how to upgrade your ESXi hypervisor host through the Prism Element web console
Upgrade Software feature (also known as 1-click upgrade). For information on how to install or upgrade
VMware vCenter server or other third-party software, see your vendor documentation.
AOS supports ESXi hypervisor upgrades that you can apply through the web console Upgrade Software feature
(also known as 1-click upgrade).
• To install or upgrade VMware vCenter Server or other third-party software, see your vendor documentation.
• Always consult the VMware web site for any vCenter and hypervisor installation dependencies. For example, a
hypervisor version might require that you upgrade vCenter first.
• If you have not enabled fully automated DRS in your environment and want to upgrade the ESXi host, you need
to upgrade the ESXi host manually. For LCM upgrades on the ESXi cluster, it is recommended to have a fully
automated DRS, so that VM migrations can be done automatically. For information on fully automated DRS, see
Set a Custom Automation Level for a Virtual Machine in the VMware vSphere Documentation. For information on
how to upgrade ESXi hosts manually, see ESXi Host Manual Upgrade in the vSphere Administration Guide.
• Disable Admission Control to upgrade ESXi on AOS; if it is enabled, the upgrade process fails. You can re-enable it
for normal cluster operation after the upgrade completes.
Nutanix Support for ESXi Upgrades
Nutanix qualifies specific VMware ESXi hypervisor updates and provides a related JSON metadata
upgrade file on the Nutanix Support Portal for one-click upgrade through the Prism Element web
console Software Upgrade feature.
Nutanix does not provide ESXi binary files, only related JSON metadata upgrade files. Obtain ESXi offline
bundles (not ISOs) from the VMware web site.
Nutanix supports the ability to patch upgrade ESXi hosts with versions that are greater than or released after
the Nutanix qualified version, but Nutanix might not have qualified those releases. For more information,
see the Nutanix hypervisor support statement in our Support FAQ. For updates that are made available
by VMware that do not have a Nutanix-provided JSON metadata upgrade file, obtain the offline bundle
and md5sum checksum available from VMware, then use the web console Software Upgrade feature to
upgrade ESXi.
Trusted Platform Module (TPM) 2.0 enabled hosts
Before upgrading a host with Trusted Platform Module (TPM) 2.0 enabled, in a cluster running ESXi
7.0U2 and later versions, Nutanix recommends that you back up the recovery key created when
encrypting the host with TPM. For information on how to generate and back up the recovery key,
see KB 81661 in the VMware documentation. Ensure you use this recovery key to restore the host
configuration encrypted by TPM 2.0 if it fails to start after upgrading the host in the cluster. For
information on how to restore an encrypted host, see KB 81446 in the VMware documentation. If you
don't have the recovery key, and if the host fails to start after an upgrade, contact Nutanix Support.
Mixing nodes with different processor (CPU) types in the same cluster
If you are mixing nodes with different processor (CPU) types in the same cluster, you must enable
enhanced vMotion compatibility (EVC) to allow vMotion/live migration of VMs during the hypervisor
upgrade. For example, if your cluster includes a node with a Haswell CPU and other nodes with
Broadwell CPUs, open vCenter, enable the VMware enhanced vMotion compatibility (EVC) setting,
and specifically enable EVC for Intel hosts.
CPU Level for Enhanced vMotion Compatibility (EVC)
AOS Controller VMs and Prism Central VMs require a minimum CPU micro-architecture version of Intel
Sandy Bridge. For AOS clusters with ESXi hosts, or when deploying Prism Central VMs on any ESXi
cluster: if you have set the vSphere cluster enhanced vMotion compatibility (EVC) level, the minimum level
must be L4 - Sandy Bridge.
Note: You might be unable to log in to vCenter Server as the /storage/seat partition for vCenter Server version
7.0 and later might become full due to a large number of SSH-related events. For information on the symptoms
and solutions to this issue, see KB 10830.
• If your cluster is running the ESXi hypervisor and is also managed by VMware vCenter, you must provide
vCenter administrator credentials and the vCenter IP address as an extra step before upgrading. Ensure that
ports 80 and 443 are open between your cluster and your vCenter instance to upgrade successfully; a basic
connectivity check is sketched after this list.
• If you have just registered your cluster in vCenter, do not perform any cluster upgrades (AOS, Controller
VM memory, hypervisor, and so on). Wait at least 1 hour before performing upgrades to allow cluster
settings to be updated. Also, do not register the cluster in vCenter and perform any upgrades at the same
time.
• Cluster mapped to two vCenters. Upgrading software through the web console (1-click upgrade) does not
support configurations where a cluster is mapped to two vCenters or where it includes host-affinity must
rules for VMs.
Ensure that enough cluster resources are available for live migration to occur and to allow hosts to enter
maintenance mode.
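The following is a minimal connectivity sketch, run from a Controller VM, to confirm that vCenter is reachable over HTTPS before starting the upgrade; vcenter.example.com is a placeholder for your vCenter address, and the check verifies only network reachability, not credentials.
nutanix@cvm$ curl -k -s -o /dev/null -w '%{http_code}\n' https://vcenter.example.com:443
Any HTTP status code in the output indicates that the port is reachable; a timeout or connection error points to a firewall or routing problem.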
Mixing Different Hypervisor Versions
For ESXi hosts, mixing different hypervisor versions in the same cluster is temporarily allowed for
deferring a hypervisor upgrade as part of an add-node/expand cluster operation, reimaging a node
as part of a break-fix procedure, planned migrations, and similar temporary operations.
• For more information, see VMware ESXi Hypervisor Upgrade Recommendations and Limitations.
• For more information, see General Hypervisor Upgrade Recommendations section in Life Cycle Manager
Guide.
• Follow the recommended upgrade order. For more information, see Recommended Upgrade Order for Dark
Site Method section in Life Cycle Manager Guide.
• For more information on how to install or upgrade VMware vCenter server or other third-party software, see your
vendor documentation.
Procedure
1. Before performing any upgrade procedure, make sure you are running the latest version of the Nutanix Cluster
Check (NCC) health checks and upgrade NCC if necessary.
2. Run NCC. For more information, see Running NCC (Prism Element) in Prism Element Web Console Guide.
a. The default view is All. From the drop-down menu, select Nutanix - VMware ESXi, which shows all
available JSON versions.
b. From the release drop-down menu, select the available ESXi version. For example, 7.0.0 u2a.
c. Click Download to download the Nutanix-qualified ESXi metadata .JSON file.
4. Log in to the Prism Element web console for any node in the cluster.
5. Click Settings from the drop-down menu of the Prism Element web console, and select Upgrade Software >
Hypervisor.
7. Click Choose File for the metadata JSON (obtained from Nutanix) and binary files (offline bundle zip file for
upgrades obtained from VMware), respectively, browse to the file locations, select the file, and click Upload
Now.
8. When the file upload is completed, click Upgrade > Upgrade Now, then click Yes to confirm.
[Optional] To run the pre-upgrade installation checks only on the Controller VM where you are logged on
without upgrading, click Upgrade > Pre-upgrade. These checks also run as part of the upgrade procedure.
Note: AOS can detect if you have uploaded software that is already installed or upgraded. In this case, the
Upgrade option is not displayed, because the software is already installed.
The Upgrade Software dialog box shows the progress of your selection, including status of pre-installation
checks and uploads, through the Progress Monitor.
10. On the LCM page, click Inventory > Perform Inventory to enable LCM to check, update and display the
inventory information.
For more information, see Performing Inventory with Life Cycle Manager section in the Life Cycle Manager
Guide.
• Do the following steps to download a non-Nutanix-qualified (patch) ESXi upgrade offline bundle from VMware,
then upgrade ESXi through Upgrade Software in the Prism Element web console.
• Typically, you perform this procedure to apply an ESXi patch version that Nutanix has not yet officially qualified.
Nutanix supports the ability to patch upgrade ESXi hosts with versions that are greater than or released after the
Nutanix-qualified version, but Nutanix might not have qualified those releases.
Procedure
1. From the VMware web site, download the offline bundle (for example, update-from-esxi6.0-6.0_update02.zip)
and copy the associated MD5 checksum. Ensure that this checksum is obtained from the VMware web site, not
manually generated from the bundle by you.
3. Log on to the Prism Element web console for any node in the cluster.
4. Click Settings from the drop-down menu of the Prism Element web console, and select Upgrade Software >
Hypervisor.
6. Click enter md5 checksum and copy the MD5 checksum into the Hypervisor MD5 Checksum field.
7. Scroll down and click Choose File for the binary file, browse to the offline bundle file location, select the file,
and click Upload Now.
8. When the file upload is completed, click Upgrade > Upgrade Now, then click Yes to confirm.
[Optional] To run the pre-upgrade installation checks only on the Controller VM where you are logged on without
upgrading, click Upgrade > Pre-upgrade. These checks also run as part of the upgrade procedure.
Note: AOS can detect if you have uploaded software that is already installed or upgraded. In this case, the
Upgrade option is not displayed, because the software is already installed.
The Upgrade Software dialog box shows the progress of your selection, including status of pre-installation
checks and uploads, through the Progress Monitor.
Hyper-V Upgrade
AOS supports hypervisor upgrades that you can apply through the Prism Element web console Upgrade Software
feature (also known as 1-click upgrade).
You can view the available upgrade options, start an upgrade, and monitor upgrade progress through the web console.
In the main menu, click the gear icon, and then select Upgrade Software in the Settings panel that appears, to see
the current status of your software versions (and start an upgrade if warranted).
This section provides the requirements, recommendations, and limitations to upgrade Hyper-V.
Recommendations
Nutanix recommends that you schedule a sufficiently long maintenance window to upgrade your Hyper-V clusters.
Budget sufficient time to upgrade: Depending on the number of VMs running on a node before the upgrade, a node
could take more than 1.5 hours to upgrade. For example, the total time to upgrade a Hyper-V cluster from Hyper-V
2016 to Hyper-V 2019 is approximately the time per node multiplied by the number of nodes. Upgrading can take
longer if you also need to upgrade your AOS version.
Requirements
Note:
• You can upgrade to Windows Server 2022 Hyper-V only from a Hyper-V 2019 cluster.
• Upgrade to Windows Server 2022 Hyper-V from an LACP-enabled Hyper-V 2019 cluster is not
supported unless the Hyper-V vSwitch bound/team is set to Switch Embedded Teaming (SET). For
more information, see KB 11220.
• For Windows Server 2022 Hyper-V, only NX Series G6 and later models are supported.
• For Windows Server 2022 Hyper-V, SET is the default teaming mode. LBFO teaming is not supported
on Windows Server 2022 Hyper-V.
• For Hyper-V 2019, if you do not choose LACP/LAG, SET is the default teaming mode. NX Series G5
and later models support Hyper-V 2019.
• For Hyper-V 2016, if you do not choose LACP/LAG, the teaming mode is Switch Independent LBFO
teaming.
• For Hyper-V (2016 and 2019), if you choose LACP/LAG, the teaming mode is Switch Dependent
LBFO teaming.
• Before upgrading, ensure that the Active Directory user account has the necessary permissions to add and remove
Active Directory objects.
• The platform must not be a light-compute platform.
• Before upgrading, disable or uninstall third-party antivirus or security filter drivers that modify Windows firewall
rules. Windows firewalls must accept inbound and outbound SSH traffic outside of the domain rules.
• Enable Kerberos when upgrading from Windows Server 2012 R2 to Windows Server 2016. For more information,
see Enabling Kerberos for Hyper-V.
Note: Kerberos is enabled by default when upgrading from Windows Server 2016 to Windows Server 2019.
• Enable virtual machine migration on the host. Upgrading reimages the hypervisor. Any custom or non-standard
hypervisor configurations could be lost after the upgrade is completed.
• If you are using System Center for Virtual Machine Management (SCVMM) 2012, upgrade to SCVMM 2016 first
before upgrading to Hyper-V 2016. Similarly, upgrade to SCVMM 2019 before upgrading to Hyper-V 2019 and
upgrade to SCVMM 2022 before upgrading to Windows Server 2022 Hyper-V.
• Upgrade using ISOs and a Nutanix JSON file. The Prism Element web console supports 1-click upgrade
(Upgrade Software dialog box) of Hyper-V 2016, 2019, or 2022 by using the metadata upgrade JSON file,
which is available on the Hypervisor Details page of the Nutanix Support portal, together with the Microsoft
Hyper-V ISO file.
• The Hyper-V upgrade JSON file, when used on clusters where Foundation 4.0 or later is installed, is available
for Nutanix NX series G4 and later, Dell EMC XC series, or Lenovo Converged HX series platforms. You can
upgrade hosts on these platforms to Hyper-V 2016 or 2019 (except for NX series G4) by using this JSON file.
Limitations
• When upgrading hosts to Hyper-V 2016, 2019, and later versions, the local administrator user name and password
are reset to the default administrator name Administrator and the password nutanix/4u. Any previous changes to the
administrator name or password are overwritten.
• Logical networks are not restored immediately after upgrade. If you configure logical switches, the
configuration is not retained and VMs could become unavailable.
• Any VMs created during the hypervisor upgrade (including as part of disaster recovery operations) and not
marked as HA (High Availability) experience unavailability.
• Disaster recovery: VMs with the Automatic Stop Action property set to Save are marked as CBR Not Capable if
they are upgraded to version 8.0 after upgrading the hypervisor. Change the value of Automatic Stop Action to
ShutDown or TurnOff when the VM is upgraded so that it is not marked as CBR Not Capable.
• Enabling Link Aggregation Control Protocol (LACP) for your cluster deployment is supported when upgrading
hypervisor hosts from Windows Server 2016 to 2019.
This procedure enables you to update Hyper-V through the Prism Element web console Upgrade
Software dialog box.
Procedure
2. Download the Microsoft Hyper-V Windows Server .ISO file from the Microsoft web site.
3. Log on to the Nutanix support portal and select the hypervisor metadata .JSON file from the Downloads menu.
4. Save the files to your local machine or media, such as a USB drive or other portable media.
6. Click the gear icon in the main menu of the web console, select Upgrade Software in the Settings page, and
then click the Hypervisor tab.
8. Click Choose File for the metadata and binary files, respectively, browse to the file locations, select the file,
and click Upload Now.
9. [Optional] When the file upload is completed, to run the pre-upgrade installation checks only on the Controller
VM where you are logged on without upgrading, click Upgrade > Pre-upgrade. These checks also run as part
of the upgrade procedure.
Note: AOS can detect if you have uploaded software that is already installed or upgraded. In this case, the
Upgrade option is not displayed, because the software is already installed.
The Upgrade Software dialog box shows the progress of your selection, including status of pre-installation
checks and uploads, through the Progress Monitor.
11. On the LCM page, click Inventory > Perform Inventory to enable LCM to check, update and display the
inventory information.
For more information, see Performing Inventory with Life Cycle Manager section in the Life Cycle Manager
Guide.
• To view the Task dashboard, log in to Prism Element web console, and select Home > Tasks.
• An icon also appears in the main menu when one or more tasks are active (running or completed within the last 48
hours). The icon appears blue when a task runs normally, yellow when it generates a warning, or red when it fails.
Clicking the icon displays a drop-down list of active tasks; clicking the View All Tasks button at the bottom of
that list displays a details screen with information about all tasks for this cluster.
Note: The drop-down list of active tasks may include a Clean Up button (top right). Clicking this button removes
from the list any tasks that are no longer running. However, this applies to the current session only. The full active
list (including the non-running tasks) appears when you open a new Prism Element web console session.
• When multiple tasks are active, you can filter the list by entering a name in the filter by field.
• You can also filter the list by clicking the Filters button and selecting the desired filter options.
Each task appears in the list for a minimum of one hour after completion, but how long that task remains in the list
depends on several factors. In general, the maximum duration is two weeks. However, tasks are rotated off the list
as new tasks arrive, so a task might disappear from the list much sooner when activity is high. In some cases a task
appears for longer than two weeks because the last task for each component is retained in the listing.
Task: Specifies which type of operation the task is performing. Values: any cluster operation you can perform in the
Prism Element web console.
Entity Affected: Displays the entity on which the task has been performed. If a link appears on the entity, click it to
display the details. Values: entity description.
Duration: Displays how long the task took to complete. Values: seconds, minutes, hours.
Note: As part of the AOS upgrade, the node where you have logged on and initiated the upgrade restarts. The Prism
Element web console appears unresponsive and might display the following message: Unable to reach server. Check for
internet connectivity. Wait a few minutes and log on to the Prism Element web console again.
You can see the progress of the download or upgrade process through one of the following.
• LCM Updates page. For more information, see Life Cycle Manager Guide.
• Upgrade Software dialog box in Prism Element web console.
• Alerts summary on the main menu
Procedure
3. Under the Upgrade Software dialog box progress bar, click Open.
The dialog box displays each Controller VM or disk as it progresses through the upgrade. For example:
4. Click Open to see progress, including download, installation, and upgrade completion progress bars.
5. Click Close at any time; you can reopen the Upgrade Software dialog box from the main menu.
If you click Close, the Alerts summary on the main menu also shows upgrade progress.
8. Hover your mouse pointer over the clock icon to see timestamps related to the upgrade task.
Procedure
» Open Upgrade Software from the Settings page in the Prism Element web console.
» Click the Alerts summary on the main menu.
Procedure
1. Log on to the Controller VM where the image has been downloaded by using a secure shell session (SSH).
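For example, from a workstation that can reach the cluster, the SSH session typically looks like the following; replace controller_vm_ip with the IP address of the Controller VM that holds the downloaded image.
$ ssh nutanix@controller_vm_ip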
Note:
• The Delayed cluster resiliency preference setting is deprecated from AOS 6.7 release. If you have the
Delayed cluster resiliency preference configured in your setup, the system changes it to Smart.
• Starting with AOS 6.8 release, the cluster resiliency preference is set as Smart by default. You can
set the cluster resiliency preference from both nCLI and Prism Central. For information on how to
configure the cluster resiliency preference from Prism Central, see Cluster Management in Prism
Central Infrastructure Guide.
The following describes the difference in cluster rebuild operation between the Smart and Immediate resiliency
preferences:
• Immediate: During an upgrade or failure, the rebuild process starts almost immediately (approximately 60 seconds)
when Stargate of the node being upgraded is down.
• Smart: During an upgrade, the system identifies the planned outage occurrence as a maintenance activity, and
restricts unnecessary component rebuilds that might lead to space bloats on the cluster. However, if the upgrade
extends beyond the expected time or if any failure occurs in the cluster, the system triggers an immediate rebuild to
restore cluster resiliency.
Procedure
• Smart - System differentiates between a failure and maintenance activity and manages the rebuild operation in
the following way:
• For planned outage or maintenance - Restricts unnecessary component rebuilds that lead to space bloats
on the cluster.
• For failures - Immediately starts the rebuild process.
• Immediate - Rebuild starts immediately
The cluster resiliency preference is set with the desired option. You can now upgrade your cluster and monitor the
cluster resiliency status using the Prism Element web console.
Multi-Cluster Management
Nutanix provides an option to monitor and manage multiple clusters through a single web console. The multi-cluster
view, known as Prism Central, is a centralized management tool that runs as a separate VM. From the Prism Element
web console, you can either register the cluster with an existing Prism Central instance or deploy a Prism Central
instance and register the cluster with it.
Prism Central provides the following features:
Note: For more information about Prism Central, refer to the Prism Central documentation. For information
about how the Prism Central documentation is organized, see Prism Central Documentation Portfolio.
For information on how to install Prism Central using 1-click method, see Installing Prism Central Using 1-Click
Method on page 77.
For information on how to register or unregister a cluster with Prism Central, see Registering or Unregistering a
Cluster with Prism Central on page 82.
For information on how to backup, restore, and migrate Prism Central, see Prism Central Backup, Restore, and
Migration in Prism Central Infrastructure Guide.
• Check the Requirements for Prism Central Deployment and Limitations of Prism Central Deployment.
• Check the port requirements between Prism Central and Prism Element. For more information, see Ports and
Protocols.
Table 18: Prism Central Installation Requirements - Connected Site and Dark Site
Procedure
1. Log in to the Prism Element web console as the user admin for your cluster.
» On the Home dashboard, click Register or create new from the Prism Central widget.
» Click the gear icon in the main menu and then select Prism Central Registration from the Settings
menu.
Note: On an ESXi cluster, you must first register a vCenter Server before you deploy a new Prism Central
instance.
5. (Applicable for Dark site only) In the PC Version step of Prism Central Deployment screen, click the
Upload Installation Binary link, select the Prism Central Metadata File (.json) and Prism Central Installation
Binary (.tar) files, and click Upload.
If there is already an image uploaded, the system displays the available versions.
6. (Applicable for Connected site only) In the PC version step of Prism Central Deployment screen, select
the required Prism Central version you intend to install.
Select Show compatible versions checkbox to view the list of PC versions compatible with the AOS
cluster.
Note: If the Prism Central version you want to install does not appear in the list, typically because the cluster
does not have Internet access (such as at a dark site), you can click the Upload Installation Binary link to
upload an image from your local media as described in Step 5 on page 79.
7. Click Next.
The Scale type step appears.
9. Click Next.
The Configuration step appears
a. Select (click the radio button for) the Prism Central VM size based on the number of guest VMs it must
manage across all the registered clusters:
For Prism Central configuration limits, see KB-8932 and Nutanix Configuration Maximums.
b. Network: Select an existing network for this Prism Central instance from the list.
If the target network is not listed, click the Create Network link to create a new network. For more
information, see Network Management.
c. Subnet Mask: Enter the subnet mask value.
d. Gateway: Enter the IP address of the gateway.
e. DNS Address(es): Enter the IP address for one or more DNS servers. Enter multiple addresses in a
comma separated list.
f. NTP Address(es): Enter the IP address for one or more NTP servers. Enter multiple addresses in a
comma separated list.
g. Select a Container: Select a container for the Prism Central VM from the drop-down list.
h. (Applicable for Scale-Out PC (on 3-VMs) only) Virtual IP: Enter a virtual IP address for the Prism Central
instance
Note: A virtual IP can be used as a single point of access for Prism Central. When you enter virtual IP, the
IP addresses for the three PC VMs are populated automatically. You can keep those addresses or change
them as desired.
a. Prism Central Service Domain Name: Enter a unique domain name for the Prism Central
Microservices. For more information, see Prism Central Service Domain Name restrictions in
Microservices Infrastructure Prerequisites and Considerations.
b. Internal Network: Select the network to use for Prism Central micro services communication from the
dropdown list.
The default selection Private Network [default] is a pre-configured private VxLAN network. Instead, if
you want the microservices infrastructure to use a different network, you can select a different network (managed or
unmanaged) from the list.
• Retain the check mark for the Use default settings (recommended) checkbox and click Validate.
(Go to step 12.d on page 81.)
Retaining the check mark for the Use default settings (recommended) checkbox allows Prism
Central to use the Private Network [default] with the default values for Subnet Mask, Gateway IP
Address and IP Address Range.
• Clear the Use default settings (recommended) checkbox, if you want Prism Central to use the
Private Network [default] setting with specific (non-default or custom) values for Subnet Mask,
Gateway IP Address and IP Address Range.
Configure the Internal Network for microservices infrastructure if you did one of the following:
• Selected a managed or unmanaged network other than Private Network [default] for Internal
Network.
If you selected a managed network, the values in the Subnet Mask, Gateway IP Address and IP
Address Range fields are already configured. If you selected an unmanaged network, you must enter
the necessary values in the respective fields.
• Cleared the Use default settings (recommended) checkbox with the Private Network [default]
selection for Internal Network.
Enter the values for the following parameters to configure Internal Network:
Parameter Description
d. After you check that the values entered for all the fields are correct, click Next.
14. In the Summary step, check the details entered for Prism Central installation, and click Deploy.
This begins the deployment process. On the Home page, the Prism Central widget displays Deploying until
the installation is completed, then it displays OK. Click OK to launch the Prism Central web console in your
browser.
What to do next
Register this cluster with Prism Central. The management features are not available until Prism Central
registers the cluster in which it is located. For more information about how to connect to an existing Prism
Central instance, see Register (Unregister) Cluster with Prism Central.
• If you have never logged into Prism Central as the admin user, you must log in and change the password before
you attempt to register a cluster with Prism Central. For more information, see Logging Into Prism Central.
• Do not enable client authentication in combination with ECDSA certificates on a registered cluster since it causes
interference when communicating with Prism Central.
• Ports 9440 and 80 need to be open in both directions between the Prism Central VM and all the Controller VMs
(and the cluster virtual IP address if configured) in each registered cluster. For the complete list of required ports,
see Ports and Protocols.
• If you have a proxy server configured and you want the cluster - Prism Central communication to go through the
proxy, open the relevant ports on the proxy. If you do not want the communication to go through the proxy, add
the Prism Central IP address to the proxy whitelist (allowlist) in the cluster settings. For more information about
configuring proxy, see Configuring HTTP Proxy on page 393. For the complete list of required ports, see
Ports and Protocols.
• A cluster can register with just one Prism Central instance at a time. To register with a different Prism Central
instance, first unregister the cluster.
Note: To perform this task, ensure that you log in to the Prism Element web console as an admin user.
Procedure
2. To run Nutanix Cluster Checks, go to the Health dashboard, and from the Actions dropdown menu, click Run
Checks.
» In the Home dashboard, click Register or create new from the Prism Central widget.
» Click the Settings icon, and navigate to Setup > Prism Central Registration in the Settings page.
a. Prism Central IP: Indicates the IP address of the Prism Central VM.
b. Port: The default port number is 9440. This is an optional field. For the complete list of required ports, see
Ports and Protocols.
c. Username: Indicates the user name for Prism Central. You can enter admin as the Prism Central user name.
d. Password: Indicates the password for the Prism Central user.
Note:
• The user credentials provided when registering a cluster (Prism Element) with Prism Central are used only once. After registration, modifying the admin password does not impact communication between Prism Central and the cluster.
• On small, large, and x-large Prism Central deployments, when you register a new cluster to Prism Central, Prism Central synchronizes the past 90 days of data (including multiple metrics) from the cluster. On x-small Prism Central deployments, Prism Central synchronizes the past 2 hours of data (including multiple metrics) from the cluster. To view the list of metrics that are synced during registration, see the file /home/nutanix/config/arithmos/data_sender/arithmos_history.json in the Controller VM. To view the list of metrics that are synced during a regular synchronization between Prism Central and the cluster, see the file /home/nutanix/config/arithmos/data_sender/arithmos.json in the Controller VM.
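For example, to view the registration-time metrics list from a Controller VM (a standard cat command against the file path mentioned above; the file contents are not reproduced here):
nutanix@cvm$ cat /home/nutanix/config/arithmos/data_sender/arithmos_history.json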
Caution: Unregistering a cluster from Prism Central is not a supported workflow. The unregistered cluster might not be allowed to re-register with a Prism Central instance.
You can use the Destroy Cluster feature of Prism Central, which implicitly unregisters the cluster. For more information, see Destroying a Cluster.
If you still want to proceed with unregistering the cluster, consider the following points:
• Unregistering a cluster through the Prism Element web console is no longer available. This option was removed to reduce the risk of accidentally unregistering a cluster. Several features, such as role-based access control, application management, micro-segmentation policies, and self-service capability, require Prism Central to run your clusters. If a cluster is unregistered from Prism Central, those features become unavailable and their configuration is erased. You can only use the following procedure from a Controller VM (CVM) to unregister a cluster.
• Perform the entire unregistration process, followed by the cleanup process.
• Do not remove the IPs of the cluster and Prism Central from the whitelists on both sides until the unregistration
process completes successfully.
If you have enabled additional applications or features in Prism Central, see the following table for recommendations
before you unregister a cluster. For more information, see KB 4944.
Nutanix Disaster Recovery (Leap): Before unregistering the cluster from Prism Central, you must remove any Nutanix Disaster Recovery (Leap) configuration involving virtual machines or volume groups for the cluster being unregistered. Otherwise, you cannot manage the snapshot creation or replication policies configured on the cluster. For more information, see KB 12749.
Flow Networking: You must disable Flow Networking on the cluster before unregistering it from Prism Central, using the steps mentioned in Unregistering a PE from the PC in the Flow Virtual Networking Guide. If Flow Networking is not disabled on the cluster before unregistering from Prism Central, attempts to enable Flow Networking on the same cluster do not work as expected. For more information, see KB 12449.
NuCalm/App Management: You must clean up Calm entities after unregistration. Contact Nutanix Support for assistance in cleaning up the Calm entities.
Prism Self Service configuration: Changes that have been made to the Prism Self Service configuration in Prism Central are lost after unregistration. Ensure that you follow the extra cleanup steps mentioned in KB 4944.
NKE Kubernetes: Do not unregister the cluster hosting an NKE Kubernetes cluster from Prism Central. Unregistering the cluster from Prism Central prevents the management of the NKE clusters.
Procedure
2. Run the cluster status command and verify that all services are in a healthy state.
3. Run the following command to unregister the cluster from Prism Central.
nutanix@cvm$ ncli multicluster remove-from-multicluster external-ip-address-or-svm-ips=pc-name-or-ip username=pc-username password=pc-password
Replace pc-name-or-ip with the Prism Central name or IP address, and pc-username and pc-password with the login credentials for your Prism Central administrator account. This step can take some time (though typically just a few seconds). If the password contains any special characters, enclose the password in single quotes.
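For example, with placeholder values only (a hypothetical Prism Central at 10.0.0.50 and a password that contains special characters, so it is enclosed in single quotes):
nutanix@cvm$ ncli multicluster remove-from-multicluster external-ip-address-or-svm-ips=10.0.0.50 username=admin password='P@ssw0rd!'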
6. Log in to the Prism Central VM through an SSH session (as the nutanix user) and perform the following steps:
Note: If you do not run the cleanup script, some artifacts continue to retain references to entities that are no longer managed by the cluster, and references lost during the unregistration process might not be recoverable.
b. Run the following command to retrieve the UUID for Prism Central:
[pcvm]$ ncli cluster info
Find the Cluster UUID value in the displayed information (see step 5), which in this case is the UUID for
Prism Central.
7. Go back to the CVM, and run the unregistration_cleanup.py script to complete the unregistration process on the
cluster.
nutanix@cvm$ python /home/nutanix/bin/unregistration_cleanup.py uuid
In this case the uuid is the Prism Central UUID obtained in step 6b.
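For example, if the Prism Central UUID retrieved in step 6b were 0005a1b2-1111-2222-3333-444455556666 (a placeholder value), the command would be:
nutanix@cvm$ python /home/nutanix/bin/unregistration_cleanup.py 0005a1b2-1111-2222-3333-444455556666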
If the cleanup script runs without errors, the cleanup is successful.
What to do next
Once the unregistration process is complete, you are not allowed to re-register the cluster with a new or re-created
Prism Central instance. If you need to register the cluster again, see KB-15679 or contact Nutanix Support.
If the cleanup process does not complete successfully, perform the following actions:
• Check the logs for any input errors in the script invocation. The logs for the unregistration cleanup script are available at ~/data/logs/unregistration_cleanup.log (an example command for viewing them follows this list).
• If any error occurred during script processing, run the cluster status command and confirm that the cluster services are up and running, then rerun the script and verify that it completes successfully.
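For example, to view the most recent entries in the cleanup log (tail is a standard Linux utility; the line count shown is arbitrary):
nutanix@cvm$ tail -n 50 ~/data/logs/unregistration_cleanup.log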
Note: The Nutanix cluster takes about 10 minutes to stabilize the Prism Central instance after the recovery task is shown as complete in the Prism Element web console. This is because Microservices Infrastructure rotates certificates after the restoration and restarts services such as IAM and Flow Virtual Networking.
Note: After restoring your Prism Central instance, ensure that you manually restore Files to the pre-disaster version.
For more information, see Manually Failing Over to a Remote Site in the Files Manager User Guide.
Procedure
1. Log in to any Prism Element web console registered to the Prism Central instance you want to restore.
The Prism Element dashboard shows the Prism Central widget, which contains Prism Central information (IP address and connection status). If this is a fresh Prism Element that you have not yet registered to a Prism Central instance, the widget does not appear; instead, you see the Register or Deploy Prism Central option.
2. Click the Settings icon and navigate to Data Resiliency > Restore Prism Central from the Settings menu.
For more information, see Settings Menu in the Prism Element Web Console Guide.
The Restore Prism Central page appears. This page provides you options to restore the Prism Central instance
from a Prism Element cluster or S3-compatible object storage.
3. Select from where you want to restore the Prism Central instance and click Restore Now.
Note: If you configured only continuous backup, the Restore Now option is available only when Prism Central is in a disconnected state (shown as Disconnected in the Prism Element web console).
The Restore Prism Central window appears. This window shows which service data will be recovered and which will not.
» To recover from a continuous backup, see Restore Prism Central-Field Information for Continuous
Backup on page 88.
» To recover from a point-in-time backup, see Restore Prism Central-Field Information for S3-based
Object Storage on page 88.
The Prism Central instance is restored in 60 to 90 minutes, depending on the configuration data of the hosts.
The restoration involves the deployment of Prism Central and the restoration of its configuration data from the
backup, which can take anywhere between 60 and 120 minutes, depending on the size of the data. The restored
Prism Central instance takes an additional 30 to 40 minutes to show all the guest VMs, disks, and metrics. Wait to
perform any actions on the restored Prism Central instance until all the recovery tasks are complete on the cluster.
You can see the restoration status and the related processes in the Tasks window.
What to do next
Consider the following after the Prism Central restoration.
Note: If the Prism Central restoration fails, contact Nutanix Support. Do not bring up the old Prism Central
instance.
Note: If you have both S3-compatible object storage and Nutanix on-prem clusters configured as backup targets, and you recovered the Prism Central instance through an on-prem cluster, you must reconfigure the S3 bucket credentials after the recovery through the Prism Central Backup and Restore widget in Settings > Prism Central Management.
• Reconfigure the proxy server. For information on how to configure the HTTP proxy through the Prism Central web console, see Configuring an HTTP Proxy in the Prism Central Admin Center Guide.
If the old Prism Central instance had a proxy server, reconfigure the proxy server so that the recovered Prism
Central instance maps to the correct IP address.
• Reconfigure the fully qualified domain name (FQDN).
If the old Prism Central instance had an FQDN, reconfigure the FQDN so that the recovered Prism Central
instance maps to the correct IP address.
• Recovery plan jobs (RPJ) in progress: Perform the steps in KB-10962.
If the old Prism Central instance had a failover task running (Nutanix Disaster Recovery) or a protection policy with guest VMs protected by a synchronous replication schedule, perform the steps in KB 10962 to ensure that all the failover tasks stuck in the running state are terminated and that a script is run for efficient recovery of the Prism Central instance.
Procedure
1. Select the cluster where you want to restore the Prism Central instance.
2. Verify the version of the Prism Central instance that will be restored on the selected cluster.
3. Select the network where you want to restore and install Prism Central instances.
The Subnet Mask, Gateway, and DNS Address(s) fields show the relevant information associated with the
selected network.
4. Enter details (name, IP address) for the Prism Central instance you want to restore and click Save.
Procedure
a. AWS Region Name. Enter an AWS region name. For more information, see the AWS documentation.
b. AWS Bucket Name. Enter a bucket name. For more information, see the AWS documentation.
c. (Optional for NC2 environments) Access Key. Enter your access key.
d. (Optional for NC2 environments) Secret Access Key. Enter your secret access key.
2. In the Source tab, select the Prism Central backup you want to restore, and click Next.
An S3 bucket can be used to back up multiple Prism Central instances. The instances are listed as PC_<IP_ADDRESS>, or as the FQDN if configured.
3. In the Restore Point tab, specify the date for the point-in-time backup, select one of the available restore points to restore the Prism Central instance, and click Next.
4. In the Installation tab, verify the cluster IP address where the Prism Central instance was hosted originally and the version of your instance, and click Next.
Note: If you are restoring the Prism Central instance from the same cluster (Prism Element web console) where the Prism Central instance was hosted, details such as vLAN, Subnet Mask, Gateway IP, DNS Address(es), NTP Address(es), Container, and Virtual IP are populated automatically. You must configure these details if you are restoring the Prism Central instance from a different availability zone or cluster.
6. In the Microservices tab, specify the Prism Central service domain name, internal network, and the required
input to enable Microservices Infrastructure (CMSP). Nutanix recommends using the default settings for Subnet
Mask, Gateway IP Address, and IP Address Range.
Note: Ensure that the IP address range does not conflict with the reserved DHCP IP address pool in your network.
7. In the Summary tab, review the information you configured in the previous steps, and click Restore.
Single-node Clusters
A traditional Nutanix cluster requires a minimum of three nodes, but Nutanix also offers the option of a two-node
cluster and single-node cluster for ROBO implementations and other situations that require a lower cost option.
Note: Single-node clusters are supported only on a limited set of hardware models. For more information, see
Prerequisites and Requirements on page 90.
• Single-node clusters are supported only on a select set of hardware models. For information about supported
models, see KB 5943.
• As a best practice, Nutanix recommends configuring a maximum of five guest VMs. Ensure that the cluster platform has the minimum physical resources to meet the compute and disk requirements, and that the CVM resources are consumed optimally.
Nutanix also recommends configuring backup for all five guest VMs running on a single-node cluster to protect the guest VMs in a node failure scenario.
Important: Failing to configure backup for guest VMs may result in data loss because data cannot be recovered from a single-node cluster. Data loss can occur when there is any metadata inconsistency or file system corruption.
• Nutanix recommends that you schedule an appropriate maintenance window (downtime) for your single-node clusters when you plan to perform any network configuration changes.
• Do not exceed a maximum of 1000 IOPS.
• Do not create a Prism Central instance (VM) in the cluster. There is no built-in resiliency for Prism Central in a
single-node cluster, which means that a problem with the node takes out Prism Central with limited options to
recover.
• LCM is supported for software updates, but not for firmware updates.
• Async DR is supported for a 6-hour RPO only.
Important: In the case of a Hybrid HCI node or an All-Flash HCI node with more than 2 SSDs, if a disk (SSD or HDD) fails, the cluster can fill up and write I/O can fail if there is insufficient rebuild capacity and the other disks do not have enough available space to restore the data to disk fault tolerance. For more information, see Resilient Capacity in the Nutanix Bible.
• All user VMs must be shut down before upgrading a single-node cluster. If any user VMs are still running, a warning box appears.
Important: A graceful shutdown of a guest VM may not power off the VM immediately. Based on the operating system in the guest VM and the workloads running on it, the VM can take some time to power off. Therefore, wait for some time and check whether the VM is in the powered-off state.
Read-Only Mode
Single-node clusters enter read-only mode when certain requirements are not met or a disk fails. An alert (A1195) is generated. For more information, see KB 8156. A yellow exclamation mark displayed in the web console indicates that the single-node cluster has entered read-only mode.
A single-node cluster can comprise either a Hybrid or an All-Flash HCI node.
The following table describes the disk failure scenarios in which a cluster enters into read-only mode:
HCI Node Type in Cluster: Hybrid with 2 SSDs, or All-Flash with 2 SSDs
Failure Type: 1 SSD fails
Impact on the single-node cluster: The cluster becomes read-only as there is only 1 SSD remaining to store the metadata.
Nutanix Recommendation: Ensure that there is more than 1 SSD available for the system to place metadata.

HCI Node Type in Cluster: All-Flash with more than 2 SSDs
Failure Type: 1 SSD fails
Impact on the single-node cluster: The metadata attempts to pick another SSD in the node and get the node out of the read-only state.
Nutanix Recommendation: None
Procedure
To override read-only mode from Prism Element web console, perform the following steps:
1. Click the yellow exclamation mark to re-enable write mode on the cluster.
The system prompts you to confirm if you want to override read-only mode.
Note:
• An alert A101057 - Cluster In Override Mode is generated by the system. For more information, see
KB-8132.
Two-Node Clusters
A traditional Nutanix cluster requires a minimum of three nodes, but Nutanix also offers the option of a two-node
cluster for ROBO implementations and other situations that require a lower cost yet high resiliency option. Unlike a
one-node cluster, a two-node cluster can still provide many of the resiliency features of a three-node cluster. This is
possible by adding an external Witness VM in a separate failure domain to the configuration. For more information,
see Configuring a Witness (Two-node Cluster) in the Data Protection and Recovery with Prism Element guide.
Nevertheless, there are some restrictions when employing a two-node cluster. The following points outline the features and limitations of a two-node cluster.
Note: Two-node clusters are supported only on a limited set of hardware models.
• A Witness VM for two-node clusters requires a minimum of 2 vCPUs, 6 GB of memory, and 25 GB of storage.
• The same Witness VM can be used for both Metro Availability and two-node clusters, and a single Witness
VM can support up to 50 instances (any combination of two-node clusters and Metro Availability protection
domains).
• You can bring up a two-node cluster without a Witness VM being present initially, but it is recommended that
the Witness VM be alive and running before starting the cluster.
• The Witness VM may reside on any supported hypervisor, and it can run on either Nutanix or non-Nutanix
hardware.
Note: Nutanix does not support the deployment of a Witness VM on the AWS and Azure cloud platforms.
• The Witness VM must reside in a separate failure domain, which means independent power and network
connections from any cluster it is managing. This separate platform can be deployed either on premise
(including as an option the Nutanix one-node replication target NX-1155) or off premise (centrally, typically
where Prism Central is hosted).
• During a node failure, the transition of a healthy node to single-node mode can take 30-60 seconds. The client
VMs can experience I/O timeouts during this period. Therefore, it is recommended that the SCSI timeout of
the client VM disks should be at least 60 seconds.
• The minimum recovery point objective (RPO) for a two-node (or one-node) cluster is six hours.
• Network latency between a two-node cluster and the Witness VM should not exceed 500 ms. (RPC timeouts
are triggered if the network latency is higher.) During a failure scenario, nodes keep trying to reach (ping) the
Witness VM until successful. Nodes ping the Witness VM every 60 seconds, and each Witness request has a
two-second timeout, so it can tolerate up to one second of link latency.
• Node removal is not supported, but node replacement is supported (where the node remains part of the cluster).
When replacing a node, the cluster remains active but will transition to standalone mode for a brief period of
time.
• All node maintenance workflows (software upgrades, Life Cycle Manager procedures, node and disk break-fix procedures, boot drive break-fix procedure) require that the cluster be registered with a Witness VM.
• Node or disk failures are handled as follows:
• Node failure: When a two-node cluster is operating normally, data is replicated across the nodes. If a node
fails, data is replicated across disks on the healthy node to maintain resiliency.
• HDD failure: An HDD failure is handled in the same way as in a three-node cluster, that is, data is rebuilt in the background. If there is insufficient rebuild capacity when an HDD fails, the cluster can fill up and I/O can fail.
• SSD failure: If a node loses an SSD, the remedial action is the same as for an HDD failure, that is, the cluster remains normal and data is rebuilt in the background for the disk. Occasionally, an SSD failure on one node can result in the other node in the cluster transitioning to a standalone state because the SSD failure caused a critical storage service to restart, which caused user I/O to stall. If the nodes have only a single SSD left and either node fails, the healthy node goes into a read-only state.
Node Failure
When a node goes down, the live node sends a leadership request to the Witness VM and goes into single-node mode. In this mode, RF2 is still retained at the disk level, meaning data is copied to two disks. (Normally, RF2 is maintained at the node level, meaning data is copied to each node.) If one of the two metadata SSDs fails while in single-node mode, the cluster (node) goes into read-only mode until a new SSD is picked for the metadata service. When the node that was down is back up and stable again, the system automatically returns to the previous state (RF2 at the node level). No user intervention is necessary during this transition.
Witness VM Failure
When the Witness goes down (or the network connections to both nodes and the Witness fail), an alert is generated
but the cluster is otherwise unaffected. When connection to the Witness is re-established, the Witness process
resumes automatically. No administrator intervention is required.
If the Witness VM goes down permanently (unrecoverable), follow the steps for configuring a new Witness through the Configure Witness option of the Prism Element web console. For more information, see Configuring a Witness (Two-node Cluster) in the Data Protection and Recovery with Prism Element guide.
• If you enable block awareness in addition to increasing the fault tolerance (FT) level, you need a minimum of 5
blocks.
• Increasing the cluster FT level might require at least 30 percent of your disk space.
• Each cluster must have a minimum of five nodes.
• Changes to Fault Tolerance cannot be reverted. You also cannot reduce the cluster fault tolerance level. For
example, you cannot change the redundancy factor from 3 to 2 or 2 to 1. If you attempt to reduce this setting, the
web console displays an error.
You can also enable Replication Factor 1 at the Settings > Redundancy State page. For more information about
this setting, see Replication Factor 1 Overview on page 101, Replication Factor 1 Recommendations and
Limitations on page 102, and Enabling Replication Factor 1 on page 104.
Procedure
1. To view the number of host failures that ZooKeeper can tolerate, open a web browser and log on to the Prism
Element web console.
3. Click the gear icon in the main menu and then select Redundancy State in the Settings page.
The Redundancy Factor Readiness page appears, displaying the existing Redundancy Factor of the cluster.
4. Select 3 in the Desired Redundancy Factor drop-down list to set the cluster to redundancy factor 3, then click
Save.
6. Set the replication factor to 3 for every storage container for which you want three copies of the data.
Increasing the redundancy factor for the cluster does not automatically increase the replication factor for any container. This gives you granular control in case you have a container where you do not want to incur the additional storage overhead of a third data copy.
Note: Be sure to change the replication factor to 3 for all desired containers, including any created by the system, such as the SelfServiceContainer or NutanixManagementShare.
To set the replication factor to 3 for the system storage container NutanixManagementShare, run
ncli> ctr edit name=NutanixManagementShare rf=3 force=true
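The same command form can be used for any other storage container by replacing the container name; the following is an illustrative sketch only (force=true is shown above for the system container and may not be required for user-created containers):
ncli> ctr edit name=container_name rf=3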
As with increasing the redundancy factor, increasing the replication factor can take some time to complete.
You can verify the status by again going to the Data Resiliency Status window in Prism. When the value
of Failures Tolerable for Extent Groups (which reflects the container replication level) increases to 2, it
confirms that the replication factor is now 3.
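If you prefer the command line, you can also list the storage containers and review their replication factor there (this assumes the standard ncli container commands; output is not reproduced here):
ncli> ctr list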
Recommendations
Nutanix recommends the following.
• For each cluster where you want to use RF1, create one RF1 storage container per node. The RF1-enabled storage
container can contain multiple RF1-enabled vDisks.
• For clusters running the ESXi hypervisor, make sure you mount the RF1 storage container on a single host only.
Mounting the RF1 storage container on all or multiple hosts is not supported.
• Do not enable RF1 storage containers on clusters where SSD tier capacity is less than 6 percent of total cluster
capacity.
Limitations
Mixed hypervisor cluster: For VMware ESXi and AHV mixed hypervisor clusters, RF1-enabled storage containers and VMs with attached RF1 vDisks are not supported on the AHV storage-only nodes. You can enable RF1 on the nodes running the VMware ESXi hypervisor.
In-place hypervisor conversion: Not supported when converting a cluster from AHV to ESXi.
Recycle bin: Not supported. When you delete an RF1 vDisk, the vDisk is marked for deletion as soon as possible and bypasses the recycle bin.
Storage container settings or modifications with RF1 enabled: The following settings or modifications are not supported:
• Increasing the replication factor
• Enabling capacity reservation (Reserve Capacity setting) or Reserve Rebuild Capacity
• Storage containers with Reserve Capacity or Reserve Rebuild Capacity already enabled
Maintenance mode operations (planned): The following operations automatically shut down and restart RF1 VMs:
• Controller VM shutdown
• Memory update
• Host boot disk replacement
• Nutanix Flow microsegmentation workflows
• Request Reboot (rolling reboot operation)
ESXi hypervisor 1-click upgrade, AOS upgrade, firmware upgrades for ESXi and AHV clusters, and AHV hypervisor upgrade: Automatic VM shutdown is not supported. You must manually shut down RF1 VMs before upgrading and restart them after the upgrade is completed.
Procedure
2. Click the gear icon in the main menu and then select Redundancy State in the Settings page.
The Redundancy Factor Readiness page appears, displaying the existing Redundancy Factor of the cluster and
the Enable Replication Factor 1 checkbox.
3. Select the Enable Replication Factor 1 checkbox, click Yes to confirm the Are you sure you want to enable
RF1? message, and then click Save.
What to do next
Create an RF1 storage container and assign it to a node. For more information, see Creating a Storage
Container with Replication Factor 1 on page 105.
• vSphere: The storage container is accessible as an NFS datastore. This requires access to the vSphere APIs.
Ensure that you have appropriate vSphere license to access the APIs.
• AHV: The storage container is accessible transparently.
Note: The NutanixManagementShare storage container is a built-in storage container for Nutanix clusters for use
with the Nutanix Files and Self-Service Portal (SSP) features. This storage container is used by Nutanix Files and SSP
for file storage, feature upgrades, and other feature operations. To ensure proper operation of these features, do not
delete this storage container. Nutanix also recommends that you do not delete this storage container even if you are not
using these features. The NutanixManagementShare storage container is not intended to be used as storage for vDisks,
including Nutanix Volumes.
Procedure
4. Storage Pool: Select a storage pool from the drop-down list. Max Capacity displays the amount of free space
available in the selected storage pool.
a. (vSphere only) NFS Datastore: Select the Mount on the following ESXi hosts button to mount the
storage container on a single host. From the list of host IP addresses, check the box for one host only.
For clusters running the ESXi hypervisor, make sure you mount the RF1 storage container on a single host
only. Mounting the RF1 storage container on all or multiple hosts is not supported.
5. Click the Advanced Settings button and configure the storage container.
6. Filesystem Whitelists: Enter the comma-separated IP address and netmask value (in the form ip_address/
netmask).
A whitelist, also known as an allowlist, is a set of addresses that are allowed access to this storage container.
Allowlists are used to allow appropriate traffic when unauthorized access from other sources is denied. Setting a
storage container level whitelist overrides any global whitelist for this storage container.
Setting an allowlist helps you provide access to the container via NFS. Some manual data migration workflows might require the allowlist to be configured temporarily, while some third-party backup vendors might require the allowlist to be configured permanently to access the container via NFS.
Caution:
• There is no user authentication for NFS access, and the IP address in the allowlist has full read or
write access to the data on the container.
• It is recommended to allow single IP addresses (with a netmask such as 255.255.255.255) instead of allowing subnets (with a netmask such as 255.255.255.0), as shown in the example after this caution.
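For example, to allow a single backup host at a hypothetical address 10.10.10.25 (a placeholder value), you might enter:
10.10.10.25/255.255.255.255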
Procedure
2. Click the gear icon in the main menu and then select Redundancy State in the Settings page.
The Redundancy Factor Readiness dialog box appears. This displays the existing Redundancy Factor of the
cluster and the Enable Replication Factor 1 checkbox.
• CVM vCPU = Number of physical host CPUs minus 2, limited to a maximum of 22 vCPUs.
Note: This cap applies up to Foundation version 5.3.x. From Foundation version 5.4 onwards, the 22-vCPU cap no longer applies.
• CVM memory = Available RAM minus 16 GiB, limited to a maximum of 256 GiB.
Note:
• The 256 GiB cap applies from Foundation version 5.3 onwards. In earlier Foundation versions, memory allocation is not capped at 256 GiB.
• With the cap applied, Foundation allocates the maximum possible vRAM to the CVM. For example, if the available RAM is 512 GiB, the system allocates at most 256 GiB rather than 512 - 16 = 496 GiB. However, if you change the system-allocated vRAM, it is overridden with the value you supply. A worked example of both formulas follows these notes.
Note: Foundation version 5.3 and later supports these limits with NUMA pinning or alignment. Earlier Foundation versions (5.0 or later) support these limits, but without NUMA pinning or alignment.
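As an illustrative worked example (placeholder hardware values, assuming imaging with Foundation 5.3.x): on a host with 48 physical CPU cores and 384 GiB of RAM, CVM vCPU = min(48 - 2, 22) = 22 vCPUs, and CVM memory = min(384 - 16, 256) = 256 GiB.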
Note:
• vCenter access details and credentials are required to update the CVM configuration. The details are
encrypted on the CVM and removed after the update is complete.
Perform the following procedure to increase the CVM memory in the Prism Element web console.
Procedure
2. Run NCC as described in Running Checks by Using Prism Element Web Console on page 259.
3. Click the gear icon in the main menu and then select Configure CVM in the Settings page.
Prism Element web console displays the Configure CVM dialog box.
4. If your cluster is running the ESXi hypervisor and you are managing your cluster by using VMware vCenter
Server, you must enter the vCenter authentication information to increase the CVM memory by performing the
following steps:
5. Select the Target CVM Memory Allocation memory size and click Apply.
Note:
• You can allocate a maximum CVM memory of 64 GB through the Prism Element web console. To
upgrade the CVM memory beyond 64 GB, contact Nutanix support.
• If a CVM was already allocated more memory than your choice, it remains at the allocated amount.
For example, if a Controller VM is at 20 GB memory and you select 28 GB, the Controller VM
memory is upgraded to 28 GB. However, if a Controller VM is at 48 GB memory and you select 28
GB, the Controller VM memory remains unchanged at 48 GB.
AOS applies memory to each CVM that is below the amount you choose. Resizing memory for a CVM requires a
restart. Only one CVM restarts at a time, thus preventing any production impact.
For information on snapshot frequency requirements, see Resource Requirements Supporting Snapshot
Frequency (Asynchronous, NearSync and Metro) information in the Data Protection and Recovery with Prism
Element Guide or On-Prem Hardware Resource Requirements information in the Nutanix Disaster Recovery
Guide.
Note: For metro availability, synchronous replication is supported with snapshots generated every 6 hours. Any node that supports 6-hour snapshot retention can support synchronous replication with a 0-second RPO. For more information, see Synchronous (0 seconds RPO) in the Nutanix Disaster Recovery Guide.
Note: Reboot host is a graceful restart workflow. When you perform a reboot operation for a host, the host is automatically put into maintenance mode and all the user VMs are migrated to another host, so there is no impact on the user workload. The reboot fails if the ESXi node is already in maintenance mode.
Procedure
2. Click Settings from the drop-down menu of the Prism Element web console and then select Reboot.
3. In the Request Reboot window, select the checkbox associated with the nodes you want to restart, and click
Reboot.
A progress bar is displayed that indicates the progress of the restart of each node.
• Nutanix clusters provide storage pool, storage container, volume group, and virtual disk components to organize storage. For more information, see Storage Components on page 111.
• Prism Element web console helps you monitor storage usage across the cluster. For more information, see Storage Dashboard on page 121.
• Prism Element web console helps you create storage containers and volume groups. For more information, see
Creating a Storage Container on page 135 and Creating a Volume Group on page 146.
• Prism Element web console helps you configure a threshold warning for storage capacity available in the cluster
after accounting for the storage space needed to rebuild and restore in case of any component failures. For more
information, see Configuring a Warning Threshold for Resilient Capacity on page 140.
• Prism Element web console helps you reserve storage capacity for rebuilding failed nodes, blocks or racks. For
more information, see Rebuild Capacity Reservation on page 141.
Storage Components
Storage in a Nutanix cluster is organized into the following components.
Storage Tiers
Each type of storage hardware (SSD-PCIe (NVMe), SSD (SATA SSD), and HDD) is placed in a storage tier. You can determine the tier breakdown for disks in a storage pool through the web console. For more information, see Storage Table View on page 128.
Storage Pools
Storage pools are groups of physical disks from one or more tiers. Storage pools provide physical separation because
a storage device can only be assigned to a single storage pool at a time. Nutanix recommends creating a single storage
pool for each cluster. This configuration allows the cluster to dynamically optimize capacity and performance.
Isolating disks into separate storage pools provides physical separation, but can create an imbalance of these resources
if the disks are not actively used. When you expand your cluster by adding new nodes, the new disks can also be
added to the existing storage pool. This scale-out architecture allows you to build a cluster that grows with your
needs.
When you create a cluster, a default predefined storage pool is available. This pool includes the total capacity of all
the disks on all the hosts in the cluster.
Storage Containers
A storage container is a subset of available storage within a storage pool. Storage containers are created within a
storage pool to hold virtual disks (vDisks) used by virtual machines. For more information, see Creating a Storage
Container. By default, storage is thinly provisioned, which means that the physical storage is allocated to the storage
container as needed when data is written, rather than allocating the predefined capacity when the storage container
is created. Storage efficiency features such as compression, deduplication, and erasure coding are enabled at the
container level.
When you create a Nutanix cluster, the following storage containers are created by default:
Volume Groups
A volume group is a collection of logically related virtual disks (or volumes). A volume group is attached to a VM either directly or using iSCSI. You can add vDisks to a volume group, attach them to one or more consumers, include them in disaster recovery policies, and perform other management tasks. You can also detach a volume group from one VM and attach it to another, possibly at a remote location to which the volume group is replicated.
You manage a volume group as a single unit. When a volume group is attached to a VM, the VM can access all of the
vDisks in the volume group. You can add, remove, and resize the vDisks in a volume group at any time.
Each volume group is identified by a UUID, a name, and an iSCSI target name. Each disk in the volume group also
has a UUID and a SCSI index that specifies ordering within the volume group. A volume group can be configured for
either exclusive or shared access.
You can back up, protect, restore, and migrate volume groups. You can include volume groups in protection domains
configured for asynchronous data replication (Async DR), either exclusively or with VMs. However, volume groups
cannot be included in a protection domain configured for metro availability, in a protected vStore, or in a consistency
group for which application consistent snapshots are enabled.
vDisks
A vDisk is created within a storage container or volume group to provide storage to the virtual machines. A vDisk
shows up as a SCSI device when it is mapped to a VM.
Note: Using a Nutanix storage container as a general-purpose NFS or SMB share is not recommended. For NFS and
SMB file service, use Nutanix Files.
NFS Datastores. The Distributed Storage Fabric (DSF) reduces unnecessary network chatter by localizing
the data path of guest VM traffic to its host. This boosts performance by eliminating the unnecessary hops between remote storage devices that are common with the pairing of iSCSI and VMFS. To enable vMotion
and related vSphere features (when using ESX as the hypervisor), each host in the cluster must mount an
NFS volume using the same datastore name. The Nutanix web console and nCLI both have a function to
create an NFS datastore on multiple hosts in a Nutanix cluster.
To correctly map the local ESX datastore to the Nutanix container:
SMB Library Share. The Nutanix SMB share implementation is the Hyper-V equivalent of an NFS
Datastore with feature and performance parity with a vSphere configuration. The registration of a Nutanix
storage container as an SMB Library share can be accomplished through a single PowerShell script, or
through the Virtual Machine Manager GUI.
Compression
You can enable compression on a storage container. Compression can save physical storage space and improve I/O bandwidth and memory usage, which may have a positive impact on overall system performance.
Note: If the metadata usage is high, compression is automatically disabled. If compression is automatically disabled, an
alert is generated.
Compression Ratios
You can view compression ratios and usage savings in the Prism Element web console.
• Cluster
In the Storage dashboard, under Capacity Optimization, click the After bar, and hover your mouse over
Compression.
• Storage container
In the Storage dashboard Table view, on the Storage Container tab, click the storage container for which you
want to view the compression ratio. You can see the compression ratio for the selected storage container under
Storage Container Details.
Deduplication
Deduplication reduces space usage by consolidating duplicate data blocks on Nutanix storage when you enable
capacity deduplication on a storage container.
Capacity Deduplication
Enable capacity deduplication of persistent data to reduce storage usage. Capacity deduplication means deduplication
performed on the data in hard disk storage (HDD).
Important: Nutanix recommends that you configure the Controller VMs with at least 32 GiB of RAM and 300 GiB
SSDs for the metadata disk for Capacity Deduplication.
Erasure Coding
Erasure coding increases the usable capacity on a cluster. Instead of replicating data, erasure coding uses parity information to rebuild data in the event of a disk failure. The capacity savings of erasure coding are in addition to deduplication and compression savings.
Important: Erasure coding is supported on clusters with a minimum of 4 nodes when using RF2 and a minimum of 6
nodes when using RF3.
If you have configured redundancy factor 2, two data copies are maintained. For example, consider a 6-node cluster that starts with 4 data blocks (a, b, c, d) configured with redundancy factor 2, so each data block also has a copy on another node.
When the data becomes cold, the erasure code engine performs an exclusive OR operation to compute parity “P” for
the data.
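As an illustration of the parity relationship (not a literal representation of the on-disk layout): P = a XOR b XOR c XOR d, so any single missing block can be recomputed from the remaining blocks, for example c = a XOR b XOR d XOR P.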
After parity is computed, the data block copies are removed and replaced with the parity information. Redundancy
through parity results in data reduction because the total data on the system is now a+b+c+d+P instead of 2 × (a+b+c
+d).
Note: Each block in the stripe is placed on a separate node to protect from a single node failure.
If the node that contains data block c fails, block c is rebuilt using the rest of the erasure coded stripe (a, b, d, and P).
Block c is then placed on a node that does not have any other members of this erasure coded stripe.
Note: When the cluster is configured for the redundancy factor 3, two parity blocks are maintained so that the erasure
coded data has the same resiliency as the replicated data. An erasure coded stripe with two parity blocks can handle the
failure of two nodes.
The space savings from the erasure coding depends on the cluster size, redundancy setting, and percentage of cold
data.
You can view the erasure coding usage savings from the storage container summary.
In a 6-node cluster configured with RF2, erasure coding uses a stripe size of 5, where 4 nodes hold data and 1 node holds parity. The sixth node in the cluster ensures that if a node fails, another node is available for rebuild. With a stripe of 4 data blocks to 1 parity block, the storage overhead is 1.25 times the data size, compared with 2 times for plain RF2.
Erasure coding stripe size adapts to the size of the cluster starting with the minimum 4 nodes with a maximum of 5
node stripe width. The following is an example displaying the various configurations of cluster size, possible stripe
widths, and approximate savings that might occur when erasure coding is enabled.
• A cluster must have at least four nodes/blocks/racks to enable erasure coding. The cluster can have all four flash
nodes or a combination of flash and hybrid nodes, or all hybrid nodes. If erasure coding is enabled on a storage
container, a minimum of four blocks for RF2 or six blocks for RF3 is required to maintain block awareness.
• The following table provides the information about the recommended minimum configuration for multiple node
removal operations:
Note: Ensure that you maintain a cluster size that is at least one node greater than the combined strip size (data + parity) to allow space to rebuild the strips if a node fails. For example, a (4,1) strip (4 data blocks + 1 parity block, 5 nodes in total) requires a cluster of at least 6 nodes.
• AOS dynamically calculates the erasure coding strip sizes depending on the number of nodes, blocks, and racks. The maximum supported and recommended strip sizes are (4,1) or (4,2) depending on the nodes, blocks, and racks. Nutanix recommends that you do not change the strip size. Greater strip sizes increase the space savings; however, they also increase the cost of rebuild.
• Erasure coding effectiveness (data reduction savings) might be reduced on workloads that have many overwrites outside of the erasure coding window. The default erasure coding window is seven days for write-cold data.
• Read performance is affected during rebuild and the amount depends on cluster strip size and read load on the
system.
• Same vDisk strips: Strips that are created using the data blocks from the same vDisk. Nutanix recommends that
you configure inline erasure coding type as same vDisk strips for workloads that do not require data locality.
• Cross vDisk strips: Strips that are created using the data blocks across multiple vDisks. Nutanix recommends that
you configure inline erasure coding type as cross vDisk strips for workloads that require data locality.
By default, same vDisk strips are created when you enable inline erasure coding.
Note: Inline erasure coding with same vDisk strips can be enabled for clusters running AOS version 5.18 or higher;
and with cross vDisk strips can be enabled for clusters running AOS version 6.6 or higher.
Caution:
• Nutanix recommends that you enable inline erasure coding for Object storage containers only. To enable
inline erasure coding for any other type of storage container, contact Nutanix Support.
• Erasure coding must be enabled on the container to enable inline erasure coding. For information about
how to enable erasure coding, see Creating a Storage Container.
Procedure
• To explicitly configure inline erasure coding type, run the following nCLI commands:
Replace container_name and storage_pool_id with the storage container name and storage pool ID on
which you want to enable erasure coding.
• To change an existing inline erasure coding type, run the following ncli commands:
• Reserve capacity for a storage container only if the storage pool consists of multiple storage containers. Unless there is a specific reason to have multiple storage containers, Nutanix recommends that you configure a single storage pool with a single storage container.
• Do not reserve more than 90% of the total space in the storage pool.
Storage Dashboard
The Storage dashboard displays dynamically updated information about the storage configuration in a
cluster. To view the Storage dashboard, select Storage from the pull-down list on the far left of the main
menu.
Menu Options
In addition to the main menu, the Storage screen includes a menu bar with the following options. For information on the main menu, see Main Menu on page 43.
• Click the Overview button on the left to display storage information in a summary view. For more
information, see Storage Overview View on page 121.
• Click the Diagram button to display a diagram of the storage pools and storage containers in the cluster nodes
from which you get detailed information by clicking on a storage pool or storage container of interest. For
more information, see Storage Diagram View on page 124.
• Click the Table button to display storage information in a tabular form. The Table screen is further divided into Volume Group, Storage Pool, and Storage Container views; click the Volume Group tab to view volume group information, the Storage Pool tab to view storage pool information, and the Storage Container tab to view storage container information. For more information, see Storage Table View on page 128.
• Action buttons. Click the Volume Group button on the right to add a volume group to the cluster in a storage
container. For more information, see Creating a Volume Group on page 146.
Click the Storage Container button to add a storage container to a storage pool. For more information, see
Creating a Storage Container on page 135.
• Page selector. In the Table view, entries are listed 10 per page. When there are more than 10 items in the list, left and right paging arrows appear on the right, along with the total count and the count for the current page.
• Export table content. In the Table view, you can export the table information to a file in either CSV or JSON
format by clicking the gear icon on the right and selecting either Export CSV or Export JSON from the pull-
down menu. (The browser must allow a dialog box for export to work.) Chrome, Internet Explorer, and Firefox
download the data into a file; Safari opens the data in the current window.
Note: For information about how the statistics are derived, see Understanding Displayed Statistics on page 55.
Name Description
Storage Summary Displays information about the physical storage space utilization (in GiB or TiB)
and resilient capacity of the cluster.
Placing the cursor anywhere on the horizontal axis displays a breakdown view of
the storage capacity usage.
The View Details link displays the resiliency status and storage information of
all the individual nodes in the cluster. For more information, see Storage Details
Page on page 123.
You can also configure a threshold warning for the resilient capacity utilization
in the cluster by clicking the gear icon. For more information, see Configuring a
Warning Threshold for Resilient Capacity on page 140.
Storage Containers Displays the number of storage containers, number of VMs, and number of
hosts on which the storage containers are mounted in the cluster.
Capacity Optimization Displays the data reduction ratio (compression, deduplication, and erasure coding), the data reduction savings (compression, deduplication, and erasure coding), and the savings gained by enabling the compression, deduplication, and erasure coding features.
Cluster-wide Controller IOPS Displays I/O operations per second (IOPS) in the cluster. The displayed time period is a rolling interval that can vary from one to several hours depending on activity, moving from right to left. Placing the cursor anywhere on the horizontal axis displays the value at that time. (These display features also apply to the I/O bandwidth and I/O latency monitors.)
Cluster-wide Controller IO B/W Displays I/O bandwidth used per second in the cluster. The value is displayed in an appropriate metric (MBps, KBps, and so on) depending on traffic volume. For more in-depth analysis, you can add this chart (and any other charts on the page) to the analysis page by clicking the blue link in the upper right of the chart. For more information, see Analysis Dashboard on page 333.
Cluster-wide Controller Latency Displays the average I/O latency (in milliseconds) in the cluster.
Cache Deduplication
Note: Cache deduplication is not supported in AOS 6.6 and later versions.
Storage Critical Alerts Displays the five most recent unresolved storage-specific critical alert
messages. Click a message to open the Alert screen at that message. You
can also open the Alert screen by clicking the view all alerts button at the
bottom of the list. For more information, see Alerts Dashboard in Prism
Element Alerts and Events Reference Guide.
Storage Warning Alerts Displays the five most recent unresolved storage-specific warning alert
messages. Click a message to open the Alert screen at that message. You
can also open the Alert screen by clicking the view all alerts button at the
bottom of the list.
Storage Events Displays the ten most recent storage-specific event messages. Click a
message to open the Event screen at that message. You can also open
the Event screen by clicking the view all events button at the bottom of
the list.
Storage Over-provisioning Displays the storage over-provisioning ratio (calculated based on the
provisioned storage and the available raw storage) in the cluster. Note that
the time taken for the Storage Over-provisioning Ratio widget to reflect
the changes made in the cluster varies according to the recent storage
operations/activities performed.
• The right section displays a diagrammatic representation of the number of nodes present in the cluster along with
the respective storage capacity used.
• The left section provides detailed storage information of the cluster as follows.
• The top section is a cascading diagram of the storage units. Initially, a cluster bar appears with storage information about the cluster (used, provisioned, and available storage). You can configure a threshold warning for the resilient capacity utilization in the cluster by clicking the gear icon to the right of the cluster bar. For information, see Configuring a Warning Threshold for Resilient Capacity on page 140. Clicking the cluster bar displays storage information about the physical usage of the storage pool (used, provisioned, and available storage) and a bar with colored blocks for each storage container in that storage pool. Clicking a storage container block displays storage information about that storage container and a bar for that storage container. You can edit a storage pool or storage container by clicking the pencil (edit) or X (delete) icon to the right of the name. Clicking the close link at the far right hides that storage pool or storage container bar from the display.
• The bottom Summary section provides additional information. It includes a details column on the left and a set
of tabs on the right. The details column and tab content varies depending on what has been selected.
Note: For information about how the statistics are derived, see Understanding Displayed Statistics on page 55.
• Click the Update link to update the settings for this storage container.
• Click the Delete link to delete this storage container configuration.
For more information about these actions, see Modifying a Storage Container on page 139.
• In the Summary: storage_container_name table, hover your mouse over the value to see additional details
of that parameter.
• Four tabs appear that display information about the selected storage container (see following sections for details
about each tab): Storage Container Usage, Storage Container Performance, Storage Container
Alerts, Storage Container Events.
Replication Factor: Displays the replication factor, which is the number of maintained data copies. The replication factor is specified when the storage container is created. Values: 1, 2, or 3.
Protection Domain: Displays the name of the protection domain if you have configured a protection domain for that storage container. Value: name.
Datastore: Displays the name of the datastore. Value: name.
Free Space (Physical): Displays the amount of unreserved free physical storage space available to the storage container. Value: xxx [GB|TB].
Used (Physical): Displays the amount of used physical storage space in the storage container, including space used by Snapshots and Recycle Bin. Value: xxx [GB|TB].
Max Capacity: Displays the total amount of storage capacity available to the storage container. Nutanix employs a thin provisioning model when allocating storage space, which means space is assigned to a storage container only when it is actually needed. The maximum capacity value reflects the total available storage regardless of how many storage containers are defined. Therefore, when you have two storage containers, it can appear that you have twice as much capacity because the maximum capacity for both storage containers shows the full amount. Maximum capacity is calculated as the total physical capacity in the storage pool, minus any reserved capacity, minus space used by other storage containers. Value: xxx [TB].
Reserved: Displays the amount of reserved physical storage space in the storage container. Value: xxx [GB|TB].
Effective Free: Displays the amount of usable free space after data reduction.
Filesystem Whitelists: Displays whether you have configured a filesystem whitelist for this storage container. Values: None, On, or Off.
• When a storage pool is selected, Summary: storage_pool_name appears below the diagram, and action links
appear on the right of this line:
• Click the Update link to update the settings for this storage pool.
For more information about this action, see Modifying a Storage Container on page 139.
• Four tabs appear that display information about the selected storage pool (see following sections for details about
each tab): Storage Pool Usage, Storage Pool Performance, Storage Pool Alerts, Storage Pool
Events.
Free (Physical): Displays the total amount of physical storage space that is available. Value: xxx [GB|TB].
Used (Physical): Displays the total amount of physical storage space used in the storage pool. Value: xxx [GB|TB].
Capacity (Physical): Displays the total physical storage space capacity in the storage pool. Value: xxx [TB].
Disk Count: Displays the number of disks in the storage pool. Value: number.
• The Storage Summary column (on the left) includes five fields:
• Free (Physical). Displays the amount of physical storage space still available in the cluster.
• Used (Physical). Displays the amount of physical storage space used currently in the cluster, including the
Recycle Bin.
• Capacity (Physical). Displays the total physical storage capacity in the cluster.
• Storage Pool(s). Displays the names of the storage pools. Clicking a name displays the detailed information
for that storage pool in this section.
• Storage Container(s). Displays the names of the storage containers. Clicking on a name displays the detail
information for that storage container in this section.
• Four tabs appear that display cluster-wide information (see following sections for details about each tab): Usage
Summary, Performance Summary, Storage Alerts, Storage Events.
Usage Tab
The Usage tab displays graphs of storage usage. The tab label varies depending on what is selected in the table:
• Usage Summary (no storage pool or storage container selected). Displays usage statistics across the cluster.
• Storage Container Usage (storage container selected). Displays usage statistics for the selected storage
container.
• Storage Pool Usage (storage pool selected). Displays usage statistics for the selected storage pool.
The Usage tab displays the following two graphs:
• Cluster-wide Usage Summary: Displays a rolling time interval usage monitor that moves from right to left; the interval can vary from one to several hours depending on activity. Placing the cursor anywhere on the horizontal axis displays the value at that time. For more in-depth analysis, you can add the monitor to the analysis page by clicking the blue link in the upper right of the graph. For more information, see Analysis Dashboard on page 333.
• Tier-wise Usage: Displays a pie chart divided into the percentage of storage space used by each disk tier in
the cluster, storage pool, or storage container. Disk tiers can include DAS-SATA, SSD-SATA, and SSD-PCIe
depending on the Nutanix model type.
Performance Tab
The Performance tab displays graphs of performance metrics. The tab label varies depending on what is selected in
the table:
• Performance Summary (no storage pool or storage container selected). Displays storage performance statistics
across the cluster.
• Storage Container Performance (storage container selected). Displays storage performance statistics for the
selected storage container.
• Storage Pool Performance (storage pool selected). Displays storage performance statistics for the selected
storage pool.
The graphs are rolling time interval performance monitors that move from right to left; the interval can vary from one to several hours depending on activity. Placing the cursor anywhere on the horizontal axis displays the value at that time.
• [Cluster-wide Hypervisor|Controller|Disk] IOPS: Displays I/O operations per second (IOPS) for the
cluster, selected storage container, or selected storage pool.
• [Cluster-wide Hypervisor|Controller|Disk] I/O Bandwidth: Displays I/O bandwidth used per second
(MBps or KBps) for physical disk requests in the cluster, selected storage container, or selected storage pool.
• [Cluster-wide Hypervisor|Controller|Disk] I/O Latency: Displays the average I/O latency (in milliseconds)
for physical disk requests in the cluster, selected storage container, or selected storage pool.
• The top section is a table. Each row represents a single volume group, storage pool, or storage container and
includes basic information about that entity. Click a column header to order the rows by that column value
(alphabetically or numerically as appropriate).
• The bottom Summary section provides additional information. It includes a details column on the left and a set
of tabs on the right. The details column and tab content vary depending on what has been selected.
Note: For more information about how the statistics are derived, see Understanding Displayed Statistics on
page 55.
• The table at the top of the screen displays information about all the configured volume groups, and the details
column (lower left) displays additional information when a volume group is selected in the table. The following
table describes the fields in the volume group table and detail column.
• Click the Update link to update the settings for this volume group.
• Click the Delete link to delete this volume group.
For more information about these actions, see Modifying or Deleting a Volume Group on page 147.
Five tabs appear that display information about the selected volume group (see following sections for details about
each tab): Performance Metrics, Virtual Disks, Volume Group Tasks, Volume Group Alerts, and Volume Group
Events.
• Controller IOPS. Displays the current I/O operations per second (IOPS) for the volume group. The controller IOPS, I/O bandwidth, and I/O latency fields record the I/O requests serviced by the Controller VM. The I/O can be served from memory, cache (SSD), or disk. (0 to unlimited)
• Controller IO B/W. Displays the I/O bandwidth used per second for Controller VM-serviced requests in this volume group. (xxx MBps or KBps)
• Controller IO Latency. Displays the average I/O latency for Controller VM-serviced requests in this volume group. (xxx ms)
• Number of Virtual Disks. Displays the number of virtual disks in the volume group. (0–256)
• Total Size. Displays the total size of the volume group. (xxx GB or TB)
• Initiators. Displays the iSCSI initiators to which the volume group is attached. (None or a list of names)
• Storage Container. Displays the name of the storage container to which the volume group belongs. (name)
• Target IQN Prefix. Displays the IQN prefix of the iSCSI target. (prefix)
• The table at the top of the screen displays information about all the configured storage containers, and the
details column (lower left) displays additional information when a storage container is selected in the table. The
following table describes the fields in the storage container table and detail column.
• When a storage container is selected, Summary: storage_container_name appears below the table, and
action links appear on the right of this line:
• Click the Update link to update the settings for this storage container.
• Click the Delete link to delete this storage container configuration.
For more information about these actions, see Modifying a Storage Container on page 139.
• Five tabs appear that display information about the selected storage container (see following sections for details
about each tab): Storage Container Breakdown, Storage Container Usage, Storage Container
Performance, Storage Container Alerts, Storage Container Events.
• Encrypted. Displays the encryption status of the storage container. (Yes or No)
• Replication Factor. Displays the replication factor, which is the number of maintained data copies. The replication factor is specified when the storage container is created. (1, 2, or 3)
• Compression. Displays whether compression is enabled. (On or Off)
• Erasure Coding. Displays whether erasure coding is enabled for the storage container. (On or Off)
• Free Capacity (Physical). Displays the amount of free physical storage space in the storage container. (xxx GB or TB)
• Used Capacity (Physical). Displays the amount of used physical storage space in the storage container, including space used by the Recycle Bin. (xxx GB or TB)
• Reserved Capacity (Physical). Displays the amount of reserved physical storage space in the storage container. (xxx GB or TB)
• Controller IO Latency. Displays the average I/O latency for Controller VM-serviced requests in this storage container. (xxx ms)
• Protection Domain. Displays the data protection domain used for the storage container. (DR name)
• VMs. Displays the number of VMs associated with the storage container. (xxx)
• Free Capacity (Physical). Displays the amount of unreserved free physical storage space available to the storage container. (xxx GB or TB)
• Used (Physical). Displays the amount of used physical storage space for the storage container. (xxx GB or TB)
• Snapshot. Displays the total storage capacity in the cluster consumed by snapshots (sum of both local and remote). (xxx GB or TB)
• Max Capacity. Displays the total amount of storage capacity available to the storage container (see the earlier Max Capacity description). (xxx TB)
• Reserved. Displays the total reserved storage capacity in the storage container. (xxx GB or TB)
• Replication Factor. Displays the replication factor, which is the number of maintained data copies. The replication factor is specified when the storage container is created. (1, 2, or 3)
• Compression. Displays whether compression is enabled. (On or Off)
• Effective Free. Displays the amount of usable free space after data reduction. (xxx GB or TB)
• Overall Efficiency. Displays the capacity optimization (as a ratio) that results from the combined effects of data reduction (deduplication, compression, and erasure coding), cloning, and thin provisioning. (ratio)
• Capacity Deduplication. Displays whether capacity-tier (on-disk) deduplication is enabled, that is, deduplication applied to data on hard disks (HDD). (On or Off)
• Filesystem Allowlists. Displays whether you have configured a filesystem allowlist for this storage container. (None, On, or Off)
• Erasure Coding. Displays whether erasure coding is enabled. (On or Off)
• The table at the top of the screen displays information about the storage pool, and the details column (lower left)
displays additional information when a storage pool is selected in the table. The following table describes the
fields in the storage pool table and detail column.
• When a storage pool is selected, Summary: storage_pool_name appears below the table, and action links
appear on the right of this line:
• Click the Update link to update the settings for this storage pool.
For more information about this action, see Modifying a Storage Pool on page 135.
• Four tabs appear that display information about the selected storage pool (see following sections for details about
each tab): Storage Pool Usage, Storage Pool Performance, Storage Pool Alerts, Storage Pool
Events.
• Free (Physical). Displays the total amount of physical storage space that is available. (xxx GB or TB)
• Used (Physical). Displays the total amount of physical storage space used in the storage pool. (xxx GB or TB)
• Disk IOPS. Displays the current I/O operations per second (IOPS) for the storage pool. The IOPS, I/O bandwidth, and I/O latency fields record the I/O requests serviced by physical disks across the storage pool. (0 to unlimited)
• Disk IO B/W. Displays the I/O bandwidth used per second for physical disk requests in this storage pool. (xxx MBps or KBps)
• Disk Avg IO Latency. Displays the average I/O latency for physical disk requests in this storage pool. (xxx ms)
• Free (Physical). Displays the total amount of physical storage space that is available. (xxx GB or TB)
• Used (Physical). Displays the total amount of physical storage space used in the storage pool. (xxx GB or TB)
• Capacity (Physical). Displays the total physical storage space capacity in the storage pool. (xxx TB)
• Disk Count. Displays the number of disks in the storage pool. (number)
• The Storage Summary column (on the left) includes five fields:
• Available (Physical). Displays the amount of physical storage space still available in the cluster.
• Used (Physical). Displays the amount of physical storage space used currently in the cluster.
• Capacity (Physical). Displays the total physical storage capacity in the cluster.
• Storage Pool. Displays the name of the storage pool in the cluster. Clicking the name displays detailed
information about the storage pool in this section.
• Storage Container(s). Displays the names of the storage containers. Clicking a name displays detailed
information about that storage container in this section.
• Four tabs appear that display cluster-wide information (see following sections for details about each tab): Usage
Summary, Performance Summary, Storage Alerts, Storage Events.
Breakdown Tab
The Breakdown tab is displayed in the Summary section only when a storage container is selected from the
storage container table.
The Breakdown tab displays the type (VM or VG), list of virtual disks, the amount of allocated space, and storage
space utilized by each in the selected storage container.
Usage Tab
The Usage tab displays graphs of storage usage. The tab label varies depending on what is selected in the table:
• Usage Summary (no storage pool, storage container, or volume group selected). Displays usage statistics across
the cluster.
• Storage Container Usage (storage container selected). Displays usage statistics for the selected storage
container.
• Storage Pool Usage (storage pool selected). Displays usage statistics for the selected storage pool.
• Volume Group Usage (volume group selected). Displays usage statistics for the selected volume group.
The Usage tab displays the following two graphs:
• Usage Summary: Displays a rolling time interval usage monitor that moves from right to left; the interval can vary from one to several hours depending on activity. Placing the cursor anywhere on the horizontal axis displays the value at that time. For more in-depth analysis, you can add the monitor to the analysis page by clicking the blue link in the upper right of the graph. For more information, see Analysis Dashboard on page 333.
• Tier-wise Usage: Displays a pie chart divided into the percentage of storage space used by each disk tier in
the cluster, storage pool, or storage container. Disk tiers can include DAS-SATA, SSD-SATA, and SSD-PCIe
depending on the Nutanix model type.
Performance Tab
The Performance tab displays graphs of performance metrics. The tab label varies depending on what is selected in
the table:
• Performance Summary (no storage pool, storage container, or volume group selected). Displays storage
performance statistics across the cluster.
• Storage Container Performance (storage container selected). Displays storage performance statistics for the
selected storage container.
• Storage Pool Performance (storage pool selected). Displays storage performance statistics for the selected
storage pool.
• Volume Group Performance (volume group selected). Displays storage performance statistics for the selected
volume group.
The graphs are rolling time interval performance monitors that move from right to left; the interval can vary from one to several hours depending on activity. Placing the cursor anywhere on the horizontal axis displays the value at that time. For more in-depth analysis, you can add a monitor to the analysis page by clicking the blue link in the upper right
of the graph. For more information, see Analysis Dashboard on page 333. The Performance tab includes the
following three graphs:
• [Cluster-wide Hypervisor|Controller|Disk] IOPS: Displays I/O operations per second (IOPS) for the
cluster, selected storage container, selected storage pool, or selected volume group.
• [Cluster-wide Hypervisor|Controller|Disk] I/O Bandwidth: Displays I/O bandwidth used per second
(MBps or KBps) for physical disk requests in the cluster, selected storage container, selected storage pool, or
selected volume group.
• [Cluster-wide Hypervisor|Controller|Disk] I/O Latency: Displays the average I/O latency (in milliseconds)
for physical disk requests in the cluster, selected storage container, selected storage pool, or selected volume
group.
Events Tab
The Events tab displays the unacknowledged event messages about storage pools, storage containers, or volume
groups in the same form as the Events page. For more information, see Events Summary View. Click the Include
Acknowledged button to also display acknowledged events.
Procedure
2. Select Storage from the pull-down main menu (upper left of screen) and then select the Table and Storage
Pool tabs.
3. To update the storage pool, select the target storage pool and then click the Update link.
The Update Storage Pool window appears displaying the current name and capacity of the storage pool.
4. Enter the new name of the storage pool in the Name field.
5. Click Save.
• Ensure that the cluster is configured to synchronize time with NTP servers. For more information, see
Configuring NTP Servers on page 353. Also, ensure that the time on the Controller VMs is synchronized
and current. If the time on the Controller VMs is ahead of the current time, cluster services might fail to start.
Files within the storage containers might also have timestamps ahead of the current time when viewed from the
hypervisor.
• A storage pool and one storage container are created automatically when the cluster is created.
• A storage container is not created if you have not configured the Controller VMs with enough memory. Controller
VM memory allocation requirements differ depending on the models and features that are being used. For more
information, see CVM Memory Configuration on page 107.
To create a replication factor 1 enabled storage container, see Creating a Storage Container with Replication
Factor 1 on page 105.
Procedure
Note: This entity has the following naming restrictions across various hypervisors.
AHV:
Make default on all Hyper-V hosts: Makes this storage container the default location for storing virtual machine configuration and virtual hard disk files on all the Hyper-V hosts.
Make default on particular Hyper-V hosts: Provides an option to select the hosts for which this storage container becomes the default location for storing virtual machine configuration and virtual hard disk files.
a. Replication Factor: Displays the number of data copies to maintain in the cluster.
Nutanix supports a replication factor of 2 or 3. Setting the replication factor to 3 adds an extra layer of data
protection at the cost of storing an additional copy of the data.
Nutanix also supports a replication factor of 1 if you first select the Enable Replication Factor 1 checkbox on
the Settings > Redundancy State page. For more information about this setting, see Replication Factor
1 Overview on page 101 and Replication Factor 1 Recommendations and Limitations on page 102. If
you do not select Enable Replication Factor 1, Nutanix supports a replication factor of 2 or 3.
Note: To change the storage container level setting to replication factor 3, the cluster must be set to fault
tolerance level 2. For more information, see Increasing the Cluster Fault Tolerance Level on page 97.
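For reference, you can check the current redundancy (fault tolerance) state, and raise it if needed, from the nCLI. The following is a minimal sketch only; confirm the exact command syntax for your AOS version against Increasing the Cluster Fault Tolerance Level on page 97.
nutanix@cvm$ ncli cluster get-redundancy-state
nutanix@cvm$ ncli cluster set-redundancy-state desired-redundancy-factor=3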
b. Reserved Capacity (Logical): To reserve storage space for this storage container, enter the amount (in
GiB) to reserve in this field.
Reserved Capacity (Physical) (Read-Only): Displays the amount of physical capacity that is reserved
based on the logical reserved capacity value.
You can reserve space for a storage container to ensure a minimum storage capacity is available. Reserving
space for a storage container means that space is no longer available to other storage containers even if the
reserved space is unused. For more information, see Capacity Reservation Best Practices on page 120.
c. Advertised Capacity (Logical): Sets the maximum storage space for this storage container. Enter the amount
(in GiB) in this field.
Advertised Capacity (Physical) (Read-Only): Displays the amount of physical capacity that is
advertised based on the logical advertised capacity value.
This sets an advertised capacity, which is the maximum storage size that the storage container can use. This
can be set to any value, but if a reserved capacity is configured, it must be set greater than or equal to the
reservation on the storage container. The hypervisor ensures that the storage container storage does not go
beyond the advertised capacity. When a storage container reaches a threshold percentage of the actual storage
pool size, an alert is issued.
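For example (assuming replication factor 2 and hypothetical numbers): an Advertised Capacity (Logical) value of 500 GiB corresponds to roughly 1 TiB of advertised physical capacity, because each logical byte is stored twice; with replication factor 3, the same 500 GiB logical value corresponds to roughly 1.5 TiB of physical capacity.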
d. Compression: Inline compression is enabled by default with the Delay (In Minutes) field set to 0. A
value of 0 means that data is compressed immediately as it is written. The delay time between write and
compression is configurable. For post-process compression, where data is compressed after it is written,
Nutanix recommends a delay of 60 minutes; that is, compression occurs 60 minutes after the initial write
operation.
All data in the storage container is compressed when you select Compression. For information about using
compression, see Compression on page 113.
e. Deduplication: Select the CAPACITY check box to perform post-process deduplication of persistent data.
Nutanix recommends this option primarily for full clone, persistent desktops, and physical-to-virtual migration
use cases that need storage capacity savings (not just performance savings from deduplication).
Caution:
• User authentication is not available for NFS access, and the IP address in the allowlist has full
read or write access to the data on the container.
• Nutanix recommends allowing single IP addresses (with a netmask such as 255.255.255.255)
instead of allowing subnets (with a netmask such as 255.255.255.0).
• The NutanixManagementShare container is an internal storage container for Nutanix products and services. To
ensure seamless operations, external users should avoid accessing, modifying, or deleting this storage container.
The NutanixManagementShare storage container is not intended to store user data and vDisks, including Nutanix
Volumes.
• The web console does not allow you to rename a storage container in an AHV cluster when modifying container
details through the Update Storage Container dialog box. You cannot rename a storage container if it contains
vdisks.
Procedure
2. Select Storage from the pull-down main menu (upper left of screen) and then select the Table and Storage
Container tabs.
Note:
• For ESXi clusters, if you make changes to any of the parameters that impact the storage container
size (such as Advertised Capacity), the information does not get refreshed in the vCenter ESXi
nodes by default. You must right-click the container in vCenter and select Refresh Capacity
Information to refresh the capacity.
• If the compression policy is changed from compressed to uncompressed (or vice versa), the existing
compressed (uncompressed) data in the storage container will be uncompressed (compressed) as a
background process when the next data scan detects the data that needs this change.
• The Prism Element web console does not provide an option to change the container replication
factor. That can be done only through the nCLI. For more information, see Increasing the Cluster
Fault Tolerance Level on page 97.
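As an illustration of the nCLI change mentioned in the note, the following sketch raises the replication factor of a storage container named my-container (a hypothetical name); verify the syntax against the referenced procedure before running it, because the cluster must already be at fault tolerance level 2.
nutanix@cvm$ ncli ctr edit name=my-container rf=3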
4. To delete the storage container, select the target storage container, and then click the Delete link.
Procedure
2. Select Storage from the pull-down main menu and then select Overview to display storage information in a
summary view.
• Use default. Select this option to use the default 75% warning threshold limit for resilient capacity.
• Set manually. Select this option and enter your custom warning threshold limit for resilient capacity in the
cluster.
5. Click Save.
A resilient capacity marker is set according to the warning threshold limit on the storage capacity chart. Hover
over the chart to see more details.
• Nutanix supports rebuild capacity reservation on clusters with three or more nodes. Nutanix does not support
rebuild capacity reservation on single-node and two-node clusters.
• Before configuring a rebuild capacity reservation, the Prism administrator should confirm whether a manual
container reservation was previously configured on the cluster for this purpose.
• The cluster must have a single storage pool that is the default storage pool. Do not create storage pools in the
cluster when you have reserved rebuild capacity.
• For more information about storage pools, see Storage Components on page 111.
• Total capacity used or consumed must be less than the resilient capacity.
If the used capacity is close to the resilient capacity and it increases due to large write operations and/or internal
background jobs and migration tasks, then the used capacity can overshoot the resilient capacity. If rebuild
capacity is already reserved on the cluster, the cluster stops accepting write requests.
Total Capacity in the cluster is the sum of resilient capacity and reserved rebuild capacity. The used capacity,
expressed as a percentage of the resilient capacity, determines the color of the bar that displays the used capacity
and also generates alerts. For example, if usage is 95 percent or more, the cluster generates a critical alert after a
specified number of NCC check iterations. When you have reserved rebuild capacity and the usage is 95 percent
or more (of the resilient capacity), the cluster stops accepting write requests.
Caution: Enabling Rebuild Capacity Reservation on a cluster with current usage (Total Used Space) close to the
Resilient Capacity threshold might result in a VM outage. To avoid the issue, before enabling reservation ensure
that current usage (Total Used Space) is not more than 90% of the calculated Resilient Capacity threshold.
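As a simple illustration (hypothetical numbers): in a cluster with 100 TiB of total capacity that reserves 20 TiB of rebuild capacity, the resilient capacity is 100 - 20 = 80 TiB. The 95 percent alert level then corresponds to 76 TiB of used capacity, and the 90 percent guideline in the caution above corresponds to 72 TiB.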
Note: Auto-disabling of the Recycle Bin depends on full scans by Curator. Therefore, the excess usage condition of the
Recycle Bin could continue for some time between two full scans before the Recycle Bin is disabled.
• Changes in the cluster storage capacity impact the Resilient, Rebuild, Used, and Free capacities in the cluster.
• When rebuild capacity is reserved and you try to remove a host or a disk, the cluster calculates the possible
used capacity and resilient capacity after the removal. A confirmation dialog box appears displaying a message
that the Resilient capacity will be reduced to the possible reduced capacity after the removal. If
the used capacity is greater than the possible resilient capacity, then the host or disk removal fails.
Thus when rebuild capacity is reserved, Prism allows a node or disk removal only if the cluster's data can
rebuild after such removal and at the same time preserve the configured domain's fault tolerance.
• When rebuild capacity reservation is enabled, the data consumption of failed nodes is not accounted in the
total usage. When rebuild capacity reservation is not enabled, the data consumption of failed nodes is added to
the total usage thereby inflating it.
• When rebuild capacity reservation is enabled, Oplog usage is accounted in the total usage. The oplog is a fast
write back cache to absorb random writes. Its consumption will also be accounted towards total usage when
rebuild capacity reservation is enabled.
• You cannot reserve rebuild capacity if you have enabled redundancy factor (RF) 1. You cannot enable RF 1 if you
have reserved rebuild capacity.
When you reserve rebuild capacity in a cluster, do not enable RF1 using CLI or API.
• If you want to change the failure domain, for example, from Node to Block you must disable Rebuild Capacity
Reservation. You can enable Rebuild Capacity Reservation after changing the failure domain. You cannot change
the failure domain when Rebuild Capacity Reservation is enabled.
Do not modify the failure domain using APIs when the Rebuild Capacity Reservation is enabled.
• Ensure that features such as erasure coding, deduplication, and compression are not disabled after you enable
rebuild capacity reservation. Disabling such features can lead to a drastic increase in used capacity beyond 95
percent and put the cluster in read-only mode if the usage before turning off these features is close to the
threshold.
• The rebuild process is completed only when fault tolerance is not exceeded. For example, if failure domain is
Node and fault tolerance is 2, the rebuild process that starts after one node failure completes successfully. The
rebuild process that starts after a second node failure (concurrent) also completes successfully. However, at this
stage, if a third node fails, the rebuild process starts but does not complete.
• The rebuild process does not complete if data replicas are unavailable due to reasons such as link failures or disk
failures.
Note:
Usage due to internal background jobs and migration tasks can increase the used capacity even when there
are no write operations running. Heavy write operations on small containers can also drastically increase the
used capacity.
For more information about Resilient Capacity and Warning Threshold configuration, see Configuring a Warning
Threshold for Resilient Capacity on page 140.
Click the View Details link to open the Storage Details page, after selecting Physical in the drop-down on the right.
The Storage Details page displays a banner informing you about rebuild capacity reservation. It also provides the
Enable Now link, which opens the Rebuild Capacity Reservation page where you can reserve rebuild capacity.
To enable rebuild capacity reservation, see Reserving Rebuild Capacity on page 144.
When you enable rebuild capacity reservation, the cluster calculates the required rebuild capacity based on parameters
such as fault tolerance, failure domain, and total storage capacity in the cluster. The reserved rebuild capacity is
displayed on the Rebuild Capacity Reservation page.
After you reserve rebuild capacity, the Overview tab of Storage page does not display resilient capacity Warning
Threshold in Storage Summary.
After you reserve rebuild capacity, the Storage Details page displays a banner informing you that the cluster has
reserved rebuild capacity. The details show an additional item - Rebuild Capacity with the capacity reserved in TiB.
The capacity numbers change in the Storage Details page after the reservation.
Cluster Resilient Capacity Calculation with Homogeneous Capacity Entities in Failure Domains
For homogeneous capacity entities in a Failure Domain, the cluster resilient capacity is calculated as described in the
following table:
(Table columns: Cluster Redundancy State, Replication Factor, Failure Domain, Amount of space reserved (Rebuild Capacity).)
Cluster Resilient Capacity Calculation with non-homogeneous Capacity Entities in Failure domains
For non-homogeneous capacity entities in a Failure Domain, the cluster resilient capacity is calculated as the
maximum available capacity at the lowest supported Failure Domain that can meet the required Replication Factor
(RF) after Fault Tolerance (FT) failures at the configured Failure Domain.
The following table provides the examples for resilient capacity calculation when non-homogeneous capacity entities
exist in Failure Domain:
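As a hedged example of the homogeneous case (hypothetical numbers): in a four-node cluster with 20 TiB of physical capacity per node, fault tolerance 1 (replication factor 2), and a Node failure domain, one node's worth of capacity (20 TiB) is reserved for rebuild, so the resilient capacity is 4 x 20 - 20 = 60 TiB. With fault tolerance 2 (replication factor 3) on a five-node cluster of the same nodes, two nodes' worth (40 TiB) is reserved, leaving 5 x 20 - 40 = 60 TiB.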
Procedure
Caution: Multiple VMs writing to a vDisk that belongs to a shared volume group without additional software to
manage the access can lead to data corruption. Use shared access when you are configuring VMs for use with
cluster-aware software, such as:
• Oracle RAC
Procedure
4. The iSCSI Target Name Prefix is auto-filled with the volume group Name. You can accept this prefix or enter
your own target name prefix for the volume group. This entity has the same naming restrictions as Name.
a. Select the Enable external client access checkbox if you are allowlisting clients that are external to or not
residing in this cluster.
If you select this checkbox, it remains selected the next time you create a volume group.
b. If you are using one-way CHAP security, select the CHAP Authentication checkbox and type a 12-
character to 16-character password (also known as a CHAP secret) in the Target Password field.
Initiators must use the same password to authenticate to the AOS cluster.
c. Click Add New Client to configure the iSCSI initiators, and then enter the client Initiator iSCSI Qualified
Name (IQN) in the Client IQN/IP Address field to create the allowlist. If you have configured Mutual
CHAP authentication on the client, select CHAP Authentication and enter the iSCSI client password
(secret). Click Add.
Note: Ensure that you enter the client IQN in the Client IQN/IP Address field and not the IP address. AOS
does not support an allowlist containing IP addresses in a volume group.
Access Control displays any configured clients. This list includes any clients attached to volume groups
in the cluster. Repeat this step to add more initiators allowed to access this storage. For information about
which products, features, or solutions are supported for concurrent access to a volume group, see Concurrent
Access from Multiple Clients on page 145.
Note: Individual virtual disks of a volume group cannot be excluded from Flash Mode by using the Prism Element
web console. However, you can exclude individual virtual disks from flash mode by using aCLI. For more
information, see Removing Flash Mode for Virtual Disks of a Volume Group on page 150.
9. Click Save.
What to do next
If the hypervisor is AHV, you can now attach the volume group to the VM and start using the vDisks. If you
want to use iSCSI and you have already allowlisted the host IP addresses, log on to the VMs and configure
iSCSI.
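On AHV, you can also perform the attach step from the aCLI. The following is a minimal sketch, assuming a volume group named vg1 and a VM named vm1 (both hypothetical names):
nutanix@cvm$ acli vg.attach_to_vm vg1 vm1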
Procedure
2. Select Storage from the pull-down main menu (upper left of screen), and then select the Table and Volume
Group tabs.
3. To update a volume group, select the volume group, and then click the Update link.
The Update Volume Group window appears, which includes the same fields as the Create Volume Group
window. For more information, see Creating a Volume Group on page 146. In this window you can change
the volume group name, add and remove disks, configure the volume group for sharing, and add or remove (by
clearing) entries from the initiator allowlist. On AHV clusters, you can attach the volume group to a VM as a
SCSI disk (described later in this procedure). If you attach a volume group to a VM that is part of a Protection
4. To attach a volume group to a VM, click Attach to a VM, and then select the VM from the Available VMs list.
5. To manage the iSCSI client list, click the Summary link to deselect any volume groups, and then click the
Manage Initiators link. (This link appears only when no volume groups are selected.)
The Manage iSCSI Clients window appears, which includes a list of the clients and available actions. To modify
a client (enable/disable CHAP authentication), click the pencil icon for that client, which displays the Edit iSCSI
client window. For more information, see the Nutanix Volumes Guide.
Procedure
1. If any iSCSI clients are attached to the volume group, first disconnect or detach the AOS cluster target for
each iSCSI initiator from the iSCSI client (for example, from the Windows or Linux client). See your vendor
documentation for specific disconnection procedures.
2. In the web console, select Storage from the pull-down main menu (upper left of screen), and then select the
Table and Volume Group tabs.
3. To detach any iSCSI clients attached to the volume group, first update the volume group by doing these steps.
You might need to scroll down the dialog box. If no clients are attached, skip this step.
4. To delete a volume group, select the volume group, and then click Delete.
Procedure
2. Select Storage from the drop-down main menu (upper left of screen), and then select Table > Volume Group.
The system displays a list of volume groups created in the cluster.
3. Select the volume group that you want to clone and click Clone.
This displays the Clone Volume Group window.
5. Click Save.
Note: For information about minimum SSDs requirement for Hybrid HCI Node and All-Flash HCI Node, see HCI
Node Field Requirements topic in Acropolis Advanced Administration Guide.
If you enable this feature on a VM, all the virtual disks that are attached to the VM are automatically placed on the
SSD tier. Also, virtual disks added to this VM are automatically placed on SSD. However, you can update the VM
configuration to remove the flash mode from any virtual disks.
You can enable the feature on the VM and VM disks only during the VM update workflow from the Prism Element
web console. For information on how to enable the feature on VMs, see step 11 on page 282. However, for VGs, you can enable the
feature during the creation of VGs. For information on how to enable the feature on VGs, see Creating a Volume
Group on page 146.
Caution: While enabling the flash mode feature for a VM may increase the performance of that VM, it may also
lower the performance of VMs that do not have flash mode feature enabled. Nutanix recommends you to consider
the performance impact on other VMs and VGs. To mitigate any impact on the performance, you can update the VM
configuration and remove the flash mode on individual virtual disks. For example, you can enable flash mode on the
applications data disks and disable it on the log disks.
A node failure may cause a reduction in SSD tier capacity that can drive the flash mode usage above the 25% of the
tier capacity. In this event, alerts will report the flash usage exceeding 25%.
Note:
• This feature is supported on ESXi and AHV for VMs and on all the hypervisors for VGs.
• For the cluster created using ESXi hosts, you must register your cluster with the vCenter Server. For
more information, see Registering a Cluster to vCenter Server on page 365.
• If a VM or a VG with flash mode feature enabled is cloned, then the flash mode policies are not automatically
applied to the cloned VM or VG.
Procedure
1. Log in to the Controller VM in your cluster through an SSH session and access the Acropolis command line.
2. (Optional) If you have not enabled the flash mode feature for the VG by using Prism, you can enable it by running
the following command.
acli> vg.update vg_name flash_mode=true
Replace vg_name with the name of the VG on which you want to enable the flash mode.
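To confirm the change, you can inspect the volume group configuration from the same aCLI session; this is a sketch only, and vg_name is again the hypothetical volume group name (the flash_mode field in the output is expected to show True):
acli> vg.get vg_name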
Note: When Replication Factor 1 is enabled for a storage container, the recycle bin is disabled for that storage
container. When you delete a storage entity like a guest VM or volume group vDisk, it is marked for deletion as soon as
possible and bypasses the recycle bin. For more information, see Replication Factor 1 Overview on page 101.
When the Recycle Bin is enabled, AOS creates a Recycle Bin associated with the storage container configured in your
cluster. If you then delete a storage entity (like a guest VM or volume group vDisk), the Recycle Bin retains vDisk
and configuration data files for up to 24 hours.
If you enable the Recycle Bin and then delete a guest VM or volume group vDisk, it retains its contents (deleted
vDisks and configuration information) for up to 24 hours, unless the cluster free storage space reaches critical
thresholds. After 24 hours, AOS automatically deletes these files. AOS deletes the files in less than 24 hours if your
cluster is unable to maintain sufficient free disk space. AOS triggers free disk space alerts when your cluster reaches
critical thresholds.
After you disable the Recycle Bin, AOS automatically deletes any entities in the Recycle Bin after 24 hours. Then,
when you subsequently delete any storage entities, AOS marks them for deletion as soon as possible. They are not
stored in the storage container Recycle Bin folder.
After deleting one or more storage entities, the Free storage space shown in the Prism Element web console might
not update for 24 hours. To see the space used by the Recycle Bin in the Storage Summary, go to the web console
Storage dashboard and select the Diagram or Table view.
For more information about Recycle Bin behavior, see Recycle Bin Limitations and Considerations on
page 151.
• The Recycle Bin stores vDisk and configuration data for up to 24 hours. After 24 hours, these files are deleted.
The files are deleted in less than 24 hours if your cluster is unable to maintain sufficient free disk space.
The Recycle Bin is temporarily disabled when the Recycle Bin is using more than five percent of cluster storage
capacity. AOS triggers free disk space alerts when your cluster reaches critical thresholds. In this case, newly
deleted entities are marked for deletion as soon as possible. They are not stored in the storage container
Recycle Bin folder.
When the Recycle Bin is cleared manually or automatically and the Recycle Bin is using two percent or less of
cluster storage capacity, Recycle Bin is automatically re-enabled and available for use after all Curator service
scans are completed. (Among other cluster tasks, the Curator service controls Recycle Bin and storage cleanup.)
• If the Recycle Bin contains more than 2000 files, deleted storage entities (guest VMs and volume group vDisks)
bypass the Recycle Bin. AOS marks them for deletion as soon as possible. They are not stored in the storage
container Recycle Bin folder.
• The web console displays the Clear Recycle Bin (available on the Storage dashboard in Summary view) or
Clear Space options (available in Storage Details) only if the Recycle Bin contains deleted items. If these
options are available but the space used by the Recycle Bin is 0, AOS has detected the items but not yet calculated
the storage space used.
• After you empty the Recycle Bin with Clear Recycle Bin or Clear Space or AOS empties the Recycle Bin,
these options are not displayed again until you delete storage entities like guest VMs and volume group vDisks.
Procedure
2. Click the gear icon in the main menu and then select Cluster Details in the Settings page.
3. To disable the Recycle Bin, clear the Retain Deleted VMs for 24h checkbox.
After you disable the Recycle Bin, AOS automatically deletes any entities in the Recycle Bin after 24 hours. Then,
when you subsequently delete any storage entities, AOS marks them for deletion as soon as possible. They are not
stored in the storage container Recycle Bin folder. See also Recycle Bin on page 151.
4. To enable the Recycle Bin, select the Retain Deleted VMs for 24h checkbox.
If you enable the Recycle Bin and then delete a guest VM or volume group vDisk, it retains its contents (deleted
vDisks and configuration information) for up to 24 hours, unless the cluster free storage space reaches critical
thresholds.
Procedure
3. To view the space used by the Recycle Bin, click the Diagram or Table tab.
The Storage Summary panel shows the space used by the Recycle Bin. If the Clear Recycle Bin option is
available but the space used by the Recycle Bin is 0, AOS has detected the deleted items but not yet calculated the
storage space used.
4. Also, from the Home dashboard, at the Storage Summary widget, click View Details.
The Recycle Bin shows the space used. If the Clear Space option is available but the space used by the Recycle
Bin is 0, AOS has detected deleted items but not yet calculated the storage space used.
• The web console displays the Clear Recycle Bin (available on the Storage dashboard in Summary view) or
Clear Space options (available in Storage Details from the Storage Summary widget) only if the Recycle
Bin contains deleted items. If these options are available but the space used by the Recycle Bin is 0, AOS has
detected the items but not yet calculated the storage space used.
• After you empty the Recycle Bin with Clear Recycle Bin or Clear Space or AOS empties the Recycle Bin,
these options are not displayed again until you delete storage entities like guest VMs and volume group vDisks.
Procedure
2. From the Home dashboard: at the Storage Summary widget, click View Details.
a. To clear the space used by the Recycle Bin, click Clear Recycle Bin.
b. In the confirmation pop-up, click Delete.
You cannot undo this action. Any entities in the Recycle Bin are marked for deletion immediately.
• To configure network connections in clusters through the web console with Nutanix virtualization management
(such as those running AHV as the hypervisor), see Network Configuration for Cluster on page 154.
• To enable LAG and LACP on the ToR switch, see Enabling LACP and LAG (AHV Only) on page 172.
• To configure the network interfaces for a VM, see Network Configuration for VM Interfaces on page 165.
• To track and record networking statistics for a cluster, the cluster requires information about the first-hop
network switches and the switch ports being used. You can configure one or more network switches for statistics
collection. For more information, see Configuring Network Switch Information on page 170.
• A network visualizer is provided that presents a consolidated graphical representation of the network formed
by the VMs and hosts in a Nutanix cluster and first-hop switches. You can use the visualizer to monitor the
network and to obtain information that helps you troubleshoot network issues. For more information, see Network
Visualization on page 176.
Network Connections
Each VM network interface is bound to a virtual network, and each virtual network is bound to a single VLAN.
Information about the virtual networks configured currently appears in the Network Configuration page.
The Network Configuration page includes three tabs.
Networks Tab
• Virtual Switch. Displays the name of the virtual switch in the form vs#, for example vs0 for virtual switch 0, which is the default virtual switch. (vs<number>)
• VLAN ID. Displays the VLAN identification number for the network in the form vlan.#, for example vlan.27 for virtual LAN number 27. (ID number)
• Free IPs in Subnets. Displays the number of free or unused IP addresses in the subnet. This parameter is applicable only when you have configured a managed network or subnet. (number of IP addresses)
• Free IPs in Pool. Displays the number of free or unused IP addresses in the configured pool. This parameter is applicable only when you have configured a managed network or subnet. (number of IP addresses)
• Subnet (Gateway IP / Prefix Length). Displays the subnet that the internal interface belongs to, in the form <IP Address>/<number (prefix)>. (IP address/prefix number)
Virtual Switch
• Name. Displays the name of the switch in the form vs#. (vs<number>)
• Bridge. Displays the name of the bridge associated with the virtual switch in the form br#, for example br0 for the default bridge. (br<number>)
• MTU (bytes). Displays the MTU set for the virtual switch in bytes. The default MTU is 1500. (number)
• Bond Type. Displays the uplink bond type associated with the virtual switch, for example Active-Backup. See the Bond Type table. (<bond_type>)
Procedure
4. To create a virtual switch, click + Create VS and perform the following steps in the Create Virtual Switch
dialog box.
To update a virtual switch, click the edit option (pencil icon) and perform the following steps in the Edit Virtual
Switch dialog box.
The dialog box includes the following fields:
• Description. Provide a description for the virtual switch that helps identify the virtual switch.
• Physical NIC MTU (bytes). MTU must be a value in the range 1500 through 9000, inclusive.
• Select Configuration Method. Select the Standard (Recommended) method to implement the VS configuration. This method ensures no disruptions occur to the workloads by putting the hosts in maintenance mode and migrating the VMs out of the host before applying the configuration. In this method, the VS configuration is deployed in a rolling update process. This process requires a longer duration of time to complete; the time required depends on the number and configuration of VMs.
Note: When the Standard method is selected, only the hosts that have been updated with virtual switch configurations are rebooted.
• Bond Type. Select an appropriate bond type. See the Bond Types table for details about the bond types.
• Select Uplink Ports. Select the criteria that need to be satisfied for the uplink ports. The available uplink ports that satisfy the criteria are displayed in the (Host Port) table at the bottom of this tab.
• Uplink Port Speeds. Select a speed to display the ports that have the selected speed. You can select speeds such as 1G, 10G, or both (All Speeds). The speeds displayed depend on the NIC type that is installed on the host. Based on your selection, the columns in the (Host Port) table change dynamically to display the ports with the speeds you selected.
• (Host Port) table. Based on the selections you made in the Select Uplink Ports section, a table displays the hosts that have the uplink ports that satisfy the selected criteria. Select the ports you need for this configuration from the list. Click the down arrow on the right side of the table to display the ports listed for each host. Click the check box of a port to select the port; when the check box is already selected, click it again to clear the selection. Clearing the selection removes the uplink port (NIC) from the virtual switch. Click Select All to select all the listed ports, or Clear All to clear the selection of all the listed ports.
• Speed: Fast (1s)
• Mode: Active fallback-active-backup
• Priority: Default (not configurable)
Note: The Maximum VM NIC Throughput and Maximum Host Throughput values are not restricted to the value
provided in this table. The values in the table are indicated for an assumption of 2 x 10 Gb adapters with simplex speed.
For more information about uplink configuration, see Virtual Switch Workflow in the AHV Administration Guide.
7. Click Create to create the virtual switch or Save to update an existing virtual switch.
Click Cancel to exit without creating or updating the virtual switch.
Click Back to go back to the General tab.
1. Click the Delete icon (trash icon) next to the virtual switch listed in the list on the Virtual Switch tab in the
Network Configuration dialog box.
The Delete Virtual Switch dialog box opens. Deletion cannot be undone or reversed. Therefore, you must
confirm the deletion in this dialog box.
What to do next
Check if the virtual switch is deleted from the list on the Virtual Switch tab.
• The bridge intended to be converted to a Virtual Switch must have consistent configurations across all the nodes
in the cluster in terms of bond-type, MTU and LACP parameters.
• Within a virtual switch, the VLAN IDs must be exclusive across networks. An exception to this configuration
is when one network is IPAM enabled and the other network is not IPAM enabled. In a scenario where an existing
bridge has duplicate VLAN IDs, only one of these networks (per IPAM state) is migrated under the virtual
switch. The additional networks remain as is, with no visible impact to the functionality.
Note:
Networks with the same VLAN IDs can exist across different virtual switches.
Note: You cannot migrate any bridges using Prism Central. Use Prism Element web console or aCLI to migrate or
convert the bridges.
You can convert only one bridge at a time. You need to repeat the workflow for every bridge that you want to convert
to a virtual switch.
Note: The migration process creates new virtual switches which host the bridges that are being migrated or converted.
Procedure
2. Click the Convert Bridges to VS link at the bottom left of the Network dashboard.
If there are any bridges that you can migrate, the system displays the Convert Bridges to VS dialog box.
If there are no bridges that you can migrate, the system displays the There are no OVS bridges that can be
converted to virtual switches message.
a. Provide a Name and a Description for the virtual switch that you want to create by migrating the bridge.
For example, if you select br1 in Select a Bridge, you can provide vs1 as the name for the virtual switch.
4. Click Convert.
The converted virtual switch is available in the Network page in Prism Element web console and the Virtual
Switch tab on the Network Configuration page in Prism Central.
• Place the affected AHV host where you want to reconfigure the bonds into maintenance mode.
Log on to any CVM using SSH, and run the following command:
nutanix@cvm$ acli host.enter_maintenance_mode hypervisor-IP-address [wait="{ true |
false }" ] [non_migratable_vm_action="{ acpi_shutdown | block }" ]
Replace hypervisor-IP-address with either the IP address or host name of the AHV host you want to shut
down.
The following are optional parameters for running the acli host.enter_maintenance_mode command:
• wait: Set the wait parameter to true to wait for the host evacuation attempt to finish.
• non_migratable_vm_action: By default the non_migratable_vm_action parameter is set to block,
which means the non-migratable VMs are shut down when you put a node into maintenance mode. For more
information on non-migratable VMs, see Non-Migratable VMs.
Note: VMs with host affinity policies are also not migrated to other hosts in the cluster if any of the following
conditions is met:
• The hosts that are configured as part of the VM-Host affinity policy are not available.
• The hosts that are part of the VM-Host affinity policy do not have sufficient resources to run
the VM.
If you want to automatically shut down such VMs, set the non_migratable_vm_action parameter to
acpi_shutdown.
For more information, see Putting a Node into Maintenance Mode using CLI.
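For example, a complete invocation that waits for the host evacuation to finish and shuts down non-migratable VMs might look like the following (the host IP address 10.10.10.11 is hypothetical):
nutanix@cvm$ acli host.enter_maintenance_mode 10.10.10.11 wait=true non_migratable_vm_action=acpi_shutdown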
• Check the Data Resiliency Status of the cluster to ensure the cluster is healthy and resilient to any brief
interruptions to network connectivity during uplink changes. For more information, see Home Dashboard on
page 48.
Important:
• Perform the bond changes only on one host at a time. Ensure that you get the completed host out of
maintenance mode before you proceed to work on any other affected hosts.
• Use this procedure only when you need to modify inconsistent bonds in a migrated bridge across
hosts in a cluster that are preventing Acropolis (AOS) from deploying the virtual switch for the migrated
bridge.
Do not use ovs-vsctl commands to make the bridge level changes. Use the manage_ovs commands,
instead.
The manage_ovs command allows you to update the cluster configuration. The changes are applied
and retained across host restarts. The ovs-vsctl command allows you to update the live running host
configuration but does not update the AOS cluster configuration and the changes are lost at host restart.
This behavior of ovs-vsctl introduces connectivity issues during maintenance, such as upgrades or
hardware replacements.
ovs-vsctl is typically used in cases where a host might be isolated on the network and requires a
workaround to gain connectivity before the cluster configuration can actually be updated using
manage_ovs.
Note: Disable the virtual switch before you attempt to change the bonds or bridge.
If you face an issue where the virtual switch is automatically re-created after it is disabled (with AOS
versions 5.20.0 or 5.20.1), follow steps 1 and 2 below to disable such an automatically re-created virtual
switch again before migrating the bridges. For more information, see KB-3263.
Be cautious when using the disable_virtual_switch command because it deletes all the configurations from
the IDF, not only for the default virtual switch vs0, but also any virtual switches that you may have created
(such as vs1 or vs2). Therefore, before you use the disable_virtual_switch command, ensure that you check
a list of existing virtual switches, that you can get using the acli net.get_virtual_switch command.
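Before disabling, it is prudent to capture the current virtual switch configuration so you know exactly what the command will remove. A minimal sketch, assuming your AOS version also provides the net.list_virtual_switch command in addition to the net.get_virtual_switch command named above (vs0 is the default virtual switch):
nutanix@cvm$ acli net.list_virtual_switch
nutanix@cvm$ acli net.get_virtual_switch vs0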
Complete this procedure on each host Controller VM that is sharing the bridge that needs to be migrated to a virtual
switch.
Procedure
• bridge-name: Provide the name of the bridge, such as br0 for the virtual switch on which you want to set the
uplink bond mode.
• bond-name: Provide the name of the uplink port such as br0-up for which you want to set the bond mode.
• bond-type: Provide the bond mode that you require to be used uniformly across the hosts on the named
bridge.
Use the manage_ovs --help command for help on this command.
Note: To disable LACP, change the bond type from Active-Active with LACP (balance-tcp) to Active-Backup
(active-backup) or Active-Active with MAC pinning (balance-slb) by setting the bond_mode in this command to
active-backup or balance-slb.
Ensure that you turn off LACP on the connected ToR switch port as well. To avoid blocking of the bond
uplinks during the bond type change on the host, ensure that you follow the ToR switch best practices to
enable LACP fallback or passive mode.
To enable LACP, configure bond-type as balance-tcp (Active-Active) with additional variables --
lacp_mode fast and --lacp_fallback true.
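The following is a minimal sketch of what such bond changes typically look like with manage_ovs; the bridge name br0 and bond name br0-up are examples, and you should confirm the flags with manage_ovs --help on your AOS version.
To disable LACP (Active-Backup):
nutanix@cvm$ manage_ovs --bridge_name br0 --bond_name br0-up --bond_mode active-backup update_uplinks
To enable LACP (Active-Active, balance-tcp):
nutanix@cvm$ manage_ovs --bridge_name br0 --bond_name br0-up --bond_mode balance-tcp --lacp_mode fast --lacp_fallback true update_uplinks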
4. Exit the host from maintenance mode, using the following command:
nutanix@cvm$ acli host.exit_maintenance_mode hypervisor-IP-address
Replace hypervisor-IP-address with the IP address of the AHV host.
This command migrates (live migration) all the VMs that were previously running on the host back to the host.
For more information, see Exiting a Node from the Maintenance Mode Using CLI.
5. Repeat the above steps for each host for which you intend to reconfigure bonds.
6. (If migrating to an AOS version earlier than 5.20.2) Check whether the issue described in the note above occurs and, if it does, disable the virtual switch again.
What to do next
After making the bonds consistent across all the hosts configured in the bridge, migrate the bridge or
enable the virtual switch. For more information, see:
• Nutanix recommends that you assign a set of unique VLANs for guest VMs on each AHV cluster. Ensure these
VLANs do not overlap with the VLANs on other AHV clusters. Assigning unique VLAN ranges for each cluster
reduces the risk of MAC address conflicts. Such an assignment also ensures compliance with the general best
practice of maintaining small Layer 2 broadcast domains with limited numbers of endpoints.
Note: Nutanix AHV clusters use the MAC address prefix OUI 50:6B:8D by default.
By default, the Acropolis leader generates the MAC address for a VM on AHV. The first 24 bits of the MAC
address (the OUI) are set to 50-6b-8d (0101 0000 0110 1011 1000 1101) and are reserved by Nutanix, the 25th
bit is set to 1 (reserved by the Acropolis leader), and bits 26 through 48 are auto-generated random numbers.
Consider this sample design of a deployment with three sites and five clusters in each site. Define a unique MAC
address prefix for Site1-Cluster1 such as 02:01:01, where:
• 02—Defines the MAC address as a unicast address that is locally administered. Any first octet whose binary
equivalent has the second bit set to 1 (binary bits being counted from right to left, the rightmost being the first bit)
is locally administered, and a 0 in the first bit keeps the address unicast; that is, the octet can be X2, X6, XA, or
XE, where X is any valid hexadecimal digit. For example, 0x02 is 0000 0010 in binary: the first bit is 0 (unicast)
and the second bit is 1 (locally administered).
• 01—Used to identify, for example, the site number. This value could be any valid hexadecimal value.
• 01—Used to identify, for example, the AHV cluster within the site. This value could be any valid hexadecimal
value.
The NIC specific octets, XX:XX:XX, are auto-assigned to the VM NICs or the endpoints within each AHV cluster.
Thus, for Site1, the clusters would have the following prefixes:
• Site1-Cluster1: 02:01:01
• Site1-Cluster2: 02:01:02
• Site1-Cluster3: 02:01:03
• Site1-Cluster4: 02:01:04
• Site1-Cluster5: 02:01:05
... and so on for the other clusters at Site1.
Similarly, for Site2, if you define, for example, 02:02:01 as the MAC address prefix for the first cluster (Cluster1),
you get the following series of predefined MAC address prefixes for the clusters and the VMs or endpoints in Site2, Cluster1:
• Site2-Cluster1-VM1: 02:02:01:00:00:01
• Guest VMs in the cluster do not have any NICs that have MAC addresses with the default prefix.
• Check the Requirements and Considerations for MAC Address Prefix on page 163.
Procedure
3. Add the MAC address prefix for the cluster using the following command.
<acropolis> net.set_mac_prefix XX:XX:XX
Note: Ensure that the VMs in the cluster do not have any NICs that have MAC addresses with the default prefix.
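For example, using the sample prefix from the design described earlier in this section:
<acropolis> net.set_mac_prefix 02:01:01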
What to do next
Verify if the MAC address prefix is configured using the net.get_mac_prefix command.
The output displays the hexadecimal prefix that you configured.
Using the configuration example, the output would show "02:01:01" as follows:
<acropolis> net.get_mac_prefix
"02:01:01"
<acropolis>
Repeat this procedure to add MAC address prefixes to the other clusters that share the same VLAN (and therefore
the same broadcast domain) in which you want to avoid duplicate MAC addresses.
Procedure
3. Remove the MAC address prefix for the cluster using the following command.
<acropolis> net.clear_mac_prefix
Note: Ensure that the VMs in the cluster do not have any NICs that have MAC addresses with the configured
prefix.
What to do next
Verify that the MAC address prefix is removed. When you use the net.get_mac_prefix command, the
output displays the default MAC address prefix, "50:6b:8d".
<acropolis> net.get_mac_prefix
"50:6b:8d"
<acropolis>
Procedure
• To configure virtual networks for user VM interfaces, see Configuring a Virtual Network for Guest VM
Interfaces on page 165.
• To secure intra-cluster communications by configuring network segmentation, see Securing Traffic Through
Network Segmentation in the Nutanix Security Guide.
Note: Do not add any other device (including guest VMs) to the VLAN to which the CVM and hypervisor host are
assigned. Isolate guest VMs on one or more separate VLANs.
Procedure
Note: This option only appears in clusters that support this feature.
3. Click the Subnets tab and then click the + Create Subnet button.
The Create Subnet dialog box appears. Do the following in the indicated fields:
Note: If you do not enable this option while creating a network, you cannot enable or disable IP address
management (IPAM).
IP Address Management (IPAM) is a feature of AHV that allows it to assign IP addresses
automatically to VMs by using DHCP. For more information, see IP Address Management in
the AHV Administration Guide.
e. Network IP Address/Prefix Length: Enter the network IP address with the prefix length in CIDR
notation (for example, 10.1.1.0/24).
f. Gateway IP Address: Enter the VLAN default gateway IP address.
g. Configure Domain Settings: Select this checkbox to display fields for defining a domain.
Selecting this checkbox displays fields to specify DNS servers and domains. Clearing the checkbox hides
those fields.
h. Domain Name Servers (comma separated): Enter a comma-delimited list of DNS servers.
i. Domain Search (comma separated): Enter a comma-delimited list of domains.
j. Domain Name: Enter the VLAN domain name.
k. TFTP Server Name: Enter the host name or IP address of the TFTP server from which virtual machines
can download a boot file. Required in a Pre-boot eXecution Environment (PXE).
l. Boot File Name: Name of the boot file to download from the TFTP server.
4. To define a range of addresses for automatic assignment to virtual NICs, click the Create Pool button (under IP
Address Pools) and enter the following in the Add IP Pool dialog box:
If you do not assign a pool, you must assign IP addresses to the VMs manually.
a. Enter the starting IP address of the range in the Start Address field.
b. Enter the ending IP address of the range in the End Address field.
c. Click the Submit button to close the window and return to the Create Subnet dialog box.
5. To configure a DHCP server, select the Override DHCP server checkbox and enter an IP address in the DHCP
Server IP Address field.
This address (reserved IP address for the Acropolis DHCP server) is visible only to VMs on this network
and responds only to DHCP requests. If you do not check the box, the DHCP Server IP Address
field does not display, and the DHCP server IP address generates automatically. The automatically
generated address is network_IP_address_subnet.254, or if the default gateway is using that address,
network_IP_address_subnet.253.
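If you prefer the command line, a roughly equivalent configuration can be sketched with aCLI. The network name vlan100, VLAN ID 100, and the addresses below are assumptions that follow the examples in this section:
nutanix@cvm$ acli net.create vlan100 vlan=100 ip_config=10.1.1.1/24
nutanix@cvm$ acli net.add_dhcp_pool vlan100 start=10.1.1.100 end=10.1.1.200
The first command creates an IPAM-managed network with gateway 10.1.1.1 on the 10.1.1.0/24 subnet; the second adds an address pool for automatic assignment to virtual NICs.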
Note:
• You can also specify network mapping to control network configuration for the VMs when they start
on the remote site. For more information about configuring networking mapping on remote site, see
Configuring a Remote Site (Physical Cluster) in the Data Protection and Recovery with Prism
Element Guide.
• Verify (or update as needed) that the physical switch configuration allows traffic for the same
VLANs that are configured for the virtual networks.
Procedure
1. Click the gear icon in the main menu and then select Network Configuration in the Setting page.
Procedure
3. To modify a network configuration, in the Subnet tab, click the Edit option associated with the subnet.
The Update Subnet dialog box appears, which contains the same fields as the Create Subnet dialog box.
For more information, see Configuring a Virtual Network for Guest VM Interfaces on page 165. Do the
following:
Note: Changing the VLAN ID of a network in use is not allowed; only the name can be changed.
4. To delete a network configuration, select the target line and click the X icon (on the right).
A window prompt appears to verify the action; click the OK button. The network configuration is removed from
the list.
Network Segmentation
You can segment the network on a Nutanix cluster by using Prism Element.
For more information, see Securing Traffic Through Network Segmentation in the Security Guide.
2. Click the gear icon in the main menu and then select Network Switch in the Settings page.
3. Click the Switch Configuration tab and then click the Add Switch Configuration button.
a. Switch Management IP Address: Enter the Management IP address of the switch or the fully qualified
switch name.
b. Host IP Addresses or Host Name (separated by commas): Enter the IP address or fully qualified
host name of each host in the cluster that uses this switch to route traffic.
When there are multiple hosts, enter the addresses in a comma separated list. Failing to add the host list
might result in issues with switch port discovery.
c. SNMP Profile: Select the SNMP profile to apply (or None) from the drop-down list.
Any SNMP profiles you have created appear in the list.
Note: Selecting a profile populates the remaining fields automatically with the profile values. If you have not
created a profile (or select None from the list), you must enter the values in the remaining fields manually.
d. SNMP Version: Select the SNMP version to use from the drop-down list.
The options are SNMPv2c and SNMPv3.
e. SNMP Security Level (SNMPv3 only): Select the security level to enforce from the drop-down list.
The options are No Authorization No Privacy, Authorization But No Privacy, and Authorization
and Privacy. This field appears only when SNMPv3 is selected as the version.
f. SNMP Community Name: Enter the SNMP community name to use.
g. SNMP Username: Enter the SNMP user name.
h. SNMP Authentication Type: Select the authentication protocol to use from the drop-down list.
The options are None and SHA.
Note: This field and the following three fields are set to None or left blank (and read only) when the version
is SNMPv2c or the security level is set to no authorization.
i. SNMP Authentication Pass Phrase: Enter the appropriate authentication pass phrase.
j. SNMP Privacy Type: Select the privacy protocol to use from the drop-down list.
The options are None, AES, and DES.
k. SNMP Privacy Pass Phrase: Enter the appropriate privacy pass phrase.
l. When all the fields are correct, click the Save button.
This saves the profile and displays the new entry in the Switch Configuration tab.
Note: As a security protection, the SNMP Authentication Pass Phrase and SNMP Privacy Pass
Phrase fields appear blank after saving (but the entered phrases are saved).
Procedure
1. Log in to Prism Element and navigate to Settings > Network Configuration > Virtual Switch.
You can also log in to Prism Central, select the Infrastructure application from the Application Switcher,
and navigate to Network & Security > Subnets > Network Configuration > Virtual Switch from the
navigation bar.
The system displays the Virtual Switch tab.
2. Click the Edit icon ( ) for the target virtual switch on which you want to configure LACP and LAG.
The system displays the Edit Virtual Switch window.
3. In the General tab, choose Standard (Recommended) option in the Select Configuration Method field,
and click Next.
Important: When you select the Standard method, only the hosts that have been updated are restarted.
The Standard configuration method puts each updated node in maintenance mode before applying the
updated settings. After applying the updated settings, the node exits from maintenance mode. For more
information, see Virtual Switch Workflow.
4. In the Uplink Configuration tab, select Active-Active in the Bond Type field, and click Save.
Note: The Active-Active bond type configures all AHV hosts with the fast setting for LACP speed, causing
the AHV host to request LACP control packets at the rate of one per second from the physical switch. In addition,
the Active-Active bond type configuration sets LACP fallback to Active-Backup on all AHV hosts. You
cannot modify these default settings after you have configured them in Prism, even by using the CLI.
This completes the LAG and LACP configuration on the cluster. At this stage, the cluster starts the Rolling Reboot
operation for all the AHV hosts. Wait for the reboot operation to complete before you put the node and CVM in
maintenance mode and change the switch ports.
For more information about how to manually perform the rolling reboot operation for an AHV host, see
Rebooting an AHV Node in a Nutanix Cluster.
Note: Before you put a node in maintenance mode, see Verifying the Cluster Health and carry out the
necessary checks.
Step 6 in the Putting a Node into Maintenance Mode Using Web Console section puts the Controller VM in
maintenance mode.
6. Change the settings for the interface on the switch that is directly connected to the Nutanix node to match the
LACP and LAG settings made in the Edit Virtual Switch window above.
For more information about how to change the LACP settings of the switch that is directly connected to the node,
refer to the vendor-specific documentation of the deployed switch.
Nutanix recommends you perform the following configurations for LACP settings on the switch:
Consider the LACP time options (slow and fast): Nutanix recommends that the LACP time match on both the
switch and the AHV host so that L2 failure detection occurs at the same time on both. If the switch has a fast
configuration, set the LACP time to fast on the AHV host as well. When the LACP time setting matches on the
AHV host and the switch, the detachment of a failed interface occurs at the same time, and neither the switch nor
the AHV host uses the failed interface.
When the LACP time is set to:
• bond-name with the actual name of the uplink port such as br0-up in the above commands.
• [AHV host IP] with the actual AHV host IP at your site.
8. Remove the node and Controller VM from maintenance mode. For more information, see Exiting a Node from
the Maintenance Mode using Web Console.
The Controller VM exits maintenance mode during the same process.
What to do next
Do the following after completing the procedure to enable LAG and LACP on all the AHV nodes and the
connected ToR switches:
• Verify that the status of all services on all the CVMs is Up. Run the following command and check whether the
status of the services is displayed as Up in the output (see also the example after this list):
nutanix@cvm$ cluster status
• Log in to the Prism Element web console of the node and check the Data Resiliency Status widget displays
OK.
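As a quick sketch of these post-checks (the bond name br0-up below is the common default; replace it and [AHV host IP] with the values at your site), you can filter the service output and confirm LACP negotiation on each host:
nutanix@cvm$ cluster status | grep -v UP
nutanix@cvm$ ssh root@[AHV host IP] "ovs-appctl bond/show br0-up"
nutanix@cvm$ ssh root@[AHV host IP] "ovs-appctl lacp/show br0-up"
The first command hides services that are already UP, so any remaining service lines deserve attention. The bond/show and lacp/show output should report the expected bond mode and a negotiated LACP state on every uplink of each host.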
Procedure
2. Click the gear icon in the main menu and then select Network Switch in the Settings page.
3. Click the SNMP Profile tab and then click the Add SNMP Profile button.
Note: This field and the following three fields are set to None or left blank (and read only) when the version
is SNMPv2c or the security level is set to no authorization.
g. SNMP Authentication Pass Phrase: Enter the appropriate authentication pass phrase.
h. SNMP Privacy Type: Select the privacy protocol to use from the pull-down list.
The options are None, AES, and DES.
i. SNMP Privacy Pass Phrase: Enter the appropriate privacy pass phrase.
j. When all the fields are correct, click the Save button.
This saves the profile and displays the new entry in the SNMP Profile tab.
Procedure
1. Click the gear icon in the main menu and then select Network Switch in the Settings page.
2. To modify a switch configuration (or SNMP profile), select the Switch Configuration (or SNMP Profile) tab,
go to the target switch (or profile) entry in the table, and click the pencil icon.
This displays the configuration fields. Edit the entries as desired and then click the Save button.
3. To delete a switch configuration (or SNMP profile), select the Switch Configuration (or SNMP Profile) tab,
go to the target switch (or profile) entry in the table, and click the X icon.
This deletes the entry and removes it from the table.
Caution: You cannot use the visualizer to configure the network. The network visualizer is available only on AHV
clusters.
Prerequisites
Before you use the network visualizer, do the following:
• Configure network switch information on the Nutanix cluster. For more information, see Configuring Network
Switch Information on page 170.
• Enable LLDP or CDP on the first-hop switches. The visualizer uses LLDP or CDP to determine which switch
port is connected to a given host interface. If LLDP or CDP is unavailable, SNMP data is used on a commercially
reasonable effort. For information about configuring LLDP or CDP, see the switch manufacturer’s documentation.
• (Optional) Configure SNMP v3 or SNMP v2c on the first-hop switches if you want the network visualizer to
display the switch and switch interface statistics. The visualizer uses SNMP for discovery and to obtain real-
time usage statistics from the switches. For information about configuring SNMP, see the switch manufacturer’s
documentation.
Note: This is not a mandatory requirement to use the network visualizer. This is a prerequisite only if you want the
network visualizer to display the switch and switch interface statistics.
Note: Network visualization depends on a functional internal DNS system to map switch hostnames and IP addresses
based on the LLDP responses. Incorrect or partial DNS configuration displays inaccurate network details in the UI. To
troubleshoot the network visualization issues, see KB-4085 in the Nutanix Support portal.
Network Visualizer
The network visualizer displays interactive visual elements for the networked devices and for network components
such as physical and logical interfaces. It also provides filtering and grouping capabilities that you can use to limit the
display to a specific set of networked devices and connections.
The network visualizer includes the virtual networks pane and the topology view.
Topology View
Displays the network topology. The topology view shows the following entities:
Virtual Switch (VS)
VSs are configured on the cluster. The visualizer displays a different color for each VS. It shows the
VSs to which a VM or the VMs in a VM group belong. It also shows which VSs are configured on a
first-hop switch.
VMs
VMs on the VSs that are selected in the virtual networks pane. Filter and group-by options enable
you to customize the topology view.
Hosts
Hosts in the Nutanix cluster. The filter above the hosts enables you to specify which hosts you want
to show in the topology view.
Switches
First-hop switches and the VSs configured on each of them. The filter above the switches enables
you to specify which switches you want to show in the topology view.
Procedure
3. Specify which virtual switches you want to show in the topology view:
• In the Virtual Switch (VS) pane, select the checkboxes associated with the virtual switches and the networks
in each virtual switch that you want to display in the topology view and clear those that you do not want to
display.
• Select or clear the Other checkbox if you want to include or exclude, respectively, the VMs that are not on
any VLANs.
• Select a group-by option from the menu at the top of the VMs. The following group-by options are available:
• Power State. Groups VMs by states such as On and Off. By default, the VMs are grouped by power state.
• Host. Groups VMs by the host on which they are running.
• VM Type. Groups VMs into guest VMs and Controller VMs.
• Enter a string in the search filter field to filter VMs that match the search string.
• If the group-by and filter operations result in a VM group, click the VM group to show the VMs in the group.
When you click a VM group, the visualizer displays ten VMs at first. To load ten more VMs, click Load
More VMs.
To group the VMs again or to clear the filter, click Back beside the group-by menu.
5. Specify which Nutanix hosts you want to display in the topology view:
Viewing VM Information
In the visualizer, you can view the settings and real-time statistics of a virtual NIC.
1. Use the group-by and filtering capabilities of the visualizer to display the VM you want, and then click the name
of the VM.
For information on how to group VMs or use a search filter, see Customizing the Topology View on
page 178.
The VM network information window appears.
For information about the statistics that are displayed for the virtual NICs, see VM NICs Tab on page 269.
3. Optionally, point to a location on a graph to view the value at that point in time.
Tip: You can return to the visualizer by pressing the back button in your browser.
1. In the topology view, hover on or click a host interface to view the settings of the interface.
For a host interface, statistics are shown in addition to interface settings. Point to a location on a graph to view the
value at that point in time.
For information about the statistics that are displayed for the host NICs, see Host NICs Tab on page 199.
These lines are highlighted because they are related to the host of the selected VM. A solid line leading from a
bond to a host interface indicates the uplink is connected and is an active connection. A dotted line indicates a
connected but unconfigured passive bond interface. A dotted line also indicates an uplink on a virtual switch with
the bond type configured as No Uplink Bond.
3. Click any network component in the diagram and view its network settings and statistics in the right pane.
» Click the Controller VM, and then, in the right pane, select a virtual NIC from the list to view settings and
statistics of that virtual interface. Optionally, point to a location on a graph to view the value at that point in
time.
For information about the statistics that are displayed for the virtual NICs, see VM NICs Tab on page 269.
» Click a virtual switch to view the configuration details of the virtual switch.
» Click a bridge to view the configuration details of the bridge.
A solid rectangle indicates an external bridge. A dotted rectangle indicates an internal bridge.
» Click a bond to view the settings of the bond.
Tip: You can return to the visualizer by pressing the back button in your browser.
Procedure
Note: When you open the network visualizer to view switch details in the Prism Element web console, it might
take up to 30 seconds for the switch port statistics to populate. This issue might also cause the same delay in the
Switch Details table view.
Tip: You can return to the visualizer by pressing the back button in your browser.
Procedure
2. Optionally, to show the value at any given point in time in a graph, point to the location on the graph.
• You can monitor hardware configurations and status across the cluster through the web console. For more
information, see Hardware Dashboard on page 185.
• You can expand the cluster through the web console. For more information, see Expanding a Cluster on
page 199.
Hardware Dashboard
The Hardware dashboard displays dynamically updated information about the hardware configuration in
a cluster. To view the Hardware dashboard, select Hardware from the drop-down list on the far left of the
main menu.
Menu Options
In addition to the main menu, the Hardware screen includes a menu bar with the following options:
• Click the Overview option on the left to display hardware information in a summary view. For more
information, see Hardware Overview View on page 186.
• Click the Diagram option to display a diagram of the cluster nodes from which you get detailed hardware
information by clicking on a component of interest. For more information, see Hardware Diagram View on
page 186.
• Click the Table option to display hardware information in a tabular form. The Table screen is further divided
into host, disk, and switch views; click the Host tab to view host information, the Disk tab to view disk
information, or the Switch tab to view switch information. For more information, see Hardware Table View
on page 192.
• (ESXi only) Add NVMe Devices: Click this option to attach an NVMe drive to the cluster.
Note: This option appears only if your hardware supports NVMe software serviceability. For more information,
see Completing NVMe Drive Replacement (Software Serviceability) in Hardware Replacement
Documentation.
• Expand Cluster: Click this option to add nodes to the cluster. For more information on how to expand a cluster,
see Expanding a Cluster on page 199.
• Repair Host Boot Device: Click this option to repair the boot drive of your hosts.
• Page selector: In the Table view, hosts and disks are listed 10 per page. When there are more than 10 items in the
list, left and right paging arrows appear on the right, along with the total count and the count for the current page.
• Export table content: In the Table view, you can export the table information to a file in either CSV or JSON
format by clicking the gear icon on the right and selecting either Export CSV or Export JSON from the
pull-down menu. (The browser must allow a dialog box for export to work.) Chrome, Internet Explorer, and
Firefox download the data into a file; Safari opens the data in the current window.
Note: For information about how the statistics are derived, see Understanding Displayed Statistics on page 55.
• Hardware Summary: Displays the number of hosts and blocks in the cluster. It also displays the Nutanix model number.
• Hosts: Displays the number of hosts in the cluster broken down by on, off, and suspended states. It also displays the number of discovered nodes that have not yet been added to the cluster.
• Disks: Displays the total number of disks in the cluster broken down by tier type (SSD-PCIe, SSD-SATA, DAS-SATA).
• CPU: Displays the total amount of CPU capacity (in GHz) in the cluster.
• Memory: Displays the total amount of memory (in GBs) in the cluster.
• Top Hosts by Disk IOPS: Displays I/O operations per host for the 10 most active hosts.
• Top Hosts by Disk IO Bandwidth: Displays I/O bandwidth used per host for the 10 most active hosts. The value is displayed in an appropriate metric (MBps, KBps, and so on) depending on traffic volume.
• Top Hosts by Memory Usage: Displays the percentage of memory capacity used per host for the 10 most active hosts.
• Top Hosts by CPU Usage: Displays the percentage of CPU capacity used per host for the 10 most active hosts.
• Hardware Critical Alerts: Displays the five most recent unresolved hardware-specific critical alert messages. Click a message to open the Alert screen at that message. You can also open the Alert screen by clicking the view all alerts button at the bottom of the list. For more information, see Alerts Dashboard in the Prism Element Alerts and Events Reference Guide.
• Hardware Warning Alerts: Displays the five most recent unresolved hardware-specific warning alert messages. Click a message to open the Alert screen at that message. You can also open the Alert screen by clicking the view all alerts button at the bottom of the list.
• Hardware Events: Displays the ten most recent hardware-specific event messages. Click a message to open the Event screen at that message. You can also open the Event screen by clicking the view all events button at the bottom of the list.
• The top section is an interactive diagram of the cluster blocks. Clicking on a disk or host (node) in the cluster
diagram displays information about that disk or host in the summary section.
• The bottom Summary section provides additional information. It includes a details column on the left and a set
of tabs on the right. The details column and tab content varies depending on what has been selected.
Note: For information about how the statistics are derived, see Understanding Displayed Statistics on page 55.
Host Details
Selecting a host in the diagram displays information about that host in the lower section of the screen.
• When a host is selected, Summary: host_name appears below the diagram, and action links appear to the right
of this line:
• Click the Turn On LED link to light up the host LED light on the chassis.
• Click the Turn Off LED link to turn off the host LED light on the chassis.
• Click the Remove Host link to remove this host from the cluster.
• Five tabs appear that display information about the selected host: Host Performance, Host Usage, Host
Alerts, Host Events, and Host NICs.
• Controller VM IP: Displays the IP address assigned to the Controller VM. (IP address)
• Block Model: Displays the block model number. (model series number)
• Storage Capacity: Displays the total amount of storage capacity on this host. xxx [GB|TB]
• Disks: Displays the number of disks in each storage tier in the host. Tier types vary depending on the Nutanix model type. DAS-SATA: (number), SSD-SATA: (number), SSD-PCIe: (number)
• Memory: Displays the total memory capacity for this host. xxx [MB|GB]
• CPU Model: Displays the CPU model name. (CPU model name)
• No. of CPU Cores: Displays the number of CPU cores on this host. (number of CPU cores)
• No. of VMs: Displays the number of VMs running on this host. (number)
• Oplog Disk %: Displays the percentage of the operations log (oplog) capacity currently being used. The oplog resides on the metadata disk. [0 - 100%]
• Oplog Disk Size: Displays the current size of the operations log. (The oplog maintains a record of write requests in the cluster.) A portion of the metadata disk is reserved for the oplog, and you can change the size through the nCLI. xxx [GB]
• Monitored: Displays whether the host is high availability (HA) protected. A true value means HA is active for this host. A false value means VMs on this host are not protected (will not be restarted on another host) if the host fails. Normally, this value should always be true. A false value is likely a sign of a problem situation that should be investigated. [true|false]
• Hypervisor: Displays the name and version number of the hypervisor running on this host. (name and version #)
• Datastores: Displays the names of any datastores. This field does not appear in Prism Central. (name)
Disk Details
Selecting a disk in the diagram displays information about that disk in the lower section of the screen.
• When a disk is selected, Summary: disk_name appears below the diagram, and action links appear to the right
of this line:
• Click the Turn On LED link to light up the LED light on the disk.
• Click the Turn Off LED link to turn off the LED light on the disk.
• Click the Remove Disk link to remove this disk from the cluster.
• Four tabs appear that display information about the selected storage container (see following sections for details
about each tab): Disk Usage, Disk Performance, Disk Alerts, Disk Events.
• Storage Tier: Displays the disk type (tier name). Nutanix models can contain disk tiers for PCIe solid state disks (SSD-PCIe), SATA solid state disks (SSD-SATA), and direct attach SATA hard disk drives (DAS-SATA) depending on the model type. [SSD-PCIe | SSD-SATA | DAS-SATA]
• Used (Physical): Displays the amount of used space on the drive. xxx [GB|TB]
• Capacity (Physical): Displays the total physical space on the drive. xxx [GB|TB]
• Hypervisor: Displays the IP address of the hypervisor controlling the disk. (IP address)
• Storage Pool: Displays the name of the storage pool in which the disk resides. (name)
• Status: Displays the operating status of the disk. Possible states include the following: Normal (disk is operating normally); Data migration initiated (data is being migrated to other disks); Marked for removal, data migration is in progress (data is being migrated in preparation to remove the disk); Detachable (disk is not being used and can be removed).
• Self Encryption Drive: Displays whether this is a self encrypting drive (SED). [present|not present]
• Password Protection Mode [SED only]: Displays whether data-at-rest encryption is enabled for the cluster. When it is enabled, a key is required to access (read or write) data on the drive. This field appears only when the drive is a SED. [protected|not protected]
• Performance Summary (no host or disk selected). Displays resource performance statistics (CPU, memory,
and disk) across the cluster.
• Host Performance (host selected). Displays resource performance statistics (CPU, memory, and disk) for the
selected host.
• Disk Performance (disk selected). Displays disk performance statistics for the selected disk.
The graphs are rolling time interval performance monitors that can vary from one to several hours depending on
activity moving from right to left. Placing the cursor anywhere on the horizontal axis displays the value at that time.
For more in depth analysis, you can add a monitor to the analysis page by clicking the blue link in the upper right
of the graph. For more information, see Analysis Dashboard on page 333. The Performance tab includes the
following graphs:
• [Cluster-wide] CPU Usage: Displays the percentage of CPU capacity currently being used (0 - 100%) either
across the cluster or for the selected host. (This graph does not appear when a disk is selected.)
• [Cluster-wide] Memory Usage: Displays the percentage of memory capacity currently being used (0 - 100%)
either across the cluster or for the selected host. (This graph does not appear when a disk is selected.)
• [Cluster-wide] IOPS: Displays I/O operations per second (IOPS) for the cluster, selected host, or selected disk.
• [Cluster-wide] I/O Bandwidth: Displays I/O bandwidth used per second (MBps or KBps) for physical disk
requests in the cluster, selected host, or selected disk.
• [Cluster-wide] I/O Latency: Displays the average I/O latency (in milliseconds) for physical disk requests in the
cluster, selected host, or selected disk.
• Usage Summary: Displays a rolling time interval usage monitor that can vary from one to several hours
depending on activity moving from right to left. Placing the cursor anywhere on the horizontal axis displays the
value at that time. For more in depth analysis, you can add the monitor to the analysis page by clicking the blue
link in the upper right of the graph. For more information, see Analysis Dashboard on page 333.
• Tier-wise Usage (host only): Displays a pie chart divided into the percentage of storage space used by each disk
tier on the host. Disk tiers can include DAS-SATA, SSD-SATA, and SSD-PCIe depending on the Nutanix model
type.
• Total Packets Received. Displays a monitor of the total packets received over time. Place the cursor anywhere
on the line to see the rate for that point in time. (This applies to all the monitors in this tab.)
• Total Packets Transmitted. Displays a monitor of the total packets that were transmitted.
• Dropped Packets Received. Displays a monitor of received packets that were dropped.
• Dropped Packets Transmitted. Displays a monitor of transmitted packets that were dropped.
• The top section is a table. Each row represents a single host or disk and includes basic information about that
host or disk. Click a column header to order the rows by that column value (alphabetically or numerically as
appropriate).
• The bottom Summary section provides additional information. It includes a details column on the left and a set
of tabs on the right. The details column and tab content varies depending on what has been selected.
Note: For information about how the statistics are derived, see Understanding Displayed Statistics on page 55.
Host Tab
Clicking the Host tab displays information about hosts in the cluster.
• The table at the top of the screen displays information about all the hosts, and the details column (lower left)
displays additional information when a host is selected in the table. The following table describes the fields in the
host table and detail column.
• When a host is selected, Summary: host_name appears below the table, and action links appear on the right of
this line:
• Click the Turn On LED link to light up the host LED light on the chassis.
• Click the Turn Off LED link to turn off the host LED light on the chassis.
• Click the Enter Maintenance Mode link to put the host in maintenance mode.
• Click the Repair Host Boot Device link to repair the boot drive of the selected host.
• Click the Remove Host link to remove this host from the cluster.
• Five tabs appear that display information about the selected host: Host Performance, Host Usage, Host
Alerts, Host Events, and Host NICs.
• CVM IP: Displays the IP address assigned to the Controller VM. (IP address)
• CPU Capacity: Displays the CPU capacity of this host. xxx [GHz]
• Memory Capacity: Displays the memory capacity of this host. xxx [MiB|GiB]
• Total Disk Usage: Displays the storage space used and total disk capacity of this host. xxx [MiB|GiB|TiB] of xxx [MiB|GiB|TiB]
• Disk IOPS: Displays I/O operations per second (IOPS) for this host. [0 - unlimited]
• Disk IO B/W: Displays I/O bandwidth used per second for this host. xxx [MBps|KBps]
• Disk IO Latency: Displays the average I/O latency (in milliseconds) for this host. xxx [ms]
• Controller VM IP: Displays the IP address assigned to the Controller VM. (IP address)
• Block Model: Displays the block model number. (model series number)
• Storage Capacity: Displays the total amount of storage capacity on this host. xxx [GB|TB]
• Disks: Displays the number of disks in each storage tier in the host. Tier types vary depending on the Nutanix model type. DAS-SATA: (number), SSD-SATA: (number), SSD-PCIe: (number)
• Memory: Displays the total memory capacity for this host. xxx [MB|GB]
• CPU Capacity: Displays the total CPU capacity for this host. xxx [GHz]
• CPU Model: Displays the CPU model name. (CPU model name)
• No. of CPU Cores: Displays the number of CPU cores on this host. (number of CPU cores)
• Oplog Disk %: Displays the percentage of the operations log (oplog) capacity currently being used. The oplog resides on every SSD. [0 - 100%]
• Oplog Disk Size: Displays the current size of the operations log. (The oplog maintains a record of write requests in the cluster.) A portion of every SSD is reserved for the oplog. xxx [GB]
• Monitored: Displays whether the host is high availability (HA) protected. A Yes value means HA is active for this host. A No value means VMs on this host are not protected (will not be restarted on another host) if the host fails. Normally, this value should always be Yes. A No value is likely a sign of a problem situation that should be investigated. [Yes|No]
• Hypervisor: Displays the name and version number of the hypervisor running on this host. (name and version #)
• Secure Boot Enabled: Displays whether the host is secure boot enabled. [Yes|No]
Disk Tab
Clicking the Disk tab displays information about disks in the cluster.
• The table at the top of the screen displays information about all the disks, and the details column (lower left)
displays additional information when a disk is selected in the table. The following table describes the fields in the
disk table and detail column.
• When a disk is selected, Summary: disk_name appears below the table, and action links appear on the right of
this line:
• Click the Turn On LED link to light up the LED light on the disk.
• Click the Turn Off LED link to turn off the LED light on the disk.
• Click the Remove Disk link to remove this disk from the cluster.
• Four tabs appear that display information about the selected storage container (see following sections for details
about each tab): Disk Usage, Disk Performance, Disk Alerts, Disk Events.
• Tier: Displays the disk type (tier name). Nutanix models can contain disk tiers for PCIe solid state disks (SSD-PCIe), SATA solid state disks (SSD-SATA), and direct attach SATA hard disk drives (DAS-SATA) depending on the model type. [SSD-PCIe | SSD-SATA | DAS-SATA]
• Status: Displays the operating state of the disk. online, offline
• Disk Usage: Displays the percentage of disk space used and total capacity of this disk. [0 - 100%] of xxx [GB|TB]
• Disk IOPS: Displays I/O operations per second (IOPS) for this disk. [0 - unlimited]
• Disk IO B/W: Displays I/O bandwidth used per second for this disk. xxx [MBps|KBps]
• Disk Avg IO Latency: Displays the average I/O latency for this disk. xxx [ms]
• Storage Tier: Displays the disk type (tier name). Nutanix models can contain disk tiers for PCIe solid state disks (SSD-PCIe), SATA solid state disks (SSD-SATA), and direct attach SATA hard disk drives (DAS-SATA) depending on the model type. [SSD-PCIe | SSD-SATA | DAS-SATA]
• Used (Physical): Displays the amount of used space on the drive. xxx [GB|TB]
• Capacity (Logical): Displays the total physical space on the drive. xxx [GB|TB]
• Hypervisor: Displays the IP address of the hypervisor controlling the disk. (IP address)
• Storage Pool: Displays the name of the storage pool in which the disk resides. (name)
• Status: Displays the operating status of the disk. Possible states include the following: Normal (disk is operating normally); Data migration initiated (data is being migrated to other disks); Marked for removal, data migration is in progress (data is being migrated in preparation to remove the disk); Detachable (disk is not being used and can be removed).
• Password Protection Mode [SED only]: Displays whether data-at-rest encryption is enabled for the cluster. When it is enabled, a key is required to access (read or write) data on the drive. This field appears only when the drive is a SED. [protected|not protected]
Switch Tab
Clicking the Switch tab displays information about the physical switches used by the host NICs to support traffic
through the virtual NICs. The table at the top of the screen displays information about the switches, and the lower
portion of the screen displays additional information when a switch is selected in the table. You can configure any
number of switches, but only the switches that are actually being used for virtual NIC traffic appear in this table. For
more information on how to configure a switch, see Configuring Network Switch Information on page 170. The
following table describes the fields in the switch table, in the detail column (lower left), and in the Physical Switch
Interfaces tab (lower right).
• Vendor Name: Displays the name of the switch vendor. (company name)
• Contact Info: Displays the switch vendor contact information. (company contact)
• Vendor Name: Displays the name of the switch vendor. (company name)
• Management Addresses: Displays the IP address(es) for the switch management ports. (IP address)
• MTU (in bytes): Displays the size in bytes of the largest protocol data unit (maximum transmission unit) that the layer can pass onwards. (number)
• Discard Rx Pkts: Displays the number of received packets that were discarded. (number)
When you click a physical switch interface entry, usage graphs about that interface appear below the table:
• Unicast Packets Received: Displays a monitor of the received unicast packets over time. Place the cursor
anywhere on the line to see the value for that point in time. (This applies to all the monitors in this tab.)
• Unicast Packets Transmitted: Displays a monitor of the transmitted unicast packets.
• Error Packets Received: Displays a monitor of error packets received.
• Dropped Packets Received: Displays a monitor of received packets that were dropped.
• Dropped Packets Transmitted: Displays a monitor of transmitted packets that were dropped.
• The Hardware Summary column (on the left) includes the following fields:
• Performance Summary (no host or disk selected). Displays resource performance statistics (CPU, memory,
and disk) across the cluster.
• Host Performance (host selected). Displays resource performance statistics (CPU, memory, and disk) for the
selected host.
• Disk Performance (disk selected). Displays disk performance statistics for the selected disk.
The graphs are rolling time interval performance monitors that can vary from one to several hours depending on
activity moving from right to left. Placing the cursor anywhere on the horizontal axis displays the value at that time.
For more in depth analysis, you can add a monitor to the analysis page by clicking the blue link in the upper right
of the graph. For more information, see Analysis Dashboard on page 333. The Performance tab includes the
following graphs:
• [Cluster-wide] CPU Usage: Displays the percentage of CPU capacity currently being used (0 - 100%) either
across the cluster or for the selected host. (This graph does not appear when a disk is selected.)
• [Cluster-wide] Memory Usage: Displays the percentage of memory capacity currently being used (0 - 100%)
either across the cluster or for the selected host. (This graph does not appear when a disk is selected.)
• [Cluster-wide] IOPS: Displays I/O operations per second (IOPS) for the cluster, selected host, or selected disk.
• [Cluster-wide] I/O Bandwidth: Displays I/O bandwidth used per second (MBps or KBps) for physical disk
requests in the cluster, selected host, or selected disk.
• [Cluster-wide] I/O Latency: Displays the average I/O latency (in milliseconds) for physical disk requests in the
cluster, selected host, or selected disk.
• Host Usage (host selected). Displays usage statistics for the selected host.
• Disk Usage (disk selected). Displays usage statistics for the selected disk.
The Usage tab displays one or both of the following graphs:
• Usage Summary: Displays a rolling time interval usage monitor that can vary from one to several hours
depending on activity moving from right to left. Placing the cursor anywhere on the horizontal axis displays the
value at that time. For more in depth analysis, you can add the monitor to the analysis page by clicking the blue
link in the upper right of the graph. For more information, see Analysis Dashboard on page 333.
• Tier-wise Usage (host only): Displays a pie chart divided into the percentage of storage space used by each disk
tier on the host. Disk tiers can include DAS-SATA, SSD-SATA, and SSD-PCIe depending on the Nutanix model
type.
• Total Packets Received. Displays a monitor of the total packets received by the host NIC (in KBs or MBs)
over time. Place the cursor anywhere on the line to see the value for that point in time. (This applies to all the
monitors in this tab.)
• Total Packets Transmitted. Displays a monitor of the total packets transmitted by the host NIC (in KBs or
MBs) .
• Dropped Packets Received. Displays a monitor of received packets that were dropped.
• Dropped Packets Transmitted. Displays a monitor of transmitted packets that were dropped.
• Error Packets Received. Displays a monitor for error packets received.
Expanding a Cluster
A cluster is a collection of nodes. You can add new nodes to a cluster at any time after physically installing
and connecting them to the network on the same subnet as the cluster. The cluster expansion process
compares the AOS version on the existing and new nodes and performs any upgrades necessary for all
nodes to have the same AOS version.
• Review the relevant sections in Prerequisites and Requirements on page 204 before attempting to add a
node to the cluster. The process for adding a node varies depending on several factors. This section covers specific
considerations based on your AOS, hypervisor, encryption, and hardware configuration.
• Check the Health Dashboard. If any health checks are failing, resolve them before adding any nodes. As a final
check, run NCC to ensure that the cluster is healthy.
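For example, you can run the full NCC health check suite from any Controller VM; this runs all checks, which covers the pre-expansion verification mentioned above:
nutanix@cvm$ ncc health_checks run_all
Review the summary at the end of the output and resolve any FAIL or WARN results before expanding the cluster.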
Note: Steps 9-12 are for special cases (rack fault tolerance, data-at-rest encryption, and Hyper-V), and they are not in
proper sequence. Refer to these steps as needed during the procedure for any that apply.
Procedure
» Click the gear icon in the main menu and then select Expand Cluster in the Settings page.
» Go to the Hardware dashboard and click the Expand Cluster button.
3. In the Expand Cluster window, select (click the radio button for) the desired option and then click the Next
button:
» Select Expand Cluster to begin the expansion immediately (after you complete the remaining
configuration steps).
» Select Prepare Now and Expand Later to prepare the nodes now but delay adding them to the cluster
until a later time. Preparing the nodes includes imaging the hypervisor (if needed), upgrading the AOS
version (if needed), and preparing new node network configuration (if needed).
The network is searched for Nutanix nodes and then the Select Host tab appears with a graphical list of the
discovered blocks and nodes. Discovered blocks are blocks with one or more unassigned factory-prepared
nodes (hypervisor and Controller VM installed) residing on the same subnet as the cluster. Discovery requires
that IPv6 multicast packets are allowed through the physical switch. If you see a failure message, review the
requirements in Prerequisites and Requirements on page 204.
Note: For an ESXi or Hyper-V cluster, manual host discovery requires that the target node has the same
hypervisor type and version. In addition, the AOS version must be lower than or equal to that of the cluster.
Note: For Controller VM, hypervisor, and IPMI, both IPv6 and IPv4 address fields appear to account for
either protocol. When entering addresses, be sure to enter each IP address in the correct field for the relevant
protocol. Usually you can ignore the IPv6 field and only update the IPv4 address if needed.
7. In the Choose Node Type tab, select the node type (HCI Node or Storage-only) for each node from the
pull-down list and then click the Next button.
Selecting HCI Node means that the node will be added as a standard node in the cluster. Selecting Storage-
only means that the node will be added as a storage-only node. Storage-only nodes have specific requirements.
Review Storage-Only Nodes on page 230 before adding a storage-only node.
8. In the Host Networking tab, configure the uplinks for each management bridge or vSwitch to be created or
updated for the nodes.
The Host Networking tab displays a list of the target nodes where you can specify the status (active
or standby) for each node's uplinks to the management bridge or vSwitch. The uplink list depends on the
hypervisor of the node to be added, while the listed bridge or vSwitch is for the base cluster. The standard form
for AHV uplinks is ethx (eth0, eth1, eth2, and so on); the standard form for ESXi uplinks is vmnicx (vmnic0,
vmnic1, and so on.) For example, if the hypervisor is ESXi on the node but AHV on a three-node base cluster,
each entry displays the AHV vSwitch name (br0 or br1), ESXi uplink name (vmnic0, vmnic1, or vmnic2),
9. In the Configure Host tab, specify the hypervisor image and allowlist for nodes that require imaging.
» If the detected hypervisor and AOS version are the same as the cluster, no imaging is required and a message
acknowledging that fact appears. Skip to the next step.
» If a hypervisor image is listed in the Hypervisor: <type> field and it is the desired one, skip to the next
step. If you uploaded a hypervisor image when adding nodes previously (and it is still compatible for
imaging the new nodes), that image appears here. You can use that image or upload a different one.
» If no hypervisor image is listed or the listed one is not the desired one, click the Choose File (no listed file)
or Change File (file listed already) button. In the search window, find the image file on your workstation,
and then click the Open button (in the search window) to upload that image file.
You must provide an ISO file for imaging ESXi or Hyper-V. You can get the required AHV image from
the Downloads > AHV page of the Nutanix Support portal. For information on how to access the portal,
see Accessing the Nutanix Support Portal on page 394. From the AOS 6.8 release onwards, AOS
does not include the AHV installation bundle. The AHV bundle on the portal is named AHV-DVD-
x86_64-elx.nutanix.AHV-version.iso.
11. [data-at-rest encryption only] In the Encrypt Host tab, do the following.
This tab appears only when data-at-rest encryption is enabled with an external KMS. For more information, see
Data-At-Rest Encryption. You can apply encryption licenses after the new nodes are added to the cluster.
a. In the Certificate Signing Request Information field, click the Generate and Download link for
each node to be added.
Clicking the link generates a certificate signing request (CSR) named csr_for_discovered_node for the node,
which you download to your workstation.
b. Get the CSRs signed by a certificate authority (CA).
c. Click the select files link for each key management server and upload the signed certificates for the nodes
to be added.
d. Click the Next button.
12. [Hyper-V only] Specify the credentials to join the new nodes to Active Directory and to a failover cluster.
a. Specify the name of the Hyper-V failover cluster in the Failover Cluster Name text box.
b. Specify the user name and password of the domain account that has the privileges to create a new or modify
an existing computer account in the Active Directory domain. The user name must be in the DOMAIN
\USERNAME format.
13. When all the fields are correct, click the Run Checks button to verify that the nodes are ready.
This runs a set of precheck tests on the nodes. Progress messages appear in the Expand Cluster window. You
can also monitor progress from the Tasks dashboard. For more information, see View Task Status on page 71.
» Click the Expand Cluster button to begin the cluster expansion process. (This button appears if you
selected Expand Cluster in step 3.)
» Click the Prepare Node(s) button to begin preparing the nodes. (This button appears if you selected
Prepare Now and Expand Later in step 3.)
The expand cluster or node preparation process begins. As with the prechecks, progress messages appear in the
Expand Cluster window, and you can also monitor progress from the Tasks dashboard. Nodes are processed
(upgraded or reimaged as needed) and added in parallel. Adding nodes can take some time. Imaging a node
typically takes a half hour or more depending on the hypervisor.
Note: When the cluster has only one storage pool, you can ignore this step because the new storage is added to
that storage pool automatically.
a. Select Storage from the pull-down main menu (upper left of screen) and then select the Table and
Storage Pool tabs.
b. Select the target storage pool (upper display) and then click the update link.
c. In the Update Storage Pool window, check the Use unallocated capacity box in the Capacity line and
then click the Save button.
This step adds all the unallocated capacity to the selected storage pool.
d. Go to the Tasks dashboard and monitor progress until the node addition completes successfully.
Cluster expansion is not complete until the node is added to the metadata ring.
What to do next
One or more of the following items might apply to the added nodes. Also, see Prerequisites and Requirements on
page 204.
• Nondefault timezones are not updated on added nodes and must be reconfigured manually.
• If the Controller VM password for the cluster was changed from the default, the password on any new nodes gets
updated automatically to match the cluster password.
• To check Controller VM memory compatibility after cluster expansion, run the cvm_same_mem_level_check
and cvm_memory_check NCC checks. For more information, see Running Checks by Using Prism Element
Web Console on page 259.
• If the cvm_same_mem_level_check result is FAIL, the memory size in the added nodes is not the same as
the other Controller VMs in the cluster. If the memory is less than the common size, increase the memory to
match.
• If the cvm_memory_check result is FAIL, the Controller VM memory is less than the minimum required for
the workload. Increase the memory to (at least) the minimum size.
For information about increasing the memory of the Controller VM, see Increasing the Controller VM Memory
Size on page 108. For information about the Controller VM memory size recommendations, see CVM Memory
Configuration on page 107.
AOS Considerations
The following apply to any cluster:
• Ensure that the total number of nodes per cluster does not exceed the Cluster Maximums defined in Maximum
System Values on page 36. Note that the maximum number of nodes per cluster differs by hypervisor type
and by whether the cluster is a pure hypervisor cluster (a cluster with only one type of hypervisor) or a mixed
hypervisor cluster (a cluster with more than one type of hypervisor).
• The expand cluster process does not support compute-only node preparation.
• Enable IPv6 in your network and retry the expand cluster operation.
• If enabling IPv6 is not an option, the expand cluster procedure allows you to enter the IP addresses of nodes
to add manually and run discovery using IPv4. This requires that you have the IP addresses before starting the
expand cluster operation.
• If the Controller VM memory on a new node is less than the current nodes in the cluster, the expand cluster
process increases the memory on the new node to the same base value as the current nodes. The new Controller
VM is upgraded to a maximum of 32 GB.
The Controller VM is upgraded to a maximum of 28 GB for ESXi nodes with 64 GB or less of total physical
memory. With total physical memory greater than 64 GB, the existing Controller VM memory is increased by 4
GB.
• A new node is reimaged automatically before being added under certain conditions. The following describes those conditions:
• Same AOS and hypervisor versions: The node is added to the cluster without reimaging it.
• Same hypervisor version but different AOS version: The node is automatically reimaged before it is added. However, if the AOS version on the node is higher than the version on the cluster, you can upgrade the cluster to the higher version. If you do not upgrade the base cluster to match the node AOS version, the node is reimaged automatically to match the lower AOS version of the cluster. For more information about how to upgrade your cluster, see the Life Cycle Manager Guide.
• Same AOS version but different hypervisor version: The node is automatically reimaged before it is added.
Note: If you are expanding a cluster (using the expand now option) on which the network is segmented only by traffic type (management and backplane), see Network Segmentation During Cluster Expansion in the Security Guide.
AHV Considerations
The following apply to clusters running AHV:
• If the Controller VMs in the cluster reside in a VLAN configured network, the discovery process still finds any
factory-prepared nodes regardless of their current VLAN status (configured or not configured).
• You cannot reimage a node running discovery OS when Link Aggregation Control Protocol (LACP) is enabled on the cluster. Discovery OS is pre-installed software on Nutanix nodes that allows the nodes to be discovered. Prepare the node with LACP by using a Foundation VM.
• If you plan to use an NVIDIA host driver on the host that you want to add to an existing cluster with GPU nodes,
do not install the driver before adding the host to the cluster. Instead, first add the new hosts to the cluster and then
use the install_host_package script to install the driver on the host. For more information on using the script, see
Installing the NVIDIA GRID Driver.
ESXi Considerations
The following apply to clusters running ESXi:
• Before adding a host running ESXi 7.0U2 and later versions, with Trusted Platform Module (TPM) 2.0 enabled,
to a cluster, Nutanix recommends that you back up the recovery key created when encrypting the host with TPM.
For information on how to generate and backup the recovery key, see KB 81661 in the VMware documentation.
Ensure that you use this recovery key to restore the host configuration encrypted by TPM 2.0 if it fails to start
after adding the host to your cluster. For information on how to restore an encrypted host, see KB 81446 in the
VMware documentation. If you don't have the recovery key, and if the host fails to start, contact Nutanix Support.
• If the ESXi root user password was changed from the default, the expand cluster operation might fail. In this case,
reset the ESXi root user password to the default, and then retry the expand cluster procedure. For default cluster
credentials, see KB 1661.
• While expanding a Nutanix cluster running NSX enabled ESXi hosts, add the newly imaged node to the Nutanix
cluster where the host and CVM management network are configured with a standard vSwitch, and then add the
node in NSX manager. Otherwise, the cluster expansion operation fails with the following error. For information
on how to expand a cluster, see Expanding a Cluster on page 199.
Failed to get VLAN tag of node <MAC Address of the node>
• The expand cluster operation supports mixed node (ESXi + storage-only) clusters only if network segmentation
is not enabled. If network segmentation is enabled for the mixed cluster, you cannot use the expand cluster
operation. However, you can use the expand cluster operation for a mixed cluster that has backplane segmentation
enabled and the storage-only nodes are hosted with AOS 6.1 or newer release.
• Network configuration has the following restrictions and requirements:
• You cannot configure the network when either a target node or the base cluster has LACP enabled. Prepare
LACP nodes offline using a Foundation VM.
• You cannot migrate management from vSwitch0 to another standard vSwitch.
• Management interfaces must either be on a VSS or a DVS; a mixed setup is not supported.
• If on VSS, the Controller VM management interface (eth0) and the hypervisor management interface must be
deployed on vSwitch0 and connected to port group VM Network and Management Network, respectively.
• If on DVS, all Controller VM management interfaces and all host management interfaces must be connected
to the same distributed virtual switch and same port group. (However, the Controller VM interfaces can be
connected to a different DVS port group than the host management interfaces.)
• Segmented network interfaces (backplane, volume, DR) must be on the same vSwitch type (VSS or DVS) as the management interfaces.
• If network segmentation is enabled on the base cluster and the backplane is deployed on a separate vSwitch
than management, you can create vSwitches (VSS or DVS) and prepare the required Controller VM interfaces.
(Manual switch configuration is required for expand now but is integrated into the expand later workflow.)
• If the base cluster is on DVS and you are doing an expand later, you can migrate the target nodes from the
default vSwitch0 to that DVS.
• The common Nutanix datastores are mounted on the new nodes by default after cluster expansion.
• The target storage containers must be set to mount on the new hosts. You can check the mount status from the
Storage dashboard. Click the Storage Container tab, select the target storage container, click the Update
button, and verify that Mount on all ESXi Hosts (or the new hosts are checked in Mount/Unmount on
the following ESXi Hosts) is selected in the NFS DATASTORE field.
• If an added node has an older processor class than the existing nodes in the cluster, cluster downtime is
required to enable EVC (enhanced vMotion compatibility) with the lower feature set as the baseline. For an
indication of the processor class of a node, see the Block Serial field in the Hardware dashboard. For more
information, see Hardware Diagram View on page 186 or Hardware Table View on page 192. For
more information on enabling EVC, see vSphere EVC Settings in the vSphere Administration Guide for
Acropolis.
Caution: If you mix processor classes without enabling EVC, vMotion/live migration of VMs is not supported
between processor classes. If you add the host with the newer processor class to vCenter Server before enabling
EVC, cluster downtime is required to enable EVC later because all VMs (including the Controller VM) must be
shut down.
• Add the new nodes to the appropriate vCenter Server cluster. If an added node has a newer processor class (for
example, Haswell) than the existing nodes in the cluster (Ivy Bridge or Sandy Bridge), enable EVC with the
lower feature set as the baseline before adding the node to vCenter.
• If you are adding multiple nodes to an existing EVC-enabled vCenter cluster, which requires powering off the
Controller VM for each node to complete the addition, add just one node at a time and wait for data resiliency
to return to OK before adding the next node to vCenter.
Caution: Adding multiple nodes to vCenter simultaneously can cause a cluster outage when all the Controller
VMs are powered off at the same time.
• If you are adding a node to a cluster where HA is enabled with APD and VMCP is enabled, you must enable
APD and APD timeout on the new host.
• If you are adding new nodes to a vCenter EVC-configured cluster, ensure that the following requirements are met.
Hyper-V Considerations
The following apply to clusters running Hyper-V:
• Configure data-at-rest encryption for the new nodes. The new nodes must have self-encrypting disks or software-
only encryption.
• If the cluster uses an external key manager server (KMS), either with self-encrypting drives or software only, then
reimaging cannot be done through the expand cluster workflow. In this case, any nodes to add must already have
the correct hypervisor and AOS version. Use Foundation to image the nodes (if needed) before attempting to add
them to the cluster.
• If an encrypted cluster uses an external KMS, the new node might not be able to resolve the DNS name of the
KMS. Therefore, you must manually update the resolv.conf file on the new node to enable communication
between the node and the KMS.
• Adding a node to a cluster with self-encrypting drives (SED) where the added node is running a different
hypervisor is not supported. In this case, image the node to the same hypervisor by using Foundation before
adding it to the SED cluster. For more information, see KB 4098.
• When expanding a cluster on AWS, the nodes are not added to the Cassandra ring (they wait in a queue) until there are enough nodes to extend the ring. In addition, new nodes are added to the Cassandra ring only when they do not break domain awareness. (Other services remain unaffected.)
Hardware Considerations
Note the following when it applies to the nodes you are adding:
Note: Do not shut down more than one Controller VM at the same time.
• If you expand a cluster by adding a node with older generation hardware to a cluster that was initially created with
later generation hardware, power cycle (do not reboot) any guest VMs before migrating them to the added older
generation node or before upgrading the cluster.
Guest VMs are migrated during hypervisor and firmware upgrades (but not AOS upgrades).
For example, if you are adding a node with G4 Haswell CPUs to a cluster that also has newer G5 nodes with
Broadwell CPUs, you must power cycle guest VMs hosted on the G5 nodes before you can migrate the VMs to
the node with G4 CPUs. Power cycling the guest VMs enables them to discover a CPU set compatible with older
G4 processors.
In rare cases, certain CPU features might be deprecated in the new generation of CPUs. For example, Intel
introduced MPX in Skylake class of CPUs and deprecated it with Ice Lake. In such cases, introduction of Ice Lake
(newer) CPUs to an all-Skylake cluster can cause problems with existing VMs that are running with MPX. Such
VMs must be power cycled.
Power cycle guest VMs from the Prism Element web console VM dashboard. Do not perform a Guest Reboot; a
VM power cycle is required in this case.
• If you physically add a node to a block (for example, a single node shipped from Nutanix is placed into an empty
slot in an existing chassis), log on to the Controller VM for that node, and update the following parameters in the /
etc/nutanix/factory_config.json file:
• rackable_unit_serial: Set it to the same value as the other Controller VMs in the same block.
• node_position: Set it to the physical location of the node in the block (A, B, C, D).
After changing the configuration file, restart Genesis with the genesis restart command.
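The following sketch illustrates the edit described above; the serial number and node position shown are placeholder values, and the file contains additional fields that must be left unchanged:
nutanix@cvm$ sudo vi /etc/nutanix/factory_config.json
# Set these two fields (example values only):
#   "rackable_unit_serial": "16SM12345678"   <- same value as the other CVMs in the block
#   "node_position": "B"                     <- physical slot of the node (A, B, C, or D)
nutanix@cvm$ genesis restart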
Important:
If the check result (output of the acli atlas_config.get command) shows the
vpc_east_west_traffic_config section with dvs_uuid displaying the UUID of a non-default
virtual switch, follow the procedure in this section.
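As an illustration, the check can be run from any Controller VM as shown below; the abbreviated output and the UUID value are hypothetical and only indicate where the dvs_uuid field appears:
nutanix@cvm$ acli atlas_config.get
# ...
# vpc_east_west_traffic_config {
#   dvs_uuid: "c0ffee12-3456-789a-bcde-f01234567890"
# }
# If this UUID belongs to a non-default virtual switch (not vs0), follow the procedure in this section.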
Cluster Network
This procedure assumes the following cluster network configuration:
• vs0 is the default virtual switch for Controller VM and AHV communications.
• All bridges are configured with Active-Active with LACP.
• To prepare the new clusters for cluster expansion, see Preparing the New Nodes for Addition to Existing AHV
Cluster on page 211.
• To verify that the new node is adequately prepared for expansion of the existing cluster, see Verifying the New
Node Setup on page 211.
• To expand the cluster with the new node you prepared, see Expanding a Cluster with Flow Virtual Networking
Enabled on page 213.
Procedure
• (If Secure Boot is used in the cluster) Ensure that the node boots with UEFI and not in Legacy BIOS. Ensure that Foundation does not change the node's boot mode back to Legacy BIOS. Change the Foundation settings if required.
• Ensure that the firmware on the node is consistent with the firmware on the existing nodes in the cluster. If
feasible, update the firmware on the node to match the firmware version on the nodes in the cluster.
• Use the appropriate Foundation version to image the node with the same AOS and AHV versions as those on the
cluster to which you are adding the node. This is essential to prevent the cluster expansion process from running
another Foundation imaging process and wiping out all prepared elements.
After imaging each node with Foundation software, perform these steps to prepare all the new nodes you are adding
to the existing cluster:
Procedure
1. (If Secure Boot is used in the cluster) Enable Secure Boot for the node.
2. Check all the physical port to Ethernet port mappings to ensure that the physical connectivity matches the logical
configuration. Check if these mappings are consistent with the mappings in the existing nodes. This is to ensure
that AHV enumerates the Ethernet port numbering and, hence, the Ethernet to bridge (virtual switch) mapping
correctly.
3. Configure the default virtual switch vs0 (and bridge br0) and the uplinks to be consistent with the existing nodes. Any non-default virtual switches, such as vs1 and vs2 (with bridges br1 and br2 respectively), that are created after the cluster expansion need not be pre-created on the new nodes. Pre-create and configure on the new node only those non-default virtual switches that already exist on the nodes in the cluster, as shown in the sketch below.
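The following is a minimal sketch of that preparation, run on the Controller VM of the new node; the bridge name br1 and the interface set 10g are assumptions, so substitute the values that match your existing nodes:
# Review the current bridge and uplink layout on the new node.
nutanix@cvm$ manage_ovs show_uplinks
# Pre-create a non-default bridge that already exists on the other nodes (illustrative values).
nutanix@cvm$ manage_ovs --bridge_name br1 create_single_bridge
nutanix@cvm$ manage_ovs --bridge_name br1 --interfaces 10g update_uplinks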
What to do next
Verify the new node setup to ensure that it is adequately prepared for cluster expansion.
1. Ensure that the new node is physically installed at the required location. To verify that the new node is physically
installed correctly, do the following.
a. Ensure that redundant power cables are connected and power failover is tested after powering on the node.
b. Ensure that the node boots in AHV.
c. Ensure that the physical network connections are made in accordance with the physical connections on the
existing nodes in the cluster.
Ensure that:
• Each virtual switch on the new node has dual redundant uplinks.
• The NICs have the same speed across all the nodes in the cluster.
• All the physical connections have the link lights on.
2. Log on to the Controller VM on the new node and run the manage_ovs show_interfaces command to check whether the necessary interfaces are up and report the expected speed.
3. Perform network reachability tests to ensure that AHV, Controller VM and IPMI (management connectivity) are
reachable over the network.
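A minimal reachability sketch is shown below; the addresses are placeholders for the AHV host, Controller VM, and IPMI IP addresses of the new node:
nutanix@cvm$ ping -c 4 10.0.0.31   # AHV host of the new node
nutanix@cvm$ ping -c 4 10.0.0.32   # Controller VM of the new node
nutanix@cvm$ ping -c 4 10.0.0.33   # IPMI interface of the new node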
4. Perform NIC failover tests for the bridge (br0) on the default virtual switch vs0. Perform these tests from a local
console that has access to the new AHV host.
a. Log on to the AHV host using a shell session and run the following command:
AHV# mokutil --sb-state
Verify that the output of the command indicates that Secure Boot is enabled.
b. Check the LACP status on the AHV using the following command:
AHV# ovs-appctl lacp/show
The status of each uplink bond must be displayed as Negotiated and not as Active.
c. Log on to the Controller VM on the new node and run the following command to display the uplink status.
nutanix@cvm$ manage_ovs show_uplinks
Check the bridge, Ethernet, and uplink bond configurations.
Ensure that all issues found at this stage are resolved before proceeding to cluster expansion.
Note: The Flow Virtual Networking stack (Network Controller with brAtlas) does not exist on the new node. Any VM migrations to the new node fail.
Procedure
1. Disable Acropolis Dynamic Scheduling (ADS). This step ensures that no VMs are migrated to the new node
during the cluster expansion.
Log on to the CVM and run the following commands to check the current status of ADS:
nutanix@cvm$ acli
acli> ads.get
Check if ADS status is enabled or disabled.
Run the following command to disable ADS:
acli> ads.update enable=false
Check the status of ADS by running the ads.get command to ensure that ADS is disabled.
2. Log on to the Prism Element web console of the cluster. You need a minimum of Cluster Admin privileges to
run the cluster expansion workflow in Prism Element web console.
» Click the gear icon in the main menu and then select Expand Cluster in the Settings page.
» Go to the Hardware dashboard and click the Expand Cluster button.
» If the list includes all specified prepared nodes, go to the next step.
» If IPv6 is not supported for the cluster or the list does not include all desired nodes, go to the bottom of the
list, click the Discover Hosts Manually button, and do the following to retry discovery using IPv4:
• In the Manual Host Discovery window, click the +Add Host link. A line appears in the table below the
link.
• Enter the Controller VM IP address on that host (or the host IP address for compute-only nodes) and then
click Save.
• Add more hosts as desired.
• When the list of hosts is complete, click the Discover and Add Hosts button. Manual discovery uses
IPv4 to find and add the hosts to the list.
6. In the Select Host tab, select the check box for each node that you need to add to the cluster.
Check the Host Name, Controller VM [IPv4|IPv6], Hypervisor [IPv4|IPv6], and IPMI IP [IPv4|IPv6]
fields. Ensure that the details are displayed correctly. Resolve any discrepancies before you proceed.
7. In the Choose Node Type tab, select the node type as HCI Node.
9. In the Configure Host tab, if the detected AHV and AOS versions on the selected new nodes are the same as the versions on the existing nodes in the cluster, no imaging is required and a message acknowledging that fact appears. Skip to the next step.
Because the nodes were prepared with the correct AHV and AOS versions, this message is expected, and you can skip to the next step.
10. When all the fields are correct, click the Run Checks button to verify that the nodes are ready.
This runs a set of precheck tests on the nodes. Progress messages appear in the Expand Cluster window. You
can also monitor progress from the Tasks dashboard. For more information, see View Task Status on page 71.
11. Click the Expand Cluster button to begin the cluster expansion process.
After the first round of tasks is complete, a second round of tasks to add the nodes into the metadata ring
automatically starts.
Check whether the nodes are added to the metadata ring by logging on to any Controller VM in the cluster and running the following command:
nutanix@cvm$ nodetool -h 0 ring
The output lists all the Controller VMs in the cluster and displays their status as Up.
a. Update the uplinks of the virtual switches for the new nodes.
For information on how to update the uplink configuration, see Creating or Updating a Virtual Switch on
page 155.
b. Retrieve the UUID of the new node.
acli> host.list
Note the UUID of the new host in the aCLI command output.
c. Run the following command at the acli> prompt to configure the IP addresses for new nodes.
acli> net.update_virtual_switch virtual-switch-name host_ip_addr_config='{<new-host-uuid1>:<new-host_ip_address/prefix>}'
Replace
• <new-host-uuid1> with the UUID of the new node noted in the previous step.
• <new-host_ip_address/prefix> with the IP address and network prefix of the new node.
To add the IP addresses of multiple new nodes to the host IP address configuration, use the following
command format:
acli> net.update_virtual_switch vs1 host_ip_addr_config='{<new-host-uuid1>:<host_ip_address1/prefix>;<new-host-uuid2>:<host_ip_address2/prefix>;<new-host-uuid3>:<host_ip_address3/prefix>}'
d. If the IP address of the new node does not belong to the default VLAN and the new node generates VPC
traffic, tag the VLAN of the new node to the bridge of the non-default virtual switch. Log on to the new host
with root privileges and run the following command.
root@ahv# ovs-vsctl set port <brX> tag=<VLAN-ID>
Replace <brX> with the non-default bridge such as br1 or br2.
Replace <VLAN-ID> with the ID of the non-default VLAN that the new node IP address belongs to.
For more information, see Assigning an AHV Host to a VLAN.
For information on how to update the virtual switches, see Modifying Switch Information.
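To confirm that the tag was applied, you can query the port record on the host; br1 is an assumed bridge name in this sketch:
root@ahv# ovs-vsctl get port br1 tag
# The command returns the VLAN ID that you configured.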
14. Reconfigure Flow Virtual Networking (Network Controller) on the new nodes.
a. Ensure that the new node is placed in maintenance mode (see the previous step).
b. Verify that the Connected status of the new node is True.
Run the aCLI host.list command and check the Connected status of the new node in the output table.
acli> host.list
c. Log on to the new host with root privileges and run the following command.
root@ahv# systemctl stop ahv-host-agent
After a few minutes, verify that the Connected status of the new node is False using the aCLI host.list
command.
d. Log on to the new host with root privileges and run the following command.
root@ahv# systemctl start ahv-host-agent
After a few minutes, verify that the Connected status of the new node is True using the aCLI host.list
command.
Run this procedure on each new node.
• Default Virtual Switch Error: This alert is not resolved automatically. Resolve it manually if you have
verified that the virtual switch is healthy.
• Inconsistent Virtual Switch State Detected: This alert is resolved in the previous steps.
• Failed to configure host for Atlas networking: This alert is resolved in the previous steps.
• VPC east-west traffic configuration error: This alert is resolved in the previous steps.
Resolve all alerts. If you do not see any red icons for the virtual switches, go to the next step.
16. After the cluster health is validated as good, re-enable ADS using aCLI:
acli> ads.update enable=true
What to do next
Run network connectivity validations for guest VMs in the cluster, using test VMs. Create a test VM with
appropriate IP address in the guest VM network and run ping tests from the test VM to the default gateway
of the guest VM network and the IP addresses of the other guest VMs in the network. You need one test
VM per guest VM network for this test.
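For example, from a test VM in a guest VM network you might run checks such as the following; the addresses are placeholders for the default gateway and another guest VM in the same network:
# Run from within the test VM (Linux guest assumed).
ping -c 4 192.168.10.1    # default gateway of the guest VM network
ping -c 4 192.168.10.25   # another guest VM in the same network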
Node Maintenance
Node Maintenance Mode
Gracefully place a node into maintenance mode (a non-operational state) before making changes to the network configuration of a node, performing manual firmware upgrades or replacements, performing CVM maintenance, or performing any other maintenance operations.
Note: VMs with CPU passthrough or PCI passthrough, pinned VMs (with host affinity policies), and RF1 VMs are not
migrated to other hosts in the cluster when a node undergoes maintenance. Click the View these VMs link to view the list of VMs that cannot be live-migrated.
For information on how to place a node under maintenance, see Putting a Node into Maintenance Mode using
Web Console on page 218.
You can also place an AHV host under maintenance mode or exit an AHV host from maintenance mode through the
CLI.
Note: Using the CLI method to place an AHV host under maintenance only places the hypervisor under maintenance mode; the CVM remains up and running. To place the entire node under maintenance, Nutanix recommends using the UI method (through the web console).
• For information on how to use the CLI method to place an AHV host in maintenance mode, see Putting a Node
into Maintenance Mode using CLI in the AHV Administration Guide.
• For information on how to use the CLI method to exit a node from the maintenance mode, see Exiting a Node
from the Maintenance Mode using CLI in the AHV Administration Guide.
• With a minimum AOS release of 6.1.2, 6.5.1, or 6.6, you can place only one node at a time in maintenance mode for each cluster.
• Entering or exiting maintenance mode from the CLI is not equivalent to entering or exiting maintenance mode from the Prism Element web console. For example, placing a node under maintenance from the CLI places only the AHV host under maintenance mode while the CVM remains powered on.
Warning: You must exit the node from maintenance mode using the same method that you have used to put the
node into maintenance mode. For example, if you put the node into maintenance mode using CLI, you must use
CLI to exit the node from maintenance mode. Similarly, if you put the node into maintenance mode using web
console, you must use the web console to exit the node from maintenance mode.
If you put the node in maintenance mode using CLI and attempt to exit it using web console or if you
put the node in maintenance mode using web console and attempt to exit it using CLI, the maintenance
workflow breaks, and activities such as node upgrade and coming up of the CVM service after exiting
maintenance mode get impacted.
Note: At this stage, the AHV host is not shut down. For information on how to shut down the AHV host, see Shutting Down a Node in a Cluster (AHV). You can list all the hosts in the cluster by running the acli host.list command from any Controller VM, and note the value of Hypervisor IP for the node you want to shut down.
Procedure
Note: VMs with CPU passthrough, PCI passthrough, pinned VMs (with host affinity policies), and RF1 VMs are not migrated to other hosts in the cluster when a node undergoes maintenance. Click the View these VMs link to view the list of VMs that cannot be live-migrated.
6. Select the Power-off VMs that can not migrate checkbox to enable the Enter Maintenance Mode button.
• A revolving icon appears as a tool tip beside the selected node and also in the Host Details view. This indicates
that the host is entering the maintenance mode.
• The revolving icon disappears and the Exit Maintenance Mode option is enabled after the node completely
enters the maintenance mode.
• You can also monitor the progress of the node maintenance operation through the newly created Host enter
maintenance and Enter maintenance mode tasks which appear in the task tray.
Note: In case of a node maintenance failure, certain rollback operations are performed. For example, the CVM is rebooted. However, live-migrated VMs are not restored to the original host.
What to do next
Once the maintenance activity is complete, you can perform any of the following.
• View the nodes under maintenance. For more information, see Viewing a Node that is in Maintenance Mode
on page 221.
• View the status of the UVMs. For more information, see Guest VM Status when Node is in Maintenance
Mode on page 221.
• Remove the node from the maintenance mode. For more information, see Exiting a Node from the
Maintenance Mode using Web Console on page 219.
Warning: You must exit the node from maintenance mode using the same method that you have used to put the node
into maintenance mode. For example, if you put the node into maintenance mode using CLI, you must use CLI to exit
the node from maintenance mode. Similarly, if you put the node into maintenance mode using web console, you must
use the web console to exit the node from maintenance mode.
If you put the node in maintenance mode using CLI and attempt to exit it using web console or if you
put the node in maintenance mode using web console and attempt to exit it using CLI, the maintenance
workflow breaks, and activities such as node upgrade and coming up of the CVM service after exiting
maintenance mode get impacted.
As the node exits the maintenance mode, the following high-level tasks are performed internally.
Note: The AHV host is shut down as part of Putting a Node into Maintenance Mode using Web Console on page 218, so you must power on the AHV host. For information on how to power on the AHV host, see Starting a Node in a Cluster (AHV).
After the host exits the maintenance mode, the RF1 VMs continue to be powered on and the VMs migrate
to restore host locality.
For more information, see Guest VM Status when Node is in Maintenance Mode on page 221 to view the
status of the UVMs.
Perform the following steps to remove the node from maintenance mode.
Procedure
1. On the Prism Element web console home page, select Hardware from the drop-down menu.
3. Select the node which you intend to remove from the maintenance mode.
• A revolving icon appears as a tool tip beside the selected node and also in the Host Details view. This indicates
that the host is exiting the maintenance mode.
• The revolving icon disappears and the Enter Maintenance Mode option is enabled after the node
completely exits the maintenance mode.
• You can also monitor the progress of the exit node maintenance operation through the newly created Host exit
maintenance and Exit maintenance mode tasks which appear in the task tray.
What to do next
Once a node exits the maintenance mode, you can perform any of the following.
• View the status of node under maintenance. For more information, see Viewing a Node that is in Maintenance
Mode on page 221.
Note: This procedure is the same for AHV and ESXi nodes.
Procedure
4. Observe the icon along with a tool tip that appears beside the node which is under maintenance. You can also
view this icon in the host details view.
5. Alternatively, view the node under maintenance from the Hardware > Diagram view.
What to do next
You can:
• View the status of the guest VMs. For more information, see Guest VM Status when Node is in Maintenance
Mode on page 221.
• Remove the node from the maintenance mode. For more information, see Exiting a Node from the Maintenance Mode using Web Console on page 219 or Exiting a Node from the Maintenance Mode (vSphere).
Note: The following scenarios are the same for AHV and ESXi nodes.
Caution:
Verify that it is safe to remove the node from the cluster before you start this procedure. You must follow the
hypervisor boot drive replacement procedure for your specific platform. The documents are on the Nutanix support
portal under Hardware Replacement Documentation in one of the following categories.
For information about replacing NVMe drives, see the NVMe replacement procedures for your platform.
Modifying a Cluster
Hardware components (nodes and disks) can be removed from a cluster or reconfigured in other ways
when the conditions warrant it.
Adding a Disk
You can add a disk to a node in Nutanix or non-Nutanix environments.
• The process of adding a disk is the same for all platforms (both Nutanix and third-party platforms), assuming that
the platform is running Nutanix AOS.
• The process of adding an NVMe disk differs for each platform. For more information about adding an NVMe disk, see Completing NVMe Drive Replacement (Software Serviceability).
• The types of disks you can add depend on your platform configuration. For supported disk configurations, see the System Specifications for your platform.
• Hybrid: A mixture of SSDs and HDDs. Hybrid configurations fill all available disk slots, so you can add a
disk only if there is a disk missing.
• All-flash: All-flash nodes have both fully populated and partially populated configurations. You can add new
drives to the empty slots. All-flash nodes can accept only SSDs.
• SSD with NVMe: A mixture of SSDs and NVMe drives. Only certain drive slots can contain NVMe disks.
Refer to the system specifications for your platform for disk configurations.
• HDD with NVMe: A mixture of HDDs and NVMe drives. Only certain drive slots can contain NVMe disks.
Refer to the system specifications for your platform for disk configurations.
• All NVMe: All NVMe nodes have both fully populated and partially populated configurations. You can add
new drives to the empty slots. All NVMe nodes can accept only NVMe drives.
• Nutanix allows mixing of disks with different capacities in the same node. However, the node treats higher-
capacity disks as if they had the same capacity as the lower-capacity disks. Nutanix provides the ability to mix
disk capacities for cases where you need to replace a disk, but only higher-capacity disks are available. To
increase the overall storage capacity of the node, replace all disks with higher-capacity disks.
• When adding multiple disks to a node, allow at least one minute between adding each disk.
Procedure
3. If the disk is red and shows a label of Unmounted Disk, select the disk and click Repartition and Add under
the Diagram tab.
This message and the button appear only if the replacement disk contains data. Their purpose is to protect you
from unintentionally using a disk with data on it.
Caution: This action removes all the data on the disk. Do not repartition the disk until you have confirmed that the
disk contains no essential data.
4. Go to Hardware > Disk, and verify that the disk has been added to the original storage pool.
If the cluster has only one storage pool, the disk is automatically added to the storage pool.
5. If the cluster has multiple storage pools and the drive is not automatically added to the storage pool, add it to the
desired storage pool:
a. In the web console, select Storage from the drop down menu, select the Table tab, and then select the
Storage Pool tab.
b. Select the target storage pool and then click Update.
The Update Storage Pool window appears.
c. In the Capacity field, select the Use unallocated capacity box to add the available unallocated capacity to this storage pool, and then click Save.
d. Go to Hardware > Diagram, select the drive, and confirm that it is in the correct storage pool.
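If you prefer to verify from the command line, the disk list in nCLI also shows the storage pool membership; this is a sketch, and the exact output fields can vary by AOS version:
nutanix@cvm$ ncli disk ls
# Locate the new disk by its serial number and confirm that the Storage Pool field shows the expected pool.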
Removing a Disk
You may need to remove a disk for various reasons such as during the replacement of a failed disk.
Procedure
3. Click the Remove Disk link on the right of the Summary line.
A dialog box appears to verify the action.
Caution: Do not physically remove a disk until the disk status indicator turns red in the diagram. The status message that appears on completion of the steps may indicate that the data migration is complete, but the disk is not ready for removal until the disk status indicator turns red in the diagram.
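You can also track data migration progress from a Controller VM while waiting for the indicator to turn red; this is a hedged sketch, and the command may be unavailable or differ on some AOS releases:
nutanix@cvm$ ncli disk get-remove-status
# Shows the percentage complete for any disk (or node) removal that is in progress.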
• The Prism Element web console displays a warning message that you need to reclaim the license after you have
removed the node. For information about reclaiming or rebalancing your cluster licenses, see License Manager
Guide.
• Removing a node (host) takes some time because data on that node must be migrated to other nodes before it
can be removed from the cluster. You can monitor progress through the dashboard messages. Removing a node
implicitly removes all the disks in that node.
• (Hyper-V only) Initiating a removal of a node running Hyper-V fails if the node is running as a part of a Hyper-V
failover cluster and the following message appears.
Node node id is a part of a Hyper-V failover cluster failover cluster name. Please
drain all the roles, remove the node from the failover cluster and then mark the
node for removal.
If this message is displayed in either nCLI or the web interface, as a cluster administrator, you must use the management tools provided by Microsoft, such as Failover Cluster Manager, to drain all the highly available roles off the node. Then remove the node from the failover cluster, followed by removing the node from the AOS cluster.
• (ESXi only) Before removing the node, ensure that no guest VMs are running on the respective ESXi host. Note that the CVM must be running on the node to initiate the node removal.
• (ESXi only) Ensure that the vSphere Distributed Switch (VDS) does not have any references to the host in the
Port mirroring and VM templates. Ensure that there are no remaining VMs, VMkernels, or VM NICs associated
with the VDS on the host you intend to remove.
• (ESXi only) Temporarily disable DRS on the cluster before the node removal. The DRS must be re-enabled after
the node is removed. Update the HA configuration to exclude the node you want to remove.
Caution: Ensure that you migrate the guest VMs before removing a host or node. Verify that the target cluster
has enough available compute capacity before actually migrating the VMs. Removing a node or host without first
migrating the guest VMs may result in loss of service.
Caution: When you put the host in maintenance mode, the maintenance mode process powers down or
migrates all the VMs that are running on the host.
For more information, see Node Maintenance in vSphere Administration Guide for Acropolis.
4. Remove the host from the vCenter server.
Procedure
2. Select the node you want to remove in one of the following ways:
3. Click the Remove Host link on the right of the Summary line.
Note: The Remove Host link on the right of the Summary line does not appear for the removal of a host from a
three-node cluster.
Caution: Do not shut down the CVM or put the CVM into maintenance mode while the node removal is in
progress.
What to do next
• If you remove the last node with older generation hardware from a cluster and the remaining nodes have newer generation hardware, you may choose to power cycle the guest VMs to enable the VMs to discover a new CPU set compatible with the newer generation hardware. For example, if you remove the last node with a G4 Haswell CPU from a cluster where all of the other nodes are newer G5 nodes with Broadwell CPUs, the VMs can discover a CPU set compatible with the newer G5 processors only when you trigger a power cycle.
Caution: After adding the removed node back into the cluster, the same cluster ID is applied to both clusters in the
following circumstances:
• When the removed node is the first node (lowest IP address in the cluster)
• When you reuse the removed node in another cluster
• When the removed node again becomes the first node (lowest IP address) in the new cluster
To prevent this occurrence, reimage the node (using Foundation) before adding it to the new cluster.
For information about how to image a node, see Prepare Bare-Metal Nodes for Imaging in the Field
Installation Guide.
• User requests to remove multiple nodes (serially one after the other) from the Prism Element web console or
nCLI.
• System runs prechecks on each node to allow node removal request. The node must pass all the checks to be
successfully accepted for node removal.
For more information, see Prechecks to Allow Multiple Node Removal on page 227.
• System removes nodes serially from metadata ring of Cassandra.
• System rebuilds the data in parallel in the background for all the nodes that are marked for removal.
Note: This feature is supported on a cluster with a minimum of four hosts (nodes).
Review the prerequisites for removing a node listed in the Prerequisites for Removing a Node on page 225
topic.
To remove multiple nodes from the cluster, do the following:
Procedure
2. Select the node you want to remove by performing one of the following:
3. Click the Remove Host link on the right of the Summary line.
Note: The Remove Host link on the right of the Summary line does not appear for the removal of a host from a
three-node cluster.
Caution: Do not shut down the CVM or put the CVM into maintenance mode while the node removal is in
progress.
5. To remove multiple nodes, repeat Step 1 through Step 3 for each node.
The Tasks menu shows the progress of the node removal. The first node that was selected to be removed has two
subtasks running:
• If you remove the last node with older generation hardware from a cluster and the remaining nodes have newer
generation hardware, you may choose to power cycle the guest VMs to enable the VMs to discover a new CPU set
compatible with the newer generation hardware. For example, if you remove the last node with a G4 Haswell CPU
from a cluster where all of the other nodes are newer G5 nodes with Broadwell CPUs, then VMs can discover a
CPU set compatible with the newer G5 processors only when you trigger a power cycle. Note that a power cycle
is not required for normal functionalities of the cluster, but the guest VMs continue to see the CPU set compatible
with the older processors until you trigger a power cycle.
• After a node is removed, it goes into a state without any configuration. You can add such a node back into the
cluster through the Expanding a Cluster on page 199 workflow.
Caution: After adding the removed node back into the cluster, the same cluster ID is applied to both clusters in the
following circumstances:
• When the removed node is the first node (lowest IP address in the cluster)
• When you reuse the removed node in another cluster
• When the removed node again becomes the first node (lowest IP address) in the new cluster
To prevent this occurrence, reimage the node (using Foundation) before adding it to the new cluster.
For information about how to image a node, see Prepare Bare-Metal Nodes for Imaging in the Field
Installation Guide.
Adding a Node
You might need to add a node or host back into the metadata store in events such as the aftermath of the replacement of a failed metadata disk.
Procedure
2. Select the node you want to add in one of the following ways:
3. Click the Enable Metadata Store link on the right of the Summary line.
The Enable Metadata Store link appears only when the node is not added back automatically and the alert message is displayed. You can verify the result as shown in the example below.
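A quick way to confirm the result is to list the hosts in nCLI and check the metadata store field; this is a sketch, and the exact field wording can vary by AOS version:
nutanix@cvm$ ncli host ls
# For the affected node, the Metadata store status field should report that the metadata store is enabled.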
Compute-Only Nodes
The Nutanix cluster uses the resources (CPUs and memory) of a compute-only (CO) node exclusively for computing
purposes. A CO node allows you to seamlessly and efficiently expand the computing capacity (CPU and memory) of
your cluster. CO nodes do not have a Controller VM (CVM) and local storage.
Note: Clusters that have CO nodes do not support virtual switches. Instead, use bridge configurations for network
connections. For more information, see Virtual Switch Limitations.
A CO node uses the storage provided by one of the following node types in the cluster:
• AHV SO (Optimized Database Solution) nodes. For more information, see Optimized Database Solution on
page 242
• HCI nodes. For more information, see Deployment Specifications and Considerations for Compute-Only
and Storage-Only Nodes on page 231
For information on how to deploy a CO node, see Deployment of Compute-Only Nodes on page 240.
Storage-Only Nodes
A storage-only node allows you to seamlessly expand the storage capacity in your cluster. A storage-only
node always runs the AHV hypervisor. If you add a storage-only node to any cluster (regardless of the
hypervisor), the hypervisor on the storage-only node is always AHV. Therefore, if you want to scale up only
the storage capacity in your cluster, you do not need to purchase additional hypervisor licenses.
Foundation allocates the maximum resources to Controller VM (CVM) of the SO node as follows:
• CVM vCPU = Number of physical host CPUs minus 2, limited to a maximum of 22 vCPUs.
Note: This is applicable up to Foundation version 5.3.x. From Foundation version 5.4 onwards, the cap of a maximum of 22 vCPUs is not applicable.
• CVM memory = Available RAM minus 16 GiB, limited to a maximum of 256 GiB.
Note:
• This is applicable from Foundation version 5.3 onwards. In earlier Foundation versions, the memory allocation happens without the 256 GiB cap.
• A cap of a maximum of 256 GiB is applied, and Foundation allocates the maximum possible vRAM to the CVM. For example, if the available RAM is 512 GiB, the system allocates a maximum of 256 GiB to the CVM.
Note: Minimum Foundation version of 5.3 supports these limits with NUMA pinnings or alignments. Earlier
Foundation versions with a minimum version of 5.0 support these limits but not NUMA pinnings or alignments.
• A storage-only node includes an AHV hypervisor, a Controller VM (CVM), and just enough memory and CPU resources to run the CVM.
• A storage-only node always runs the AHV hypervisor. Therefore, if you add a storage-only node to an ESXi or
Hyper-V cluster, the hypervisor on the storage-only node is always AHV.
• For hardware model support for storage-only node, see Supported Hardware Platforms on page 235.
• A storage-only node is supported on ESXi, Hyper-V, and AHV clusters. However, you cannot run guest VMs on
the storage-only nodes.
• If you have storage-only AHV nodes in clusters with compute-only nodes running ESXi or Hyper-V, deployment of the default virtual switch vs0 fails. In such cases, the Prism Element, Prism Central, or CLI workflows for virtual switch management are unavailable to manage the bridges and bonds. Use the manage_ovs command options to manage the bridges and bonds.
Note: A storage-only node is not the same as a storage-heavy node. A storage-heavy node is a regular Nutanix
hyperconverged node, but with a greater storage capacity. A storage-heavy node can run any hypervisor (AHV, ESXi,
or Hyper-V) and can run guest VMs.
AHV SO nodes do not perform the compute operation, and they support the following two types of nodes for
compute operation:
Operation Specifications
This section provides information about the operational attributes for the supported deployment configurations of
compute-only (CO) and storage-only (SO) nodes in the Hyperconverged Infrastructure (HCI) setup.
• Storage source for vDisks/Volumes associated with guest VMs on the compute-only (CO) node: HCI nodes in the cluster | SO nodes in the cluster.
• Controller VM (CVM) and local storage: No CVM and local storage on CO nodes | No CVM and local storage on HCI nodes.
• VM management (CRUD operations, ADS, and HA): Using the Prism Element web console.
• Hypervisor operation for the CO node: Runs on the local storage media of the CO node | Runs on the local storage media of the HCI node.
• Network segmentation support: Not supported.
• Hypervisor and firmware upgrade: Using Life Cycle Manager (LCM). For more information, see the LCM Updates topic in the Life Cycle Manager Guide.
Cluster Requirements
This section provides information about the minimum cluster requirements for compute-only (CO) and storage-only
(SO) nodes in the Hyperconverged Infrastructure (HCI) setup.
Table 44: Minimum Cluster Requirements for Compute-Only Nodes and Storage-Only Nodes
The following requirements apply to the two supported configurations: AHV CO with AHV HCI, and AHV SO with AHV or ESXi HCI.
AOS Version
• AHV CO with AHV HCI: AOS 5.11 or later for HCI nodes.
• AHV SO with AHV or ESXi HCI: AOS 5.11 or later for SO nodes.
AHV Version
• Both configurations: Compatible AHV version based on the AOS release.
Number of nodes
• AHV CO with AHV HCI: Minimum 3 HCI nodes and minimum 2 CO nodes.
• AHV SO with AHV or ESXi HCI: Minimum 3 SO nodes and minimum 2 HCI nodes.
Nodes Ratio
• AHV CO with AHV HCI: Nutanix recommends a nodes ratio of 1 CO : 2 HCI.
• AHV SO with AHV or ESXi HCI: Nutanix recommends a nodes ratio of 1 HCI : 2 SO.
Storage node Specification
• AHV CO with AHV HCI: All the HCI nodes in the cluster must be All-flash nodes.
• AHV SO with AHV or ESXi HCI: All the SO nodes in the cluster must be All-flash nodes.
Network Speed
• AHV CO with AHV HCI: Use dual 25 GbE on AHV CO nodes and quad 25 GbE on AHV HCI nodes.
• AHV SO with AHV or ESXi HCI: Use dual 25 GbE on AHV HCI nodes and quad 25 GbE on AHV SO nodes.
Hypervisor specification
• AHV CO with AHV HCI: For HCI node: AHV only. For CO node: AHV. The AHV CO node must run the same AHV version as the AHV HCI nodes in the cluster. When you add an AHV CO node to the cluster, AOS checks if the AHV version of the node matches the AHV version of the existing AHV nodes in the cluster. If there is a mismatch, the node addition fails. For general requirements about adding a node to a Nutanix cluster, see Expanding a Cluster.
• AHV SO with AHV or ESXi HCI: For HCI node: AHV or ESXi. For SO node: AHV. The AHV HCI node must run the same AHV version as the AHV SO nodes in the cluster. When you add an AHV HCI node to the cluster, AOS checks if the AHV version of the node matches the AHV version of the existing AHV nodes in the cluster. If there is a mismatch, the node addition fails. For general requirements about adding a node to a Nutanix cluster, see Expanding a Cluster.
NIC Bandwidth
• AHV CO with AHV HCI: The total amount of NIC bandwidth allocated to all the HCI nodes must be twice the amount of the total NIC bandwidth allocated to all the CO nodes in the cluster.
• AHV SO with AHV or ESXi HCI: The total amount of NIC bandwidth allocated to all the SO nodes must be twice the amount of the total NIC bandwidth allocated to all the HCI nodes in the cluster.
CPU and Memory
• Both configurations: See Controller VM (CVM) Specifications in the Acropolis Advanced Administration Guide.
Drives
• Both configurations: See HCI Node Field Requirements in the Acropolis Advanced Administration Guide.
Socket
• AHV CO with AHV HCI: For CO node: Single or Dual socket. For HCI node: Dual socket except ROBO setup.
• AHV SO with AHV or ESXi HCI: For SO node: Single socket. For HCI node: Dual socket except ROBO setup.
Licensing Requirements
This section provides information about the licensing requirements that apply to compute-only (CO) and storage-only (SO) node deployments at your site in the Hyperconverged Infrastructure (HCI) setup.
AHV CO with AHV HCI, and AHV SO with AHV or ESXi HCI: Uses NCI licenses on a per-core basis. For more information about NCI licenses, see the NCI section in Nutanix Cloud Platform Software Options.
AHV CO with AHV HCI, and AHV SO with AHV HCI: Use the manage_ovs commands and add the --host flag to the manage_ovs commands.
Note: If the deployment of the default virtual switch vs0 fails, the Prism Element, Prism Central, or CLI workflows for virtual switch management are unavailable to manage the bridges and bonds. Use the manage_ovs command options to manage the bridges and bonds.
For example, to create or modify the bridges or uplink bonds or uplink load balancing, run the following command:
nutanix@cvm$ manage_ovs --host IP_address_of_compute_node --bridge_name bridge_name create_single_bridge
Replace IP_address_of_compute_node with the IP address of the CO node and bridge_name with the name of the bridge you want to create.
Note: Run the manage_ovs commands for an AHV CO node from any Controller VM running on an AHV HCI node.
Perform the networking tasks for each AHV CO node in the cluster separately.
For more information about networking configuration of the AHV hosts, see Host Network Management.
AHV SO with ESXi HCI: Perform the networking tasks for each ESXi HCI node in the cluster individually. For more information on vSphere network configuration, see vSphere Networking in the vSphere Administration Guide for Acropolis.
Important:
To achieve optimal cluster resiliency, deploy storage-only nodes in a replication factor (RF) plus one
(RF + 1) configuration. For example, if the RF of your cluster is two, deploy at least three storage-only
nodes. If the RF of your cluster is three, deploy at least four storage-only nodes.
The RF configuration is especially important if you plan to configure your storage-only nodes with
higher storage capacity than the regular hyper-converged nodes.
For example, if a cluster has four HCI nodes, each with 4 TB capacity (16 TB total), and you are adding high-capacity storage-only nodes of 30 TB capacity each, add RF plus 1 (RF + 1) such nodes.
Since the Fault Tolerance value is applicable to the cluster as a whole, this value determines the usage until fault
tolerance is reached and the cluster can still rebuild the data.
For example, you have configured Resiliency Factor 3 with four storage-heavy nodes (say, node_1, node_2, node_3, and node_4). Assume that the capacities of the remaining smaller nodes do not add up to the same amount of storage as the capacity of a single storage-heavy node. In such a case, to accommodate the failure of two storage-heavy nodes, AOS needs to be able to place all of their data on the remaining two storage-heavy nodes in a node fault tolerant system. If extent group egroup_1 resides on node_1, node_2, and node_3, then, in the event of failure of node_1 and node_2, egroup_1 may be placed on node_4 or one of the smaller nodes that has enough space.
Therefore, consider the Fault Tolerance capacity that you get based on the static configuration and ensure that the
usage is maintained below Fault Tolerance value. This ensures that data can be rebuilt in a fault tolerant manner
even in the event of Fault Tolerance number of simultaneous failures.
The following table provides details of the number of storage-only nodes for Redundancy Factor with Replication
Factor and Fault Tolerance settings.
3 / 5 / N+2 / 2 / 3: 2 when a cluster is healthy (can tolerate a simultaneous failure of two nodes); 1 when a cluster is not healthy (can still tolerate a single node failure); 0 when a cluster is not healthy (cannot tolerate node failure).
3 / 4: 2 when a cluster is healthy (can tolerate a simultaneous failure of two nodes); 1 when a cluster is not healthy (can still tolerate a single node failure); 0 when a cluster is not healthy (cannot tolerate node failure).
• Recommended Compute Cluster Settings refers to the number of compute (HCI or CO) nodes in the cluster for
a given Fault Tolerance. Permitted Storage-only Nodes refers to the number of SO nodes required for the given
number of Compute nodes. For example, for the minimum fault tolerance level of 1 in the first row of the table
above, the Recommended Compute Cluster Settings number is N+1, and the Permitted Storage-only Nodes
number is 3.
To use a new node as a CO node:
• Image the node as CO by using Foundation. For more information on how to image a node as a CO node,
see the Field Installation Guide.
• Add that node to the cluster by using the Prism Element web console. For more information, see Adding
an AHV Compute-Only Node to an AHV Cluster on page 241 and Adding an ESXi Compute-
Only Node to an AHV Cluster on page 249.
Existing HCI Node as CO Node
To add an existing HCI node, that is already a part of the cluster, as a CO node to the cluster, you must:
• Remove that node from the cluster. For more information about how to remove a node, see Modifying a
Cluster on page 222.
• Image that node as CO by using Foundation.
• Add that node back to the cluster. For more information, see Adding an AHV Compute-Only Node to
an AHV Cluster on page 241 and Adding an ESXi Compute-Only Node to an AHV Cluster on
page 249.
• Check the Deployment Specifications and Considerations for Compute-Only and Storage-Only Nodes on
page 231.
• Log on to CVM using SSH, and disable all the virtual switches using the following command:
nutanix@cvm:~$ acli net.disable_virtual_switch
The system displays the following output, and prompts you to confirm the disable action:
This action will clear virtual switch database, remove all virtual switches including
the default one. This CANNOT be undone.
OVS bridges and uplink bond/interface configurations won't be changed by this
command.
Enter Yes to confirm the disable virtual switch action:
Do you really want to proceed? (yes/no) yes
Procedure
» Click the gear icon in the main menu and select Expand Cluster on the Settings page.
» Go to the hardware dashboard and click Expand Cluster.
The system displays the Expand Cluster window:
Note: To expand a cluster with CO node, do not select Prepare Now and Expand Later. This option is used
to only prepare the nodes and expand the cluster at a later point in time. For CO nodes, node preparation is not
supported.
The system displays the error Compute only nodes cannot be prepared in the Configure Host tab, if
you proceed with Prepare Now and Expand Later option:
4. In the Select Host tab, scroll down and, under Manual Host Discovery, click Discover Hosts
Manually.
6. Under Host or CVM IP, type the IP address of the AHV host and click Save.
Note: The CO node does not have a Controller VM and you must therefore provide the IP address of the AHV
host.
11. Click either of the following options in the Configure Host tab:
• Run Checks - Runs only the pre-checks required for cluster expansion. Once all pre-checks are successful, you can click Expand Cluster to add the CO node to the cluster.
• Expand Cluster - Runs both the pre-checks required for cluster expansion and the expand cluster operation together.
The add-node process begins, and Prism Element performs a set of checks before the node is added to the
cluster. Once all checks are completed and the node is added successfully, the system displays the completion
states for the tasks as 100%.
Note:
• You can check the progress of the operation in the Tasks menu of the Prism Element web
console. The operation takes approximately five to seven minutes to complete.
• If you have not disabled the virtual switches as specified in the Prerequisites, the system displays multiple errors during cluster expansion.
12. Check the Hardware Diagram view to verify that the CO node is added to the cluster.
You can identify a node as a CO node if the Prism Element web console displays N/A in the CVM IP field.
Important: Virtual switch configuration is not supported when there are CO nodes in the cluster. The system displays the following error message if you attempt to reconfigure the virtual switch for the cluster using this command:
nutanix@cvm:~$ acli net.migrate_br_to_virtual_switch br0 vs_name=vs0
Virtual switch configuration is not supported when there are Compute-Only nodes in the cluster.
• For more information about imaging storage-only nodes when creating a cluster, see the Field Installation Guide.
• For instructions on adding storage-only nodes to an existing cluster, see Expanding a Cluster on page 199.
Note: Starting with AOS 6.8, ESXi CO nodes support ESXi 8.0 and later versions.
An ESXi compute-only (CO) node allows you to seamlessly and efficiently expand the computing capacity (CPU
and memory) of your AHV cluster. The Nutanix cluster uses the resources (CPUs and memory) of an ESXi CO node
exclusively for computing purposes.
You can use a supported server or re-image an existing hyperconverged (HCI) node as an ESXi CO node.
To use a node as CO, image the node as CO by using Foundation and then add that node to the cluster by using the
Prism Element web console. For more information on how to image a node as a CO node, see the Field Installation
Guide.
Note: If you want an existing HCI node that is already a part of the cluster to work as a CO node, remove that node
from the cluster, image that node as a CO node by using Foundation, and add that node back to the cluster. For more
information on how to remove a node, see Modifying a Cluster.
Table 50: Minimum Cluster Requirements for Compute-Only Nodes and Storage-Only Nodes in Optimized Database Solution
Node ratio:
• Even nodes - 1 CO : 1 SO
• Odd nodes - Recommended difference between CO and SO nodes is 1.
However, you can deploy different combinations of CO and SO nodes, provided the combinations comply with a minimum of 5 nodes and a maximum of 32 nodes in a cluster and meet your workload requirements.
Note: Use a minimum of 3 SO nodes to ensure a healthy and active cluster, and a minimum of 2 CO nodes to maintain high availability during upgrades or other maintenance activities on a CO node.
NIC Bandwidth: Uniform NIC bandwidth between CO and SO nodes.
For CO nodes:
Note: See the NDB Control Plane Configuration and Scalability section of the Nutanix Database Service Administration Guide to review the additional CPUs that may be used by NDB Agent VMs running on the Compute Only Nodes.
Drives:
• For SO nodes: Minimum 8 drives (NVMe only), minimum 3.84 TB each.
• For CO nodes: Minimum 2 SSD or NVMe drives, minimum 1.92 TB each.
Optimized Database Solution licensing (AHV CO with AHV SO, or ESXi CO with AHV SO): Uses a combination of NCI Ultimate or NCI Pro licenses for AHV storage-only nodes and NDB Platform licenses for AHV compute-only nodes. Both NCI and NDB platforms are licensed on a per-core basis.
NCI Ultimate on storage-only AHV nodes is the preferred licensing model to get the most functional Optimized DB Solution. When you use the NCI Pro license on the storage-only AHV nodes, the entire cluster functions at the Pro level feature set, and the NDB disaster recovery feature and other advanced functionalities are not available.
For more information about NCI Ultimate and NDB feature set licenses, see https://www.nutanix.com/products/cloud-platform/software-options.
AHV CO with AHV SO: Use the manage_ovs commands and add the --host flag to the manage_ovs commands.
Note: If the deployment of the default virtual switch vs0 fails for an AHV CO node with AHV SO nodes, the Prism Element, Prism Central, or CLI workflows for virtual switch management are unavailable to manage the bridges and bonds. Use the manage_ovs command options to manage the bridges and bonds.
For example, to create or modify the bridges, uplink bonds, or uplink load balancing, run the following command:
nutanix@cvm$ manage_ovs --host IP_address_of_co_node --bridge_name bridge_name create_single_bridge
Replace IP_address_of_co_node with the IP address of the CO node and bridge_name with the name of the bridge you want to create.
Note: Run the manage_ovs commands for an AHV CO node from any Controller VM running on an AHV SO node.
Perform the networking tasks for each AHV CO node in the cluster separately.
For more information about networking configuration of the AHV hosts, see Host Network Management.
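To verify the uplink configuration after making changes, you can typically run a read-only query from the same Controller VM. This is a minimal sketch; it assumes that the manage_ovs build in your AOS release supports the show_uplinks action together with the --host flag shown above:
nutanix@cvm$ manage_ovs --host IP_address_of_co_node show_uplinks
Replace IP_address_of_co_node with the IP address of the CO node.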
ESXi CO with AHV SO: Perform the networking tasks for each ESXi CO node in the cluster individually.
For more information on vSphere network configuration, see vSphere
Networking in the vSphere Administration Guide for Acropolis.
» Click the gear icon in the main menu and select Expand Cluster in the Settings page.
» Go to the hardware dashboard and click Expand Cluster.
3. In the Select Host screen, scroll down and, under Manual Host Discovery, click Discover Hosts
Manually.
5. Under Host or CVM IP, type the IP address of the ESXi CO node and click Save.
This node does not have a Controller VM; therefore, you must provide the IP address of the ESXi CO node.
8. Click Next.
10. Check the Hardware Diagram view to verify if the node is added to the cluster.
You can identify a node as a CO node if the Prism Element web console does not display an IP address for the Controller VM.
Note: For more information about the data protection solutions available from Nutanix, see Nutanix DR Solutions
Workflow in the Data Protection and Recovery with Prism Element Guide.
• Nutanix supports several types of protection strategies including one-to-one or one-to-many replication. For more
information, see Protection Strategies in the Data Protection and Recovery with Prism Element Guide.
• You can configure asynchronous disaster recovery. For more information, see Data Protection with
Asynchronous Replication (One-hour or Greater RPO) in the Data Protection and Recovery with Prism
Element Guide.
• The Cloud Connect feature gives you the option to asynchronously back up data on the Amazon Web Service
(AWS) cloud. For more information, see Asynchronous Replication With Cloud Connect (On-Premises
To Cloud) in the Data Protection and Recovery with Prism Element Guide.
• You can also configure synchronous disaster recovery. There are two options:
• For environments that support metro availability, you can configure a metro availability protection domain.
For more information, see Metro Availability (ESXi and Hyper-V 2016) in the Data Protection and
Recovery with Prism Element Guide.
• For other environments, you can configure synchronous replication. For more information, see Synchronous
Replication (ESXi and Hyper-V 2012) in the Data Protection and Recovery with Prism Element Guide.
• Summary health status information for VMs, hosts, and disks appears on the home dashboard. For more
information, see Home Dashboard on page 48.
• In depth health status information for VMs, hosts, and disks is available through the Health dashboard. For more
information, see Health Dashboard on page 254.
• You can customize which scheduled health checks are run and how frequently they run. For more information, see Configuring Health Checks on page 257.
• You can run NCC health checks directly from the Prism. For more information, see Running Checks by Using
Prism Element Web Console on page 259.
• You can collect logs for all the nodes and components. For more information, see Collecting Logs by Using
Prism Element Web Console on page 259.
• For a description of each available health check, see Alerts/Health checks.
Note: If the Cluster Health service status is DOWN for more than 15 minutes, an alert email is sent by the AOS cluster
to configured addresses and Nutanix support (if selected). In this case, no alert is generated in the Prism Element web
console. The email is sent once per 24 hours. You can run the NCC check cluster_services_down_check to see the
service status.
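If you prefer the command line, you can usually run this check directly from any Controller VM; this is a sketch that assumes the standard NCC check path for cluster_services_down_check in your NCC release:
nutanix@cvm$ ncc health_checks system_checks cluster_services_down_check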
Health Dashboard
The Health dashboard displays dynamically updated health information about VMs, hosts, and disks in the
cluster. To view the Health dashboard, select Health from the pull-down list on the left of the main menu.
Menu Options
The Health dashboard does not include menu options other than those available from the main menu.
Note: When you first visit the Health dashboard, a tutorial opens that takes you through a guided tour of the health
analysis features. Read the message and then click the Next button in the text box to continue the tutorial (or click the
X in the upper right to close the text box and end the tutorial.) You can view this tutorial at any time by selecting the
Health Tutorial option in the user menu. For more information, see Main Menu on page 43.
Screen Details
The Health dashboard is divided into three columns:
• The left column displays tabs for each entity type (VMs, hosts, disks, storage pools, storage containers, cluster
services, and [when configured] protection domains and remote sites). Each tab displays the entity total for
the cluster (such as the total number of disks) and the number in each health state. Clicking a tab expands the
displayed information (see following section).
• The middle column displays more detailed information about whatever is selected in the left column.
• The Summary tab provides summarized view of all the health checks according to check status and check
type.
• The Checks tab provides information about individual checks. Hovering the cursor over an entry displays
more information about that health check. You can filter the checks by clicking appropriate field type and
clicking Apply. The checks are categorized as follows.
Filter by Status
Passed, Failed, Warning, Error, Off, or All
Filter by Type
Scheduled, Not Scheduled, Event Triggered, or All
Filter by Entity Type
VM, Host, Disk, Storage Pool, Storage Container, Cluster Service, or All
For example, if you want to see only the failed checks, filter the checks by selecting the Failed option. If you click a specific check, the middle column provides a detailed history of when the check failed and the percentage of check failures. If you click the bar, a detailed graph of the pass and fail history is displayed.
Note: For checks with the status Error, follow a similar process as described above to get detailed information about the errors.
You can also search for specific checks by clicking the health search icon and then entering a string in the
search box.
• The Actions tab provides you with an option to manage checks, run checks, and collect logs.
• The left column expands to display a set of grouping and filter options. The selected grouping is highlighted. You can select a different grouping by clicking that grouping. Each grouping entry lists how many categories are in that grouping, and the middle section displays information about those categories. For example, when the disks storage tier grouping is selected, there are two categories (SSD and HDD) in that grouping. By default, all entities are displayed.
Procedure
3. Select a check.
a. The left column lists the health checks with the selected check highlighted. Click any of the entries to select
and highlight that health check.
b. The middle column describes what this health check does, and it provides the run schedule and history across
affected entities (hosts, disks, or VMs).
c. The right column describes what failing this health check means (cause, resolution, and impact).
5. To turn off (or turn on) a health check, click the Turn Check Off (or Turn Check On) link at the top of the
middle column and then click the Yes button in the dialog box.
a. Depending on the alert condition (Info, Critical, and so on), clear the condition check box to disable alert messages for that condition.
All the alerts are enabled by default (box checked). In most cases this field includes just a single box with the
word Critical, Warning, or Info indicating the severity level. Checking the box means this event will trigger
an alert of that severity. Unchecking the box means an alert will not be issued when the event occurs. In some
cases, such as in the example figure about disk space usage, the event can trigger two alerts, a warning alert
when one threshold is reached (in this example 90%) and a critical alert when a second threshold is reached
(95%). In these cases you can specify whether the alert should be triggered (check/uncheck the box) and at
what threshold (enter a percentage in the box).
b. Auto Resolve These Alerts: Uncheck (or check) the box to disable (or re-enable) automatic alert
resolution.
Automatic alert resolution is enabled for all alert types (where applicable) by default. When this is enabled, the
system will automatically resolve alerts under certain conditions such as when the system recognizes that the
error has been resolved or when the initiating event has not reoccurred for 48 hours. (Automatic resolution is
not allowed for some alert types, and this is noted in the policy window for those types.)
7. To change a parameter setting for those health checks that have configurable parameters, click the Parameters
link at the top of the middle column, change one or more of the parameter values in the drop-down window, and
then click the Update button.
This link appears only when the health check includes configurable parameters. The configurable parameters are
specific to that health check. For example, the CPU Utilization health check includes parameters to specify the
host average CPU utilization threshold percentage and host peak CPU utilization threshold percentage.
8. To change the schedule for running a health check, select an interval from the Schedule drop-down list for the
schedulable checks at the top of the middle column.
Each check has a default interval, which varies from as short as once a minute to as long as once a day depending
on the health check. The default intervals are optimal in most cases, and changing the interval is not recommended
(unless requested to do so by Nutanix customer support).
Procedure
2. In the Health dashboard, from the Actions drop-down menu select Set NCC Frequency.
» Every 4 hours: Select this option to run the NCC checks at four-hour intervals.
» Every Day: Select this option to run the NCC checks on a daily basis.
Select the time of day when you want to run the checks from the Start Time field.
» Every Week: Select this option to run the NCC checks on a weekly basis.
Select the day and time of the week when you want to run the checks from the On and Start Time fields. For example, if you select Sunday and Monday from the On field and select 3:00 p.m. from the Start Time field, the NCC checks run automatically every Sunday and Monday at 3 p.m.
The email addresses that you have configured for alert emails are also displayed. A report is sent as an email to all the recipients. For more information on how to configure alert emails, see Configuring Alert Emails in the Prism Element Alerts and Events Reference Guide.
4. Click Save.
Note: If you are running checks by using the Prism Element web console, you cannot collect logs at the same time.
Procedure
1. In the Health dashboard, from the Actions drop-down menu select Run NCC Checks.
2. Select the checks that you want to run for the cluster.
a. All checks: Select this option to run all the checks at once.
b. Only Failed and Warning Checks: Select this option to run only the checks that failed or returned a warning during previous health check runs.
c. Specific Checks: Select this option and, in the text box that appears, type the names of the checks that you want to run.
This field is auto-populated once you start typing the name of a check. All the checks that you have selected for this run are listed in the Added Checks box.
3. Select the Send the cluster check report in the email option to receive the report after the cluster check.
To receive the email, ensure that you have configured the email settings for alerts. For more information, see Configuring Alert Emails in the Prism Element Alerts and Events Reference Guide.
The status of the run (succeeded or aborted) is available in the Tasks dashboard. By default, all event-triggered checks are reported as passed. The Summary page of the Health dashboard is also updated with the status of the health check runs.
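As an alternative to the web console workflow above, the complete set of NCC health checks can typically be run from any Controller VM; treat this as a sketch, since plugin names can vary between NCC releases:
nutanix@cvm$ ncc health_checks run_all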
Note:
• While this method works, the preferred method for collecting logs is through a CLI tool called logbay.
For more information about logbay, see the Nutanix Cluster Check (NCC) Guide.
• The timestamps for all Nutanix service logs are moved to UTC (in ISO 8601 format, for example 2020-01-01T00:00:00Z) from Prism version 5.18.
• Operating system logs are not moved to UTC; Nutanix therefore recommends that you set the server local time to UTC.
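For reference, a typical logbay invocation that collects roughly the last four hours of logs and aggregates them on one node looks like the following; the exact options depend on the NCC/logbay version installed, so treat this as a sketch rather than the definitive syntax:
nutanix@cvm$ logbay collect --aggregate=true --duration=-4h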
Procedure
2. In the Health dashboard, from the Actions drop-down menu, select Collect Logs.
3. In Node Selection, click + Select Nodes. Select the nodes for which you want to collect the logs and click
Done.
4. Click Next.
» All. Select this option if you want to collect the logs for all the tags.
» Specific (by tags). Select this option, click + Select Tags if you want to collect the logs only for the
selected tags and then click Done.
1. Select Duration. Select the duration for which you want to collect the logs. You can collect the logs either in hours or days. Click the drop-down list to select the required option.
2. Cluster Date. Select the date from which you want to start the log collection operation. Click the drop-down list to select either Before or After to collect logs before or after the selected date.
3. Cluster Time. Select the time from when you want to start the log collection operation.
4. Select Destination for the collected logs. Click the drop-down list to select the server where you want the logs to be collected.
• Download Locally
• Nutanix Support FTP. If you select this option, enter the case number in the Case Number field.
• Nutanix Support SFTP. If you select this option, enter the case number in the Case Number field.
• Custom Server. Enter the server name, port, username, password, and archive path if you select this option.
5. Anonymize Output. Select this checkbox if you want to mask sensitive information such as IP addresses.
a. Go to the Task dashboard, find the log bundle task entry, and click the Succeeded link for that task (in the
Status column) to download the log bundle. For more information, see View Task Status on page 71.
Note: If a pop-up blocker in your browser stops the download, turn off the pop-up blocker and try again.
b. Log in to the support portal, click the target case in the 360 View widget on the dashboard (or click the Create a New Case button to create a new case), and upload the log bundle to the case (click the Choose Files button in the Attach Files section to select the file to upload).
• The web console allows you to monitor status of the VMs across the cluster. For more information, see VM
Dashboard on page 262.
• In Acropolis managed clusters, the web console also allows you to do the following:
• Create VMs. For more information, see Creating a VM (AHV) on page 272.
• Manage VMs. For more information, see Managing a VM (AHV) on page 277.
• Enable VM high availability. For more information, see Enabling High Availability for the Cluster on
page 303.
• Configure network connections. For more information, see Network Configuration for VM Interfaces on
page 165.
• You can create and manage VMs directly from Prism Element when the hypervisor is ESXi.
• Create VMs. For more information, see Creating a VM (ESXi) on page 287.
• Manage VMs. For more information, see Managing a VM (ESXi) on page 289.
VM Dashboard
The virtual machine (VM) dashboard displays dynamically updated information about virtual machines in
the cluster. To view the VM dashboard, select VM from the pull-down list on the left of the main menu.
Menu Options
In addition to the main menu, the VM screen includes a menu bar with the following options:
• View selector. Click the Overview button on the left to display the VM dashboard. For more information, see VM
Overview View on page 263. Click the Table button to display VM information in a tabular form. For more
information, see VM Table View on page 264.
• Action buttons. Click the Create VM button on the right to create a virtual machine. For more information, see
Creating a VM (AHV) on page 272. Click the Network Config button to configure a network connection.
For more information, see Network Configuration for VM Interfaces on page 165.
• Export table content. In the Table view, you can export the table information to a file in either CSV or JSON format by clicking the gear icon on the right and selecting either Export CSV or Export JSON from the drop-down menu. (The browser must allow a dialog box for export to work.) Chrome, Internet Explorer, and Firefox download the data into a file; Safari opens the data in the current window.
• Search box. In the Table view, you can search for entries in the table by entering a search string in the box.
VM Overview View
The VM Overview view displays VM-specific performance and usage statistics on the left plus the most
recent VM-specific alert and event messages on the right.
The following table describes each field in the VM Overview view. Several fields include a slide bar on the right to
view additional information in that field. The displayed information is dynamically updated to remain current.
Note: For information about how the statistics are derived, see Understanding Displayed Statistics on page 55.
• Hypervisor Summary: Displays the name and version number of the hypervisor.
• VM Summary: Displays the total number of VMs in the cluster broken down by on, off, and suspended states.
• CPU: Displays the total number of provisioned virtual CPUs and the total amount of reserved CPU capacity in GHz for the VMs.
• Memory: Displays the total amount of provisioned and reserved memory in GBs for the VMs.
• Top User VMs by Controller IOPS: Displays I/O operations per VM for the 10 most active VMs.
• Top User VMs by Controller IO Latency: Displays I/O bandwidth used per VM for the 10 most active VMs. The value is displayed in an appropriate metric (MBps, KBps, and so on) depending on traffic volume.
• Top User VMs by Memory Usage: Displays the percentage of reserved memory capacity used per VM for the 10 most active VMs.
• Top User VMs by CPU Usage: Displays the percentage of reserved CPU capacity used per VM for the 10 most active VMs.
• VM Critical Alerts: Displays the five most recent unresolved VM-specific critical alert messages. Click a message to open the Alert screen at that message. You can also open the Alert screen by clicking the view all alerts button at the bottom of the list. For more information, see Alerts Dashboard in the Prism Element Alerts and Events Reference Guide.
• VM Warning Alerts: Displays the five most recent unresolved VM-specific warning alert messages. Click a message to open the Alert screen at that message. You can also open the Alert screen by clicking the view all alerts button at the bottom of the list.
• VM Events: Displays the ten most recent VM-specific event messages. Click a message to open the Event screen at that message. You can also open the Event screen by clicking the view all events button at the bottom of the list.
VM Table View
The VM Table view displays information about each VM in a tabular form. The displayed information is
dynamically updated to remain current. In Acropolis managed clusters, you can both monitor and manage
VMs through the VM Table view.
• The top section is a table. Each row represents a single VM and includes basic information about that VM. Click a
column header to order the rows by that column value (alphabetically or numerically as appropriate).
• The bottom Summary section provides additional information. It includes a summary or details column on the
left and a set of tabs on the right. The details column and tab content varies depending on what has been selected.
The following table describes each field in the table portion of the view. The details portion and tab contents are
described in the subsequent sections.
Note: For information about how the statistics are derived, see Understanding Displayed Statistics on page 55.
VirtIO must be installed in a VM for AHV to display correct VM memory statistics. For more information about VirtIO
drivers, see Nutanix VirtIO for Windows in AHV Administration Guide.
• Memory Capacity: Displays the total amount of memory available to the VM. (xxx [MB | GB])
• Storage: Displays the used capacity (utilised capacity of the VM disk(s)) in relation to the total capacity (the total storage capacity of all the disks provisioned to the VM). For example, 1.9 GiB / 5 GiB. (xxx / xxx [MiB | GiB])
• [Controller] Read IOPS: Displays read I/O operations per second (IOPS) for the VM. (number)
• [Controller] Write IOPS: Displays write I/O operations per second for the VM. (number)
• [Controller] IO Bandwidth: Displays I/O bandwidth used per second by the VM. (xxx [MBps | KBps])
• [Controller] Avg IO Latency: Displays the average I/O latency of the VM. (xxx [ms])
• Backup and Recovery Capable: Indicates (Yes or No) whether the VM can be protected (create backup snapshots) and recovered if needed. When the value is No, click the question mark icon for an explanation. ([Yes | No])
VM Detail Information
When a VM is selected in the table, information about that VM appears in the lower part of the screen.
• Summary: vm_name appears below the table and VM Details fields appear in the lower left column. The
following table describes the fields in this column.
• Click Manage NGT to enable and mount Nutanix guest tools for this VM.
• Click the Launch Console link to open a console window for this VM.
• Click the Power on (or Power Off Actions) link to start (or shut down) this VM.
• Click the Take Snapshot link to create a backup snapshot on demand.
• Click the Migrate link to migrate the VM onto a different host.
• Click the Pause (or Resume) link to pause (or resume) this VM.
Note: If the VM is a Prism Central VM, all Actions except Launch Console, Take Snapshot, and
Migrate are disabled as a protective measure.
• A set of tabs appear to the right of the details section that display information about the selected VM. The set of
tabs varies depending on whether the VM is an Acropolis managed VM or not. The following sections describe
each tab.
• Standard VM tabs: VM Performance, Virtual Disks, VM NICs, VM Alerts, VM Events, I/O Metrics,
and Console.
• Acropolis managed VM tabs: VM Performance, Virtual Disks, VM NICs, VM Snapshots, VM Tasks,
I/O Metrics, and Console.
• Host: Displays the host name on which this VM is running. (IP address)
• Host IP: Displays the host IP address for this VM. (IP address)
• Guest OS: Displays the operating system running on this VM, such as Windows 7 or Ubuntu Linux. (This information is not available when running AHV.) (operating system name)
• Memory: Displays the amount of memory available to this VM. (xxx [MB | GB])
• Reserved Memory: Displays the amount of memory reserved for this VM (by the hypervisor). (xxx [MB | GB])
• Assigned Memory (Hyper-V only): Displays the amount of dynamic memory currently assigned to the VM by the hypervisor. (xxx [MB | GB])
• Reserved CPU: Displays the amount of CPU power reserved for this VM (by the hypervisor). (xxx [GHz])
• Disk Capacity: Displays the total disk capacity available to this VM. (xxx [GB | TB])
• Network Adapters: Displays the number of network adapters available to this VM. (# of adapter ports)
• Storage Container: Displays the name of the storage container in which the VM resides. (storage container name)
• Virtual Disks: Displays the number of virtual disks in the VM. (number)
• NGT Enabled: Displays whether NGT is enabled or not for the VM. ([Yes | No])
• NGT Mounted: Displays whether NGT is mounted or not for the VM. ([Yes | No])
• GPU Configuration: (AHV only) Comma-separated list of GPUs configured for the VM. GPU information includes the model name and a count in parentheses if multiple GPUs of the same type are configured for the VM. If the firmware on the GPU is in compute mode, the string compute is appended to the model name. The field is hidden if no GPUs are configured or if the hypervisor is not AHV. (list of GPUs)
• VMware Guest Tools Mounted: Displays whether VMware guest tools are mounted or not on the VM. ([Yes | No])
• VMware Guest Tools Running Status: Displays whether VMware guest tools are running or not on the VM. ([Yes | No])
• The VM Summary fields appear in the lower left column. The following table describes the fields in this column.
• Three tabs appear that display cluster-wide information (see following sections for details about each tab):
Performance Summary, All VM Alerts, All VM Events.
• VM State: Displays the number of powered on, powered off, and suspended VMs in the cluster. ([number] powered on, powered off, suspended)
• Total Provisioned vCPU: Displays the total number of provisioned virtual CPUs in the cluster. (number)
• Total Reserved CPU: Displays the total amount of CPU power reserved for the VMs (by the hypervisor). (xxx [GHz])
• Total Provisioned Memory: Displays the total amount of memory provisioned for all VMs. (xxx [GB])
• Total Reserved Memory: Displays the total amount of memory reserved for all VMs (by the hypervisor). (xxx [GB])
VM Performance Tab
The VM Performance tab displays graphs of performance metrics. The tab label varies depending on what is selected
in the table:
• Performance Summary (no VM selected). Displays resource performance statistics (CPU, memory, and I/O)
across all VMs in the cluster.
• VM Performance (VM selected). Displays resource performance statistics (CPU, memory, and I/O) for the
selected VM.
The graphs are rolling time-interval performance monitors that can cover from one to several hours depending on activity, with time moving from right to left. Placing the cursor anywhere on the horizontal axis displays the value at that time.
For more in depth analysis, you can add a monitor to the analysis page by clicking the blue link in the upper right
of the graph. For more information, see Analysis Dashboard on page 333. The Performance tab includes the
following graphs:
• [Cluster-wide] CPU Usage: Displays the percentage of CPU capacity currently being used (0 - 100%) across
all VMs or for the selected VM.
• [Cluster-wide] Memory Usage: Displays the percentage of memory capacity currently being used (0 - 100%)
across all VMs or for the selected VM. (This field does not appear when the hypervisor is Hyper-V.)
• [Cluster-wide] {Hypervisor|Controller} IOPS: Displays I/O operations per second (IOPS) across all VMs or
for the selected VM.
Note: In this and the following two fields, the field name is either Controller or Hypervisor to indicate where
the information comes from. For ESXi, the information comes from the hypervisor; for Hyper-V and AHV, the
information comes from the Controller VM.
• [Cluster-wide] {Hypervisor|Controller} I/O Bandwidth: Displays I/O bandwidth used per second (MBps or
KBps) across all VMs or for the selected VM.
• [Cluster-wide] {Hypervisor|Controller} Avg I/O Latency: Displays the average I/O latency (in
milliseconds) across all VMs or for the selected VM.
Note: Clicking on a virtual disk (line) displays subtabs for total, read, and write IOPS, I/O bandwidth, and I/O latency
performance graphs for the virtual disk (see the Performance Tab section for more information about the graphs).
• Total IOPS. Displays the total (both read and write) I/O operations per second (IOPS) for the virtual disk.
• Random IO. Displays the percentage of I/O that is random (not sequential).
• Read Source Cache. Displays the amount of cache data accessed for read requests.
• Read Source SSD. Displays the amount of SSD data accessed for read requests.
• Read Source HDD. Displays the amount of HDD data accessed for read requests.
• Read Working Size Set. Displays the amount of data actively being read by applications in the VM that are
using this virtual disk.
• Write Working Size Set. Displays the amount of data actively being written by applications in the VM that are
using this virtual disk.
• Union Working Size Set. Displays the total amount of data used by the VM for either reads or writes.
VM NICs Tab
The VM NICs tab displays information in tabular form about the virtual NICs in a selected VM. (This tab appears
only when a VM is selected.) Each line represents a virtual NIC, and the following information is displayed for each
NIC:
• Total Packets Received. Displays a monitor of the total packets received (in KB or MB) over time. Place the
cursor anywhere on the line to see the value for that point in time. (This applies to all the monitors on this tab.)
• Total Packets Transmitted. Displays a monitor of the transmitted data rate.
• Dropped Packets Received. Displays a monitor of received packets that were dropped.
• Dropped Packets Transmitted. Displays a monitor of transmitted packets that were dropped.
• Error Packets Received. Displays a monitor for error packets received.
Clicking the Host NICs Stats tab displays the following statistics for each host NIC (one per line) that is used by
the selected virtual NIC to send the traffic:
VM Alerts Tab
The VM Alerts tab displays the unresolved alert messages about all VMs or the selected VM in the same form as the
Alerts page. For more information, see Alerts Summary View. Click the Unresolved X button in the filter field to
also display resolved alerts.
VM Events Tab
The VM Events tab displays the unacknowledged event messages about all VMs or the selected VM in the same form
as the Events page. For more information, see Events Summary View. Click the Include Acknowledged button
to also display acknowledged events.
• Create Time. Displays the time the backup snapshot was created (completed).
• Name. Displays a name for the backup if one was created.
• Actions. Displays four action links:
• Click the Details link to open a window that displays the VM configuration plus a creation time stamp field.
• Click the Clone link to clone a VM from the snapshot.
• Click the Restore link to restore the VM from the snapshot. This restores the VM back to the state of the
selected snapshot.
• Click the Delete link to delete the snapshot.
I/O Metrics
The I/O Metrics tab displays information about different I/O metrics for the VM (latency and performance
distribution).
Console Tab
The Console tab displays a live console window. (This tab appears only when a VM is selected.) In addition to
entering commands in the console window, you can invoke several options from this tab:
• Click the language (left-most) button and select the desired language from the pull-down list to set the language
key mapping for the console keyboard.
• Click the Send CtrlAltDel button to send a Ctrl+Alt+Delete key signal. This is the same as pressing Ctrl+Alt+Delete from the console keyboard.
• Click the Take Screenshot button to take a screen shot of the console display that you can save for future
reference.
• Click the New Window button to open a new console window. This is the same as clicking the Launch
Console link on the Summary line.
VM Management
You can create and manage VMs directly from Prism Element when the hypervisor is either AHV or ESXi. The
following topics provide more information on creating and managing VM configuration on AHV and ESXi.
• AHV
Creating a VM (AHV)
In AHV clusters, you can create a new virtual machine (VM) through the Prism Element web console.
Note: Use Prism Central to create a VM with the memory overcommit feature enabled. Prism Element web console
does not allow you to enable memory overcommit while creating a VM. If you create a VM using the Prism Element
web console and want to enable memory overcommit for it, update the VM using Prism Central and enable memory
overcommit in the Update VM page in Prism Central. For more information, see Updating a VM through Prism
Central information in Prism Central Infrastructure Guide.
When creating a VM, you can configure all of its components, such as number of vCPUs and memory, but you cannot
attach a volume group to the VM. Attaching a volume group is possible only when you are modifying a VM.
Procedure
Note: This option does not appear in clusters that do not support this feature.
Note:
The RTC of Linux VMs must be in UTC, so select the UTC timezone if you are creating a Linux
VM.
Windows VMs preserve the RTC in the local timezone, so set up the Windows VM with the
hardware clock pointing to the desired timezone.
d. Use this VM as an agent VM: Select this option to make this VM an agent VM.
You can use this option for the VMs that must be powered on before the rest of the VMs (for example, to
provide network functions before the rest of the VMs are powered on the host) and must be powered off after
the rest of the VMs are powered off (for example, during maintenance mode operations). Agent VMs are
never migrated to any other host in the cluster. If an HA event occurs or the host is put in maintenance mode,
agent VMs are powered off and are powered on the same host once that host comes back to a normal state.
If an agent VM is powered off, you can manually start that agent VM on another host and the agent VM now
permanently resides on the new host. The agent VM is never migrated back to the original host. Note that
you cannot migrate an agent VM to another host while the agent VM is powered on.
e. vCPU(s): Enter the number of virtual CPUs to allocate to this VM.
f. Number of Cores per vCPU: Enter the number of cores assigned to each virtual CPU.
g. Memory: Enter the amount of memory (in GiB) to allocate to this VM.
4. (For GPU-enabled AHV clusters only) To configure GPU access, click Add GPU in the Graphics section, and
then do the following in the Add GPU dialog box:
For more information, see GPU and vGPU Support in the AHV Administration Guide.
a. To configure GPU pass-through, in GPU Mode, click Passthrough, select the GPU that you want to
allocate, and then click Add.
If you want to allocate additional GPUs to the VM, repeat the procedure as many times as you need to. Make
sure that all the allocated pass-through GPUs are on the same host. If all specified GPUs of the type that you
Note: This option is available only if you have installed the GRID host driver on the GPU hosts in the cluster.
For more information about the NVIDIA GRID host driver installation instructions, see the
NVIDIA Grid Host Driver for Nutanix AHV Installation Guide.
You can assign multiple virtual GPU (vGPU) to a VM. A vGPU is assigned to the VM only if a vGPU is
available when the VM is starting up.
Before you add multiple vGPUs to the VM, see Multiple Virtual GPU Support and Restrictions for
Multiple vGPU Support in the AHV Administration Guide.
Note: Multiple vGPUs are supported on the same VM only if you select the highest vGPU profile type.
After you add the first vGPU, to add multiple vGPUs, see Adding Multiple vGPUs to the Same VM in the
AHV Administration Guide.
» Legacy BIOS: Select legacy BIOS to boot the VM with legacy BIOS firmware.
» UEFI: Select UEFI to boot the VM with UEFI firmware. UEFI firmware supports larger hard drives, faster
boot time, and provides more security features. For more information about UEFI firmware, see UEFI
Support for VM in the AHV Administration Guide.
If you select UEFI, you can enable the following features:
• Secure Boot: Select this option to enable UEFI secure boot policies for your guest VMs. For more
information about Secure Boot, see Secure Boot Support for VMs in the AHV Administration Guide.
• Windows Defender Credential Guard: Select this option to enable the Windows Defender Credential
Guard feature of Microsoft Windows operating systems that allows you to securely isolate user credentials
from the rest of the operating system. Follow the detailed instructions described in Windows Defender
Credential Guard Support in AHV in the AHV Administration Guide to enable this feature.
Note: For information on how to add the virtual TPM, see Securing AHV VMs with Virtual Trusted
Platform Module.
a. Type: Select the type of storage device, DISK or CD-ROM, from the drop-down list.
The following fields and options vary depending on whether you choose DISK or CD-ROM.
b. Operation: Specify the device contents from the drop-down list.
• Select Clone from ADSF file to copy any file from the cluster that can be used as an image onto the
disk.
• Select Empty CD-ROM to create a blank CD-ROM device. (This option appears only when CD-ROM
is selected in the previous field.) A CD-ROM device is needed when you intend to provide a system
image from CD-ROM.
• Select Allocate on Storage Container to allocate space without specifying an image. (This
option appears only when DISK is selected in the previous field.) Selecting this option means you are
allocating space only. You have to provide a system image later from a CD-ROM or other source.
• Select Clone from Image Service to copy an image that you have imported by using image service
feature onto the disk. For more information about the Image Service feature, see Configuring Images
and Image Management in the Prism Self Service Administration Guide.
c. Bus Type: Select the bus type from the dropdown list.
The options displayed in the Bus Type dropdown list varies based on the storage device Type selected in
Step a.
• For device Disk, select from SCSI, SATA, PCI, or IDE bus type.
• For device CD-ROM, you can select either IDE or SATA bus type.
Note:
• SCSI bus is the preferred bus type and it is used in most cases. Ensure you have installed
the VirtIO drivers in the guest OS. For more information about VirtIO drivers, see Nutanix
VirtIO for Windows in AHV Administration Guide.
• For AHV 5.16 and later, you cannot use an IDE device if Secure Boot is enabled for the UEFI Mode boot configuration.
Caution: Use SATA, PCI, or IDE for compatibility purposes when the guest OS does not have VirtIO drivers to support SCSI devices. This may have performance implications. For more information about VirtIO drivers, see Nutanix VirtIO for Windows in the AHV Administration Guide.
7. To create a network interface for the VM, click the Add New NIC button.
Prism console displays the Create NIC dialog box.
Note: To create or update a Traffic Mirroring destination type VM or vNIC, use command line interface. For
more information, see Traffic Mirroring on AHV Hosts in the AHV Administration Guide.
a. Subnet Name: Select the target virtual LAN from the drop-down list.
The list includes all defined networks.
Note: Selecting IPAM enabled subnet from the drop-down list displays the Private IP Assignment
information that provides information about the number of free IP addresses available in the subnet and in the
IP pool.
b. Network Connection State: Select the state for the network that you want it to operate in after VM
creation. The options are Connected or Disconnected.
c. Private IP Assignment: This is a read-only field and displays the following:
Note:
• Nutanix does not recommend configuring multiple clusters to use the same broadcast domain (the same VLAN network), but if you do, configure MAC address prefixes for each cluster to avoid MAC address conflicts.
a. Select the host or hosts on which you want to configure affinity for this VM.
b. Click Save.
The selected host or hosts are listed. This configuration is permanent. The VM is not moved from this host or hosts even in the case of an HA event, and the affinity takes effect once the VM starts.
9. To customize the VM by using Cloud-init (for Linux VMs) or Sysprep (for Windows VMs), select the Custom
Script check box.
Fields required for configuring Cloud-init and Sysprep, such as options for specifying a configuration script or
answer file and text boxes for specifying paths to required files, appear below the check box.
10. To specify a user data file (Linux VMs) or answer file (Windows VMs) for unattended provisioning, do one of
the following:
» If you uploaded the file to a storage container on the cluster, click ADSF path, and then enter the path to the
file.
Enter the ADSF prefix (adsf://) followed by the absolute path to the file. For example, if the user data is in /
home/my_dir/cloud.cfg, enter adsf:///home/my_dir/cloud.cfg. Note the use of three slashes.
» If the file is available on your local computer, click Upload a file, click Choose File, and then upload the
file.
» If you want to create or paste the contents of the file, click Type or paste script, and then use the text box
that is provided.
11. To copy one or more files to a location on the VM (Linux VMs) or to a location in the ISO file (Windows VMs)
during initialization, do the following:
a. In Source File ADSF Path, enter the absolute path to the file.
b. In Destination Path in VM, enter the absolute path to the target directory and the file name.
For example, if the source file entry is /home/my_dir/myfile.txt, then the entry for the Destination Path in VM should be /<directory_name>/<file_name>, for example /mnt/myfile.txt.
c. To add another file or directory, click the button beside the destination path field. In the new row that
appears, specify the source and target details.
12. When all the field entries are correct, click the Save button to create the VM and close the Create VM dialog
box.
The new VM appears in the VM table view.
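For reference, a VM with a similar basic configuration can usually also be created from a Controller VM with acli. This is a minimal sketch that uses a hypothetical VM name (example-vm) and example values for vCPUs, cores, and memory; disks and NICs would still need to be added (for example, with acli vm.disk_create and acli vm.nic_create) before the VM is usable:
nutanix@cvm$ acli vm.create example-vm num_vcpus=2 num_cores_per_vcpu=1 memory=4G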
Managing a VM (AHV)
You can use the web console to manage virtual machines (VMs) in AHV managed clusters.
Note: Use Prism Central to update a VM if you want to enable memory overcommit for it. Prism Element web console
does not allow you to enable memory overcommit while updating a VM. You can enable memory overcommit in
After creating a VM, you can use the web console to start or shut down the VM, launch a console window, update the
VM configuration, take a snapshot, attach a volume group, migrate the VM, clone the VM, or delete the VM.
Note: Your available options depend on the VM status, type, and permissions. Unavailable options are grayed out.
Procedure
a. Select the Enable Nutanix Guest Tools checkbox to enable NGT on the selected VM.
b. Select the Mount Nutanix Guest Tools checkbox to mount NGT on the selected VM.
Ensure that the VM has at least one empty IDE CD-ROM slot to attach the ISO.
c. To enable self-service restore feature for Windows VMs, select the Self Service Restore (SSR)
checkbox.
The Self-Service Restore feature is enabled on the VM. The guest VM administrator can restore the desired
file or files from the VM. For more information about self-service restore feature, see Self-Service Restore
in the Data Protection and Recovery with Prism Element guide.
d. After you select the Enable Nutanix Guest Tools checkbox the VSS snapshot feature is enabled by
default.
After this feature is enabled, Nutanix native in-guest VmQuiesced Snapshot Service (VSS) agent takes
snapshots for VMs that support VSS.
Note: The AHV VM snapshots are not application consistent. The AHV snapshots are taken from the VM
entity menu by selecting a VM and clicking Take Snapshot.
The application consistent snapshots feature is available with Protection Domain based snapshots
and Recovery Points in Prism Central. For more information, see Conditions for Application-
consistent Snapshots in the Data Protection and Recovery with Prism Element guide.
e. Click Submit.
The VM is registered with the NGT service. NGT is enabled and mounted on the selected virtual machine. A
CD with volume label NUTANIX_TOOLS gets attached to the VM.
Note:
• If you clone a VM, NGT is not enabled on the cloned VM by default. If the cloned VM is powered off, enable NGT from the UI and power on the VM. If the cloned VM is powered on, enable NGT from the UI and restart the Nutanix Guest Agent service.
• You can enable NGT on multiple VMs simultaneously. For more information, see Enabling
NGT and Mounting the NGT Installer Simultaneously on Multiple Cloned VMs.
If you eject the CD, you can mount the CD back again by logging into the Controller VM and running the
following nCLI command.
nutanix@cvm$ ncli ngt mount vm-id=virtual_machine_id
For example, to mount the NGT on the VM with
VM_ID=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987, type the
following command.
nutanix@cvm$ ncli ngt mount vm-id=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987
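If you do not know the virtual_machine_id value, you can usually list the NGT entities registered on the cluster and read the VM IDs from the output; this assumes the ngt namespace is available in your nCLI version:
nutanix@cvm$ ncli ngt list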
• Clicking the Mount ISO button displays a window that allows you to mount an ISO image to the VM. To
mount an image, select the desired image and CD-ROM drive from the drop-down lists and then click the
Mount button.
Note: For information on how to select CD-ROM as the storage device when you intend to provide a system image from CD-ROM, see Add New Disk in Creating a VM (AHV) on page 272.
• Clicking the Unmount ISO button unmounts the ISO from the console.
• Clicking the C-A-D icon button sends a CtrlAltDel command to the VM.
• Clicking the camera icon button takes a screenshot of the console window.
• Clicking the power icon button allows you to power on/off the VM. These are the same options that you
can access from the Power On Actions or Power Off Actions action link below the VM table (see next
step).
6. To start or shut down the VM, click the Power on (or Power off) action link.
Power on begins immediately. If you want to power off the VMs, you are prompted to select one of the
following options:
• Power Off: Hypervisor performs a hard power off action on the VM.
• Power Cycle: Hypervisor performs a hard restart action on the VM.
• Reset: Hypervisor performs an ACPI reset action through the BIOS on the VM.
• Guest Shutdown: Operating system of the VM performs a graceful shutdown.
• Guest Reboot: Operating system of the VM performs a graceful restart.
Select the option you want and click Submit.
Note: If you perform power operations such as Guest Reboot or Guest Shutdown by using the Prism Element
web console or API on Windows VMs, these operations might silently fail without any error messages if at that
time a screen saver is running in the Windows VM. Perform the same power operations again immediately, so
that they succeed.
7. To make a snapshot of the VM, click the Take Snapshot action link.
For more information, see Virtual Machine Snapshots on page 283.
Note: Nutanix recommends live migrating VMs when they are under light load. If they are migrated while heavily utilized, migration may fail because of limited bandwidth.
Note:
• Before you add multiple vGPUs to the VM, see Multiple Virtual GPU Support and Restrictions for
Multiple vGPU Support in the AHV Administration Guide.
• Multiple vGPUs are supported on the same VM only if you select the highest vGPU profile type.
• For more information on vGPU profile selection, see:
• Virtual GPU Types for Supported GPUs in the NVIDIA Virtual GPU Software User Guide in the
NVIDIA's Virtual GPU Software Documentation web-page, and
• GPU and vGPU Support in the AHV Administration Guide.
Note:
• To create or update a Traffic Mirroring destination type VM or vNIC, use command line interface.
For more information, see Traffic Mirroring on AHV Hosts in the AHV Administration Guide.
• If you delete a vDisk attached to a VM and snapshots associated with this VM exist, space
associated with that vDisk is not reclaimed unless you also delete the VM snapshots.
To increase the memory allocation and the number of vCPUs on your VMs while the VMs are powered on (hot-
pluggable), do the following:
a. In the vCPUs field, you can increase the number of vCPUs on your VMs while the VMs are powered on.
b. In the Number of Cores Per vCPU field, you can change the number of cores per vCPU only if the VMs
are powered off.
c. In the Memory field, you can increase the memory allocation on your VMs while the VMs are powered on.
For more information about hot-pluggable vCPUs and memory, see Virtual Machine Memory and CPU Hot-
Plug Configurations.
To attach a volume group to the VM, do the following:
a. In the Volume Groups section, click Add volume group, and then do one of the following:
» From the Available Volume Groups list, select the volume group that you want to attach to the VM.
» Click Create new volume group, and then, in the Create Volume Group dialog box, create a
volume group. After you create a volume group, select it from the Available Volume Groups list.
Repeat these steps until you have added all the volume groups that you want to attach to the VM.
b. Click Add.
11. To enable flash mode on the VM, click the Enable Flash Mode check box.
» After you enable this feature on the VM, the status is updated in the VM table view. To view the status of
individual virtual disks (disks that are flashed to the SSD), click the update disk icon in the Disks pane in
the Update VM window.
» You can disable the flash mode feature for individual virtual disks. To update the flash mode for individual
virtual disks, click the update disk icon in the Disks pane and deselect the Enable Flash Mode check box.
12. To delete the VM, click the Delete action link. A window prompt appears; click the OK button to delete the
VM.
The deleted VM disappears from the list of VMs in the table.
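Many of the management actions described above also have acli equivalents that you can run from a Controller VM. The following is a minimal sketch using a hypothetical VM named example-vm and example values; option names can vary slightly between AOS releases, so verify them in your environment before use:
nutanix@cvm$ acli vm.on example-vm
nutanix@cvm$ acli vm.shutdown example-vm
nutanix@cvm$ acli vm.snapshot_create example-vm snapshot_name_list=example-snap
nutanix@cvm$ acli vm.update example-vm memory=8G num_vcpus=4
nutanix@cvm$ acli vm.delete example-vm
Here vm.shutdown requests a graceful guest shutdown (vm.off forces a hard power off), and the vm.update values correspond to the hot-pluggable memory and vCPU settings described above.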
• Disaster recovery
• Testing - as a safe restoration point in case something went wrong during testing.
• Migrate VMs
• Create multiple instances of a VM.
A snapshot is a point-in-time state of entities such as VMs and volume groups, and is used for restoration and replication of data. You can generate snapshots and store them locally or remotely. Snapshots are a mechanism to capture the delta changes that have occurred over time. Snapshots are primarily used for data protection and disaster recovery. Snapshots are not autonomous like backups, in the sense that they depend on the underlying VM infrastructure and other snapshots to restore the VM. Snapshots consume fewer resources compared to a full autonomous backup. Typically, a VM snapshot captures the following:
• The state including the power state (for example, powered-on, powered-off, suspended) of the VMs.
• The data includes all the files that make up the VM. This data also includes the data from disks, configurations,
and devices, such as virtual network interface cards.
You can schedule and generate snapshots as a part of the disaster recovery process using Nutanix DR solutions.
AOS generates snapshots when you protect a VM with a protection domain using the Data Protection dashboard in
Prism Element web console. For more information, see Snapshots in the Data Protection and Recovery with Prism
Element Guide. Similarly, AOS generates recovery points (snapshots are called recovery points in Prism Central)
when you protect a VM with a protection policy. For more information about protection policies, see Protection
Policies View in Nutanix Disaster Recovery Guide.
For example, in the Data Protection dashboard in the Prism Element web console, you can create schedules to generate snapshots using various RPO schemes, such as asynchronous replication with frequency intervals of 60 minutes or more, or NearSync replication with frequency intervals of as little as 20 seconds up to 15 minutes. These schemes create snapshots in addition to the ones generated by the schedules; for example, asynchronous replication schedules generate snapshots according to the configured schedule and, in addition, an extra snapshot every 6 hours. Similarly, NearSync generates snapshots according to the configured schedule and also generates one extra snapshot every hour.
Similarly, you can use the options in the Data Protection entity of Prism Central to generate recovery points using
the same RPO schemes.
Procedure
What to do next
You can Delete the snapshot. For more information, see Deleting a VM Snapshot Manually on page 284.
You can clone a VM by clicking the Clone action link for the snapshot of the VM.
You can use the VM snapshot to Restore the VM to the previous state captured in the snapshot.
Click Details to view the details of the snapshot.
Procedure
4. Click the VM Snapshots tab in the Summary section of the VM Table view.
5. Click the Delete action link for the snapshot that you want to delete in the list of snapshots.
What to do next
To create a snapshot manually, see Creating a VM Snapshot Manually on page 283.
• For information on how to create a VM, see Creating a VM through Prism Central (AHV).
• For information on how to update an existing VM, see Updating a VM through Prism Central (AHV).
For more information on multiple vGPU support, see Multiple Virtual GPU Support.
• Select the license for NVIDIA Virtual GPU (vGPU) software version 10.1 (440.53) or later.
• Observe the guidelines and restrictions specified in Multiple Virtual GPU Support and Restrictions for Multiple
vGPU Support.
To add multiple vGPUs to the same VM, perform the following steps:
1. Click Add GPU in the Resources step of create VM workflow or update VM workflow. For more information,
see Creating a VM through Prism Central (AHV) or Updating a VM through Prism Central (AHV)
• Destination node is equipped with the required resources for the VM.
• The VM GPU drivers are compatible with the AHV host GPU drivers.
If the destination node is not equipped with enough resources or there is any compatibility issue between the VM GPU drivers and the AHV host GPU drivers, LCM forcibly shuts down the non-HA-protected VMs.
Procedure
To migrate the vGPU-enabled VM to another host within the same cluster on Prism Element, perform the following
steps:
» Retain the System will automatically select a host option if you want to migrate the VM to a
host selected by the system.
The system selects a host based on the GPU resources available with the host as appropriate for the VM to
be migrated live.
» Select the host listed in the drop-down list that you want to migrate the VM to.
6. Click Migrate.
Prism submits the task and displays the following message:
Successfully submitted migrate operation.
Task details
Task details is a link to the Tasks page. Click the link to monitor the migration task on the Tasks page.
When the migration is complete, the host name of the VM in the List view changes to the host name to which you
migrated the VM.
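If you prefer the command line, the same live migration can also be started from any CVM with aCLI. The following is a sketch in which the VM and host names are placeholders; the host= argument can be omitted if you want the system to choose a host.
nutanix@cvm$ acli vm.migrate my-vgpu-vm host=target-host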
Note:
• You can perform power operations and launch the VM console even when vCenter Server is not
registered.
• If you create a VM through Prism, making configuration changes to the VM while it is powered on is
enabled by default; whether such changes take effect depends on the guest operating system deployed on the VM.
• Ensure that all the hosts in the cluster are managed by a single vCenter Server.
• Ensure that DRS is enabled on the vCenter Server.
• Ensure that you are running ESXi and vCenter Server 5.5 or later releases.
• Ensure that you have a homogeneous network configuration. For example, the network should have either 1G or 10G
NICs.
• Ensure that you unregister the vCenter Server from the cluster before changing the IP address of the vCenter
Server. After you change the IP address of the vCenter Server, you must register the vCenter Server again with the
new IP address.
• The vCenter Server Registration page displays the registered vCenter Server. If for some reason the Host
Connection field changes to Not Connected, it implies that the hosts are being managed by a different vCenter
Server. In this case, a new vCenter Server entry appears with the host connection status as Connected, and you need to
register to this vCenter Server. For more information about registering vCenter Server again, see Managing
vCenter Server Registration Changes on page 366.
Caution: If multiple vCenter Servers are managing the hosts, you will not be able to perform the VM management
operations. Move all the hosts into one vCenter Server.
• SCSI, IDE, and SATA disks are supported. PCI disks are not supported.
• The E1000, E1000e, PCnet32, VMXNET, VMXNET 2, VMXNET 3 network adapter types (NICs) are supported.
• Creating a VM by using a template is not supported.
• Creating a VM by using image service is not supported.
• If a VM is deleted, all the disks that are attached to the VM get deleted.
• Network configuration (creation of port groups or VLANs) is not supported.
Creating a VM (ESXi)
In ESXi clusters, you can create a new virtual machine (VM) through the web console.
• Ensure that you review the requirements and limitations. For more information, see the requirements and limitations
section in VM Management through Prism Element (ESXi) before proceeding.
Procedure
4. To attach a disk to the VM, click the Add New Disk button.
The Add Disks dialog box appears. Do the following in the indicated fields:
a. Type: Select the type of storage device, DISK or CD-ROM, from the drop-down list.
The following fields and options vary depending on whether you choose DISK or CD-ROM.
b. Operation: Specify the device contents from the drop-down list.
• Select Clone from ADSF file to copy any file from the cluster that can be used as an image onto the disk.
• Select Allocate on Storage Container to allocate space without specifying an image. (This option
appears only when DISK is selected in the previous field.) Selecting this option means you are allocating
space only. You have to provide a system image later from a CD-ROM or other source.
c. Bus Type: Select the bus type from the drop-down list. The choices are IDE or SCSI.
d. ADSF Path: Enter the path to the desired system image.
This field appears only when Clone from ADSF file is selected. It specifies the image to copy. Enter the
path name as /storage_container_name/vmdk_name.vmdk. For example, to clone an image from myvm-
flat.vmdk in a storage container named crt1, enter /crt1/myvm-flat.vmdk. When a user types the storage
e. Storage Container: Select the storage container to use from the drop-down list.
This field appears only when Allocate on Storage Container is selected. The list includes all storage
containers created for this cluster.
f. Size: Enter the disk size in GiB.
g. When all the field entries are correct, click the Add button to attach the disk to the VM and return to the
Create VM dialog box.
h. Repeat this step to attach more devices to the VM.
5. To create a network interface for the VM, click the Add New NIC button.
The Create NIC dialog box appears. Do the following in the indicated fields:
a. VLAN Name: Select the target virtual LAN from the drop-down list.
The list includes all defined networks. For more information, see Network Configuration for VM Interfaces.
b. Network Adapter Type: Select the network adapter type from the drop-down list.
For information on the list of supported adapter types, see VM Management through Prism Element
(ESXi).
c. Network UUID: This is a read-only field that displays the network UUID.
d. Network Address/Prefix: This is a read-only field that displays the network IP address and prefix.
e. When all the field entries are correct, click the Add button to create a network interface for the VM and return
to the Create VM dialog box.
f. Repeat this step to create more network interfaces for the VM.
6. When all the field entries are correct, click the Save button to create the VM and close the Create VM dialog
box.
The new VM appears in the VM table view. For more information, see VM Table View.
Managing a VM (ESXi)
You can use the web console to manage virtual machines (VMs) in the ESXi clusters.
• Ensure that you review the requirements and limitations. For more information, see the requirements and limitations
section in VM Management through Prism Element (ESXi) before proceeding.
• Ensure that you have registered the vCenter Server with your cluster. For more information, see Registering a
Cluster to vCenter Server on page 365.
Note: Your available options depend on the VM status, type, and permissions. Unavailable options are grayed out.
a. Select the Enable Nutanix Guest Tools checkbox to enable NGT on the selected VM.
b. Select the Mount Nutanix Guest Tools checkbox to mount NGT on the selected VM.
Ensure that the VM has at least one empty IDE CD-ROM or SATA slot to attach the ISO.
The VM is registered with the NGT service. NGT is enabled and mounted on the selected virtual machine. A
CD with volume label NUTANIX_TOOLS gets attached to the VM.
c. To enable self-service restore feature for Windows VMs, select the Self Service Restore (SSR)
checkbox.
The self-service restore feature is enabled on the VM. The guest VM administrator can restore the desired file
or files from the VM. For information on the self-service restore feature, see Self-Service Restore in the
Data Protection and Recovery with Prism Element guide.
d. After you select the Enable Nutanix Guest Tools checkbox, the VSS and application-consistent snapshot
feature is enabled by default.
After this feature is enabled, the Nutanix native in-guest VmQuiesced Snapshot Service (VSS) agent is used to
take application-consistent snapshots for all the VMs that support VSS. This mechanism takes application-
consistent snapshots without any VM stuns (temporarily unresponsive VMs) and also enables third-party
backup providers like Commvault and Rubrik to take application-consistent snapshots on the Nutanix platform.
e. To mount VMware guest tools, select the Mount VMware Guest Tools checkbox.
The VMware guest tools are mounted on the VM.
Note: You can mount both VMware guest tools and Nutanix Guest Tools at the same time on a particular
VM provided the VM has sufficient empty CD-ROM slots.
f. Click Submit.
The VM is registered with the NGT service. NGT is enabled and mounted on the selected virtual machine. A
CD with volume label NUTANIX_TOOLS gets attached to the VM.
Note:
• If you clone a VM, by default NGT is not enabled on the cloned VM. If the cloned VM is
powered off, enable NGT from the UI and start the VM. If the cloned VM is powered on, enable
NGT from the UI and restart the Nutanix Guest Agent service.
• For information on how to enable NGT on multiple VMs simultaneously, see Enabling NGT
and Mounting the NGT Installer on Cloned VMs.
If you eject the CD, you can mount the CD back again by logging into the Controller VM and running the
following nCLI command.
ncli> ngt mount vm-id=virtual_machine_id
For example, to mount the NGT on the VM with
VM_ID=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987, type the
following command.
ncli> ngt mount vm-id=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987
Caution: In AOS 4.6, for powered-on Linux VMs on AHV, ensure that the NGT ISO is ejected or
unmounted within the guest VM before disabling NGT by using the web console. This issue is specific to AOS 4.6
and does not occur in AOS 4.6.x or later releases.
Note: If you created the NGT ISO CD-ROMs prior to AOS 4.6, the NGT functionality
will not work even if you upgrade your cluster because REST APIs have been disabled. You must unmount
the ISO, remount the ISO, install the NGT software again, and then upgrade to AOS 4.6 or later.
Note: A VNC client may not function properly on all browsers. Some keys are not recognized when the browser
is Google Chrome. (Firefox typically works best.)
• Power Off. Hypervisor performs a hard shut down action on the VM.
• Reset. Hypervisor performs an ACPI reset action through the BIOS on the VM.
• Guest Shutdown. Operating system of the VM performs a graceful shutdown.
• Guest Reboot. Operating system of the VM performs a graceful restart.
Note: The Guest Shutdown and Guest Reboot options are available only when VMware guest tools are
installed.
7. To pause (or resume) the VM, click the Suspend (or Resume) action link. This option is available only when
the VM is powered on.
Note: If you delete a vDisk attached to a VM and snapshots associated with this VM exist, space associated with
that vDisk is not reclaimed unless you also delete the VM snapshots.
» After you enable this feature on the VM, the status is updated in the VM table view. To view the status
of individual virtual disks (disks that are flashed to the SSD), go to the Virtual Disks tab in the VM table
view.
» You can disable the Flash Mode feature for individual virtual disks. To update the Flash Mode for
individual virtual disks, click the update disk icon in the Disks pane and clear the Enable Flash Mode
checkbox.
10. To delete the VM, click the Delete action link. A window prompt appears; click the OK button to delete the
VM.
The deleted VM disappears from the list of VMs in the table. You can also delete a VM that is already powered
on.
• Images created on a Prism Element reside on Prism Element and can be managed from Prism Element.
• For better centralized management, you can migrate images manually to Prism Central by using the image import
feature in Prism Central. An image migrated to Prism Central in this way remains on Prism Element, but you can
manage the image only from Prism Central. Migrated images cannot be updated from the Prism Element.
• In the case of a local image upload, with more than one Prism Element cluster managed by Prism Central, the
image state is active on that Prism Element cluster. All other Prism Element clusters show the image as inactive.
If you create a VM from that image, the image bits are copied to the other Prism Element clusters. The image then
appears in an active state on all managed Prism Element clusters.
Note:
After you upload a disk image file, the storage size of the image file in the cluster appears higher than the actual size of
the image. This is because the image service in the cluster converts the image file to raw format which is required by
AHV to create a VM from an image.
Procedure
2. Click the gear icon in the main menu and then select Image Configuration in the Settings page.
The Image Configuration window appears.
• Click the From URL option to import the image from the Internet. Enter the appropriate URL address
in the field using the following syntax for either NFS or HTTP. (NFS and HTTP are the only supported
protocols.)
nfs://[hostname|IP_addr]/path
http://[hostname|IP_addr]/path
Enter either the name of the host (hostname) or the host IP address (IP_addr) and the path to the file. If
you use a hostname, the cluster must be configured to point at a DNS server that can resolve that name.
A file uploaded through NFS must have 644 permissions. For more information, see Configuring Name
Servers on page 352.
If the image files have been copied to a container on the cluster, replace IP_addr with CVM IP address.
For example, enter nfs://CVM_IP_addr/container_name/file_name.
Replace CVM_IP_addr with the CVM IP address, container_name with the name of the container where
the image is placed, and file_name with the image file name.
To identify the NFS path to the VM disks of a VM, log on to any CVM in the cluster as the nutanix user. Run
the following command to find the associated disk UUID.
nutanix@cvm$ acli vm.get <VM name> include_vmdisk_paths=1 | grep -E 'disk_list|vmdisk_nfs_path|vmdisk_size|vmdisk_uuid'
To construct the NFS path, append the VM disk path returned by the command to nfs://CVM_IP_addr.
For example, if the command returns the path ContainerA/.acropolis/vmdisk/9365b2eb-a3fd-45ee-b9e5-64b87f64a2df,
then your NFS path is nfs://CVM_IP_addr/ContainerA/.acropolis/vmdisk/9365b2eb-a3fd-45ee-b9e5-64b87f64a2df.
Replace CVM_IP_addr with the CVM IP address.
• Click the Upload a file option to upload a file from your workstation. Click the Choose File button and
then select the file to upload from the file search window.
f. When all the fields are correct, click the Save button.
The Create Image window closes and the Image Configuration window reappears with the new image
appearing in the list.
4. To update the image information, click the pencil icon for that image.
The Update Image window appears. Update the fields as desired and then click the Save button.
Note: The pencil icon is unavailable for images imported to Prism Central. Use Prism Central to manage such
images.
5. To delete an image file from the store, click the X icon for that image.
The image file is deleted and that entry disappears from the list.
About Cloud-Init
Cloud-init is a utility that is used to customize Linux VMs during first-boot initialization. The utility must be pre-
installed in the operating system image used to create VMs. Cloud-init runs early in the boot process and configures
the operating system on the basis of data that you provide (user data). You can use Cloud-init to automate tasks such
as setting a host name and locale, creating users and groups, generating and adding SSH keys so that users can log in,
installing packages, copying files, and bootstrapping other configuration management tools such as Chef, Puppet, and
Salt. For more information about Cloud-init, see https://cloudinit.readthedocs.org/.
About Sysprep
Sysprep is a utility that prepares a Windows installation for duplication (imaging) across multiple systems. Sysprep
is most often used to generalize a Windows installation. During generalization, Sysprep removes system-specific
information and settings such as the security identifier (SID) and leaves installed applications untouched. You can
capture an image of the generalized installation and use the image with an answer file to customize the installation of
Windows on other systems. The answer file contains the information that Sysprep needs to complete an unattended
installation. For more information about Sysprep and answer files, see the Microsoft Sysprep documentation.
Note: The ISO image is mounted on bus IDE 3, so ensure that no other device is mounted on that bus.
You can also specify source paths to the files or directories that you want to copy to the VM, and you can specify the
target directories for those files. This is particularly useful if you need to copy software that is needed at start time,
such as software libraries and device drivers. For Linux VMs, AOS can copy files to the VM. For Windows VMs,
AOS can copy files to the ISO image that it creates for the answer file.
After customizing a VM, you can copy the VDisk of the VM to Image Service for backup and duplication.
Procedure
4. In the VM dashboard, select the VM, and then click Power On.
The VM is powered on and initialized based on the directives in the user data file. To create a reference image
from the VM, use Image Service. For more information about Image Service, see Configuring Images.
• AHV supports guest customization through cloud-init using Config Drive v2 datasource (see Cloud-init
documentation). For more information, see the example cloud-init scripts for network configuration.
#cloud-config
disable_root: False
cloud_config_modules:
  - resolv_conf
# User Authentication
users:
  - default
  - name: linux.username
    ssh-authorized-keys:
      - public_key
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
# Configure resolv.conf
#cloud-config
apt_upgrade: true
repo_update: true
repo_upgrade: all
# Run the commands to add packages and resize the root partition
runcmd:
  - netplan apply
packages:
  - git
  - wget
  - curl
  - unzip
  - tar
  - python3
  - cloud-guest-utils
growpart:
  mode: auto
  devices: ['/']
  ignore_growroot_disabled: false
#cloud-config
apt_upgrade: true
repo_update: true
repo_upgrade: all
# User Authentication
users:
  - default
  - name: rhel
    groups: sudo
    shell: /bin/bash
    lock_passwd: false
    ssh-authorized-keys:
      - public_key
    sudo: ["ALL=(ALL) NOPASSWD:ALL"]
chpasswd:
  list: |
    rhel: user_password
  expire: false
# Run the commands to add packages and resize the root partition
packages:
  - git
  - wget
  - curl
  - unzip
  - tar
  - python3
  - cloud-guest-utils
growpart:
Procedure
1. Log in to the web console by using the Nutanix credentials, and then browse to the VM dashboard.
2. Select the VM that you want to clone, click Launch Console, and then log in to the VM with administrator
credentials.
• Open the command prompt as an administrator and navigate to the sysprep folder.
cd C:\windows\system32\sysprep
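The generalization command that is typically run from this folder is shown below as a sketch; the /generalize, /oobe, and /shutdown options are the standard Microsoft Sysprep options for generalizing an installation and shutting down the VM, so confirm them against the Microsoft Sysprep documentation for your Windows version.
sysprep.exe /generalize /oobe /shutdown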
Note: Make sure to shut down the VM. Restarting the VM will result in the VM losing its generalized state and
in Sysprep attempting to find an answer file that has not been provided yet. For the same reasons, until you have
completed this procedure, do not start the VM.
4. Create a reference image from the VM by using Image Service. For more information, see Configuring Images
on page 293.
Procedure
1. Log in to the web console by using the Nutanix credentials, and then browse to the VM dashboard.
2. Click Create VM, and then, in the Create VM dialog box, do the following:
a. Specify a name for the VM and allocate resources such as vCPUs, memory, and storage.
b. Click Add new disk, select the Clone from Image Service operation, and select the Windows reference
image that you copied to Image Service.
c. Click the Custom Script check box and specify how you want to customize the VM.
For more information about creating a VM, see Creating a VM (AHV) on page 272.
3. In the VM dashboard, select the VM, and then click Power On.
The VM is powered on and initialized based on the directives in the answer file. To create a reference image from
the VM, use Image Service. For more information about Image Service, see Configuring Images.
Procedure
1. Log in to the web console by using the Nutanix credentials, and then browse to the VM dashboard.
a. Specify a name for the VM and allocate resources such as vCPUs, memory, and storage.
b. In the Disks area, click the edit button that is provided against the default CD-ROM entry. In the Update
Disk dialog box, select the operation (Clone from ADSF File or Clone from Image Service), and then
specify the image that you want to use. Click Update.
c. Click Add new disk. Allocate space for a new disk on a storage container, and then click Add.
d. Click the Custom Script check box and specify how you want to customize the VM.
For more information about creating a VM, see Creating a VM (AHV) on page 272.
3. In the VM dashboard, select the VM, and then click Power On.
The VM is powered on and initialized based on the directives in the answer file. To create a reference image from
the VM, use Image Service. For more information about Image Service, see Configuring Images.
Note:
• Nutanix does not support VMs that are running with 100% remote storage for High Availability. The
VMs must have at least one local disk that is present on the cluster.
• The VM HA does not reserve the memory for the non-migratable VMs. For information on how to
check the non-migratable VMs, see Checking Live Migration Status of a VM in the Prism Central
Infrastructure Guide.
• OK: This state implies that the cluster is protected against a host failure.
• Healing: The healing period is the time during which Acropolis brings the cluster back to the protected state. There are two phases to
this state. The first phase occurs when a host fails: the VMs are restarted on the available hosts. After all the VMs are restarted,
if there are enough resources to protect the VMs, the HA status of the cluster returns to the OK state;
otherwise, the cluster goes into the Critical state. The second phase occurs when the host recovers from
the failure. Because no VMs are present on the recovered host, a restore locality task runs during this healing
phase (VMs are migrated back). Apart from restoring the locality of the VMs, the
restore locality task ensures that the cluster returns to the same state it was in before the HA failure. Once the task finishes, the
HA status returns to the OK state.
• Critical: If a host is down, the HA status of the cluster goes into the Critical state because the cluster
cannot tolerate any more host failures. You must bring the host back online so that the cluster is
protected against any further host failures.
Note: On a lightly loaded cluster, it is possible for HA to go directly back to the OK state if enough resources are reserved
to protect against another host failure. The start and migrate operations on the VMs are restricted in the Critical state
because Acropolis continuously tries to bring the HA status back to the OK state.
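If you prefer the command line, you can also inspect the current high availability configuration and state from any CVM with aCLI. The following is a minimal sketch; the output fields can vary by AOS version.
nutanix@cvm$ acli ha.get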
Procedure
2. Click the gear icon in the main menu and then select Manage VM High Availability in the Settings page.
Note: This option does not appear in clusters that do not support this feature.
3. Check the Enable HA Reservation box and then click the Save button to enable HA reservation.
Note: You can install both VMware guest tools and NGT in a VM because NGT is designed to install alongside
VMware guest tools.
Procedure
4. Select the VM for which you want to enable NGT and click Manage Guest Tools.
5. In the Manage VM Guest Tools window, select the Enable Nutanix Guest Tools checkbox.
Selecting this checkbox displays the options to mount the NGT installer, and to select the NGT applications that
you want to use in the VM.
a. Mount Nutanix Guest Tools: Select this checkbox to mount the NGT installer in the selected VM.
b. Self Service Restore (SSR): Select this checkbox to enable the self-service restore feature for Windows
VMs.
For more information about the self-service restore feature, see Self-Service Restore in the Data Protection
and Recovery with Prism Element guide.
c. Volume Snapshot Service / Application Consistent Snapshots (VSS): This checkbox is selected by
default when you select the Enable Nutanix Guest Tools checkbox.
This feature enables the Nutanix native in-guest Volume Snapshot Service (VSS) agent to take application-
consistent snapshots for all the VMs that support VSS. For more information, see Conditions for Application-
consistent Snapshots in the Data Protection and Recovery with Prism Element guide.
d. Click Submit.
Prism Element enables the NGT feature, mounts the NGT installer, and attaches the NGT installation media
with the volume label NUTANIX_TOOLS to the selected VM.
7. To verify whether NGT is enabled and the NGT installer is mounted successfully on a guest VM, do the following
from the Prism Element web console:
Note:
• NGT is not enabled on a cloned VM by default. For more information, see Enabling NGT and
Mounting the NGT Installer on Cloned VMs.
• For information about troubleshooting any NGT-related issues, see KB-3741.
What to do next
Install NGT in the guest VM by following the instructions in NGT Installation.
NGT Installation
Prism Element web console does not support automatic installation of NGT. You must log in to a VM to manually
install NGT in that VM.
Note: You cannot install NGT on VMs created on storage containers with replication factor 1.
• All the NGT requirements are met. For more information, see Nutanix Guest Tools Requirements in the Prism
Central Guide.
Procedure
2. Open File Explorer and click the CD drive with the NUTANIX_TOOLS label from the left navigation pane.
3. From the right pane that displays all the files and sub-folders in the drive, double-click the setup.exe file.
Note: If you mount the NGT installer while the VM is powered off, the CD drive might not display the
NUTANIX_TOOLS label after you power on the VM. You must open the CD drive, and double-click the setup.exe
file.
4. Accept the license agreement and follow the prompts to install NGT in the virtual machine.
A Setup Successful message appears if NGT is successfully installed.
5. After you install NGT in a Windows VM, the Nutanix Guest Agent (NGA) service in the VM starts periodic
communication with the CVM. To verify whether the NGA service is communicating with the CVM, log in to the
CVM and run the following command:
nutanix@cvm$ nutanix_guest_tools_cli list_vm_tools_entities include_vm_info=true
vm_name=vm-name
Replace vm-name with the name of the Windows VM.
In the command output, communication_link_active = true indicates that the NGA is communicating with the
CVM.
Note: For information about troubleshooting any NGT-related issues, see KB-3741.
• All the NGT requirements are met. For more information, see Nutanix Guest Tools Requirements in the Prism
Central Guide.
Note:
• Add the commands mentioned in the following procedure to a custom script and run the script to install
NGT on multiple Windows VMs. However, you might need to restart the VM after NGT is installed for
the updated functionalities to be available in the VM.
• For information about the contents included in the NGT silent installer package, see Nutanix Guest
Tools Overview in the Prism Central Guide.
Procedure
1. Open the command prompt and go to the drive on which the NGT installer is mounted.
2. Install NGT using the installer package by running either of the following commands:
Note: This command updates the Nutanix VirtIO drivers to the latest version, but the updated functionality of
the VirtIO drivers is available only after a VM restart. For more information about VirtIO drivers, see Nutanix
VirtIO for Windows in AHV Administration Guide.
Note:
• NGT installation on guest VMs running on AHV might require a VM restart if the VirtIO drivers
are updated during the installation whereas NGT installation on guest VMs running on ESXi does
not require a VM restart because the installed VirtIO drivers are not active until the VM moves
to AHV. For more information about VirtIO drivers, see Nutanix VirtIO for Windows in AHV
Administration Guide.
• The NGT installer has some built-in checks (for example, if the VSS service is disabled or if
KB2921916 Windows update is installed inside Windows 7/Windows Server 2008R2) that are
treated as warnings during an interactive installation of NGT. However, these checks are deemed as
errors during a silent installation. To ignore these errors and proceed with the silent installation, use
the IGNOREALLWARNINGS=yes flag. For example, drive:\> setup.exe /quiet ACCEPTEULA=yes
IGNOREALLWARNINGS=yes.
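For reference, the basic form of the silent installation referenced in the note above looks like the following; the drive letter depends on where the NGT installer is mounted in the guest VM.
drive:\> setup.exe /quiet ACCEPTEULA=yes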
Note: For information on troubleshooting any NGT related issues, see KB-3741 available on the Nutanix support
portal.
4. After you install NGT in a Windows VM, the Nutanix Guest Agent (NGA) service in the VM starts periodic
communication with the CVM. To verify whether the NGA service is communicating with the CVM, log in to the
CVM and run the following command:
nutanix@cvm$ nutanix_guest_tools_cli list_vm_tools_entities include_vm_info=true
vm_name=vm-name
Replace vm-name with the name of the Windows VM.
In the command output, communication_link_active = true indicates that the NGA is communicating with the
CVM.
Note: For information about troubleshooting any NGT-related issues, see KB-3741.
• All the NGT requirements are met. For more information, see Nutanix Guest Tools Requirements in the Prism
Central Guide.
• NGT is enabled in the selected VM and the NGT installer is mounted on the selected VM. For more information,
see Enabling NGT and Mounting the NGT Installer in a VM.
Procedure
2. Run one of the following to determine the device in which the NUTANIX_TOOLS CD is inserted:
• $ blkid -L NUTANIX_TOOLS
The second command displays a list of directories. You must look for a directory with the label
NUTANIX_TOOLS.
Note: On some Linux distributions, it is appropriate to use /mnt or a specific directory created under /mnt.
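For example, after you identify the device (commonly /dev/sr0 on many distributions), you can mount the NGT CD to the chosen directory before running the NGT installer from it. The device name and mount point below are assumptions; substitute the values that apply to your guest VM.
$ sudo mount /dev/sr0 /mnt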
5. Verify whether the Nutanix Guest Agent (NGA) service is installed in the VM by running the command based on
your package management tool.
The following is an example of the command for the YUM package management tool.
$ sudo yum list installed | grep 'nutanix-guest-agent'
Output similar to the following is displayed.
[root@localhost linux]# sudo yum list installed | grep 'nutanix-guest-agent'
Failed to set locale, defaulting to C
nutanix-guest-agent.x86_64 4.0-1 @nutanix-
ngt-20230524223422
[root@localhost linux]#
Note: For information about troubleshooting any NGT-related issues, see KB-3741.
Note: Bulk installation of NGT using third-party endpoint management tools does not require Prism Element web
console. However, you must enable and mount NGT in guest VMs using Prism Element web console after installing
NGT in VMs. For more information, see Enable and Configure NGT.
Procedure
1. Go to the Nutanix Support portal, select Downloads > NGT, and download the nutanix-guest-agent-
<version>.exe installer file for Windows, which matches the AOS version installed in your clusters.
Procedure
1. Go to the Nutanix Support portal, select Downloads > NGT, and download the nutanix-guest-agent-rpm-
<version>.tar.gz installer file for RPM-based distributions, which matches the AOS version installed in your
clusters.
2. Extract the nutanix-guest-agent-rpm-<version>.tar.gz file, and host the NUTANIX-NGT-GPG-KEY file and the
ngt_repo directory on the web server.
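On RPM-based guest VMs, the hosted location can then be referenced through a standard yum repository definition. The following is a minimal sketch in which webserver.example.com and the paths are placeholders for your web server layout.
[nutanix-ngt]
name=Nutanix Guest Tools
baseurl=http://webserver.example.com/ngt_repo
enabled=1
gpgcheck=1
gpgkey=http://webserver.example.com/NUTANIX-NGT-GPG-KEY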
Procedure
1. Go to the Nutanix Support portal, select Downloads > NGT, and download the nutanix-guest-agent-deb-
<version>.tar.gz installer file for DEB-based distributions, which matches the AOS version installed in your
clusters.
2. Extract the nutanix-guest-agent-deb-<version>.tar.gz file, and host the i386 and amd64 directories on the web
server.
3. (Optional) Perform the following steps to verify the DEB installer packages against the detached signatures using
the NUTANIX-NGT-GPG-KEY file:
b. Run the following command to verify the DEB installer packages against the detached signatures:
$ gpg --verify os-arch/nutanix-guest-agent.deb.asc os-arch/nutanix-guest-agent_version-1_os-arch.deb
Replace os-arch with the architecture of the OS of guest VM, and version with the NGA version.
The following is an example.
$ gpg --verify i386/nutanix-guest-agent.deb.asc i386/nutanix-guest-
agent_4.0-1_i386.deb
gpg: Signature made Wed 24 May 2023 01:46:29 PM UTC using RSA key ID 42DBF8BB
gpg: Good signature from "Nutanix, Inc. (NGT Packaging) <[email protected]>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: D8B0 18BD CFEB 774C D157 F0A5 11B1 600F 42DB F8BB
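The verification shown above assumes that the Nutanix public key is already present in the local GPG keyring. If it is not, you can import it first; a minimal sketch, run from the directory that contains the key file:
$ gpg --import NUTANIX-NGT-GPG-KEY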
• Ensure that the cluster meets all NGT requirements. For more information, see Nutanix Guest Tools
Requirements in the Prism Central Infrastructure Guide.
• Ensure that you registered the VMs with the third-party endpoint management tool deployed at your site. For more
information, see the tool-specific documentation.
• Review the end user license agreement (EULA) for Nutanix Guest Tools by performing a manual installation, because the
following installation procedure accepts the EULA automatically.
Procedure
1. Configure the third-party endpoint management tool deployed at your site to distribute the nutanix-guest-agent-
<version>.exe file to the VMs where you want to install NGT.
For more information, see the tool-specific documentation.
2. Configure the third-party endpoint management tool to install NGT by running one of the following commands:
Note: This command might update the Nutanix VirtIO drivers if no Nutanix VirtIO drivers are installed or if
a newer version is available, but the updated functionality of the VirtIO drivers is available only after a VM
restart.
3. (Optional) To generate the NGT logs in a location other than the %TEMP% directory, install NGT by running the
following command.
C:\ngtinstaller> nutanix-guest-agent-version.exe /quiet ACCEPTEULA=yes /log log_file
Replace version with the NGA version, and log_file with the filename to write the logs.
Ensure that the directory containing the filename that you provide has the necessary write permissions. Also, the
installation process adds some events to the Windows application event log.
What to do next
After successful installation, enable and configure NGT in guest VMs. For more information, see Enable
and Configure NGT.
• Ensure that the cluster meets all NGT requirements. For more information, see Nutanix Guest Tools
Requirements in the Prism Central Infrastructure Guide.
• Ensure that you registered the VMs with the third-party endpoint management tool deployed at your site. For more
information, see the tool-specific documentation.
Procedure
1. To verify the package signatures, configure the third-party endpoint management tool deployed at your site to
install the NUTANIX-NGT-GPG-KEY file on RPM-based operating systems.
For more information, see the tool-specific documentation.
3. Configure the third-party endpoint management tool to install the Nutanix guest agent package by running the
package manager-specific install command.
For example, the yum install -y nutanix-guest-agent command installs Nutanix guest agent package using the yum
package manager for RedHat-based distributions.
[nutanix@localhost ~]$ yum install -y nutanix-guest-agent
For information about the install command specific to the package manager at your site, see the package manager-
specific documentation.
What to do next
After successful installation, enable and configure NGT in guest VMs. For more information, see Enable
and Configure NGT.
Procedure
• Mount NGT on guest VMs by performing Step 1 to Step 6 in Enabling NGT and Mounting the NGT Installer
in a VM on page 304.
What to do next
The Nutanix Guest Agent (NGA) service in the VMs starts periodic communication with the CVM. To
verify whether the NGA service is communicating with the CVM, log in to the CVM and run the following
command:
nutanix@cvm$ nutanix_guest_tools_cli list_vm_tools_entities include_vm_info=true
vm_name=vm-name
Replace vm-name with the name of one of the VMs.
The config-only mount method reduces the size of the NGT ISO mounted on the guest VM by not including
the NGT installers. Therefore, it is less prone to scalability issues (for example, the ISO stored on disk is
smaller) and might attach faster to the guest VM.
Procedure
1. Log in to the CVM using SSH with admin, nutanix, or root access.
Note: To enable SSR and VSS while enabling NGT, use the following command:
nutanix@cvm$ nutanix_guest_tools_cli create_vm_tools_entity vm_uuid
guest_tools_enabled=true file_level_restore=true vss_snapshot=true
Replace vm_uuid with the UUID of the VM.
3. Run the following command to mount the NGT configuration updates in the guest VM.
nutanix@cvm$ nutanix_guest_tools_cli mount_guest_tools vm_uuid config_only=true
Replace vm_uuid with the UUID of the VM.
Output similar to the following is displayed.
nutanix@cvm$ nutanix_guest_tools_cli mount_guest_tools aca91d9b-8a31-47ec-a9b7-
dfc613115748 config_only=true
2023-05-26 06:11:51,809Z:30612(0x7f33d60d4340):ZOO_INFO@zookeeper_init@994:
Initiating client connection, host=zk1:9876 sessionTimeout=20000
watcher=0x7f33e59dec10 sessionId=0 sessionPasswd=<null> context=0x7ffd1271ba00
flags=0
2023-05-26 06:11:51,813Z:30612(0x7f33d57ff700):ZOO_INFO@zookeeper_interest@1941:
Connecting to server 10.46.27.73:9876
mount_result : kNoError
task_uuid : d5f7eb25-9e3d-4c7d-ade9-5d25e1e737d5
nutanix@cvm$
What to do next
The Nutanix Guest Agent (NGA) service in the VMs starts periodic communication with the CVM. To
verify whether the NGA service is communicating with the CVM, log in to the CVM and run the following
command:
nutanix@cvm$ nutanix_guest_tools_cli list_vm_tools_entities include_vm_info=true
vm_name=vm-name
Replace vm-name with the name of one of the VMs.
• Ensure that the NGT version is compatible with the AOS version installed in your cluster. For more information,
see the NGT section in the Compatibility and Interoperability Matrix.
• Ensure that the cluster meets all NGT requirements. For more information, see Nutanix Guest Tools
Requirements in the Prism Central Infrastructure Guide.
• Ensure that you registered the VMs with the third-party endpoint management tool deployed at your site. For more
information, see the tool-specific documentation.
Procedure
• Ensure that the NGT version is compatible with the AOS version installed in your cluster. For more information,
see the NGT section in the Compatibility and Interoperability Matrix.
Procedure
1. To verify the package signatures, configure the third-party endpoint management tool deployed at your site to
install the NUTANIX-NGT-GPG-KEY file on RPM-based operating systems.
For more information, see the tool-specific documentation.
3. Configure the third-party endpoint management tool to upgrade the Nutanix guest agent package by running the
package manager-specific upgrade command.
For example, the yum update -y nutanix-guest-agent command upgrades Nutanix guest agent package using the
yum package manager for RedHat-based distributions.
[nutanix@localhost ~]$ yum update -y nutanix-guest-agent
For information about the upgrade command specific to the package manager at your site, see the package
manager-specific documentation.
Note: Before uninstalling NGT from a guest VM, ensure the communication link between guest VM and CVM
remains active. The CVM displays that NGT is uninstalled only if the communication between CVM and guest
VM is active during the uninstallation. If the communication link is down, CVM continues to display that NGT is
installed on the guest VM even after the successful uninstallation of NGT. For information about how to verify that the
communication link is active, see Step 4 in Enabling NGT and Mounting the NGT Installer on Cloned VMs
on page 317.
Procedure
• Configure the third-party endpoint management tool to uninstall the Nutanix Guest Agent package using the
uninstaller registered with Add/Remove Programs in the Windows Control Panel.
For more information, see the tool-specific documentation or Microsoft Windows documentation.
Procedure
• Configure the third-party endpoint management tool to uninstall the Nutanix guest agent package in bulk by
running the package manager-specific removal command.
For example, the yum remove -y nutanix-guest-agent command uninstalls Nutanix guest agent package using the
yum package manager for RedHat-based distributions.
[nutanix@localhost ~]$ yum remove -y nutanix-guest-agent
For information about the removal command specific to the package manager at your site, see the package
manager-specific documentation.
Note: After you perform the following steps, you do not need to separately install NGT on the cloned VMs.
Procedure
1. Enable NGT and mount the NGT installer on the cloned VM by following the instructions mentioned in Steps 1
through 6 of Enabling NGT and Mounting the NGT Installer in a VM.
4. Verify whether the NGA service is communicating with the CVM by logging in to the CVM and running the
following command:
nutanix@cvm$ nutanix_guest_tools_cli list_vm_tools_entities include_vm_info=true
vm_name=vm-name
Replace vm-name with the name of the guest VM.
In the command output, communication_link_active = true indicates that the NGA is communicating with the
CVM.
Note: For information on troubleshooting any NGT related issues, see KB-3741 available on the Nutanix support
portal.
Procedure
2. To view the Alerts Dashboard, select Alerts from the drop-down list on the far left of the main menu.
3. In the Alerts dashboard, determine the NGT client certificates that are expiring in less than 90 days.
The dashboard displays the Severity status of the guest VM as Critical if the certificate is expiring in less than
7 days, and the status as Warning if the certificate is expiring in less than 45 days. It also displays the name and
UUID of the guest VMs whose certificates are expiring.
Alternatively, log in to a CVM with SSH and run the following command to determine the NGT client certificates
that are expiring in less than 90 days.
nutanix@cvm$ ncc health_checks ngt_checks ngt_client_cert_expiry_check
Note:
• If you do not use the optional parameters, AOS regenerates the NGT client certificates of all the
guest VMs that are expiring in less than 45 days.
• To use the vm_uuids parameter, replace string-containing vm_uuid1,vm_uuid2.... with
the UUID of the VM for which you want to regenerate the certificate. To regenerate certificates for
multiple VMs, specify a comma-separated list of UUIDs.
• To use the threshold_days parameter, replace number-of-days with the threshold value in number
of days. For example, to regenerate the certificates of VMs that are expiring in less than 30 days,
replace number-of-days with 30.
6. Check the Alerts dashboard in the Prism Element web console or run the NCC check mentioned in Step 2 to verify
if those VMs are still displayed.
Note:
• After regenerating the NGT client certificates, it might take a few minutes for the guest VM to
communicate with the CVM. Restart the NGA service to force the guest VM to communicate with
the CVM immediately.
• For information about troubleshooting any NGT-related issues, see KB-3741.
Upgrading NGT
After you upgrade AOS, you must reinstall NGT to upgrade NGT to the latest version.
Perform the following steps to upgrade NGT.
Reconfiguring NGT
If you reconfigure the cluster IP address, NGT loses connection with the CVM. You must reconfigure NGT
to reestablish the connection.
To reconfigure NGT, mount the NGT installer. For more information, see Enabling NGT and Mounting the NGT
Installer in a VM on page 304.
After you mount the NGT ISO, NGA fetches the latest configuration (new cluster IP address) mounted in the guest
VM. The guest VM can now use the new IP address to communicate with the cluster.
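For example, from any CVM, you can remount the configuration with the same nCLI command shown earlier in this chapter; replace virtual_machine_id with the ID of the VM.
ncli> ngt mount vm-id=virtual_machine_id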
Note:
Procedure
1. Uninstall NGT.
Note: Before you uninstall NGT from a guest VM, ensure that the communication link between the guest VM and
the CVM is active. If the communication link is down, the CVM continues to display that NGT is installed on the
guest VM even after the successful uninstallation of NGT. The CVM displays that NGT is uninstalled only after
the communication between the CVM and the guest VM is restored. For information about how to verify that the
communication link is active, see Step 4 in Enabling NGT and Mounting the NGT Installer on Cloned
VMs.
• Windows VM
You can uninstall NGT from a Windows VM through Control Panel. Log in to the Windows VM and perform
the following.
1. Navigate to Control Panel > Programs > Programs and Features.
2. Select the Nutanix Guest Tools service.
3. Click Uninstall.
An Uninstall Successfully Completed message appears on successfully uninstalling NGT.
(Optional) You can uninstall NGT from a Windows VM using the PowerShell command. Log in to the
Windows VM and perform the following.
1. Run the following command from Windows PowerShell to generate the output string required to uninstall
NGT.
$ Get-ChildItem -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall,
HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall |
Get-ItemProperty | Where-Object {$_.DisplayName -match "Nutanix
Note: If you do not want to restart the guest VM after uninstalling NGT, append /norestart to the generated
output string, and run the updated output string. For example,
"C:\ProgramData\Package Cache\{3cfa83ac-a36f-49f1-
ad23-3a51c0e6964a}\NutanixGuestTools.exe" /uninstall /quiet /norestart
Tip: After uninstalling NGT from a Windows VM, ensure that the following entries, which are created during NGT
installation, are removed from the VM registry.
HKEY_LOCAL_MACHINE\SOFTWARE\Nutanix
HKEY_LOCAL_MACHINE\SOFTWARE\Nutanix\VSS\1.0
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Nutanix Guest Agent
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Nutanix Self Service Restore Gateway
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Application\Nutanix
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Application\Nutanix Guest Agent
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Application\Nutanix Self Service Restore Gateway
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\VSS
• Linux VM
Log in to the Linux VM and run the following command.
$ sudo sh /usr/local/nutanix/ngt/python/bin/uninstall_ngt.sh
Output similar to the following is displayed.
$ sudo sh /usr/local/nutanix/ngt/python/bin/uninstall_ngt.sh
Stopping ngt_guest_agent.service systemctl service...
ngt_guest_agent.service service stopped.
Stopping ngt_self_service_restore.service systemctl service...
ngt_self_service_restore.service service stopped.
NGA is getting uninstalled.
Removing Desktop icon and shortcuts.
Notify CVM of agent uninstallation.
Successfully notified CVM of agent uninstallation.
Erasing : nutanix-guest-agent-4.0-1.x86_64
1/1
warning: /usr/local/nutanix/ngt/config/ngt_config.json saved as /usr/local/
nutanix/ngt/config/ngt_config.json.rpmsave
RPM is getting removed/uninstalled.
Successfully uninstalled Nutanix Guest Tools.
Verifying : nutanix-guest-agent-4.0-1.x86_64
1/1
Removed:
nutanix-guest-agent.x86_64 0:4.0-1
2. Verify whether the NGT information of the guest VM is removed from the CVM.
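You can list the NGT entities known to the cluster with nCLI; a sketch based on the ngt namespace used elsewhere in this procedure (verify the exact subcommand in your nCLI version):
nutanix@cvm$ ncli ngt list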
VM Id : 0005b6a7-6bcc-03f2-0000-0000000097fe::e4a1216a-
a287-43c2-bbbf-28951e1bf615
VM Name : win 2012
NGT Enabled : true
Tools ISO Mounted : false
Vss Snapshot : false
File Level Restore : false
Communication Link Active : true
In the VM ID column, the text after "::" is the ID of the VM as shown in the example.
3. If the NGT information of the guest VM appears when you run the command in Step 2b, remove the NGT
information from the CVM by running the following command.
nutanix@cvm$ ncli ngt delete vm-id=virtual_machine_id
Replace virtual_machine_id with the ID of the VM displayed in the output of Step b.
Note: For information about troubleshooting any NGT-related issues, see KB-3741.
Important: By default, NGT metrics collection is disabled in a guest VM. Contact Nutanix Support to enable metrics
collection in a guest VM.
The guest VM makes an RPC call to the CVM every 30 seconds to fetch the metrics data. Once the data is fetched
successfully, it internally checks if this counter is already registered in the Perfmon utility. If the counter is not
registered, it first registers the counter and then publishes this data to the Performance Monitor. By default, the
following metrics are collected from the CVM after enabling NGT metrics collection in the guest VM.
Important: Contact Nutanix Support to enable or disable NGT metrics collection in a CVM.
The default metrics available in the CVM are hypervisor_cpu_usage_ppm and hypervisor.cpu_ready_time_ppm. You
can modify the metrics list from the CVM by editing the ~/config/nutanix_guest_tools/ngt_metrics_info.json file to
add or remove the metrics to be published on the guest VMs.
For example, in the ngt_metrics_info.json file, you can modify the array list to add or remove metrics in the metrics
info block. Any changes in the metrics list take effect after you restart the NGT service on the CVM.
{
"metrics_info" : [
"hypervisor_cpu_usage_ppm",
"hypervisor.cpu_ready_time_ppm"
]
}
After you modify the metrics list, you must restart the NGT service on the CVM by running the $ allssh "genesis
stop nutanix_guest_tools && cluster start" command for the updated metrics data to be reflected in the guest VM. After
the guest VM fetches the metrics, it updates this data in the Perfmon utility without any manual intervention (such as a service
restart) in the guest VM.
Note: The metrics list must be modified for all the nodes.
• Any failure in registering a counter or in publishing the metrics data is logged in the Nutanix Guest Agent logs in
the guest VM. Currently, you are not alerted if such an event occurs.
• In Windows Performance Monitor, the NGT hypervisor CPU metrics are collected in parts per million. Because
the default scale is 1, the values can exceed 100%. Nutanix recommends adjusting the scale to .0001 in the Performance
Monitor Properties > Data tab.
• Verify that the guest VM has Microsoft Windows OS and NGT installed.
• Verify that the guest VM has Microsoft Windows Perfmon utility installed.
• Verify that the NGT metric collection is enabled in the guest VM. You can do this by checking if the
Ngt Metrics collection capabilities got enabled message is logged in C:\Program Files\Nutanix\logs
\guest_agent_service.INFO.
• Ensure that the CVM and the guest VM system clocks are in sync with the actual time to ensure that accurate
metrics are generated.
Procedure
What to do next
After metrics are collected, you can leverage all the Windows Performance Monitor functionalities, such as
generating reports and viewing metrics graphically.
Note:
• If you choose to apply the VM-host affinity policy, it limits Acropolis HA and Acropolis Dynamic
Scheduling (ADS) in such a way that a virtual machine cannot be powered on or migrated to a host that
does not conform to the requirements of the affinity policy, because this policy is enforced mandatorily.
• The VM-host anti-affinity policy is not supported.
You can define the VM-Host affinity policies by using Prism Element during the VM create or update operation. For
more information, see Creating a VM (AHV).
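VM-host affinity can also be set from any CVM with aCLI; a sketch, assuming the vm.affinity_set subcommand is available in your AOS release (the VM and host names are placeholders):
nutanix@cvm$ acli vm.affinity_set my-vm host_list=host-1,host-2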
Important:
The VM-VM anti-affinity policy is a preferential policy. The system does not block any VM operation,
such as VM maintenance mode or manual live migration of the VM, even if there is a policy violation. For
example, when you manually migrate one VM of a VM-VM pair with an anti-affinity policy, the policy is
applied on a commercially reasonable effort.
The Acropolis Dynamic Scheduling (ADS) always attempts to maintain compliance with the VM-VM anti-
affinity policy and ensures that the VM-VM anti-affinity policy is enforced on a commercially reasonable
effort. For example, if you manually migrate a VM and the migration leads to non-compliance with the
VM-VM anti-affinity policy, ADS performs the following actions:
• Ignores compliance to VM-VM anti-affinity policy, if a host is specified during manual migration.
• Attempts to enforce the policy back into compliance on a commercially reasonable effort, if a host is not
specified during manual migration.
For more information on ADS, see Acropolis Dynamic Scheduling in AHV section in AHV Administration
Guide.
Note:
• Currently, you can only define VM-VM anti-affinity policy by using aCLI. For more information, see
Configuring VM-VM Anti-Affinity Policy on page 327.
• The VM-VM affinity policy is not supported.
• If a VM is cloned that has the affinity policies configured, then the policies are not automatically applied
to the cloned VM. However, if a VM is restored from a DR snapshot, the policies are automatically
applied to the VM.
Procedure
2. Create a group.
nutanix@cvm$ acli vm_group.create group_name
Replace group_name with the name of the group.
3. Add the VMs on which you want to define anti-affinity to the group.
nutanix@cvm$ acli vm_group.add_vms group_name vm_list=vm_name
Replace group_name with the name of the group. Replace vm_name with the name of the VMs that you want to
define anti-affinity on. In case of multiple VMs, you can specify comma-separated list of VM names.
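The anti-affinity rule is then applied to the group as a whole; a sketch, assuming the vm_group.antiaffinity_set subcommand (group_name is the group created in the previous steps):
nutanix@cvm$ acli vm_group.antiaffinity_set group_name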
Important:
The VM-VM anti-affinity policy is a preferential policy. The system does not block any VM operation,
such as VM maintenance mode or manual live migration of the VM, even if there is a policy violation.
For example, when you manually migrate one VM of a VM-VM pair with an anti-affinity policy, the
policy is applied on a commercially reasonable effort only.
The Acropolis Dynamic Scheduling (ADS) always attempts to maintain compliance with the VM-VM
anti-affinity policy and ensures that the VM-VM anti-affinity policy is enforced on a commercially
reasonable effort. For example, if you manually migrate a VM and the migration leads to non-
compliance with the VM-VM anti-affinity policy, the ADS checks if the host is specified in manual
migration, and performs the following actions:
• Ignores compliance to VM-VM anti-affinity policy, if a host is specified during manual migration.
• Attempts to enforce the policy back into compliance on a commercially reasonable effort, if a host is
not specified during manual migration.
For more information on ADS, see Acropolis Dynamic Scheduling in AHV section in AHV Administration
Guide.
Procedure
This feature helps you configure the Citrix Cloud integration settings in the following way:
1. Establishes the connection to the Citrix Cloud workspace.
2. Configures Nutanix cluster as resource location in the Citrix cloud.
3. Configures the Citrix Cloud connector VM.
4. Registers the Citrix Cloud connector VM to the Active Directory (AD) domain on the Citrix cloud.
Once the integration is complete, VDIs can be created using the Nutanix AHV MCS Plug-in for Citrix XenDesktop
1.1.1.0 or later. The AHV MCS Plug-in is designed to create and manage VDIs in a Nutanix Acropolis infrastructure
environment. For more information, see AHV Plug-in for Citrix install guide and release notes.
Thus, to begin deploying your VMs and applications, you must perform the following:
• Configure the Citrix cloud integration settings using the Connect to Citrix Cloud feature.
• Install the Nutanix CWA Plug-In for Citrix Cloud Connector.
Refer to the Citrix documentation for details on Citrix Cloud connector.
Caution: The Sysprep VM must be in the powered-off state. If you power on the VM, you will lose the Sysprep state
and the configuration will fail.
For more information, see Microsoft documentation on Sysprep (Generalize) a Windows installation.
Procedure
1. Log on to the Prism Element web console using your Nutanix administrator credentials.
Note: The procedure does not work if Prism Element is launched from Prism Central web console.
2. Click the gear icon in the main menu and then select Connect to Citrix Cloud from the Setup section in the
Settings page.
The Connect to Citrix Cloud dialog box opens.
• Enter Manually
1. Enter your Customer ID for the Citrix Cloud.
2. Enter your secure Client ID for the Citrix Cloud.
3. Enter the downloaded secure client Secret Key.
Note: You can find the Customer ID, Client ID, and Secret Key from the API Access page in the
Citrix Cloud console.
4. Click Connect.
• Upload Credential Key
1. Click Upload Key File to browse and select the key file in the CSV format.
You can create or download the key file from the API Access page in the Citrix Cloud console (within
Identity and Access Management). This key file is used for the Citrix Cloud connector installation.
2. Enter your Customer ID for the Citrix Cloud.
3. Click Connect.
5. (Optional) Review the Citrix Cloud details. If any change is required, click the Change hyperlink to edit the
connection details.
6. (Optional) Select the High Availability check-box to enable or disable high availability for the connector
nodes.
By default, high availability is enabled. On enabling high availability, two connector nodes are created for
redundancy.
Note: Citrix Cloud recommends that you install two connectors for redundancy and high availability.
7. In the VM Master Image search box, start typing the initial letters of the previously created Sysprep VM image
and select the auto-completed option.
9. If high availability is enabled, enter the Secondary Connector VM Name (for high availability).
11. Enter the Domain Credentials to join your enterprise domain to the resource location.
• Verify the connection status. For more information, see Viewing Citrix Connection Status on page 331.
• Verify if the connector VM or VMs are created from the VM Dashboard.
• Wait for the state of the connector VM or VMs to change from powered off to powered on and then click Launch
Console. The VM preparation process starts.
• Once the VM starts, install the Nutanix CWA Plug-In for Citrix Cloud Connector to start the application
deployment.
Note: For XenServer, the Delete Connection option does not delete the connector VMs. You must delete the
connector VMs manually.
Note: To create an MSFT cluster, ensure that the minimum Nutanix VirtIO version installed on the guest VM is 1.1.4.
For more information about VirtIO drivers, see Nutanix VirtIO for Windows in AHV Administration Guide.
Procedure
4. Attach the volume group you created to each VM in the guest cluster.
For information on how to attach a volume group to a VM, see Managing a VM (AHV).
Analysis Dashboard
The Analysis dashboard allows you to create charts that can monitor dynamically a variety of performance
measures. To view the Analysis dashboard, select Analysis from the pull-down list on the left of the main
menu.
Menu Options
The Analysis dashboard does not include menu options other than those available from the main menu.
• Chart definitions. The pane on the left lists the charts that can be run. No charts are provided by default, but you
can create any number of charts. A chart defines the metrics to monitor. There are two types of charts: metric
and entity. A metric chart monitors a single metric for one or more entities. An entity chart monitors one or more
metrics for a single entity.
Note: You can change the color assigned to a metric or entity by clicking that color box in the chart (left pane) and
then selecting a different color from the displayed palette.
• Chart monitors. When a chart definition is checked, the monitor appears in the middle pane. An Alerts & Events
monitor always appears first. The remaining monitors are determined by which charts are checked in the left pane.
You can customize the display by selecting a time interval (from 3 hours to a month) from the Range drop-down
(above the charts) and then refining the monitored period by moving the time interval end points to the desired
length.
• Alerts and events. Any alerts and events that occur during the interval specified by the time line in the middle
pane appear in the pane on the right. For more information, see Prism Element Alerts and Events Reference
Guide.
The following table describes each field in the Analysis dashboard. Some fields can include a slide bar on the right to
view additional information in that field. The displayed information is dynamically updated to remain current.
Note: For information about how the metrics are measured, see Understanding Displayed Statistics on page 55.
• Charts: Displays the set of defined charts. Check the box next to a chart name to run that chart in the middle
pane; the chart monitor appears in the middle pane shortly after you check the box. Uncheck the box to stop that
monitor and remove it from the middle pane. To edit a chart definition, click the pencil icon to the right of the
name. This opens the edit chart window, which is the same as the new chart window except for the title. To delete
a chart, click the cross icon on the right.
• New Metric Chart: Allows you to create a chart that tracks a single metric for one or more entities. For more
information, see Creating a Metric Chart on page 336.
• New Entity Chart: Allows you to create a chart that tracks one or more metrics for a single entity. For more
information, see Creating an Entity Chart on page 335.
• (range time line and monitor period): Displays a time line that sets the duration for the monitor displays. To set
the time interval, select the time period (3 hour, 6 hour, 1 day, 1 week, WTD [week to date], 1 month) from the
Range field pull-down menu (far right of the time line). To customize the monitor period, you can move through
the timeline by manipulating the translucent blue bar at the top of the Analysis pane. To reach a specific point in
time, use the solid blue bar at the bottom of the Analysis pane.
  • By default, if you select a scale that is greater than the current one, the translucent time scrubber tends to jump
  to the most recent record.
  • If you need to move further back in time than the scrubber allows at the current scale, increase the scale of the
  scrubber.
  • After you have the scale of your choice, move the translucent scrubber across the timeline to the period in time
  that you wish to examine.
  • After you have a time period selected, slide the solid blue time slider to a specific point in time.
  • To move down further into the time period, lower the scale and move the scrubber accordingly.
  • The lowest choice for scale is 3 hours, but you can shrink the translucent scrubber down to approximately five
  minutes within the UI.
  • When exporting the charts, the selected scale is used for the file regardless of whether the scrubber has been
  resized to a custom value.
• Alerts & Events Monitor: Displays a monitor of alert and event messages that were generated during the time
interval. Alerts and events are tracked by a moving histogram, with each bar indicating the number of messages
generated during that time. The message types are color coded in the histogram bars (critical alert = red, warning
alert = orange, informational alert = blue, event = gray).
• (defined chart monitors): Displays monitors for any enabled (checked) charts. In the figure above, three charts
are enabled (memory usage, CPU/memory, and disk IOPS). You can export the chart data by clicking the chart
header. This displays a drop-down menu to save the data in CSV or JSON format. It also includes a chart link
option that displays the URL to that chart, which you can copy to a clipboard and use to import the chart.
• Alerts: Displays the alert messages that occurred during the time interval. For more information, see Alerts
Dashboard in the Prism Element Alerts and Events Reference Guide. Clicking a message causes the monitor line to
move to the time when that alert occurred.
• Events: Displays the event messages that occurred during the time interval. Clicking a message causes the
monitor line to move to the time when that event occurred.
Procedure
2. In the Analysis Dashboard on page 333, click New > New Entity Chart.
The New Entity Chart dialog box appears.
Note: If you are creating this chart for Prism Central, the list spans the registered clusters. Otherwise, the list is
limited to the current cluster.
d. Metric: Select a metric from the drop-down list. (Repeat to include additional metrics.)
For descriptions of the available metrics, see Chart Metrics on page 336.
4. When all the field entries are correct, click the Save button.
The Analysis dashboard reappears with the new chart appearing in the list of charts on the left of the screen.
Procedure
2. In the Analysis Dashboard on page 333, click New > New Metric Chart.
The New Metric Chart dialog box appears.
Note: If you are creating this chart for Prism Central the list spans the registered clusters. Otherwise, the list is
limited to the current cluster.
4. When all the field entries are correct, click the Save button.
The Analysis dashboard reappears with the new chart appearing in the list of charts on the left of the screen.
Chart Metrics
The following metrics can be added to charts.
Note: The mapping between a metric and an entity type is hypervisor dependent.
Each entry lists the metric name, the entity types to which the metric applies, a description, and the metric ID.
• Content Cache Hit Rate (%) (Host, Cluster): Content cache hits over all lookups. ID: CONTENT_CACHE_HIT_PPM
• Content Cache Logical Memory Usage (Host, Cluster): Logical memory (in bytes) used to cache data without deduplication. ID: CONTENT_CACHE_LOGICAL_MEMORY_USAGE_BYTES
• Content Cache Logical SSD Usage (Host, Cluster): Logical SSD memory (in bytes) used to cache data without deduplication. ID: CONTENT_CACHE_LOGICAL_SSD_USAGE_BYTES
• Content Cache Physical Memory Usage (Host, Cluster): Real memory (in bytes) used to cache data by the content cache. ID: CONTENT_CACHE_PHYSICAL_MEMORY_USAGE_BYTES
• Content Cache SSD Usage (Host, Cluster): Real SSD usage (in bytes) used to cache data by the content cache. ID: CONTENT_CACHE_PHYSICAL_SSD_USAGE_BYTES
• Disk I/O Bandwidth (Host, Cluster, Disk, Storage Pool): Data transferred per second in KB/second from disk. ID: STATS_BANDWIDTH
• Disk I/O Bandwidth - Read (Host, Cluster, Disk, Storage Pool): Read data transferred per second in KB/second from disk. ID: STATS_READ_BANDWIDTH
• Disk I/O Bandwidth - Write (Host, Cluster, Disk, Storage Pool): Write data transferred per second in KB/second from disk. ID: STATS_WRITE_BANDWIDTH
• Disk IOPS - Read (Host, Cluster, Disk, Storage Pool): Input/Output read operations per second from disk. ID: STATS_NUM_READ_IOPS
• Disk IOPS - Write (Host, Cluster, Disk, Storage Pool): Input/Output write operations per second from disk. ID: STATS_NUM_WRITE_IOPS
• GPU video decoder usage (Virtual Machine): GPU video decoder usage in percentage. ID: DECODER_USAGE_PPM
• GPU video encoder usage (Virtual Machine): GPU video encoder usage in percentage. ID: ENCODER_USAGE_PPM
• Hypervisor CPU Ready Time (%) (Virtual Machine): Percentage of time that the virtual machine was ready, but could not get scheduled to run. ID: STATS_HYP_CPU_READY_TIME
• Hypervisor IOPS - Read (Host, Cluster, Virtual Machine): Input/Output read operations per second from the hypervisor. ID: STATS_HYP_NUM_READ_IOPS. Note: The Cluster entity is applicable for the AHV hypervisor; the Virtual Machine entity is applicable for ESXi.
• Logical Usage (Storage Container): Logical usage of storage (physical usage divided by replication factor). ID: STATS_UNTRANSFORMED_USAGE
• Overall Memory Usage (%) (Host, Cluster): Percentage of memory usage used by AHV with HA. ID: OVERALL_MEMORY_USAGE_PPM
• Network Rx Bytes (Virtual Machine): Network received bytes reported by the hypervisor. ID: HYPERVISOR_NUM_RECEIVED_BYTES
• Network Tx Bytes (Virtual Machine): Network transmitted bytes reported by the hypervisor. ID: HYPERVISOR_NUM_TRANSMITTED_BYTES
• Storage Controller IOPS (Host, Cluster, Storage Container, Virtual Machine, Volume Group, Virtual Disk): Input/Output operations per second from the Storage Controller. ID: STATS_CONTROLLER_NUM_IOPS. Note: The Host entity is applicable for the AHV hypervisor only.
• Storage Controller IOPS - Read (Host, Cluster, Storage Container, Virtual Machine, Volume Group, Virtual Disk): Input/Output read operations per second from the Storage Controller. ID: STATS_CONTROLLER_NUM_READ_IOPS. Note: The Host entity is applicable for the AHV hypervisor only.
• Storage Controller IOPS - Read (%) (Cluster, Storage Container, Virtual Machine, Volume Group, Virtual Disk): Percent of Storage Controller IOPS that are reads. ID: STATS_CONTROLLER_READ_IO_PPM. Note: The Host entity is applicable for the AHV hypervisor only.
• Storage Controller IOPS - Write (Host, Cluster, Storage Container, Virtual Machine, Volume Group, Virtual Disk): Input/Output write operations per second from the Storage Controller. ID: STATS_CONTROLLER_NUM_WRITE_IOPS
• Storage Controller IOPS - Write (%) (Cluster, Storage Container, Virtual Machine, Volume Group, Virtual Disk): Percent of Storage Controller IOPS that are writes. ID: STATS_CONTROLLER_WRITE_IO_PPM
• Storage container own usage (Storage Container): Storage container's own usage + Reserved (not used). ID: NEW_CONTAINER_OWN_USAGE_LOGICAL
• Swap Out Rate (Virtual Machine): Rate of data being swapped out. ID: STATS_HYP_SWAP_OUT_RATE
• Virtual NIC received packets with error (Virtual Machine): Number of packets received with errors by the Virtual NIC. ID: STATS_NETWORK_ERROR_RECEIVED_PACKETS. Note: The Virtual Machine entity is applicable for the ESXi hypervisor.
• Virtual NIC bytes received rate (Virtual Machine): Virtual NIC bytes received rate in kbps. ID: STATS_NETWORK_RECEIVED_RATE
• Virtual NIC bytes transmitted rate (Virtual Machine): Virtual NIC bytes transmitted rate in kbps. ID: STATS_NETWORK_TRANSMITTED_RATE
• Virtual NIC dropped transmitted packets (Virtual Machine): Number of dropped transmitted packets by the Virtual NIC. ID: STATS_NETWORK_DROPPED_TRANSMITTED_PACKETS
• Virtual NIC receive packets dropped (Virtual Machine): Number of receive packets dropped by the Virtual NIC. ID: STATS_NETWORK_DROPPED_RECEIVED_PACKETS
Procedure
3. Click the Range drop-down list and set the range to 1 Month.
The 1 Month range shows the data in monthly segments.
4. Export the performance data into a CSV or JSON file. Click the drop-down arrow next to the cluster chart you
want to export.
• To view the Task dashboard, log in to Prism Element web console, and select Home > Tasks.
• An icon also appears in the main menu when one or more tasks are active (running or completed within the last 48
hours). The icon appears blue when a task runs normally, yellow when it generates a warning, or red when it fails.
Clicking the icon displays a drop-down list of active tasks; clicking the View All Tasks button at the bottom of
that list displays a details screen with information about all tasks for this cluster.
Note: The drop-down list of active tasks may include a Clean Up button (top right). Clicking this button removes
from the list any tasks that are no longer running. However, this applies to the current session only. The full active
list (including the non-running tasks) appears when you open a new Prism Element web console session.
• When multiple tasks are active, you can filter the list by entering a name in the filter by field.
• You can also filter the list by clicking the Filters button and selecting the desired filter options.
Each task appears in the list for a minimum of one hour after completion, but how long that task remains in the list
depends on several factors. In general, the maximum duration is two weeks. However, tasks are rotated off the list
as new tasks arrive, so a task might disappear from the list much sooner when activity is high. In some cases a task
appears for longer than two weeks because the last task for each component is retained in the listing.
The Tasks list includes the following fields:
• Task: Specifies which type of operation the task is performing. Values: any cluster operation you can perform in
the Prism Element web console.
• Entity Affected: Displays the entity on which the task was performed. If the entity appears as a link, click it to
display the details. Values: entity description.
• Duration: Displays how long the task took to complete. Values: seconds, minutes, hours.
• You can specify one or more name servers. For more information, see Configuring Name Servers on
page 352.
• If Acropolis is enabled, you can configure one or more network connections. For more information, see Network
Configuration for VM Interfaces on page 165.
• You can create a whitelist of IP addresses that are allowed access. For more information, see Configuring a
Filesystem Whitelist on page 351.
• You can specify one or more NTP servers for setting the system clock. For more information, see Configuring
NTP Servers on page 353.
• You can configure one or more network switches for statistics collection. For more information, see Configuring
Network Switch Information on page 170.
• You can specify an SMTP mail server. For more information, see Configuring an SMTP Server on page 354.
• You can configure SNMP. For more information, see Configuring SNMP on page 354.
• You can configure a login banner page. For more information, see Configuring a Banner Page on page 364.
Caution:
• There is no user authentication for NFS access, and the IP address in the allowlist has full read or write
access to the data on the container.
• It is recommended to allow single IP addresses (with net mask such as 255.255.255.255) instead of
allowing subnets (with netmask such as 255.255.255.0).
• Using a Nutanix storage container as a general-purpose NFS or SMB share is not supported. Because
the Nutanix solution is VM-centric, the preferred mechanism is to deploy a VM that provides file share
services.
To add (or delete) an address to (from) the filesystem allowlist, do the following:
Procedure
1. Click the gear icon in the main menu and then select Filesystem Whitelists in the Settings page.
The Filesystem Whitelists dialog box appears.
3. To delete an entry from the allowlist, click the X icon for that entry in the Whitelist Entry list.
A window prompt appears to verify the action; click the OK button. The entry is removed from the list.
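After a client IP address is added to the allowlist, that client can mount a storage container directly over NFS. The following is a minimal sketch only; the cluster virtual IP address (10.10.10.50), the container name (ctr1), and the mount point are placeholder values, and the export path assumes the container is exported under its container name:
client# mount -t nfs 10.10.10.50:/ctr1 /mnt/ctr1
Keep in mind the caution above: there is no user authentication for NFS access, so any allowlisted client has full access to the data on the container.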
Procedure
2. Click the gear icon in the main menu and then select Name Servers in the Settings page.
The Name Servers dialog box appears.
3. To add a name server, enter the server IP address in the Server IP field and then click the Add button to the right
of that field.
The server is added to the IP Address list (below the Server field).
Note: Changes in name server configuration may take up to 5 minutes to take effect. Functions that rely on DNS
may not work properly during this time. You can configure a maximum of three name servers.
4. To delete a name server entry, click the X icon for that server in the IP Address list.
A window prompt appears to verify the action; click the OK button. The server is removed from the list.
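To confirm that a newly added name server is resolving queries as expected, you can run a lookup from a Controller VM. This is a sketch only; the host name and name server IP address are placeholders, and it assumes the nslookup utility is available on the Controller VM:
nutanix@cvm$ nslookup myhost.example.com 10.10.10.53
If the name server returns the expected address, DNS-dependent functions should start working once the configuration change has propagated (up to 5 minutes, as noted above).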
• Where possible, synchronize Nutanix clusters with internal NTP sources to ensure stability from both a network
and a security vulnerability perspective. When you cannot avoid using an external NTP source, Nutanix
recommends that you use a time source maintained by your national government.
Note: Using a pool.ntp.org server is not appropriate for all circumstances. For more context, see the Additional
Notes section.
Procedure
2. Click the gear icon in the main menu and then select NTP Servers in the Settings page.
The NTP Servers dialog box appears.
3. To add an NTP server entry, enter the server IP address or fully qualified host name in the NTP Server field and
then click the Add button to the right of that field.
The name or address is added to the HOST NAME OR IP ADDRESS list (below the NTP Server field).
4. To delete an NTP server entry, click the cross icon for that server in the Servers list.
A window prompt appears to verify the action; click the OK button. The server is removed from the list.
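To confirm that the cluster is synchronizing with the configured NTP servers, you can query the NTP daemon from a Controller VM. This is a sketch only and assumes the ntpq utility is available on the Controller VM:
nutanix@cvm$ ntpq -pn
In the output, an asterisk (*) next to a server entry indicates the peer that the Controller VM is currently synchronized with.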
Note: Because the Nutanix CVM has FIPS authentication enabled, the SMTP client in the Nutanix CVM is incompatible
with an SMTP server that has CRAM-MD5 authentication enabled.
Procedure
2. Click the gear icon in the main menu and then select SMTP Server in the Settings page.
The SMTP Server Settings dialog box appears.
a. Host Name or IP Address: Enter the IP address or fully qualified domain name for the SMTP server.
b. Port: Enter the port number to use.
The standard SMTP ports are 25 (unencrypted), 587 (TLS), and 465 (SSL). For the complete list of required
ports, see Port Reference.
c. Security Mode: Enter the desired security mode from the pull-down list.
The options are NONE (unencrypted), STARTTLS (use TLS encryption), and SSL (use SSL encryption).
d. User: Enter a user name.
The User and Password fields apply only when a secure option (STARTTLS or SSL) is selected. The user
name might need to include the domain depending on the authentication process.
e. Password: Enter the user password.
f. From Email Address (optional): Enter an e-mail address that appears as the sender address.
By default, alert and cluster status information e-mails display [email protected] as the sender address.
You have the option to replace that address with a custom address by entering a sender address in this field.
4. When all the fields are correct, click the Save button.
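To verify that the configured SMTP server is reachable and accepts a secure connection on the chosen port, you can test the connection from any machine that has network access to the server. This is a sketch only; the host name and port are placeholders, and it assumes the openssl command-line tool is available:
$ openssl s_client -connect smtp.example.com:587 -starttls smtp
A successful TLS handshake followed by an SMTP banner indicates that the server is reachable and that STARTTLS is working on that port.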
Configuring SNMP
About this task
The Simple Network Management Protocol (SNMP) is an application layer protocol that facilitates the exchange
of management information between network devices. Nutanix systems include an SNMP agent that provides
interoperability with industry standard SNMP manager systems. Nutanix also provides a custom Management
Information Base (MIB) for Nutanix-specific information.
Note:
• The Net-SNMP package version 5.7.2 does not support 256-bit AES encryption.
Procedure
2. Click the gear icon in the main menu and then select SNMP in the Settings page.
The SNMP Configuration dialog box appears.
3. To enable SNMP for this cluster, select the Enable SNMP checkbox. To disable SNMP, clear the checkbox.
Note:
SNMP traps are sent by the Controller VM that functions as the Alert Manager leader. If you need to
open your firewall to receive the traps, keep in mind that the Alert Manager leader can rotate during
tasks like AOS or host upgrades. Therefore, it might be necessary to open all the Controller VM IP
addresses to ensure that the traps are received.
4. To view the Nutanix MIB (NUTANIX-MIB.txt), click the View MIB link. To download NUTANIX-MIB.txt,
right-click and select the appropriate download action for your browser and then copy NUTANIX-MIB.txt to your
SNMP manager systems.
See your SNMP manager documentation for instructions on how to install the Nutanix MIB.
5. To add an SNMP transport, click the Transports tab and the New Transport button, and then do the following
in the indicated fields. An SNMP transport is a combination of the transport protocol and port number on which
you want the Nutanix SNMP agent to receive queries. SNMP transports enable you to combine transport protocols
and port numbers other than the default port number. The port numbers that are specified in SNMP transports are
unblocked on the Controller VM, making them available to receive queries:
Note: To return to the SNMP Configuration window without saving, click the Cancel button.
• Trap Username: This field is displayed if you select v3 in the SNMP Version. Select a user from the
drop-down list.
• Community: This field is displayed if you select v2c in the SNMP Version. The default value for v2c
trap community is public, or you can enter any other name of your choice.
All users added previously (see Step 5) appear in the drop-down list. You cannot add a trap receiver entry
until at least one user has been added.
d. Address: Enter the target address.
An SNMP target address specifies the destination and user that receives outgoing notifications, such as trap
messages. SNMP target address names must be unique within the managed device.
e. Port: Enter the port number to use.
The standard SNMP port number is 161. For the complete list of required ports, see Ports and Protocols.
f. Engine ID: Optionally, enter an engine identifier value, which must be a hexadecimal string between 5 and
32 characters long.
If you do not specify an engine ID, an engine ID is generated for you for use with the receiver. Every SNMP
v3 agent has an engine ID that serves as a unique identifier for the agent. The engine ID is used with a
hashing function to generate keys for authentication and encryption of SNMP v3 messages.
g. Inform: Select True from the drop-down list to use inform requests as the SNMP notification method; select
False to use traps as the SNMP notification method.
SNMP notifications can be sent as traps or inform requests. Traps are one-way transmissions; they do not
require an acknowledgment from the receiver. Informs expect a response. If the sender never receives a
response, the inform request can be sent again. Therefore, informs are more reliable than traps. However,
informs consume more resources. Unlike a trap, which is discarded as soon as it is sent, an inform request
must be held in memory until a response is received or the request times out. Also, traps are sent only once,
while an inform may be retried several times. The retries increase traffic and add overhead on the network.
Thus, traps and inform requests provide a trade-off between reliability and resources.
Note: The SNMP server is blocked if it doesn't get a response to the inform traps in five consecutive
attempts.
h. Transport Protocol: Select the protocol to use from the drop-down list.
The options are TCP, TCP6, UDP, and UDP6.
i. When all the fields are correct, click the Save button (lower right).
This saves the configuration and redisplays the dialog box with the new trap entry appearing in the list.
j. To test all configured SNMP traps, click the Traps tab, and then click Test All.
The Nutanix cluster sends test alerts to all the SNMP trap receivers configured on the cluster.
8. To edit a user or trap receiver entry, click the appropriate tab (Users or Traps) and then click the pencil icon for
that entry in the list.
An edit window appears for that user or trap receiver entry with the same fields as the add window. (Transport
entries cannot be edited.) Enter the new information in the appropriate fields and then click the Save button.
Nutanix MIB
Overview
The Simple Network Management Protocol (SNMP) enables administrators to monitor network-attached devices for
conditions that warrant administrative attention. In the Nutanix SNMP implementation, information about entities in
the cluster is collected and made available through the Nutanix MIB (NUTANIX-MIB.txt). The Nutanix enterprise
tree is located at 1.3.6.1.4.1.41263.
The Nutanix MIB is divided into the following sections:
Important: A statistic (counter) value resets to zero and starts increasing again after it reaches the maximum limit
defined for its corresponding data type. The counter reinitialization is compliant with the RFC 2578 standard.
For example, the vmRxBytes statistic monotonically increases until it reinitializes on reaching a maximum
value of its data type (Counter64).
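Once SNMP is enabled and an SNMP v3 user is configured, you can walk the Nutanix enterprise subtree from your SNMP manager to confirm that the agent is responding. The following is a sketch only; the user name, passwords, and cluster virtual IP address are placeholders, and it assumes the standard Net-SNMP snmpwalk utility with SHA authentication and AES (128-bit) privacy, in line with the encryption limitation noted earlier:
$ snmpwalk -v3 -l authPriv -u snmpuser -a SHA -A <auth-password> -x AES -X <priv-password> <cluster-virtual-ip> 1.3.6.1.4.1.41263
If the configuration is correct, the command returns the Nutanix MIB objects described below.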
• clusterVersion (Display string): Cluster version number. This is the Nutanix core package version expected on all the Controller VMs.
• clusterStatus (Display string): Current status of the cluster. Possible values are started and stopped.
• clusterTotalStorageCapacity (Unsigned 64-bit integer): Total storage capacity of the cluster, in bytes.
• clusterUsedStorageCapacity (Unsigned 64-bit integer): Storage used on the cluster, in bytes.
• clusterIops (Unsigned 64-bit integer): Average I/O operations per second (IOPS) in the cluster.
• clusterLatency (Unsigned 64-bit integer): Average I/O latency in the cluster, in milliseconds.
• clusterIOBandwidth (Unsigned 64-bit integer): Cluster-wide I/O bandwidth in kilobytes per second (KBps).
• svtIndex (Signed 32-bit integer): Unique index that is used to identify an entry in the software version information table.
• cstIndex (Signed 32-bit integer): Unique index that is used to identify an entry in the service status information table.
• cstControllerVMId (Display string): Nutanix Controller VM identification number.
• cstDataServiceStatus (Display string): Status of the core data services on the Controller VM.
• hypervisorIndex (Signed 32-bit integer): Number that is used to uniquely identify an entry in the hypervisor information table.
• hypervisorCpuCount (Unsigned 32-bit integer): Number of CPU cores available to the hypervisor instance.
• hypervisorCpuUsagePercent (Unsigned 32-bit integer): Percentage of CPU resources in use by the hypervisor instance.
• hypervisorMemoryUsagePercent (Unsigned 64-bit integer): Memory in use by the hypervisor instance, as a percentage of the total available memory.
• hypervisorReadIOPerSecond (Unsigned 32-bit integer): Total number of read I/O operations per second (IOPS) being performed by the hypervisor.
• hypervisorWriteIOPerSecond (Unsigned 32-bit integer): Total number of write I/O operations per second (IOPS) being performed by the hypervisor.
• vmIndex (Signed 32-bit integer): Number that is used to uniquely identify an entry in the VM information table.
• vmCpuCount (Unsigned 32-bit integer): Number of CPU cores available to the VM.
• vmCpuUsagePercent (Unsigned 32-bit integer): Percentage of CPU resources in use by the VM.
• vmMemoryUsagePercent (Unsigned 64-bit integer): Memory in use by the VM, as a percentage of the total allocated memory.
• vmReadIOPerSecond (Unsigned 32-bit integer): Total number of read I/O operations per second (IOPS) being performed by the VM.
• vmWriteIOPerSecond (Unsigned 32-bit integer): Total number of write I/O operations per second (IOPS) being performed by the VM.
• dstIndex (Signed 32-bit integer): Number that is used to uniquely identify an entry in the disk information table.
• dstDiskId (Display string): Disk identification number. The number is unique for each disk.
• dstNumRawBytes (Unsigned 64-bit integer): Physical storage capacity on the device, in terms of number of raw bytes.
• dstNumTotalBytes (Unsigned 64-bit integer): Usable storage on the device through its file system, in terms of number of usable bytes.
• dstNumFreeBytes (Unsigned 64-bit integer): Available storage on the device through its file system for non-root users, in terms of number of free bytes.
• dstNumTotalInodes (Unsigned 64-bit integer): Total number of usable inodes on the device through its file system.
• dstNumFreeInodes (Unsigned 64-bit integer): Total number of available (free) inodes on the device through its file system for non-root users.
• dstAverageLatency (Unsigned 64-bit integer): Average I/O latency of the disk, in microseconds (µs).
• dstIOBandwidth (Unsigned 64-bit integer): I/O bandwidth of the disk in kilobytes per second (KBps).
• dstNumberIops (Unsigned 64-bit integer): Current number of I/O operations per second (IOPS) for the disk.
• crtIndex (Signed 32-bit integer): Number that is used to uniquely identify an entry in the Controller VM resource information table.
• crtNumCpus (Signed 32-bit integer): Total number of CPUs allocated to the Controller VM.
• spitIOPerSecond (Signed 32-bit integer): Current number of I/O operations per second (IOPS) for this storage pool.
• spitAvgLatencyUsecs (Unsigned 64-bit integer): Average I/O latency for the storage pool, in microseconds.
• spitIOBandwidth (Unsigned 64-bit integer): I/O bandwidth of the storage pool in kilobytes per second (KBps).
• citIndex (Signed 32-bit integer): Number that is used to uniquely identify an entry in the storage container information table.
• citIOPerSecond (Signed 32-bit integer): Current number of I/O operations per second (IOPS) for this storage container.
• citAvgLatencyUsecs (Unsigned 64-bit integer): Average I/O latency for the storage container, in microseconds.
• citIOBandwidth (Unsigned 64-bit integer): I/O bandwidth of the storage container in kilobytes per second (KBps).
Trap Resolution
In addition to generating an SNMP trap when an alert condition is detected, Nutanix clusters generate a trap when an
alert condition is resolved. The resolved trap, named ntxTrapResolved, is generated regardless of whether the alert
condition is resolved manually or automatically. If a Prism Central alert is resolved in Prism Central, Prism Central
sends a resolved trap. If a Nutanix cluster alert is resolved in either the Prism Element web console or Prism Central,
both Prism Central and the cluster send a resolved trap.
In this section, the terms original alert and original trap are used to refer, respectively, to the alert and trap that
are generated when the alert condition is detected, and the terms resolved alert and resolved trap are used to refer,
respectively, to the alert and trap that are generated when the alert condition is resolved.
To enable you to associate a resolved trap with the original alert and original trap, the UUID of the original alert is
included in the original trap, as part of the alert message (ntxAlertDisplayMsg), and in the resolved trap, as a MIB
object (ntxAlertUuid).
A resolved trap has the following MIB objects:
• ntxAlertCreationTime (Unsigned 64-bit integer): Time of alert creation. The value is the number of seconds since the UNIX epoch (01/01/1970).
• ntxTrapName (Display string): Name of the trap that was generated when the alert condition was detected. This MIB object is included in resolved traps.
• ntxAlertUuid (Display string): UUID of the alert that was generated when the alert condition was detected. This MIB object is included in resolved traps.
• ntxAlertResolvedTime (Unsigned 64-bit integer): Time at which the alert was resolved. The value is the number of seconds since the UNIX epoch (01/01/1970). This MIB object is included in resolved traps.
Procedure
2. Click the gear icon in the main menu and then select Welcome Banner in the Settings page.
The Edit Welcome Banner dialog box appears.
3. Enter (paste) the desired content in HTML format in the pane on the left.
Only safe HTML tags are supported. Inline event handlers, scripts, and externally-sourced graphics are not
allowed.
4. Click the Preview button to display the banner in the pane on the right.
5. If the banner is not correct, update the HTML code as needed until the preview pane displays the desired message.
Note: A live banner page includes an Accept terms and conditions bar at the bottom. Clicking on this bar sends the
user to the login page.
Procedure
2. Click Settings from the drop-down menu of the Prism Element web console and then select vCenter
Registration in the Settings page.
The vCenter Server that is managing the hosts in the cluster is auto-discovered and displayed.
4. Type the administrator user name and password of the vCenter Server in the Admin Username and Admin
Password fields.
5. Click Register.
During the registration process, a certificate is generated to communicate with the vCenter Server. If the
registration is successful, a relevant message is displayed in the Tasks dashboard. The Host Connection field
displays Connected, which implies that all the hosts are being managed by the registered vCenter Server.
• Ensure that you unregister the vCenter Server from the cluster before changing the IP address of the vCenter
Server. After you change the IP address, register the vCenter Server with the cluster again by using the new IP
address.
• The vCenter Server Registration page displays the registered vCenter Server. If for some reason the Host
Connection field changes to Not Connected, it implies that the hosts are being managed by a different vCenter
Server. In this case, a new vCenter entry appears with the host connection status as Connected, and you need to
register with this vCenter Server.
Procedure
2. Click Settings from the drop-down menu of the Prism Element web console and then select vCenter
Registration in the Settings page.
A message that the cluster is already registered to the vCenter Server is displayed.
3. Type the administrator user name and password of the vCenter Server in the Admin Username and Admin
Password fields.
4. Click Unregister.
If the credentials are correct, the vCenter Server is unregistered from the cluster and a relevant message is
displayed in the Tasks dashboard.
Procedure
Note:
• This feature converts your existing ESXi cluster to an AHV cluster. You cannot start the conversion
process on the AHV cluster.
• Do not remove the hosts from the vCenter Server if you want to perform the reverse conversion process
(AHV to ESXi).
• This feature is supported on the clusters with multiple hypervisors (combination of ESXi nodes with one
AHV node or multiple AHV nodes).
• Decreased VM downtime: With the new workflow of converting the nodes in the cluster in a rolling manner, the
VM downtime is reduced to only the shutdown time and conversion time required for that particular VM. The VM
downtime has been reduced from approximately 3 to 4 hours to less than 5 minutes.
• Prism state: The Prism console is responsive during the conversion process. However, Prism goes into read-only
state.
• State of the VM: The current state of the VM is preserved and the VM is brought back into the same state post
conversion. For example, the VM is automatically powered on post conversion if that VM was powered on before
you started the conversion.
• Preservation of the MAC addresses of the VM NICs: After cluster conversion to AHV or ESXi, the MAC
addresses of the VM NICs are preserved. The preservation of the IP address depends on the operating system.
Typically, some Linux operating systems preserve the IP address when the MAC address is preserved, but
Windows operating systems do not.
• VMs that were created after conversion to AHV are retained post conversion back to ESXi.
Prerequisites
• Before performing In-place hypervisor conversion, ensure that you resolve all the NCC health check alerts
(warnings, failures and errors) and upgrade the firmware to the latest version using LCM firmware upgrade.
Note: You can convert an ESXi cluster, which hosts a Prism Central VM, to an AHV cluster without installing
NGT in the Prism Central VM. After the conversion, the Prism Central VM starts successfully.
General Limitations
In-place hypervisor conversion has the following limitations:
Supported ESXi versions: 5.5, 6.0, 6.5, 6.7, 7.0, and 8.0.
All the versions that are included in the ISO whitelist (see the ~/foundation/config/iso_whitelist.json file) bundled
with AOS are supported. If you have upgraded Foundation by using standalone Foundation, the ISO whitelist is
invalidated; in this case, refer to the original whitelist that was present before you performed the upgrade. To verify
the JSON file that was bundled with AOS, see the nutanix-packages.json file.
The following component requirements apply:
vSwitch:
• Each host must have only one external vSwitch. If a host has more than one external vSwitch, conversion
validation fails.
• Ensure that the NIC failover configuration on the vSwitch has one active NIC and one or more passive NICs.
• Active/active load balancing policies are not supported and are converted to active/passive on AHV.
• For a standard switch configuration, it is recommended that all the port groups are present on all the hosts
because the conversion process might move the VMs to a different host.
Distributed vSwitch: Each host must have only one external distributed vSwitch. If a host has more than one
external distributed vSwitch, conversion validation fails.
Internal vSwitch: Internal vSwitches other than the Nutanix vSwitch are not supported.
• Only VMs with flat disks are supported. The delta disks are not supported.
• Only IDE and SCSI storage controllers are supported for automatic conversion. SATA and PCI disks are not
supported.
• On Windows VMs, set the SAN policy to OnlineAll for non-boot SCSI disks so that they can be automatically
brought online (see the command sketch after this list). For more information about setting the SAN policy, see
Bringing Multiple SCSI Disks Online.
• VMs that have NFS datastore folders with vSphere tagging cannot be converted.
• Virtual machines with attached volume groups or shared virtual disks are not supported.
• After reverting back to ESXi from AHV, the VMs are converted to the maximum hardware version that is
supported by that specific ESXi version.
• Guest OS network interfaces on Linux VMs may change to generic type during ESXi to AHV conversion (for
example, in RHEL 7 network interface enoXXXX changes to eth0). You may have to reconfigure the network
settings according to changes post conversion.
• Guest OS type for the Linux VMs may change to a more generic type (for example RHEL 7 may change to Other
Linux 64-bit) during the conversion back from AHV to ESXi.
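The SAN policy mentioned in the list above can be set from an elevated command prompt inside the Windows guest by using the built-in diskpart utility. This is a sketch only; run it in the guest VM before starting the conversion:
C:\> diskpart
DISKPART> san
DISKPART> san policy=OnlineAll
The first san command displays the current policy, and the second sets it to OnlineAll so that non-boot SCSI disks are brought online automatically after conversion.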
Windows
• Windows 2008 R2 or later versions
• Windows 7 or later versions
Note: Only 64-bit operating systems are supported.
ESXi to AHV
After you start the conversion process, all the nodes in the cluster are converted in a rolling manner to AHV one node
at a time. During conversion, the first node is placed in the maintenance mode and all the VMs that are running on the
node are migrated to other ESXi nodes in the cluster using the HA and DRS feature. After the VMs are migrated, the
node is converted to AHV. After the node is successfully converted, all the VMs that were migrated to ESXi are
migrated back one at a time to AHV. Similar steps are performed for the rest of the nodes in the ESXi cluster until
the last ESXi node. The VMs that are running on the last ESXi node in the cluster are converted and migrated to the
AHV hosts, and then the ESXi host is converted to AHV. If any error occurs during VM conversion, appropriate alerts
or error messages are displayed. When converting a VM, the source vDisk is not modified. Therefore, if there are any
fatal errors during imaging, or during storing or restoring of the configuration, the conversion is stopped and you are
prompted to abort the conversion.
Note: After conversion, do not remove the host from the vCenter Server until you are sure that the conversion is
successful, because it may impact the reverse conversion process.
AHV to ESXi
During the reverse conversion (AHV to ESXi), the process of conversion is similar. Additionally, if the cluster does
not have the ESXi ISO stored on the cluster, you need to provide the ESXi ISO image during the conversion process.
Note:
• The image that you provide should be of the same major ESXi version that you have used during ESXi
to AHV conversion.
• If new nodes were added to the cluster after conversion to AHV, then these nodes need to be removed
before starting the reverse conversion process.
After conversion to ESXi, all the hosts are automatically registered to the vCenter Server.
Procedure
2. Click the gear icon in the main menu and then select Convert Cluster in the Settings page.
4. Select the state of the VMs that you want post conversion from the VM Boot Options drop-down menu.
» Preserve power state of the user VMs: Select this option if you want to keep the original power state of
the VMs. For example, if you want the VMs to be in a running state post conversion automatically, select this
option.
» Power Off User VMs: Select this option if you want to power off all the VMs running on the ESXi cluster before
you start the conversion process. After conversion, these VMs will be in the powered-off state.
5. Click Validate to enter vCenter Server credentials and to verify whether you have met all the requirements.
6. Enter the IP address of the vCenter Server in the vCenter Server IP Address field, along with the administrator
user name and password of the vCenter Server in the Username and Password fields.
7. Click Yes.
A validation that you have met all the requirements is performed. Once the validation is successful the conversion
process proceeds. If validation fails (for any reason), a relevant message to take appropriate action is displayed.
Note: The cluster changes to Read Only mode. An Oops - server Error message may be displayed when the Prism
node is undergoing conversion. Wait for the conversion process to complete. You can access another Controller VM
for read-only Prism operations.
The entire conversion process may take 3 to 4 hours depending on the number of nodes in your cluster.
However, the VM downtime will be less than 5 minutes because all the nodes in the cluster are converted in a
rolling manner. You can also track the progress of the conversion by logging in to the web console again.
Procedure
2. Click the gear icon in the main menu and then select Convert Cluster in the Settings page.
» Preserve power state of the user VMs: Select this option if you want to keep the original power state
of the VMs. For example, if you want the VMs to be in a running state post conversion automatically, select
this option.
» Power Off User VMs: Select this option if you want to power off all the VMs running on the ESXi cluster before
you start the conversion process. After conversion, these VMs will be in the powered-off state.
5. Click Validate to enter vCenter Server credentials and to verify whether you have met all the requirements.
6. Enter the IP address of the vCenter Server in the vCenter Server IP Address field, along with the administrator
user name and password of the vCenter Server in the Username and Password fields.
7. Click Yes.
A validation that you have met all the requirements is performed. Once the validation is successful the
conversion process proceeds. If validation fails (for any reason), a relevant message to take appropriate action is
displayed.
9. (Optional) If you have not saved the ESXi ISOs at the foundation/isos/hypervisor/esx/ location, click Choose
File and select the ESXi ISO.
Note: If you have different versions of ESXi running in your cluster, you have to perform this step for every
version of ESXi ISO.
• Imaging Issue: If this issue occurs, you can only stop the cluster conversion process, and a relevant message is
displayed. Aborting the conversion reverts the cluster to its original state.
• VM Conversion Issue: If this issue occurs, you can either stop the conversion or continue with the process.
If you decide to continue with the process, cluster conversion is completed keeping the current state of the VM.
After conversion is completed, you must perform appropriate actions to bring the VMs back.
Internationalization (i18n)
The following table lists all the supported and unsupported entities in UTF-8 encoding.
• User management
• Chart name
Caution: None of the above entities can be created on Hyper-V because of the DR limitations.
Localization (L10n)
Nutanix localizes the user interface in Simplified Chinese and Japanese. All the static screens are translated
into the selected locale language. All the dashboards (including tool tips) and menus of Prism Element are localized.
You have the option to change the language settings of the cluster from English (default) to Simplified Chinese or
Japanese. For more information, see Changing the Language Settings on page 375.
If the Prism Central instance is launched from Prism Element, the language settings of Prism Central take
precedence over those of Prism Element.
You can also create new users with the specified language setting. For more information, see User Management.
• Logical entities that do not have a contextual translation available in the localized language are not localized.
• The AOS generated alerts and events are not localized to the selected locale language.
• The following strings are not localized: VM, CPU, vCPU, Language Settings, licensing details page, hardware names,
storage denominations (GB, TB), About Nutanix page, EULA, service names (SNMP, SMTP), and hypervisor types.
Procedure
2. Click the gear icon in the main menu and then select Language Settings in the Settings page.
The Language Settings dialog box appears.
3. To change the language, select the desired language from the Languages field pull-down menu.
The English language is selected by default, but you can change that to either Simplified Chinese or
Japanese.
4. To change the locale settings (date, time, calendar), select the appropriate region from the Region field drop-
down menu.
A default locale is set based on the language setting. However, you can change the region to display the date, time,
and calendar in some other format. This format for date, time, and calendar is applied to the entire cluster.
Hyper-V Setup
Adding the Cluster and Hosts to a Domain
After completing foundation of the cluster, you need to add the cluster and its constituent hosts to the
Active Directory (AD) domain. The adding of cluster and hosts to the domain facilitates centralized
administration and security through the use of other Microsoft services such as Group Policy and enables
administrators to manage the distribution of updates and hotfixes.
• If you have a VLAN segmented network, verify that you have assigned the VLAN tags to the Hyper-V hosts
and Controller VMs. For information about how to configure VLANs for the Controller VM, see the Advanced
Setup Guide.
• Ensure that you have valid credentials of the domain account that has the privileges to create a new computer
account or modify an existing computer account in the Active Directory domain. An Active Directory domain
created by using non-ASCII text may not be supported. For more information about usage of ASCII or non-ASCII
text in Active Directory configuration, see Internationalization (i18n) in Prism Element Web Console Guide.
1. Log on to the web console by using one of the Controller VM IP addresses or by using the cluster virtual IP address.
2. Click the gear icon in the main menu and select Join Cluster and Hosts to the Domain on the Settings
page.
3. Enter the fully qualified name of the domain that you want to join the cluster and its constituent hosts to in the
Full Domain Name field.
4. Enter the IP address of the name server in the Name Server IP Address field that can resolve the domain
name that you have entered in the Full Domain Name field.
5. In the Base OU Path field, type the OU (organizational unit) path where the computer accounts must be stored
after the host joins a domain. For example, if the organization is nutanix.com and the OU is Documentation, the
Base OU Path can be specified as OU=Documentation,DC=nutanix,DC=com
Specifying the Base OU Path is optional. When you specify the Base OU Path, the computer accounts are
stored in the Base OU Path within the Active Directory after the hosts join a domain. If the Base OU Path is not
specified, the computer accounts are stored in the default Computers OU.
6. Enter a name for the cluster in the Nutanix Cluster Name field.
The maximum length of the cluster name should not be more than 15 characters and it should be a valid
NetBIOS name.
7. Enter the virtual IP address of the cluster in the Nutanix Cluster Virtual IP Address field.
If you have not already configured the virtual IP address of the cluster, you can configure it by using this field.
8. Enter the prefix that should be used to name the hosts (according to your convention) in the Prefix field.
9. In the Credentials field, enter the logon name and password of the domain account that has the privileges to
create a new computer account or modify an existing computer account in the Active Directory domain.
Ensure that the logon name is in the DOMAIN\USERNAME format. The cluster and its constituent hosts require
these credentials to join the AD domain. Nutanix does not store the credentials.
What to do next
Create a Microsoft failover cluster. For more information, see Creating a Failover Cluster for Hyper-V on
page 376.
Procedure
1. Log on to the Prism Element web console by using one of the Controller VM IP addresses or by using the cluster
virtual IP address.
2. Click the gear icon in the main menu and select Configure Failover Cluster from the Settings page.
3. Type the failover cluster name in the Failover Cluster Name field.
The maximum length of the failover cluster name must not be more than 15 characters and must be a valid
NetBIOS name.
4. Type an IP address for the Hyper-V failover cluster in the Failover Cluster IP Address field.
This address is for the cluster of Hyper-V hosts that are currently being configured. It must be unique, different
from the cluster virtual IP address and from all other IP addresses assigned to the hosts and Controller VMs. It
must be in the same network range as the Hyper-V hosts.
5. In the Credentials field, type the logon name and password of the domain account that has the privileges to
create a new account or modify existing accounts in the Active Directory domain.
The logon name must be in the format DOMAIN\USERNAME. The credentials are required to create a failover
cluster. Nutanix does not store the credentials.
Procedure
3. Enter all the hosts that you want to add to the Failover cluster, and click Next.
Note:
If you select Yes, two tests fail when you run the cluster validation tests. The tests fail because the
internal network adapter on each host is configured with the same IP address (192.168.5.1). The
network validation tests fail with the following error message:
Duplicate IP address
The failures occur despite the internal network being reachable only within a host, so the internal
adapter can have the same IP address on different hosts. The second test, Validate Network
Communication, fails due to the presence of the internal network adapter. Both failures are benign and
can be ignored.
5. Enter a name for the cluster, specify a static IP address, and click Next.
6. Clear the All eligible storage to the cluster check box, and click Next.
7. Wait until the cluster is created. After you receive the message that the cluster is successfully created, click
Finish to exit the Cluster Creation wizard.
8. Go to Networks in the cluster tree and select Cluster Network 1 and ensure it is in the internal network by
verifying the IP address in the summary pane. The IP address must be 192.168.5.0/24 as shown in the following
screen shot.
9. Click the Action tab on the toolbar and select Live Migration Settings.
Note: If you do not perform this step, live migrations fail because the internal network is added to the live
migration network lists. Log on to SCVMM, add the cluster to SCVMM, check the host migration setting, and
ensure that the internal network is not listed.
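If you prefer PowerShell over the Failover Cluster Manager wizard, the preceding steps can be approximated with the FailoverClusters module. This is a sketch only, not a replacement for the procedure above; the cluster name, host names, and IP address are placeholders, and -NoStorage corresponds to clearing the All eligible storage to the cluster check box:
> New-Cluster -Name HVCLUSTER01 -Node HOST-1,HOST-2,HOST-3 -StaticAddress 10.10.10.60 -NoStorage
After the cluster is created, verify the cluster networks and the live migration settings as described in the steps above.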
• Join the hosts to the domain as described in Adding the Cluster and Hosts to a Domain on page 375.
• Verify that you have configured a service account for delegation. For more information on enabling delegation,
see the Microsoft documentation.
Procedure
1. Log on to the web console by using one of the Controller VM IP addresses or by using the cluster virtual IP
address.
2. Click the gear icon in the main menu and select Kerberos Management from the Settings page.
4. In the Credentials field, type the logon name and password of the domain account that has the privileges to
create and modify the virtual computer object representing the cluster in Active Directory. The credentials are
required for enabling Kerberos.
The logon name must be in the format DOMAIN\USERNAME. Nutanix does not store the credentials.
5. Click Save.
Note: Nutanix recommends that you configure Kerberos during a maintenance window to ensure cluster stability and
prevent loss of storage access for user VMs.
1. Log on to Domain Controller and perform the following for each Hyper-V host computer object.
a. Right-click the host object, and go to Properties. In the Delegation tab, select the Trust this computer
for delegation to specified services only option, and select Use any authentication protocol.
b. Click Add to add the cifs of the Nutanix storage cluster object.
Figure 56: Adding the cifs of the Nutanix storage cluster object
Example
> Setspn -S cifs/virat virat
> Setspn -S cifs/virat.sre.local virat
3. [Optional] To enable SMB signing feature, log on to each Hyper-V host by using RDP and run the following
PowerShell command to change the Require Security Signature setting to True.
> Set-SMBClientConfiguration -RequireSecuritySignature $True -Force
Caution: The SMB server will only communicate with an SMB client that can perform SMB packet signing,
therefore if you decide to enable the SMB signing feature, it must be enabled for all the Hyper-V hosts in the
cluster.
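If you enable SMB signing, you can confirm the setting on each Hyper-V host afterward. This is a sketch only and assumes the standard SmbShare module cmdlets available in Windows PowerShell on the host:
> Get-SmbClientConfiguration | Select-Object RequireSecuritySignature
The value must report True on every host in the cluster before you rely on signed SMB communication.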
• Set the user authentication method. For more information, see Configuring Authentication in the Nutanix
Security Guide.
• Add, edit, or delete local user accounts. For more information, see User Management in the Nutanix Security
Guide.
• Install or replace an SSL certificate. For more information, see Certificate Management in the Nutanix Security
Guide.
• Control SSH access to the cluster. For more information, see Controlling Cluster Access in the Nutanix Security
Guide.
• Enable data-at-rest encryption. For more information, see Data-at-Rest Encryption in the Nutanix Security
Guide.
• Enable network segmentation. For more information, see Securing Traffic Through Network Segmentation in
the Nutanix Security Guide.
• Review authentication best practices. For more information, see Authentication Best Practices in the Nutanix
Security Guide; for firewall requirements, see Firewall Requirements in the Nutanix Security Guide.
• Nutanix technical support can monitor your clusters and provide assistance when problems occur. For more
information, see Controlling Remote Connections on page 392, Configuring HTTP Proxy on page 393,
and Pulse Health Monitoring on page 383.
• Nutanix technical support maintains a portal that you can access to request assistance, download AOS updates, or
view documentation. For more information, see Accessing the Nutanix Support Portal on page 394.
• Nutanix supports a REST API that allows you to request information or run administration scripts for a Nutanix
cluster. For more information, see Accessing the REST API Explorer on page 396.
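As an illustration of the REST API, the following request retrieves basic cluster information. This is a sketch only; the cluster virtual IP address is a placeholder, -k skips certificate verification (use a trusted certificate in production), and it assumes the v2.0 gateway path and the default Prism port 9440:
$ curl -k -u admin https://<cluster-virtual-ip>:9440/PrismGateway/services/rest/v2.0/cluster
The REST API Explorer referenced above lists the available endpoints and lets you try requests interactively.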
• System alerts
• Current Nutanix software version
• Nutanix processes and Controller VM information
• Hypervisor details such as type and version
Pulse frequently collects important data, like system-level statistics and configuration information, to automatically
detect issues and help simplify troubleshooting. With this information, Nutanix Support can apply advanced analytics
to optimize your implementation and address potential problems.
Pulse sends messages through HTTPS (port 443) using TLS 1.2. The HTTPS request uses certificate authentication
to validate that Pulse has established communication with the Nutanix Remote Diagnostics service. The TLS 1.2
protocol uses public key cryptography and server authentication to provide confidentiality, message integrity, and
authentication for traffic passed over the Internet. For the complete list of required ports, see Ports and Protocols.
• Ensure that your firewall allows the IP addresses of all Controller VMs, because Pulse data is sent from each
Controller VM of the cluster to insights.nutanix.com over port 443 using the HTTPS REST endpoint (a basic
connectivity check is sketched after the note below).
• Ensure your firewall allows traffic from each Controller VM to the Nutanix Insights endpoints. For more
information, see Ports and Protocols.
Note: Firewall port requirements for the Controller VMs might not be necessary if you have a Prism Central
deployment. For more information, see Prism Central Proxy for Pulse Data in the Prism Central Admin Center
Guide.
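To confirm basic reachability of the Pulse endpoint before troubleshooting further, you can run a quick manual check from a Controller VM. The following is a minimal sketch, assuming curl is available on the Controller VM and that no intermediate proxy is required; it only verifies that an HTTPS connection to insights.nutanix.com can be opened, not that Pulse itself is enabled or healthy.
nutanix@cvm$ curl -sv --max-time 10 -o /dev/null https://insights.nutanix.com
If the TLS handshake completes, port 443 is reachable from that Controller VM; repeat the check from each Controller VM in the cluster.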
Remote Diagnostics
Remote Diagnostics enables Nutanix Support to request granular diagnostic information from Pulse-enabled clusters.
Pulse streams a collection of configuration data, metrics, alerts, events, and select logs to Nutanix Insights, providing
a high-level representation of the cluster state. If the Pulse data stream is not detailed enough to diagnose a specific
issue, Nutanix Support might need to collect more diagnostic data from the cluster. Remote Diagnostics allows
Nutanix to remotely collect the following data only.
• Logs
• To check the Remote Diagnostics status, log on to a Controller VM through SSH and run the following command.
nutanix@cvm$ zkcat /appliance/logical/nusights/collectors/kCommand/override_config
Note: This command prints the Remote Diagnostics status only if the Remote Diagnostics status is set explicitly.
The command does not print anything if the status is the default status.
Pulse Configuration
When you log in to the Prism Element web console for the first time after an installation or an upgrade, the system
checks whether Pulse is enabled. If it is not enabled, a pop-up window appears recommending that you enable Pulse.
Enabling Pulse
Note:
• For information on how to enable Pulse simultaneously in all the clusters registered to a Prism Central,
see Enabling Pulse in the Prism Central Admin Center Guide.
• Nutanix recommends that you enable Pulse to allow Nutanix Support to receive cluster data and deliver
proactive and context-aware support.
• Nutanix does not collect any personally identifiable information (PII) through Pulse.
Procedure
Disabling Pulse
Procedure
Container
• Container name (may be anonymized)
• Capacity (logical used and total)
• IOPS and latency
• Replication factor
• Compression ratio
• Deduplication ratio
• Inline or post-process compression
• Inline deduplication
• Post-process deduplication
• Space available
• Space used
• Erasure coding and savings
Controller VM (CVM)
• Details of logs, attributes, and configurations of services on each CVM
• CVM memory
• vCPU usage
• Uptime
• Network statistics
• IP addresses (may be anonymized)
Disk Status
• Performance stats
• Usage
Hypervisor
• Hypervisor software and version
• Uptime
• Installed VMs
• Memory usage
• Attached datastore
Datastore
• Usage
• Capacity
• Name
Protection Domains
• Name (may be anonymized)
• Count and names of VMs in each protection domain
Feature
• Feature ID
• Name
• State (enabled or disabled)
• Mode
Tasks
• Task ID
• Operation type
• Status
• Entities
• Message
• Completion percentage
• Creation time
• Modification time
Logs
• Component
• Timestamp
• Source file name
• Line number
• Message
Procedure
2. Start ncli.
nutanix@cvm$ ncli
<ncli>
3. Run the cluster start-remote-support command with the duration parameter set as required.
The duration parameter must be specified in minutes, even if you need a duration of several hours.
Note: You can keep the remote support connection tunnel open for between 0 and 72 hours.
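For example, to open the tunnel for 24 hours (1440 minutes), the command would look like the following sketch. The duration=minutes form is an assumption based on the parameter name described above; confirm the exact syntax with the built-in nCLI help.
<ncli> cluster start-remote-support duration=1440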
Procedure
1. Click the gear icon in the main menu and then select Remote Support in the Settings page.
The Remote Support dialog box appears.
• To allow remote access (temporarily) for Nutanix support, select Enable for, enter the desired number in the
field provided, and select the duration (hours or minutes) from the drop-down menu.
Note:
• Remote Support can be enabled for any time period between 1 minute and 24 hours.
• When you enable Remote Support, a new SSH key pair is automatically generated and pushed
to the Nutanix servers. This key is used to connect to the cluster in a secure way without sharing
the CVM password with Nutanix Support.
3. Click the Save button to save the new setting. A Remote Support has been updated message is displayed along
with the updated connection status.
Note: It might take a few minutes for the connection status to be updated. If the connection status is not updated,
refresh the Remote Support setting screen to view the updated status.
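If you want to confirm the connection state from the command line instead of refreshing the screen, you can query the status through nCLI. The get-remote-support-status subcommand shown here is an assumption; verify the name with the nCLI help on your AOS version.
nutanix@cvm$ ncli cluster get-remote-support-status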
Procedure
2. Click the gear icon in the main menu and then select HTTP Proxy in the Settings page.
The HTTP Proxy dialog box appears.
3. To add an HTTP proxy, click the New Proxy button and do the following in the displayed fields:
Note: Only one HTTP proxy can be configured at a time. If one exists currently, you must first delete it before
creating a new one.
Note: To return to the HTTP Proxy window without saving, click the Cancel button.
• To add an allowlist target, click the + Create link. This opens a line to enter a target address or a network. An
allowlist entry is a single host identified by an IP address or a network identified by the network address and
subnet mask. Adding an allowlist entry instructs the system to ignore proxy settings for a particular address or
network.
• To allow a single IP address, enter the target IP address and then click the Save link in that field.
• To allow an entire subnet, enter the network address and the subnet mask in the following
format: network_address/subnet_mask, and then click the Save link in that field. Replace
network_address with the network address and subnet_mask with the subnet mask of the network that
you want to allow.
• To edit an allowlist target, click the pencil icon for that target and update as needed.
• To delete an allowlist target, click the X icon for that target.
5. To delete an HTTP proxy entry, click the X icon for that entry.
A window prompt appears to verify the action; click the OK button. The entry is removed from the HTTP proxy
list.
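To verify the resulting proxy configuration from the command line, nCLI exposes an http-proxy entity. The following listing command is a sketch under that assumption; confirm the entity and subcommand names with the nCLI help on your AOS version.
nutanix@cvm$ ncli http-proxy ls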
Procedure
2. To access the Nutanix support portal, select Support Portal from the question mark icon dropdown menu.
The login screen for the Nutanix support portal appears in a new tab or window.
Note: Some options have restricted access and are not available to all users.
• Solutions Documentation: Displays a page from which you can view documents that describe how to implement
the Nutanix platform to solve a variety of business applications.
• EOL Information: Displays a page from which you can view the end of life policy and bulletins.
• Field Advisories: Displays a page from which you can view field advisories.
• Security Advisories: Displays a page from which you can view security advisories.
• Acropolis Upgrade Paths: Displays a table of the supported AOS release upgrade paths.
• Compatibility Matrix: Displays a page from which you can view a compatibility matrix broken down (filtered) by
hardware model, AOS version, hypervisor type and version, and feature version (NCC, Foundation, BMC/BIOS).
• Webinar Recordings: Displays a page with links to a selection of Nutanix training webinars.
Support & Forums
• Open Case: Displays a form to create a support case.
• View Cases: Displays a page from which you can view your current support cases.
• .NEXT Forums: Provides a link to the (separate) Nutanix Next Community forum.
• Terms & Conditions: Displays a page from which you can view various warranty and terms and conditions
documents.
Downloads
• AOS (NOS): Displays a page from which you can download AOS releases.
• Hypervisor Details: Displays a page from which you can download Acropolis hypervisor versions. You can also
download supporting files used when manually upgrading a hypervisor version (AHV, ESXi, or Hyper-V).
• Prism Central: Displays a page from which you can download the Prism Central installation bundle. There are
separate bundles for installing on AHV, ESXi, or Hyper-V.
• Tools & Firmware: Displays a table of tools that can be downloaded, including the Nutanix Cluster Check (NCC)
and Prism Central VM.
• Phoenix: Displays a page from which you can download Phoenix ISO files.
• Foundation: Displays a page from which you can download Foundation releases.
My Products
• Installed Base: Displays a table of your installed Nutanix appliances, including the model type and serial number,
location, and support coverage.
• Licenses: Displays a table of your product licenses along with buttons to add or upgrade licenses for your clusters.
Procedure
» v1: Connect to the Prism Element web console, click the user icon in the upper-right corner of the Prism
Element web console, and select REST API Explorer. In the explorer, select Version 1 from the menu.
» v2: Connect to the Prism Element web console, click the user icon in the upper-right corner of the web
console, and select REST API Explorer. In the explorer, select Version 2 from the menu.
The REST API Explorer displays a list of the cluster objects that can be managed by the API. Each line has four
options:
Tip: The objects are listed by a relative path that is appended to the base URL
https://management_ip_addr:9440/PrismGateway/services/rest/v[1,2,3]/api, where management_ip_addr
is the IP address of any Nutanix Controller VM in the cluster.
2. Find the line for the object you want to explore and click Expand Operations. For this example, you will
operate on a storage pool.
3. Click GET on the first line to show the details for this API call.
The explorer displays the parameters that can be passed when this action is used.
4. Click Try it out! to test the API call when used with your cluster.
The test displays the request URL required for scripting, as well as sample responses expected from the API call.
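Outside the explorer, you can issue the equivalent request from any workstation with curl. This is a minimal sketch, assuming the v2.0 storage_pools endpoint and HTTP basic authentication with a Prism user account; management_ip_addr is a placeholder, curl prompts for the password, and -k skips certificate validation, which is only appropriate when testing against the default self-signed certificate.
$ curl -k -u admin https://management_ip_addr:9440/PrismGateway/services/rest/v2.0/storage_pools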
• Context-sensitive help documentation. For more information, see Accessing Online Help on page 398.
• Health dashboard tutorial. For more information, see Health Dashboard.
• Customer support portal. For more information, see Accessing the Nutanix Support Portal on page 394.
• Nutanix community forum. For more information, see Accessing the Nutanix Next Community on
page 399.
• REST API explorer. For more information, see Accessing the REST API Explorer on page 396.
• Glossary of terms. For more information, see Nutanix Glossary.
Procedure
2. To open the online help, choose one of the following from the question mark icon drop-down list of the Main
Menu:
» Select Help with this page to display help documentation that describes the current screen.
Note: In a task window click the question mark icon in the upper right to display the help documentation for
that window.
3. To select a topic from the table of contents, click the collapse menu icon (also known as a hamburger button) in
the upper left.
A table of contents pane appears on the left. Click a topic in the table of contents to display that topic.
4. To display all the help contents as a single document (Prism Element Web Console Guide), click the epub or pdf
button in the upper right corner.
You can view the Prism Element Web Console Guide in either ePUB or PDF format by selecting the appropriate
button. If your browser does not support the selected format, you can download the PDF or ePUB file.
5. To search for a topic, click the Other icon in the main menu bar and enter a search string in the field.
This searches not only the help contents, but also all the documentation, knowledge base articles, and solution
briefs. Matching results appear below the search field. Click a topic from the search results to display that topic.
Procedure
2. To access the Nutanix Next Community forum, select Nutanix Next Community from the question mark icon
dropdown menu of the Main Menu.
The Nutanix Next Community main page appears in a new tab or window. From this page you can search existing
posts, ask questions, and provide comments.
Glossary
For terms used in this guide, see Nutanix Glossary.