Nutanix Cloud Clusters (NC2) on AWS
Deployment and User Guide
March 7, 2024
Contents
Cluster Deployment........................................................................................ 40
Creating an Organization.......................................................................................................................... 40
Updating an Organization...............................................................................................................41
Adding an AWS Cloud Account................................................................................................................41
Deactivating a Cloud Account........................................................................................................ 44
Reconnecting a Cloud Account......................................................................................................45
Adding a Cloud Account Region....................................................................................................45
Updating AWS Stack Configurations............................................................................................. 46
Creating a Cluster..................................................................................................................................... 48
AWS VPC Endpoints for S3..................................................................................................................... 65
Creating a Gateway Endpoint........................................................................................................ 65
Associating Route Tables With the Gateway Endpoint................................................................. 66
Deploying and Configuring Prism Central.................................................................................................70
Logging into a Cluster by Using the Prism Element Web Console.......................................................... 71
Logging into a Cluster by Using SSH.......................................................................................................73
Support Log Bundle Collection............................................................................................................... 148
Disaster Recovery Without Layer 2 Stretch.................................................................................217
Disaster Recovery Over Layer 2 Stretch..................................................................................... 217
Preserving UVM IP Addresses During Disaster Recovery..................................................................... 220
Integration with Third-Party Backup Solutions........................................................................................ 222
Release Notes................................................................................................227
Copyright........................................................................................................228
ABOUT THIS DOCUMENT
This user guide describes the deployment processes for NC2 on AWS. The guide provides instructions for setting up
the Nutanix resources required for an NC2 on AWS deployment and for subscribing to NC2 payment plans. It also provides
detailed steps on UVM network management, end-to-end steps for creating a Nutanix cluster, and more.
This document is intended for users responsible for the deployment and configuration of NC2 on AWS. Readers
must be familiar with AWS concepts, such as AWS EC2 instances, AWS networking and security, AWS storage, and
VPN/Direct Connect. Readers must also be familiar with other Nutanix products, such as Prism Element, Prism Central,
and NCM Cost Governance (formerly Beam).
Document Organization
The following table shows how this user guide is organized and helps you find the most relevant sections in the guide
for the tasks that you want to perform.
• NC2 on AWS:
NC2 on AWS places the complete Nutanix hyperconverged infrastructure (HCI) stack directly on a bare-metal
instance in Amazon Elastic Compute Cloud (EC2). This bare-metal instance runs a Controller VM (CVM) and
Nutanix AHV as the hypervisor, like any on-premises Nutanix deployment, and uses the AWS Elastic Network Interface
(ENI) to connect to the network. AHV user VMs do not require any additional configuration to access AWS services
or other EC2 instances.
• NC2 runs on EC2 bare-metal instances. For more information on the supported EC2 bare-metal instances, see
Supported Regions and Bare-metal Instances.
Use Cases
NC2 on AWS is ideally suited for the following key use cases:
• Disaster Recovery on AWS: Configure a Nutanix Cloud Cluster on AWS as your remote backup and data
replication site to quickly recover your business-critical workloads in case of a disaster recovery (DR) event for
your primary data center. Benefit from AWS' worldwide geographical presence and elasticity to create an Elastic
DR configuration and save DR costs by expanding your pilot light cluster only when a DR need arises.
• Capacity Bursting for Dev/Test: Increase your developer productivity by provisioning additional capacity for Dev/
Test workloads on NC2 on AWS when you are running out of capacity on-premises. Use a single management
plane to operate and manage your workloads across your data center and NC2 on AWS environments.
• Modernize Applications with AWS: Significantly accelerate your time to migrate applications to AWS with a
simple lift-and-shift operation—no need to refactor your workloads or rewrite your applications. Get your on-
prem workloads to AWS faster and modernize your applications with direct integrations with all AWS services.
For more information, see NC2 Use Cases.
NC2 eliminates the complexities in managing networking, using multiple infrastructure tools, and rearchitecting the
applications.
NC2 offers the following key benefits:
• Cluster management:
• One private management subnet for internal cluster management and communication between the CVMs, AHV hosts,
and so on.
• One public subnet with an Internet gateway and NAT gateway to provide external connectivity to the NC2 portal.
• One or more private subnets for UVM traffic, depending on your needs.
Note: All NC2 cluster deployments are single AZ deployments. Therefore, your UVM subnets will be in the same
AZ as the Management subnet. You must not add the Management subnet as a UVM subnet in Prism Element because
UVMs and Management VMs must be on separate subnets.
When you deploy a Nutanix cluster in AWS by using the NC2 console, you can either choose to deploy the cluster
in a new VPC and private subnet, or choose to deploy the cluster in an existing VPC and private subnet. If you opt
to deploy the cluster in a new VPC, during the cluster creation process, the NC2 console provisions a new VPC and
private subnet for management traffic in AWS. You must manually create one or more separate subnets in AWS for
user VMs.
Regardless of your deployment model, there are a few general outbound requirements for deploying a Nutanix cluster
in AWS on top of the existing requirements that on-premises clusters use for support services. For more information
on the endpoints the Nutanix cluster needs to communicate with for a successful deployment, see Outbound
Communication Requirements.
You can isolate your private subnets for UVMs between clusters and use the private Nutanix management subnets
to allow replication traffic between them. All private subnets can share the same routing table. You must edit the
inbound access in each Availability Zone’s security group as shown in the following tables to allow replication
traffic.
If Availability Zone 1 goes down, you can activate protected VMs on the cluster in Availability Zone 2. Once
Availability Zone 1 comes back online, you can redeploy a Nutanix cluster in Availability Zone 1 and reestablish data
protection. New clusters require full replication.
Multicluster Deployment
To protect your Nutanix cluster if there is an Availability Zone failure, use your existing on-prem Nutanix cluster as a
disaster recovery target.
The following table lists the inbound ports you need to open to establish replication between an on-premises cluster and a
Nutanix cluster running in AWS. You can add these rules to the infrastructure subnet security group that was
automatically created when you deployed NC2 on AWS. The ports must be open in both directions.
Note: Make sure you set up the cluster virtual IP address for your on-premises and AWS clusters. This IP address is
the destination address for the remote site.
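For reference, the cluster virtual IP can also be set from any CVM with nCLI. A minimal sketch, assuming the external-ip-address parameter of cluster edit-params and a placeholder address on the management subnet; verify the parameter name for your AOS version:
# Set the cluster virtual IP (placeholder address shown)
nutanix@cvm$ ncli cluster edit-params external-ip-address="10.0.100.50"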
Nutanix has native built-in replication capabilities to recover from complete cluster failure. Nutanix supports
asynchronous replication. You can set your Recovery Point Objective (RPO) to one hour with asynchronous
replication.
Note: You can configure Prism to be accessible from the public network and then manually configure the AWS
resources, such as Load Balancer, NAT Gateway, Public IPs, and Internet Gateway for public access.
The following table lists the optional AWS components that can be used with the NC2 on AWS deployment.
• Transit Gateway: Yes. Charges are also applicable for data traffic.
Network Services
You can view all the resources allocated to a cluster running on AWS.
To view the cloud resources created by NC2, perform the following:
1. Sign in to NC2 from the My Nutanix dashboard.
2. In the Clusters page, click the name of the cluster.
3. On the left navigation pane, click Cloud Resources.
The Cloud Resources page displays all the resources associated with the cluster.
NC2 Architecture
The bare-metal instance runs the AHV hypervisor, and the hypervisor, like any on-premises deployment, runs a
Controller Virtual Machine (CVM) with direct access to NVMe instance storage hardware.
AOS Storage uses the following three core principles for distributed systems to achieve linear performance at scale:
1. Must have no single points of failure (SPOF).
2. Must not have any bottlenecks at any scale (must be linearly scalable).
3. Must apply concurrency (MapReduce).
Together, a group of Nutanix nodes forms a distributed system (Nutanix cluster) responsible for providing the Prism
and Acropolis capabilities. Each cluster node has two EBS volumes attached, both encrypted gp3 volumes: the
AHV EBS volume is 100 GB and the CVM EBS volume is 150 GB. All services and components are distributed across all
CVMs in a cluster to provide high availability and linear performance at scale.
This enables the MapReduce framework (Curator) to use the full power of the cluster to perform activities
concurrently, such as data reprotection, compression, erasure coding, and deduplication.
Setting up a cluster with redundancy factor 2 (RF2) protects data against a single rack failure, and setting it up with
RF3 protects against a two-rack failure. Also, to protect against multiple correlated failures within a data center or
an entire AZ failure, Nutanix recommends that you set up synchronous replication to a second cluster in a different AZ in the
same region or asynchronous replication to an AZ in a different region. AWS data transfer charges may apply.
AWS deploys each node of the Nutanix cluster on a separate AWS rack (also called AWS partition) for fault
tolerance.
If a cluster loses rack awareness, an alert is displayed in the Alerts dashboard of the Prism Element web console and
the Data Resiliency Status dashboard displays a Critical status.
A cluster might lose rack awareness if you:
1. Update the cluster capacity.
For example, if you add or remove a node.
2. Manually replace a host, or the replace host action is triggered automatically by the NC2 console.
3. Change the Replication Factor (RF), for example, from RF2 to RF3.
4. Create a cluster with either 8 or 9 nodes and configure RF3 on the cluster.
If you want to disable Strict Rack Awareness, run the following nCLI command:
ncli cluster disable-strict-domain-awareness
Contact Nutanix Support for assistance if you receive an alert in the Prism Element web console that indicates your
cluster has lost rack awareness.
Note: If your cluster runs in a single AZ for more than 30 days without protection, either by using disaster recovery to
on-prem or Nutanix Disaster Recovery, the Nutanix Support portal displays a notification indicating that your
cluster is not protected.
The notification includes a list of all the clusters that are in a single AZ without protection.
Hover over the notification for more details and click Acknowledge. Once you acknowledge the
notification, it disappears and reappears only if another cluster exceeds 30 days in a single
availability zone without protection.
NC2 supports Asynchronous and NearSync replication. NearSync replication is supported with AOS 6.7.1.5 and later,
while Asynchronous replication is supported with all supported AOS versions. NearSync replication is supported only
when clusters run AHV; NC2 does not support cross-hypervisor disaster recovery. For more information on Nutanix
Disaster Recovery capabilities, see Nutanix Disaster Recovery Guide.
Note: These permissions are only required to create the CloudFormation stack; NC2 does not use
them for any other purpose.
Note: Do not use the AWS root user for any deployment or operations related to NC2.
NC2 on AWS does not use AWS Secrets Manager for maintaining any stored secrets. All customer-
sensitive data is stored on the customer-managed cluster. Local NVMe storage on the bare-metal instance is used for
storing customer-sensitive data. Nutanix does not have any visibility into customer-sensitive data stored
locally on the cluster. Any data sent to Nutanix concerning cluster health is stripped of any Personally
Identifiable Information (PII).
Note: Nutanix recommends following the policy of least privilege for all access granted while deploying NC2. For
more information, see NC2 User Management.
For more information about how security is implemented in a Nutanix Cluster environment, see Network
Security using AWS Security Groups.
Data Encryption
To help reduce cost and complexity, Nutanix supports a native local key manager (LKM) for all clusters with three or
more nodes. The LKM runs as a service distributed among all the nodes. You can activate LKM from Prism Element
to enable encryption without adding another silo to manage.
If you are looking to simplify your infrastructure operations, you can also use one-click infrastructure for your key manager.
Organizations often purchase external key managers (EKMs) separately for both software and hardware. However,
because the Nutanix LKM runs natively in the CVM, it is highly available and there is no variable add-on pricing
based on the number of nodes. Every time you add a node, you know the final cost. When you upgrade your cluster,
the key management services are also upgraded. Upgrading the infrastructure and management services in
lockstep ensures your security posture and availability by keeping you in line with the support matrix.
Nutanix software encryption provides native AES-256 data-at-rest encryption, which can interact with any KMIP-
compliant or TCG-compliant external KMS server (Vormetric, SafeNet, and so on) and the Nutanix native KMS,
introduced in AOS version 5.8. The system uses Intel AES-NI acceleration for encryption and decryption processes
to minimize any potential performance impacts. Nutanix software encryption also provides in-transit encryption. Note
that in-transit encryption is currently applicable within a Nutanix cluster for data RF.
• IAMFullAccess: NC2 on AWS utilizes IAM roles to communicate with AWS APIs. You must have
IAMFullAccess privileges to create IAM roles in your AWS account.
• AWS_ConfigRole: You might want to have the AWS Config permission so that you can get configuration
details for AWS resources.
• AWSCloudFormationFullAccess: NC2 on AWS provides you with a CloudFormation script to create
two IAM roles used by NC2. You must have AWSCloudFormationFullAccess privileges to run that
CloudFormation stack in your account.
Note: These permissions are suggested for you to run the CloudFormation template; NC2 does not need them
for any other purpose.
By running the CloudFormation script provided by NC2, you will be creating the following two IAM roles for
NC2 on AWS:
• Nutanix-Clusters-High-Nc2-Cluster-Role-Prod
• Nutanix-Clusters-High-Nc2-Orchestrator-Role-Prod
You can either create these roles manually and assign the required permissions or run the CloudFormation script
to add these roles. Nutanix recommends running the CloudFormation script so that the permissions are added
accurately to those roles.
One role allows the NC2 console to access your AWS account by using APIs, and the other role is assigned to
each of your bare-metal instances.
You can view information about your CloudFormation stack, namely Nutanix-Clusters-High-Nc2-Cloud-
Stack-Prod, on the Stacks page of the CloudFormation console.
If you want to create the IAM roles manually, you can review the CloudFormation script from https://s3.us-
east-1.amazonaws.com/prod-gcf-567c917002e610cce2ea/aws_cf_clusters_high.json and check the permissions assigned to each role.
For more information on how to secure your AWS resources, see Security Best Practices in IAM.
Note: For NC2 on AWS with AOS 6.7.1.5, you must run the CloudFormation template while adding your AWS cloud
account. If you have already run the CloudFormation template, you must run it again so that any new permissions added
to the IAM roles come into effect.
vCPU Limits
Review the supported regions and bare-metal instances. For details, see Supported Regions and Bare-metal
Instances.
AWS supports the following vCPU limits for the bare-metal instances available for NC2 on AWS.
Note: Before you deploy a cluster, check if the EC2 instance type is supported in the Availability Zone in which you
want to deploy the cluster.
Not all instance types are supported in all the availability zones in an AWS region. An error message is
displayed if you try to deploy a cluster with an instance type that is not supported in the availability zone
you selected.
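You can also check availability programmatically before deployment. A minimal AWS CLI sketch, where the instance type and region are placeholder values:
# List the Availability Zones in us-west-2 that offer i3.metal
aws ec2 describe-instance-type-offerings \
  --location-type availability-zone \
  --filters Name=instance-type,Values=i3.metal \
  --region us-west-2 \
  --query "InstanceTypeOfferings[].Location"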
Configure a sufficient vCPU limit for your AWS account. Cluster creation fails if you do not have a sufficient
vCPU limit set for your AWS account.
You can calculate your vCPU limit in the AWS console under EC2 > Limits > Limits Calculator.
To learn more about setting AWS vCPU Limits for NC2, see the Nutanix University video.
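You can also inspect and raise the limit with the AWS CLI. A hedged sketch, where L-1216C47A is the quota code for Running On-Demand Standard instances (confirm the code in your console) and the desired value is a placeholder:
# View the current vCPU quota for Standard On-Demand instances
aws service-quotas get-service-quota --service-code ec2 --quota-code L-1216C47A
# Request an increase to 512 vCPUs
aws service-quotas request-service-quota-increase --service-code ec2 --quota-code L-1216C47A --desired-value 512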
IMDS Requirements
NC2 on AWS supports accessing the instance metadata from a running instance using one of the following methods:
• Instance Metadata Service Version 1 (IMDSv1): a request/response method
• Instance Metadata Service Version 2 (IMDSv2): a session-oriented method
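As an illustration, a minimal sketch of querying the metadata service with IMDSv2 from an instance; the endpoints shown are the standard AWS IMDS paths:
# Obtain an IMDSv2 session token, valid here for six hours
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
# Use the token to read instance metadata
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-id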
Note: When you create a My Nutanix account, a default workspace gets created for you with the Account Admin role,
which is required to create an NC2 subscription and access the Admin Center and Billing Center portals. If you are
invited to a workspace, then you must get the Account Admin role so that you can subscribe to NC2 and access the
Admin Center and Billing Center.
Networking Requirements
1. Configure connectivity between your on-prem datacenter and AWS VPC by using either VPN or Direct Connect
if you want to pair both the clusters for data protection and other reasons.
See AWS Site-to-Site VPN to connect AWS VPC by using VPN.
To learn more about setting up a VPN to on-prem, see the Nutanix University video.
See Connect Your Data Center to AWS to connect AWS VPC by using Direct Connect.
2. Allow outbound internet access on your AWS VPC so that the NC2 console can successfully provision and
orchestrate Nutanix clusters in AWS.
For more information on how to allow outbound internet access on your AWS VPC, see AWS VPC
documentation.
3. Configure the AWS VPC infrastructure. You can choose to create a new VPC as part of cluster creation from the
NC2 portal or use an existing VPC.
To learn more about setting up an AWS Virtual Private Cloud (VPC) manually, see the Nutanix University
video.
4. If you deploy AWS Directory Service in a selected VPC or subnet to resolve DNS names, ensure that AWS
Directory Service resolves the following FQDN successfully to avoid deployment failure.
FQDN: gateway-external-api.cloud.nutanix.com
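To verify resolution before deploying, you can query the FQDN from a host in the VPC. A minimal sketch using standard DNS tools; both commands should return one or more IP addresses:
dig +short gateway-external-api.cloud.nutanix.com
nslookup gateway-external-api.cloud.nutanix.com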
CIDR Requirements
You must use the following range of IP addresses for the VPCs and subnets:
Note: UVM subnet sizing depends on the number of UVMs that you need to deploy. NC2 supports the
network CIDR sizing limits enforced by AWS.
Note: NC2 might not support some bare-metal instance types in certain regions due to limitations in the number of
partitions available. NC2 supports EC2 bare-metal instances in regions with three or more partitions. The support for
g4dn.metal instance type is only available on clusters with AOS 6.1.1 and 5.20.4 or later releases.
You can use a combination of i3.metal, i3en.metal, and i4i.metal instance types or z1d.metal, m5d.metal,
and m6id.metal instance types while creating a new cluster or expanding the cluster capacity of an already
running cluster. The combination of these instance types is subject to bare-metal support from AWS in the
region where the cluster is being deployed. For more details, see Creating a Heterogeneous Cluster.
You can only create homogeneous clusters with g4dn.metal instances; they cannot be used to create a
heterogeneous cluster.
The following table lists the AWS EC2 bare-metal instance types supported by Nutanix.
For more information, see Hardware Platform Spec Sheets. Select NC2 on AWS from the Select your
preferred Platform Providers list.
The following table lists the detailed information for each bare-metal instance type supported in each AWS region.
* - These regions are not auto-enabled by AWS. Ensure you first enable them in your AWS account before using
them with NC2. For more information on how to enable a region, see AWS documentation. Once you have enabled
these regions in your AWS console, ensure they are also selected in your NC2 portal. For more information, see the
instructions about adding cloud regions to the NC2 console in Adding an AWS Cloud Account.
Note: An instance type may not be supported in a region because the number of partitions is less than the minimum
three partitions required by NC2 or the instance type is not supported by AWS in the specified region.
Note: You have to manually install the NVIDIA driver on each new node when you expand the cluster size. Also, NC2
may automatically replace nodes in your cluster if there are issues with node availability. In such a scenario, you
must also install the NVIDIA driver on the new node procured by NC2.
Note: If a GPU card is present in your cluster, LCM restricts updates to AHV if it does not detect a compatible NVIDIA
GRID driver in its inventory. To fetch a compatible NVIDIA GRID driver for your version of AHV, see Updating
the NVIDIA GRID Driver with LCM.
Perform the following steps to install the NVIDIA driver on the G4dn hosts:
1. Download the NVIDIA host driver version 13.0 from the Nutanix portal at https://portal.nutanix.com/page/
downloads?product=ahv&bit=NVIDIA.
2. For detailed installation instructions for the NVIDIA driver, see Installing the NVIDIA GRID Driver.
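As a sketch of the install flow: the host driver package is typically installed from a CVM with the install_host_package utility. The flag and URL below are assumptions for illustration; confirm the exact procedure in the linked instructions for your AOS version:
# Run from any CVM; installs the NVIDIA GRID host driver on all hosts in the cluster
nutanix@cvm$ install_host_package -u https://<download-location>/nvidia_grid_ahv_host_driver.tar.gz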
Note: Users have to sign in to controller VMs in the cluster with the SSH key pair provided during the cluster
creation instead of the default user credentials.
For more information about assigning and configuring a vGPU profile to a VM, see "Creating a VM
(AHV)" in the "Prism Web Console Guide".
Note: NVIDIA vGPU guest OS drivers for product versions 11.0 or later can be acquired using NVIDIA
Licensing Software Downloads under:
• All Available
• Product Family = vGPU
• Platform = Linux KVM
• Platform Version = All Supported
• Product Version = (match host driver version)
AHV-compatible host and guest drivers for older AOS versions can be found on the NVIDIA
Licensing Software Downloads site under 'Platform = Nutanix AHV'.
Limitations
Following are the limitations of NC2 in this release:
• A maximum of 28 nodes are supported in a cluster. NC2 supports 28-node cluster deployment in AWS regions
that have seven placement groups.
Note: NC2 does not recommend using single-node clusters in production environments.
In a cluster running in AWS, you have no visibility into the actual cloud infrastructure such as the
ToR switches. API support is not available to discover the cloud infrastructure components in
Nutanix clusters. Given that the cluster is deployed in a single VPC, the switch view is replaced
by the VPC. Any configuration options on the network switch are disabled for clusters deployed in
AWS.
Uplink Configuration:
The functionality to update the uplink configuration is disabled for a cluster running in AWS.
Hardware Configuration:
The Switch tab in the Hardware menu of the Prism Element web console is disabled for a cluster
running in AWS.
Rack Configuration:
The functionality to configure racks is disabled for a cluster running in AWS. Clusters are deployed
as rack-aware by default. APIs to create racks are also disabled on clusters running in AWS.
Broadcast and LLDP:
AWS does not support broadcast and any link layer information based on protocols such as LLDP.
Security Dashboard:
A dashboard that provides a dynamic summary of the security posture across all registered clusters
is not supported for NC2.
Host NIC:
Elastic Network Interfaces (ENIs) provisioned on bare-metal AWS instances are virtual interfaces
provided by Nitro cards. AWS does not provide any bandwidth guarantees for each ENI, but
Cluster Operations
Perform the following actions using the NC2 console:
• Cluster deployment and provisioning must be performed by using the NC2 console and not by using Foundation.
• Perform add node and remove node operations by using the NC2 console and not by using the Prism Element web
console.
aCLI Operations
The following aCLI commands are disabled in a cluster in AWS:
net: create_cluster_vswitch, delete_cluster_vswitch, get_cluster_vswitch, list_cluster_vswitch, update_cluster_vswitch
host: enter_maintenance_mode, enter_maintenance_mode_check, exit_maintenance_mode
nCLI Operations
The following nCLI commands are disabled in a cluster in AWS:
cluster: edit-hypervisor-lldp-params, get-hypervisor-lldp-config, edit-param disable-degraded-state-monitoring
disk: delete, remove-start, remove-status
software: download, list, remove, upload
API Operations
The following API calls are disabled or changed in a Nutanix cluster running in AWS:
POST /hosts/{hostid}/enter_maintenance_mode: Not supported
POST /hosts/{hostid}/exit_maintenance_mode: Not supported
GET /clusters: Values for the rack and block configuration are not displayed.
POST /cluster/block_aware_fixer: Not supported
DELETE /api/nutanix/v1/cluster/rackable_units/{uuid}: Not supported
DELETE /api/nutanix/v3/rackable_units/{uuid}: Not supported
DELETE /api/nutanix/v3/disks/{id}: Not supported
• IAMFullAccess: Enables the NC2 console to run the CloudFormation template in AWS to link your
AWS and NC2 accounts.
You use the credentials of this IAM user when you are adding your AWS cloud account to the NC2
console. When you are adding your AWS cloud account, you run a CloudFormation template, and the
CloudFormation script adds two IAM roles to your AWS account. One role allows the NC2 console
Note: Only the user account you use to add your AWS account to NC2 has the IAMFullAccess privilege;
the NC2 console itself does not have the IAMFullAccess privilege.
• AWS_ConfigRole: Grants AWS Config permission to get configuration details for supported AWS
resources.
• AWSCloudFormationFullAccess: Used to create the initial AWS resources needed to link your AWS
account and create a CloudFormation stack.
Note: These permissions are only required to create the CloudFormation stack; NC2 does not use
them for any other purpose.
3. A VPC
4. A private subnet for management traffic
5. One or more private subnets for user VM traffic
6. Two new AWS S3 buckets with the Nutanix IAM role if you want to use the Cluster Protect feature to protect
Prism Central, UVM, and volume group data.
See the AWS documentation for instructions about how to configure these requirements.
2. In the NC2 console:
1. A My Nutanix account to access the NC2 console.
See NC2 Payment Methods on page 75 for more information.
2. An organization
See Creating an Organization on page 40 for more information.
Procedure
1. Go to https://my.nutanix.com.
3. Enter your details, including first name, last name, company name, job title, phone number, country,
email, and password.
Follow the specified password policy while creating the password. Personal domain email addresses, such as
gmail.com or yahoo.com, are not allowed. You must sign up with a company email address.
4. Click Submit.
A confirmation page appears and you receive an email from [email protected] after you successfully
complete the sign-up process.
6. Sign in to the portal using the credentials you specified during the sign-up process.
A default Personal workspace is created after you successfully create a My Nutanix account. You can rename
your workspaces. For more information on workspaces, see Workspace Management.
Note: The default Personal workspace name contains the domain, followed by the email address of the user and
the word tenant.
Note: When you create a My Nutanix account, a default workspace gets created for you with the Account Admin
role, which is required to create an NC2 subscription and access the Admin Center and Billing Center portals. If you
are invited to a workspace, then you must get the Account Admin role so that you can subscribe to NC2 and access
the Admin Center and Billing Center.
Note: The owner of the My Nutanix workspace that has been used to start the free trial for NC2 must add other users
from the NC2 console with appropriate RBAC if those users need to manage clusters in the same tenant. For more
information on adding users and the roles that can be assigned, see NC2 User Management.
Note: You are responsible for any hardware and cloud services costs incurred during the NC2 free trial.
Note: Ensure that you select the correct workspace from the Workspace dropdown list on the My Nutanix
dashboard. For more information on workspaces, see Workspace Management.
2. On the My Nutanix dashboard, scroll to Cloud Services, and under Nutanix Cloud Clusters (NC2), click
Get Started.
3. On the Nutanix Cloud Clusters (NC2) on Public Clouds page, under Try NC2, click Start your 30 day
free trial.
4. You are redirected to the NC2 console. When prompted to accept the Nutanix Cloud Services Terms of Service,
click I Accept. The NC2 console opens in a new tab. You can now start using NC2.
Note: If you want to subscribe to NC2 instead of using a free trial, you can click the Select from our available
plan options to get started option, and then complete the subscription on the Nutanix Billing Center.
Creating an Organization
An organization in the NC2 console allows you to segregate your clusters based on your specific
requirements. For example, create an organization named Finance and then create a cluster in the Finance
organization to run only your finance-related applications.
Procedure
Note: On the My Nutanix dashboard, ensure that you select the correct workspace from the Workspace
dropdown list that shows the workspaces you are part of and that you have used while subscribing to NC2.
3. In the Create a new organization dialog box, do the following in the indicated fields:
a. Customer. Select the customer account in which you want to create the organization.
b. Organization name. Enter a name for the organization.
c. Organization URL. The URL name is automatically generated. If needed, the name can be modified.
4. Click Create.
After successful creation, the new organization is listed in the Organizations tab.
Updating an Organization
Administrators can update the basic information for your organization from the NC2 console.
Note: Changes applied to the organization entity affect the entirety of the organization and any accounts listed
underneath it.
Procedure
2. On the Organization page, click the ellipsis button of the corresponding organization and click Update.
a. Navigate to the Basic Info tab of the Organization entity's update page.
b. You can edit any of the fields listed below if required:
Note: You can add one AWS account to multiple organizations within the same customer entity. However, you cannot
add the same AWS account to two or more different Customer (tenant) entities. If you have already added an AWS
account to an organization and want to add the same AWS account to another organization, follow the same process,
but you do not need to create the CloudFormation template.
If a cluster is present, do not delete the CloudFormation stacks.
Note: For NC2 on AWS with AOS 6.7.1.5, you must run the CloudFormation template while adding your AWS cloud
account. If you have already run the CloudFormation template, you must run it again so that any new permissions added
to the IAM roles come into effect.
Procedure
Note: On the My Nutanix dashboard, ensure that you select the correct workspace from the Workspace
dropdown list that shows the workspaces you are part of and that you have used while subscribing to NC2.
3. Click the ellipsis next to the organization that you want to add the cloud account to and click Cloud accounts.
6. In the Name field, type a name for your AWS cloud account.
Note: You can find your Account ID in My Account in the AWS cloud console. Ensure that you enter the AWS
cloud account ID without hyphens.
a. Sign in to the AWS account in which you want to create Nutanix clusters.
This account is the same AWS account that is linked to the Account ID you entered in step 7.
b. In the Quick create stack screen, note the template URL, stack name, and other parameters.
c. Select the I acknowledge that AWS CloudFormation might create IAM resources with custom
names check box.
d. Click Create stack.
e. Monitor the progress of the creation of the stack in the Events tab.
f. Wait until the Status changes to CREATE_COMPLETE.
You can view information about your CloudFormation stack, namely Nutanix-Clusters-High-Nc2-Cloud-
Stack-Prod, on the Stacks page of the CloudFormation console.
» Select All supported regions if you want to create clusters in any of the supported AWS regions.
» Select Specify regions if you want to create clusters in specific AWS regions and select the regions of
your choice from the list of available AWS regions.
Note: Some regions are not auto-enabled by AWS. Ensure you first enable them in your AWS account before
using them with NC2. For more information, see Supported Regions and Bare-metal Instances.
11. Select the add cloud account disclaimer checkbox for acknowledgment.
Note: A cloud account that has existing NC2 accounts cannot be deactivated. You must terminate all NC2 accounts
using the cloud account resources first.
Procedure
1. Navigate to the Customer or Organization dashboard in the NC2 console where the cloud account is
registered.
Procedure
1. Navigate to the Customer or Organization dashboard in the NC2 console where the cloud account is
registered.
2. Click the ellipsis icon against the desired organization or customer and then click Cloud Accounts.
3. Find the cloud account you want to reconnect. Click the ellipsis icon against the cloud account and click
Reconnect.
4. If the underlying issue(s) were addressed and the NC2 console can communicate with the cloud account
infrastructure, the account status will change to R.
Note: Administrators must ensure they have sufficient resource limits in the regions they decide to add before adding
those regions through the NC2 console.
Procedure
1. Navigate to the Customer or Organization dashboard in the NC2 console where the cloud account is
registered.
2. Click the ellipsis icon against the desired organization or customer and then click Cloud Accounts.
3. Find the cloud account where you want to add a new cloud region. Click the ellipsis icon against the cloud
account and click Add regions. A new window appears.
• All supported regions: Select this option if you would like to add all other supported regions besides those
you have already specified.
• Specify regions: Select this option if you would like to add just a few additional supported regions to your
cloud account. Click inside the regions field and select as many regions as you want from the drop-down
menu.
5. Once you have made your selection, click Save. You will receive updates in your notification center regarding
the status.
Note: You must not recreate the CloudFormation stack for existing clusters. Instead, you must update and rerun the
CloudFormation stack.
Procedure
1. Navigate to the Customer or Organization dashboard in the NC2 console, where the cloud account is
registered.
2. Click the ellipsis icon against the desired organization or customer and then click Cloud Accounts.
3. Find the cloud account for which you want to update the configurations. Click the ellipsis icon against the cloud
account and click Update.
• Update Stack: The Update Stack tab provides your CloudFormation Stack template URL and Stack
parameters. These details can be used to update IAM (Identity and Access Management) roles.
For example, to use new product features, you may need to use the CloudFormation Stack template URL to
expand your IAM permissions after an NC2 product update.
Note: To recreate your CloudFormation stack, you must delete the existing stack in your AWS Console, which
you can access directly from the Recreate Stack sub-tab.
Creating a Cluster
Create a cluster in AWS by using NC2. Your NC2 cluster runs on an EC2 bare-metal instance in AWS.
For more information on the AWS components that are either installed when the option to create a new VPC is
selected during NC2 on AWS deployment or you need to install manually when you choose to use an existing VPC,
see AWS Components Installed.
Note: Each node in a Nutanix cluster has two EBS volumes attached (AHV EBS and CVM EBS). Both are encrypted
gp3 volumes. The size of AHV EBS volume is 100 GB and CVM EBS is 150 GB.
AWS charges you for EBS volumes regardless of the cluster state (running or hibernated). These charges
are incurred from the time the cluster is created until it is deleted. See the AWS Pricing Calculator for information
about how AWS bills you for EBS volumes.
AWS bills you an additional charge for the EBS volumes and S3 storage for the time the cluster is
hibernated. If a node turns unhealthy and you add another node to a cluster for evacuation of data or VMs,
AWS also charges you for the new node.
Note: The default configuration for CVMs on NC2 with AOS 6.7 or earlier is 32 GiB of RAM. On NC2 with AOS
6.7.1.5, the CVM memory size is set to 48 GiB.
You must use the following range of IP addresses for the VPCs and subnets:
Note: UVM subnet sizing depends on the number of UVMs that you need to deploy. NC2 supports the
network CIDR sizing limits enforced by AWS.
Note: On the My Nutanix dashboard, ensure that you select the correct workspace from the Workspace
dropdown list that shows the workspaces you are part of and that you have used while subscribing to NC2.
» If you are creating a cluster for the first time, under You have no clusters, click Create Cluster.
» If you have created clusters before, click Create Cluster in the top-right corner of the Clusters page.
» General Purpose: A cluster that utilizes general purpose Nutanix licenses. For more information on NCI
licensing, see Nutanix Licenses for NC2.
» Virtual Desktop Infrastructure (VDI): A cluster that utilizes Nutanix licenses for virtual desktops. For
more information on NCI and EUC licensing, see Nutanix Licenses for NC2.
a. Organization. Select the organization in which you want to create the cluster.
b. Cluster Name. Type a name for the cluster.
c. Cloud Provider. Select AWS.
d. Cloud Account. Select the AWS cloud account in which you want to create the cluster.
e. Region and Availability Zone. Select the AWS region and Availability Zone in which you want to create
the cluster.
f. (If you select VDI) Under Consumption Method, the User-based consumption method is selected by
default. In this case, the consumption and cluster pricing are based on the number of users concurrently using
the cluster. Enter the maximum number of users allowed to use the cluster.
Note: The general purpose cluster uses a capacity-based method by default where the consumption and
cluster pricing is based on the capacity provisioned in the cluster.
g. In Advanced Settings, use Scheduled Cluster Termination to have NC2 delete the cluster at a
scheduled time if you are creating a cluster for a limited time or for testing purposes. Select one of the
following:
• Terminate on. Select the date and time when you want the cluster to be deleted.
• Time zone. Select a time zone from the available options.
Note: The cluster will be destroyed, and data will be deleted automatically at the specified time. This is an
irreversible action and data cannot be retrieved once the cluster is terminated.
• NCI (Nutanix Cloud Infrastructure): Select this license type and appropriate add-ons to use NCI
licensing.
Note: You must manually register the cluster to Prism Central and apply the NCI licenses in Prism
Central.
• AOS: Select this license type and appropriate add-ons to reserve and use AOS (legacy) licenses. For
more information on how to reserve AOS (legacy) licenses, see Reserving License Capacity.
• EUC (End User Computing): Select this option if you want to use EUC licenses for a specified
number of users.
Note: You need to manually register the cluster to Prism Central and manually apply the EUC
licenses.
• VDI: Select this option if you want to use VDI licenses for a specified number of users. For more
information on how to reserve VDI licenses, see Reserving License Capacity.
• AOS Version. Select the AOS version that you want to use for the cluster.
Note: The cluster must be running the minimum versions of AOS 6.0.1.7 for NCI and EUC licenses, and
AOS 6.1.1 for NUS license.
• Software Tier. In the Software Tier drop-down list, select the license type based on your cluster type
and the license option you selected.
• For General Purpose cluster: Select the Pro or Ultimate license tier that you want to apply to your
NCI or AOS cluster. Click the View Supported Features list to see the available features in each
license type.
• For VDI cluster: Ultimate, the only available license tier for the VDI or EUC cluster, is selected
by default.
This option is used for metering and billing purposes. Usage is metered every hour and charged based on
your subscription plan. Any AOS (legacy) and VDI reserved licenses will be picked up and applied to your
NC2 cluster to cover its usage before billing overages to your subscription plan.
c. Under Add-on Products:
• If the NCI (Nutanix Cloud Infrastructure) or EUC (End User Computing) license option is
selected: you can optionally select Use NUS (Nutanix Unified Storage) on this cluster and specify
the storage capacity that you intend to use on this cluster.
Note: You need to manually apply the NCI and the NUS licenses to your cluster.
• If the AOS or VDI license option is selected, you can optionally select the following add-on products:
• Advanced Replication
• Data-at-Rest Encryption
• Use Files on this cluster: Specify the capacity of files you intend to use in the Unified Storage
Capacity field.
Note: The Advanced Replication and Data-at-Rest Encryption add-ons are selected by default for AOS
and VDI Ultimate; you need to select these add-ons for AOS Pro manually.
Note: NC2 only supports AOS 6.5.4.5 and 6.7.1.5 to run Microsoft Windows Server workloads.
Note: NC2 shares your intent to use a Windows server with AWS. AWS bills you for the Microsoft Windows
Server license cost. For more information, see Microsoft Windows on NC2.
When you choose to use Microsoft Windows Server, you must follow these additional instructions:
a. Bring your own Microsoft Windows binary that is in an AHV-compatible format. Nutanix supports
the RAW, VHD(X), VMDK, VDI, ISO, and QCOW2 disk formats. For more information, see AHV
Administration Guide.
b. Manually install the Windows binary on the NC2 on AWS cluster.
c. Manually license all Windows VMs on the NC2 on AWS cluster.
Note: You must perform this step again on the Windows VM after you migrate it back to the NC2 on AWS
cluster in the disaster recovery scenario.
• Run the following command as an administrator to set your Windows KMS machine IP address.
slmgr.vbs /skms 169.254.169.250:1688
• Host type: The instance type used during initial cluster creation is displayed.
• Number of Hosts. Click + or - depending on whether you want to add or remove nodes.
Note: A maximum of 28 nodes are supported in a cluster. NC2 supports 28-node cluster deployment in
AWS regions that have seven placement groups. Also, there must be at least three nodes in a cluster.
• Add Host Type: The other compatible instance types are displayed depending on the instance type used
for the cluster. For example, if you have used an i3.metal node for the cluster, then i3en.metal and i4i.metal
instance types are displayed.
Note: You can create a heterogeneous cluster using a combination of i3.metal, i3en.metal, and i4i.metal
instance types or z1d.metal, m5d.metal, and m6id.metal instance types.
The Add Host Type option is disabled when no compatible node types are available in the
region where the cluster is being deployed.
• Under Redundancy: Select one of the following redundancy factors (RF) for your cluster.
• RF 1: The number of copies of data replicated across the cluster is 1. The number of nodes for RF1 must
be 1.
Note: RF1 can only be used for single-node clusters. Single-node clusters are not recommended in
production environments. You can configure the cluster with RF1 only for clusters created for Dev, Test,
or PoC purposes. You cannot increase the capacity of a single-node cluster.
• RF 2: The number of copies of data replicated across the cluster is 2. The minimum number of nodes for
RF2 must be 3.
• RF 3: The number of copies of data replicated across the cluster is 3. The minimum number of nodes for
RF3 must be 5.
• Host type. Select the type of bare-metal instance that you want your cluster to run on.
• Number of Hosts. Select the number of hosts that you want in your cluster.
Note: A maximum of 28 nodes are supported in a cluster. NC2 supports 28-node cluster deployment in AWS
regions that have seven placement groups.
a. Under Networking, select the VPC in which you want to create the cluster from one of the following
options:
• Under Select Cluster VPC, select a VPC from the Virtual Private Network (VPC) drop-down
list.
• Under Select Cluster Management Subnet, select a subnet (from the VPC that you selected
in the previous step) from the Management Subnet drop-down list that you want to use as the
management subnet for your cluster.
Note: This subnet must be a dedicated private subnet for communication between Nutanix CVMs and
management services like the hypervisor.
Note: Ensure that you do not use 192.168.5.0/24 CIDR for the VPC being used to deploy the NC2
on AWS cluster. All Nutanix nodes use that CIDR for communication between the CVM and the
installed hypervisor.
Note: Two subnets will be created along with the VPC in the selected AZ. One private subnet without
outgoing internet access for the management network and one public subnet providing connectivity to
NC2 from the VPC.
• Prism (Cluster Management Console): Select one of the following options to control access of the
public Internet to and from your Nutanix cluster:
Note: This Public option is only available when you choose to either import a VPC or create a new
VPC in the Network tab.
Allowing Internet access could have security ramifications. Use of a load balancer is
optional and is not a recommended configuration. For securing network traffic when using
a load balancer, you can consider using secure listeners, configuring security groups,
and authenticating users through an identity provider. For more information, see AWS
Documentation.
You can also use a Bastion server (jump box) to gain SSH access to the CVMs and AHV
hosts of Nutanix clusters running on AWS. See Logging into a Cluster by Using SSH.
• Restricted: Restrict access only to a select number of IP addresses. In the IP addresses field,
provide a list of source IP addresses and ranges that must be allowed to access Prism Element. NC2
creates security group rules.
• Disabled: Disable cluster access to and from the public Internet. The security group attached to the
cluster hosts will not allow access to Prism Element.
• Management Services (Core Nutanix services running on this cluster). Select to allow or
restrict access to management services (access to CVMs and AHV hosts).
• Restricted: If any IP addresses require access to CVMs and AHV hosts, specify a list of such source
IP addresses and ranges. NC2 creates security group rules accordingly.
• Disabled: Disable access to management services in the security group attached to the cluster nodes.
Note: If you intend to use the Cluster Protect feature, ensure that the Cluster Management Services can be
accessed from the VPC and the Prism Central subnet. Ports 30900 and 30990 are opened while creating a new
NC2 cluster and are required for communication between AOS and Multicloud Snapshot Technology (MST)
to back up the VM and volume groups data.
• I want to protect the cluster: Select this option if you want to protect the cluster using the Cluster
Protect feature.
Note: You must register this cluster to a new or an existing Prism Central instance that runs in the same
availability zone. If you are going to use this cluster as a source or target for Disaster Recovery, then you
cannot also use the Cluster Protect feature to protect your cluster.
To protect the cluster using the Cluster Protect feature, you must perform the steps listed in Cluster Protect
Configuration.
• I will protect the cluster myself/ I do not need protection: Select this option if you do not want to
use the Cluster Protect feature to protect your cluster.
Note: You can select this option if you need to use this cluster as a source or target for a Disaster Recovery
setup. Nutanix recommends enabling the automatic backup of VM and Volume Groups data.
Note: The Cluster Protect feature is available only with AOS Ultimate or NCI Ultimate license tier and needs
AOS 6.7 or higher and Prism Central 2023.3 or higher. The Cluster Protect feature is available only for new
cluster deployments. Any clusters created before AOS 6.7 cannot be protected using this feature.
Note: The Nutanix cluster is deployed in AWS in approximately 30 minutes. If there are any issues with provisioning
the Nutanix cluster, see the Notification Center in the NC2 console.
11. After the cluster is created, click the name of the cluster to view the cluster details.
What to do next
After you create a cluster in AWS, set up the network and security infrastructure in AWS and the cluster for your user
VMs. For more information, see User VM Network Management and Network Security using AWS Security
Groups.
• Gateway endpoints: These gateways are used for connectivity to Amazon S3 without using an internet
gateway or a NAT device for your VPC. A gateway endpoint targets specific IP routes in the AWS VPC route
table. Gateway endpoints do not use AWS PrivateLink, unlike interface endpoints. There is no additional charge
for using gateway endpoints.
For more information on how to create a new gateway endpoint, see Creating a Gateway Endpoint.
You can create a new gateway endpoint or use an existing one. When using an existing gateway endpoint, you only
need to modify the route tables associated with the gateway endpoint. For more information, see Associating
Route Tables With the Gateway Endpoint.
• Interface endpoints: These endpoints are used for connectivity to services over AWS PrivateLink. An interface
endpoint is a collection of one or more elastic network interfaces (ENIs) with a private IP address that serves as
an entry point for traffic destined to a supported service. Interface endpoints allow the use of security groups to
restrict access to the endpoint.
For more information, see AWS Documentation.
Note: Ensure that you create your gateway endpoint in the same AWS Region as your S3 buckets. Also, add the
gateway endpoint in the routing table of the resources that need to access S3. The outbound rules for the security group
for instances that access Amazon S3 through the gateway endpoint must allow traffic to Amazon S3.
You can add a new endpoint route to a route table and associate it with the gateway endpoint. The endpoint route is
deleted when you disassociate the route table from the gateway endpoint or when you delete the gateway endpoint.
Procedure
Note: Ensure that you do not select the Endpoints services option.
6. Under Services, search with the S3 keyword, and then select the service with the name:
com.amazonaws.<region>.s3 and type as Gateway.
7. Under VPCs, select the VPC where you want to create the endpoint.
Note: The VPC must be the same where your cluster is created. All NC2 clusters in that VPC will be able to
access the S3 endpoint. You must create a different endpoint for each VPC where an NC2 cluster is running.
8. Under Route tables, select the route tables corresponding to your NC2 cluster’s private subnet in the VPC.
This must be the route table associated with the cluster management subnet.
Note: You must add all route tables that are associated with the management subnet of all your clusters.
11. After successfully creating the endpoint, verify that the route table associated with the S3 endpoint has the gateway
endpoint in its routes.
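If you prefer the AWS CLI to the console steps above, a minimal sketch with placeholder VPC, region, and route table IDs:
# Create an S3 gateway endpoint and attach it to the management subnet's route table
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0abc1234def567890 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-west-2.s3 \
  --route-table-ids rtb-0abc1234def567890
# Confirm the endpoint now appears as a route target
aws ec2 describe-route-tables --route-table-ids rtb-0abc1234def567890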
3. Select the gateway endpoint that you want to use for AWS S3.
7. Under Route tables, select the route tables corresponding to your NC2 cluster’s private subnet in the VPC.
This must be the route table associated with the cluster management subnet.
Note: You must add all route tables that are associated with the management subnet of all your clusters.
Note: Nutanix does not take responsibility for your Microsoft Windows licensing and compliance validation. You must
ensure you are in compliance with Microsoft and AWS requirements for the Microsoft licenses and associated costs.
NC2 on AWS supports Microsoft Windows Server versions that AWS supports, such as:
Note: NC2 only supports AOS 6.5.4.5 and 6.7.1.5 to run Microsoft Windows Server workload with Windows license
costs payable to AWS. Also, the entire cluster will be deployed either with instances with all Windows Licenses
Included or instances without Windows Licenses Included. When you choose to run Microsoft Windows workloads
on the NC2 on AWS cluster, AWS invoices you for the whole cluster. If you would like to switch to non-Windows
workloads, then you should deploy a separate NC2 on AWS cluster without selecting the Microsoft Windows Licensing
option in the NC2 console. Switching between the two options is not allowed after a cluster has been deployed.
You can check if you have recorded your intent to pay AWS for the Microsoft Windows Server license costs on an
NC2 cluster from the cluster’s Summary page. You need to perform additional steps to install Microsoft Windows
Server and then activate a new license for your Microsoft Windows Server VMs. For more information, see Viewing
Licensing Details.
Note: You must perform this step again on the Windows VM after you migrate it back to the NC2 on AWS cluster
in the disaster recovery scenario.
• Run the following command as an administrator to set your Windows KMS machine IP address.
slmgr.vbs /skms 169.254.169.250:1688
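After pointing the VM at the KMS endpoint, activation is typically completed with the standard slmgr switches; a minimal sketch of the full sequence:
rem Point the VM at the AWS KMS endpoint
slmgr.vbs /skms 169.254.169.250:1688
rem Activate Windows against that KMS
slmgr.vbs /ato
rem Display detailed license status to verify activation
slmgr.vbs /dlv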
Note: These instructions are indicative and list the lowest prices; review the AWS documentation
for up-to-date information on AWS pricing.
Note: The estimated cost displayed is for each node in your NC2 on AWS cluster. The total cost is a multiple of
the number of nodes in your NC2 on AWS cluster.
Note: If you do not want to run Microsoft Windows workloads on an NC2 on AWS cluster, you must not record your
intention to run Microsoft Windows Server while creating an NC2 on AWS cluster.
Note: While deploying Prism Central, you need to specify the CIDR of the subnet created for your NC2 cluster. You
can find this CIDR from your AWS console listed under IP Address Management > Network Prefix Length.
For more information about registering your cluster with Prism Central, see Registering Cluster with Prism
Central.
After you deploy Prism Central, perform the following additional networking and security configurations:
Procedure
1. Configure the name servers to host a network service for providing responses to queries against a directory
service, such as a DNS server. For more information, see Configuring Name Servers for Prism Central.
Note: Ensure that the name server IP address is the same as the one you entered during the deployment of Prism
Central.
2. Configure the NTP servers to synchronize the system clock. For more information, see Configuring NTP
Servers for Prism Central.
You can use:
• 0.pool.ntp.org
• 1.pool.ntp.org
• 2.pool.ntp.org
• 3.pool.ntp.org
3. Add an authentication directory. For more information, see Adding An Authentication Directory (Prism
Central).
4. Configure role permissions. For more information, see Assigning Role Permissions.
5. Configure SSL certificate management. For more information, see Importing an SSL Certificate.
6. Deploy a load balancer to allow Internet access. For more information, see Deploying a Load Balancer to Allow
Internet Access.
7. Create and associate AWS security groups with an EC2 instance to control outbound and inbound traffic. For
more information, see Controlling Inbound and Outbound Traffic Using Security Groups.
9. Register Prism Central with the Prism Element cluster. For more information, see Registering or Unregistering
Cluster with Prism Central.
What to do next
For more information about how to sign into the Prism Element web console, see Logging into a Cluster by Using
the Prism Element Web Console.
For more information about how to sign into the Prism Central web console, see Logging Into Prism Central.
Procedure
• Username: admin
• Password: Nutanix/4u
The default password is Nutanix/4u. You are prompted to change the default password if you are logging on for
the first time.
For more information, see Logging Into the Web Console.
What to do next
After you create a cluster in AWS, set up the network and security infrastructure in AWS and the cluster for your user
VMs. For more information, see User VM Network Management and Network Security using AWS Security
Groups.
Note: When you configure a Linux bastion host, ensure that you do the following:
• Open the EC2 console in the same region as the Nutanix cluster.
• When you are configuring an instance, ensure that you do the following:
• Under Network, change the default VPC to the same VPC being used by the Nutanix cluster
running on AWS.
• Under Subnet, select the subnet containing Nutanix Cluster xxxxxxxxx Public.
• Enable the Auto-assign Public IP option.
• You must restrict access to Management services (access to CVMs and AHV hosts) while configuring
the cluster. To do this, launch the NC2 console, click on the ellipsis for the cluster, and then click
Update Configuration. Select the Access Policy tab, and then select Restricted under
Management Services (Core Nutanix services running on this cluster).
Procedure
Note: You can either upload the key.pem file from your local machine to the host by using secure copy (scp), or
create a new .pem file on the host with the contents of the key.pem file (for example, by using vim key.pem), and
then run the chmod 400 key.pem command.
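For example, copying the key to the bastion host and restricting its permissions might look like the following sketch; the key file names and the bastion host address are placeholders for your own values.
# On your local machine: copy the cluster key to the bastion host.
scp -i bastion-key.pem key.pem ec2-user@<bastion-public-ip>:~/
# On the bastion host: restrict the key file permissions before using it with ssh.
chmod 400 key.pem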
What to do next
After you create a cluster in AWS, set up the network and security infrastructure in AWS and the cluster for your user
VMs. For more information, see User VM Network Management and Network Security using AWS Security
Groups.
Note: You cannot switch back from NCI licensing to AOS licensing. You cannot switch back from EUC licensing to
VDI licensing.
Nutanix also provides flexible subscription options that help you select a suitable subscription type and payment
method for NC2.
You can use the legacy portfolio licenses and pay using the Pay As You Go (PAYG) subscription plan for overages
above the legacy license capacity used.
For more information on the pricing that is used to charge for overages above legacy AOS license capacity, see NC2
pricing options.
For the new NCI licensing, NC2 does not charge for overages above the NCI license capacity used. For more details
on the new NCI licenses, see Nutanix Cloud Platform Software Options.
You can choose to be invoiced either directly by Nutanix or through your cloud marketplace account.
NC2 supports Advanced Replication and Security add-ons for NCI Pro and Nutanix Unified Storage (NUS) Pro, and
you have to manually apply these licenses to Prism Central managing your NC2 cluster. NC2 supports Advanced
Replication, Data-at-Rest Encryption, and Files add-ons for AOS (legacy) Pro, and you have to reserve capacity from
these licenses, after which they are automatically picked up and applied to your NC2 cluster.
The following table lists the combination of license types based on the software configuration and the subscription
plan available for these license types.
Note: Your NC2 cluster is enabled with AOS, NCI, VDI, or EUC licenses during the free trial. You can switch from
AOS to NCI licenses at any time; however, you cannot switch from NCI to AOS licenses. You can switch from VDI to
EUC licenses at any time; however, you cannot switch from EUC to VDI licenses.
You must deploy Prism Central and configure your NC2 cluster with that Prism Central in order to use NCI
licenses.
For more information on how to switch an already running cluster with AOS legacy licensing to NCI licensing, see
Applying NCI, EUC, and NUS Licenses.
Once you have configured Prism Central with the cluster, you can manually apply the NCI licenses to that Prism
Central to cover the cloud cluster usage.
Note: You can use the same Prism Central with both AOS and NCI-licensed clusters.
Applying cloud platform licenses, excluding NUS, requires that the cluster is running the minimum versions of the
following software:
• AOS 6.0.1.7
• Nutanix Cluster Check (NCC) 4.3.0
• Prism Central pc.2021.9
Applying NUS licenses requires that the cluster is running the minimum versions of the following software:
• AOS 6.1.1
• NCC 4.5.0
• pc.2022.4
Procedure
1. After the cluster is successfully deployed, register the cluster to a Prism Central instance.
Note: You can register this cluster to an existing Prism Central instance or deploy a new Prism Central on this
cluster.
For more information, see Registering Cluster with Prism Central and Installing a new Prism Central.
2. If you are using a free trial for NC2, you can select NCI, AOS, VDI, or EUC as the option during the free trial
period.
You can switch from the AOS to the NCI licensing option or from the VDI licensing to the EUC licensing at any
time. Make sure you follow the appropriate licensing instructions for legacy licenses or new portfolio licenses.
Note: You must perform this step on every NC2 cluster that uses the new portfolio licenses, for both general
purpose and VDI clusters.
Perform the following steps to change the license type from AOS to NCI:
4. If you already have the following licenses that you are ready to use, you can manually apply these licenses by
following the procedures described in Applying and Managing Cloud Platform Licenses.
Note: License reservation is required for AOS (legacy) licenses and the associated Advanced Replication and Data-at-
Rest Encryption add-ons. License reservation is not required for NCI licenses and the associated Advanced Replication
and Data-at-Rest Encryption add-ons, as you need to manually apply the NCI licenses.
You do not need to delete the license reservation when terminating an NC2 cluster if you intend to use the
same license reservation quantity for a cluster you might create in the future.
Procedure
1. Sign in to the Nutanix Support portal at https://portal.nutanix.com and then click the Licenses link on the
portal home page. You are redirected to the Licensing portal.
2. Under Licenses on the left pane, click Active Licenses and then click the Available tab on the All Active
Licenses page.
3. Select the licenses that you want to reserve for NC2 and then select Update reservation for Nutanix Cloud
Clusters (NC2) from the Actions list.
Note: This option becomes available only after you select at least one license for reservation.
5. Enter the number of licenses that you want to reserve in the Reserved for AWS and Reserved for Azure
columns for the license. The available licenses appear in the Total Available to Reserve column.
Procedure
1. Terminate your cluster from the NC2 console. For more information, see Terminating a Cluster.
2. Update the license reservation for the NC2 cluster under Reserved for AWS or Reserved for Azure columns
as 0 on the Licensing portal. For more information, see Modifying License Reservations.
3. Your license capacity is now available for use with any other Nutanix cluster, including on-prem clusters.
Managing Licenses
Follow these steps to manage licenses and change license type or add add-on products to your running
NC2 cluster.
Procedure
2. In the Clusters page, click the cluster name for which you want to update the add-on product selection.
4. Under Software Configuration, you can change your license tier from Pro to Ultimate or vice versa from the
Software Tier list.
5. Under Add-on Products, based on the cluster type (General Purpose or VDI cluster) and the license tier, the
available add-on products are displayed. Select or remove the add-on product based on your requirements.
6. Click Save.
Note: For the workspace you want to use to create an NC2 subscription, you must have the Account Admin role. The
default workspace that was created when you created a My Nutanix account has the Account Admin role. If you are
invited to a workspace, then you must get the Account Admin role so that you can subscribe to NC2 and access the
Admin Center and Billing Center.
Note: You can only reserve your legacy portfolio licenses. You must not reserve the new portfolio licenses, such as
NCI and EUC licenses. You need to apply these licenses to an NC2 cluster manually.
To learn more about how to reserve the legacy portfolio licenses, see Reserving License Capacity.
To learn more about how to manually apply new portfolio licenses, see Applying NCI, EUC, and NUS Licenses.
You can subscribe to NC2 from the My Nutanix dashboard > Administration > Billing Center > Launch. In the
Billing Center, under Nutanix Cloud Clusters, click Subscribe Now.
At the beginning of the subscription steps, you get the following options to cover your NC2 usage:
• Use your reserved license capacity: You can reserve your legacy portfolio licenses, such as AOS Pro, AOS
Ultimate, VDI Ultimate license, and associated add-ons for NC2 usage. These licenses are automatically applied
to the cloud clusters to cover their configuration and usage.
You still need to select a subscription plan to cover any overage above your reserved license capacity. You have a
choice of paying directly to Nutanix or using your cloud marketplace account to pay for NC2 software usage.
Note: Ensure that you have reserved enough license capacity for NC2 if you plan to use Nutanix licenses for NC2
usage.
• Use your subscription plan: You can use your paid subscription plan and pay directly to Nutanix or use your
cloud marketplace account.
Based on your preferences, you can use the following subscription workflows to pay for your NC2 software usage,
such as any overage above your reserved license capacity or invoices for your subscription plan.
• Nutanix Direct Subscription: Pay for your NC2 software usage directly to Nutanix.
For more information, see Nutanix Direct.
• Cloud Marketplace Subscription: Pay for your NC2 software usage through your cloud marketplace account.
For more information, see AWS Marketplace.
Nutanix Direct
Perform the following procedure to pay for NC2 on AWS and NC2 on Azure consumption with a Nutanix Direct
subscription plan:
Procedure
• On the My Nutanix dashboard, scroll down to Administration > Billing Center and click Launch. In the
Billing Center, under Nutanix Cloud Clusters, click Subscribe Now.
• On the NC2 console, click the Nutanix billing center link in the banner displayed on the top of the NC2
console.
You are directed to the Nutanix Billing Center.
• Select Yes, I would like to use Nutanix Licenses to cover NC2 usage if you want to use Nutanix
licenses for NC2. You must reserve the legacy license capacity from the Nutanix license portal or manually
apply new portfolio licenses to your NC2 cluster.
If you select this option, the licenses reserved or applied are used to cover the NC2 usage first, and any
overage is charged to the subscription plan you select in the next step.
• Select No, I don’t want to use my licenses. Invoice all NC2 usage to my subscription plan
option if you do not want to use any licenses for NC2. All NC2 usage will be charged to the subscription
plan that you select in the next step.
5. Next, the How would you like to pay for overage above any reserved license capacity? option is
presented.
• Pay directly to Nutanix: The NC2 software usage on all supported clouds (AWS and Azure) is paid to a
single subscription plan.
• Pay via Cloud Marketplace: The cloud marketplace subscription option is only available for NC2 on
Azure.
Select Pay directly to Nutanix and then click Next.
Legacy License Portfolio: You can click Reserve existing licenses on the Support Portal to reserve
licenses for the NC2 usage. To learn more about how to reserve the legacy portfolio licenses, see Reserving
License Capacity.
New Portfolio Licenses: To learn more about how to manually apply new portfolio licenses, see Applying
NCI, EUC, and NUS Licenses.
Select the Pay As You Go (For NC2 on AWS and Azure) payment plan for your Nutanix cluster. With this
plan, you are billed at the end of each month for the NC2 usage for that month, without any term commitments.
Click Next.
8. On the Company Details page, type the details about your organization and then click Next.
Nutanix Cloud Services considers the address that you provide in the Address 1 and Address 2 fields as the
Bill To Address and uses this location to determine your applicable taxes.
If the address where you consume the Nutanix services is different than your Bill To Address, under the
Sold to Address section, clear the Same information as provided above checkbox and then provide the
address of the location where you use the Cloud services. However, only the Bill To Address is considered to
determine your applicable taxes.
9. On the Payment Method page, select one of the following payment methods, and then click Next.
11. (Optional) If you have received a promotional code from Nutanix, type the code in the Promo code field and
click Apply.
What to do next
You can now begin using NC2.
You can do one of the following:
AWS Marketplace
Nutanix provides a convenient and cost-beneficial way to pay for NC2 through AWS Marketplace. You can work
with your Nutanix Account Manager and Nutanix reseller to get a discounted private offer for Nutanix licenses or
subscription plan and pay for the following new portfolio licenses included in your discounted private offer through
AWS Marketplace:
Note: Any overages above the license capacity purchased through AWS Marketplace will also be billed through AWS
Marketplace, and the same discounted rate used for the initial license purchase through AWS Marketplace will be used
to calculate the billable amount for overages. The overages will be billed and invoiced monthly by AWS.
You must manually apply new portfolio licenses to Prism Central to manage your NC2 clusters. For more
information, see Applying NCI, EUC, and NUS Licenses.
Perform the following steps to subscribe to NC2 from AWS Marketplace:
1. Contact your Nutanix Account Manager with your NC2 sizing requirements, such as the number of licenses
required and the term for usage.
Your Nutanix Account Manager works with a Nutanix reseller, if applicable, to create customized pricing and
convert that into a private offer in AWS Marketplace. Once the offer is ready for you to accept through AWS
Marketplace, you will receive an email from the Nutanix reseller with the private offer details, including the
pricing that is specific to you.
Note: You need to provide your AWS billing account details to the Nutanix Account Manager. You can find
your billing account ID in the AWS Management Console.
2. Sign in to the AWS Marketplace console and click the Private Offer URL in the email you receive from the
Nutanix reseller.
Alternatively, in the AWS Marketplace console, navigate to the Private offers page > Available offers >
select the Offer ID for the offer of interest, and click View offer.
You are redirected to the Nutanix Cloud Clusters (NC2) listing page, where you need to configure your
software contract.
4. Under How long do you want your contract to run?, review the tenure of your contract.
5. Under Dates, review the Service start date, Service end date, and Offer expiration date. You must
accept the offer before the offer expiration date.
7. Under Additional usage fees, review the pay-as-you-go monthly charges for additional usage.
You will be charged this rate for any NC2 usage on AWS above the license capacity you purchase.
11. After successful payment, click Set up your account to set up your billing subscription with NC2.
12. You are redirected to the Nutanix Billing Center to complete your NC2 Billing configuration.
Note: If you do not already have an existing My Nutanix account, you must sign up for a new My Nutanix
account and verify the email address used to sign up for My Nutanix. After verifying your email address, you will
be automatically redirected to My Nutanix Billing Center. For more information, see Creating My Nutanix
Account.
14. Select the correct workspace from the Workspace list on the My Nutanix dashboard.
The workspace must be the same workspace you used when creating NC2 clusters. For more information on
workspaces, see Workspace Management.
15. Click Add Addresses to add your billing address and the address where the NC2 subscription will be used.
Procedure
2. Select the correct workspace from the Workspace dropdown list on the My Nutanix dashboard. For more
information on workspaces, see Workspace Management.
3. On the My Nutanix dashboard, go to Administration > Billing Center and click Launch.
6. In the Cancel Plan dialog, click Yes, Cancel to cancel the subscription plan or click Nevermind to close the
Cancel Plan dialog.
7. In the Share Your Feedback dialog, you can specify your reasons to cancel the plan, and click Send.
What to do next
Your plan is deactivated at the end of the current billing schedule. The Cancel Plan dialog displays the date on
which your plan is scheduled to be deactivated.
Note: You can revoke the cancellation of your plan at most two times before the plan is deactivated.
Note: Only the primary billing contact can modify any billing or subscription details.
• If you have applied the Nutanix software licenses, you can change the licenses allocated to NC2.
• View details about the unbilled amount for the current month.
• View details of usage, such as rate, quantity, and the amount charged for each entity (CPU hours, public IP
address hours, disk size, and memory hours) for each cluster.
For more information on how to manage billing, see Nutanix Cloud Services Administration Guide.
• Details about the rate, quantity, and amount charged per unit for a selected billing cycle. You can check the details
for the current and last two billing cycles.
• Details about the usage of clusters by units of measure for a selected billing cycle.
Perform the following procedure to display the billing and usage details of NC2:
1. Sign in to your My Nutanix account.
• Spend: Displays a graph detailing your estimated daily spending for a selected billing cycle. You can check
details for the current and last two billing cycles. You can apply filters to the graph for individual units of
measure. A summary table with detailed information about the current billing cycle is also displayed.
• Usage: Displays an estimate of your total usage for the billing cycle that you select. You can filter the usage
by clusters and units of measure. Individual units of measure are a breakdown of total usage on the latest day
of the billing cycle that you select. You can apply filters to see more details, such as usage information of each
cluster and find out whether a usage is processed through licensing or subscription.
Select the billing period on the top-right corner of the usage graph to see the total usage for the selected billing
cycle in the form of a graph.
Under Usage broken down by individual units of measure, click Clusters, and then select a cluster
ID and choose a unit of measure to see the total usage of each cluster for a selected billing cycle in a graphical
view. Hover over the bars in the graph to see the number of licenses and subscriptions you used.
Click Units and select a unit of measure to see the total usage of all the clusters by that unit of measure.
A breakdown of the total usage of the same billing cycle you selected is displayed in a table after the graph.
You can view the usage graph for three billing cycles.
• Name: Enter a unique name for your API key to help you identify the key.
• Scope: Select the Usage Analytics scope category under Billing from the Scope drop-down list.
e. Click Create. The Created API dialog is displayed.
Note: You cannot recover the generated API key and key ID after you close this dialog.
For more details on API Key management, see the API Key Management section in the Licensing Guide.
Note: This step uses Python to generate a JWT token. You can use other programming languages, such as
JavaScript and Go.
b. Replace the API key and key ID in the following Python script and then run it to generate a JWT token.
You can also specify the expiry time, in seconds, for which the JWT token remains valid. In the requesterip
attribute, enter the requester IP address.
from datetime import datetime
from datetime import timedelta
import base64
import hashlib
import hmac
import jwt

# Replace these placeholder values with the API key and key ID that you
# generated, and with the audience URL for the usage API.
api_key = "enter your API key"
key_id = "enter your key ID"
aud_url = "enter the audience URL"

def generate_jwt():
    curr_time = datetime.utcnow()
    payload = {
        "aud": aud_url,
        "iat": curr_time,
        # Expiry time, in seconds, for which the JWT token remains valid.
        "exp": curr_time + timedelta(seconds=120),
        "iss": key_id,
        "metadata": {
            "reason": "fetch usages",
            "requesterip": "enter the requester IP",
            "date-time": curr_time.strftime("%m/%d/%Y, %H:%M:%S"),
            "user-agent": "datamart",
        },
    }
    # Derive the signing secret from the API key and key ID.
    signature = base64.b64encode(
        hmac.new(bytes(api_key, 'UTF-8'), bytes(key_id, 'UTF-8'),
                 digestmod=hashlib.sha512).digest())
    token = jwt.encode(payload, signature, algorithm='HS512',
                       headers={"kid": key_id})
    print("Token (Validate): {}".format(token))

generate_jwt()
c. A JWT token is generated. Copy the JWT token to your system for further use. The JWT token can be used as
an Authorization header when validating the API call. The JWT token remains valid for the duration that you
specified.
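For illustration, the token is typically passed as a bearer token in the Authorization header; the endpoint URL below is a hypothetical placeholder, not the actual usage API address.
curl -H "Authorization: Bearer <JWT token>" "https://<usage-api-endpoint>/<resource>"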
• You create UVM networks by specifying a CIDR value that matches the CIDR value of the AWS subnet.
• NC2 supports only AHV managed networks.
• UVMs use only the DHCP servers provided by the cluster.
• You do not need to specify the VLAN ID when you are creating a network.
• AWS Gateway is used as the default gateway for the UVM networks and cannot be changed.
Nutanix clusters consume the AWS subnets from Prism Element. You must add the AWS subnets you created
for UVMs as networks by using the Prism Element web console. Before you create networks for UVMs in Prism
Element, create the AWS subnets manually by using the AWS console, an AWS CloudFormation template, or any
other tool of your choice.
Nutanix recommends the following:
Note: In NC2 on AWS with AOS 6.6.x, while creating a subnet from the Settings > Network Configuration >
Subnets tab, the list of (AWS) Cloud Subnets does not appear. As a workaround, you can add the Cloud Subnets
using Network Prefix Length and Gateway IP Address based on Cloud Subnet CIDR.
Procedure
2. You can navigate to the Create Subnet dialog box in any of the following ways:
• Network Prefix Length: Associated VPC CIDR of the cloud subnet that you have selected.
• Gateway IP Address: Gateway IP address of the cloud subnet that you have selected.
Note: IP Address Management is enabled by default and indicates that the network is an AHV managed
network. AHV networking stack manages the IP addressing of the UVMs in the network.
• Network IP Prefix: The associated VPC CIDR of the cloud subnet that you have selected is populated.
• Start Address: Enter the starting IP address of the range.
• End Address: Enter the ending IP address of the range.
• Click Submit to close the window and return to the Create Subnet dialog box.
f. Under DHCP Settings, provide the following details:
• DHCP Settings: Select this checkbox to define a domain. When this checkbox is selected, the fields to
specify DNS servers and domains are displayed. Clearing this checkbox hides those fields.
• Domain Name Servers (comma separated): Enter a comma-delimited list of DNS servers. If you
leave this field blank, the cluster uses the IP address of the AWS VPC DNS server.
• Domain Search (comma separated): Enter a comma-delimited list of domains.
• Domain Name: Enter the domain name.
• TFTP Server Name: Enter the hostname or IP address of the TFTP server from which virtual machines
can download a boot file. It is required in a Pre-boot eXecution Environment (PXE).
• Boot File Name: Enter the name of the boot file to download from the TFTP server.
4. Click Save to configure the network connection and close the Create Subnet dialog box.
Procedure
2. Click the entities menu in the main menu, expand Network & Security, and then select Subnets. The
Subnets window appears.
Note: Ensure that you do not use the Create Subnet option displayed adjacent to the Network Config option
on the Subnets window.
4. On the Create Subnet dialog box, provide the required details in the indicated fields:
• IP Address Management: When you select the cloud subnet, the following details are populated under IP
Address Management:
• Network Prefix Length: Associated VPC CIDR of the cloud subnet that you have selected. This maps to
the CIDR block on the Cloud subnet.
• Gateway IP Address: Gateway IP address of the cloud subnet that you have selected.
Note:
IP Address Management is enabled by default and indicates that the network is an AHV managed
network. AHV networking stack manages the IP addressing of the UVMs in the network.
• Network IP Prefix: The associated VPC CIDR of the cloud subnet that you have selected is populated.
• Start Address: Enter the starting IP address of the range.
• End Address: Enter the ending IP address of the range.
• Click Submit to close the window and return to the Create Subnet dialog box.
• DHCP Settings: Select the DHCP Settings checkbox to define a domain. When this checkbox is selected,
the fields to specify DNS servers and domains are displayed. Provide the following details:
• Domain Name Servers (comma separated): Enter a comma-delimited list of DNS servers. If you
leave this field blank, the cluster uses the IP address of the AWS VPC DNS server.
• Domain Search (comma separated): Enter a comma-delimited list of domains.
• Domain Name: Enter the domain name.
• TFTP Server Name: Enter the hostname or IP address of the TFTP server from which virtual machines
can download a boot file. It is required in a Pre-boot eXecution Environment (PXE).
• Boot File Name: Enter the name of the boot file to download from the TFTP server.
5. Click Save to configure the network connection and close the Create Subnet dialog box.
Procedure
• Click the gear icon in the main menu and select Network Configuration in the Settings page. The
Network Configuration window appears.
• Go to the VMs dashboard and click the Network Config button.
3. On the Network Configuration window, select the UVM network you want to update and click the pencil icon
on the right.
The Update Network dialog box appears, which contains the same fields as the Create Network dialog box
(see Creating a UVM Network using Prism Element on page 108).
5. Click Save to update the network configuration and return to the Network Configuration window.
6. To delete a UVM network, in the Network Configuration window, select the UVM network you want to delete
and click the X icon (on the right).
A window prompt appears to verify the action; click OK. The network is removed from the list.
Note: This operation does not delete the AWS subnet associated with the UVM network.
Procedure
2. Click the entities menu in the main menu, expand Network & Security, and then select Subnets. The
Subnets window appears.
3. On the Subnets window, click Network Config. The Network Configuration dialog box appears. On the
Network Configuration window, select the UVM network you want to update and click the pencil icon on the
right.
The Update Subnet dialog box appears, which contains the same fields as the Create Subnet dialog box. See
Creating a UVM Network using Prism Central on page 111.
6. To delete a UVM network, in the Network Configuration window, select the UVM network you want to delete
and click the X icon on the right.
A window prompt appears to verify the action; click OK. The network is removed from the list.
Note: This operation does not delete the AWS subnet associated with the UVM network.
See the Command Reference guide for detailed information about how to block an IP address on a managed
network.
The cluster does not use the IP addresses blocked by using AHV IPAM for any UVM vNIC assignments.
Note: ENIs can have up to 49 secondary IP addresses, and NC2 shares ENIs for vNIC IP addresses until
the ENI IP address capacity is reached.
Bare-metal instances support up to 15 ENIs. One ENI is dedicated to AHV or CVM connectivity, and the remaining
14 ENIs are dynamically created as UVMs are powered on or migrated to the AHV node. Note that an ENI belongs to
a single AWS subnet, so UVMs from more than 14 subnets on a given AHV node are not supported.
To learn more about the number of AWS ENIs on bare metal instances, see AWS Documentation.
Procedure
3. In the Create VM dialog box, scroll down to Network Adaptors (NIC) and click Add New NIC.
4. In the Network Name drop-down list, select the UVM network to which you want to add the vNIC.
5. Select (click the radio button for) Connected or Disconnected to connect or disconnect the vNIC to the
network.
6. The Network Address / Prefix is a read-only field that displays the IP address and prefix of the network.
7. In the IP address field, enter an IP address for the NIC if you want to manually assign an IP address to the
vNIC.
This field is optional. Clusters in AWS support only managed networks; therefore, an IP address is automatically
assigned to the vNIC if you leave this field blank.
Note: See the AWS documentation for instructions about how to perform these tasks.
Procedure
2. Create a NAT gateway, associate the gateway with the public subnet, and assign a public Elastic IP address to the
NAT gateway.
3. Create a route table and add a route to that route table with the target as the NAT gateway (created in step 2).
4. Associate the route table you created in step 3 with the private subnet you have created for UVMs.
6. Create a UVM network as described in Creating a UVM Network using Prism Element on page 108.
7. Go to the UVM in the Prism Element web console and add a vNIC to the UVM by using the AWS private subnet
as described in Adding a Virtual Network Interface (vNIC) to a User VM on page 116.
Your UVM can now access the internet.
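The AWS-side setup can also be scripted with the AWS CLI. The following is a minimal sketch of steps 2 through 4 above; all resource IDs are placeholders for your own values.
# Allocate an Elastic IP and create the NAT gateway in the public subnet.
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-public0123 \
    --allocation-id eipalloc-0123456789abcdef0
# Create a route table with a default route through the NAT gateway.
aws ec2 create-route-table --vpc-id vpc-0123456789abcdef0
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0123456789abcdef0
# Associate the route table with the private UVM subnet.
aws ec2 associate-route-table --route-table-id rtb-0123456789abcdef0 \
    --subnet-id subnet-private0123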
Note: Additional AWS charges might apply for the use of a network load balancer. Check with your AWS
representative before you create a network load balancer.
Procedure
Note: Make sure the port you want to access is open in an inbound policy of the security group associated
with bare-metal instances of the cluster.
Note: Additional AWS charges might apply if you use the network load balancer. Check with your AWS
representative before you create a network load balancer.
Perform the following procedure to set up the network load balancer in AWS.
Procedure
Note: If you choose a private subnet in the VPC, the Prism Element or Prism Central cannot be accessed
from the Internet.
Note: Make sure the port you want to access is open in an inbound policy of the security group associated
with bare-metal instances of the cluster.
The IP address you choose for the target group must be one of the CVM IP addresses that you can
see on the NC2 portal.
Note: Nutanix recommends you manually blacklist the virtual IP address configured on Prism Central to
avoid IP address conflicts.
What to do next
Note down the DNS name of the load balancer. To find the DNS name, open the load balancer on your
AWS console and then navigate to Description > Basic Configuration. Then to get the IP address of
the load balancer, navigate to Network & Security > Network Interfaces > search the name of the load
balancer and then copy the Primary Private IPv4 address. You would need the load balancer IP address
while modifying the inbound rules under the UVM security group.
Procedure
2. Filter and select the cluster node on which the Prism Central is deployed, and then click the Security tab.
4. For the selected UVM security group, in the Inbound rules tab, click Add rule, and then enter the TCP port as
9440 and the custom source IP as the load balancer IP.
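The equivalent inbound rule can also be added with the AWS CLI; the security group ID and the load balancer IP address below are placeholders for your own values.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 9440 \
    --cidr 10.0.1.25/32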
Note: These default security groups are created for each cluster. Amending security group rules in one cluster does not
affect the security group rules in another cluster. When you amend inbound and outbound rules within the default UVM
security group, the policies are applied to all UVMs that are part of the cluster.
You can also create custom security groups to more granularly control traffic to your NC2 environment. You can:
• create a security group that applies to the entire VPC, if you want the same security group rules applied to all
clusters in that VPC.
• create a security group that applies to a specific cluster if you want certain security group rules applied only to that
particular cluster.
• create a security group that applies to a subset of UVMs in a specific cluster, if you want certain security group
rules to apply only to those subsets of UVMs.
You must configure Prism Central VM security group and all the UVM security groups in a way that allows
communication between Prism Central VM and UVMs. In a single cluster deployment, the Prism Central VM and
UVM communication is open by default. However, if your Prism Central is hosted on a different NC2 cluster,
then you must allow communication between the Prism Central VM on the cluster hosting Prism Central and the
management subnets of the remaining NC2 clusters.
You do not need to configure security groups for communication between the CVM of the cluster hosting Prism
Central and Prism Central VM.
You cannot deploy Prism Central in the Management subnet. You must deploy Prism Central in a separate subnet.
Suppose your Prism Central is hosted on a different NC2 cluster (say, NC2-Cluster2). In that case, you must modify
the security groups associated with the management subnet on NC2-Cluster1 to include inbound and outbound
security group rules for communication between the Prism Central subnet on NC2-Cluster2 and Management subnet
on NC2-Cluster1. This might extend to management subnets across multiple clusters managed by the same Prism
Central.
Note: Ensure that all AWS subnets used for NC2, except the Management subnet, use the same route table. For more
information on AWS route tables, see AWS documentation.
For more information on the ports and endpoints the NC2 cluster needs, see Ports and Endpoint Requirements.
For more details on the default Internal management, User management, and UVM security groups, see Default
Security Groups. For more information on creating custom security groups, see Custom Security Groups.
Perform the following steps to control inbound and outbound traffic:
1. Determine if you want to use the default UVM security group to control inbound and outbound traffic for all
UVMs in the cluster or if you want more granular control over UVM security rules with different security groups
for different UVMs.
2. Edit the default UVM security group to add inbound and outbound rules if you want those rules to apply to all
UVMs on your cluster.
3. You may also create additional custom security groups for more granular control of traffic flow in your NC2
environment:
1. Create a security group in AWS.
2. Add appropriate tags to the security group. For more details on the tags needed with custom security groups,
see Custom Security Groups.
3. Add rules to enable or restrict inbound and outbound traffic.
The Internal management, User management, and UVM security groups have the recommended default rules set
up by NC2 at cluster creation. All management ENIs created, even after initial cluster deployment, have the default
Internal management (internal_management) and User management (user_management) security groups
attached.
Note: Nutanix recommends that you do not modify Internal management and User management security groups or
change any security group attachments.
All elastic network interfaces (ENIs) for CVMs and the EC2 bare-metal hosts are present on the private Management
subnet.
All UVMs on a cluster are associated with the default UVM security group unless you create additional UVM
security groups. The default UVM security group controls all traffic that enters the ENIs belonging to the UVM
subnets. Additional custom security groups can be created to control traffic at the VPC, individual cluster, or UVM
subnet levels.
To allow communication from external sources to the UVMs, you must modify the default UVM security group to
add new inbound rules for the source IP addresses and the load balancer IP addresses.
Note: Each cluster in the same VPC has its own default security group. When you amend inbound and outbound rules
within the default UVM security group, the policies are applied to all UVMs that are part of that cluster.
Note: NC2 supports creating custom security groups when the cluster uses AOS 6.7 or higher.
A custom security group at the VPC level is attached to all ENIs in the VPC. A custom security group at the cluster
level is attached to all ENIs of the cluster. Custom security groups at the UVM subnet level are attached to all ENIs of
all specified UVM subnets.
You can use custom security groups to apply security group rules across all clusters in a VPC or a specific cluster
or a subset of UVM Subnets in a specific cluster. A custom security group per UVM subnet can be beneficial when
controlling traffic for specific UVMs or restricting traffic between UVMs from different subnets. To support custom
security groups at the UVM subnet level, NC2 assigns tags with key-value pairs that can be used to identify the
custom security groups. For more information about default security groups for internal management and UVMs, see
Default Security Groups.
Note: To increase the custom security groups quota beyond the default limit, you must add the
GetServiceQuota permission to the Nutanix-Clusters-High-Nc2-Cloud-Stack-Prod IAM role. To change the
permissions and policies attached to the IAM role, sign in to the AWS Management Console, open the IAM console at
https://console.aws.amazon.com/iam/, and choose Roles > Permissions. For more information, see the AWS
documentation.
Figure 70: GetServiceQuota permission
The default AWS service quota allows you to create a maximum of five custom security groups per ENI. Out of the
five security groups per ENI quota, one is used for the default UVM security group. You can add only one custom
security group at the VPC level and one custom security group at the cluster level. You can add the remaining custom
security groups at the UVM subnet level.
For example, if you create one custom security group at the VPC level and one at the cluster level, you can create
two security groups at the UVM subnet level, assuming you have the default AWS Service quota limit of 5 security
groups per ENI. Similarly, if you create one security group at the cluster level and no security group at the VPC level,
you can create three security groups at the UVM subnet level.
Note: If you need more security groups, you can contact AWS support to increase the number of security groups per
ENI in your VPC.
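If you prefer the AWS CLI, a quota increase request might look like the following sketch. The quota code shown for the Security groups per network interface quota is an assumption; verify the current service and quota codes in the Service Quotas console before submitting a request.
aws service-quotas request-service-quota-increase \
    --service-code vpc --quota-code L-2AFB9258 --desired-value 10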
The following table lists the AWS tags for custom security groups and the level at which these security groups can
be applied. These three tags have a hierarchical order that defines the order in which the security groups with these
tags are honored. A higher hierarchical tag is a prerequisite for the lower hierarchical tag, and therefore the higher
hierarchical tag must be present in the security group with the lower hierarchical tag. For example, if you use the
networks tag (the lowest hierarchical tag) for a security group, both the cluster-uuid (middle hierarchical) tag and
external (higher hierarchical) tag must also be present in that security group. Similarly, if you add the cluster-uuid
tag, the external tag must be present in that security group.
For example, if you want to create a security group to apply rules to all clusters in a certain VPC, you must attach the
following tag to the security group. The tag value can be left blank:
Table 13: Tag example for VPC-level security group
The following figure shows an example of tags applied for the custom security group at the VPC level.
If you want to create a security group to apply rules to a cluster with UUID 1234, then you must apply both of these
tags to the security group:
The following figure shows an example of tags applied for the custom security group at the cluster level.
Figure 72: Example for cluster-level security group
If you want to create a security group to apply rules to a UVM subnet 10.70.0.0/24 in a cluster with UUID 1234, then
you must apply all three of these tags to the security group:
The following figure shows an example of tags applied for the custom security group at the subnet level.
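As an illustrative sketch only, creating and tagging such a subnet-level security group with the AWS CLI might look like the following. The group name, VPC ID, security group ID, and tag values are placeholders; the short tag keys (external, cluster-uuid, networks) follow the names used in the prose above, so confirm the exact key strings and value formats from the tag table before use.
# Create the custom security group in the cluster's VPC.
aws ec2 create-security-group \
    --group-name nc2-subnet-sg \
    --description "Custom SG for UVM subnet 10.70.0.0/24" \
    --vpc-id vpc-0123456789abcdef0
# Apply all three hierarchical tags for a subnet-level security group.
aws ec2 create-tags --resources sg-0123456789abcdef0 --tags \
    Key=external,Value="" \
    Key=cluster-uuid,Value=1234 \
    Key=networks,Value=10.70.0.0/24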
Ports and Endpoints Requirements
This section lists the ports and endpoint requirements for the following:
• Outbound communication
• Inbound Communication
• Communication to UVMs
For more information on the general firewall support requirements, see the Port and Protocols guide.
Note: Many of the destinations listed here use DNS failover and load balancing. For this reason, the IP address
returned when resolving a specific domain may change rapidly. Nutanix cannot provide specific IP addresses in place of
domain names.
Table 17: Cluster Outbound to EC2
Figure 74: Inbound Rules in User Management Security Group
Description: Prism Central to Prism Element communication
Protocol: TCP
Port Number: 9300 and 9301
Source (User Management Security Group): default: allow

Note: You must manually open these ports in the default UVM security group.
CLUSTER MANAGEMENT
Modify and update clusters, manually replace hosts, display AWS events, and hibernate, resume, and delete NC2
clusters running on AWS by using the NC2 console.
• i3.metal, i3en.metal, i4i.metal: Any combination of these instance types can be mixed, subject to the bare-metal
availability in the region where the cluster is being deployed.
• z1d.metal, m5d.metal, m6id.metal: Any combination of these instance types can be mixed, subject to the bare-
metal availability in the region where the cluster is being deployed.
For more details, see Creating a Heterogeneous Cluster.
Note: The tasks to add or remove nodes are executed sequentially while updating the capacity of a cluster.
Note: You must update the cluster capacity by using the NC2 console only. Support to update the cluster capacity by
using the Prism Element web console is not available.
When expanding an NCI cluster beyond what the NCI license covers, you need to purchase and manually
apply additional license capacity. Contact your Nutanix account representative to purchase additional
license capacity.
Procedure
2. In the Clusters page, click the name of the cluster for which you want to update the capacity.
• Host type. The instance type used during initial cluster creation is displayed.
• Number of Hosts. Click + or - depending on whether you want to add or remove nodes from the cluster.
Note: A maximum of 28 nodes is supported in a cluster. NC2 supports 28-node cluster deployment in AWS
regions that have seven placement groups. Also, there must be at least three nodes in a cluster for RF2 and five
nodes for RF3.
Nutanix recommends that the number of hosts match the RF number, or a multiple of the RF number,
selected for the base cluster.
• Add Host Type: Depending on the instance type used for the cluster, the other compatible instance types are
displayed. For example, if you have used i3.metal node for the cluster, then i3en.metal, and i4i.metal instance
types are displayed.
Note: You can create a heterogeneous cluster using a combination of i3.metal, i3en.metal, and i4i.metal
instance types or z1d.metal, m5d.metal, and m6id.metal instance types.
The Add Host Type option is disabled when no compatible node types are available in the region where the
cluster is deployed.
Note: UVMs that have been created and powered ON in the original cluster running a specific node type or a
combination of compatible node types, as listed below, cannot be live migrated across different node types when
other nodes are added to the cluster. After successful cluster expansion, power all UVMs OFF and ON again to
enable live migration.
• If z1d.metal is present in the heterogeneous cluster either as the initial node type of the cluster or as
the new node type added to an existing cluster.
• If i4i.metal is the initial node type of the cluster and any other compatible node is added.
• If m6id.metal is the initial node type of the cluster and any other compatible node is added.
• If i3en.metal is the initial node type of the cluster and the i3.metal node is added.
• RF 1. Data is not replicated across the cluster for RF1. The minimum cluster size is 1.
• RF 2. The number of copies of data replicated across the cluster is 2. The minimum cluster size is 3.
• RF 3. The number of copies of data replicated across the cluster is 3. The minimum cluster size is 5.
6. Under Service Quotas, the service quotas for AWS resources under your AWS quota are displayed. Click
Check quotas to verify the cluster creation or expansion limits.
7. Click Save. The Increase capacity? or Reduce capacity? dialog appears based on your choice to expand or
shrink the cluster capacity in the previous steps.
8. Click Yes, Increase Capacity or Yes, Reduce Capacity to confirm your action.
Note: The cluster expansion to the target capacity might fail if AWS does not have enough nodes available in the
current region. The NC2 console automatically retries to provision the nodes. If the provisioning error persists,
check with your AWS account representative to ensure enough nodes are available from AWS in your target AWS
region and Availability Zone.
Ensure that all VMs on the nodes you want to remove are turned off before performing the node
removal task.
You can cancel any pending operations to expand the cluster capacity and try to expand the cluster
capacity with a different instance type. See Creating a Heterogeneous Cluster for more details.
What to do next
For more information when you see an alert in the Alerts dashboard of the Prism Element web console or
if the Data Resiliency Status dashboard displays a Critical status, see Maintaining Availability: Node and
Rack Failure.
Note: If a host becomes unhealthy and you add another host to the cluster to evacuate data or VMs, AWS charges you
additionally for the new host.
Procedure
3. In the Hosts page, click the ellipsis of the corresponding host you want to replace, and click Replace Host.
What to do next
For more information when you see an alert in the Alerts dashboard of the Prism Element web console or
if the Data Resiliency Status dashboard displays a Critical status, see Maintaining Availability: Node and
Rack Failure.
• NC2 on AWS supports a combination of i3.metal, i3en.metal, and i4i.metal instance types or z1d.metal,
m5d.metal, and m6id.metal instance types. The AWS region must have these instance types supported by NC2 on
AWS. For more information, see Supported Regions and Bare-metal Instances.
Note: You can only create homogeneous clusters with g4dn.metal; it cannot be used to create a heterogeneous
cluster.
• Nutanix recommends that the minimum number of additional nodes be equal to or greater than your cluster's
redundancy factor (RF), and that the cluster be expanded in multiples of the RF. A warning is displayed if the
number of additional nodes is not evenly divisible by the RF number.
• UVMs that have been created and powered ON in the original cluster running a specific node or a combination of
compatible nodes, as listed below, cannot be live migrated across different node types when other nodes are added
to the cluster. After successful cluster expansion, all UVMs must be powered OFF and powered ON to enable live
migration.
• If z1d.metal is present in the heterogeneous cluster either as the initial node type of the cluster or as the new
node type added to an existing cluster.
• If i4i.metal is the initial node type of the cluster and any other compatible node is added.
• If m6id.metal is the initial node type of the cluster and any other compatible node is added.
• If i3en.metal is the initial node type of the cluster and the i3.metal node is added.
• You can expand or shrink the cluster with any number of i3.metal, i3en.metal, and i4i.metal instance types or
z1d.metal, m5d.metal, and m6id.metal instance types as long as the cluster size remains within the cap of a
maximum of 28 nodes.
Note: You must update the cluster capacity using the NC2 console. You cannot update the cluster capacity using the
Prism Element web console.
For more information on how to add two different node types when expanding a cluster, see Updating the Cluster
Capacity.
You can use a gateway endpoint for connectivity to Amazon S3 without using an internet gateway or a NAT device
for your VPC. For more information, see AWS VPC Endpoints for S3.
The hibernate and resume feature is generally available with AOS 6.5.1. All previously hibernated clusters
running AOS 6.0.1 or earlier versions must be resumed once and then upgraded to AOS 6.5.1 or a later version
before they can be hibernated again. To use the GA version of this feature, upgrade to AOS 6.5.1 or later.
After you hibernate your cluster, you will not be billed for any Nutanix software usage or the AWS bare-metal
instance for the duration the cluster is in the hibernated state. However, you may be charged by AWS for the data
stored in Amazon S3 buckets for the duration the cluster is hibernated. For more information about the Amazon S3
billing, see the AWS documentation.
NC2 does not consume any of your reserved license capacities while a cloud cluster is in the hibernated state. Once a
cloud cluster is resumed, an appropriate license will be automatically applied to the cluster from your reserved license
pool, provided that enough reserved capacity is available to cover your cluster capacity. To learn more about license
reservations for cloud clusters, visit Reserving License Capacity on page 80.
You can hibernate and resume single-node clusters and clusters with three or more nodes.
Note: You cannot hibernate the clusters that are protected by the Cluster Protect feature. You must stop protecting the
cluster before triggering hibernation.
You cannot hibernate a cluster if any of the following conditions are met:
For more architectural details on the hibernate/resume operation, visit the Tech Note for NC2 on AWS.
Note: Encryption is enabled by default on all S3 buckets used for hibernation.
2. Click on the cluster that you want to hibernate. The cluster summary page will open.
4. In the Hibernate cluster "Cluster Name" dialog box, review the hibernation guidelines and limitations, and
then type the name of the cluster in the text box.
Note: Your data is retained in S3 buckets for six days after a successful resume operation.
When a hibernated cluster is resumed, it returns to the same licensing state it had before entering
hibernation. The IP addresses of hosts and CVMs remain the same as pre-hibernate.
Procedure
• Do not attempt failover, failback, VM restore or create new DR configurations during hibernate or resume. Any
such running operations might fail if you start hibernating a cluster.
• Disable SyncRep schedules from Prism Central for a cluster that is used as a source or target for SyncRep before
hibernating that cluster. Failure to do so might result in data loss.
• Ensure that no ongoing synchronous or asynchronous replications are happening when you initiate the cluster
hibernation.
• Disable existing near-sync/minutely snapshots and do not configure new minutely snapshots during the hibernate
or resume operation. You may have to wait until the data of the minutely snapshots gets garbage collected before
trying to hibernate again. The waiting period could be approximately 70 minutes.
• Remove remote schedules of protection policies and suspend remote schedules of protection domains targeting a
cluster until the cluster is hibernated.
• Snapshot retention is not guaranteed after the cluster has been resumed. Long-term snapshot retention is subject to
hibernate and resume durations and retention policies.
• If a node in the cluster goes down or degrades, or the CVM goes down, the hibernate or resume operation might
not succeed.
• Hibernate and resume works for Autonomous Extent Store (AES) containers only. For NC2 on AWS, every
container is automatically enabled with AES.
Note: You must only terminate the clusters from the NC2 console and not from your public cloud console. If you try
to terminate the cluster or some nodes in the cluster from your cloud console, then NC2 will continue to attempt to re-
provision your nodes in the cluster.
You do not need to delete the license reservation when terminating an NC2 cluster if you intend to use the
same license reservation quantity for a cluster you might create in the future.
Note: Ensure that the cluster on which Prism Central is deployed is not deleted if Prism Central has multiple Prism
Elements registered with it.
Procedure
2. Go to the Clusters page, click the ellipsis in the row of the cluster you want to terminate, and click Terminate.
3. In the Terminate tab, select the confirmation message to terminate the cluster.
Note: Multicast traffic is disabled by default in NC2. You can enable multicast traffic for each cluster so that clusters
running in AWS do not drop the multicast traffic egressing from AHV.
For more information on multicast concepts, see Multicast on transit gateways - Amazon VPC, and on how
to manage multicast domains and groups, see Managing multicast domains - Amazon VPC and Managing
multicast groups - Amazon VPC.
For multicast traffic to work in NC2, IGMP snooping must be enabled on AHV so that AHV can send multicast
traffic to only subscribed UVMs. If IGMP snooping is disabled, AHV will send multicast traffic to all UVMs, which
might be undesirable. This unwanted traffic results in consuming more computing power, slowing down normal
functions, and making the network vulnerable to security risks. With IGMP snooping enabled, networks use less
bandwidth and operate faster.
Note: A default virtual switch is created automatically when multicast is enabled. You can enable or disable IGMP
snooping only for the UVMs attached to the default virtual switch. You cannot enable or disable IGMP snooping at
the subnet level. All UVMs associated with the default virtual switch will have IGMP snooping enabled or disabled.
Multicast traffic is supported only for UVM subnets and not for CVM (management cluster) subnets. For instructions,
see Enabling or Disabling IGMP Snooping.
When a UVM with multicast traffic enabled is migrated to another NC2 node in the same cluster, multicast traffic can
be forwarded to that UVM even after migration.
The following figure shows a typical topology where both the multicast sender and receiver are in the same VPC.
Various scenarios with different multicast senders and receivers are described below.
Figure 83: Multicast traffic with the multicast sender and receiver in the same VPC
In this example, the AWS transit gateway is configured on AWS Subnet X. The UVMs in blue are in Subnet X, and the UVMs in green are in Subnet Y. The EC2 instance can be any AWS-native (non-bare-metal) compute instance outside of NC2. All the components shown in this example, other than the EC2-native instance, belong to a single cluster.
A multicast sender is a host that sends multicast traffic; it can be any EC2-native instance or any UVM on an NC2 host with IGMP snooping enabled. A multicast receiver is a host that receives multicast traffic; it can be any EC2-native instance or UVM. UVMs that are not configured as receivers still receive multicast traffic if snooping is disabled and they share a subnet with a UVM that is configured as a receiver. UVMs that are not configured as receivers and that do not share a subnet with a configured receiver UVM do not receive multicast traffic, regardless of the snooping status.
Table 20: Multicast traffic routing for multicast senders and receivers
• Configured multicast sender: EC2-native; configured multicast receivers: UVM1, UVM2, UVM4; IGMP snooping status: Enabled.
Multicast traffic status: Traffic from the sender (EC2-native) is received by the configured receivers UVM1, UVM2, and UVM4.
Note: When IGMP snooping is enabled, traffic from the multicast sender is received only by the multicast receivers.
• Configured multicast sender: EC2-native; configured multicast receivers: UVM1, UVM2, UVM4; IGMP snooping status: Disabled.
Multicast traffic status: Traffic from EC2-native is received by:
• The configured receiver EC2-native instance on Subnet Y.
• UVM9, because it shares the subnet with the sender UVM8 on NC2-Host3.
The following figure shows an example topology where both the multicast sender and receiver are in different VPCs.
The transit gateway is configured on Subnet X in VPC 1. The transit gateway allows connecting different VPCs (for
example, Subnet X in VPC 1 to Subnet Y in VPC 2). The following table shows how multicast traffic will be routed
for certain senders and receivers based on the IGMP snooping status.
Table 21: Multicast traffic routing for multicast senders and receivers
• Configured multicast sender: EC2-native2 / UVM3; configured multicast receivers: UVM1, EC2-native1; AOS IGMP snooping status: Enabled.
Multicast traffic status: Traffic from the sender is received by the configured receivers UVM1 and EC2-native1.
Note: When IGMP snooping is enabled, traffic from the multicast sender is received only by the multicast receivers.
1. Run the following aCLI command on the CVM to enable IGMP snooping:
net.update_virtual_switch virtual-switch-name enable_igmp_snooping=true
enable_igmp_querier=[true | false] igmp_query_vlan_list=VLAN_IDs
igmp_snooping_timeout=timeout
The default timeout is 300 seconds. The AWS Transit Gateway acts as a multicast querier, and you can optionally add additional multicast queriers. Set the enable_igmp_querier variable to true or false to enable or disable the AOS IGMP querier.
If you want to send IGMP queries to only specific subnets, specify the list of VLANs for igmp_query_vlan_list. You can get the subnet-to-VLAN mapping by using the net.list aCLI command.
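For example, a possible invocation (the virtual switch name vs0, VLAN ID 100, and timeout below are placeholder values for illustration; substitute the values for your environment):
nutanix@cvm$ acli net.update_virtual_switch vs0 enable_igmp_snooping=true enable_igmp_querier=true igmp_query_vlan_list=100 igmp_snooping_timeout=300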
Note: While creating an AWS transit gateway, ensure that you select the Multicast support option. You can enable
the transit gateway for multicast traffic only when you create the transit gateway; you cannot modify an existing
transit gateway to enable multicast traffic.
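If you create the transit gateway with the AWS CLI instead of the console, multicast support can be enabled at creation time. The following is a sketch; the description text is a placeholder:
aws ec2 create-transit-gateway --description "NC2 multicast" --options MulticastSupport=enable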
5. Create an association between subnets in the transit gateway VPC attachment and the multicast domain.
For more information, see Associating VPC attachments and subnets with a multicast domain.
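As a sketch, the association can also be made with the AWS CLI; the multicast domain, attachment, and subnet IDs below are placeholders:
aws ec2 associate-transit-gateway-multicast-domain --transit-gateway-multicast-domain-id tgw-mcast-domain-0123456789abcdef0 --transit-gateway-attachment-id tgw-attach-0123456789abcdef0 --subnet-ids subnet-0123456789abcdef0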
6. Change the default IGMP version for all IGMP group members by running the following command on each UVM
that is intended to be a multicast receiver on the cluster:
sudo sysctl net.ipv4.conf.eth0.force_igmp_version=2
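This sysctl setting does not persist across reboots. On guest distributions that support /etc/sysctl.d, you can optionally persist it; for example:
echo "net.ipv4.conf.eth0.force_igmp_version=2" | sudo tee /etc/sysctl.d/99-igmpv2.conf
sudo sysctl --system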
• Configure an inbound security group rule that allows traffic from the sender by specifying the sender's IP address.
• Configure an outbound security group rule that allows traffic to the multicast group IP address.
Also, allow IGMP queries from the Transit Gateway: add the source IP address as 0.0.0.0/32, and set the protocol to IGMP. For more information, see Multicast routing - Amazon VPC.
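For example, the IGMP query rule can be added with the AWS CLI; the security group ID below is a placeholder, and protocol number 2 is IGMP:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --ip-permissions 'IpProtocol=2,IpRanges=[{CidrIp=0.0.0.0/32,Description="IGMP queries from the Transit Gateway"}]'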
Procedure
2. Select the ellipsis button of a corresponding cluster and click Notification Center.
View AOS-specific alerts from the Prism web console.
4. To acknowledge a notification, in the row of a notification, click the corresponding ellipsis, and select
Acknowledge.
Procedure
Note: Ensure that you select the correct workspace from the Workspace dropdown list on the My Nutanix
dashboard. For more information on workspaces, see Workspace Management.
2. In the Clusters page, click the name of the cluster whose licensing details you want to display.
4. To check whether the cluster is running with Windows License Included EC2 instances, go to the cluster's Summary page, find the Microsoft Windows Licensing section, and verify that Run Microsoft Windows Server on this Cluster is set to True.
You pay the cost associated with Microsoft Windows Server licensing directly to AWS.
• Clusters_agents_upgrader
• Cluster_agent
• Host_agent
• Hostsetup
In the event of a failure that impacts multiple clusters, first recover a cluster that will be used to recover Prism Central (if the failure also impacted Prism Central), and then recover the remaining failed clusters and their associated VMs and Volume Groups from the backups in the S3 buckets. If the failure is not AZ-wide and the Prism Central of the impacted cluster is hosted on another cluster that is not impacted, then you can restore the impacted cluster from that existing Prism Central.
Note: With Cluster Protect, all the VMs in a cluster are auto-protected using a single category value and hence are
recovered by a single Recovery Plan. A single Recovery Plan can recover up to 300 entities. Nutanix does not support
multiple recovery plans in parallel, irrespective of the number of entities in the recovery plan.
Note: Currently, up to five NC2 clusters registered with one Prism Central in the same AWS AZ can be protected by
Cluster Protect.
You need to follow various protection and recovery procedures individually for each cluster that needs to be protected
and recovered. Prism Central can be recovered on any AWS cluster that it was previously registered with. All UVMs
and volume groups data are protected automatically to an Amazon S3 bucket with a 1-hour Recovery Point Objective
(RPO). Only the two most recent snapshots per protected entity are retained in the S3 bucket.
When the cluster recovery process is initiated, the impacted clusters are marked as failed, and new recovery clusters with the same configurations are created through the NC2 console. If you had previously opted to use the NC2 console to create VPCs, subnets, and associated security groups, NC2 automatically creates those resources again during the recovery process. Otherwise, you must first manually recreate those resources in your AWS console.
Cluster Protect can protect the following services and recover the associated metadata:
• Leap
• Flow Network Security
• Prism Pro (AIOps)
• VM management
• Cluster management
• Identity and Access Management (IAMv1)
• Categories
• Networking
The following services continue to run, but they are not protected, so the data associated with them is not recovered.
• Nutanix Files
• Self-Service
• LCM
• Nutanix Kubernetes Engine
• Objects
• Catalog
• Images
• VM templates
• Reporting Template
• AOS version must be 6.7 or higher and Prism Central version must be 2023.3 or higher.
• License tier must be AOS Ultimate or NCI Ultimate.
Note: You can use the same subnet or different subnets for Prism Central and MST.
• Clusters to be protected by Cluster Protect must be registered with the same Prism Central instance.
Note: Prism Central that manages protected clusters can also be protected by Prism Central Disaster Recovery.
• Two new AWS S3 buckets must be manually created with the bucket names prefixed with nutanix-clusters.
• Nutanix Guest Tools (NGT) must be installed on all UVMs.
• You must re-run the CloudFormation script if you have already added your AWS account in the NC2 console,
so that the IAM role that has the required permissions to access only the S3 buckets with the nutanix-clusters
prefix comes into effect.
Note: If you already have run the CloudFormation template, you must run it again to use Cluster Protect on newly
deployed NC2 clusters.
Note: Ports 30900 and 30990 are opened by default while creating a new NC2 cluster and are required for
communication between AOS and MST to back up the VM and Volume Groups data.
• The Cluster Protect feature and Protection Policies cannot be used at the same time in the same cluster to protect
the data. If a user-created protection or DR policy already protects a VM or Volume Group, it cannot also be
protected with the Cluster Protect feature. If you need to use DR configurations for a cluster, you must use those
protection policies instead of Cluster Protect to protect your data. A new DR policy creation fails if the cluster is
already protected using the Cluster Protect feature.
• You cannot hibernate or terminate the clusters that are protected by the Cluster Protect feature. You must disable
Cluster Protect before triggering hibernation or termination.
• All clusters being protected must be in the same Availability Zone. Prism Central must be deployed within the
same Availability Zone as the clusters it is protecting.
• The Cluster Protect feature is available only for new cluster deployments. Any clusters created before AOS 6.7
cannot be protected using this feature.
• A recovered VDI cluster might consume more storage space than the initial storage space consumed by the
protected VDI cluster. This issue might arise because the logic that efficiently creates VDI clones is inactive
during cluster recovery. This issue might also occur if there are multiple clones on the source that are created from
the same image. As a workaround, you can add additional nodes to your cluster if your cluster runs out of space
during the recovery process.
For more information, see https://portal.nutanix.com/kb/14558.
• With the Cluster Protect feature, up to 300 entities (VMs or Volume Groups) per Prism Element and 500 entities
per Prism Central can be protected. Based on the tests Nutanix has performed, Multicloud Snapshot Technology
(MST) can manage a maximum of 15 TB of data across all managed clusters.
The recovery process will be blocked if the number of entities exceeds the allowed limit. When there are more
than 300 entities, you can contact Nutanix Support to continue recovery. For more information, see https://portal.nutanix.com/kb/14961.
Procedure
a. Create clusters in a new VPC or an existing VPC using the NC2 console.
Note: While deploying a cluster, ensure that you select the option to protect the cluster.
Note: You can protect your NC2 clusters even without protecting the Prism Central instance that is managing
these NC2 clusters; however, Nutanix recommends protecting your Prism Central instance as well.
For more information, see Protecting UVM and Volume Groups Data.
Note: After you complete all of these steps, wait for an hour and then check that at least one backup of Prism
Central is completed. One Prism Central backup must be completed after backing up the UVM data so that
protection policies, recovery points, and so on created during UVM backups are included in the Prism Central
backup. To ensure the same, run the following command and validate that the Prism Central replication to the S3
bucket has happened successfully:
nutanix@pcvm$ pcdr-cli list-protection-targets
The command returns the details in the following format:
UUID                                  NAME                                                                  TIME-ELAPSED-SINCE-LAST-SYNC  BACKUP-PAUSED  BACKUP-PAUSED-REASON  TYPE
8xxxxxf5-3xxx-3xxx-bxxc-dxxxxxxxxxx6  https://nutanix-clusters-xxxx-pcdr-3node.s3.us-west-2.amazonaws.com  30m59s                        false                               kS3
The CLI shows sync in progress until the Prism Central data is synced to S3 for the first time. After that, the CLI shows a non-zero time elapsed since the last sync. This confirms that the Prism Central backup has been completed.
Creating S3 Buckets
You must set up two new Amazon S3 buckets with the default settings: one to back up the UVM and volume group data, and another to back up the Prism Central data. These S3 buckets must be empty and used exclusively for UVM, volume group, and Prism Central backups.
For instructions on how to create an S3 bucket, see the AWS documentation. While creating the S3 buckets, follow
the NC2-specific recommendations:
Note: NC2 creates an IAM role with the required permissions to access S3 buckets with the nutanix-clusters
prefix. This IAM role is added to the CloudFormation template. You must run the CloudFormation template
while adding your AWS cloud account. If you already have run the CloudFormation template, you must run
it again to be able to use Cluster Protect on newly deployed NC2 clusters. For more information, see https://portal.nutanix.com/kb/15256.
If the S3 buckets do not have the nutanix-clusters prefix, the commands to protect Prism Central and
clusters fail.
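For example, a bucket with the required prefix can be created with the AWS CLI; the bucket name and region below are placeholders:
aws s3api create-bucket --bucket nutanix-clusters-example-uvm-backup --region us-west-2 --create-bucket-configuration LocationConstraint=us-west-2
Repeat the command with a second placeholder bucket name (for example, nutanix-clusters-example-pcdr) for the Prism Central backup.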
Note: While the cluster protection status can be checked from the cluster summary page on the NC2 console, the Prism Central protection status can only be checked by running the nutanix@pcvm$ pcdr-cli list-protection-targets command.
• To deploy a new Prism Central: Perform the instructions described in Installing Prism Central (1-Click
Internet) to install Prism Central.
When deploying Prism Central, follow these recommendations:
• The Prism Central subnet must be a private subnet, and must only be used for Prism Central. The Prism
Central subnet must not be used for UVMs.
• When creating a DHCP pool in Prism Element, ensure that at least 3 IP addresses are kept outside the
DHCP pool for MST.
If you choose to use IPs from the DHCP pool, you can run the following aCLI command to reserve the IPs
in a network from the DHCP pool:
acli net.add_to_ip_blacklist <network_name> ip_list=ip_address1,ip_address2
• While deploying Prism Central, do not change the Microservices Platform (MSP) settings because these
are required to enable MST. You must choose Private network (defaults) in the MSP configuration when
prompted.
Note: You must not use managed networks for CMSP clusters with Cluster Protect enabled. CMSP cluster
is deployed in the VXLAN/kPrivateNetwork mode only.
• Modify the User management security group of the cluster hosting Prism Central to allow traffic from the
Internal Management subnet of the cluster hosting Prism Central to the Prism Central subnet. A rule to
allow traffic on all protocols gets added and the Management Subnet CIDR is used as the source. For more
information, see Port and Endpoint Requirements.
Note: Ports 30900 and 30990 are opened by default while creating a new NC2 cluster and are required for
communication between AOS and Multicloud Snapshot Technology (MST) to back up the VM and Volume
Groups data.
• To register a cluster with Prism Central: After you deploy Prism Central on one of the NC2 clusters in the
VPC, you must register your remaining NC2 clusters in that VPC to Prism Central that you deployed.
To register a cluster with Prism Central, follow the steps described in Registering a Cluster with Prism
Central.
Note: Any NC2 clusters that are not configured with the Prism Central that is hosting the Multicloud Snapshot
Technology will not be protected by Prism Central.
2. Configure the Prism Central protection and UVMs data protection. For more information, see Protecting Prism
Central Configuration and Protecting UVM and Volume Groups Data.
Note: In addition to protecting Prism Central to the S3 bucket, if your Prism Central instance is registered with
multiple NC2 clusters, then you must also protect Prism Central to one or more of the NC2 clusters it is registered with.
In this case, you must prioritize recovery of Prism Central configuration from another NC2 cluster where Prism Central
configuration was backed up if that NC2 cluster has not also been lost to a failure event. For more information, see
Protecting Prism Central.
The Prism Central configuration gets backed up to the S3 bucket once every hour and is available in the pcdr/
folder in the S3 bucket.
UUID                                  NAME                                                                  TIME-ELAPSED-SINCE-LAST-SYNC  BACKUP-PAUSED  BACKUP-PAUSED-REASON  TYPE
8xxxxxf5-3xxx-3xxx-bxxc-dxxxxxxxxxx6  https://nutanix-clusters-xxxx-pcdr-3node.s3.us-west-2.amazonaws.com  30m59s                        false                               kS3
The CLI shows sync in progress until the Prism Central data is synced to S3 for the first time. After that, the
CLI shows a non-zero time elapsed since the last sync. This confirms that the Prism Central backup has been
completed.
Wait until the sync is completed before performing the next steps to deploy MST.
Note: When creating a DHCP pool in Prism Element, ensure that at least 3 IP addresses are reserved to be used with the MST and another 3 for the Prism Central VM to be deployed (these are added as Virtual IPs during Prism Central deployment).
To define the IP addresses for the MST, update the subnet > Settings > Network Configuration > IP Address Pool.
Procedure
1. Sign in to the Prism Central VM using the credentials provided while installing Prism Central.
What to do next
Back up all UVM and Volume Groups data from NC2 clusters. For more information, see Protecting UVM
and Volume Groups Data.
Note: You must run this command separately for each NC2 cluster you want to protect by specifying the UUID for
each NC2 cluster. This command also creates a recovery point for the protected entities.
Procedure
1. Sign in to the Prism Central VM using the credentials provided while installing Prism Central.
Note:
If the clustermgmt-cli command fails, it might be because the clustermgmt-nc2 service was not installed properly. Run the following command to verify whether the clustermgmt-nc2 service is installed:
nutanix@pcvm$ allssh "docker ps | grep nc2"
An empty response in the output of this command indicates that the clustermgmt-nc2 service was not installed properly. To resolve this issue, restart the pc_platform_bootstrap service so that it installs the clustermgmt-nc2 service. To do this, run the following commands on the Prism Central VM:
nutanix@pcvm$ allssh "genesis stop pc_platform_bootstrap"
nutanix@pcvm$ allssh "cluster start"
Wait for 5-10 minutes, and then rerun the following command to verify that the clustermgmt-nc2 service is installed:
nutanix@pcvm$ allssh "docker ps | grep nc2"
After you verify that the clustermgmt-nc2 service is successfully installed, rerun the clustermgmt-cli command:
nutanix@pcvm$ clustermgmt-cli deploy-cloudSnapEngine -b S3_bucket_name -r AWS_region -i IP1,IP2,IP3 -s Private_Subnet
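For example (the bucket name, region, IP addresses, and subnet name below are placeholder values):
nutanix@pcvm$ clustermgmt-cli deploy-cloudSnapEngine -b nutanix-clusters-example-backup -r us-west-2 -i 10.0.1.11,10.0.1.12,10.0.1.13 -s PC-Subnet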
The Protection Summary page includes an overview of the protection status of all clusters. It also provides
details about the VMs that are lagging behind their RPO. You can see the cluster being protected and the target
being the AWS S3 bucket.
The Recovery Points of the VM show when the VM was last backed up to S3. Only the two most recent snapshots
per protected entity are retained in the S3 bucket.
1. Run the following command to check the Prism Central protection status by listing protection targets:
nutanix@pcvm$ pcdr-cli list-protection-targets
As Prism Central can be protected to S3 and its registered clusters, you need to know the protection target, S3
bucket, or one of the NC2 clusters where Prism Central configuration is backed up.
This command lists information about the Prism Central protection targets with their UUIDs. These UUIDs are
different from cluster UUIDs and are required when running the unprotect-cluster command.
2. Run the following command on the Prism Central VM to disable Prism Central protection:
nutanix@pcvm$ pcdr-cli unprotect -u protection_target_uuid
Use the protection target UUID that you derived using the list-protection-targets command in Step 1.
A warning is issued if Cluster Protect is enabled for any cluster managed by this Prism Central and asks for your
confirmation to proceed with unprotecting Prism Central. You can unprotect Prism Central even if Cluster Protect
is enabled for any cluster. Nutanix recommends keeping Prism Central protected for seamless recovery of NC2
clusters.
Note: If the failure is not AZ-wide and Prism Central that is managing one of the failed clusters is hosted on
another cluster, and that cluster is not impacted, then you can restore the failed cluster from that running Prism
Central.
3. Run the following command to disable cluster protection for any NC2 cluster:
nutanix@pcvm$ clustermgmt-cli unprotect-cluster -u cluster_uuid
Replace cluster_uuid with the UUID of the NC2 cluster for which you want to disable Cluster Protect. You can
find the UUID listed as Cluster ID under General in the cluster Summary page in the NC2 console.
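For example (the UUID below is a masked placeholder):
nutanix@pcvm$ clustermgmt-cli unprotect-cluster -u 0xxxxxx6-cxxc-dxxx-8xxf-dxxxxxxxxx99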
Note: Before you initiate the recovery of an NC2 cluster, ensure that you have protected Prism Central, deployed
Multicloud Snapshot Technology, and protected UVMs and volume groups. Also, after completing these cluster
protection steps, wait for one hour and then check that at least one backup of Prism Central is completed. One Prism
Central backup must be completed after backing up the UVM data so that protection policies, recovery points, and
so on created during UVM backups are included in the Prism Central backup. To ensure the same, run the following
command and validate that the Prism Central replication to the S3 bucket has happened successfully:
nutanix@pcvm$ pcdr-cli list-protection-targets
The command returns the details in the following format:
UUID                                  NAME                                                                  TIME-ELAPSED-SINCE-LAST-SYNC  BACKUP-PAUSED  BACKUP-PAUSED-REASON  TYPE
8xxxxxf5-3xxx-3xxx-bxxc-dxxxxxxxxxx6  https://nutanix-clusters-xxxx-pcdr-3node.s3.us-west-2.amazonaws.com  30m59s                        false                               kS3
Note: The NC2 console automatically detects if an EC2 instance is deleted and then flags the cluster status as Failed.
However, the cluster might fail for any reason that NC2 might not detect. Therefore, it is recommended to perform
these steps to set the cluster to the Failed state whenever a failed cluster needs to be recovered.
Procedure
2. On the Clusters page, click the name of the cluster you want to set to the Failed state.
7. Ensure that the cluster status is changed to Failed for the cluster on the Clusters page.
What to do next
After you set the cluster to the Failed state, redeploy the cluster. See Recreating a Cluster for more
information.
You must determine on your own when the failure event, such as an AWS AZ failure, impacting your cluster is over so that you can start the cluster recovery process. Nutanix does not indicate when an AWS AZ has recovered enough for your recovery cluster to be deployed.
Recreating a Cluster
When a protected cluster fails, and you set the cluster state to Failed, you need to redeploy the cluster.
Follow these steps to redeploy the cluster:
Procedure
2. On the Clusters page, click the name of the failed cluster that you want to redeploy.
• Under General:
Note: The recovery cluster name must be different from the name of the failed cluster. The NC2 console enforces this during recovery cluster creation.
• Cloud Account, Region, and Availability Zone: These configurations from the failed cluster that you
are recreating are displayed. Your recovery cluster will use the same configuration.
• Under Network Configuration:
• When manually created VPC and subnets were used to deploy the failed cluster, the previously used
resources are displayed. You must recreate the same VPC and subnets that you had previously created in
your AWS console.
• When VPC and subnets created by the NC2 console were used to deploy the failed cluster, the NC2
console will automatically recreate the same VPCs and subnets during the cluster recovery process.
6. Review the cluster summary on the Summary page and then click Recreate Cluster.
What to do next
After recreating the cluster, you must recover Prism Central (if it was running on a cluster that suffered a failure event) and the user VM and volume group data. See Recovering Prism Central and User Data.
Note: The Prism Central subnet must be created first before following these instructions to recover Prism Central.
Also, the Prism Central image must be present on the cluster.
Procedure
2. On the redeployed cluster, run the following CLI command on the CVM to recover Prism Central from the S3
bucket where Prism Central data was backed up:
nutanix@cvm$ pcdr-cli recover -b S3_bucket -r AWS_region -n PC-Subnet
Replace the variables with their appropriate values as follows:
• S3_bucket: the S3 bucket where the Prism Central data was backed up.
• AWS_region: the AWS region where the S3 bucket is created.
• PC-Subnet: the AWS private subnet configured for Prism Central.
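For example (the bucket name, region, and subnet name below are placeholder values):
nutanix@cvm$ pcdr-cli recover -b nutanix-clusters-example-pcdr -r us-west-2 -n PC-Subnet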
3. Track the Prism Central recovery status in the Tasks section on the recreated Prism Element console.
Note: The Prism Central recovery might take approximately four hours. Also, the recovered Prism Central and
original Prism Central are of the same version.
What to do next
After you recover Prism Central, register any newly created NC2 clusters with the recovered Prism Central. If the
clusters that were registered with Prism Central prior to the recovery of Prism Central did not suffer any failure, they
will be auto-registered with the recovered Prism Central.
Note: After the cluster recovery is complete, the failed Prism Element remains registered with recovered Prism
Central. To remove this Prism Element, unregister the Prism Element from Prism Central. For detailed instructions, see
the KB article 000004944.
Note: The configuration data for the recovery Prism Central must be recovered from the Prism Central S3 bucket
before recovering the UVM data on the recovery clusters. For more information, see Recovering Prism Central.
Procedure
1. Sign in to the Prism Central VM using the credentials provided while installing Prism Central.
• S3_bucket: the S3 bucket where you want to protect the user VMs data.
• AWS_region: the AWS region where the S3 bucket is created.
• IP1,IP2,IP3: the static IPs reserved for MST.
Note: These IPs can be different than the IPs used earlier while deploying MST prior to cluster failure.
• PC-Subnet: the AWS private subnet configured for the recovery Prism Central.
• NC2 clusters are recreated. For more information, see Recreating a Cluster.
• Prism Central is redeployed.
Note: The configuration data for the recovery Prism Central must be recovered from the Prism Central S3 bucket
before recovering the UVM data on the recovery clusters. For more information, see Recovering Prism
Central and MST.
• Multicloud Snapshot Technology is redeployed. For more information, see Recovering Prism Central and
MST.
• Disaster Recovery must be enabled.
Note: The UVM subnet names on the failed and recovered clusters must be the same for the correct mapping of
subnets in the recovery plan. If the names do not match correctly, the cluster recovery might proceed, but the VMs are
recovered without the UVM subnet attached. You can manually attach the subnet post-recovery. If there are multiple
UVM subnets, then all UVM subnets must be recreated with the same names for the correct mapping of subnets
between failed and recovered clusters.
Procedure
1. Sign in to the Prism Central VM using the credentials provided while installing Prism Central.
a. Run the following command to list all the subnets associated with the protected Prism Elements:
nutanix@pcvm$ clustermgmt-cli list-recovery-info -u UUID_OldPE
Replace UUID_OldPE with the UUID of the old NC2 cluster.
A list of subnets is displayed.
b. Recreate these subnets on the recovery Prism Elements in the same way they were created in the first place.
For more information, see Creating a UVM Network.
4. Run the following command to create a Recovery Plan to restore UVM data from the S3 buckets.
nutanix@pcvm$ clustermgmt-cli create-recovery-plan -o UUID_OldPE -n UUID_NewPE
Replace the variables with their appropriate values as follows:
• UUID_OldPE: the UUID of the old, failed NC2 cluster.
• UUID_NewPE: the UUID of the new recovery NC2 cluster.
Note: You must perform this step for each NC2 cluster you want to recover.
a. Sign in to Prism Central using the credentials provided while installing Prism Central.
b. Go to Data Protection > Recovery Plans.
You can identify the appropriate recovery plan to use by looking at the recovery plan name. It is in the format:
s3-recovery-plan-UUID_OldPE
Once the failover is complete, your UVM and Volume Groups data is recovered on the recovery Prism
Element.
Once the recovery plan is finished, your VMs are recovered.
7. Run the following command on all NC2 clusters that are recovered after the cluster failure to remove the category
values and protection policies associated with the old clusters that no longer exist.
nutanix@pcvm$ clustermgmt-cli finalize-recovery -u UUID_OldPE
Replace UUID_OldPE with the UUID of the old NC2 cluster.
What to do next
Manually turn on all UVMs.
• Purpose: Unprotect Prism Central.
Command (run on the Prism Central VM), for example:
nutanix@pcvm$ pcdr-cli unprotect -u 8xxxxxx5-3xx4-3xx1-bxxc-dbxxxxxxx0b6
• Purpose: Create a recovery plan, which can be executed from the Prism Central UI to recover a cluster.
Command (run on the Prism Central VM):
nutanix@pcvm$ clustermgmt-cli create-recovery-plan [flags]
Flags:
• -h, --help: Help for the create-recovery-plan command.
• -n, --new_cluster_uuid string: UUID of the new recovery NC2 cluster.
• -o, --old_cluster_uuid string: UUID of the old, failed NC2 cluster.
• --output string: Supported output formats: ['default', 'json'] (default "default").
For example:
nutanix@pcvm$ clustermgmt-cli create-recovery-plan -o 0xxxxxx6-cxxc-dxxx-8xxf-dxxxxxxxxx99 -n 0xxxxxxe-dxxd-fxxx-fxxe-cxxxxxxxxxe5
• Purpose: Deploy MST, which can be used to protect NC2 clusters.
Command (run on the Prism Central VM):
nutanix@pcvm$ clustermgmt-cli deploy-cloudSnapEngine [flags]
Flags:
• -b, --bucket string: Name of the S3 bucket that will be used to store the backup of NC2 clusters.
• -h, --help: Help for the deploy-cloudSnapEngine command.
• --recover: Deploys MST using the old configuration data, if available on Prism Central.
• -r, --region string: Name of the AWS region where the provided S3 bucket exists.
• -i, --static_ips strings: Comma-separated list of 3 static IPs that are part of the same subnet specified by the subnet_name flag.
• -s, --subnet_name string: Name of the subnet which can be used for MST VMs.
For example:
nutanix@pcvm$ clustermgmt-cli deploy-cloudSnapEngine -b nutanix-clusters-xxxxx-xxxx-xxxxx -r us-west-2 -i 10.0.xxx.11,10.0.xxx.12,10.0.xxx.13 -s PC-Subnet
• Purpose: Delete MST.
Command (run on the Prism Central VM), for example:
nutanix@pcvm$ clustermgmt-cli delete-cloudSnapEngine
• Purpose: Mark completion of recovery of a cluster.
Command (run on the Prism Central VM):
nutanix@pcvm$ clustermgmt-cli finalize-recovery [flags]
Flags:
• -u, --cluster_uuid string: UUID of the old NC2 cluster.
• -h, --help: Help for the finalize-recovery command.
• --output string: Supported output formats: ['default', 'json'] (default "default").
For example:
nutanix@pcvm$ clustermgmt-cli finalize-recovery -u 0xxxxxxx-cxxc-dxxx-8xxx-dxxxxxxxxxx9
• Purpose: Get a list of recovery information, such as subnets that were available on the original (failed) NC2 cluster.
Command (run on the Prism Central VM):
nutanix@pcvm$ clustermgmt-cli list-recovery-info [flags]
Flags:
• -u, --cluster_uuid string: UUID of the NC2 cluster.
• -h, --help: Help for the list-recovery-info command.
• --verbose: With the verbose flag, a detailed JSON output is returned. If the verbose flag is not specified, only the important fields, such as subnet name, IP Pool ranges, and CIDR, are returned.
For example:
nutanix@pcvm$ clustermgmt-cli list-recovery-info -u 00xxxxxb-0xxd-8xxx-6xx4-3xxxxxxxxx7d
• NC2 Console: Use the NC2 console to create, hibernate, resume, update, and terminate an NC2 cluster running on AWS.
• Prism Element Web Console: Use the Prism Element web console to manage routine Nutanix tasks in a single
console. For example, creating a user VM. Unlike Prism Central, Prism Element is used to manage a specific
Nutanix cluster.
For more information on how to sign into the Prism Element web console, see Logging into a Cluster by Using
the Prism Element Web Console.
For more information on how to manage Nutanix tasks, see Prism Web Console Guide.
• Prism Central Web Console: Use the Prism Central web console to manage multiple Nutanix clusters.
For more information on how to sign into the Prism Central web console, see Logging Into Prism Central.
For more information on how to manage multiple NC2 clusters, see Prism Central Infrastructure Guide.
NC2 Console
The NC2 console displays information about clusters, organization, and customers.
The following sections describe the tasks you can perform and the information you can view from this console.
Main Menu
The following options are displayed in the main menu at the top of the NC2 console:
Navigation Menu
The navigation menu has three tabs: Clusters, Organizations, and Customers. The selected tab is displayed in the top-
left corner. For more information, see Navigation Menu on page 186.
• Circle icon displays ongoing actions performed in the system that take a while to complete.
For example, actions like creating a cluster or changing cluster capacity.
The Circle icon also displays the progress of each ongoing task; a success message appears if the task completes, and an error message appears if the task fails.
• Gear icon displays the source details of each task performed.
For example, account, organization, or customer.
Notifications
• Bell icon displays notifications if some event in the system occurs or if there is a need to act and resolve an
existing issue.
Warning: You can choose to Dismiss notifications from the Notification Center. However, the dismissed
notifications no longer appear to you or any other user.
• Gear icon displays source details and a tick mark to acknowledge notifications.
• Drop-down arrow to the right of each notification displays more information about the notification.
Note: If you want to receive notifications about a cluster that is not created by you, you must be an organization
administrator and subscribe to notifications of respective clusters in the Notification Center. The cluster creator is
subscribed to notifications by default.
User Menu
The Profile user name option from the drop-down list provides the following options:
• General: Edit your First name, Last name, Email, and Change password from this screen. This screen
also displays various roles assigned.
• Preferences: Displays enable or disable slider options based on your preference.
• Storage providers: Displays the storage options with various storage providers.
• Advanced: Displays various assertion fields and values.
• Notification Center: Displays the list of Tasks, Notifications, and Subscriptions.
Navigation Menu
The navigation menu has three tabs at the top (Clusters, Organizations, and Customers) and two tabs at the bottom (Documentation and Support).
Clusters
• Audit Trail: Displays the activity log of all actions performed by the user on a specific cluster.
• Users: Displays the screens for user management like User Invitations, Permissions, Authentication
Providers.
• Notification Center: Displays the complete list of all the tasks and notifications.
• Update Configuration: Displays the screens to update the settings of clusters.
• Update Capacity: Displays a screen to update the resource allocation of clusters.
• Hibernate: Opens a dialog box for Cluster Hibernation or a Resume option appears if the cluster is
already hibernated.
• Terminate: Displays a screen to delete the cluster.
Organizations
• Audit Trail: Displays the activity log of all actions performed on a specific organization.
• Users: Displays the screens for user management like User Invitations, Permissions, Authentication
Providers.
• Sessions: Displays the basic details of the organization and information about terminating the cluster.
• Notification Center: Displays the complete list of all Tasks and Notifications.
• Cloud accounts: Displays whether the Cloud Account is active (A-Green) or inactive (I-Red).
The ellipsis icon against each cloud account displays the following options:
• Add regions: Select this option to update the regions in which the cloud account can deploy clusters.
• Update: Select this option to create a new stack or update an existing stack.
• Deactivate: Select this option to deactivate the cloud account.
• Update: Displays the options to update settings of organizations.
Customers
• Audit Trail: Displays the activity log of all actions performed on a specific cluster.
• Users: Displays the screens for user management like User Invitations, Permissions, Authentication
Providers.
• Notification Center: Displays the complete list of all tasks and notifications.
• Cloud accounts: Displays whether the Cloud Account is active (A-Green) or inactive (I-Red).
• Update: Displays the options to update settings of customers.
Documentation
Directs you to the documentation section of NC2.
Support
Directs you to the Nutanix Support portal.
Audit Trail
Administrators can monitor user activity using the Audit Trail. Audit Trail provides administrators with an audit
log to track and search through account actions. Account activity can be audited at all levels of the NC2 console
hierarchy.
You can access the Audit Trail page for an Organization or Customer entity from the menu button to the right of the
desired entity.
The following figure illustrates the Audit Trail at the organization level.
Under the Audit Trail section header, you can search the audit trail by first name, last name, and email address. You
can also click the column titles to sort the Audit Trail by ascending or descending order.
If you want to search for audit events within a certain period, click the date range in the upper right corner of the
section. Set your desired period by clicking on the starting and ending dates in the calendar view.
You can filter your results by a specific account action by using the filter icon in the top right corner.
You can download the details of your Audit Trail in CSV format by clicking the Download CSV link in the upper
right corner. The CSV will provide all Audit Trail details for the period specified to the left of the download link.
Notification Center
Admins can easily stay up to date regarding their NC2 resources with the Notification Center. Real-time notifications
are displayed in a Notification Center widget at the top of the NC2 console. The Notification Center displays two
different types of information: tasks and notifications. The information displayed in the Notification Center can be for
organizations or customer entities.
Note: Customer Administrators can see notifications for all organizations and accounts associated with the tenant by
navigating to the Customer or Organization dashboard from the initial NC2 console view and clicking Notification
Center.
Tasks
Tasks (bullet list icon) show the status of various changes made within the platform. For example, creating an
account, changing capacity settings, and so on trigger a task notification informing the admin that an event has
started, is in progress, or has been completed.
Notifications
Notifications (bell icon) differ from tasks; notifications inform administrators when specific events happen, for example, resource limits or cloud provider communication issues. There are three types of notifications: info, warning, and error.
Dismiss Tasks and Notifications
You can dismiss tasks or notifications from the Notification Center widget by selecting the task or notification icon and clicking the dismiss (x) button inside the event.
Dismissing an event only dismisses the task or notification for your console view; other subscribed admins still see
the event.
Acknowledge Notifications
You can click the check mark icon to acknowledge and dismiss a notification for all users subscribed to that resource.
Acknowledging a notification removes it from the widget, but the notification is still available on the Notification
Center page.
Note: Acknowledging a notification will dismiss it for all administrators subscribed to the same resource.
Procedure
Note: If you want to set email notifications for an organization or customer entity, select the Organizations or
Customers tab.
• Receive email notifications: To enable automatic email notifications, turn on the Receive email
notifications toggle.
• Severity:
6. Click Save.
User Roles
The NC2 console uses a hierarchical approach to organizing administration and access to accounts.
The NC2 console has the following entities:
• Customer: This entity is the highest business entity in the NC2 platform. You create multiple organizations
under a customer and then create clusters within an organization. When you sign up for NC2, a Customer
entity is created for you. You can then create an Organization, add a cloud (Azure or AWS) account to that
organization, and create clusters in that organization. You cannot create a new Customer entity in your NC2
platform.
• Organization: This entity allows you to set up unique environments for different departments within your
company. You can create multiple clusters within an organization. You can separate your clusters based on your
specific requirements. For example, create an organization Finance and then create a cluster in the Finance
organization to run only your finance-related applications.
Users can be added from the Cluster, Organization, and Customer entities. However, the user roles that are available while adding users vary based on whether the users are invited from the Cluster, Organization, or Customer entity.
Administrators can grant permissions based on their own level of access. For example, while a customer administrator
can assign any role to any cluster or organization under that customer entity, an organization administrator can only
grant roles for that organization and the clusters within that organization.
The following user roles are available in NC2.
• Customer Administrator: Highest level of access. Customer administrators can create and manage multiple organizations and clusters. Customer administrators can also modify permissions for any of the user roles.
• Customer Auditor: Customer Auditor users have read-only access to functionality at the customer, organization, and account levels.
• Cluster Administrator: Cluster Administrators can access and manage any clusters assigned to them by the Organization or Customer administrators. Cluster Admins can also open, close, or extend a support tunnel for the Nutanix Support team.
• Cluster Super Admin: Cluster Super Admins can open, close, or extend a support tunnel for the Nutanix Support team.
• Cluster Auditor: Cluster Auditor users have read-only access to the clusters under the organization.
• Cluster User: Cluster Users can access a specific cluster assigned to them by the Cluster, Organization, or Customer Administrator.
See the Local User Management section of the Nutanix Cloud Services Administration Guide for more
information about the following:
Note: The user roles described in the Local User Management section of the Nutanix Cloud Services Administration Guide are not applicable to NC2. For the user roles in NC2, see the user roles described in this section.
See the Nutanix Cloud Services Administration Guide for more information about authentication mechanisms,
such as multi-factor authentication and SAML authentication.
Procedure
3. Click the ellipsis icon against the desired customer entity, and click Users.
The Authentication tab displays the identity authentication providers that are currently enabled for your
account, and the relevant tabs for the enabled authentication providers are displayed. The NC2 account
administrator must have first unlocked the Enforce settings slider.
Perform the following steps to invite users based on the authentication provider.
• Application Id
• Auth provider metadata: URL or XML
• Metadata URL or Metadata XML
• Integration Name
• Custom Label
• Authentication token expiration
• Signed response
• Signed assertion
d. Click Add.
To add SAML 2 Permission:
a. Click the SAML 2 Permission tab. The SAML 2 Permissions dialog appears.
b. Click Add Permission. The Create A SAML2 Permission dialog appears.
• For provider: Select the SAML2 Provider you are designating permissions for.
• Allow Access:
• Always: Once the user is authenticated, they have access to the role you specify – no conditions
required.
• When all conditions are satisfied: The user must meet all conditions specified by the
administrator to be granted access to the role specified.
• When any condition is satisfied: The user can meet any conditions specified by the administrator
to be granted access to the role specified.
• Conditions: Specify your assertion claims and their values which correspond with the roles you wish to
grant.
• Grant roles: Select the desired roles you wish to grant to your users. You can add multiple role sets using
the Add button.
d. Click Save.
e. To update the SAML 2 permissions of the users in your account, click the SAML 2 Permissions tab. The
SAML 2 Permissions page displays the list of all users in your account.
f. Click the ellipsis icon against the user you want to edit the SAML 2 permissions for, and then click Update.
The Update a rule dialog appears.
9. To invite users with Secure Anonymous: You can create many users without email invitation or activation.
Mass user creation can be used to deliver training and certification tests to end users who are guest users (not
Procedure
3. Click the ellipsis icon against the organization entity, and then click Users.
• Full access to this organization and its accounts: Grants NC2 support engineers the same level of
access as a Customer Administrator.
• Full access without ability to start sessions and manage users: NC2 support engineers may not
start sessions to your workload VMs.
• No Access: NC2 support engineers have no access to your customer and organization(s).
6. If you choose to give full access, then you can choose to give full access to specific NC2 specialists. Click Add
Personnel and then enter the email address of the NC2 specialist.
To revoke access, click the trashcan symbol listed to the right of the Nutanix staff member you would like to
remove from the Authorized Nutanix Personnel list. Click Save to apply your changes.
Note: Ensure that you select the correct workspace from the Workspace list on the My Nutanix dashboard. For
more information on workspaces, see Workspace Management.
b. In the My Nutanix dashboard, go to the API Key Management tile and click Launch.
If you have previously created API keys, a list of keys is displayed.
c. Click Create API Keys to create a new key.
The Create API Key dialog appears.
• Name: Enter a unique name for your API key to help you identify the key.
• Scope: Select the NC2 scope category under Cloud from the Scope list.
• Admin: Create or delete a cluster and all permissions that are assigned to the User role.
• User: Manage clusters, hibernate and resume a cluster, update cluster capacity, and all permissions that
are assigned to the Viewer role.
• Viewer: View account, organization, cluster, and tasks on the NC2 console.
e. Click Create.
The Created API dialog is displayed.
Note: You cannot recover the generated API key and key ID after you close this dialog.
For more details on API Key management, see the API Key Management section in the Licensing Guide.
Note: This step uses Python to generate a JWT token. You can use other programming languages, such as JavaScript and Golang.
b. Replace the API key and key ID in the following Python script and then run it to generate a JWT token. You can also specify the expiry time in seconds for which the JWT token remains valid. In the requesterip attribute, enter the requester IP.
from datetime import datetime
from datetime import timedelta
import base64
import hmac
import hashlib
import jwt

# Replace these placeholder values with your API key, key ID, and audience URL.
api_key = "<API key>"
key_id = "<key ID>"
aud_url = "<audience URL>"

def generate_jwt():
    curr_time = datetime.utcnow()
    payload = {
        "aud": aud_url,
        "iat": curr_time,
        # Token validity in seconds; adjust as needed.
        "exp": curr_time + timedelta(seconds=120),
        "iss": key_id,
        "metadata": {
            "reason": "fetch usages",
            "requesterip": "enter the requester IP",
            "date-time": curr_time.strftime("%m/%d/%Y, %H:%M:%S"),
            "user-agent": "datamart"
        }
    }
    # Derive the signing secret from the API key and key ID.
    signature = base64.b64encode(hmac.new(bytes(api_key, 'UTF-8'),
        bytes(key_id, 'UTF-8'), digestmod=hashlib.sha512).digest())
    token = jwt.encode(payload, signature, algorithm='HS512',
        headers={"kid": key_id})
    print("Token (Validate): {}".format(token))

generate_jwt()
c. A JWT token is generated. Copy the JWT token to your system for further use. The JWT token can be used as an Authorization header when validating the API call. The JWT token remains valid for the duration that you specified.
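As a sketch, the generated token can then be passed in the Authorization header of an API request, commonly as a bearer token; the endpoint and resource path below are placeholders for whichever NC2 API you are calling:
curl -H "Authorization: Bearer <JWT token>" https://<NC2-API-endpoint>/<resource-path>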
Costs
Costs for deploying an NC2 infrastructure include the following:
1. AWS EC2 bare-metal instances: AWS sets the cost for EC2 bare-metal instances. Engage with AWS or see their
documentation about how your EC2 bare-metal instances are billed. For more information, see the following
links:
• EC2 Pricing
• AWS Pricing Calculator
2. NC2 on AWS: Nutanix sets the costs for running Nutanix clusters in AWS. Engage with your Nutanix sales
representatives to understand the costs associated with running Nutanix clusters on AWS.
Sizing
You can use the Nutanix Sizer tool to enable you to create the optimal Nutanix solution for your needs. See the Sizer
User Guide for more information.
Capacity Optimizations
The Nutanix enterprise cloud offers capacity optimization features that improve storage utilization and performance.
The two key features are compression and deduplication.
Compression
Nutanix systems currently offer the following two types of compression policies:
Inline
The system compresses data synchronously as it is written to optimize capacity and to maintain high performance
for sequential I/O operations. Inline compression only compresses sequential I/O to avoid degrading performance for
random write I/O.
Post-Process
For random workloads, data is written to the SSD tier uncompressed for high performance. Compression occurs after cold data migrates to lower-performance storage tiers. Post-process compression acts only when data and compute resources are available, so it does not affect normal I/O operations.
Nutanix recommends that you carefully consider the advantages and disadvantages of compression for your specific
applications. For further information on compression, see the Nutanix Data Efficiency tech note.
• Key: nutanix:clusters:cluster-uuid
• Value: UUID of the cluster created in AWS
You must add and activate the nutanix:clusters:cluster-uuid tag as a cost allocation tag in AWS, so that Cost
Governance can successfully display the cost analytics of Nutanix clusters in AWS.
For more information about setting up and using Cost Governance, see the NCM Cost Governance documentation.
Procedure
3. In AWS, add and activate the NC2 tag nutanix:clusters:cluster-uuid as a user-defined tag.
See Activating User-Defined Cost Allocation Tags section in the AWS documentation.
The tag activates after 24 hours.
Note: Add and activate the tag by using the payer account of your organization in AWS.
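If you prefer the AWS CLI, the tag can be activated from the payer account with the Cost Explorer UpdateCostAllocationTagsStatus API; the following is a sketch, assuming the API is available in your account:
aws ce update-cost-allocation-tags-status --cost-allocation-tags-status TagKey=nutanix:clusters:cluster-uuid,Status=Active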
4. Sign in to the Cost Governance console to see the cost analytics of your Nutanix clusters in AWS.
Procedure
3. Select AWS and your AWS account in the cloud and account selection menu.
Note: Nutanix Cloud Clusters (NC2) supports File Analytics versions 2.2.0 and later.
See the File Analytics documentation on the Nutanix Support portal for more information about File Analytics.
In the Prism Element web console, go to the Files page and click File Analytics.
If you are accessing the VM from inside the VPC, you can access the VM by using the File Analytics IP address. If you want to access the File Analytics VM from outside the VPC, you must configure a load balancer that has a public IP address.
Note: NC2 recommends that you enable File Analytics for a desired file server before you add a load balancer to the
File Analytics VM.
Disaster Recovery
NC2 supports Asynchronous and NearSync replication. NearSync replication is supported with AOS 6.7.1.5 and later,
while Asynchronous replication is supported with all supported AOS versions. NearSync replication is supported only
when clusters run AHV; NC2 does not support cross-hypervisor disaster recovery. For more information on Nutanix
Disaster Recovery capabilities, see Nutanix Disaster Recovery Guide.
You can pair the Prism Central of the Nutanix cluster running in AWS with the Prism Central of the Nutanix cluster
running in your on-premises datacenter. You must configure connectivity between your on-prem datacenter and
AWS VPC by using either the AWS VPN or AWS Direct Connect. You must also ensure the ports are open on
the management security group required for replication. The existing best practices listed in the Nutanix Disaster
Recovery Guide apply.
See the AWS documentation at AWS Site-to-Site VPN to connect AWS VPC by using VPN.
See the AWS documentation at Connect Your Data Center to AWS to connect AWS VPC by using Direct
Connect.
If you want to use protection policies and recovery plans to protect applications across multiple Nutanix clusters,
set up Nutanix Disaster Recovery (formerly Leap) from Prism Central. Nutanix Disaster Recovery allows you to
stage your application to be restored in the right order. You can also use protection policies to failback to on-prem if
required.
NC2 on AWS, when using Prism Central 2022.9 or later, also supports disaster recovery from on-prem to AWS over
layer 2 stretched subnets. Layer 2 subnet extension assumes that the reachability between on-prem and AWS is over
a VPN or AWS Direct Connect. NC2 on AWS supports partial failover (with Layer 2 stretch) and complete failover
(with or without Layer 2 stretch) while maintaining IP reachability.
IP addresses of VMs can be maintained while the VMs are migrated between:
Note: For IPs to be maintained, ensure that there are no IP conflicts prior to the creation of a recovery plan.
For more information on disaster recovery without the Layer 2 stretch, see Disaster Recovery Without Layer 2
Stretch.
For more information on disaster recovery over the Layer 2 stretch, see Disaster Recovery Over Layer 2 Stretch.
• Understand how layer 2 virtual network extension works. For details, see AHV Administration Guide.
• Understand how to use Nutanix Disaster Recovery. For details, see Nutanix Disaster Recovery Guide.
Note: The following steps cover both VPN and VTEP gateway. The fields vary based on your selection for VPN or
VTEP gateway.
Procedure
1. Pair the Prism Central at the primary AZ with the Prism Central at the recovery AZ.
The Availability Zone Type must be selected as Physical Location. Ensure that the availability zone is reachable.
The primary AZ and the recovery AZ can be:
7. If you must extend the subnet over VPN, then perform these additional steps:
Note: Ensure that you perform the subnet extension steps from Prism Central using the Networking & Security
> Connectivity > Subnet Extension option.
You must not perform these steps using the Network and Security > Subnets > List > Actions
> Manage Extensions option and the Virtual Private Cloud > Subnet > Manage Extension
option.
• To extend a subnet over VPN, see Layer 2 Virtual Subnet Extension Over VPN.
• To extend a subnet over VTEP, see Layer 2 Virtual Subnet Extension Over VTEP.
Note: Ensure that you have installed Nutanix Guest Tools (NGT) on the user VMs for static IP address mapping of
user VMs between source and target virtual networks and static IP address preservation after failover.
For more information on the typical tasks that you would perform, see Nutanix Disaster Recovery Guide.
Note: For IPs to be maintained, ensure that there are no IP conflicts between the UVMs on the primary site and UVMs
and ENI IPs on the recovery site prior to the creation of a Recovery Plan.
Health Check
Nutanix provides robust mechanisms to monitor the health of your clusters by using Nutanix Cluster Check and
health monitoring through the Prism Element web console.
You can use the NC2 console to check the status of the cluster and view notifications and logs that the NC2 console
provides.
For more information on how to assess and monitor the health of your cluster, see Health Monitoring.
Routine Maintenance
This section provides information about routine maintenance activities such as monitoring certificates, updating software, and managing licenses and system credentials.
Monitoring Certificates
You must monitor your certificates for expiration. Nutanix does not provide a process for monitoring certificate
expiration, but AWS provides an AWS CloudFormation template that can help you set up alarms.
See acm-certificate-expiration-check for more information. Follow the AWS best practices for certificate renewals.
• Licensed Clusters. Displays a table of licensed clusters including the cluster name, cluster UUID, license tier,
and license metric. NC2 clusters with AOS and NCI licensing appear under Licensed Clusters.
• Cloud Clusters. Displays a table of licensed Nutanix Cloud Clusters including the cluster name, cluster UUID,
billing mode, and status. NC2 clusters with AOS licensing appear under Cloud Clusters. NCI-licensed clusters
do not appear under Cloud Clusters.
To purchase and manage the software licenses for your Nutanix clusters, see the License Manager Guide.
System Credentials
See the AWS documentation to manage your AWS accounts and their permissions.
For NC2 credentials, see the NC2 Payment Methods and User Management.
Emergency Maintenance
The NC2 software can automatically perform emergency maintenance if you configure redundancy factor 2 (RF2) or
RF3 on your cluster to protect against rack failures and synchronous or asynchronous replication to protect against
AZ failures. For node failures, NC2 detects a node failure and replaces the failed node with a new node.
Hosts in a cluster are deployed by using a partition placement group with seven partitions. A placement group is
created for each host type and the hosts are balanced within the placement group. The placement group along with
the partition number is translated into a rack ID of the node. This enables AOS Storage to place metadata and data
replicas in different fault domains.
A redundancy factor 2 (RF2) configuration of the cluster protects data against a single-rack failure and an RF3
configuration protects against a two-rack failure. Additionally, to protect against multiple correlated failures within a
Note: NC2 detects a node failure in a few minutes and brings a replaced node online in approximately one hour; this
duration varies depending on the time taken for data replication, the customer’s specific setup, and so on.
Procedure
1. When accessing a document on https://portal.nutanix.com/, navigate to the Feedback dialog displayed at the
bottom of the page.
2. Select one to five stars to rate the page you referred to. Here, a single star means poor, and five stars mean
excellent.
Nutanix Support
You can access the technical support services in a variety of ways to troubleshoot issues with your Nutanix cluster.
See the Nutanix Support Portal Help for more information.
Nutanix offers a support tier called Production Support for NC2.
See Product Support Programs under Cloud Services Support for more information about Production
Support tier and SLAs.
AWS Support
Nutanix recommends that you sign up for an AWS Support Plan subscription for technical support of the AWS
entities such as Amazon EC2 Instances, VPC, and more. See AWS Support Plan Offerings for more information.
• Changes or enhancements
• Known Issues
• Fixes and workarounds
• Software compatibility