
Nutanix Cloud Clusters on AWS Deployment and User Guide
Cloud Clusters (NC2) Hosted
March 7, 2024
Contents

About This Document.......................................................................................6


Reference Information for NC2................................................................................................................... 7

Nutanix Cloud Clusters (NC2) Overview.........................................................8


Use Cases................................................................................................................................................. 10
AWS Infrastructure for NC2...................................................................................................................... 11
NC2 on AWS Deployment Models........................................................................................................... 11
Single Availability Zone Deployment..............................................................................................12
Multiple Availability Zone Deployment........................................................................................... 12
Multicluster Deployment................................................................................................................. 13
AWS Components Installed...................................................................................................................... 14
NC2 Architecture....................................................................................................................................... 17
Preventing Network Partition Errors...............................................................................................18
Resolving Bad Disk Resources......................................................................................................18
Maintaining Availability: Node and Rack Failure............................................................................19
Maintaining Availability: AZ Failure................................................................................................ 20
NC2 Security Approach.............................................................................................................................20

Getting Started With NC2............................................................................... 22


Requirements for NC2 on AWS................................................................................................................22
Supported Regions and Bare-metal Instances......................................................................................... 26
Installing NVIDIA GRID host driver for AHV on G4dn.metal instance.............................. 29
Limitations..................................................................................................................................................30
Non-Applicable On-Prem Configurations.................................................................................................. 31
NC2 Infrastructure Deployment.................................................................................................................34
Creating My Nutanix Account................................................................................................................... 35
Starting a Free Trial for NC2.................................................................................................................... 38

Cluster Deployment........................................................................................ 40
Creating an Organization.......................................................................................................................... 40
Updating an Organization...............................................................................................................41
Adding an AWS Cloud Account................................................................................................................41
Deactivating a Cloud Account........................................................................................................ 44
Reconnecting a Cloud Account......................................................................................................45
Adding a Cloud Account Region....................................................................................................45
Updating AWS Stack Configurations............................................................................................. 46
Creating a Cluster..................................................................................................................................... 48
AWS VPC Endpoints for S3..................................................................................................................... 65
Creating a Gateway Endpoint........................................................................................................ 65
Associating Route Tables With the Gateway Endpoint................................................................. 66

Microsoft Windows on NC2........................................................................... 68

Prism Central Configuration.......................................................................... 70

Deploying and Configuring Prism Central.................................................................................................70
Logging into a Cluster by Using the Prism Element Web Console.......................................................... 71
Logging into a Cluster by Using SSH.......................................................................................................73

NC2 Payment Methods................................................................................... 75


Nutanix Licenses for NC2......................................................................................................................... 76
New Portfolio Licenses...................................................................................................................76
Legacy Portfolio Licenses.............................................................................................................. 79
Managing Licenses.........................................................................................................................81
Subscription Plan for NC2........................................................................................................................ 82
Nutanix Direct................................................................................................................................. 84
AWS Marketplace........................................................................................................................... 89
Changing Payment Method.......................................................................................................................95
Canceling the Subscription Plan............................................................................................................... 97
Billing Management................................................................................................................................. 100
Viewing Billing and Usage Details............................................................................................... 100
Using the Usage Analytics API.................................................................................................... 102

User VM Network Management................................................................... 108


Creating a UVM Network........................................................................................................................ 108
Creating a UVM Network using Prism Element........................................................................... 108
Creating a UVM Network using Prism Central............................................................................ 111
Updating a UVM Network....................................................................................................................... 114
Updating a UVM Network using Prism Element.......................................................................... 114
Updating a UVM Network using Prism Central............................................................................115
Using EC2 Instances on the Same Subnet as UVMs............................................................................ 116
AWS Elastic Network Interfaces (ENIs) and IP Addresses.................................................................... 116
Adding a Virtual Network Interface (vNIC) to a User VM....................................................................... 116
Enabling Outbound Internet Access to UVMs........................................................................................ 117
Enabling Inbound Internet Access to UVMs........................................................................................... 117
Deploying a Load Balancer to Allow Internet Access.............................................................................119
Prism Central UI Access for Site-to-Site VPN Setup..............................................................................121

Network Security using AWS Security Groups..........................................122


Default Security Groups.......................................................................................................................... 123
Custom Security Groups......................................................................................................................... 124
Ports and Endpoints Requirements........................................................................................................ 128

Cluster Management..................................................................................... 132


Updating the Cluster Capacity................................................................................................................ 132
Manually Replacing a Host..................................................................................................................... 135
Creating a Heterogeneous Cluster......................................................................................................... 136
Hibernate and Resume in NC2...............................................................................................................136
Hibernating Your NC2 Cluster......................................................................................................137
Resuming an NC2 Cluster........................................................................................................... 138
Limitations in Hibernate and Resume.......................................................................................... 139
Terminating a Cluster..............................................................................................................................140
Multicast Traffic Management................................................................................................................. 140
Configuring AWS Transit Gateway for Multicast..........................................................................145
AWS Events in NC2................................................................................................................................146
Displaying AWS Events................................................................................................................147
Viewing Licensing Details....................................................................................................................... 148

Support Log Bundle Collection............................................................................................................... 148

Cluster Protect Configuration...................................................................... 150


Prerequisites for Cluster Protect............................................................................................................. 151
Limitations of Cluster Protect.................................................................................................................. 152
Protecting NC2 Clusters..........................................................................................................................153
Creating S3 Buckets.....................................................................................................................154
Protecting Prism Central Configuration........................................................................................ 155
Deploying Multicloud Snapshot Technology................................................................................ 157
Protecting UVM and Volume Groups Data.................................................................................. 159
Disabling Cluster Protect..............................................................................................................162
Recovering NC2 Clusters........................................................................................................................163
Setting Clusters to Failed State................................................................................................... 164
Recreating a Cluster.....................................................................................................................167
Recovering Prism Central and MST............................................................................................ 172
Recovering UVM and Volume Groups Data................................................................................ 174
Reprotecting Clusters and Prism Central.....................................................................................177
CLI Commands Library........................................................................................................................... 178

NC2 Management Consoles........................................................................185


NC2 Console........................................................................................................................................... 185
Main Menu.................................................................................................................................... 185
Navigation Menu...........................................................................................................................186
Audit Trail......................................................................................................................................189
Notification Center........................................................................................................................ 190
Configuring Email Notifications for Alerts.....................................................................................191

NC2 User Management................................................................................. 194


User Roles...............................................................................................................................................194
Adding Users from the NC2 Console..................................................................................................... 195
Managing Support Authorization............................................................................................................. 207

API Key Management for NC2..................................................................... 209

NC2 Planning Guidance.............................................................................. 213


Costs........................................................................................................................................................213
Sizing....................................................................................................................................................... 213
Capacity Optimizations............................................................................................................................213
Compression................................................................................................................................. 213

Cost Analytics............................................................................................... 214


Integrating Cost Governance with NC2.................................................................................................. 214
Displaying Cost Analytics in the Cost Governance Console.................................................................. 214

File Analytics................................................................................................. 216

Disaster Recovery and Backup................................................................... 217


Disaster Recovery................................................................................................................................... 217

Disaster Recovery Without Layer 2 Stretch.................................................................................217
Disaster Recovery Over Layer 2 Stretch..................................................................................... 217
Preserving UVM IP Addresses During Disaster Recovery..................................................................... 220
Integration with Third-Party Backup Solutions........................................................................................ 222

System Maintenance..................................................................................... 223


Health Check........................................................................................................................................... 223
Routine Maintenance...............................................................................................................................223
Monitoring Certificates.................................................................................................................. 223
Nutanix Software Updates............................................................................................................223
Managing Nutanix Licenses......................................................................................................... 224
System Credentials.......................................................................................................................224
Managing Access Keys and AWS Service Limits........................................................................224
Emergency Maintenance.........................................................................................................................224
Automatic Node Failure Detection............................................................................................... 225
Troubleshooting Deployment Issues....................................................................................................... 225
Documentation Support and Feedback.................................................................................................. 225
Nutanix Support.......................................................................................................................................226
AWS Support...........................................................................................................................................226

Release Notes................................................................................................227

Copyright........................................................................................................228

ABOUT THIS DOCUMENT
This user guide describes the deployment processes for NC2 on AWS. It provides instructions for setting up the Nutanix resources required for an NC2 on AWS deployment and for subscribing to NC2 payment plans. It also provides detailed steps for UVM network management, end-to-end steps for creating a Nutanix cluster, and more.
This document is intended for users responsible for the deployment and configuration of NC2 on AWS. Readers
must be familiar with AWS concepts, such as AWS EC2 instances, AWS networking and security, AWS storage, and
VPN/Direct Connect. Readers must also be familiar with other Nutanix products, such as Prism Element, Prism Central, and
NCM Cost Governance (formerly Beam).

Document Organization
The following table shows how this user guide is organized and helps you find the most relevant sections in the guide
for the tasks that you want to perform.

Table 1: NC2 on AWS User Guide Roadmap

• A high-level overview of NC2, essential concepts for NC2 on AWS, architecture, and infrastructure guidance: see Nutanix Cloud Clusters (NC2) Overview on page 8.
• Data encryption and network security details for NC2 clusters: see NC2 Security Approach on page 20.
• Details on the various subscription and payment plans and the billing workflow, and how to cancel and manage your existing subscription plan: see NC2 Payment Methods.
• Getting started with NC2 on AWS, its requirements and limitations, and how to create a My Nutanix account and start a free trial for NC2: see Getting Started With NC2 on page 22.
• How to create an organization, add your cloud account to NC2, create an NC2 cluster, and create an AWS VPC endpoint for S3: see Cluster Deployment.
• How to deploy and configure Prism Central and log into a cluster by using the Prism Element web console or SSH: see Prism Central Configuration.
• How to create and manage UVM networks: see User VM Network Management.
• How to identify security groups and policies for traffic control, modify default UVM security groups, and create custom security groups, plus the port and endpoint requirements for inbound and outbound communication: see Network Security using AWS Security Groups.
• How to create a heterogeneous cluster, update cluster capacity, and hibernate, resume, and terminate a cluster: see Cluster Management on page 132.
• How to configure and use the Cluster Protect feature to protect your NC2 clusters: see Cluster Protect Configuration.
• How to add users to NC2, manage NC2 user roles, and manage authorization to Nutanix Support: see NC2 User Management.
• Planning guidance for size and cost optimization: see NC2 Planning Guidance on page 213.
• Supported regions for various bare-metal instances and their limitations while using NC2 on AWS: see Supported Regions and Bare-metal Instances on page 26.
• How to analyze your cloud consumption using Cost Governance: see Cost Analytics on page 214.
• How to configure Disaster Recovery and back up your data: see Disaster Recovery and Backup.
• Reference information on system and operational features, such as health checks and routine maintenance tasks: see System Maintenance on page 223.

Reference Information for NC2


In addition to the User Guide, Nutanix also publishes a Nutanix Validated Design document as an example of a
typical NC2 customer deployment that has been validated by Nutanix.
The following documentation is available for NC2. While using NC2, you also use several other Nutanix products, such as Prism, Flow Virtual Networking, and Nutanix Disaster Recovery. Nutanix recommends that you read the documentation for these products to understand how to use them.

• NC2 on AWS:

• Nutanix Cloud Clusters on AWS - Solution Tech Note


• Hybrid Cloud Design Guide
• Nutanix Cloud Clusters on AWS GovCloud Supplement
• Nutanix Cloud Clusters on AWS Release Notes
• Compatibility and Interoperability Matrix
• Nutanix Configuration Maximums
• Nutanix University
• Supporting Nutanix products:

• Prism Central Infrastructure Guide


• Prism Central Admin Center Guide
• Prism Central Alerts and Events Reference Guide
• Prism Web Console Guide
• Flow Virtual Networking Guide
• Nutanix Disaster Recovery Guide



NUTANIX CLOUD CLUSTERS (NC2) OVERVIEW
Nutanix Cloud Clusters (NC2) delivers a hybrid multicloud platform designed to run applications in private or
multiple public clouds. NC2 operates as an extension of on-prem datacenters and provides a hybrid cloud architecture
that spans private and public clouds, operated as a single cloud.
NC2 extends the simplicity and ease of use of the Nutanix software stack to public clouds using a unified
management console. Using the same platform on both clouds, NC2 on AWS reduces the operational complexity of
extending, bursting, or migrating your applications and data between clouds. NC2 runs AOS and AHV on the public
cloud instances and packages the same CLI, GUI, and APIs that cloud operators use in their on-prem environments.
NC2 resources, including bare-metal hosts, are deployed in your AWS account so that you can leverage your existing cloud provider relationships, credits, commits, and discounts. Nutanix provisions the full bare-metal host for your use; the hosts are not shared by multiple customers or tenants, and every customer that deploys NC2 provisions bare-metal hosts independently of other customers.

Figure 1: Overview of the Nutanix Hybrid Multicloud Platform

NC2 on AWS places the complete Nutanix hyperconverged infrastructure (HCI) stack directly on a bare-metal
instance in Amazon Elastic Compute Cloud (EC2). This bare-metal instance runs a Controller VM (CVM) and
Nutanix AHV as the hypervisor like any on-premises Nutanix deployment, using the AWS Elastic Network Interface
(ENI) to connect to the network. AHV user VMs do not require any additional configuration to access AWS services
or other EC2 instances.



AHV runs an embedded distributed network controller that integrates user VM networking with AWS networking.
Instead of creating an overlay network, NC2 on AWS integrates IP address management with AWS Virtual Private
Cloud (VPC). AWS allocates all user VM IP addresses from the AWS subnets in the existing VPCs. Native
integration with AWS networking allows you to seamlessly use AWS services on AHV user VMs without a complex
network deployment or performance loss.
AOS can withstand hardware failures and software glitches and ensures that application availability and performance
are managed as per the configured resilience. Combining features such as native rack awareness with AWS partition
placement groups allows Nutanix to operate freely in a dynamic cloud environment.
In addition to the traditional resilience solutions for Prism Central, NC2 on AWS also provides the Cluster Protect feature, which helps protect Prism Central, UVM, and volume group data in case of full cluster failures caused by scenarios such as Availability Zone (AZ) failures or users shutting down all nodes from the AWS console. For details, see Cluster Protect Configuration.
NC2 on AWS gives on-prem workloads a home in the cloud, offering native access to available AWS services without requiring you to reconfigure your software.
You use the NC2 console to deploy a cluster in a VPC in AWS. After you launch a Nutanix cluster in AWS by using
NC2, you can operate the cluster in the same manner as you operate your on-prem Nutanix cluster with no change in
nCLI, the Prism Element and Prism Central web console, and APIs. You use the NC2 console to create, hibernate,
resume, update, and delete your Nutanix cluster.

Figure 2: Overview of NC2 on AWS

Following are the key points about NC2 on AWS:

• Runs on the EC2 bare-metal instances. For more information on the supported EC2 bare-metal instances, see
Supported Regions and Bare-metal Instances.



• Supports three or more EC2 bare-metal instances. See the Limitations section for more information about the
number of nodes supported by NC2.
• Supports only the AHV hypervisor on Nutanix clusters running in AWS.
• Supports both an existing on-prem Prism Central instance and a Prism Central instance deployed on NC2 on AWS.

Use Cases
NC2 on AWS is ideally suited for the following key use cases:

• Disaster Recovery on AWS: Configure a Nutanix Cloud Cluster on AWS as your remote backup and data
replication site to quickly recover your business-critical workloads in case of a disaster recovery (DR) event for
your primary data center. Benefit from AWS’ worldwide geographical presence and elasticity to create an Elastic
DR configuration and save DR costs by expanding your pilot light cluster only when the DR need arises.
• Capacity Bursting for Dev/Test: Increase your developer productivity by provisioning additional capacity for Dev/Test workloads on NC2 on AWS when you are running out of capacity on-prem. Utilize a single management plane to operate and manage your workloads across your data center and NC2 on AWS environments.
• Modernize Applications with AWS: Significantly accelerate your time to migrate applications to AWS with a
simple lift-and-shift operation—no need to refactor your workloads or rewrite your applications. Get your on-
prem workloads to AWS faster and modernize your applications with direct integrations with all AWS services.
For more information, see NC2 Use Cases.
NC2 eliminates the complexities in managing networking, using multiple infrastructure tools, and rearchitecting the
applications.
NC2 offers the following key benefits:

• Cluster management:

• A single management console to manage private and public clouds


• Built-in integration into public cloud networking
• Burst into public clouds to meet a seasonal increase in demand
• Modernize applications and connect natively to cloud services
• Use public clouds for high availability and disaster recovery
• Easy to deploy and manage
• App mobility:

• Lift and shift applications with no retooling and refactoring


• Consistent performance across on-prem and public clouds
• Cost management:

• Flexible subscription options


• Pay based on your actual usage
• Use your existing Nutanix licenses for NC2



AWS Infrastructure for NC2
The NC2 console places the complete Nutanix hyperconverged infrastructure (HCI) stack directly on a bare-metal
instance in Amazon Elastic Compute Cloud (EC2). This bare-metal instance runs a Controller VM (CVM) and
Nutanix AHV as the hypervisor like any on-premises Nutanix deployment, using the AWS Elastic Network Interface
(ENI) to connect to the network. AHV user VMs do not require any additional configuration to access AWS services
or other EC2 instances.
Within the VPC where NC2 is deployed, you need the following subnets to manage inbound and outbound traffic:

• One private management subnet for internal cluster management and communication between the CVMs, AHV hosts, and so on.
• One public subnet with an Internet gateway and NAT gateway to provide external connectivity to the NC2 portal.
• One or more private subnets for UVM traffic, depending on your needs.

Note: All NC2 cluster deployments are single AZ deployments. Therefore, your UVM subnets will be in the same
AZ as the Management subnet. You must not add the Management subnet as a UVM subnet in Prism Element because
UVMs and Management VMs must be on separate subnets.
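If you plan to bring your own VPC, the following AWS CLI commands are a minimal sketch of staging this subnet layout; the CIDR blocks, Availability Zone, and resource IDs are illustrative assumptions, not values NC2 requires.

# Create the VPC that will host the cluster (illustrative CIDR).
aws ec2 create-vpc --cidr-block 10.0.0.0/16

# Private management subnet for CVM/AHV traffic (uses the VPC ID returned above).
aws ec2 create-subnet --vpc-id vpc-0abc1234 --cidr-block 10.0.128.0/24 --availability-zone us-west-2a

# Public subnet to host the Internet and NAT gateways.
aws ec2 create-subnet --vpc-id vpc-0abc1234 --cidr-block 10.0.1.0/24 --availability-zone us-west-2a

# Private UVM subnet; it must be in the same AZ as the management subnet.
aws ec2 create-subnet --vpc-id vpc-0abc1234 --cidr-block 10.0.2.0/24 --availability-zone us-west-2a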

Figure 3: AWS Infrastructure for NC2

When you deploy a Nutanix cluster in AWS by using the NC2 console, you can either choose to deploy the cluster
in a new VPC and private subnet, or choose to deploy the cluster in an existing VPC and private subnet. If you opt
to deploy the cluster in a new VPC, during the cluster creation process, the NC2 console provisions a new VPC and
private subnet for management traffic in AWS. You must manually create one or more separate subnets in AWS for
user VMs.

NC2 on AWS Deployment Models


NC2 on AWS supports several deployment models that help you deploy NC2 in varying customer environments. The following are the most common deployment models:



• Single Availability Zone Deployment
• Multiple Availability Zone Deployment
• Multicluster Deployment
Several AWS components get installed as part of NC2 on AWS deployment. For more information, see AWS
Components Installed.
A Nutanix cluster is deployed in AWS in approximately 30 minutes. If there are any issues with provisioning the Nutanix cluster, see the Notification Center on the NC2 console.
The following table lists the time taken by each stage of cluster deployment.

Table 2: Cluster Deployment Stages

• AHV Download and Installation: approximately 35 to 45 minutes; happens in parallel on all nodes.
• AOS Tar Download: approximately 4 minutes; happens in parallel on all nodes.
• AOS Installation: approximately 15 minutes; happens in parallel on all nodes.
• Cluster creation: approximately 3 minutes per node; happens in sequence across the nodes.
• Restore from AWS S3 (applies to the Hibernate/Resume use case): approximately 1.5 hours per 10 TB of data to hydrate the cluster once the nodes are available.
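For example, at that rate, resuming a hibernated cluster holding 40 TB of data would need roughly (40 / 10) × 1.5 = 6 hours of hydration time after the nodes themselves become available.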

Regardless of your deployment model, there are a few general outbound requirements for deploying a Nutanix cluster
in AWS on top of the existing requirements that on-premises clusters use for support services. For more information
on the endpoints the Nutanix cluster needs to communicate with for a successful deployment, see Outbound
Communication Requirements.

Single Availability Zone Deployment


NC2 on AWS deploys a cluster in a single availability zone by default. Deploying a single cluster in AWS is
beneficial for more ephemeral workloads where you want to take advantage of performance improvements and use
the same automation pipelines you use on-prem. You can use backup products compatible with AHV, such as HYCU and Veeam, to target S3 as the backup destination, and, depending on the failure mode you want to recover from, you can also replicate that S3 bucket to a different Region. For more information, see the Tech Note.

Multiple Availability Zone Deployment


If you do not have an on-prem cluster available for data protection or you want to use the low-latency links between
Availability Zones, you can create a second NC2 cluster in a different Availability Zone.



Figure 4: Multiple Availability Zone Deployment

You can isolate your private subnets for UVMs between clusters and use the private Nutanix management subnets to allow replication traffic between them. All private subnets can share the same routing table. You must edit the inbound access in each Availability Zone's security group as shown in the following tables to allow replication traffic; a CLI sketch follows the tables.

Table 3: Availability Zone 1 NC2 on AWS Security Group Settings

Type Protocol Port Range Source Description


Custom TCP rule TCP 9440 10.88.4.0/24 UI access
Custom TCP rule TCP 2020 10.88.4.0/24 Replication
Custom TCP rule TCP 2009 10.88.4.0/24 Replication

Table 4: Availability Zone 2 NC2 on AWS Security Group Settings

Type Protocol Port Range Source Description


Custom TCP rule TCP 9440 10.88.2.0/24 UI access
Custom TCP rule TCP 2020 10.88.2.0/24 Replication
Custom TCP rule TCP 2009 10.88.2.0/24 Replication
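As a sketch of how the Table 3 rules could be applied with the AWS CLI (the security group ID is a placeholder; for the Availability Zone 2 group in Table 4, swap in the 10.88.2.0/24 source):

# Allow Prism UI access (9440) and replication (2020, 2009) from the peer AZ's management subnet.
for port in 9440 2020 2009; do
  aws ec2 authorize-security-group-ingress \
    --group-id sg-0abc1234 \
    --protocol tcp --port "$port" --cidr 10.88.4.0/24
done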

If Availability Zone 1 goes down, you can activate protected VMs on the cluster in Availability Zone 2. Once
Availability Zone 1 comes back online, you can redeploy a Nutanix cluster in Availability Zone 1 and reestablish data
protection. New clusters require full replication.

Multicluster Deployment
To protect your Nutanix cluster if there is an Availability Zone failure, use your existing on-prem Nutanix cluster as a
disaster recovery target.



Figure 5: Hybrid Deployment

The following table lists the inbound ports you need to open to establish replication between an on-premises cluster and a Nutanix cluster running in AWS. You can add these rules to the infrastructure subnet security group that was automatically created when you deployed NC2 on AWS. The ports must be open in both directions.

Table 5: Inbound Security Group Rules for AWS

Type Protocol Port Range Source Description

SSH TCP 22 On-premises CVM subnet SSH into the AHV node
Custom TCP rule TCP 2222 On-premises CVM subnet SSH access to the CVM
Custom TCP rule TCP 9440 On-premises CVM subnet UI access
Custom TCP rule TCP 2020 On-premises CVM subnet Replication
Custom TCP rule TCP 2009 On-premises CVM subnet Replication

Note: Make sure you set up the cluster virtual IP address for your on-premises and AWS clusters. This IP address is
the destination address for the remote site.

Nutanix has native, built-in replication capabilities to recover from complete cluster failure. Nutanix supports asynchronous replication, with which you can set your Recovery Point Objective (RPO) to one hour.

AWS Components Installed


When deploying NC2 on AWS, several AWS components get installed.
The following table lists the mandatory AWS components that are either installed when the option to create a new VPC is selected during NC2 on AWS deployment, or that you need to install manually when you choose to use an existing VPC.

Note: You can configure Prism to be accessible from the public network and then manually configure the AWS
resources, such as Load Balancer, NAT Gateway, Public IPs, and Internet Gateway for public access.



Table 6: Mandatory AWS Components Installed on Deployment

Compute (Dedicated EC2 Hosts)

• Bare-metal instances (charged by AWS: yes): For the list of supported AWS EC2 bare-metal instances, see Supported Regions and Bare-metal Instances.

Networking and Security

• Elastic Network Interfaces (ENIs) (charged by AWS: no): ENIs are used for UVMs on each host in the cluster. Each ENI can have up to 50 IP addresses, of which 49 are usable because the host uses one. If the UVMs exhaust those IPs, another ENI is added to the host, up to a maximum of 14 ENIs per host (that is, up to 14 × 49 = 686 usable UVM IP addresses per host).
• Load Balancer (charged by AWS: yes): A load balancer is deployed only when deploying in a new VPC from the NC2 portal and only when Prism access from the Internet is set to public. If deploying in an existing VPC, you can leverage an existing load balancer. Charges also apply for additional items.
• NAT Gateway (charged by AWS: yes): A NAT gateway is deployed only when deploying in a new VPC. If deploying in an existing VPC, you can leverage an existing NAT gateway. Charges also apply for data traffic.
• Internet Gateway (charged by AWS: no): An Internet gateway is deployed only when deploying in a new VPC. If deploying in an existing VPC, you can leverage an existing Internet gateway. Charges apply only for data traffic.
• Public IP (charged by AWS: yes): A public IP is deployed if a load balancer is deployed.
• Security Groups (charged by AWS: no): You can create and associate AWS security groups with an EC2 instance to control outbound and inbound traffic for that instance. A default security group gets created for every AWS VPC. You can create additional security groups for each VPC and then associate them with the resources in that VPC.
• Access Control Lists (ACLs) (charged by AWS: no): You can use the default ACL, or you can create a custom ACL with rules similar to the rules for your security groups.
• VPC (charged by AWS: no): A VPC is deployed only when you choose to create a new VPC at the time of cluster creation on the NC2 portal. Charges also apply for data traffic.
• Subnets (charged by AWS: no): When you deploy a Nutanix cluster in AWS by using the NC2 console, you can choose to deploy the cluster in either a new VPC and private subnet or an existing VPC and private subnet. If you opt for a new VPC, the NC2 console provisions a new VPC and private subnet for management traffic in AWS during cluster creation. You must manually create one or more separate subnets in AWS for user VMs.

Storage

• Elastic Block Store (EBS) (charged by AWS: yes): Each node in the NC2 cluster has two EBS volumes attached (AHV EBS and CVM EBS). Both are encrypted gp3 volumes; the AHV EBS volume is 100 GB and the CVM EBS volume is 150 GB. Software, configs, and logs require 250 GB per host. Storage is also needed during cluster creation.
• EBS Snapshots (charged by AWS: yes): Upon hibernating a cluster, snapshots of the EBS volumes on each host are taken.
• S3 (charged by AWS: yes): An S3 bucket is created at the time of cluster creation and remains empty until the Hibernate feature is used. When the Hibernate feature is used, all data from your NC2 cluster is placed in the S3 bucket. Once the cluster is resumed, data is hydrated back onto the hosts but also stays in the S3 bucket as a backup. Charges also apply for data traffic.

Note: The S3 bucket used must not be publicly accessible; a CLI check sketch follows this table. If you intend to protect your clusters using the Cluster Protect feature, you need two more S3 buckets. For more information, see Prerequisites for Cluster Protect.
The following table lists the optional AWS components that can be used with the NC2 on AWS deployment.

Table 7: Optional AWS Components

Network Connectivity

• VPN (charged by AWS: yes) or Direct Connect (charged by AWS: yes): A VPN or Direct Connect connection is needed for connectivity between on-prem and AWS. Charges also apply for data traffic.
• Transit Gateway (charged by AWS: yes): Charges also apply for data traffic.

Network Services

• AWS DNS: Used by clusters for VMs by default. You can configure AHV to use your own DNS.

You can view all the resources allocated to a cluster running on AWS.
To view the cloud resources created by NC2, perform the following:
1. Sign in to NC2 from the My Nutanix dashboard.
2. On the Clusters page, click the name of the cluster.
3. On the left navigation pane, click Cloud Resources.
The Cloud Resources page displays all the resources associated with the cluster.

Figure 6: Viewing Cloud Resources

NC2 Architecture
The bare-metal instance runs the AHV hypervisor, and the hypervisor, as in any on-premises deployment, runs a Controller Virtual Machine (CVM) with direct access to NVMe instance storage hardware.
AOS Storage uses the following three core principles for distributed systems to achieve linear performance at scale:
1. Must have no single points of failure (SPOF).
2. Must not have any bottlenecks at any scale (must be linearly scalable).
3. Must apply concurrency (MapReduce).
Together, a group of Nutanix nodes forms a distributed system (a Nutanix cluster) responsible for providing the Prism and Acropolis capabilities. Each cluster node has two EBS volumes attached, both encrypted gp3 volumes: the AHV EBS volume is 100 GB and the CVM EBS volume is 150 GB. All services and components are distributed across all CVMs in a cluster to provide high availability and linear performance at scale.



Figure 7: NC2 on AWS Architecture

This design enables the MapReduce framework (Curator) to use the full power of the cluster to perform activities such as data reprotection, compression, erasure coding, and deduplication concurrently.

Figure 8: Cluster Deployment in a VPC

Preventing Network Partition Errors


AOS Storage uses the Paxos algorithm to avoid split-brain scenarios. Paxos is a proven protocol for reaching
consensus or quorum among several participants in a distributed system.
Before any file system metadata is written to Cassandra, Paxos ensures that all nodes in the system agree on the
value. If the nodes do not reach a quorum, the operation fails in order to prevent any potential corruption or data
inconsistency. This design protects against events such as network partitioning, where communication between nodes
may fail or packets may become corrupt, leading to a scenario where nodes disagree on values. AOS Storage also
uses time stamps to ensure that updates are applied in the proper order.

Resolving Bad Disk Resources


AOS Storage incorporates a Curator process that performs background housekeeping tasks to keep the entire cluster
running smoothly.



Curator's responsibilities include ensuring file system metadata consistency and combing the extent store for corrupt and under-replicated data.
Curator also scans extents in successive passes, computes each extent's checksum, and compares it with the metadata checksum to validate consistency. If the checksums do not match, the corrupted extent is replaced with a valid extent from another node. This proactive data analysis protects against data loss and identifies bad sectors, which helps detect disks that are about to fail.

Maintaining Availability: Node and Rack Failure


The cluster orchestrator running in the cloud service is responsible for maintaining your intended capacity under rack
and node failures.
Instances in the cluster are deployed using a partition placement group with seven partitions. A placement group
is created for each instance type, and the instances are kept well balanced within the placement group. The placement group and partition number are translated into a rack ID for the node. This enables AOS Storage to place metadata and data replicas in different fault domains.

Figure 9: NC2 on AWS - Partition Placement (Multi)

Setting up a cluster with redundancy factor 2 (RF2) protects data against a single rack failure, and setting it up with RF3 protects against a two-rack failure. Also, to protect against multiple correlated failures within a data center or an entire AZ failure, Nutanix recommends that you set up synchronous replication to a second cluster in a different AZ in the same Region, or asynchronous replication to an AZ in a different Region. AWS data transfer charges may apply.
AWS deploys each node of the Nutanix cluster on a separate AWS rack (also called AWS partition) for fault
tolerance.
If a cluster loses rack awareness, an alert is displayed in the Alerts dashboard of the Prism Element web console and
the Data Resiliency Status dashboard displays a Critical status.
A cluster might lose rack awareness if you:
1. Update the cluster capacity.
For example, if you add or remove a node.
2. Manually replace a host or the replace host action is automatically triggered by the NC2 console.
3. Change the Replication Factor (RF), that is from RF2 to RF3.
4. Create a cluster with either 8 or 9 nodes and configure RF3 on the cluster.



5. Deploy a cluster in an extremely busy AWS region.
This is a rare scenario in which the cluster might get created without rack awareness due to limited availability of
resources in that region. For more details on the Nutanix metadata awareness conditions, visit the AOS Storage
section in the Nutanix Bible.
The Strict Rack Awareness feature helps maintain rack awareness in the scenarios listed above. To maintain rack awareness for clusters on AWS, you need to enable the Strict Rack Awareness feature and maintain at least three racks for RF2 or at least five racks for RF3. This feature is disabled by default.
You can run the following nCLI command to enable Strict Rack Awareness:
ncli cluster enable-strict-domain-awareness

If you want to disable Strict Rack Awareness, run the following nCLI command:
ncli cluster disable-strict-domain-awareness

Contact Nutanix Support for assistance if you receive an alert in the Prism Element web console that indicates your
cluster has lost rack awareness.

Maintaining Availability: AZ Failure


AWS Availability Zones are distinct locations within an AWS Region that are engineered to be isolated from failures
in other Availability Zones. They provide inexpensive, low-latency network connectivity to other Availability Zones
in the same AWS Region.
The instance storage is also ephemeral in nature. Although rare, failures can occur that affect the availability of instances that are in the same AZ. If you host all your instances in a single AZ that is affected by such a failure, none of your instances are available.
Deployment of a production cluster in an AZ without protection either by using disaster recovery to on-prem or
Nutanix Disaster Recovery results in data loss if there are AZ failures. You can use Nutanix Disaster Recovery
to protect your data to another on-prem cluster, or another NC2 cluster in a different Availability Zone. For more
information, see Nutanix Disaster Recovery Guide.

Note: If your cluster is running in a single AZ without protection either by using disaster recovery to on-prem or
Nutanix Disaster Recovery beyond 30 days, the Nutanix Support portal displays a notification indicating that your
cluster is not protected.
The notification includes a list of all the clusters that are in a single AZ without protection.
Hover over the notification for more details and click Acknowledge. Once you acknowledge the
notification, the notification disappears and appears only if another cluster exceeds 30 days in a single
availability zone without protection.

NC2 supports Asynchronous and NearSync replication. NearSync replication is supported with AOS 6.7.1.5 and later,
while Asynchronous replication is supported with all supported AOS versions. NearSync replication is supported only
when clusters run AHV; NC2 does not support cross-hypervisor disaster recovery. For more information on Nutanix
Disaster Recovery capabilities, see Nutanix Disaster Recovery Guide.

NC2 Security Approach


Nutanix takes a holistic approach to security and mandates the following to deploy a secure NC2 infrastructure:



1. An AWS account with the following permissions.
1. IAMFullAccess
2. AWS_ConfigRole
3. AWSCloudFormationFullAccess

Note: These permissions are only required for the creation of the CloudFormation template, and NC2 does not use them for any other purpose.

The CloudFormation stack, namely Nutanix-Clusters-High-Nc2-Cloud-Stack-Prod, creates the Nutanix-Clusters-High-Nc2-Cluster-Role-Prod and Nutanix-Clusters-High-Nc2-Orchestrator-Role-Prod IAM roles. For more information on the IAM roles and associated permissions, see AWS Account and IAM Role Requirements.

Note: Do not use the AWS root user for any deployment or operations related to NC2.
NC2 on AWS does not use AWS Secrets Manager for maintaining any stored secrets. All customer-sensitive data is stored on the customer-managed cluster; local NVMe storage on the bare metal is used for storing it. Nutanix does not have any visibility into customer-sensitive data stored locally on the cluster. Any data sent to Nutanix concerning cluster health is stripped of any Personally Identifiable Information (PII).

2. Access control and user management in the NC2 console.

Note: Nutanix recommends following the policy of least privilege for all access granted while deploying NC2. For
more information, see NC2 User Management.

For more information about how security is implemented in a Nutanix Cluster environment, see Network
Security using AWS Security Groups.

Data Encryption
To help reduce cost and complexity, Nutanix supports a native local key manager (LKM) for all clusters with three or more nodes. The LKM runs as a service distributed among all the nodes. You can activate the LKM from Prism Element to enable encryption without adding another silo to manage. If you are looking to simplify your infrastructure operations, you can also use one-click infrastructure for your key manager.
Organizations often purchase external key managers (EKMs) separately for both software and hardware. However, because the Nutanix LKM runs natively in the CVM, it is highly available and there is no variable add-on pricing based on the number of nodes. Every time you add a node, you know the final cost. When you upgrade your cluster, the key management services are also upgraded. Upgrading the infrastructure and management services in lockstep maintains your security posture and availability by staying in line with the support matrix.
Nutanix software encryption provides native AES-256 data-at-rest encryption, which can interact with any KMIP-
compliant or TCG-compliant external KMS server (Vormetric, SafeNet, and so on) and the Nutanix native KMS,
introduced in AOS version 5.8. The system uses Intel AES-NI acceleration for encryption and decryption processes
to minimize any potential performance impacts. Nutanix software encryption also provides in-transit encryption. Note
that in-transit encryption is currently applicable within a Nutanix cluster for data RF.



GETTING STARTED WITH NC2
Perform the tasks described in this section to get started with NC2 on AWS.
Before you get started, ensure that you have registered for NC2 on AWS from the My Nutanix portal. See NC2
Payment Methods on page 75 for more information.
For more information on AOS and other software compatibility details, visit Release Notes and Software
Compatibility.
After you create a cluster in AWS, set up the network and security infrastructure in AWS and the cluster for your user
VMs. For more information, see User VM Network Management and Network Security using AWS Security
Groups.

Requirements for NC2 on AWS


This guide assumes prior knowledge of the Nutanix stack and the AWS EC2, VPC, and CloudFormation services. Familiarity with the AWS framework is highly recommended for operating significant deployments on AWS.
Following are the requirements to use NC2 on AWS.

AWS Account and IAM Role Requirements


1. Configure an AWS account to access the AWS console.

Note: You must have CreateRole access to the AWS account.



2. You will need to run a CloudFormation script that creates IAM roles for NC2 on AWS in your AWS account.
When running the CloudFormation script, you must log into your AWS account with a user role that has the
following permissions:

• IAMFullAccess: NC2 on AWS utilizes IAM roles to communicate with AWS APIs. You must have
IAMFullAccess privileges to create IAM roles in your AWS account.
• AWS_ConfigRole: You might want to have the AWS Config permission so that you can get configuration
details for AWS resources.
• AWSCloudFormationFullAccess: NC2 on AWS provides you with a CloudFormation script to create
two IAM roles used by NC2. You must have AWSCloudFormationFullAccess privileges to run that
CloudFormation stack in your account.

Note: These permissions are needed only for you to run the CloudFormation template, and NC2 does not use them for any other purpose.

By running the CloudFormation script provided by NC2, you will be creating the following two IAM roles for
NC2 on AWS:

• Nutanix-Clusters-High-Nc2-Cluster-Role-Prod
• Nutanix-Clusters-High-Nc2-Orchestrator-Role-Prod
You can either create these roles manually and assign the required permissions, or run the CloudFormation script to add these roles. Nutanix recommends running the CloudFormation script so that the permissions are added accurately to those roles.
One role allows the NC2 console to access your AWS account by using APIs, and the other role is assigned to
each of your bare-metal instances.
You can view information about your CloudFormation stack, namely Nutanix-Clusters-High-Nc2-Cloud-Stack-Prod, on the Stacks page of the CloudFormation console.

Figure 10: Viewing CloudFormation Template
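If you prefer the AWS CLI to the console, the stack can also be created from the published template. The following is a minimal sketch, assuming AWS CLI v2 is configured with a user that has the permissions above; note that the Quick create stack link used while adding your cloud account may pre-fill parameters that you would otherwise need to pass with --parameters:

aws cloudformation create-stack \
  --stack-name Nutanix-Clusters-High-Nc2-Cloud-Stack-Prod \
  --template-url https://s3.us-east-1.amazonaws.com/prod-gcf-567c917002e610cce2ea/aws_cf_clusters_high.json \
  --capabilities CAPABILITY_NAMED_IAM

# Wait until the stack reports CREATE_COMPLETE before continuing.
aws cloudformation wait stack-create-complete \
  --stack-name Nutanix-Clusters-High-Nc2-Cloud-Stack-Prod

The CAPABILITY_NAMED_IAM flag is required because the template creates IAM resources with custom names.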

If you want to create the IAM roles manually, you can review the CloudFormation script from
https://s3.us-east-1.amazonaws.com/prod-gcf-567c917002e610cce2ea/aws_cf_clusters_high.json and check the
required permissions. Alternatively, you can view the CloudFormation script by clicking Open AWS Console
while adding your cloud account. For more information, see Adding an AWS Cloud Account.
To view the permissions and policies attached to these IAM roles, you can sign in to the AWS Management
Console, open the IAM console at https://console.aws.amazon.com/iam/, and then choose Roles >
Permissions.

Figure 11: Viewing IAM Roles

For more information on how to secure your AWS resources, see Security Best Practices in IAM.

Note: For NC2 on AWS with AOS 6.7.1.5, you must run the CloudFormation template while adding your AWS cloud
account. If you have already run the CloudFormation template, you must run it again so that any new permissions added
to the IAM roles come into effect.

vCPU Limits
Review the supported regions and bare-metal instances. For details, see Supported Regions and Bare-metal
Instances
AWS supports the following vCPU limits for the bare-metal instances available for NC2 on AWS.

• i4i.metal: 128 vCPUs for each instance
• m6id.metal: 128 vCPUs for each instance
• m5d.metal: 96 vCPUs for each instance
• i3.metal: 72 vCPUs for each instance
• i3en.metal: 96 vCPUs for each instance
• z1d.metal: 48 vCPUs for each instance
• g4dn.metal: 96 vCPUs for each instance

Note: Before you deploy a cluster, check if the EC2 instance type is supported in the Availability Zone in which you
want to deploy the cluster.
Not all instance types are supported in all the availability zones in an AWS region. An error message is
displayed if you try to deploy a cluster with an instance type that is not supported in the availability zone
you selected.

Configure a sufficient vCPU limit for your AWS account. Cluster creation fails if you do not have a sufficient
vCPU limit set for your AWS account.
You can calculate your vCPU limit in the AWS console under EC2 > Limits > Limits Calculator.
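Both checks can also be scripted with the AWS CLI. The following is a minimal sketch, assuming AWS CLI v2 is configured for the target region; i3.metal and us-east-1 are used only as examples:

# Check whether the instance type is offered in the Availability Zones of a region.
aws ec2 describe-instance-type-offerings \
  --location-type availability-zone \
  --filters Name=instance-type,Values=i3.metal \
  --region us-east-1

# List the On-Demand vCPU quotas currently applied to the account.
aws service-quotas list-service-quotas \
  --service-code ec2 \
  --region us-east-1 \
  --query "Quotas[?contains(QuotaName, 'On-Demand')].[QuotaName,Value]" \
  --output table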



Note: Each node in a Nutanix cluster has two EBS volumes attached (AHV EBS and CVM EBS). Both are encrypted
gp3 volumes. The size of the AHV EBS volume is 100 GB, and that of the CVM EBS volume is 150 GB.

To learn more about setting AWS vCPU Limits for NC2, see the Nutanix University video.

IMDS Requirements
NC2 on AWS supports accessing the instance metadata from a running instance using one of the following methods:

• Instance Metadata Service Version 1 (IMDSv1) – a request/response method


• Instance Metadata Service Version 2 (IMDSv2) – a session-oriented method
By default, you can use either IMDSv1 or IMDSv2, or both. For more information on how to configure the instance
metadata options, see AWS Documentation.
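For example, from a shell on a running instance, IMDSv2 requires fetching a session token first, while IMDSv1 answers a plain GET:

# IMDSv2: request a session token (TTL in seconds), then pass it with each call.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/instance-id

# IMDSv1: the same metadata path without a token.
curl -s http://169.254.169.254/latest/meta-data/instance-id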

My Nutanix Account Requirements


Configure a My Nutanix account, that is, an account to access the NC2 console. See Creating My Nutanix Account
for more information.

Note: When you create a My Nutanix account, a default workspace gets created for you with the Account Admin role,
which is required to create an NC2 subscription and access the Admin Center and Billing Center portals. If you are
invited to a workspace, then you must get the Account Admin role so that you can subscribe to NC2 and access the
Admin Center and Billing Center.

Cluster Protect Requirements


If you intend to protect your clusters using the Cluster Protect feature, you need to meet additional requirements. For
more information, see Prerequisites for Cluster Protect.

Networking Requirements
1. Configure connectivity between your on-prem datacenter and AWS VPC by using either VPN or Direct Connect
if you want to pair the on-prem and NC2 clusters for data protection or other reasons.
See AWS Site-to-Site VPN to connect AWS VPC by using VPN.
To learn more about setting up a VPN to on-prem, see the Nutanix University video.
See Connect Your Data Center to AWS to connect AWS VPC by using Direct Connect.
2. Allow outbound internet access on your AWS VPC so that the NC2 console can successfully provision and
orchestrate Nutanix clusters in AWS.
For more information on how to allow outbound internet access on your AWS VPC, see AWS VPC
documentation.
3. Configure the AWS VPC infrastructure. You can choose to create a new VPC as part of cluster creation from the
NC2 portal or use an existing VPC.
To learn more about setting up an AWS Virtual Private Cloud (VPC) manually, see the Nutanix University
video.
4. If you deploy AWS Directory Service in a selected VPC or subnet to resolve DNS names, ensure that AWS
Directory Service resolves the following FQDN successfully to avoid deployment failure.
FQDN: gateway-external-api.cloud.nutanix.com



5. Ensure that both enableDnsHostnames and enableDnsSupport DNS attributes are enabled for your VPC
so that:

• Instances with public IP addresses receive corresponding public DNS hostnames.


• The Amazon Route 53 Resolver server can resolve Amazon-provided private DNS hostnames.
To learn more about these DNS attributes, see DNS attributes for your VPC.
For more information on how to view and update DNS attributes for your VPC, see AWS Documentation.
6. If you use an existing VPC, you need two private subnets, one for user VMs and one for cluster management. If
you choose to create a new VPC during the cluster creation workflow, then the NC2 console creates the required
private subnets.
7. Create an internet gateway and attach it to your VPC. Set the default route on your Public Route Table to the
internet gateway.
8. Create a public NAT gateway, associate it with the public subnet, and assign a public elastic IP address to the
NAT gateway. Set the default route for your default route table to the NAT gateway. (A CLI sketch of steps 4
through 8 follows this list.)
9. If you use AOS 5.20, ensure that the cluster virtual IP address is set up for your on-premises and NC2 on AWS
cluster after the cluster is created. In NC2 on AWS with AOS 6.x, the orchestrator automatically assigns the
cluster virtual IPs. This IP address is the destination address for the remote site.
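The DNS and gateway items in steps 4 through 8 can be sketched with the AWS CLI as follows; every resource ID shown (vpc-..., subnet-..., and so on) is a placeholder for your own resources:

# Step 4: verify that the NC2 gateway FQDN resolves.
nslookup gateway-external-api.cloud.nutanix.com

# Step 5: confirm that both VPC DNS attributes are enabled.
aws ec2 describe-vpc-attribute --vpc-id vpc-0abc123 --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-0abc123 --attribute enableDnsHostnames
aws ec2 modify-vpc-attribute --vpc-id vpc-0abc123 --enable-dns-hostnames "{\"Value\":true}"

# Steps 7 and 8: internet gateway, NAT gateway, and default routes.
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0abc123 --vpc-id vpc-0abc123
aws ec2 create-route --route-table-id rtb-0public --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0abc123
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-0public --allocation-id eipalloc-0abc123
aws ec2 create-route --route-table-id rtb-0default --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0abc123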

CIDR Requirements
You must use the following range of IP addresses for the VPCs and subnets:

• VPC: between /16 and /25, including both
• Private management subnet: between /16 and /25, including both
• Public subnet: between /16 and /25, including both
• UVM subnets: between /16 and /25, including both

Note: UVM subnet sizing depends on the number of UVMs that need to be deployed. NC2 supports the
network CIDR sizing limits enforced by AWS.

Supported Regions and Bare-metal Instances


Nutanix Cloud Clusters (NC2) running on AWS supports certain EC2 bare-metal instances in various AWS
regions.

Note: NC2 might not support some bare-metal instance types in certain regions due to limitations in the number of
partitions available. NC2 supports EC2 bare-metal instances in regions with three or more partitions. The support for
g4dn.metal instance type is only available on clusters with AOS 6.1.1 and 5.20.4 or later releases.
You can use a combination of i3.metal, i3en.metal, and i4i.metal instance types or z1d.metal, m5d.metal,
and m6id.metal instance types while creating a new cluster or expanding the cluster capacity of an already
running cluster. The combination of these instance types is subject to bare-metal support from AWS in the
region where the cluster is being deployed. For more details, see Creating a Heterogeneous Cluster.
You can only create homogeneous clusters with g4dn.metal instances; this instance type cannot be used to
create a heterogeneous cluster.

The following table lists the AWS EC2 bare-metal instance types supported by Nutanix.



Table 8: EC2 Bare-metal Instance Details

Metal types | Details
i4i.metal | 64 physical cores, 1024 GiB memory, 27.28 TiB NVMe SSD storage
m6id.metal | 64 physical cores, 512 GiB memory, 7.42 TiB NVMe SSD storage
m5d.metal | 48 physical cores, 384 GiB memory, 3.27 TiB NVMe SSD storage
i3.metal | 36 physical cores, 512 GiB memory, 13.82 TiB NVMe SSD storage
i3en.metal | 48 physical cores, 768 GiB memory, 54.57 TiB NVMe SSD storage
z1d.metal | 24 physical cores, 384 GiB memory, 1.64 TiB NVMe SSD storage
g4dn.metal | 48 physical cores, 384 GiB memory, 128 GiB GPU memory, 1.64 TiB NVMe SSD storage

For more information, see Hardware Platform Spec Sheets. Select NC2 on AWS from the Select your
preferred Platform Providers list.
The following table lists the detailed information for each bare-metal instance type supported in each AWS region.

Table 9: AWS Clusters - Available Regions and Supported Bare-metal Types

Region name | i4i.metal | m6id.metal | m5d.metal | i3.metal | i3en.metal | z1d.metal | g4dn.metal
US-East (N.Virginia) | Yes | Yes | Yes | Yes | Yes | Yes | Yes
US-East (Ohio) | Yes | Yes | Yes | Yes | Yes | Yes | Yes
US-West (N.California) | Yes | No | Yes | Yes | Yes | Yes | Yes
US-West (Oregon) | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Africa (Cape Town)* | Yes | No | Yes | Yes | Yes | No | Yes
Asia Pacific (Hong Kong)* | Yes | No | Yes | Yes | Yes | No | Yes
Asia Pacific (Jakarta)* | Yes | No | Yes | No | Yes | No | No
Asia Pacific (Mumbai) | Yes | No | Yes | Yes | Yes | Yes | Yes
Asia Pacific (Hyderabad)* | Yes | No | Yes | No | Yes | No | No
Asia Pacific (Seoul) | Yes | No | Yes | Yes | Yes | Yes | Yes
Asia Pacific (Singapore) | Yes | No | Yes | Yes | Yes | Yes | Yes
Asia Pacific (Sydney) | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Asia Pacific (Melbourne)* | Yes | No | Yes | No | Yes | No | No
Asia Pacific (Tokyo) | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Asia Pacific (Osaka) | Yes | No | Yes | Yes | Yes | No | Yes
Canada (Central) | Yes | No | Yes | Yes | Yes | No | Yes
EU (Frankfurt) | Yes | Yes | Yes | Yes | Yes | Yes | Yes
EU (Ireland) | Yes | Yes | Yes | Yes | Yes | Yes | Yes
EU (London) | Yes | No | Yes | Yes | Yes | Yes | Yes
EU (Milan)* | Yes | No | Yes | Yes | Yes | No | Yes
EU (Paris) | Yes | No | Yes | Yes | Yes | No | Yes
EU (Stockholm) | Yes | No | Yes | Yes | Yes | No | Yes
EU (Spain)* | No | No | Yes | No | Yes | No | No
EU (Zurich)* | Yes | No | Yes | No | Yes | No | No
Israel (Tel Aviv)* | No | No | Yes | No | Yes | No | No
Middle East (Bahrain)* | Yes | No | Yes | No | Yes | No | Yes
Middle East (UAE)* | Yes | No | Yes | No | Yes | No | No
South America (Sao Paulo) | Yes | No | Yes | Yes | Yes | No | Yes
AWS GovCloud (US-East) | No | No | Yes | No | Yes | No | No
AWS GovCloud (US-West) | No | No | Yes | Yes | Yes | No | No

* - These regions are not auto-enabled by AWS. Ensure you first enable them in your AWS account before using
them with NC2. For more information on how to enable a region, see AWS documentation. Once you have enabled
these regions in your AWS console, ensure they are also selected in your NC2 portal. For more information, see the
instructions about adding cloud regions to the NC2 console in Adding an AWS Cloud Account.
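As a sketch, an opt-in region can also be enabled with AWS CLI v2 through the account service; af-south-1 (Cape Town) is used here only as an example:

aws account enable-region --region-name af-south-1
# Check the opt-in status of the region.
aws account get-region-opt-status --region-name af-south-1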

Note: An instance type may not be supported in a region because the number of partitions is less than the minimum
three partitions required by NC2 or the instance type is not supported by AWS in the specified region.

Installing NVIDIA grid host driver for AHV on G4dn.metal instance


Nutanix Cloud Clusters (NC2) supports AWS G4dn.metal bare-metal instances from the AOS 6.1.1 (STS)
and AOS 5.20.4 (LTS) releases onwards. G4dn.metal, which provides NVIDIA T4 GPUs, is currently the
only GPU-enabled instance from AWS supported in NC2. Users who want to use NC2 on AWS with
G4dn.metal instances must install the NVIDIA GRID host driver for AHV on each node running on a
G4dn.metal instance. Visit https://aws.amazon.com/ec2/instance-types/g4/ to learn more about AWS
G4dn.metal instances. To find more details on the GPU PCIs, run the lspci | grep "NVIDIA" command
from the AHV host.

Note: You have to manually install the NVIDIA driver on each new node when you expand the cluster size. Also, NC2
may automatically replace nodes in your cluster if there are issues with node availability. In such a scenario, the user
must also install the NVIDIA driver on the new node procured by NC2.

Note: If a GPU card is present in your cluster, LCM restricts update to AHV if it does not detect a compatible NVIDIA
GRID driver in its inventory. To fetch a compatible NVIDIA GRID driver for your version of AHV, see Updating
the NVIDIA GRID Driver with LCM.

Perform the following steps to install the NVIDIA driver on the G4dn hosts:
1. Download the NVIDIA host driver version 13.0 from the Nutanix portal at https://portal.nutanix.com/page/downloads?product=ahv&bit=NVIDIA. (A CLI sketch of the host-driver installation follows these steps.)

2. For detailed installation instructions on NVIDIA driver, see Installing the NVIDIA grid driver.

Note: Users have to sign in to controller VMs in the cluster with the SSH key pair provided during the cluster
creation instead of the default user credentials.
For more information about assigning and configuring a vGPU profile to a VM, see "Creating a VM
(AHV)" in the "Prism Web Console Guide".



3. Perform the following steps to install the NVIDIA guest driver into guest VMs:
1. Ensure the guest driver version/build matches the host driver version/build. For more information about the
build number and version number matrix, see NVIDIA Virtual GPU (vGPU) Software Documentation web
page.
2. Install the NVIDIA vGPU Software Graphics Driver. For more information, see "Installing the NVIDIA vGPU
Software Graphics Driver" in the NVIDIA "Virtual GPU Software User Guide".
3. Download the NVIDIA GRID drivers for the guest driver from the NVIDIA dashboard.

Note: NVIDIA vGPU guest OS drivers for product versions 11.0 or later can be acquired using NVIDIA
Licensing Software Downloads under:

• All Available
• Product Family = vGPU
• Platform = Linux KVM
• Platform Version = All Supported
• Product Version = (match host driver version)
AHV-compatible host and guest drivers for older AOS versions can be found on the NVIDIA
Licensing Software Downloads site under 'Platform = Nutanix AHV'.
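As a sketch of the host-driver installation in steps 1 and 2 above: the driver bundle is typically installed from a CVM with the install_host_package utility, which performs a rolling installation across the AHV hosts. The exact flags and file location depend on your AOS version, so treat the following as an assumption and follow the linked installation instructions for the authoritative procedure:

# Run from any CVM as the nutanix user; the URL is a placeholder pointing
# to the driver bundle downloaded from the Nutanix portal.
install_host_package -u http://<internal-webserver>/nvidia_host_driver_13.0.tar.gz

# After installation, confirm the T4 GPUs are visible from an AHV host.
lspci | grep "NVIDIA"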

Limitations
Following are the limitations of NC2 in this release:

• A maximum of 28 nodes are supported in a cluster. NC2 supports 28-node cluster deployment in AWS regions
that have seven placement groups.

Note: NC2 does not recommend using single-node clusters in production environments.

• Two-node clusters are not supported.


• There can be a maximum of 14 ENIs per host. Each ENI can have up to 50 IP addresses with 49 usable IPs. For
more information, see AWS Elastic Network Interfaces (ENIs) and IP Addresses.
• You can create a heterogeneous cluster using a combination of i3.metal, i3en.metal, and i4i.metal instance types
or z1d.metal, m5d.metal, and m6id.metal instance types. However, you can only create homogeneous clusters
with g4dn.metal instances; this instance type cannot be used to create a heterogeneous cluster. See Creating a
Heterogeneous Cluster for more details.
• NC2 does not support sharing of AWS subnets among multiple clusters.
• Only IPv4 is supported.
• SyncRep operations are not supported.
• Do not use 192.168.5.0/24 CIDR for the VPC being used to deploy the NC2 on AWS cluster. All Nutanix nodes
use that CIDR for communication between the CVM and the installed hypervisor.
• Unmanaged networks are not supported in this release.
• Broadcast and unknown unicast traffic are dropped.
• The default configuration for CVMs on NC2 with AOS 6.7 or earlier is 32 GiB of RAM. On NC2 with AOS
6.7.1.5, the CVM memory size is set to 48 GiB.



• If you choose to create a new VPC by using the NC2 console when you are creating a cluster, you cannot deploy
other clusters in that VPC. However, if you choose to deploy a cluster in an existing VPC (VPC you created using
the AWS console), you can deploy multiple clusters in that VPC.
• For Prism Central scale-out deployments used in an NC2 on AWS environment, reconfiguration of Prism Central
VM IP addresses is not supported.
• IP preservation is not currently supported when VMs are recovered from a Cluster Protect backup in S3 buckets.
If you intend to protect your clusters using the Cluster Protect feature, understand the limitations of using this feature
listed in Limitations of Cluster Protect.

Non-Applicable On-Prem Configurations


Following is a list of the configurations and settings that are supported in an on-prem Nutanix cluster, but are not
applicable to a cluster running in AWS.

Prism Element and Prism Central Configurations


VLAN ID:
AWS does not support VLANs. Therefore, if you deploy a cluster on AWS, you do not need to
provide the VLAN ID when you create or update the network in the cluster. The VLAN ID is replaced
by the subnet ID, which uniquely identifies a given network in a VPC.
Network Visualization:
The Network Visualization feature of the on-prem Prism Element (Prism Element) web console is
a consolidated graphical representation of the network of the Nutanix cluster VMs, hosts, network
components (as physical and logical interfaces), and attached first-hop switches (alternately
referred to as Top-of-Rack (ToR) switch or switches in this document). In an on-prem cluster, the
information about ToR Switches is configured by using a CLI command. The cluster also uses
SNMP and LLDP to fetch more information from the switch.

In a cluster running in AWS, you have no visibility into the actual cloud infrastructure such as the
ToR switches. API support is not available to discover the cloud infrastructure components in
Nutanix clusters. Given that the cluster is deployed in a single VPC, the switch view is replaced
by the VPC. Any configuration options on the network switch are disabled for clusters deployed in
AWS.
Uplink Configuration:
The functionality to update the uplink configuration is disabled for a cluster running in AWS.
Hardware Configuration:
The Switch tab in the Hardware menu of the Prism Element web console is disabled for a cluster
running in AWS.
Rack Configuration:
The functionality to configure racks is disabled for a cluster running in AWS. Clusters are deployed
as rack-aware by default. APIs to create racks are also disabled on clusters running in AWS.
Broadcast and LLDP:
AWS does not support broadcast and any link layer information based on protocols such as LLDP.
Security Dashboard
A dashboard that provides a dynamic summary of the security posture across all registered clusters
is not supported for NC2.
Host NIC:
Elastic Network Interfaces (ENIs) provisioned on bare-metal AWS instances are virtual interfaces
provided by Nitro cards. AWS does not provide any bandwidth guarantees for each ENI, but



provides an aggregate multi-flow bandwidth of 25G. Also, when clusters are deployed on AWS, ENI
creation and deletion are dynamic based on UVMs, and you do not need to perform these workflows.
Hence, the Prism Element web console displays only single host NIC information, that is, eth0,
which is the primary ENI of the bare-metal instance. All the configuration and statistical attributes
are associated with eth0.
Hosts Only No Blocks:
Hosts are independent and not put together as a block. The block view is changed to host view in
the Prism Element web console.
Field Replaceable Units:
The functionality to replace and repair disks is disabled for a cluster running in AWS.
Backplane LAN:
NC2 on AWS does not support RDMA-based vNICs; hence, support for the backplane LAN is
disabled.
Firmware Upgrades:
For an on-prem cluster, Life Cycle Manager (LCM) allows you to perform upgrades of the BMC,
BIOS, and any hardware component firmware. However, these components are not applicable to
clusters deployed in AWS. Therefore, LCM does not list these items in the upgrade inventory.
License Updates:
You cannot update your NC2 licenses by using the Prism Element web console. Update your NC2
licenses by using the NC2 console.

Cluster Operations
Perform the following actions using the NC2 console:

• Cluster deployment and provisioning must be performed by using the NC2 console and not by using Foundation.
• Perform add node and remove node operations by using the NC2 console and not by using the Prism Element web
console.

aCLI Operations
The following aCLI commands are disabled in a cluster in AWS:

Namespace | Options
net | create_cluster_vswitch, delete_cluster_vswitch, get_cluster_vswitch, list_cluster_vswitch, update_cluster_vswitch
host | enter_maintenance_mode, enter_maintenance_mode_check, exit_maintenance_mode

nCLI Operations
The following nCLI commands are disabled in a cluster in AWS:



Entity | Options
network | add-switch-config, edit-switch-config, delete-switch-config, list-switch, list-switch-ports, get-switch-collector-config, edit-switch-collector-config
cluster | edit-hypervisor-lldp-params, get-hypervisor-lldp-config, edit-param disable-degraded-state-monitoring
disk | delete, remove-start, remove-status
software | download, list, remove, upload

API Operations
The following API calls are disabled or changed in a Nutanix cluster running in AWS:

API | Changes
POST /hosts/{hostid}/enter_maintenance_mode | Not supported
POST /hosts/{hostid}/exit_maintenance_mode | Not supported
GET /clusters | Values for the rack and block configuration are not displayed.
POST /cluster/block_aware_fixer | Not supported
DELETE /api/nutanix/v1/cluster/rackable_units/{uuid} | Not supported
DELETE /api/nutanix/v3/rackable_units/{uuid} | Not supported
DELETE /api/nutanix/v3/disks/{id} | Not supported
GET /hosts | Returns the instance ID of the AWS instance as compared to the serial number of the host in an on-prem cluster. Returns no values for the following attributes:

• ipmiAddress (string, optional)
• ipmiPassword (string, optional)
• ipmiUsername (string, optional)
• backplaneIp (string, optional)
• bmcModel (string, optional): Specifies the model of the BMC present on the node
• bmcVersion (string, optional): Specifies the version of the BMC present on the node
• controllerVmBackplaneIp (string, optional)
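As an illustration, the changed GET /hosts behavior can be observed with a v2.0 REST call against the cluster virtual IP; the address and credentials below are placeholders:

# On NC2, the host entities carry the EC2 instance ID in place of the hardware
# serial number, and the IPMI/BMC attributes come back empty.
curl -k -u admin https://<cluster-virtual-ip>:9440/PrismGateway/services/rest/v2.0/hosts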

NC2 Infrastructure Deployment


To deploy an NC2 infrastructure, configure the entities listed in this topic in AWS and the NC2 console.
1. In AWS:
1. Configure an AWS account to access the AWS console.
See How do I create and activate a new AWS account? for more information.
2. Configure an AWS IAM user with the following permissions:
See Creating an IAM User in Your AWS Account for more information.

• IAMFullAccess: Enables the NC2 console to run the CloudFormation template in AWS to link your
AWS and NC2 account.
You use the credentials of this IAM user when you are adding your AWS cloud account to the NC2
console. When you are adding your AWS cloud account, you run a CloudFormation template, and the
CloudFormation script adds two IAM roles to your AWS account. One role allows the NC2 console
to access your AWS account by using APIs, and the other role is assigned to each of your bare-metal
instances.

Note: Only the user account that you use to add your AWS account to NC2 has the IAMFullAccess privilege;
the NC2 console itself does not have this privilege.

• AWS_ConfigRole: Grants AWS Config permission to get configuration details for supported AWS
resources
• AWSCloudFormationFullAccess: Used to create the initial AWS resources needed to link your AWS
account and create a CloudFormation stack

Note: These permissions are only required for the creation of the CloudFormation stack, and NC2 does not use
them for any other purpose.

3. A VPC
4. A private subnet for management traffic
5. One or more private subnets for user VM traffic
6. Two new AWS S3 buckets with Nutanix IAM role if you want to use the Cluster Protect feature to protect
Prism Central, UVM, and volume groups data.
See the AWS documentation for instructions about how to configure these requirements.
2. In the NC2 console:
1. A My Nutanix account to access the NC2 console.
See NC2 Payment Methods on page 75 for more information.
2. An organization
See Creating an Organization on page 40 for more information.

Creating My Nutanix Account


You need a My Nutanix account to access the NC2 console. A My Nutanix account allows you to subscribe to,
access, and manage NC2. After creating a My Nutanix account, you can access the NC2 console through the My
Nutanix dashboard. You can use NC2 for a 30-day free trial period (one common free trial period for NC2 on all
supported clouds) or sign up to pay for NC2 usage beyond the free trial period. You can pay for NC2 using your
Nutanix licenses or with the subscription plan.
Perform the following procedure to create a My Nutanix account.

Procedure

1. Go to https://my.nutanix.com.

2. Click Sign up now.

3. Enter your details, including first name, last name, company name, job title, phone number, country,
email, and password.
Follow the specified password policy while creating the password. Personal domain email addresses, such as
gmail.com or yahoo.com, are not allowed. You must sign up with a company email address.

4. Click Submit.
A confirmation page appears, and you receive a verification email from Nutanix after you successfully
complete the sign-up process.

5. Click the link in the email to verify your email address.


A confirmation message briefly appears, and you are directed to the Nutanix Support portal.

6. Sign in to the portal using the credentials you specified during the sign-up process.



7. Click My Nutanix to go to the My Nutanix dashboard.



8. An educational tutorial explaining the multiple workspaces appears when you access My Nutanix for the first
time. Click Take a Tour to learn more about workspaces. If you have an existing My Nutanix account and are
familiar with workspaces, click Skip.

Figure 12: Take a tour - multiple workspaces

A default Personal workspace is created after you successfully create a My Nutanix account. You can rename
your workspaces. For more information on workspaces, see Workspace Management.

Note: The default Personal workspace name contains the domain, followed by the email address of the user and
the word "tenant".



Figure 13: Workspace

Note: When you create a My Nutanix account, a default workspace gets created for you with the Account Admin
role, which is required to create an NC2 subscription and access the Admin Center and Billing Center portals. If you
are invited to a workspace, then you must get the Account Admin role so that you can subscribe to NC2 and access
the Admin Center and Billing Center.

Starting a Free Trial for NC2


Before you sign up for a paid subscription plan to use NC2, you can start a 30-day free trial. While NC2 supports
multiple public clouds (AWS, Azure), Nutanix offers only one 30-day free trial period for NC2. The free trial covers
Nutanix software usage only. If your free trial period has expired, consider subscribing to a paid subscription plan.
After your NC2 trial expires, your cluster will still be accessible, but you will not be able to change the cluster
capacity, hibernate a cluster, or create new clusters until you subscribe to NC2. You will not be billed for Nutanix
software usage while your trial is expired; however, your cloud provider might charge you for hardware. If needed,
the NC2 team can work with you during this period to offer you an extension on your expired trial.
Your trial remains expired for a grace period of 30 days, after which your NC2 trial gets cancelled, and no more trial
extensions are possible. The NC2 cluster stays running, but you cannot modify the capacity of an existing cluster,
create new clusters, or hibernate a cluster. Billing from your cloud provider will continue as usual. You can still
switch to a paid subscription and regain the capabilities to use the existing configurations and NC2 features fully. For
more information on subscribing to NC2, see Changing Payment Method.

Note: The owner of the My Nutanix workspace that has been used to start the free trial for NC2 must add other users
from the NC2 console with appropriate RBAC if those users need to manage clusters in the same tenant. For more
information on adding users and the roles that can be assigned, see NC2 User Management.

Note: You are responsible for any hardware and cloud services costs incurred during the NC2 free trial.

Perform the following procedure to start a free trial of NC2:



Procedure

1. Sign in to https://my.nutanix.com using your My Nutanix credentials.

Note: Ensure that you select the correct workspace from the Workspace dropdown list on the My Nutanix
dashboard. For more information on workspaces, see Workspace Management.

Figure 14: Selecting a Workspace

2. On the My Nutanix dashboard, scroll to Cloud Services, and under Nutanix Cloud Clusters (NC2), click
Get Started.

Figure 15: Cloud Services - NC2 Get Started

3. On the Nutanix Cloud Clusters (NC2) on Public Clouds page, under Try NC2, click Start your 30 day
free trial.

4. You are redirected to the NC2 console. When prompted to accept the Nutanix Cloud Services Terms of Service,
click I Accept. The NC2 console opens in a new tab. You can now start using NC2.

Note: If you want to subscribe to NC2 instead of using a free trial, you can click the Select from our available
plan options to get started option, and then complete the subscription on the Nutanix Billing Center.



CLUSTER DEPLOYMENT
The NC2 deployment includes the following steps:

• Create an organization in the NC2 console.


• Add your AWS cloud account to NC2.
• Create a cluster.

Creating an Organization
An organization in the NC2 console allows you to segregate your clusters based on your specific
requirements. For example, create an organization Finance and then create a cluster in the Finance
organization to run only your finance-related applications.

About this task


In the NC2 console, create an organization and then create a cluster within that organization.
To create an organization, perform the following steps:

Procedure

1. Sign in to Nutanix Cloud Clusters (NC2) from the My Nutanix dashboard.

Note: On the My Nutanix dashboard, ensure that you select the correct workspace from the Workspace
dropdown list that shows the workspaces you are part of and that you have used while subscribing to NC2.

Figure 16: Selecting a Workspace



2. In the Organizations tab, click Create Organization.

Figure 17: NC2 Create Organization

3. In the Create a new organization dialog box, do the following in the indicated fields:

a. Customer. Select the customer account in which you want to create the organization.
b. Organization name. Enter a name for the organization.
c. Organization URL. The URL name is automatically generated. If needed, the name can be modified.

4. Click Create.
After a successful creation, the new organization will be listed in the Organizations tab.

Updating an Organization
Administrators can update the basic information for your organization from the NC2 console.

Note: Changes applied to the organization entity affect the entirety of the organization and any accounts listed
underneath it.

To update your organization, perform the following:

Procedure

1. Sign in to the NC2 console: https://cloud.nutanix.com.

2. In the Organization page, select the ellipsis button of a corresponding organization and click Update.

3. To update the organization’s basic details:

a. Navigate to the Basic Info tab of the Organization entity's update page.
b. You can edit any of the fields listed below if required:

• Name: Edit the name of your organization in this field.


• URL name: This specifies the slug of the URL unique to your organization. For example, specifying
documentation would look like this:
https://cloud.nutanix.com/[customer_URL]/documentation/[account_URL]

• Description: Add a description of the organization.


• Website: Specify the web address for your organization. For example, https://www.google.com.
c. Click Save.

Adding an AWS Cloud Account


To add your AWS account to NC2, specify your AWS cloud details, create and verify a CloudFormation stack in the
AWS console, and select AWS regions in which you want to create Nutanix clusters.



About this task

Note: You can add one AWS account to multiple organizations within the same customer entity. However, you cannot
add the same AWS account to two or more different Customer (tenant) entities. If you have already added an AWS
account to an organization and want to add the same AWS account to another organization, follow the same process,
but you do not need to create the CloudFormation template.
If a cluster is present, do not delete the CloudFormation stacks.

Note: For NC2 on AWS with AOS 6.7.1.5, you must run the CloudFormation template while adding your AWS cloud
account. If you have already run the CloudFormation template, you must run it again so that any new permissions added
to the IAM roles come into effect.

To add an AWS account to NC2, perform the following procedure:

Procedure

1. Sign in to NC2 from the My Nutanix dashboard.

Note: On the My Nutanix dashboard, ensure that you select the correct workspace from the Workspace
dropdown list that shows the workspaces you are part of and that you have used while subscribing to NC2.

Figure 18: Selecting a Workspace

2. In the left navigation pane, click Organizations.

3. Click the ellipsis next to the organization that you want to add the cloud account to and click Cloud accounts.

4. Click Add Cloud Account.

5. Under Select Cloud Provider, select AWS.

6. In the Name field, type a name for your AWS cloud account.



7. In the Enter Account ID field, type your AWS cloud account ID.

Note: You can find your Account ID in My Account in the AWS cloud console. Ensure that you enter the AWS
cloud account ID without hyphens.

8. Under Prepare AWS Account, click Open AWS Console.


The AWS console opens in a new tab.
To create and manage resources in your AWS account, NC2 requires several IAM resources. Nutanix has
created a CloudFormation template that creates all the necessary resources.
Do the following in the AWS console:

a. Sign in to the AWS account in which you want to create Nutanix clusters.
This account is the same AWS account that is linked to the Account ID you entered in step 7.
b. In the Quick create stack screen, note the template URL, stack name, and other parameters.
c. Select the I acknowledge that AWS CloudFormation might create IAM resources with custom
names check box.
d. Click Create stack.
e. Monitor the progress of the creation of the stack in the Events tab.
f. Wait until the Status changes to CREATE_COMPLETE.
You can view information about your CloudFormation stack, namely Nutanix-Clusters-High-Nc2-Cloud-
Stack-Prod, on the Stacks page of the CloudFormation console.

Figure 19: Viewing CloudFormation Template

The CloudFormation template creates the Nutanix-Clusters-High-Nc2-Cluster-Role-Prod and Nutanix-Clusters-High-Nc2-Orchestrator-Role-Prod IAM roles. To view the permissions and policies attached
to these IAM roles, you can sign in to the AWS Management Console, open the IAM console at
https://console.aws.amazon.com/iam/, and then choose Roles > Permissions.

Figure 20: Viewing IAM Roles
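Before you click Verify Connection in the next step, you can optionally confirm from the AWS CLI that the stack and both roles exist; a short sketch:

aws cloudformation describe-stacks \
  --stack-name Nutanix-Clusters-High-Nc2-Cloud-Stack-Prod \
  --query "Stacks[0].StackStatus"
aws iam get-role --role-name Nutanix-Clusters-High-Nc2-Cluster-Role-Prod
aws iam get-role --role-name Nutanix-Clusters-High-Nc2-Orchestrator-Role-Prod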

9. In the NC2 console, click Verify Connection.


If CloudFormation is successfully verified, a message indicating that the cloud account setup is verified appears
below this field.

10. Under Select data centers, do one of the following:

» Select All supported regions if you want to create clusters in any of the supported AWS regions.
» Select Specify regions if you want to create clusters in specific AWS regions and select the regions of
your choice from the list of available AWS regions.

Note: Some regions are not auto-enabled by AWS. Ensure you first enable them in your AWS account before
using them with NC2. For more information, see Supported Regions and Bare-metal Instances.

11. Select the add cloud account disclaimer checkbox for acknowledgment.

12. Click Add Account.


You can monitor the status of the cloud account in the Cloud account page. An R status indicates that your cloud
account is ready.

Deactivating a Cloud Account


NC2 administrators can deactivate a cloud account from the NC2 console when they want to de-register a
cloud account from their Customer or Organization entity. Once the cloud account is deactivated, the cloud
administrator can terminate the corresponding resources that are not managed by NC2.

Note: A cloud account that has existing NC2 accounts cannot be deactivated. You must terminate all NC2 accounts
using the cloud account resources first.

To deactivate a cloud account, perform the following steps:

Procedure

1. Navigate to the Customer or Organization dashboard in the NC2 console where the cloud account is
registered.

2. Select Cloud Accounts in the left-hand menu.



3. Find the cloud account that you want to deactivate. Click the ellipsis icon against the desired cloud account and
select Deactivate.

Figure 21: Deactivate a cloud account

Reconnecting a Cloud Account


When the NC2 console is unable to communicate with the cloud account infrastructure, the status for the
cloud account in the Cloud Accounts list is displayed as U for Unavailable (instead of R for Ready). The
administrator can correct the issue and manually trigger a reconnection of the cloud account.
A cloud account might become unavailable when the required IAM roles or permissions are removed in AWS.
To reconnect an unavailable cloud account after the issues have been addressed, perform the following steps:

Procedure

1. Navigate to the Customer or Organization dashboard in the NC2 console where the cloud account is
registered.

2. Click the ellipsis icon against the desired organization or customer and then click Cloud Accounts.

3. Find the cloud account you want to reconnect. Click the ellipsis icon against the cloud account and click
Reconnect.

4. If the underlying issue(s) were addressed and the NC2 console can communicate with the cloud account
infrastructure, the account status will change to R.

Adding a Cloud Account Region


Administrators can add additional regions after their cloud account has been set up.

Note: Administrators must ensure they have sufficient resource limits in the regions they decide to add before adding
those regions through the NC2 console.

Procedure

1. Navigate to the Customer or Organization dashboard in the NC2 console where the cloud account is
registered.

2. Click the ellipsis icon against the desired organization or customer and then click Cloud Accounts.

3. Find the cloud account where you want to add a new cloud region. Click the ellipsis icon against the cloud
account and click Add regions. A new window appears.



4. Choose the region from:

• All supported regions: Select this option if you would like to add all other supported regions besides those
you have already specified.
• Specify regions: Select this option if you would like to add just a few additional supported regions to your
cloud account. Click inside the regions field and select as many regions as you want from the drop-down
menu.

Figure 22: Adding a region to a cloud account

5. Once you have made your selection, click Save. You will receive updates in your notification center regarding
the status.

Updating AWS Stack Configurations


For an AWS cloud account, administrators can update the stack or recreate the stack when needed.

Note: You must not recreate the CloudFormation stack for existing clusters. Instead, you must update and rerun the
CloudFormation stack.

Perform the following steps:

Procedure

1. Navigate to the Customer or Organization dashboard in the NC2 console, where the cloud account is
registered.

2. Click the ellipsis icon against the desired organization or customer and then click Cloud Accounts.

3. Find the cloud account for which you want to update the configurations. Click the ellipsis icon against the cloud
account and click Update.



4. Update the AWS Configurations:

• Update Stack: The Update Stack tab provides your CloudFormation Stack template URL and Stack
parameters. These details can be used to update IAM (Identity and Access Management) roles.
For example, to use new product features, you may need to use the CloudFormation Stack template URL to
expand your IAM permissions after an NC2 product update.

Figure 23: Configuration - Update Stack (AWS)


• Recreate Stack: Use the Recreate Stack sub-tab to recreate your CloudFormation stack to a known good
state and verify the connection. Typically, most administrators will access this page when troubleshooting
permissions/account setup issues.

Note: To recreate your CloudFormation stack, you must delete the existing stack in your AWS Console, which
you can access directly from the Recreate Stack sub-tab.



Figure 24: Configuration - Recreate Stack (AWS)

Creating a Cluster
Create a cluster in AWS by using NC2. Your NC2 cluster runs on EC2 bare-metal instances in AWS.
For more information on the AWS components that are installed automatically when you select the option to create
a new VPC during NC2 on AWS deployment, or that you need to install manually when you choose to use an
existing VPC, see AWS Components Installed.

About this task

Note: Each node in a Nutanix cluster has two EBS volumes attached (AHV EBS and CVM EBS). Both are encrypted
gp3 volumes. The size of the AHV EBS volume is 100 GB, and that of the CVM EBS volume is 150 GB.
AWS charges you for EBS volumes regardless of the cluster state (running or hibernate). These charges
are incurred once the cluster is created until it is deleted. See the AWS Pricing Calculator for information
about how AWS bills you for EBS volumes.
AWS bills you an additional charge for the EBS volumes and S3 storage for the time the cluster is
hibernated. If a node turns unhealthy and you add another node to a cluster for evacuation of data or VMs,
AWS also charges you for the new node.

Note: The default configuration for CVMs on NC2 with AOS 6.7 or earlier is 32 GiB of RAM. On NC2 with AOS
6.7.1.5, the CVM memory size is set to 48 GiB.

You must use the following range of IP addresses for the VPCs and subnets:

• VPC: between /16 and /25, including both
• Private management subnet: between /16 and /25, including both
• Public subnet: between /16 and /25, including both
• UVM subnets: between /16 and /25, including both

Note: UVM subnet sizing depends on the number of UVMs that need to be deployed. NC2 supports the
network CIDR sizing limits enforced by AWS.

To create a Nutanix cluster on AWS, perform the following:



Procedure

1. Sign in to NC2 from the My Nutanix dashboard.

Note: On the My Nutanix dashboard, ensure that you select the correct workspace from the Workspace
dropdown list that shows the workspaces you are part of and that you have used while subscribing to NC2.

Figure 25: Selecting a Workspace

2. In the Clusters page, do one of the following:

» If you are creating a cluster for the first time, under You have no clusters, click Create Cluster.
» If you have created clusters before, click Create Cluster in the top-right corner of the Clusters page.

Figure 26: NC2 Create Access Page



3. Select one of the following cluster options:

» General Purpose: A cluster that utilizes general purpose Nutanix licenses. For more information on NCI
licensing, see Nutanix Licenses for NC2.
» Virtual Desktop Infrastructure (VDI): A cluster that utilizes Nutanix licenses for virtual desktops. For
more information on NCI and EUC licensing, see Nutanix Licenses for NC2.

Figure 27: Create a cluster - select a cluster type



4. In the General tab of the Create Cluster dialog box, do the following:

a. Organization. Select the organization in which you want to create the cluster.
b. Cluster Name. Type a name for the cluster.
c. Cloud Provider. Select AWS.
d. Cloud Account. Select the AWS cloud account in which you want to create the cluster.
e. Region and Availability Zone. Select the AWS region and Availability Zone in which you want to create
the cluster.
f. (If you select VDI) Under Consumption Method, the User-based consumption method is selected by
default. In this case, the consumption and cluster pricing are based on the number of users concurrently using
the cluster. Enter the maximum number of users allowed to use the cluster.

Note: The general purpose cluster uses a capacity-based method by default where the consumption and
cluster pricing is based on the capacity provisioned in the cluster.

g. In Advanced Settings, with Scheduled Cluster Termination, NC2 can delete the cluster at a
scheduled time if you are creating a cluster for a limited time or for testing purposes. Select one of the
following:

• Terminate on. Select the date and time when you want the cluster to be deleted.
• Time zone. Select a time zone from the available options.

Note: The cluster will be destroyed, and data will be deleted automatically at the specified time. This is an
irreversible action and data cannot be retrieved once the cluster is terminated.



5. In the Software tab, do the following in the indicated fields:

a. Under Licensing Option, select one of the following:

• For the General Purpose cluster option selected in step 3:

• NCI (Nutanix Cloud Infrastructure): Select this license type and appropriate add-ons to use NCI
licensing.

Note: You must manually register the cluster to Prism Central and apply the NCI licenses in Prism
Central.

• AOS: Select this license type and appropriate add-ons to reserve and use AOS (legacy) licenses. For
more information on how to reserve AOS (legacy) licenses, see Reserving License Capacity.

Figure 28: License Option


• For the Virtual Desktop Infrastructure (VDI) cluster option selected in step 3:

• EUC (End User Computing): Select this option if you want to use EUC licenses for a specified
number of users.

Note: You need to manually register the cluster to Prism Central and manually apply the EUC
licenses.

• VDI: Select this option if you want to use VDI licenses for a specified number of users. For more
information on how to reserve VDI licenses, see Reserving License Capacity.



Figure 29: License Option
For more information on license options, see NC2 Licensing Models.
b. Under AOS Configuration:

• AOS Version. Select the AOS version that you want to use for the cluster.

Note: The cluster must be running the minimum versions of AOS 6.0.1.7 for NCI and EUC licenses, and
AOS 6.1.1 for NUS license.

• Software Tier. In the Software Tier drop-down list, select the license type based on your cluster type
and the license option you selected.

• For General Purpose cluster: Select the Pro or Ultimate license tier that you want to apply to your
NCI or AOS cluster. Click the View Supported Features list to see the available features in each
license type.
• For VDI cluster: The only available license tier for the VDI or EUC cluster, that is, Ultimate, is
selected by default.



Note: If you have selected VDI and User-based licensing, then the Ultimate software edition is
automatically selected, as only the VDI Ultimate license tier is supported on NC2.

This option is used for metering and billing purposes. Usage is metered every hour and charged based on
your subscription plan. Any AOS (legacy) and VDI reserved licenses will be picked up and applied to your
NC2 cluster to cover its usage before billing overages to your subscription plan.
c. Under Add-on Products:

• If the NCI (Nutanix Cloud Infrastructure) or EUC (End User Computing) license option is
selected: you can optionally select Use NUS (Nutanix Unified Storage) on this cluster and specify
the storage capacity that you intend to use on this cluster.

Note: You need to manually apply the NCI and the NUS licenses to your cluster.

• If the AOS or VDI license option is selected, you can optionally select the following add-on products:

• Advanced Replication
• Data-at-Rest Encryption
• Use Files on this cluster: Specify the capacity of files you intend to use in the Unified Storage
Capacity field.

Note: The Advanced Replication and Data-at-Rest Encryption add-ons are selected by default for AOS
and VDI Ultimate; you need to select these add-ons for AOS Pro manually.

For more information, see Software Options.


If you want to run a Microsoft Windows Server on this NC2 on AWS cluster, select the I want to use
Microsoft Windows Server on this cluster and agree to pay Microsoft Windows licensing cost
for the whole cluster directly to AWS checkbox.

Note: NC2 only supports AOS 6.5.4.5 and 6.7.1.5 to run Microsoft Windows Server workloads.

Note: NC2 shares your intent to use a Windows server with AWS. AWS bills you for the Microsoft Windows
Server license cost. For more information, see Microsoft Windows on NC2.



Figure 30: Installing Microsoft Windows

When you choose to use Microsoft Windows Server, you must follow these additional instructions:

a. Bring your own Microsoft Windows binary that is in an AHV-compatible format. Nutanix supports
the RAW, VHD(X), VMDK, VDI, ISO, and QCOW2 disk formats. For more information, see AHV
Administration Guide.
b. Manually install the Windows binary on the NC2 on AWS cluster.
c. Manually license all Windows VMs on the NC2 on AWS cluster.

Note: You must perform this step again on the Windows VM after you migrate it back to the NC2 on AWS
cluster in the disaster recovery scenario.

Follow these steps on the Windows VM you installed:

• Run the below command as an administrator to set your Windows KMS machine IP address.
slmgr.vbs /skms 169.254.169.250:1688

• Set your Windows KMS setup key.


First, identify the correct Microsoft KMS client setup key (KMSSetupKey) for your operating system
version. For more information, see Key Management Services (KMS) client activation and product
keys.



Then, run this command as an administrator:
slmgr.vbs /ipk <KMSSetupKey>

• Run the below command as an administrator to activate Windows.


slmgr.vbs /ato
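To confirm that activation succeeded, you can optionally run the following as an administrator and review the reported license status:

slmgr.vbs /dlv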



6. In the Capacity tab, do the following on the Capacity and Redundancy page, under Cluster Capacity and
Redundancy:

• Under Host Configuration:

• Host type: The instance type used during initial cluster creation is displayed.
• Number of Hosts. Click + or - depending on whether you want to add or remove nodes.

Note: A maximum of 28 nodes are supported in a cluster. NC2 supports 28-node cluster deployment in
AWS regions that have seven placement groups. Also, there must be at least three nodes in a cluster.

• Add Host Type: The other compatible instance types are displayed depending on the instance type used
for the cluster. For example, if you have used i3.metal node for the cluster, then i3en.metal, and i4i.metal
instance types are displayed.

Note: You can create a heterogeneous cluster using a combination of i3.metal, i3en.metal, and i4i.metal
instance types or z1d.metal, m5d.metal, and m6id.metal instance types.
The Add Host Type option is disabled when no compatible node types are available in the
region where the cluster is being deployed.

• Under Redundancy: Select one of the following redundancy factors (RF) for your cluster.

• RF 1: The number of copies of data replicated across the cluster is 1. The number of nodes for RF1 must
be 1.

Note: RF1 can only be used for single-node clusters. Single-node clusters are not recommended in
production environments. You can configure the cluster with RF1 only for clusters created for Dev, Test,
or PoC purposes. You cannot increase the capacity of a single-node cluster.

• RF 2: The number of copies of data replicated across the cluster is 2. The minimum number of nodes for
RF2 must be 3.
• RF 3: The number of copies of data replicated across the cluster is 3. The minimum number of nodes for
RF3 must be 5.



Figure 31: Create a cluster - capacity



7. In the Network tab, do the following:

a. Under Networking, select the VPC in which you want to create the cluster from one of the following
options:

• Use an existing VPC

• Under Select Cluster VPC, select a VPC from the Virtual Private Network (VPC) drop-down
list.
• Under Select Cluster Management Subnet, select a subnet (from the VPC that you selected
in the previous step) from the Management Subnet drop-down list that you want to use as the
management subnet for your cluster.

Note: This subnet must be a dedicated private subnet for communication between Nutanix CVMs or
management services like Hypervisor.



Figure 32: Create a cluster - use an existing VPC
• Create New VPC. Select this option if you want to create a new VPC for this cluster and do not want to use
any of your existing VPCs.



• Under Cluster VPC, enter the CIDR size for the cluster VPC in Virtual Private Cloud (VPC)
CIDR.

Note: Ensure that you do not use 192.168.5.0/24 CIDR for the VPC being used to deploy the NC2
on AWS cluster. All Nutanix nodes use that CIDR for communication between the CVM and the
installed hypervisor.

Note: Two subnets will be created along with the VPC in the selected AZ. One private subnet without
outgoing internet access for the management network and one public subnet providing connectivity to
NC2 from the VPC.

Figure 33: Create a cluster - create a new VPC


b. Under Host Access through SSH, do one of the following:



• Use an existing Key Pair. Select an existing SSH key from the drop-down list.
• Create a New Key Pair. Type a Key name and click Generate to generate a new SSH key. You can
use this SSH key to sign in to a node in the cluster, without a password.
c. Under Access Policy, specify the following options to control access through the management and UVM
AWS security groups that are deployed when you create the cluster:

• Prism (Cluster Management Console): Select one of the following options to control access of the
public Internet to and from your Nutanix cluster:

• Public: Allow cluster access to and from the public Internet.


When you select Access Policy as Public, NC2 will automatically provision a Network Load
Balancer that enables access to Prism Element from the Internet.

Note: This Public option is only available when you choose to either import a VPC or create a new
VPC in the Network tab.
Allowing Internet access could have security ramifications. Use of a load balancer is
optional and is not a recommended configuration. For securing network traffic when using
a load balancer, you can consider using secure listeners, configuring security groups,
and authenticating users through an identity provider. For more information, see AWS
Documentation.
You can also use a Bastion server (jump box) to gain SSH access to the CVMs and AHV
hosts of Nutanix clusters running on AWS (see the SSH sketch after this step). See
Logging into a Cluster by Using SSH.

• Restricted: Restrict access to a select set of IP addresses only. In the IP addresses field,
provide a list of source IP addresses and ranges that are allowed to access Prism Element. NC2
creates the corresponding security group rules.
• Disabled: Disable cluster access to and from the public Internet. The security group attached to the
cluster hosts will not allow access to Prism Element.
• Management Services (Core Nutanix services running on this cluster). Select to allow or
restrict access to management services (access to CVMs and AHV hosts).

• Restricted: If any IP addresses require access to CVMs and AHV hosts, specify a list of such source
IP addresses and ranges. NC2 creates security group rules accordingly.
• Disabled: Disable access to management services in the security group attached to the cluster nodes.

Note: If you intend to use the Cluster Protect feature, ensure that the Cluster Management Services can be
accessed from the VPC and the Prism Central subnet. Ports 30900 and 30990 are opened while creating a new
NC2 cluster and are required for communication between AOS and Multicloud Snapshot Technology (MST)
to back up the VM and volume groups data.
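If you manage your own security groups and want to confirm that these ports are open, a minimal AWS CLI sketch follows. The filter values are the only assumptions here, and the command only reads configuration:

$ aws ec2 describe-security-groups \
    --filters Name=ip-permission.from-port,Values=30900,30990 \
    --query "SecurityGroups[].GroupId"

This lists the IDs of security groups that contain an inbound rule starting at either of these ports.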



8. During initial cluster creation, in the Cluster Protection tab, record whether you intend to protect your
cluster data, UVM data, and Prism Central configuration data against failures impacting the whole cluster,
and click Next.

• I want to protect the cluster: Select this option if you want to protect the cluster using the Cluster
Protect feature.

Note: You must register this cluster to a new or an existing Prism Central instance that runs in the same
availability zone. If you are going to use this cluster as a source or target for Disaster Recovery, then you
cannot also use the Cluster Protect feature to protect your cluster.

To protect the cluster using the Cluster Protect feature, you must perform the steps listed in Cluster Protect
Configuration.
• I will protect the cluster myself / I do not need protection: Select this option if you do not want to
use the Cluster Protect feature to protect your cluster.

Note: You can select this option if you need to use this cluster as a source or target for a Disaster Recovery
setup. Nutanix recommends enabling the automatic backup of VM and Volume Groups data.

Note: The Cluster Protect feature is available only with AOS Ultimate or NCI Ultimate license tier and needs
AOS 6.7 or higher and Prism Central 2023.3 or higher. The Cluster Protect feature is available only for new
cluster deployments. Any clusters created before AOS 6.7 cannot be protected using this feature.

Figure 34: Cluster Protect



9. In the Summary tab, do the following:

a. Review your cluster configuration.


b. To reinitiate the resource allocation check for the cluster, under Cloud Resources > AWS Cloud
Account, click Show details, and click Check quotas.
AWS Quota Check
A quota check for the cluster is automatically triggered prior to cluster creation, and the NC2 console
verifies your resource quota. Ensure that your AWS subscription has sufficient quotas (such as vCPUs).
Amazon EC2 service quotas are managed in terms of the number of virtual central processing units (vCPUs),
at a conversion ratio of 1 physical core = 2 vCPUs. The required vCPU quota must be at least enough to cover
n+1 nodes (for RF = 2 or 3), where n is the number of nodes you are deploying in your cluster. The quota for
the additional node is required in case a node needs to be replaced; during the node replacement cycle, the
cluster temporarily has n+1 nodes while the new node is added and the old node is removed.
If the quotas for the resource allocation are insufficient, you are not blocked from creating the cluster,
but the cluster will likely not form successfully. The available quotas for resource allocation change
dynamically depending on the resources used by other clusters in your organization. You can also change the
default resource allocation through the AWS console.
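You can also check the relevant quota yourself before deploying by using the AWS Service Quotas CLI. This is a minimal sketch: the quota code L-1216C47A is assumed to be the code for Running On-Demand Standard instances, so verify the code that applies to your instance family before relying on it.

$ aws service-quotas list-service-quotas --service-code ec2 \
    --query "Quotas[?contains(QuotaName, 'Standard')].[QuotaCode,QuotaName,Value]" --output table
$ aws service-quotas get-service-quota --service-code ec2 --quota-code L-1216C47A

The first command lists candidate quota codes and their current values; the second returns the current value for a single quota code.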

Figure 35: AWS Service Quota Check


c. Click Create.



10. Monitor the cluster creation progress in the Clusters page.
When the cluster creation is in progress, the status is Creating. After the cluster is created, the status changes to
Running.

Note: The Nutanix cluster is deployed in AWS in approximately 30 minutes. If there are any issues with provisioning
the Nutanix cluster, see the Notification Center on the NC2 console.

11. After the cluster is created, click the name of the cluster to view the cluster details.

What to do next
After you create a cluster in AWS, set up the network and security infrastructure in AWS and the cluster for your user
VMs. For more information, see User VM Network Management and Network Security using AWS Security
Groups.

AWS VPC Endpoints for S3


You can leverage AWS VPC endpoints to connect to AWS services privately from your VPC without going
through the public internet. When using VPC endpoints, you can optimize the network path by avoiding traffic to
internet gateways, and avoid the costs associated with NAT gateways, NAT instances, or maintaining firewalls. With
VPC endpoints, you have finer control over how users and applications access AWS services.
Nutanix recommends using the following VPC endpoints:

• Gateway endpoints: These gateways are used for connectivity to Amazon S3 without using an internet
gateway or a NAT device for your VPC. A gateway endpoint targets specific IP routes in the AWS VPC route
table. Gateway endpoints do not use AWS PrivateLink, unlike interface endpoints. There is no additional charge
for using gateway endpoints.
For more information on how to create a new gateway endpoint, see Creating a Gateway Endpoint.
You can create a new or use an existing gateway endpoint. When using an existing gateway endpoint, you only
need to modify the route tables associated with the gateway endpoint. For more information, see Associating
Route Tables With the Gateway Endpoint.
• Interface endpoints: These endpoints are used for connectivity to services over AWS PrivateLink. An interface
endpoint is a collection of one or more elastic network interfaces (ENIs) with a private IP address that serves as
an entry point for traffic destined to a supported service. Interface endpoints allow the use of security groups to
restrict access to the endpoint (see the sketch after this list).
For more information, see AWS Documentation.
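As an illustration, an interface endpoint could be created with the AWS CLI as in the following sketch; the region, VPC, subnet, and security group IDs are placeholders that you must replace with your own values:

$ aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type Interface \
    --vpc-id vpc-0123456789abcdef0 \
    --service-name com.amazonaws.us-east-1.s3 \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0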

Creating a Gateway Endpoint


A gateway endpoint can be used for connectivity to Amazon S3 without using an internet gateway or a NAT device
for your VPC. A gateway endpoint targets specific IP routes in an AWS VPC route table in the form of a prefix-list,
used for traffic destined to AWS S3.
You can configure resource policies on both the gateway endpoint and the AWS resource that the endpoint provides
access to. This enables granular access control and private network connectivity from within a VPC.
For more information about gateway endpoints, see AWS Documentation.

Note: Ensure that you create your gateway endpoint in the same AWS Region as your S3 buckets. Also, add the
gateway endpoint in the routing table of the resources that need to access S3. The outbound rules for the security group
for instances that access Amazon S3 through the gateway endpoint must allow traffic to Amazon S3.

You can add a new endpoint route to a route table and associate it with the gateway endpoint. The endpoint route is
deleted when you disassociate the route table from the gateway endpoint or when you delete the gateway endpoint.



All instances in the subnets associated with a route table associated with a gateway endpoint automatically use the
gateway endpoint to access the service. Instances in subnets that are not associated with these route tables use the
public service endpoint, not the gateway endpoint. Nutanix recommends using the gateway endpoint instead of the
public service endpoint.
The following instructions are for your quick reference; Nutanix recommends referring to AWS Documentation for
the most up-to-date instructions.
Follow these steps to create a gateway endpoint that connects to Amazon S3:

Procedure

1. Sign in to the AWS VPC console at https://console.aws.amazon.com/vpc/.

2. In the navigation pane, select Endpoints, which is a VPC-level feature.

Note: Ensure that you do not select the Endpoints services option.

3. Click Create endpoint.

4. Provide a name for the gateway endpoint.

5. Under Service category, select AWS services.

6. Under Services, search with the S3 keyword, and then select the service with the name:
com.amazonaws.<region>.s3 and type as Gateway.

7. Under VPCs, select the VPC where you want to create the endpoint.

Note: The VPC must be the same where your cluster is created. All NC2 clusters in that VPC will be able to
access the S3 endpoint. You must create a different endpoint for each VPC where an NC2 cluster is running.

8. Under Route tables, select the route tables corresponding to your NC2 cluster’s private subnet in the VPC.
This must be the route table associated with the cluster management subnet.

Note: You must add all route tables that are associated with the management subnet of all your clusters.

9. Under Policy, use the default selection as Full access.

10. Click Create endpoint.

11. After successfully creating the endpoint, verify that the route table associated with the S3 endpoint includes the
gateway endpoint in its routes.

a. Navigate to Endpoints and click the gateway endpoint you created.


b. Click the Route tables tab.
c. Click the route table name.
d. Click the Routes tab and verify the route.
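Equivalently, assuming you already know your VPC and route table IDs, the same gateway endpoint can be created with the AWS CLI. This is a sketch with placeholder IDs and region:

$ aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --service-name com.amazonaws.us-east-1.s3 \
    --vpc-endpoint-type Gateway \
    --route-table-ids rtb-0123456789abcdef0

The routes for the S3 prefix list are added automatically to the route tables you pass in.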

Associating Route Tables With the Gateway Endpoint


If you want to use an existing gateway endpoint, you must change the route tables that are associated with the
gateway endpoint.
The following instructions are for quick reference; Nutanix recommends referring to AWS Documentation for the
most up-to-date instructions.
Follow these steps to associate route tables with the gateway endpoint:



Procedure

1. Sign in to the AWS VPC console at https://console.aws.amazon.com/vpc/.

2. In the navigation pane, select Endpoints.

3. Select the gateway endpoint that you want to use for AWS S3.

4. Click Actions > Manage route tables.

5. Select the route tables corresponding to your NC2 cluster’s private subnet in the VPC (the route table
associated with the cluster management subnet), and deselect route tables that you no longer need.

Note: You must add all route tables that are associated with the management subnet of all your clusters.

6. Click Modify route tables.

7. Under Policy, make sure you have selected Full access.
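The route table association itself can also be done with the AWS CLI; in this sketch the endpoint and route table IDs are placeholders:

$ aws ec2 modify-vpc-endpoint \
    --vpc-endpoint-id vpce-0123456789abcdef0 \
    --add-route-table-ids rtb-0123456789abcdef0

Use --remove-route-table-ids in the same way to disassociate route tables.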



MICROSOFT WINDOWS ON NC2
You can run Microsoft Windows Server on the NC2 on AWS cluster with AOS 6.5.4.5 and 6.7.1.5 and pay the cost
associated with new Microsoft Windows licenses directly to AWS.

Note: Nutanix does not take responsibility for your Microsoft Windows licensing and compliance validation. You must
ensure you are in compliance with Microsoft and AWS requirements for the Microsoft licenses and associated costs.

NC2 on AWS supports Microsoft Windows Server versions that AWS supports, such as:

• Microsoft Windows Server 2022: Base, Core


• Microsoft Windows Server 2019: Base, Core
• Microsoft Windows Server 2016: Base, Core
For a complete list of supported Windows versions, see AWS documentation.

Note: NC2 only supports AOS 6.5.4.5 and 6.7.1.5 to run Microsoft Windows Server workloads with Windows license
costs payable to AWS. Also, the entire cluster is deployed either with all instances having Windows Licenses
Included or with no instances having Windows Licenses Included. When you choose to run Microsoft Windows workloads
on the NC2 on AWS cluster, AWS invoices you for the whole cluster. If you would like to switch to non-Windows
workloads, deploy a separate NC2 on AWS cluster without selecting the Microsoft Windows Licensing
option in the NC2 console. Switching between the two options is not allowed after a cluster has been deployed.

You can check if you have recorded your intent to pay AWS for the Microsoft Windows Server license costs on an
NC2 cluster from the cluster’s Summary page. You need to perform additional steps to install Microsoft Windows
Server and then activate a new license for your Microsoft Windows Server VMs. For more information, see Viewing
Licensing Details.

Requirements to Run Microsoft Windows Server


If you want to run Microsoft Windows workloads on an NC2 on AWS cluster, you must record your intent to run
Microsoft Windows Server and agree to pay the Microsoft Windows Server license costs directly to AWS while
creating an NC2 on AWS cluster with AOS 6.5.4.5 or 6.7.1.5.
For more information on how to deploy the cluster with Windows License Included EC2 instances, see Creating a
Cluster.
You also need to perform these steps:
1. Bring your own Microsoft Windows binary that is in an AHV-compatible format. Nutanix supports the RAW,
VHD(X), VMDK, VDI, ISO, and QCOW2 disk formats. For more information, see AHV Administration Guide.
2. Manually install the Windows binary on the NC2 on AWS cluster.



3. Manually license all Windows VMs on the NC2 on AWS cluster.

Note: You must perform this step again on the Windows VM after you migrate it back to the NC2 on AWS cluster
in the disaster recovery scenario.

Follow these steps on the Windows VM you installed:

• Run the following command as an administrator to set your Windows KMS machine IP address.
slmgr.vbs /skms 169.254.169.250:1688

• Set your Windows KMS setup key.


First, identify the correct Microsoft KMS client setup key (KMSSetupKey) for your operating system version.
For more information, see Key Management Services (KMS) client activation and product keys.
Then, run this command as an administrator:
slmgr.vbs /ipk <KMSSetupKey>

• Run the following command as an administrator to activate Windows.


slmgr.vbs /ato
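Putting the three steps together, a typical activation session from an elevated Command Prompt looks like the following sketch. The setup key shown is a placeholder; slmgr.vbs /dlv is an optional standard command that displays detailed license status so you can confirm activation.

slmgr.vbs /skms 169.254.169.250:1688
slmgr.vbs /ipk <KMSSetupKey>
slmgr.vbs /ato
slmgr.vbs /dlv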

AWS Pricing for Microsoft Windows Server on NC2


AWS identifies when Microsoft Windows workloads are running on the NC2 on AWS cluster and invoices you for
the Windows license cost for the whole NC2 on AWS cluster. You will pay the combined cost of EC2 instances and
Microsoft Windows licenses directly to AWS. For more information on AWS pricing for Windows License Included
instances, see Microsoft Licensing on AWS.
You have several options for paying for Nutanix software. For more information on paying for NC2 software, see
NC2 Payment Methods.
To understand the AWS pricing for running Microsoft Windows Server on an NC2 on AWS cluster:

Note: These instructions are indicative only; Nutanix recommends reviewing AWS documentation for up-to-date
information on AWS pricing.

1. Browse the AWS Marketplace.


2. From Categories in the left pane, select Amazon Machine Image as Delivery methods, Amazon Web
Services as Publisher, and the required Windows Server version from Operating system > All Windows.
3. Select the Base or Core version of the selected Windows Server. You are redirected to the AWS marketplace page
for the selected Windows Server.
4. To estimate the pricing, click the Pricing tab and then select the Region and Fulfilment Option.
5. Select the EC2 instance type from the Usage tab. The EC2 instance you select must be from the EC2 instances
that NC2 on AWS supports.
6. Check the pricing details under Infrastructure Pricing Details > Estimated Infrastructure Cost.

Note: The estimated cost displayed is for each node in your NC2 on AWS cluster. The total cost is a multiple of
the number of nodes in your NC2 on AWS cluster.

Note: If you do not want to run Microsoft Windows workloads on an NC2 on AWS cluster, you must not record your
intention to run Microsoft Windows Server while creating an NC2 on AWS cluster.



PRISM CENTRAL CONFIGURATION
This section describes how to:

• Deploy and configure Prism Central.


• Log into a cluster by using Prism Element Web console.
• Log into a cluster by using SSH.

Deploying and Configuring Prism Central


After the cluster is successfully deployed, deploy a new Prism Central instance or register the cluster to an
existing Prism Central instance. Ensure that you have created the Prism Central subnet and management
subnet. The Prism Central subnet must be different from the UVM subnet.
For more information about installing a new Prism Central instance, see Installing a new Prism Central.

Note: While deploying Prism Central, you need to specify the CIDR of the subnet created for your NC2 cluster. You
can find this CIDR from your AWS console listed under IP Address Management > Network Prefix Length.

For more information about registering your cluster with Prism Central, see Registering Cluster with Prism
Central.
After you deploy Prism Central, perform the following additional networking and security configurations:

Procedure

1. Configure the name servers that provide responses to queries against a directory service, such as DNS
servers. For more information, see Configuring Name Servers for Prism Central.

Note: Ensure that the name server IP address is the same as the one you entered during the deployment of Prism
Central.

2. Configure the NTP servers to synchronize the system clock. For more information, see Configuring NTP
Servers for Prism Central.
You can use:

• 0.pool.ntp.org
• 1.pool.ntp.org
• 2.pool.ntp.org
• 3.pool.ntp.org

3. Add an authentication directory. For more information, see Adding An Authentication Directory (Prism
Central).

4. Configure role permissions. For more information, see Assigning Role Permissions.

5. Configure SSL certificate management. For more information, see Importing an SSL Certificate.

6. Deploy a load balancer to allow Internet access. For more information, see Deploying a Load Balancer to Allow
Internet Access.

7. Create and associate AWS security groups with an EC2 instance to control outbound and inbound traffic. For
more information, see Controlling Inbound and Outbound Traffic Using Security Groups.



8. Enable the inbound access to the Prism Central UI to configure the Site-to-Site VPN setup. For more information,
see Prism Central UI Access for Site-to-Site VPN Setup.

9. Register Prism Central with the Prism Element cluster. For more information, see Registering or Unregistering
Cluster with Prism Central.

What to do next
For more information about how to sign into the Prism Element web console, see Logging into a Cluster by Using
the Prism Element Web Console.
For more information about how to sign into the Prism Central web console, see Logging Into Prism Central.

Logging into a Cluster by Using the Prism Element Web Console


Use the Prism Element web console to sign in to your Nutanix cluster running in AWS.
Perform the following to sign in to your Nutanix cluster.

Procedure

1. Sign in to NC2 from the My Nutanix dashboard.

2. Go to the Clusters page and click the name of the cluster.



3. On the Summary page, click Launch Prism Element.

Figure 36: NC2 Prism Element Access

The Prism Element sign in page opens in a new tab.

4. Use the following default credentials to sign in.

• Username: admin
• Password: Nutanix/4u
You are prompted to change the default password when you log on for the first time.
For more information, see Logging Into the Web Console.

What to do next
After you create a cluster in AWS, set up the network and security infrastructure in AWS and the cluster for your user
VMs. For more information, see User VM Network Management and Network Security using AWS Security
Groups.



Logging into a Cluster by Using SSH
You can also use SSH to sign in to a node and CVMs in your cluster. Sign in to your cluster by using the
SSH key pair you created or selected when you created the cluster.

Before you begin


If you have not configured VPN or Direct Connect to access your clusters, you can configure a Linux bastion host to
gain SSH access to the CVMs and AHV hosts of Nutanix clusters running on AWS.
For instructions about how to configure a Linux bastion host, see the AWS documentation. The URL varies
depending on the region you are in.

Note: When you configure a Linux bastion host, ensure that you do the following:

• Open the EC2 console in the same region as the Nutanix cluster.
• When you are configuring an instance, ensure that you do the following:

• Under Network, change the default VPC to the same VPC being used by the Nutanix cluster
running on AWS.
• Under Subnet, select the subnet containing Nutanix Cluster xxxxxxxxx Public.
• Enable the Auto-assign Public IP option.
• You must restrict access to Management services (access to CVMs and AHV hosts) while configuring
the cluster. To do this, launch the NC2 console, click on the ellipsis for the cluster, and then click
Update Configuration. Select the Access Policy tab, and then select Restricted under
Management Services (Core Nutanix services running on this cluster).

About this task


Sign in to a host in the cluster with the root user, and then sign in to a CVM with the nutanix user.
Perform the following to sign in to your cluster with SSH.

Procedure

1. Sign in to NC2 from the My Nutanix dashboard.

2. Go to the Clusters page, click the name of the cluster, and then select the Hosts pane.
The details such as the name of the host, IP address of the host, IP address of the CVM, host type, and state of the
host are displayed.

3. Open a terminal session.



4. Sign in to a host in the cluster by using the private key. Use the private key you created or selected when you
created the cluster.

Note: You can either upload the key.pem file from your local machine to the host by using secure copy (scp),
or create a new key.pem file on the host with the contents of your local key.pem (for example, via vim
key.pem). Then run the chmod 400 key.pem command.
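For example, assuming an Amazon Linux bastion host (default user name ec2-user) with a reachable public IP, staging the key might look like the following sketch; the key file names and the address are placeholders:

$ scp -i bastion-key.pem key.pem ec2-user@<bastion-public-ip>:~/key.pem
$ ssh -i bastion-key.pem ec2-user@<bastion-public-ip> "chmod 400 key.pem"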

$ ssh -i private-key-file root@node-ip-address


Replace the variables in the command with their appropriate values as follows:

• Replace private-key-file with the file name of the private key.


• Replace node-ip-address with the IP address of the node you want to sign in to (displayed in the Hosts pane in step 2).

5. Sign in to a CVM in the cluster.


user@host$ ssh nutanix@ip-address-of-the-cvm
Replace ip-address-of-the-cvm with the IP address of the CVM (displayed in the Hosts pane in step 2).

6. Type the password of the Nutanix user.


The default password of the Nutanix user is nutanix/4u. You are prompted to change the default password if you
are logging on for the first time.
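Putting steps 4 and 5 together, a complete session might look like the following sketch; both IP addresses are placeholders taken from the Hosts pane:

$ ssh -i key.pem root@10.0.128.15
root@host$ ssh nutanix@10.0.128.31

You are then prompted for the nutanix user's password, as described in step 6.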

What to do next
After you create a cluster in AWS, set up the network and security infrastructure in AWS and the cluster for your user
VMs. For more information, see User VM Network Management and Network Security using AWS Security
Groups.



NC2 PAYMENT METHODS
Nutanix offers a simplified licensing experience in purchasing and consuming NC2. In addition to the legacy
licensing model for AOS, Nutanix has introduced the Nutanix Cloud Platform packages that support the new and
consolidated Nutanix product portfolio. The packages are:

• Nutanix Cloud Infrastructure (NCI)


• Nutanix Cloud Manager (NCM)
• Nutanix Cloud Platform (NCP, bundle of NCI and NCM)
• Nutanix Unified Storage (NUS)
• Nutanix Database Service (NDB)
• Nutanix End User Computing (EUC)
You can use the NCI, AOS, VDI, or EUC licensing options and any associated add-ons. When you select the AOS
licensing option, you can continue using the cluster with AOS option or switch to the NCI licensing. When you select
the VDI licensing option, you can continue using the cluster with the VDI option or switch to the EUC licensing.
You must deploy Prism Central and configure your NC2 cluster with that Prism Central in order to use NCI licenses.

Note: You cannot switch back from NCI licensing to AOS licensing. You cannot switch back from EUC licensing to
VDI licensing.

Nutanix also provides flexible subscription options that help you select a suitable subscription type and payment
method for NC2.
You can use the legacy portfolio licenses and pay using the Pay As You Go (PAYG) subscription plan for overages
above the legacy license capacity used.
For more information on the pricing that is used to charge for overages above legacy AOS license capacity, see NC2
pricing options.
For the new NCI licensing, NC2 does not charge for overages above the NCI license capacity used. For more details
on the new NCI licenses, see Nutanix Cloud Platform Software Options.
You can choose to be invoiced either directly by Nutanix or through your cloud marketplace account.
NC2 supports Advanced Replication and Security add-ons for NCI Pro and Nutanix Unified Storage (NUS) Pro, and
you have to manually apply these licenses to Prism Central managing your NC2 cluster. NC2 supports Advanced
Replication, Data-at-Rest Encryption, and Files add-ons for AOS (legacy) Pro, and you have to reserve capacity from
these licenses, after which they are automatically picked up and applied to your NC2 cluster.
The following table lists the combination of license types based on the software configuration and the subscription
plan available for these license types.

Table 10: Summary of license types

Cluster Type: General Purpose
Available License Type: NCI
Available Software Tiers:
• NCI Pro + Advanced Replication + Data-at-Rest Encryption + NUS
• NCI Ultimate + Advanced Replication + Data-at-Rest Encryption + NUS

Cluster Type: General Purpose
Available License Type: AOS
Available Software Tiers:
• AOS Pro + Advanced Replication + Data-at-Rest Encryption + Files
• AOS Ultimate + Files

Note: Advanced Replication and Data-at-Rest Encryption add-ons are included in the AOS Ultimate license.

Cluster Type: VDI
Available License Type: EUC
Available Software Tier: EUC Ultimate + NUS

Cluster Type: VDI
Available License Type: VDI
Available Software Tier: VDI Ultimate + Files

Note: Advanced Replication and Data-at-Rest Encryption add-ons are included in the VDI Ultimate license.

Nutanix Licenses for NC2


Your Nutanix licenses are given priority when covering your NC2 usage. You can use the Pay As You Go (PAYG)
subscription plan to pay for overages above your legacy license capacity. There is currently no charge for overages
above the NCI license capacity used for the new NCI licensing.
For more information on how to select AOS or NCI license during cluster creation, see Creating a Cluster.
For more information on how to switch an already running cluster with AOS legacy licensing to NCI licensing, see
Applying NCI, EUC, and NUS Licenses.

New Portfolio Licenses


When creating a cluster, you can select your general purpose type NC2 cluster to use either NCI Pro or NCI Ultimate
licenses. Or, you can select your VDI type NC2 cluster to use the End User Computing (EUC) Ultimate license. The
rest of the new portfolio, such as Nutanix Cloud Manager (NCM), Nutanix Unified Storage (NUS), and Nutanix
Database Service (NDB), is also supported with NC2. Steps to apply licenses are similar for all products and services
in the new portfolio - manually apply licenses to Prism Central. For more information, see Applying and Managing
Cloud Platform Licenses.
For the new NCI licensing, there is currently no charge for overages above the NCI license capacity used.
Unlike the legacy portfolio licenses, you do not reserve license capacity for the new portfolio licenses.
In addition to using the Nutanix licenses, you also need to subscribe to NC2. For more information, see NC2
Subscription Workflow.

Note: Your NC2 cluster is enabled with AOS, NCI, VDI, or EUC licenses during the free trial. You can switch from
AOS to NCI licenses at any time; however, you cannot switch from NCI to AOS licenses. You can switch from VDI to
EUC licenses at any time; however, you cannot switch from EUC to VDI licenses.
You must deploy Prism Central and configure your NC2 cluster with that Prism Central in order to use NCI
licenses.

For more information on how to switch an already running cluster with AOS legacy licensing to NCI licensing, see
Applying NCI, EUC, and NUS Licenses.
Once you have configured Prism Central with the cluster, you can manually apply the NCI licenses to that Prism
Central to cover the cloud cluster usage.



When switching a cloud cluster from one Prism Central to another, you must manually re-license the new Prism
Central with the NCI license you want to use.

Note: You can use the same Prism Central with both AOS and NCI-licensed clusters.

Applying cloud platform licenses, excluding NUS, requires that the cluster is running the minimum versions of the
following software:

• AOS 6.0.1.7
• Nutanix Cluster Check (NCC) 4.3.0
• Prism Central pc.2021.9
Applying NUS licenses requires that the cluster is running the minimum versions of the following software:

• AOS 6.1.1
• NCC 4.5.0
• pc.2022.4

Applying NCI, EUC, and NUS Licenses


You must manually apply the NCI, NUS, and EUC licenses. Perform the following steps to use these
licenses:

Procedure

1. After the cluster is successfully deployed, register the cluster to a Prism Central instance.

Note: You can register this cluster to an existing Prism Central instance or deploy a new Prism Central on this
cluster.

For more information, see Registering Cluster with Prism Central and Installing a new Prism Central.

2. If you are using a free trial for NC2, you can select NCI, AOS, VDI, or EUC as the option during the free trial
period.
You can switch from the AOS to the NCI licensing option or from the VDI licensing to the EUC licensing at any
time. Make sure you follow the appropriate licensing instructions for legacy licenses or new portfolio licenses.



3. If you have a running cluster with AOS legacy licensing, then while using the new portfolio licenses, you must
switch the license type to NCI before manually applying the new portfolio licenses in Prism Central.

Note: You must perform this step with every NC2 cluster that uses the new portfolio licenses, for both general
purpose and VDI clusters.

Perform the following steps to change the license type from AOS to NCI:

a. Sign in to the NC2 console: https://cloud.nutanix.com


b. In the Clusters page, click the name of the cluster for which you want to update the license type.
c. On the Settings page, click the Cluster Configuration tab.

Figure 37: Switch to NCI License


d. Your current selection of AOS license type is displayed. Click Switch to NCI.



Figure 38: Switch to NCI Manual Steps
e. Click Switch to NCI Licensing to confirm the switch of license type to NCI.
Ensure that you want to switch to NCI, as you cannot switch back to AOS after switching.

4. If you already have the following licenses that you are ready to use, you can manually apply these licenses by
following the procedures described in Applying and Managing Cloud Platform Licenses.

• Nutanix Cloud Infrastructure (NCI)


• Nutanix Cloud Manager (NCM)
• Nutanix Cloud Platform (NCP, bundle of NCI and NCM)
• Nutanix Unified Storage (NUS)
• Nutanix Database Service (NDB)
• Nutanix End User Computing (EUC)
If you do not have these licenses, you can also convert your legacy AOS licenses to the new NCI licenses. For
more information, see Converting to Cloud Platform Licenses.

Legacy Portfolio Licenses


While Nutanix is transitioning from our legacy portfolio packaging, you can still use the legacy portfolio licenses
for your NC2 clusters. Overages above the license capacity used can be paid using a subscription plan and will be
invoiced directly by Nutanix. For more information on the pricing that will be used to charge for overages above
legacy AOS license capacity, see NC2 Pricing.
In addition to using the Nutanix licenses, you also need to subscribe to NC2. For more information, see NC2
Subscription Workflow.



Under the legacy portfolio licenses, you can reserve AOS Pro, AOS Ultimate, Files, and VDI Ultimate licenses for
NC2 on AWS. These licenses are automatically applied to the cloud clusters to cover their configuration and usage.
The rest of the legacy portfolio licenses can be manually applied to an NC2 cluster.

Reserving License Capacity

Note: License reservation is required for AOS (legacy) licenses and the associated Advanced Replication and Data-at-
Rest Encryption add-ons. License reservation is not required for NCI licenses and the associated Advanced Replication
and Data-at-Rest Encryption add-ons, as you need to manually apply the NCI licenses.
You do not need to delete the license reservation when terminating an NC2 cluster if you intend to use the
same license reservation quantity for a cluster you might create in the future.

To reserve licenses for NC2, do the following:

Procedure

1. Sign in to the Nutanix Support portal at https://portal.nutanix.com and then click the Licenses link on the
portal home page. You are redirected to the Licensing portal.

2. Under Licenses on the left pane, click Active Licenses and then click the Available tab on the All Active
Licenses page.

Figure 39: Active Licenses Page

3. Select the licenses that you want to reserve for NC2 and then select Update reservation for Nutanix Cloud
Clusters (NC2) from the Actions list.

Note: This option becomes available only after you select at least one license for reservation.



4. On the Manage Reservation for Nutanix Cloud Clusters (NC2) page, click the hamburger icon available
in the row of the license you want to reserve, and then click Edit.

Figure 40: Manage License Reservation

5. Enter the number of licenses that you want to reserve in the Reserved for AWS and Reserved for Azure
columns for the license. The available licenses appear in the Total Available to Reserve column.

6. Click Save to save the license reservations.

Reclaiming a CBL License


Your reserved licenses can only be used on an NC2 cluster provided the license reservation is active. When you
terminate or hibernate a cloud cluster, the license capacity that was being used by that cloud cluster is returned to the
reserved licenses pool and can be used for any other NC2 cluster.
If you want to use the reserved license for an on-prem cluster, ensure that you update the capacity that was reserved
for cloud clusters to zero so that it can be used by on-prem clusters.
To reclaim the NC2 license and use the license for an on-prem cluster, perform the following steps:

Procedure

1. Terminate your cluster from the NC2 console. For more information, see Terminating a Cluster.

2. Update the license reservation for the NC2 cluster under Reserved for AWS or Reserved for Azure columns
as 0 on the Licensing portal. For more information, see Modifying License Reservations.

3. Your license capacity is now available for use with any other Nutanix cluster, including on-prem clusters.

Managing Licenses
Follow these steps to manage licenses, change the license type, or add add-on products to your running
NC2 cluster.

Procedure

1. Sign in to the NC2 console: https://cloud.nutanix.com

2. In the Clusters page, click the cluster name for which you want to update the add-on product selection.



3. On the Settings page, click the Cluster Configuration tab.

Figure 41: Manage add-on products

4. Under Software Configuration, you can change your license tier from Pro to Ultimate or vice versa from the
Software Tier list.

5. Under Add-on Products, based on the cluster type (General Purpose or VDI cluster) and the license tier, the
available add-on products are displayed. Select or remove the add-on product based on your requirements.

6. Click Save.

Subscription Plan for NC2


You can choose to pay for your NC2 usage either directly to Nutanix or through your cloud marketplace. Nutanix
licenses are given priority to cover your NC2 usage. Any overage beyond license capacity applied on the cluster is
billed to your chosen subscription plan - Nutanix Direct or Cloud marketplace.

Table 11: NC2 Subscription Plan

Nutanix Direct subscription for NC2 on AWS

Subscription Plan: Pay As You Go
Description: You are billed every month for the NC2 software usage of that month. There is no term commitment in this plan.
Payment Method: When you choose to pay for your NC2 usage directly to Nutanix, you can use one of the following payment methods:

• Credit Card – Allows you to purchase a payment plan using your credit card details.
• ACH Bank Transfer – Allows you to pay using your ACH bank transfer details. The ACH payment method is available only if the bill-to address of your organization is in the United States of America. Nutanix enables the ACH bank transfer payment method either after at least one positive credit card transaction or if you make a request to use this payment method through your Nutanix sales representative.
• Invoice Me – Direct invoicing by Nutanix at the end of every billing cycle. If you prefer to be invoiced by Nutanix instead of using your credit card or bank transfer, ask your Nutanix account manager to enable the Invoice Me option in your NC2 account.

Cloud Marketplace subscription for NC2 on AWS

Subscription Plan: Nutanix licenses and overages
Description: You can work with your Nutanix Account Manager and Nutanix reseller to get a discounted private offer to pay for Nutanix licenses for your NC2 cluster. By subscribing to NC2 through AWS Marketplace, the upfront cost of your licenses and future overages is billed to your cloud account.
Payment Method: The full $ value of your Nutanix software licenses goes towards meeting your AWS spend commitments that may be part of your AWS Enterprise Discount Program (EDP). You need to pay the total cost of Nutanix software for the entire duration of any multi-year license contract in a single upfront payment. The cost of future overages is billed to your cloud account.

Note: For more information on pricing, see https://www.nutanix.com/products/nutanix-cloud-clusters/pricing.

NC2 Subscription Workflow


You must subscribe to NC2 to continue NC2 usage after the trial period ends. You can also subscribe to NC2 anytime
during your free trial period.

Note: For the workspace you want to use to create an NC2 subscription, you must have the Account Admin role. The
default workspace that was created when you created a My Nutanix account has the Account Admin role. If you are
invited to a workspace, then you must get the Account Admin role so that you can subscribe to NC2 and access the
Admin Center and Billing Center.



You must subscribe to an NC2 subscription plan (Nutanix Direct or Cloud Marketplace) to cover your NC2 usage.
Any licenses applied to your NC2 cluster will be given priority to cover NC2 usage, and the remaining overages will
be billed to that subscription plan.

Note: You can only reserve your legacy portfolio licenses. You must not reserve the new portfolio licenses, such as
NCI and EUC licenses. You need to apply these licenses to an NC2 cluster manually.

To learn more about how to reserve the legacy portfolio licenses, see Reserving License Capacity.
To learn more about how to manually apply new portfolio licenses, see Applying NCI, EUC, and NUS Licenses.
You can subscribe to NC2 from the My Nutanix dashboard > Administration > Billing Center > Launch. In the
Billing Center, under Nutanix Cloud Clusters, click Subscribe Now.
At the beginning of the subscription steps, you get the following options to cover your NC2 usage:

• Use your reserved license capacity: You can reserve your legacy portfolio licenses, such as AOS Pro, AOS
Ultimate, VDI Ultimate license, and associated add-ons for NC2 usage. These licenses are automatically applied
to the cloud clusters to cover their configuration and usage.
You still need to select a subscription plan to cover any overage above your reserved license capacity. You have a
choice of paying directly to Nutanix or using your cloud marketplace account to pay for NC2 software usage.

Note: Ensure that you have reserved enough license capacity for NC2 if you plan to use Nutanix licenses for NC2
usage.

• Use your subscription plan: You can use your paid subscription plan and pay directly to Nutanix or use your
cloud marketplace account.
Based on your preferences, you can use the following subscription workflows to pay for your NC2 software usage,
such as any overage above your reserved license capacity or invoices for your subscription plan.

• Nutanix Direct Subscription: Pay for your NC2 software usage directly to Nutanix.
For more information, see Nutanix Direct.
• Cloud Marketplace Subscription: Pay for your NC2 software usage through your cloud marketplace account.
For more information, see AWS Marketplace.

Nutanix Direct
Perform the following procedure to pay for NC2 on AWS and NC2 on Azure consumption with a Nutanix Direct
subscription plan:

Procedure

1. Sign in to https://my.nutanix.com using your My Nutanix credentials.



2. Select the correct workspace from the Workspace dropdown list on the My Nutanix dashboard. For more
information on workspaces, see Workspace Management.

Figure 42: Selecting a Workspace

3. Perform one of the following:

• On the My Nutanix dashboard, scroll down to Administration > Billing Center and click Launch. In the
Billing Center, under Nutanix Cloud Clusters, click Subscribe Now.
• On the NC2 console, click the Nutanix billing center link in the banner displayed on the top of the NC2
console.
You are directed to the Nutanix Billing Center.



4. At the beginning of your subscription steps, the Would you like to use your existing Nutanix Licenses
for your NC2 usage? option is presented.

Figure 43: Payment Plan - Pay directly to Nutanix

• Select Yes, I would like to use Nutanix Licenses to cover NC2 usage if you want to use Nutanix
licenses for NC2. You must reserve the legacy license capacity from the Nutanix license portal or manually
apply new portfolio licenses to your NC2 cluster.
If you select this option, the licenses reserved or applied are used to cover the NC2 usage first, and any
overage is charged to the subscription plan you select in the next step.
• Select No, I don’t want to use my licenses. Invoice all NC2 usage to my subscription plan
option if you do not want to use any licenses for NC2. All NC2 usage will be charged to the subscription
plan that you select in the next step.

5. Next, the How would you like to pay for overage above any reserved license capacity? option is
presented.

• Pay directly to Nutanix: The NC2 software usage on all supported clouds (AWS and Azure) is paid to a
single subscription plan.
• Pay via Cloud Marketplace: The cloud marketplace subscription option is only available for NC2 on
Azure.
Select Pay directly to Nutanix and then click Next.



6. Click Next.

Figure 44: Payment Plan - Reserve Existing Licenses

Legacy License Portfolio: You can click Reserve existing licenses on the Support Portal to reserve
licenses for the NC2 usage. To learn more about how to reserve the legacy portfolio licenses, see Reserving
License Capacity.
New Portfolio Licenses: To learn more about how to manually apply new portfolio licenses, see Applying
NCI, EUC, and NUS Licenses.



7. On the next screen, the payment plan is presented to you based on the choices made in the previous step.

Figure 45: Pay directly to Nutanix

Select Pay As You Go (For NC2 on AWS and Azure) payment plan for your Nutanix cluster. With this
plan, you are billed at the end of each month for the NC2 usage for that month without any term commitments.
Click Next.

8. On the Company Details page, type the details about your organization and then click Next.
Nutanix Cloud Services considers the address that you provide in the Address 1 and Address 2 fields as the
Bill To Address and uses this location to determine your applicable taxes.
If the address where you consume the Nutanix services is different than your Bill To Address, under the
Sold to Address section, clear the Same information as provided above checkbox and then provide the
address of the location where you use the Cloud services. However, only the Bill To Address is considered to
determine your applicable taxes.

9. On the Payment Method page, select one of the following payment methods, and then click Next.

• Credit Card: Enter your credit card details.


• ACH Bank Transfer: Enter your Automated Clearing House (ACH) bank transfer details. You must
discuss with your account team if you prefer to use the ACH Bank Transfer option. The ACH payment
method is available only if the bill-to address of your organization is in the United States of America, and
you must at least have made one positive payment from your account for the same or any other service.
• Invoice Me: Direct invoicing by Nutanix at the end of every billing cycle. You must ask your account
manager to enable this option in your NC2 account if you prefer to be invoiced by Nutanix instead of using a
credit card or bank transfer.



10. On the Review & Confirm page, review all the details, and click Edit next to a section if you want to change
any of the details.

11. (Optional) If you have received a promotional code from Nutanix, type the code in the Promo code field and
click Apply.

12. Click Accept & Place Order.


A message confirming the success of your subscription displays. You also receive a confirmation email. You
can now begin using NC2.

What to do next
You can now begin using NC2.
You can do one of the following:

• Proceed to NC2: Start using the NC2 service.


• Go to Billing Center: Proceed to the Billing page of your cloud services account.
• Go to Admin Center: Proceed to the administration page of your cloud services account.

AWS Marketplace
Nutanix provides a convenient and cost-beneficial way to pay for NC2 through AWS Marketplace. You can work
with your Nutanix Account Manager and Nutanix reseller to get a discounted private offer for Nutanix licenses or
subscription plan and pay for the following new portfolio licenses included in your discounted private offer through
AWS Marketplace:

• Nutanix Cloud Infrastructure (NCI) Pro, Ultimate


• End User Computing (EUC) Ultimate
• Nutanix Cloud Manager (NCM) Starter, Pro, Ultimate
• Nutanix Cloud Platform (NCP) Starter, Pro, Ultimate
• Nutanix Unified Storage (NUS) Pro
• Nutanix Database Service (NDB) Platform
For more information on AWS Marketplace Private Offers, see AWS Documentation.
When you subscribe to NC2 using a private offer sent to you through AWS Marketplace, you will be invoiced by
AWS. The full $ value of your Nutanix software licenses goes towards meeting your AWS spend commitments that
might be part of your AWS Enterprise Discount Program (EDP). You need to pay the total cost of Nutanix software
for the entire duration of any multi-year license contract in a single upfront payment.

Note: Any overages above the license capacity purchased through AWS Marketplace will also be billed through AWS
Marketplace, and the same discounted rate used for the initial license purchase through AWS Marketplace will be used
to calculate the billable amount for overages. The overages will be billed and invoiced monthly by AWS.
You must manually apply new portfolio licenses to Prism Central to manage your NC2 clusters. For more
information, see Applying NCI, EUC, and NUS Licenses.

Perform the following steps to subscribe to NC2 from the AWS marketplace:



Procedure

1. Contact your Nutanix Account Manager with your NC2 sizing requirements, such as the number of licenses
required and the term for usage.
Your Nutanix Account Manager works with a Nutanix reseller, if applicable, to create customized pricing and
convert that into a private offer in AWS Marketplace. Once the offer is ready for you to accept through AWS
Marketplace, you will receive an email from the Nutanix reseller with the private offer details, including the
pricing that is specific to you.

Note: You need to provide your AWS billing account details to the Nutanix Account Manager. You can find
your billing account ID in the AWS Management Console.

2. Sign in to the AWS Marketplace console and click the Private Offer URL in the email you receive from the
Nutanix reseller.
Alternatively, in the AWS Marketplace console, navigate to the Private offers page > Available offers >
select the Offer ID for the offer of interest, and click View offer.

Figure 46: Available private offers

You are redirected to the Nutanix Cloud Clusters (NC2) listing page, where you need to configure your
software contract.



3. Under Offer selection, select the private offer to view its terms and pricing information.

Figure 47: Configure your software contract

4. Under How long do you want your contract to run?, review the tenure of your contract.

5. Under Dates, review the Service start date, Service end date, and Offer expiration date. You must
accept the offer before the offer expiration date.



6. Under Contract Options, review the number of units you want to purchase for the required Nutanix licenses.

Note: Units are predefined according to the requirements of the offer.

Figure 48: Contract options

7. Under Additional usage fees, review the pay-as-you-go monthly charges for additional usage.
You will be charged this rate for any NC2 usage on AWS above the license capacity you purchase.

8. Review the Total Contract Price.


The Total Contract Price changes based on the number of units under Contract Options.

9. Click Create contract.


You are redirected to the payment screen.



10. Click Pay now to make the payment for your contract price.

Figure 49: Pay for your contract price

11. After successful payment, click Set up your account to set up your billing subscription with NC2.

Figure 50: Set up your Nutanix billing subscription

12. You are redirected to the Nutanix Billing Center to complete your NC2 Billing configuration.



13. When you are redirected to My Nutanix Billing Center, sign in with your My Nutanix account credentials.

Note: If you do not already have an existing My Nutanix account, you must sign up for a new My Nutanix
account and verify the email address used to sign up for My Nutanix. After verifying your email address, you will
be automatically redirected to My Nutanix Billing Center. For more information, see Creating My Nutanix
Account.

14. Select the correct workspace from the Workspace list on the My Nutanix dashboard.
The workspace must be the same workspace you used when creating NC2 clusters. For more information on
workspaces, see Workspace Management.

15. Click Add Addresses to add your billing address and the address where the NC2 subscription will be used.

Figure 51: Billing and service address



16. On the Add Address page, type the details about your organization and then click Save.
The address you provide in the Address 1 and Address 2 fields is considered the Bill To Address.
If the address where you consume NC2 is different from your Bill To Address, under the Address where
service will be provided section, clear the Same information as provided above checkbox and then
provide the address of the location where you use NC2.

Figure 52: Add addresses

17. Click Accept and Continue to NC2.


You are redirected to the NC2 console: https://cloud.nutanix.com

18. Sign in with your My Nutanix credentials.

Changing Payment Method


You can update your existing payment plan at any time. Changes to the payment plan can take effect either
immediately or at the end of the current billing schedule, depending on your existing plan.
The following options are available if you want to change your payment plans:



• Change the selection to use reserved licenses:

• Add your reserved Nutanix licenses for NC2 usage.


• Remove the selection for using the reserved licenses and just use the subscription plan.
Perform the following steps to change the existing payment plan:
1. Sign in to https://my.nutanix.com using your My Nutanix credentials.
2. Select the correct workspace from the Workspace dropdown list on the My Nutanix dashboard. For more
information on workspaces, see Workspace Management.

Figure 53: Selecting a Workspace


3. On the My Nutanix dashboard, scroll down to Administration > Billing Center and click Launch.
4. Under Subscriptions, next to Nutanix Cloud Clusters, click View plan details.

Figure 54: View plan details



5. Under NC2, click Change under the Subscription Plan section.

Figure 55: Change subscription plan


6. Click Change under Reserved Licenses to make the necessary changes to the use of reserved Nutanix
licenses or your subscription plan.

Figure 56: Change reserved licenses capacity


7. Change the subscription plan based on your requirements.
8. Click Activate & Place Order to save the changes in the subscription plan selection.

Canceling the Subscription Plan


When you cancel your Pay As You Go plan, your plan is deactivated at the end of the current billing schedule. You
can revoke the cancellation of your plan at most two times before the plan is deactivated. You cannot revoke the
cancellation after the plan has been deactivated. Nutanix bills you for the usage of the NC2 service from the time you
cancel the plan until the end of the current billing schedule.
Perform the following procedure to cancel your subscription plan:

Procedure

1. Sign in to your My Nutanix account.

2. Select the correct workspace from the Workspace dropdown list on the My Nutanix dashboard. For more
information on workspaces, see Workspace Management.

Figure 57: Selecting a Workspace

3. On the My Nutanix dashboard, go to Administration > Billing Center and click Launch.

4. Under Subscriptions, in Nutanix Cloud Clusters, click View plan details.

Figure 58: View plan details



5. Under NC2, next to the Subscription Plan, click Cancel.

Figure 59: Cancel a Subscription Plan

6. In the Cancel Plan dialog, click Yes, Cancel to cancel the subscription plan or click Nevermind to close the
Cancel Plan dialog.

7. In the Share Your Feedback dialog, you can specify your reasons to cancel the plan, and click Send.

What to do next
Your plan is deactivated at the end of the current billing schedule. The Cancel Plan dialog displays the date on
which your plan is scheduled to be deactivated.

Note: You can revoke the cancellation of your plan at most two times before the plan is deactivated.

Figure 60: NC2 - Revoke Cancellation



Billing Management
The Billing Summary page allows you to do the following:

• Update your payment method and company information.


• Change the primary and secondary billing contacts.
Both primary and secondary billing contacts receive all billing and payment-related communications, such as
invoices, subscription updates, and reminders.

Note: Only the primary billing contact can modify any billing or subscription details.

• Upload tax documents.


• Apply promotional codes.
• Download invoices.
• Manage the subscription of NC2, which includes the following:

• If you have applied the Nutanix software licenses, you can change the licenses allocated to NC2.
• View details about the unbilled amount for the current month.
• View details of usage, such as rate, quantity, and the amount charged for each entity (CPU hours, public IP
address hours, disk size, and memory hours) for each cluster.
For more information on how to manage billing, see Nutanix Cloud Services Administration Guide.

Viewing Billing and Usage Details


The Subscriptions tab on the Billing Center displays information about the estimated total spend and total usage
for your current and the last two billing cycles of NC2. You can see a breakdown of all your unbilled spend and usage
in the Analytics graph and the summary table displayed on the page.
The Subscriptions - NC2 page displays the following details:

• Details about the rate, quantity, and amount charged per unit for a selected billing cycle. You can check the details
for the current and last two billing cycles.
• Details about the usage of clusters by units of measure for a selected billing cycle.
Perform the following procedure to display the billing and usage details of NC2:
1. Sign in to your My Nutanix account.



2. Select the correct workspace from the Workspace dropdown list on the My Nutanix dashboard. For more
information on workspaces, see Workspace Management.

Figure 61: Selecting a Workspace


3. On the My Nutanix dashboard, go to Administration > Billing Center, and click Launch.
4. Under Subscriptions, in Nutanix Cloud Clusters, click View plan details.
5. Select one of the following to either view the billing details or the usage details.

• Spend: Displays a graph detailing your estimated daily spending for a selected billing cycle. You can check
details for the current and last two billing cycles. You can apply filters to the graph for individual units of
measure. A summary table with detailed information about the current billing cycle is also displayed.
• Usage: Displays an estimate of your total usage for the billing cycle that you select. You can filter the usage
by clusters and units of measure. Individual units of measure are a breakdown of total usage on the latest day
of the billing cycle that you select. You can apply filters to see more details, such as usage information of each
cluster and find out whether a usage is processed through licensing or subscription.
Select the billing period on the top-right corner of the usage graph to see the total usage for the selected billing
cycle in the form of a graph.
Under Usage broken down by individual units of measure, click Clusters, and then select a cluster
ID and choose a unit of measure to see the total usage of each cluster for a selected billing cycle in a graphical
view. Hover over the bars in the graph to see the number of licenses and subscriptions you used.
Click Units and select a unit of measure to see the total usage of all the clusters by that unit of measure.
A breakdown of the total usage of the same billing cycle you selected is displayed in a table after the graph.
You can view the usage graph for three billing cycles.

Note: You can also download this table as a CSV file.



Figure 62: Usage Details

Using the Usage Analytics API


You can use the https://usage-api.nutanix.com/data API to access the usage analytics for your Nutanix
clusters running in AWS. With this API, you can retrieve usage data. For the effective rate card, check your
invoices.
Perform the following steps to use the Usage Analytics API:



Procedure

1. Create an API key:

a. Sign in to https://my.nutanix.com with your My Nutanix account.


b. In the My Nutanix dashboard, go to the API Key Management tile and click Launch.
c. If you have previously created API keys, a list of keys is displayed. To create a new key, click Create API
Keys. The Create API Key dialog appears.

Figure 63: Creating an API Key


d. Select or enter the following details:

• Name: Enter a unique name for your API key to help you identify the key.
• Scope: Select the Usage Analytics scope category under Billing from the Scope drop-down list.
e. Click Create. The Created API dialog is displayed.



Figure 64: Created API Key
f. Copy the API Key and Key ID field values and store them securely for use. You can use the clipboard button
to copy the value to your clipboard.

Note: You cannot recover the generated API key and key ID after you close this dialog.

For more details on API Key management, see the API Key Management section in the Licensing Guide.



2. Generate a JSON Web Token (JWT) for authentication to call the REST APIs. You can clone the script
from https://github.com/nutanix/generate-jwt-key.

Note: This step uses Python to generate a JWT token. You can use other programming languages, such as
JavaScript and Golang.

a. Run the following command to install the PyJWT package:

pip install PyJWT==2.3.0

b. Replace the API Key and Key ID in the following Python script and then run it to generate a JWT token.
You can also specify the expiry time in seconds for which the JWT token remains valid. In the requesterip
attribute, enter the requester IP.
from datetime import datetime
from datetime import timedelta
import base64
import hmac
import hashlib
import jwt

api_key = "enter the API Key"  # API_KEY
key_id = "enter the Key ID"  # KEY_ID
aud_url = "https://apikeys.nutanix.com"

def generate_jwt():
    curr_time = datetime.utcnow()
    payload = {
        "aud": aud_url,
        "iat": curr_time,
        "exp": curr_time + timedelta(seconds=120),
        "iss": key_id,
        "metadata": {
            "reason": "fetch usages",
            "requesterip": "enter the requester IP",
            "date-time": curr_time.strftime("%m/%d/%Y, %H:%M:%S"),
            "user-agent": "datamart"
        }
    }
    signature = base64.b64encode(
        hmac.new(bytes(api_key, 'UTF-8'), bytes(key_id, 'UTF-8'),
                 digestmod=hashlib.sha512).digest())
    token = jwt.encode(payload, signature, algorithm='HS512',
                       headers={"kid": key_id})
    print("Token (Validate): {}".format(token))

generate_jwt()

c. A JWT token is generated. Copy the JWT token to your system for further use. The JWT token can be used as
an Authorization header when validating the API call. The JWT token remains valid for the duration that you
have specified.



3. In the headers, add the generated JWT token with the key X-API-KEY, and then call the Usage Analytics API.
Usage Analytics supports two queries:

• getFilters - to get the filters associated with a usage
• getUsages - to get the usages of a subscription

The following queries are in the GraphQL query language. The accountid attribute in the query is specific to a user.

• accountid is the tenantuuid of the user.
• startdate is the start date of the usages to fetch.
• enddate is the end date of the usages to fetch.
• appid is the service type. It can be NUTANIX_CLUSTERS or MSP.
Query to getFilters for a cluster:
curl 'https://usage-api.nutanix.com/data' \
--header 'X-API-KEY: <enter your JWT token>' \
--header 'Content-Type: application/json' \
--data-raw '{
  "query": "query { getFilters (appid: \"NUTANIX_CLUSTERS\", accountid: \"enter account ID\", sno: \"\", startdate: \"2021-12-06\", enddate: \"2022-01-05\"){ data { metereditem, value1, dimension1 } } }"
}'
Query to getUsages from a specific clusterID:
{
  "query": "query { getUsages (appid: \"MSP\", accountid: \"e**e**c3-e5fe-****-b746-*****2d393fa\", value1: \"000*****-5d0b-****-0000-0000000***80\", startdate: \"2022-07-11\", enddate: \"2022-08-11\") { data { metereditem, value1, startdate, enddate, qty } } }"
}
Here, value1 is clusterID. The usages returned are specific to this clusterID.
Query to getUsages for a Unit of Measure (UOM):
{
  "query": "query { getUsages (appid: \"MSP\", accountid: \"e**e**c3-e5fe-****-b746-*****2d393fa\", value1: \"000*****-5d0b-****-0000-0000000***80\", metereditem: \"UOM_PPC_AOS_PRO_CORE_1H\", startdate: \"2022-07-11\", enddate: \"2022-08-11\") { data { metereditem, value1, startdate, enddate, qty } } }"
}
Here, metereditem is the UOM. The usages returned are specific to this UOM.
The following is an example of the API response:
{
  "data": {
    "getUsages": {
      "data": [
        {
          "metereditem": "UOM_PPC_AOS_PRO_CORE_1H",
          "value1": "000*****-5d0b-****-0000-0000000***80",
          "startdate": "2022-07-11",
          "enddate": "2022-07-11",
          "qty": 2484
        },
        {
          "metereditem": "UOM_PPC_AOS_PRO_CORE_1H",
          "value1": "000*****-5d0b-****-0000-0000000***80",
          "startdate": "2022-07-11",
          "enddate": "2022-07-12",
          "qty": 108
        }
      ]
    }
  },
  "success": true
}
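For completeness, a getUsages request can be sent in the same curl shape as the getFilters example above. The following is a minimal sketch; the account ID, cluster ID, and dates are placeholder values that you must replace with your own:

curl 'https://usage-api.nutanix.com/data' \
--header 'X-API-KEY: <enter your JWT token>' \
--header 'Content-Type: application/json' \
--data-raw '{
  "query": "query { getUsages (appid: \"NUTANIX_CLUSTERS\", accountid: \"<enter account ID>\", value1: \"<enter cluster ID>\", startdate: \"2022-07-11\", enddate: \"2022-08-11\") { data { metereditem, value1, startdate, enddate, qty } } }"
}'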



USER VM NETWORK MANAGEMENT
Nutanix Cloud Clusters (NC2) natively integrates the AWS networking infrastructure. This means Nutanix
clusters run in the AWS VPC without any overlay networking and consume all underlying AWS networking
resources natively. For this, the network management of the clusters running in AWS is tightly integrated
with AWS by using the AWS SDK APIs.
AHV networking workflow continues to work seamlessly when a cluster is running in AWS with the following
changes to the User VM (UVM) networks:

• You create UVM networks by specifying a CIDR value that matches the CIDR value of the AWS subnet.
• NC2 supports only AHV managed networks.
• UVMs use only the DHCP servers provided by the cluster.
• You do not need to specify the VLAN ID when you are creating a network.
• AWS Gateway is used as the default gateway for the UVM networks and cannot be changed.
Nutanix clusters consume the AWS subnets from Prism Element. You must add the AWS subnets you created
for UVMs as networks by using the Prism Element web console. Before you create networks for UVMs in Prism
Element, create the AWS subnets manually by using the AWS console, an AWS CloudFormation template, or any
other tool of your choice (a CLI sketch follows the recommendations below).
Nutanix recommends the following:

• Create private AWS subnets for UVM networks.


See AWS documentation for instructions about how to create a private AWS subnet.
• Do not share AWS subnets between clusters running in the same VPC.
• Have separate subnets for management (AHV and CVM) and user VMs.
• If you plan to use VPC peering, use nondefault subnets to ensure uniqueness across AWS Regions.
• Divide your VPC network range evenly across all usable Availability Zones in a Region.
• In each Availability Zone, create one subnet for each group of hosts that has unique routing requirements (for
example, public versus private routing).
• Size your VPC CIDR and subnets to support significant growth.
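If you prefer the AWS CLI over the console or a CloudFormation template, the following is a minimal sketch of creating a private UVM subnet; the VPC ID, CIDR block, and Availability Zone are placeholder values:

# Create a subnet for UVM networks in an existing VPC. The subnet stays
# private as long as its route table has no route to an internet gateway.
aws ec2 create-subnet \
  --vpc-id vpc-0123456789abcdef0 \
  --cidr-block 10.70.0.0/24 \
  --availability-zone us-west-2a \
  --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=nc2-uvm-subnet}]'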

Creating a UVM Network


The subnets you create in AWS for Nutanix clusters are referred to as networks in a Nutanix cluster running in
AWS. After you create a cluster in AWS by using the NC2 console, you can add the AWS subnets as networks to the
Nutanix cluster by using the Prism Element or Prism Central web console.

Note: In NC2 on AWS with AOS 6.6.x, while creating a subnet from the Settings > Network Configuration >
Subnets tab, the list of (AWS) Cloud Subnets does not appear. As a workaround, you can add the Cloud Subnets
using Network Prefix Length and Gateway IP Address based on Cloud Subnet CIDR.

Creating a UVM Network using Prism Element

Before you begin


Ensure that you create the subnet in AWS first before you add it to the Nutanix cluster. The AWS network
must be in the same AZ as the Nutanix cluster.



To learn more about setting up AWS subnets for use in Prism Element web console, see the Nutanix University
video.

About this task


To create a UVM network in the Prism Element web console, perform the following:

Procedure

1. Sign in to the Prism Element web console.

2. You can navigate to the Create Subnet dialog box in any of the following ways:

• Click Network Config on the VM Dashboard.


• Click the gear icon in the main menu and select Network Configuration in the Settings page. The
Network Configuration window appears.

Figure 65: Add an AWS Network



3. On the Subnets tab, click Create Subnet.
The Create Subnet dialog box appears.

Figure 66: Add an AWS Network

Do the following in the indicated fields:

a. Subnet Name: Enter a name for the subnet.


b. Review the Cloud VPC, Associated VPC CIDR(s), and Cloud Availability Zone of the VPC where the
cluster is created and to which the cloud subnets belong.
c. Select the desired cloud subnet from the Cloud Subnet list. You can search the subnets by Subnet ID or
Subnet CIDR blocks.
d. When you select the cloud subnet, the following details are populated under IP Address Management:

• Network Prefix Length: Associated VPC CIDR of the cloud subnet that you have selected.
• Gateway IP Address: Gateway IP address of the cloud subnet that you have selected.

Note: IP Address Management is enabled by default and indicates that the network is an AHV managed
network. AHV networking stack manages the IP addressing of the UVMs in the network.

e. Under IP Address Pools, provide the following details:


You can choose to define the range of IP addresses to be used on the network for automatic assignment to
UVMs by AHV. Optionally, you can leave this field blank, in which case it is populated with the default
DHCP pool range valid in AWS. The first four and the last IP addresses of the CIDR block are reserved for
use by AWS. Additionally, the second-to-last IP address of the CIDR block is reserved for the AHV-managed
DHCP server.
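For example, assuming the cloud subnet is 10.70.0.0/24, AWS reserves 10.70.0.0 through 10.70.0.3 and
10.70.0.255, and the AHV-managed DHCP server takes 10.70.0.254, which leaves 10.70.0.4 through
10.70.0.253 available for assignment to UVMs.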
To define a range of addresses for automatic assignment to virtual NICs, click Create Pool (under IP Address
Pools) and enter the following in the Add IP Pool dialog box:

• Network IP Prefix: The associated VPC CIDR of the cloud subnet that you have selected is populated.
• Start Address: Enter the starting IP address of the range.
• End Address: Enter the ending IP address of the range.
• Click Submit to close the window and return to the Create Subnet dialog box.
f. Under DHCP Settings, provide the following details:

• DHCP Settings: Select this checkbox to define a domain. When this checkbox is selected, the fields to
specify DNS servers and domains are displayed. Clearing this checkbox hides those fields.
• Domain Name Servers (comma separated): Enter a comma-delimited list of DNS servers. If you
leave this field blank, the cluster uses the IP address of the AWS VPC DNS server.
• Domain Search (comma separated): Enter a comma-delimited list of domains.
• Domain Name: Enter the domain name.
• TFTP Server Name: Enter the hostname or IP address of the TFTP server from which virtual machines
can download a boot file. It is required in a Pre-boot eXecution Environment (PXE).
• Boot File Name: Enter the name of the boot file to download from the TFTP server.

4. Click Save to configure the network connection and close the Create Subnet dialog box.

Creating a UVM Network using Prism Central

About this task


Ensure that you create the subnet in AWS first before you add it to the Nutanix cluster. The AWS network
must be in the same AZ as the Nutanix cluster.
To create a UVM network in the Prism Central web console, perform the following:

Procedure

1. Sign in to the Prism Central web console.

2. Click the entities menu in the main menu, expand Network & Security, and then select Subnets. The
Subnets window appears.



3. On the Subnets tab, click Network Config. The Network Configuration dialog box appears. On the
Subnets tab, click Create Subnet.

Note: Ensure that you do not use the Create Subnet option displayed adjacent to the Network Config option
on the Subnets window.

Figure 67: Add an AWS Network using Prism Central

4. On the Create Subnet dialog box, provide the required details in the indicated fields:

• Name: Enter a friendly name for the subnet.


• Type: Cloud type is selected by default.
• Cluster: Select the cloud cluster to which the cloud subnet belongs.
• Review the Cloud Availability Zone, Cloud VPC, VPC Address Space (Associated VPC CIDR(s))
details of the VPC where the cluster is created and to which the cloud subnets belong.
• Map to Cloud Subnet: Select the desired cloud subnet from the Cloud Subnet list. You can search the
subnets by Subnet ID or Subnet CIDR blocks.

Note: AHV Subnets are mapped to the Cloud Subnets.

• IP Address Management: When you select the cloud subnet, the following details are populated under IP
Address Management:

• Network Prefix Length: Associated VPC CIDR of the cloud subnet that you have selected. This maps to
the CIDR block on the Cloud subnet.
• Gateway IP Address: Gateway IP address of the cloud subnet that you have selected.

Note:
IP Address Management is enabled by default and indicates that the network is an AHV managed
network. AHV networking stack manages the IP addressing of the UVMs in the network.

• IP Address Pools: Provide the following details:


You can choose to define the range of IP addresses to be used on the network for automatic assignment to
UVMs by AHV. Optionally, you can leave this field blank, in which case it is populated with the default
DHCP pool range valid in AWS. The first four and the last IP addresses of the CIDR block are reserved for
use by AWS. Additionally, the second-to-last IP address of the CIDR block is reserved for the AHV-managed
DHCP server.
To define a range of addresses for automatic assignment to virtual NICs, click Create Pool (under IP
Address Pools) and enter the following in the Add IP Pool dialog box:

• Network IP Prefix: The associated VPC CIDR of the cloud subnet that you have selected is populated.
• Start Address: Enter the starting IP address of the range.
• End Address: Enter the ending IP address of the range.
• Click Submit to close the window and return to the Create Subnet dialog box.
• DHCP Settings: Select this checkbox to define a domain. When this checkbox is selected, the fields to
specify DNS servers and domains are displayed. Provide the following details:

• Domain Name Servers (comma separated): Enter a comma-delimited list of DNS servers. If you
leave this field blank, the cluster uses the IP address of the AWS VPC DNS server.
• Domain Search (comma separated): Enter a comma-delimited list of domains.
• Domain Name: Enter the domain name.
• TFTP Server Name: Enter the hostname or IP address of the TFTP server from which virtual machines
can download a boot file. It is required in a Pre-boot eXecution Environment (PXE).
• Boot File Name: Enter the name of the boot file to download from the TFTP server.



Figure 68: Add an AWS Network using Prism Central

5. Click Save to configure the network connection and close the Create Subnet dialog box.

Updating a UVM Network


If you modify the UVM network, AOS only updates the network attributes that are specific to the cluster. This
operation does not update the underlying AWS subnet.
You can update a UVM network by using the Prism Element or Prism Central web console.

Updating a UVM Network using Prism Element

About this task


Perform the following procedure to update a UVM network:

Procedure

1. Sign in to the Prism Element web console.



2. Do one of the following:

• Click the gear icon in the main menu and select Network Configuration in the Settings page. The
Network Configuration window appears.
• Go to the VMs dashboard and click the Network Config button.

3. On the Network Configuration window, select the UVM network you want to update and click the pencil icon
on the right.
The Update Network dialog box appears, which contains the same fields as the Create Network dialog box
(see Creating a UVM Network using Prism Element on page 108).

4. Update the field values as desired.


You can change the CIDR only if no UVM is configured on the existing CIDR. Changing the CIDR disassociates
the UVM network from the existing AWS subnet and associates it with the AWS subnet that matches the new
CIDR (if found).

5. Click Save to update the network configuration and return to the Network Configuration window.

6. To delete a UVM network, in the Network Configuration window, select the UVM network you want to delete
and click the X icon (on the right).
A window prompt appears to verify the action; click OK. The network is removed from the list.

Note: This operation does not delete the AWS subnet associated with the UVM network.

Updating a UVM Network using Prism Central

About this task


Perform the following procedure to update a UVM network:

Procedure

1. Sign in to the Prism Central web console.

2. Click the entities menu in the main menu, expand Network & Security, and then select Subnets. The
Subnets window appears.

3. On the Subnets window, click Network Config. The Network Configuration dialog box appears. On the
Network Configuration window, select the UVM network you want to update and click the pencil icon on the
right.
The Update Subnet dialog box appears, which contains the same fields as the Create Subnet dialog box. See
Creating a UVM Network using Prism Central on page 111.

4. Update the field values as desired.


You can change the CIDR only if no UVM is configured on the existing CIDR. Changing the CIDR disassociates
the UVM network from the existing AWS subnet and associates it with the AWS subnet that matches the new
CIDR (if found).

5. Click Save to update the network configuration.

6. To delete a UVM network, in the Network Configuration window, select the UVM network you want to delete
and click the X icon on the right.
A window prompt appears to verify the action; click OK. The network is removed from the list.

Note: This operation does not delete the AWS subnet associated with the UVM network.



Using EC2 Instances on the Same Subnet as UVMs
You can choose to use EC2 instances for running applications or services on the same AWS subnet used by a cluster.
In such use cases, you must block the IP addresses of the EC2 instances from AHV IPAM by using the following
aCLI commands:

• net.add_to_ip_blacklist: Blocks IP addresses for a managed network.

• net.delete_from_ip_blacklist: Removes IP addresses from the blocklist of a managed network.

See the Command Reference guide for detailed information about how to block an IP address on a managed
network.
The cluster does not use the IP addresses blocked by using AHV IPAM for any UVM vNIC assignments.
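For illustration, blocking and unblocking an EC2 instance IP address from any CVM might look like the following sketch. The network name and IP address are hypothetical, and the exact argument syntax can vary by AOS release, so confirm it in the Command Reference guide:

# Hypothetical example: prevent AHV IPAM from assigning 10.70.0.25
# (an EC2 instance IP) on the managed network named uvm-net.
acli net.add_to_ip_blacklist uvm-net ip_list=10.70.0.25

# Remove the address from the blocklist when it is no longer needed.
acli net.delete_from_ip_blacklist uvm-net ip_list=10.70.0.25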

AWS Elastic Network Interfaces (ENIs) and IP Addresses


AHV assigns IP addresses to vNICs of UVMs from a DHCP pool configured on an AHV-managed network.
The vNIC IP addresses are natively integrated with AWS Elastic Network Interfaces (ENI) and are configured
as secondary IP addresses on ENI by the AHV networking service. ENI creation mandates a primary IP address,
which must be allocated from AHV IPAM. ENI primary IP addresses are never used as vNIC IP addresses for use
by UVMs. For this reason, you may notice reduced availability of IP addresses for use by UVMs from what is
configured in the DHCP pool.

Note: ENIs can have up to 49 secondary IP addresses, and NC2 shares ENIs for vNIC IP addresses until the ENI IP
address capacity is reached.

Bare-metal instances support up to 15 ENIs. One ENI is dedicated to AHV and CVM connectivity, and the remaining
14 ENIs are dynamically created as UVMs are powered on or migrated to the AHV node. Note that an ENI belongs to
a single AWS subnet, so UVMs from more than 14 subnets on a given AHV node are not supported.
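As a rough upper bound, assuming all 14 UVM-facing ENIs are in use and each ENI carries one primary IP address
plus up to 49 secondary IP addresses, a single bare-metal host can serve at most 14 x 49 = 686 UVM vNIC IP
addresses, spread across at most 14 subnets.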
To learn more about the number of AWS ENIs on bare metal instances, see AWS Documentation.

Adding a Virtual Network Interface (vNIC) to a User VM


Network interfaces added to UVMs are referred to as vNICs. You can add vNICs to a UVM at the time of
creating or updating a UVM. Add at least one vNIC to a UVM for network reachability. You can add multiple
vNICs from the same or different UVM networks to a UVM.

Before you begin

Ensure the following before you add a vNIC to a VM:
1. Create a subnet in AWS for the user VM.
2. Add that subnet to the cluster as a network as described in Creating a UVM Network using Prism Element on
page 108.

About this task

You must add the vNIC to the VM when you are creating the VM in the Prism Element web console.
Perform the following procedure to add a vNIC to a VM:

Procedure

1. Sign in to the Prism Element web console.



2. See the Creating a VM (AHV) topic in the Prism Web Console Guide to proceed with creating a VM in the
cluster.

3. In the Create VM dialog box, scroll down to Network Adaptors (NIC) and click Add New NIC.

4. In the Network Name drop-down list, select the UVM network to which you want to add the vNIC.

5. Select (click the radio button for) Connected or Disconnected to connect or disconnect the vNIC to the
network.

6. The Network Address / Prefix is a read-only field that displays the IP address and prefix of the network.

7. In the IP address field, enter an IP address for the NIC if you manually want to assign an IP address to the
vNIC.
This is an optional field. Clusters in AWS support only managed networks; therefore, an IP address is
automatically assigned to the vNIC if you leave this field blank.

8. Click Add to create the vNIC.

Enabling Outbound Internet Access to UVMs


Enable outbound internet access to UVMs by using a NAT gateway.

About this task


By default, the cluster is deployed in a private subnet in AWS with a NAT gateway and load balancer if you choose
to create a new VPC when you are creating a cluster. However, if you choose to deploy a cluster on an existing VPC,
manually configure a NAT gateway.
In AWS, perform the following to enable outbound internet access to UVMs:

Note: See the AWS documentation for instructions about how to perform these tasks.

Procedure

1. Create a public subnet in the VPC in which your cluster is deployed.

2. Create a NAT gateway, associate it with the public subnet, and assign a public Elastic IP address to the NAT
gateway.

3. Create a route table and add a route to that route table with the target as the NAT gateway (created in step 2).

4. Associate the route table you created in step 3 with the private subnet you created for UVMs. (A CLI sketch of steps 1 through 4 follows this procedure.)

5. Sign in to the Prism Element web console.

6. Create a UVM network as described in Creating a UVM Network using Prism Element on page 108.

7. Go to the UVM in the Prism Element web console and add a vNIC to the UVM by using the AWS private subnet
as described in Adding a Virtual Network Interface (vNIC) to a User VM on page 116.
Your UVM can now access the internet.
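The AWS-side portion of this procedure (steps 1 through 4) can also be scripted. The following is a minimal AWS CLI sketch; all resource IDs and the CIDR block are placeholder values:

# 1. Create a public subnet in the cluster's VPC.
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 \
  --cidr-block 10.70.1.0/24 --availability-zone us-west-2a

# 2. Allocate an Elastic IP address and create a NAT gateway in the
#    public subnet.
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-0aaaabbbbccccdddd \
  --allocation-id eipalloc-0123456789abcdef0

# 3. Create a route table with a default route through the NAT gateway.
aws ec2 create-route-table --vpc-id vpc-0123456789abcdef0
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id nat-0123456789abcdef0

# 4. Associate the route table with the private UVM subnet.
aws ec2 associate-route-table --route-table-id rtb-0123456789abcdef0 \
  --subnet-id subnet-0eeeeffff00001111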

Enabling Inbound Internet Access to UVMs


Enable inbound internet access to an application running on NC2 by using a physical load balancer.



About this task
By default, the cluster is deployed in a private subnet in AWS with a NAT gateway and load balancer if you choose
to create a new VPC when you are creating a cluster. However, if you choose to deploy a cluster on an existing VPC,
you must manually create a network load balancer in AWS and use the domain name of that network load balancer
for internet access.

Note: Additional AWS charges might apply for the use of a network load balancer. Check with your AWS
representative before you create a network load balancer.

Perform the following to create a network load balancer in AWS:

Procedure

1. Sign in to your AWS account.

2. Create a network load balancer.


See Getting Started with Network Load Balancers for information about how to create a network load
balancer in AWS.
Ensure the following when you are creating a network load balancer in AWS:

a. Set the Scheme to internet-facing.


b. For Listeners, select the TCP port depending on your application. For example, select the TCP port 443 for
a HYCU VM.
c. For Availability Zones, select the availability zone and VPC you used to create a cluster in AWS through
NC2.
d. Select a public subnet in the VPC.
e. Skip the Configure Security Settings step.
f. In the Configure Routing page, create a new target group.
g. Set Protocol to TCP, Port to a number depending on your application, and Target type to ip.

Note: Make sure the port you want to access is open in an inbound policy of the security group associated
with bare-metal instances of the cluster.

h. For Health checks, retain the default protocol.


i. Proceed to Register targets.
j. For Network, select Other private IP address, select the availability zone you used for creating a cluster
in AWS through NC2, and add the IP address and port number of the user VM running on the cluster.
k. Click Add to list to add this target.
l. Click Preview and after validating all the values, click Create.
Wait until the newly created load balancer becomes active, check the target health, and you can then use the
domain name of this load balancer.
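If you script the load balancer instead of using the console, a minimal AWS CLI sketch might look like the following; the names, IDs, truncated ARNs, port, and target IP address are placeholder values:

# Create an internet-facing network load balancer in a public subnet.
aws elbv2 create-load-balancer --name nc2-uvm-nlb --type network \
  --scheme internet-facing --subnets subnet-0aaaabbbbccccdddd

# Create a TCP target group with target type "ip" in the cluster's VPC.
aws elbv2 create-target-group --name nc2-uvm-targets \
  --protocol TCP --port 443 --target-type ip \
  --vpc-id vpc-0123456789abcdef0

# Register the user VM's private IP address as a target.
aws elbv2 register-targets \
  --target-group-arn arn:aws:elasticloadbalancing:...:targetgroup/nc2-uvm-targets/... \
  --targets Id=10.70.0.25,Port=443

# Create a TCP listener that forwards to the target group.
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:...:loadbalancer/net/nc2-uvm-nlb/... \
  --protocol TCP --port 443 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/nc2-uvm-targets/...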



Deploying a Load Balancer to Allow Internet Access
About this task
Set up a load balancer if your cluster is deployed in an existing VPC to allow internet access to Prism Central or
Prism Element on the CVM.

Note: Additional AWS charges might apply if you use the network load balancer. Check with your AWS
representative before you create a network load balancer.

Perform the following procedure to set up the network load balancer in AWS.

Procedure

1. Sign in to your AWS account.



2. Create a network Load Balancer.
See Getting Started with Network Load Balancers for more information about how to create a network load
balancer in AWS.
Ensure the following when you create a network load balancer in AWS:

a. Set the Scheme to internet-facing.


b. For Listeners, select the TCP port 9440.
c. For Availability Zones, select the availability zone and VPC you used to create a cluster in AWS through
NC2.
d. Select a public subnet in the VPC.

Note: If you choose a private subnet in the VPC, the Prism Element or Prism Central cannot be accessed
from the Internet.

e. Skip the Configure Security Settings step.


f. In the Configure Routing page, create a new target group and provide a name to the target group.
g. Choose the Target type as IP, set Protocol to TCP and Port number to 9440.

Note: Make sure the port you want to access is open in an inbound policy of the security group associated
with bare-metal instances of the cluster.
The IP address you choose for the target group must be one of the CVM IP addresses that you can
see on the NC2 portal.

h. For Health checks, retain the default protocol.


i. Proceed to Register targets.
j. For Network, select Other private IP address, select the availability zone you used for creating a cluster
in AWS through NC2, and add the CVM IP address or Prism Central VM IP address, leaving the default
port number as 9440.

Note: Nutanix recommends you manually blacklist the virtual IP address configured on Prism Central to
avoid IP address conflicts.

k. Click Add to list to add this target.


l. Click Preview and after you validate all the values, click Create.
Wait until the newly created load balancer becomes active, check the target health, and you can then use the
domain name of this load balancer.

What to do next
Note down the DNS name of the load balancer. To find the DNS name, open the load balancer on your
AWS console and navigate to Description > Basic Configuration. To get the IP address of the load
balancer, navigate to Network & Security > Network Interfaces, search for the name of the load balancer,
and copy the Primary Private IPv4 address. You need the load balancer IP address when modifying the
inbound rules under the UVM security group.



Prism Central UI Access for Site-to-Site VPN Setup
About this task
To configure the Site-to-Site VPN setup, you must enable the inbound access to the Prism Central UI. To access the
Prism Central UI using virtual IP or Prism Central VM IP(s), the TCP port 9440 must be allowlisted for the Prism
Central VM.
Perform the following procedure to allowlist the inbound port for the UVM.

Procedure

1. Sign in to your AWS account.

2. Filter and select the cluster node on which the Prism Central is deployed, and then click the Security tab.

3. Select the corresponding UVM security group.

4. For the selected UVM security group, in the Inbound rules tab, click Add rule, and then enter the TCP port as
9440 and the custom source IP as the load balancer IP.

5. Click Save rule.
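Equivalently, you can add the rule from the AWS CLI. The following is a minimal sketch; the security group ID and load balancer IP address are placeholder values:

# Allow TCP 9440 to the UVM security group from the load balancer IP.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 9440 \
  --cidr 10.70.1.15/32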



NETWORK SECURITY USING AWS
SECURITY GROUPS
AWS security groups offer a firewall-like method of filtering traffic to Elastic Network Interfaces (ENIs)
typically used with EC2 instances, and control the traffic allowed to and from the resources in the VPC where the
security groups are set up. You can add separate rules for each security group by defining appropriate ports and
communication protocols to control inbound and outbound traffic. To learn more about controlling traffic using
security groups, see AWS documentation.
Three default security groups - Internal management, User management, and UVM security group - are created by
the NC2 console at the time of initial cluster creation. The Internal management security group and User management
security groups are attached to ENIs associated with the Management subnet. The default UVM security group is
attached to ENIs associated with the UVM subnet.

Note: These default security groups are created for each cluster. Amending security group rules in one cluster does not
affect the security group rules in another cluster. When you amend inbound and outbound rules within the default UVM
security group, the policies are applied to all UVMs that are part of the cluster.

You can also create custom security groups to more granularly control traffic to your NC2 environment. You can:

• create a security group that applies to the entire VPC, if you want the same security group rules applied to all
clusters in that VPC.
• create a security group that applies to a specific cluster if you want certain security group rules applied only to that
particular cluster.
• create a security group that applies to a subset of UVMs in a specific cluster, if you want certain security group
rules to apply only to those subsets of UVMs.
You must configure Prism Central VM security group and all the UVM security groups in a way that allows
communication between Prism Central VM and UVMs. In a single cluster deployment, the Prism Central VM and
UVM communication is open by default. However, if your Prism Central is hosted on a different NC2 cluster,
then you must allow communication between the Prism Central VM on the cluster hosting Prism Central and the
management subnets of the remaining NC2 clusters.
You do not need to configure security groups for communication between the CVM of the cluster hosting Prism
Central and Prism Central VM.
You cannot deploy Prism Central in the Management subnet. You must deploy Prism Central in a separate subnet.
Suppose your Prism Central is hosted on a different NC2 cluster (say, NC2-Cluster2). In that case, you must modify
the security groups associated with the management subnet on NC2-Cluster1 to include inbound and outbound
security group rules for communication between the Prism Central subnet on NC2-Cluster2 and Management subnet
on NC2-Cluster1. This might extend to management subnets across multiple clusters managed by the same Prism
Central.

Note: Ensure that all AWS subnets used for NC2, except the Management subnet, use the same route table. For more
information on AWS route tables, see AWS documentation.

For more information on the ports and endpoints the NC2 cluster needs, see Ports and Endpoint Requirements.
For more details on the default Internal management, User management, and UVM security groups, see Default
Security Groups. For more information on creating custom security groups, see Custom Security Groups.
Perform the following steps to control inbound and outbound traffic:
1. Determine if you want to use the default UVM security group to control inbound and outbound traffic for all
UVMs in the cluster or if you want more granular control over UVM security rules with different security groups
for different UVMs.

2. Edit the default UVM security group to add inbound and outbound rules if you want those rules to apply to all
UVMs on your cluster.
3. You may also create additional custom security groups for more granular control of traffic flow in your NC2
environment:
1. Create a security group in AWS.
2. Add appropriate tags to the security group. For more details on the tags needed with custom security groups,
see Custom Security Groups.
3. Add rules to enable or restrict inbound and outbound traffic.

Default Security Groups


By default, network security for UVMs is managed by the internal networking service of the cluster. NC2 creates the
following default security groups with recommended default rules at the time of cluster creation:

• Internal Management: Controls AHV to CVM communication within a cluster.


• User Management: Controls UVM to CVM communication.
• UVM: Controls external traffic entering or leaving the UVMs.
You can identify the default UVM security group by the following AWS tags:

• Tag: key=nutanix:clusters:cluster-uuid, value=cluster UUID


• Tag: key=nutanix:clusters:networks, value=all-uvms
The following figure shows an example of tags applied for the default UVM security group.

Figure 69: Tags in the default UVM security group
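For example, one way to locate a cluster's default UVM security group from the AWS CLI is to filter on these tags. The following is a minimal sketch; the cluster UUID is a placeholder value:

# Find the default UVM security group of a given cluster by its NC2 tags.
aws ec2 describe-security-groups \
  --filters "Name=tag:nutanix:clusters:cluster-uuid,Values=<cluster UUID>" \
            "Name=tag:nutanix:clusters:networks,Values=all-uvms"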

The Internal management, User management, and UVM security groups have the recommended default rules set
up by NC2 at cluster creation. All management ENIs created, even after initial cluster deployment, have the default
Internal management (internal_management) and User management (user_management) security groups
attached.

Note: Nutanix recommends that you do not modify Internal management and User management security groups or
change any security group attachments.

All elastic network interfaces (ENIs) for CVMs and the EC2 bare-metal hosts are present on the private Management
subnet.
All UVMs on a cluster are associated with the default UVM security group unless you create additional UVM
security groups. The default UVM security group controls all traffic that enters the ENIs belonging to the UVM
subnets. Additional custom security groups can be created to control traffic at the VPC, individual cluster, or UVM
subnet levels.
To allow communication from external sources to the UVMs, you must modify the default UVM security group to
add new inbound rules for the source IP addresses and the load balancer IP addresses.

Note: Each cluster in the same VPC has its default security group. When you amend inbound and outbound rules
within the default UVM security group, the policies are applied to all UVMs that are part of the cluster.

The default UVM security group is configured to allow the following:

• All outbound traffic.


• Communication between UVMs in the same cluster.
• Traffic from AHV or CVMs over default ports of the cluster.
For more information on the ports, see Ports and Endpoint Requirements.

Custom Security Groups


While you can use the default UVM security group configured for all UVMs in a cluster, you can also create custom
security groups to more granularly control network traffic at the VPC level, individual cluster level, or individual
UVM subnet level. You can create custom security groups by adding the appropriate tags, and those security groups
will get attached to the appropriate ENIs.

Note: NC2 supports the ability to create custom security groups when it uses AOS 6.7 or higher.

A custom security group at the VPC level is attached to all ENIs in the VPC. A custom security group at the cluster
level is attached to all ENIs of the cluster. Custom security groups at the UVM subnet level are attached to all ENIs of
all specified UVM subnets.
You can use custom security groups to apply security group rules across all clusters in a VPC or a specific cluster
or a subset of UVM Subnets in a specific cluster. A custom security group per UVM subnet can be beneficial when
controlling traffic for specific UVMs or restricting traffic between UVMs from different subnets. To support custom
security groups at the UVM subnet level, NC2 assigns tags with key-value pairs that can be used to identify the
custom security groups. For more information about default security groups for internal management and UVMs, see
Default Security Groups.

Note: To be able to increase the custom security groups quota beyond the default limit, you must add the
GetServiceQuota permission to the Nutanix-Clusters-High-Nc2-Cloud-Stack-Prod IAM role. To change the
permissions and policies attached to the IAM role, sign in to the AWS Management Console, open the IAM console at
https://console.aws.amazon.com/iam/, and choose Roles > Permissions. For more information, see AWS
documentation.

Figure 70: GetServiceQuota permission

The default AWS service quota allows you to create a maximum of five custom security groups per ENI. Out of the
five security groups per ENI quota, one is used for the default UVM security group. You can add only one custom
security group at the VPC level and one custom security group at the cluster level. You can add the remaining custom
security groups at the UVM subnet level.
For example, if you create one custom security group at the VPC level and one at the cluster level, you can create
two security groups at the UVM subnet level, assuming you have the default AWS Service quota limit of 5 security
groups per ENI. Similarly, if you create one security group at the cluster level and no security group at the VPC level,
you can create three security groups at the UVM subnet level.

Note: If you need more security groups, you can contact AWS support to increase the number of security groups per
ENI in your VPC.

The following table lists the AWS tags for custom security groups and the level at which these security groups can
be applied. These three tags have a hierarchical order that defines the order in which the security groups with these
tags are honored. A higher hierarchical tag is a prerequisite for the lower hierarchical tag, and therefore the higher
hierarchical tag must be present in the security group with the lower hierarchical tag. For example, if you use the
networks tag (the lowest hierarchical tag) for a security group, both the cluster-uuid (middle hierarchical) tag and
external (highest hierarchical) tag must also be present in that security group. Similarly, if you add the cluster-uuid
tag, the external tag must be present in that security group.

Table 12: Tags in custom security groups

Tag Key | Value | Level | Hierarchical Order | Tags Included in the Security Group
tag:nutanix:clusters:external | none | VPC | 1 (Highest) | external
tag:nutanix:clusters:external:cluster-uuid | cluster-uuid | Cluster | 2 (Middle) | external and cluster-uuid
tag:nutanix:clusters:external:networks | CIDR1, CIDR2, CIDR3 | UVM subnet CIDR | 3 (Lowest) | external, cluster-uuid, and networks

For example, if you want to create a security group to apply rules to all clusters in a certain VPC, you must attach the
following tag to the security group. The tag value can be left blank:

Table 13: Tag example for VPC-level security group

Tag Key | Tag Value
tag:nutanix:clusters:external | none

The following figure shows an example of tags applied for the custom security group at the VPC level.

Figure 71: Example for VPC-level security group

If you want to create a security group to apply rules to a cluster with UUID 1234, then you must apply both of these
tags to the security group:

Table 14: Tag example for cluster-level security group

Tag Key | Tag Value
tag:nutanix:clusters:external | none
tag:nutanix:clusters:external:cluster-uuid | 1234

The following figure shows an example of tags applied for the custom security group at the cluster level.

Figure 72: Example for cluster-level security group

If you want to create a security group to apply rules to a UVM subnet 10.70.0.0/24 in a cluster with UUID 1234, then
you must apply all three of these tags to the security group:

Table 15: Tag example for subnet-level security group

Tag Key | Tag Value
tag:nutanix:clusters:external | none
tag:nutanix:clusters:external:cluster-uuid | 1234
tag:nutanix:clusters:external:networks | 10.70.0.0/24

The following figure shows an example of tags applied for the custom security group at the subnet level.

Figure 73: Example for subnet-level security group
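As an illustration, a subnet-level custom security group like the one above could be created and tagged from the AWS CLI. The following is a minimal sketch; the VPC ID, security group ID, cluster UUID, and subnet CIDR are placeholder values:

# Create the custom security group in the cluster's VPC.
aws ec2 create-security-group --group-name nc2-uvm-subnet-sg \
  --description "Custom security group for one UVM subnet" \
  --vpc-id vpc-0123456789abcdef0

# Apply all three hierarchical NC2 tags so that the group is attached to
# the ENIs of the 10.70.0.0/24 UVM subnet in cluster 1234.
aws ec2 create-tags --resources sg-0123456789abcdef0 \
  --tags Key=nutanix:clusters:external,Value= \
         Key=nutanix:clusters:external:cluster-uuid,Value=1234 \
         Key=nutanix:clusters:external:networks,Value=10.70.0.0/24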

Ports and Endpoints Requirements
This section lists the ports and endpoint requirements for the following:

• Outbound communication
• Inbound Communication
• Communication to UVMs
For more information on the general firewall support requirements, see the Port and Protocols guide.

Requirements for Outbound Communication


There are a few general outbound requirements for deploying a Nutanix cluster in AWS on top of the existing
requirements that on-premises clusters use for support services. The following tables show the endpoints the Nutanix
cluster needs to communicate for a successful deployment. The list of endpoints is not comprehensive; there are
several other endpoints that you might need to allowlist depending on the Nutanix software components you use and
the firewall support requirements. For more information on port requirements for Nutanix products and services, see
the Port and Protocols guide and select the Nutanix product from the Software Type list.

Note: Many of the destinations listed here use DNS failover and load balancing. For this reason, the IP address
returned when resolving a specific domain may change rapidly. Nutanix cannot provide specific IP addresses in place of
domain names.

Table 16: Cluster Outbound to the NC2 Portal

Source | Destination | Protocol | Purpose
Management subnet | https://portal.nutanix.com/* | TCP/443 (HTTPS) | Nutanix service portal.
Management subnet | https://download.nutanix.com/* | TCP/443 (HTTPS) | Life Cycle Manager (LCM) required to upgrade NCI and NC2 components.
Management subnet | https://insights.nutanix.com/* | TCP/443 (HTTPS) | Pulse to provide diagnostic system data to Nutanix Support.
Management subnet | 169.254.169.123 | TCP/443 (HTTPS) | NTP server.
Management subnet | 169.254.169.254/* | TCP/443 (HTTPS) | Access the instance metadata related to the AWS service role.
Management subnet | https://gateway-external-api.cloud.nutanix.com/* | TCP/443 (HTTPS) | The NC2 portal orchestration.
Management subnet | https://downloads.cloud.nutanix.com/clusters/* | TCP/443 (HTTPS) | Download NC2 RPMs.

Table 17: Cluster Outbound to EC2

Source | Destination | Protocol | Purpose
Management subnet | ec2.<region>.amazonaws.com/* | TCP/443 (HTTPS) | Access the AWS metadata. For example, a cluster in us-west-2 requires ec2.us-west-2.amazonaws.com/*.

Requirements for Inbound Communication


The following ports are open by default in the User management security group for inbound traffic to the cluster
management services such as CVM and AHV.

Table 18: Inbound Ports for Cluster Management Services

Description | Protocol | Port | Source: Any UVM in Cluster | Source: User-provided Range
SSH to CVM and hypervisor | TCP | 22 | default: allow | default: allow
Prism web console | TCP | 80 | default: allow | default: allow
Prism web console | TCP/UDP | 9440 | default: allow | default: allow
Cluster remote support | TCP | 80 | default: allow | default: allow
Cluster remote support | TCP | 8443 | default: allow | default: allow
Nutanix Move | TCP/UDP | 111 | default: allow | default: allow
Nutanix Move | TCP/UDP | 2049 | default: allow | default: allow
NTP Service | UDP | 123 | default: allow | default: allow
Disaster recovery | TCP | 2009 | default: allow | default: allow
Disaster recovery | TCP | 2020 | default: allow | default: allow
Stargate iSCSI access for Files | TCP | 3205 | default: allow | default: allow
Stargate iSCSI access for Files | TCP | 3260 | default: allow | default: allow
Files Services | TCP | 7501 | default: allow | default: allow
Citrix MCS | TCP | 9440 | |
NGT tools | TCP | 9440 | default: allow | default: allow
NGT tools | TCP | 2073 | default: allow |
NGT tools | TCP | 2074 | default: allow | default: allow
NGT UVM | TCP | 5000 (dynamic), 23578 | default: allow | default: allow

Figure 74: Inbound Rules in User Management Security Group

Figure 75: Outbound Rules in User Management Security Group

Requirements for Communication to UVMs


The following ports are opened by default in the UVM security group during cluster creation to enable
communication between the CVM and UVMs.

Table 19: Open Ports

Description | Protocol | Number | Source: User Management Security Group
SSH | TCP | 22 | default: allow
CVM to file server VMs (FSVM) management | TCP | 2027 | default: allow
CVM to file server | TCP | 2090 | default: allow
Cluster configuration | TCP | 2100 | default: allow
Files Services | TCP | 7501 | default: allow
Files Services | TCP | 7502 | default: allow
REST API and Prism access | TCP | 9440 | default: allow
Communication between AOS and Multicloud Snapshot Technology (MST) | TCP | 30900 and 30990 | default: allow
Prism Central to Prism Element communication | TCP | 9300 and 9301 | default: allow

Note: You must manually open ports 9300 and 9301 in the default UVM security group.

Figure 76: Inbound Rules in UVM Security Group

Figure 77: Outbound Rules in UVM Security Group

CLUSTER MANAGEMENT
Use the NC2 console to modify, update, manually replace hosts, display AWS events, hibernate, resume, and delete
NC2 clusters running on AWS.

Updating the Cluster Capacity


You can expand the cluster by adding more nodes to the cluster or shrink the cluster by removing nodes
from the cluster.
You can use a combination of instance types while expanding the cluster capacity of an already running cluster.

• i3.metal, i3en.metal, i4i.metal: Any combination of these instance types can be mixed, subject to the bare-metal
availability in the region where the cluster is being deployed.
• z1d.metal, m5d.metal, m6id.metal: Any combination of these instance types can be mixed, subject to the bare-
metal availability in the region where the cluster is being deployed.
For more details, see Creating a Heterogeneous Cluster.

Note: The tasks to add or remove nodes are executed sequentially while updating the capacity of a cluster.

Before you begin


Ensure the following before you expand the cluster:

• Your cluster is at least a three-node cluster.


• Your cluster is in a Running state.
• Your AWS subscription has enough quotas (such as vCPUs).

About this task

Note: You must update the cluster capacity by using the NC2 console only. Support to update the cluster capacity by
using the Prism Element web console is not available.
When expanding an NCI cluster beyond what the NCI license covers, you need to purchase and manually
apply additional license capacity. Contact your Nutanix account representative to purchase an additional
license capacity.

Follow these steps to update the capacity in your cluster:

Procedure

1. Sign in to the NC2 console: https://cloud.nutanix.com

2. In the Clusters page, click the name of the cluster for which you want to update the capacity.



3. Do one of the following:

• Navigate to the Settings tab and click Capacity.


• Click Update Capacity under the Actions drop-down list in the Cloud Summary section.

Figure 78: Update Capacity

4. Under Host Configuration, specify the following details:

• Host type. The instance type used during initial cluster creation is displayed.
• Number of Hosts. Click + or - depending on whether you want to add or remove nodes from the cluster.

Note: A maximum of 28 nodes is supported in a cluster. NC2 supports 28-node cluster deployment in AWS
regions that have seven placement groups. Also, there must be at least three nodes in a cluster for RF2 and five
nodes for RF3.
Nutanix recommends that the number of hosts match the RF number or a multiple of the RF number
selected for the base cluster.

• Add Host Type: Depending on the instance type used for the cluster, the other compatible instance types are
displayed. For example, if you have used i3.metal node for the cluster, then i3en.metal, and i4i.metal instance
types are displayed.

Note: You can create a heterogeneous cluster using a combination of i3.metal, i3en.metal, and i4i.metal
instance types or z1d.metal, m5d.metal, and m6id.metal instance types.

The Add Host Type option is disabled when no compatible node types are available in the region where the
cluster is deployed.

Note: UVMs that have been created and powered ON in the original cluster running a specific node or a
combination of compatible nodes, as listed below, cannot be live migrated across different node types when other



nodes are added to the cluster. After successful cluster expansion, all UVMs must be powered OFF and powered
ON to enable live migration.

• If z1d.metal is present in the heterogeneous cluster either as the initial node type of the cluster or as
the new node type added to an existing cluster.
• If i4i.metal is the initial node type of the cluster and any other compatible node is added.
• If m6id.metal is the initial node type of the cluster and any other compatible node is added.
• If i3en.metal is the initial node type of the cluster and the i3.metal node is added.

Figure 79: Host Configuration



5. Under Redundancy, select the redundancy factor:
The redundancy factor (RF) selected during initial cluster creation is displayed.

• RF 1. Data is not replicated across the cluster for RF1. The minimum cluster size must be 1.
• RF 2. The number of copies of data replicated across the cluster is 2. The minimum cluster size must be 3.
• RF 3. The number of copies of data replicated across the cluster is 3. The minimum cluster size must be 5.

6. Under Service Quotas, the service quotas for AWS resources under your AWS quota are displayed. Click
Check quotas to verify the cluster creation or expansion limits.

7. Click Save. The Increase capacity? or Reduce capacity? dialog appears based on your choice to expand or
shrink the cluster capacity in the previous steps.

8. Click Yes, Increase Capacity or Yes, Reduce Capacity to confirm your action.

Note: The cluster expansion to the target capacity might fail if enough AWS nodes are not available in the current
region. The NC2 console automatically retries to provision the nodes. If the error in provisioning the nodes is
consistent, check with your AWS account representative to ensure enough nodes are available from AWS
in your target AWS region and Availability Zone.
Ensure that all VMs on the nodes you want to remove are turned off before performing the node
removal task.
You can cancel any pending operations to expand the cluster capacity and try to expand the cluster
capacity with a different instance type. See Creating a Heterogeneous Cluster for more details.

What to do next
For more information when you see an alert in the Alerts dashboard of the Prism Element web console or
if the Data Resiliency Status dashboard displays a Critical status, see Maintaining Availability: Node and
Rack Failure.

Manually Replacing a Host


If any issue occurs on a host in a cluster, you can choose to replace that host. The replace host operation
first adds a new host to the cluster, migrates the VMs from the host you want to replace to the newly
added host, and then removes the host you want to replace.

About this task

Note: The replace host operation is not supported in a single-node cluster.

Note: If a host turns unhealthy and you add another host to a cluster for evacuation of data or VMs, AWS charges you
additionally for the new host.

To replace a host in a cluster, perform the following:

Procedure

1. Sign in to NC2 from the My Nutanix dashboard.

2. In the Clusters page, click the name of the cluster.

3. In the Hosts page, click the ellipsis of the corresponding host you want to replace, and click Replace Host.

4. In the Replace Host dialog box, specify why you want to replace the host and click Confirm.

What to do next
If you see an alert in the Alerts dashboard of the Prism Element web console or if the Data Resiliency
Status dashboard displays a Critical status, see Maintaining Availability: Node and Rack Failure.

Creating a Heterogeneous Cluster


NC2 on AWS allows you to use a combination of i3.metal, i3en.metal, and i4i.metal instance types or z1d.metal,
m5d.metal, and m6id.metal instance types while creating a new cluster or expanding the cluster capacity of an already
running cluster.
NC2 on AWS allows you to create a heterogeneous cluster depending on the following conditions:

• NC2 on AWS supports a combination of i3.metal, i3en.metal, and i4i.metal instance types or z1d.metal,
m5d.metal, and m6id.metal instance types. The AWS region must have these instance types supported by NC2 on
AWS. For more information, see Supported Regions and Bare-metal Instances.

Note: You can only create homogeneous clusters with g4dn.metal; it cannot be used to create a heterogeneous
cluster.

• Nutanix recommends that the number of additional nodes be equal to or greater than your cluster's
redundancy factor (RF), and that the cluster be expanded in multiples of the RF. A warning
is displayed if the number of nodes is not evenly divisible by the RF number.
• UVMs that have been created and powered ON in the original cluster running a specific node or a combination of
compatible nodes, as listed below, cannot be live migrated across different node types when other nodes are added
to the cluster. After successful cluster expansion, all UVMs must be powered OFF and powered ON to enable live
migration.

• If z1d.metal is present in the heterogeneous cluster either as the initial node type of the cluster or as the new
node type added to an existing cluster.
• If i4i.metal is the initial node type of the cluster and any other compatible node is added.
• If m6id.metal is the initial node type of the cluster and any other compatible node is added.
• If i3en.metal is the initial node type of the cluster and the i3.metal node is added.
• You can expand or shrink the cluster with any number of i3.metal, i3en.metal, and i4i.metal instance types or
z1d.metal, m5d.metal, and m6id.metal instance types as long as the cluster size remains within the cap of a
maximum of 28 nodes.

Note: You must update the cluster capacity using the NC2 console. You cannot update the cluster capacity using the
Prism Element web console.

For more information on how to add two different node types when expanding a cluster, see Updating the Cluster
Capacity.

Hibernate and Resume in NC2


You can hibernate your NC2 cluster running on AWS if you will not be using it for an extended period of time. Once
the cluster is hibernated, your data is stored in Amazon S3 buckets, and all instances in the cluster are stopped and
released to AWS. This operation can deliver cost savings for your AWS cluster when you do not need the cluster in a
running state while keeping your data and metadata safe in your S3 buckets.

An S3 bucket is created at the time of cluster creation, which remains empty until the Hibernate feature is used. When
the Hibernate feature is used, all data from your NC2 cluster is placed in the S3 bucket. Once the cluster is resumed,
data is hydrated back onto hosts but also stays in the S3 bucket as a backup.
The S3 bucket gets created with a policy that grants s3:GetObject, s3:PutObject, s3:DeleteObject,
s3:ListBucket, s3:ListBucketVersions, and s3:DeleteBucket permissions to the user.
The default encryption, Amazon S3 managed keys (SSE-S3), is enabled for the S3 bucket.

Note: The S3 buckets used must not be publicly accessible.

You can use a gateway endpoint for connectivity to Amazon S3 without using an internet gateway or a NAT device
for your VPC. For more information, see AWS VPC Endpoints for S3.
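You can create such a gateway endpoint with the AWS CLI. The following is a minimal sketch; the VPC ID, route table ID, and region are hypothetical placeholders:

aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 --vpc-endpoint-type Gateway --service-name com.amazonaws.us-west-2.s3 --route-table-ids rtb-0123456789abcdef0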
The hibernate and resume feature is generally available with AOS 6.5.1. All previously hibernated clusters
running AOS 6.0.1 or earlier versions must be resumed once and then upgraded to AOS 6.5.1 or later before they
can be hibernated again. To use the GA version of this feature, upgrade to AOS 6.5.1 or later.
After you hibernate your cluster, you will not be billed for any Nutanix software usage or the AWS bare-metal
instance for the duration the cluster is in the hibernated state. However, you may be charged by AWS for the data
stored in Amazon S3 buckets for the duration the cluster is hibernated. For more information about the Amazon S3
billing, see the AWS documentation.
NC2 does not consume any of your reserved license capacities while a cloud cluster is in the hibernated state. Once a
cloud cluster is resumed, an appropriate license will be automatically applied to the cluster from your reserved license
pool, provided that enough reserved capacity is available to cover your cluster capacity. To learn more about license
reservations for cloud clusters, visit Reserving License Capacity on page 80.
You can hibernate and resume single-node clusters and clusters with three or more nodes.

Note: You cannot hibernate the clusters that are protected by the Cluster Protect feature. You must stop protecting the
cluster before triggering hibernation.
You cannot hibernate a cluster if any of the following conditions are met:

• The cluster is protected.


• The cluster is hosting Prism Central, which is protecting other clusters.
To hibernate a cluster that is not hosting Prism Central, you must first unprotect the cluster. To hibernate a
cluster that is hosting Prism Central, you must first unprotect the cluster, and all other clusters managed by
Prism Central and then turn off Prism Central.

For more architectural details on the hibernate/resume operation, visit the Tech Note for NC2 on AWS.

Hibernating Your NC2 Cluster


You can hibernate a cluster that is in a running state. After you hibernate a cluster, your data will be stored
in an Amazon S3 bucket and AWS bare-metal nodes will be stopped.

Note: Encryption is enabled by default on all S3 buckets used for hibernation.

Before you begin


To avoid unexpected issues, you must shut down all user VMs, Prism Central, and Files servers before
hibernating your Nutanix cluster running on AWS. See the Nutanix Files User Guide for instructions on how
to stop a Files cluster.

About this task


To hibernate your cluster, perform the following steps.

Procedure

1. Sign in to NC2 from the My Nutanix dashboard.

2. Click on the cluster that you want to hibernate. The cluster summary page will open.

3. Select the Actions dropdown and, click Hibernate.

Figure 80: Hibernate a cluster

4. In the Hibernate cluster "Cluster Name" dialog box, review the hibernation guidelines and limitations, and
then type the name of the cluster in the text box.

5. Click Hibernate Cluster.


Visit the Nutanix Hibernate and Resume walkthrough video for more details. The time required to hibernate
your cluster depends on the amount of data in your cluster. Your cluster is hibernated when the status of the
cluster changes to Hibernated.

Resuming an NC2 Cluster


You can resume a cluster that is in a hibernated state. After you resume a cluster, your data is recovered
from the Amazon S3 bucket in the same state it was before you hibernated it.

Note: Your data is retained in S3 buckets for 6 days after a successful resume operation.
When a hibernated cluster is resumed, it returns to the same licensing state it had before entering
hibernation. The IP addresses of hosts and CVMs remain the same as before hibernation.

About this task


To resume a cluster, perform the following:

Procedure

1. Sign in to NC2 from the My Nutanix dashboard.

2. Select the hibernated cluster that you want to resume, and click Resume Cluster.

Figure 81: Resume a cluster

3. In the Resume "Cluster Name" dialog box, click Resume Cluster.


Wait for the status to change to Running and click one of the cluster IP addresses to launch the Prism Element
web console. Visit the Nutanix Hibernate and Resume walkthrough video for more details.

Limitations in Hibernate and Resume


To prevent the hibernate operation from getting stuck or failing, consider the following limitations and take
appropriate action:

• Do not attempt failover, failback, VM restore or create new DR configurations during hibernate or resume. Any
such running operations might fail if you start hibernating a cluster.
• Disable SyncRep schedules from Prism Central for a cluster that is used as a source or target for SyncRep before
hibernating that cluster. Failure to do so might result in data loss.
• Ensure that no ongoing synchronous or asynchronous replications are happening when you initiate the cluster
hibernation.
• Disable existing near-sync/minutely snapshots and do not configure new minutely snapshots during the hibernate
or resume operation. You may have to wait until the data of the minutely snapshots gets garbage collected before
trying to hibernate again. The waiting period could be approximately 70 minutes.
• Remove remote schedules of protection policies and suspend remote schedules of protection domains targeting a
cluster until the cluster is hibernated.
• Snapshot retention is not guaranteed after the cluster has been resumed. Long-term snapshot retention is subject to
hibernate and resume durations and retention policies.
• If a node in the cluster goes down or degrades, or the CVM goes down, the hibernate or resume operation might
not succeed.
• Hibernate and resume works for Autonomous Extent Store (AES) containers only. For NC2 on AWS, every
container is automatically enabled with AES.

Terminating a Cluster
You can terminate an NC2 cluster if you do not want to use the cluster anymore.

Note: You must only terminate the clusters from the NC2 console and not from your public cloud console. If you try
to terminate the cluster or some nodes in the cluster from your cloud console, then NC2 will continue to attempt to re-
provision your nodes in the cluster.
You do not need to delete the license reservation when terminating an NC2 cluster if you intend to use the
same license reservation quantity for a cluster you might create in the future.

Note: Ensure that the cluster on which Prism Central is deployed is not deleted if Prism Central has multiple Prism
Elements registered with it.

To terminate an NC2 cluster, perform the following procedure.

Procedure

1. Sign in to NC2 from the My Nutanix dashboard.

2. Go to the Clusters page, click the ellipsis in the row of the cluster you want to terminate, and click Terminate.

3. In the Terminate tab, select the confirmation message to terminate the cluster.

Figure 82: Terminate an NC2 cluster

Multicast Traffic Management


NC2 leverages the multicast capability of AWS Transit Gateway to support multicast traffic. You can enable dynamic
multicast group membership to forward multicast traffic between subnets of a VPC where NC2 is running. The transit
gateway controls how traffic is routed among all the connected spoke networks using AWS route tables.
IP Multicast allows efficient one-to-many and many-to-many traffic by sending IP traffic to a group of interested
receivers in a single transmission. The use of IP Multicast helps maintain an efficient network as the source
sends a single copy of IP traffic irrespective of the number of receivers. The interested receivers subscribe to a
Multicast group. The group is dynamic because interested receivers can join or leave the group as required. For more
information on AWS Transit Gateway, see AWS Documentation. For more information on how to enable multicast
and configure AWS Transit Gateway for multicast, see Configuring AWS Transit Gateway for Multicast.

Note: Multicast traffic is disabled by default in NC2. You can enable multicast traffic for each cluster so that clusters
running in AWS do not drop the multicast traffic egressing from AHV.

UVMs use the Internet Group Management Protocol (IGMP) to subscribe to multicast groups. When a
subnet is added to the AWS Transit Gateway multicast domain, AWS snoops all the IGMP traffic on the
ENIs within the added subnets and maintains a multicast state to route the multicast traffic to the intended receivers.
When multicast traffic is enabled for UVMs, clusters running in AWS do not drop the multicast traffic
egressing from AHV. A set of hosts that send and receive the same multicast traffic is called a multicast
group. The multicast traffic is routed to the subscribed UVMs for a given multicast group based on the
multicast membership table.

For more information on multicast concepts, see Multicast on transit gateways - Amazon VPC, and on how
to manage multicast domains and groups, see Managing multicast domains - Amazon VPC and Managing
multicast groups - Amazon VPC.
For multicast traffic to work in NC2, IGMP snooping must be enabled on AHV so that AHV can send multicast
traffic to only subscribed UVMs. If IGMP snooping is disabled, AHV will send multicast traffic to all UVMs, which
might be undesirable. This unwanted traffic results in consuming more computing power, slowing down normal
functions, and making the network vulnerable to security risks. With IGMP snooping enabled, networks use less
bandwidth and operate faster.

Note: A default virtual switch is created automatically when multicast is enabled. You can enable or disable IGMP
snooping only for the UVMs attached to the default virtual switch. You cannot enable or disable IGMP snooping at
the subnet level. All UVMs associated with the default virtual switch will have IGMP snooping enabled or disabled.
Multicast traffic is supported only for UVM subnets and not for CVM (management cluster) subnets. For instructions,
see Enabling or Disabling IGMP Snooping.

When a UVM with multicast traffic enabled is migrated to another NC2 node in the same cluster, multicast traffic can
be forwarded to that UVM even after migration.
The following figure shows a typical topology where both the multicast sender and receiver are in the same VPC.
Various scenarios with different multicast senders and receivers are described below.

Figure 83: Multicast traffic with the multicast sender and receiver in the same VPC

In this example, the AWS transit gateway is configured on AWS Subnet X. The UVMs in blue are in Subnet X,
and the UVMs in green are in Subnet Y. The EC2 instance can be any AWS-native (non-bare-metal) compute instance
outside of NC2. All the components shown in this example, other than the EC2-native instance, belong to a single
cluster.
A multicast sender is a host that sends multicast traffic; it can be any EC2-native instance or any UVM on an
NC2 host with IGMP snooping enabled. A multicast receiver is a host that receives multicast traffic; it can be any
EC2-native instance or UVM. If snooping is disabled, UVMs that are not configured as receivers still receive
multicast traffic when they share a subnet with a UVM that is configured as a receiver. UVMs that are not
configured as receivers and that do not share a subnet with a configured receiver do not receive multicast traffic,
regardless of the snooping status.

The following table shows how multicast traffic will be routed for different senders and receivers based on the IGMP
snooping status.

Table 20: Multicast traffic routing for multicast senders and receivers

• Sender: EC2-native | Receivers: UVM1, UVM2, UVM4 | IGMP snooping: Enabled
  Traffic from the sender (EC2-native) is received by the configured receivers UVM1, UVM2, and UVM4.
• Sender: EC2-native | Receivers: UVM1, UVM2, UVM4 | IGMP snooping: Disabled
  Traffic from EC2-native is received by:
  • All UVMs on NC2-Host1 that share the subnet with UVM1/UVM2. This includes UVM1, UVM2, and UVM3.
  • All UVMs on NC2-Host2 that share the subnet with UVM4. This includes UVM4, UVM5, and UVM6.
• Sender: UVM8 | Receivers: UVM1, UVM2, UVM4 | IGMP snooping: Enabled
  Traffic from UVM8 is received by the configured receivers UVM1, UVM2, and UVM4.
• Sender: UVM8 | Receivers: UVM1, UVM2, UVM4 | IGMP snooping: Disabled
  Traffic from UVM8 is received by:
  • All UVMs on NC2-Host1 that share the subnet with configured receiver UVM1/UVM2, that is, UVM1, UVM2, and UVM3.
  • All UVMs on NC2-Host2 that share the subnet with configured receiver UVM4, that is, UVM4, UVM5, and UVM6.
  • UVM9, because UVM9 shares a subnet with the sender UVM8 on NC2-Host3.
• Sender: UVM8 or EC2-native instance | Receivers: None | IGMP snooping: Enabled or Disabled
  No UVM or EC2-native instance receives traffic because no receivers are configured.
• Sender: UVM7 | Receivers: UVM1, UVM2, UVM4 | IGMP snooping: Enabled
  Traffic from UVM7 is received by the configured receivers UVM1, UVM2, and UVM4.

  Note: When IGMP snooping is enabled, traffic from the multicast sender is received only by the multicast receivers.

• Sender: UVM7 | Receivers: UVM1, UVM2, UVM4 | IGMP snooping: Disabled
  Traffic from UVM7 is received by:
  • All UVMs on NC2-Host1 that share the subnet with the configured receiver UVM1/UVM2, that is, UVM1, UVM2, and UVM3.
  • All UVMs on NC2-Host2 that share the subnet with the configured receiver UVM4, that is, UVM4, UVM5, and UVM6.
  • All UVMs that share a subnet with the sender UVM7 on NC2-Host3. In this case, no such UVM is available on NC2-Host3.
• Sender: UVM8 | Receivers: EC2-native instance on Subnet Y | IGMP snooping: Enabled
  The configured receiver EC2-native instance on Subnet Y receives traffic.
• Sender: UVM8 | Receivers: EC2-native instance on Subnet Y | IGMP snooping: Disabled
  Traffic from UVM8 is received by:
  • The configured receiver EC2-native instance on Subnet Y.
  • UVM9, because it shares the subnet with the sender UVM8 on NC2-Host3.

The following figure shows an example topology where both the multicast sender and receiver are in different VPCs.

Figure 84: Multicast traffic with the multicast sender and receiver in different VPCs

The transit gateway is configured on Subnet X in VPC 1. The transit gateway allows connecting different VPCs (for
example, Subnet X in VPC 1 to Subnet Y in VPC 2). The following table shows how multicast traffic will be routed
for certain senders and receivers based on the IGMP snooping status.

Table 21: Multicast traffic routing for multicast senders and receivers

• Sender: EC2-native2 / UVM3 | Receivers: UVM1, EC2-native1 | AOS IGMP snooping: Enabled
  Traffic from the sender is received by the configured receivers UVM1 and EC2-native1.

  Note: When IGMP snooping is enabled, traffic from the multicast sender is received only by the multicast receivers.

• Sender: EC2-native2 | Receivers: UVM1, EC2-native1 | AOS IGMP snooping: Disabled
  Traffic from EC2-native2 is received by EC2-native1 and all UVMs on NC2-Host1 that share the subnet with UVM1, that is, UVM2.
• Sender: UVM3 | Receivers: UVM1, EC2-native1 | AOS IGMP snooping: Disabled
  Traffic from UVM3 is received by EC2-native1 and all UVMs on NC2-Host1 that share the subnet with UVM1.

Configuring AWS Transit Gateway for Multicast


Follow these steps to configure the AWS transit gateway and enable multicast traffic:

Procedure

1. Run the following command on the CVM to enable IGMP snooping using aCLI.
net.update_virtual_switch virtual-switch-name enable_igmp_snooping=true
enable_igmp_querier=[true | false] igmp_query_vlan_list=VLAN IDs
igmp_snooping_timeout=timeout
The default timeout is 300 seconds. The AWS Transit Gateway acts as a multicast querier, and you have the
option to add additional multicast queriers. Set the enable_igmp_querier variable to true or false to
enable or disable the AOS IGMP querier.
If you want to send IGMP queries to only specific subnets, specify the list of VLANs for
igmp_query_vlan_list. You can get the subnet-to-VLAN mapping using the net.list aCLI command.

For instructions, see Enabling or Disabling IGMP Snooping.


IGMP snooping allows the host to track which UVMs need the multicast traffic and send the multicast traffic to
only those UVMs.
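For example, assuming the default virtual switch is named vs0 and using the default 300-second timeout (both values are illustrative; check your environment), the invocation might look like:

nutanix@cvm$ acli net.update_virtual_switch vs0 enable_igmp_snooping=true enable_igmp_querier=false igmp_snooping_timeout=300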

2. Configure a transit gateway for multicast traffic.


For more information, see Create a transit gateway.

Note: While creating an AWS transit gateway, ensure that you select the Multicast support option. You can enable
the transit gateway for multicast traffic only when you create the transit gateway; you cannot modify an existing
transit gateway to enable multicast traffic.

3. Attach an AWS Transit Gateway to a VPC.


For more information, see Create a transit gateway attachment to a VPC.

4. Configure a multicast domain for IGMP support.


For more information, see Creating an IGMP multicast domain. Use the following settings:

• Enable IGMPv2 support


• Disable Static sources support

5. Create an association between subnets in the transit gateway VPC attachment and the multicast domain.
For more information, see Associating VPC attachments and subnets with a multicast domain.
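As a reference, steps 2 through 5 can also be performed with the AWS CLI. The following is a minimal sketch under the settings described above; all resource IDs and the subnet ID are hypothetical placeholders:

aws ec2 create-transit-gateway --options MulticastSupport=enable
aws ec2 create-transit-gateway-vpc-attachment --transit-gateway-id tgw-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0 --subnet-ids subnet-0123456789abcdef0
aws ec2 create-transit-gateway-multicast-domain --transit-gateway-id tgw-0123456789abcdef0 --options Igmpv2Support=enable,StaticSourcesSupport=disable
aws ec2 associate-transit-gateway-multicast-domain --transit-gateway-multicast-domain-id tgw-mcast-domain-0123456789abcdef0 --transit-gateway-attachment-id tgw-attach-0123456789abcdef0 --subnet-ids subnet-0123456789abcdef0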

6. Change the default IGMP version for all IGMP group members by running the following command on each UVM
that is intended to be a multicast receiver on the cluster:
sudo sysctl net.ipv4.conf.eth0.force_igmp_version=2
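This sysctl setting does not persist across guest reboots. On a typical Linux UVM, you could persist it as follows (a sketch assuming the interface is named eth0):

echo 'net.ipv4.conf.eth0.force_igmp_version=2' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p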

7. Update AWS security groups:

• Configure the inbound security group rule to allow traffic from the sender by specifying the sender’s IP
address.
• Configure the outbound security rule that allows traffic to the multicast group IP address.
Also, allow IGMP queries from the Transit Gateway; add the source IP address as 0.0.0.0/32, and the protocol
must be IGMP. For more information, see Multicast routing - Amazon VPC.
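For example, an inbound rule allowing IGMP queries from the transit gateway could be added with the AWS CLI as follows; the security group ID is a hypothetical placeholder, and 2 is the IP protocol number for IGMP:

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --ip-permissions 'IpProtocol=2,IpRanges=[{CidrIp=0.0.0.0/32}]'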

AWS Events in NC2


Events raised by AWS are sent to the cluster and the NC2 console displays these events in the notification center.
The following table shows the AWS events displayed in the notification center and actions taken by the NC2 console
to handle them:

• Event: Instance Retirement
  Description: At the scheduled time, the bare-metal instance is stopped if it is backed by Amazon EBS, or terminated if it is backed by an instance store.
  Action: Nutanix automatically condemns the host, triggering replacement of the host.
• Event: System reboot
  Description: At the scheduled time, the host running on the bare-metal instance is restarted.
  Action: Nutanix restarts the AHV host.
• Event: Instance status impaired
  Description: An EC2 instance status check is failing for the bare-metal instance.
  Action: No action is taken.
• Event: System status impaired
  Description: An EC2 system status check is failing for the bare-metal instance.
  Action: No action is taken.
• Event: Instance Stopped
  Description: EC2 reports that the bare-metal instance is in the stopped state when Nutanix expects it to be in the running state. When an instance enters a stopped state, the hardware reservation is lost and the instance store is erased.
  Action: Nutanix automatically condemns the host, triggering replacement of the host.
• Event: Instance Terminated
  Description: EC2 reports that the instance is in the terminated state when Nutanix expects it to be in the running state. When an instance enters a terminated state, the hardware reservation is lost and the instance store is erased.
  Action: Nutanix automatically condemns the host, triggering replacement of the host.

Displaying AWS Events

About this task


This section provides instructions about how to display AWS events in the NC2 console. The NC2 console does not
display AOS events. View the AOS alerts in the Prism Element web console.
To display the AWS events, perform the following:

Procedure

1. Sign in to NC2 from the My Nutanix dashboard.

2. Select the ellipsis button of a corresponding cluster and click Notification Center.

3. Navigate to the Notifications tab.


The Notifications page displays details of the AWS events, such as the message, entity details, and severity,
for the AWS events that occur in your NC2 on AWS environment.

4. To acknowledge a notification, in the row of a notification, click the corresponding ellipsis, and select
Acknowledge.

Viewing Licensing Details
The Clusters page displays the details of the licenses that you have applied to a cluster and also provides
a link to the licensing portal where you can view and manage all your Nutanix licenses.

About this task


To display the licensing details of a cluster, perform the following:

Procedure

1. Sign in to NC2 from the My Nutanix dashboard.

Note: Ensure that you select the correct workspace from the Workspace dropdown list on the My Nutanix
dashboard. For more information on workspaces, see Workspace Management.

2. In the Clusters page, click the name of the cluster whose licensing details you want to display.

3. In the Properties section, click View Details in Licensing.


This section displays information such as number of cores, memory capacity, and storage capacity of the cluster
based on the license that you have applied to the cluster.
The Nutanix licensing portal where you can view and manage all your Nutanix licenses is displayed.

4. You can check whether you are running the cluster with Windows License Included EC2 instances by navigating to the
cluster’s Summary page and the Microsoft Windows Licensing section, and then checking whether Run Microsoft
Windows Server on this Cluster is set to True.
You pay the cost associated with Microsoft Windows Server licensing directly to AWS.

Figure 85: Microsoft Windows Licensing Details

Support Log Bundle Collection


You can generate a support logbay bundle that you can send to Nutanix Support if you need further assistance with a
reported issue.
The NC2 on AWS support logbay bundle contains all the standard on-prem AOS logs and also the following
NC2-specific logs:

• Clusters_agents_upgrader
• Cluster_agent
• Host_agent
• Hostsetup
• Infra_gateway
• cloudnet
You can collect the logs either by using the Prism Element web console or Nutanix Cluster Check (NCC) command
line.
See Collecting Logs from the Web Console with Logbay for instructions about how to collect the logs by using
the Prism Element web console.
See Logbay Log Collection (Command Line) for instructions about how to collect the logs by using the NCC
command line.
You can collect the logs using logbay for a certain time frame and share the respective log bundle with Nutanix
Support to investigate the reported issue. You can upload logs collected by logbay on the Nutanix SFTP or FTP
server.
See Uploading Logbay Logs for more information on how to upload the collected logs.
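For example, a basic collection can be started from any CVM as shown below; time-range and tag options are described in the Logbay documentation referenced above:

nutanix@cvm$ logbay collect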

CLUSTER PROTECT CONFIGURATION
Nutanix Cloud Clusters (NC2) on AWS runs on bare-metal instances that use local NVMe storage. A risk of data
loss exists in failure scenarios including, but not limited to, Availability Zone (AZ)
failures or users terminating bare-metal nodes from the AWS console.
With the Cluster Protect feature, you can protect your NC2 cluster data, including Prism Central configuration, UVM
data, and volume group data, with snapshots stored in AWS S3 buckets. When using Cluster Protect, you can recreate
your cluster with the same configurations and recover your UVM data and Prism Central configuration from S3
buckets. The Cluster Protect feature thus helps ensure business continuity even in the event of a complete AWS AZ-
level failure.
With Cluster Protect, you can protect the following entities:

• VM configuration and data, such as VM Disks and Volume Groups.


• Prism Central configuration data.
To learn more about the services that can be protected using Prism Central Disaster Recovery, see Prism Central
Disaster Recovery.
• Flow Network Security policies.
• DR configurations, such as Protection Policies, Recovery Plans, and VM categories.
To use the Cluster Protect feature, you must set up two Amazon S3 buckets, one to back up the UVM and Volume
Groups data and another to back up the Prism Central data. You must also use AOS or NCI Ultimate licensing tier.
You can use a gateway endpoint for connectivity to Amazon S3 without using an internet gateway or a NAT device
for your VPC. For more information, see AWS VPC Endpoints for S3.
The following figure illustrates a deployment scenario where multiple VMs run on various NC2 clusters within the
same AZ. At least one Prism Central instance runs on one of the clusters and is configured to manage multiple NC2
clusters in the same AZ.

Figure 86: Cluster Protect Illustration

In the event of a failure event that impacts multiple clusters, you can first recover a cluster that will be used to recover
Prism Central (if the failure event also impacted Prism Central) and then recover the remaining failed clusters and
their associated VMs, and Volume Groups from the backups in the S3 buckets. If the failure is not AZ-wide and
Prism Central of the impacted cluster is hosted on another cluster and that Prism Central is not impacted, then you can
restore the impacted cluster from that existing Prism Central.

Note: With Cluster Protect, all the VMs in a cluster are auto-protected using a single category value and hence are
recovered by a single Recovery Plan. A single Recovery Plan can recover up to 300 entities. Nutanix does not support
multiple recovery plans in parallel, irrespective of the number of entities in the recovery plan.

You can register multiple clusters with Prism Central in the same AWS AZ and enable Cluster Protect to back up
those clusters on that Prism Central.

Note: Currently, up to five NC2 clusters registered with one Prism Central in the same AWS AZ can be protected by
Cluster Protect.

You need to follow the protection and recovery procedures individually for each cluster that needs to be protected
and recovered. Prism Central can be recovered on any AWS cluster that it was previously registered with. All UVM
and volume group data is protected automatically to an Amazon S3 bucket with a 1-hour Recovery Point Objective
(RPO). Only the two most recent snapshots per protected entity are retained in the S3 bucket.
When the cluster recovery process is initiated, the impacted clusters are marked as failed, and new recovery clusters
with the same configurations are created through the NC2 console. If you had previously opted to use the NC2
console to create VPCs, subnets, and associated security groups, then NC2 automatically creates those resources
again during the recovery process. Otherwise, you must first manually recreate those resources in your AWS console.
Cluster Protect can protect the following services and recover the associated metadata:

• Leap
• Flow Network Security
• Prism Pro (AIOps)
• VM management
• Cluster management
• Identity and Access Management (IAMv1)
• Categories
• Networking
The following services continue to run, but they are not protected, so data associated with them is not
recovered.

• Nutanix Files
• Self-Service
• LCM
• Nutanix Kubernetes Engine
• Objects
• Catalog
• Images
• VM templates
• Reporting Template

Prerequisites for Cluster Protect


You must meet the following requirements for using the Cluster Protect feature:

• AOS version must be 6.7 or higher and Prism Central version must be 2023.3 or higher.
• License tier must be AOS Ultimate or NCI Ultimate.
• Subnets used for Prism Central and Multicloud Snapshot Technology (MST) must be different from the UVM
subnet.

Note: You can use the same subnet or different subnets for Prism Central and MST.

• Clusters to be protected by Cluster Protect must be registered with the same Prism Central instance.

Note: Prism Central that manages protected clusters can also be protected by Prism Central Disaster Recovery.

• Two new AWS S3 buckets must be manually created with the bucket names prefixed with nutanix-clusters.
• Nutanix Guest Tools (NGT) must be installed on all UVMs.
• You must re-run the CloudFormation script if you have already added your AWS account in the NC2 console,
so that the IAM role that has the required permissions to access only the S3 buckets with the nutanix-clusters
prefix comes into effect.

Note: If you already have run the CloudFormation template, you must run it again to use Cluster Protect on newly
deployed NC2 clusters.

For more information, see https://portal.nutanix.com/kb/15256.

Note: Ports 30900 and 30990 are opened by default while creating a new NC2 cluster and are required for
communication between AOS and MST to back up the VM and Volume Groups data.

Limitations of Cluster Protect


Understand the following limitations while using the Cluster Protect feature:

• The Cluster Protect feature and Protection Policies cannot be used at the same time in the same cluster to protect
the data. If a user-created protection or DR policy already protects a VM or Volume Group, it cannot also be
protected with the Cluster Protect feature. If you need to use DR configurations for a cluster, you must use those
protection policies instead of Cluster Protect to protect your data. A new DR policy creation fails if the cluster is
already protected using the Cluster Protect feature.
• You cannot hibernate or terminate the clusters that are protected by the Cluster Protect feature. You must disable
Cluster Protect before triggering hibernation or termination.
• All clusters being protected must be in the same Availability Zone. Prism Central must be deployed within the
same Availability Zone as the clusters it is protecting.
• The Cluster Protect feature is available only for new cluster deployments. Any clusters created before AOS 6.7
cannot be protected using this feature.
• A recovered VDI cluster might consume more storage space than the initial storage space consumed by the
protected VDI cluster. This issue might arise because the logic that efficiently creates VDI clones is inactive
during cluster recovery. This issue might also occur if there are multiple clones on the source that are created from
the same image. As a workaround, you can add additional nodes to your cluster if your cluster runs out of space
during the recovery process.
For more information, see https://portal.nutanix.com/kb/14558.
• With the Cluster Protect feature, up to 300 entities (VMs or Volume Groups) per Prism Element and 500 entities
per Prism Central can be protected. Based on the tests Nutanix has performed, Multicloud Snapshot Technology
(MST) can manage a maximum of 15 TB of data across all managed clusters.
The recovery process will be blocked if the number of entities exceeds the allowed limit. When there are more
than 300 entities, you can contact Nutanix Support to continue recovery. For more information, see https://
portal.nutanix.com/kb/14961.
• A storage leak might occur after MST rebuild for VM and Volume Group disks whose garbage data processing
might not have been completed by MST at the time of cluster failure. In such a case, extra data objects might be
stored in S3, leading to more storage usage than expected.
• The Mantle Master Key, stored on the local disk on all Prism Central VMs, gets lost on Prism Central failure. You
must reactivate the Mantle after recovering Prism Central. Any Playbook with stored credentials might fail on the
restored Prism Central until the credentials are reentered in the playbook.
• When the protected cluster has data-at-rest encryption enabled, the entity data is written back to the recovered
Prism Element in cleartext without encryption. An originally encrypted entity will now be unencrypted. Nutanix
recommends reenabling data-at-rest encryption immediately after recovery of the cluster.
• The CHAP authentication encrypted keys stored inside the Prism Central IDF tables are not backed up. A Prism
Central backup in S3 cannot restore these keys. After restoring the cluster and Prism Central, you must manually
configure and activate CHAP between the VM and corresponding Volume Groups.
• After recovering the cluster and Prism Central, you can manually update the storage container properties from the
Prism Element web console as they might not be recovered automatically.
• Volume Groups created by Nutanix Docker Volume Plugin (DVP) for internal management might get backed up
to S3 instead of being deleted. In this case, you can run the following CLI command on the Prism Central VM
before protecting the cluster so that these Volume Groups are marked as internal in 1 minute and are not backed
up to S3.
nutanix@pcvm$ mspctl controller flag set vg-internal-operator-period 1

Protecting NC2 Clusters


Follow these steps to protect your NC2 clusters:

Procedure

1. Get ready for cluster protection:

a. Create clusters in a new VPC or an existing VPC using the NC2 console.

Note: While deploying a cluster, ensure that you select the option to protect the cluster.

For more information, see Creating a Cluster.


b. Create two new S3 buckets in the AWS console.
For more information, see Creating S3 Buckets.
c. Deploy Prism Central on one of these clusters and then register the remaining NC2 clusters with Prism Central.
For more information, see Protecting Prism Central Configuration.

2. Protect the cluster:

a. Protect Prism Central data by running CLI commands.


For more information, see Protecting Prism Central Configuration.
b. Enable the Multicloud Snapshot Technology (MST) by running CLI commands to protect UVM data.
For more information, see Deploying Multicloud Snapshot Technology.
c. Protect NC2 clusters by running CLI commands.

Note: You can protect your NC2 clusters even without protecting the Prism Central instance that is managing
these NC2 clusters; however, Nutanix recommends protecting your Prism Central instance as well.

For more information, see Protecting UVM and Volume Groups Data.

Note: After you complete all of these steps, wait for an hour and then check that at least one backup of Prism
Central is completed. One Prism Central backup must be completed after backing up the UVM data so that
protection policies, recovery points, and so on created during UVM backups are included in the Prism Central
backup. To ensure the same, run the following command and validate that the Prism Central replication to the S3
bucket has happened successfully:
nutanix@pcvm$ pcdr-cli list-protection-targets
The command returns the details in the following format:

UUID: 8xxxxxf5-3xxx-3xxx-bxxc-dxxxxxxxxxx6
NAME: https://nutanix-clusters-xxxx-pcdr-3node.s3.us-west-2.amazonaws.com
TIME-ELAPSED-SINCE-LAST-SYNC: 30m59s
BACKUP-PAUSED: false
BACKUP-PAUSED-REASON:
TYPE: kS3
The CLI shows sync in progress until the Prism Central data is synced to S3 for the first time. After
that, the CLI shows a non-zero time elapsed since the last sync. This confirms that the Prism Central
backup has been completed.

Creating S3 Buckets
You must set up two new Amazon S3 buckets with the default settings, one to back up the UVMs and volume group
data, and another to back up the Prism Central data. These S3 buckets must be empty and exclusively used only for
UVMs, volume groups, and Prism Central backups.
For instructions on how to create an S3 bucket, see the AWS documentation. While creating the S3 buckets, follow
the NC2-specific recommendations:

• The S3 bucket names are prefixed with nutanix-clusters.

Note: NC2 creates an IAM role with the required permissions to access S3 buckets with the nutanix-clusters
prefix. This IAM role is added to the CloudFormation template. You must run the CloudFormation template
while adding your AWS cloud account. If you already have run the CloudFormation template, you must run
it again to be able to use Cluster Protect on newly deployed NC2 clusters. For more information, see https://
portal.nutanix.com/kb/15256.
If the S3 buckets do not have the nutanix-clusters prefix, the commands to protect Prism Central and
clusters fail.

• Ensure that public access to these S3 buckets is blocked by default.
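For example, one of the two buckets could be created and locked down with the AWS CLI as follows; the bucket name and region are hypothetical, but the name keeps the required nutanix-clusters prefix:

aws s3api create-bucket --bucket nutanix-clusters-example-uvm-backup --region us-west-2 --create-bucket-configuration LocationConstraint=us-west-2
aws s3api put-public-access-block --bucket nutanix-clusters-example-uvm-backup --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true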

Protecting Prism Central Configuration
After you create an NC2 cluster and the cluster status becomes Running, you can see the Cluster Protection
section on the cluster Summary page.

Note: While the cluster protection status can be checked from the cluster Summary page on the NC2 console, the Prism
Central protection status can only be checked by running the pcdr-cli list-protection-targets command on the
Prism Central VM.

Figure 87: Cluster Protection Summary

The Cluster Protect section instructs you to:

1. Deploy a new Prism Central instance on this cluster or register the cluster with an existing Prism Central.

Note: These clusters must be in the same AWS AZ as Prism Central.

• To deploy a new Prism Central: Follow the instructions described in Installing Prism Central (1-Click
Internet) to install Prism Central.
When deploying Prism Central, follow these recommendations:

• The Prism Central subnet must be a private subnet, and must only be used for Prism Central. The Prism
Central subnet must not be used for UVMs.
• When creating a DHCP pool in Prism Element, ensure that at least 3 IP addresses are kept outside the
DHCP pool for MST.
If you choose to use IPs from the DHCP pool, you can run the following aCLI command to reserve the IPs
in a network from the DHCP pool:
acli net.add_to_ip_blacklist <network_name> ip_list=ip_address1,ip_address2

• While deploying Prism Central, do not change the Microservices Platform (MSP) settings because these
are required to enable MST. You must choose Private network (defaults) in the MSP configuration when
prompted.

Note: You must not use managed networks for CMSP clusters with Cluster Protect enabled. CMSP cluster
is deployed in the VXLAN/kPrivateNetwork mode only.

• Modify the User management security group of the cluster hosting Prism Central to allow traffic from the
Internal Management subnet of the cluster hosting Prism Central to the Prism Central subnet. A rule to
allow traffic on all protocols gets added and the Management Subnet CIDR is used as the source. For more
information, see Port and Endpoint Requirements.

Note: Ports 30900 and 30990 are opened by default while creating a new NC2 cluster and are required for
communication between AOS and Multicloud Snapshot Technology (MST) to back up the VM and Volume
Groups data.

• To register a cluster with Prism Central: After you deploy Prism Central on one of the NC2 clusters in the
VPC, you must register your remaining NC2 clusters in that VPC to Prism Central that you deployed.
To register a cluster with Prism Central, follow the steps described in Registering a Cluster with Prism
Central.

Note: Any NC2 clusters that are not configured with the Prism Central that is hosting the Multicloud Snapshot
Technology will not be protected by Prism Central.

2. Configure the Prism Central protection and UVMs data protection. For more information, see Protecting Prism
Central Configuration and Protecting UVM and Volume Groups Data.

Protecting Prism Central Configuration


After creating an NC2 cluster and registering the cluster with Prism Central, you can perform the following steps to
protect your Prism Central configuration to an S3 bucket. Ensure that you have created a new AWS S3 bucket for
Prism Central backup as described in Creating S3 Buckets.

Note: In addition to protecting Prism Central to the S3 bucket, if your Prism Central instance is registered with
multiple NC2 clusters, then you must also protect Prism Central to one or more of the NC2 clusters it is registered with.
In this case, you must prioritize recovery of Prism Central configuration from another NC2 cluster where Prism Central
configuration was backed up if that NC2 cluster has not also been lost to a failure event. For more information, see
Protecting Prism Central.

To protect the Prism Central configuration data, perform these steps:
1. Sign in to the Prism Central VM using the credentials provided while installing Prism Central.
2. Run the following CLI command:
nutanix@pcvm$ pcdr-cli protect -b s3_bucket_name -r aws_region
Replace s3_bucket_name with the name of the S3 bucket used for Prism Central protection and aws_region with
the region of the S3 bucket.

Note: The S3 bucket name must start with nutanix-clusters.

The Prism Central configuration gets backed up to the S3 bucket once every hour and is available in the pcdr/
folder in the S3 bucket.
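For example, with a hypothetical bucket in us-west-2, the command would be:

nutanix@pcvm$ pcdr-cli protect -b nutanix-clusters-example-pcdr -r us-west-2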

Figure 88: Prism Central Configuration Backup in S3


3. Run the following command to validate that the Prism Central replication to the S3 bucket is happening
successfully:
nutanix@pcvm$ pcdr-cli list-protection-targets
The command returns the details in the following format:

UUID: 8xxxxxf5-3xxx-3xxx-bxxc-dxxxxxxxxxx6
NAME: https://nutanix-clusters-xxxx-pcdr-3node.s3.us-west-2.amazonaws.com
TIME-ELAPSED-SINCE-LAST-SYNC: 30m59s
BACKUP-PAUSED: false
BACKUP-PAUSED-REASON:
TYPE: kS3
The CLI shows sync in progress until the Prism Central data is synced to S3 for the first time. After that, the
CLI shows a non-zero time elapsed since the last sync. This confirms that the Prism Central backup has been
completed.
Wait until the sync is completed before performing the next steps to deploy MST.

Deploying Multicloud Snapshot Technology


The Cluster Protect feature uses Multicloud Snapshot Technology (MST) to replicate the protected UVM and Volume
Group entities to an AWS S3 bucket. The MST uses the Nutanix Disaster Recovery infrastructure to periodically
create recovery points with a Recovery Point Objective (RPO) of 1 hour and push the snapshots to the S3 bucket.

Note: When creating a DHCP pool in Prism Element, ensure that at least 3 IP addresses are reserved to be used with
the MST and another 3 for the Prism Central VM to be deployed (that are added as Virtual IPs during Prism Central
deployment). The static IPs reserved for the MST must be outside the DHCP range of the MST subnet. Also, 4 IPs from
the DHCP range of the MST subnet will be used by the MST VMs.
The 4 MST VMs include three MSP controller VMs and one MSP LB VM. Each MSP Controller VM
consumes 8 vCPUs and 16 GiB memory, and the MSP LB VM consumes 2 vCPUs and 4 GiB memory.
Therefore, MST requires 26 vCPUs and 52 GiB memory.

To define the IP addresses for the MST, update the subnet > Settings > Network Configuration > IP Address
Pool.

Figure 89: Define IP Addresses for MST

To enable the Multicloud Snapshot Technology:

Procedure

1. Sign in to the Prism Central VM using the credentials provided while installing Prism Central.

2. Run the following CLI command:


nutanix@pcvm$ clustermgmt-cli deploy-cloudSnapEngine -b S3_bucket_name -r aws region
-i IP1,IP2,IP3 -s Private Subnet
In this command, the CloudSnapEngine represents the Multicloud Snapshot Technology.
Replace the variables with their appropriate values as follows:

• S3_bucket_name: the S3 bucket configured for UVM snapshots.

Note: The S3 bucket name must start with nutanix-clusters.

• aws region: the AWS region where the S3 bucket is created.


• IP1,IP2,IP3: the 3 IP addresses used for the MST service VMs.
• Private Subnet: the AWS private subnet configured for Prism Central.
The deploy-cloudSnapEngine command deploys the MST cluster. It can take up to 60 minutes for the MST
service to deploy. This command also displays the MST deployment status. You can rerun this command to check
the MST deployment status. If MST deployment has failed, you need to first clean up the failed deployment
by running the clustermgmt-cli delete-cloudSnapEngine command and then run the deploy-cloudSnapEngine
command.
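For example, with hypothetical values for the bucket name, region, reserved IP addresses, and subnet name, the invocation might look like:

nutanix@pcvm$ clustermgmt-cli deploy-cloudSnapEngine -b nutanix-clusters-example-uvm-backup -r us-west-2 -i 10.0.128.11,10.0.128.12,10.0.128.13 -s pc-private-subnet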

3. Check if the MST configuration data is being sent to the S3 bucket created for UVMs.

Figure 90: MST Configuration Data in the S3 Bucket

What to do next
Back up all UVM and Volume Groups data from NC2 clusters. For more information, see Protecting UVM
and Volume Groups Data.

Protecting UVM and Volume Groups Data


Follow these steps to back up all UVM and Volume Groups data from NC2 clusters:

Note: You must run this command separately for each NC2 cluster you want to protect by specifying the UUID for
each NC2 cluster. This command also creates a recovery point for the protected entities.

Procedure

1. Sign in to the Prism Central VM using the credentials provided while installing Prism Central.

2. Run the following CLI command:
nutanix@pcvm$ clustermgmt-cli protect-cluster -u uuid
Replace uuid with the ID of the NC2 cluster you want to protect. You can find the UUID listed as Cluster ID
under General in the cluster Summary page in the NC2 console.
Your S3 bucket starts showing the UVMs data.
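For example, using a hypothetical cluster UUID copied from the NC2 console:

nutanix@pcvm$ clustermgmt-cli protect-cluster -u 0005a1b2-c3d4-e5f6-7890-abcdef123456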

Figure 91: UVM Data in the S3 Bucket

Note:
If the clustermgmt-cli command fails, it might be because the clustermgmt-nc2 service was not installed
properly. You can run the following command to verify whether the clustermgmt-nc2 service is
installed:
nutanix@pcvm$ allssh "docker ps | grep nc2"
An empty response in the output of this command indicates that the clustermgmt-nc2 service did not get
installed properly. To overcome this issue, you must restart the pc_platform_bootstrap service to install
the clustermgmt-nc2 service. To do this, run the following commands on the Prism Central VM using
CLI:
nutanix@pcvm$ allssh "genesis stop pc_platform_bootstrap"
nutanix@pcvm$ allssh "cluster start"
Wait for 5-10 minutes and then rerun the following command to verify that the clustermgmt-nc2 service
is installed:
nutanix@pcvm$ allssh "docker ps | grep nc2"
After you verify that the clustermgmt-nc2 service is successfully installed, you must rerun the
clustermgmt-cli command:
nutanix@pcvm$ clustermgmt-cli deploy-cloudSnapEngine -b S3_bucket_name -r aws
region -i IP1,IP2,IP3 -s Private Subnet

3. Check the NC2 cluster protection status under Protection Policies in Prism Central.
For more information, see Data Protection and Recovery Entities.
One protection policy is created for each NC2 cluster that is protected. You can identify the appropriate protection
policy by its name, which is in the s3-protection-rule-UUID format, where UUID is the ID of the NC2 cluster
that was protected. You can also see that the protection target is AWS S3.

Figure 92: Protection Policies

The Protection Summary page includes an overview of the protection status of all clusters. It also provides
details about the VMs that are lagging behind their RPO. You can see the cluster being protected and the target
being the AWS S3 bucket.

Figure 93: Protection Summary

4. Check the VMs that have been protected under the VM tab in Prism Central.

Figure 94: VMs list in Prism Central

The Recovery Points of the VM show when the VM was last backed up to S3. Only the two most recent snapshots
per protected entity are retained in the S3 bucket.

Figure 95: Recovery Points of the VM

Disabling Cluster Protect


Follow these steps to disable Cluster Protect for Prism Central and NC2 clusters:

Procedure

1. Run the following command to check the Prism Central protection status by listing protection targets:
nutanix@pcvm$ pcdr-cli list-protection-targets
Because Prism Central can be protected both to S3 and to its registered clusters, you need to know the protection
target, that is, the S3 bucket or one of the NC2 clusters where the Prism Central configuration is backed up.
This command lists information about the Prism Central protection targets with their UUIDs. These UUIDs are
different from cluster UUIDs and are required when running the pcdr-cli unprotect command in the next step.

2. Run the following command on the Prism Central VM to disable Prism Central protection:
nutanix@pcvm$ pcdr-cli unprotect -u protection_target_uuid
Use the protection target UUID that you derived using the list-protection-targets command in Step 1.
If Cluster Protect is enabled for any cluster managed by this Prism Central, a warning is issued asking for your
confirmation to proceed with unprotecting Prism Central. You can unprotect Prism Central even if Cluster Protect
is enabled for a cluster; however, Nutanix recommends keeping Prism Central protected for seamless recovery of NC2
clusters.

Note: If the failure is not AZ-wide and Prism Central that is managing one of the failed clusters is hosted on
another cluster, and that cluster is not impacted, then you can restore the failed cluster from that running Prism
Central.

3. Run the following command to disable cluster protection for any NC2 cluster:
nutanix@pcvm$ clustermgmt-cli unprotect-cluster -u cluster_uuid
Replace cluster_uuid with the UUID of the NC2 cluster for which you want to disable Cluster Protect. You can
find the UUID listed as Cluster ID under General in the cluster Summary page in the NC2 console.
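For example, a complete disable sequence with hypothetical UUIDs might look like the following; the first UUID comes from the list-protection-targets output, and the second is the Cluster ID from the NC2 console:

nutanix@pcvm$ pcdr-cli list-protection-targets
nutanix@pcvm$ pcdr-cli unprotect -u 8f1e2d3c-4b5a-6978-8a9b-0c1d2e3f4a5b
nutanix@pcvm$ clustermgmt-cli unprotect-cluster -u 0005a1b2-c3d4-e5f6-7890-abcdef123456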

Recovering NC2 Clusters


When you have protected your NC2 clusters and the associated Prism Central by following the steps listed in
Protecting NC2 Clusters, and you observe a cluster failure for reasons such as an AWS Availability Zone (AZ)
failure or all nodes being shut down from the EC2 management console, you can configure cluster recovery to
recover your NC2 clusters and Prism Central.

Note: Before you initiate the recovery of an NC2 cluster, ensure that you have protected Prism Central, deployed
Multicloud Snapshot Technology, and protected UVMs and volume groups. Also, after completing these cluster
protection steps, wait for one hour and then check that at least one backup of Prism Central is completed. One Prism
Central backup must be completed after backing up the UVM data so that protection policies, recovery points, and
so on created during UVM backups are included in the Prism Central backup. To ensure the same, run the following
command and validate that the Prism Central replication to the S3 bucket has happened successfully:
nutanix@pcvm$ pcdr-cli list-protection-targets
The command returns the details in the following format:

UUID: 8xxxxxf5-3xxx-3xxx-bxxc-dxxxxxxxxxx6
NAME: https://nutanix-clusters-xxxx-pcdr-3node.s3.us-west-2.amazonaws.com
TIME-ELAPSED-SINCE-LAST-SYNC: 30m59s
BACKUP-PAUSED: false
BACKUP-PAUSED-REASON:
TYPE: kS3

The CLI shows sync in progress until the Prism Central data is synced to S3 for the first time. After that,
the CLI shows a non-zero time elapsed since the last sync. This confirms that the Prism Central backup has
been completed.

Cluster recovery includes the following steps:


1. Set the cluster to the Failed state using the NC2 console to start the recovery workflow.
For more information, see Setting Clusters to Failed State.
2. Redeploy the cluster using the NC2 console.
For more information, see Recreating a Cluster.
3. If the Prism Central instance was running on a cluster that had failed, then you may recover Prism Central by
running CLI commands.
For more information, see Recovering Prism Central and MST.
4. Redeploy the Multicloud Snapshot Technology.
For more information, see Recovering Prism Central and MST.
5. Recover UVM data by running CLI commands.
For more information, see Recovering UVM and Volume Groups Data.
After the cluster recovery is finalized, you can protect the newly recovered cluster again. For more information, see
Reprotecting Clusters and Prism Central.

Setting Clusters to Failed State


When a protected cluster fails and you need to recover it, first set the cluster to the Failed state to initiate
the recovery process.

Note: The NC2 console automatically detects if an EC2 instance is deleted and then flags the cluster status as Failed.
However, a cluster might fail for reasons that NC2 does not detect. Therefore, Nutanix recommends performing
these steps to set the cluster to the Failed state whenever a failed cluster needs to be recovered.

Follow these steps to set a cluster to the Failed state:

Procedure

1. Sign in to the NC2 console: https://cloud.nutanix.com

2. On the Clusters page, click the name of the cluster you want to set to the Failed state.

Figure 96: Clusters page



3. Ensure that the cluster Summary page shows the Cluster Protect field under General settings as Enabled.

Figure 97: Cluster Protection Status

4. On the Settings page, click the Advanced tab.

Figure 98: Settings - Advanced tab

5. Under Cluster Failure, click Set Cluster to Failed State.



6. On the confirmation page, click Yes, Set Cluster State to Failed.

Figure 99: Set Cluster State to Failed

7. Ensure that the cluster status is changed to Failed for the cluster on the Clusters page.

Figure 100: Cluster Status - Failed



8. Go to the cluster Summary page to validate that the Cluster Recovery workflow is displayed.

Figure 101: Start Cluster Recovery

What to do next
After you set the cluster to the Failed state, redeploy the cluster. See Recreating a Cluster for more
information.
You must determine on your own when the failure event impacting your cluster, such as an AWS AZ failure, is over
so that you can start the cluster recovery process. Nutanix does not indicate when an AWS AZ has recovered enough
for your recovery cluster to be deployed.
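As a rough sanity check, and assuming that the AWS CLI is configured with access to the affected region, you can query the state of the Availability Zone before starting recovery. Note that a zone reported as available does not rule out partial impairments, so treat this only as one input alongside the AWS Health Dashboard; the region and zone names below are placeholders:
aws ec2 describe-availability-zones --region us-west-2 --zone-names us-west-2a \
  --query "AvailabilityZones[].{Zone:ZoneName,State:State}"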

Recreating a Cluster
When a protected cluster fails, and you set the cluster state to Failed, you need to redeploy the cluster.
Follow these steps to redeploy the cluster:

Procedure

1. Sign in to the NC2 console: https://cloud.nutanix.com

2. On the Clusters page, click the name of the failed cluster that you want to redeploy.

Figure 102: Clusters with Failed Status



3. On the cluster Summary page, under Cluster Recovery, click Start Cluster Recovery.

Figure 103: Start Cluster Recovery

4. Click Recreate Cluster.


The Recreate Cluster page appears.

Figure 104: Recreate Cluster



5. Review or specify the following details on the Cluster Configuration page, and then click Next.

• Under General:

• Cluster Name: Enter a name for the cluster.

Note: The recovery cluster name must be different from the failed cluster name. The NC2 console enforces this
during recovery cluster creation.

• Cloud Account, Region, and Availability Zone: The configuration of the failed cluster that you are
recreating is displayed. Your recovery cluster uses the same configuration.
• Under Network Configuration:

• If the failed cluster was deployed using a manually created VPC and subnets, the previously used
resources are displayed. You must recreate the same VPC and subnets that you previously created in
your AWS console.
• If the failed cluster was deployed using a VPC and subnets created by the NC2 console, the NC2
console automatically recreates the same VPC and subnets during the cluster recovery process.



Figure 105: Cluster Configuration

6. Review the cluster summary on the Summary page and then click Recreate Cluster.



7. To continue the cluster recovery process, click Go to new cluster to navigate to the redeployed cluster.
The failed cluster is terminated.

Figure 106: Navigate to the Redeployed Cluster



8. The cluster Summary page of the newly created cluster shows the cluster status as Creating Cluster.

Figure 107: Cluster Status - Creating Cluster

After the cluster is created, the status changes to Recovery in Progress.

Figure 108: Cluster Status - Recovery in Progress

What to do next
After recreating the cluster, you must recover Prism Central (if it was running on a cluster that suffered a
failure event) and the user VM and volume group data. See Recovering Prism Central and MST and Recovering
UVM and Volume Groups Data.

Recovering Prism Central and MST


In case the failure was not AZ-wide and one of the clusters protecting Prism Central survived, you must restore Prism
Central from the cluster that it was backed up to rather than from S3. For more information, see Recovering Prism
Central (1-Click Disaster Recovery).



If all clusters protecting Prism Central are unavailable, then recover Prism Central from S3.
After you redeploy the NC2 clusters by following the instructions listed in Recreating a Cluster, you may need to
recover the Prism Central instance whose configuration you backed up earlier to the S3 bucket, and then recover the
UVM data from the S3 bucket created for UVM data backup.

Note: The Prism Central subnet must be created before following these instructions to recover Prism Central.
Also, the Prism Central image must be present on the cluster.

Follow these steps to recover Prism Central:

Procedure

1. Ensure that you have redeployed a cluster.


For more information, see Recreating a Cluster.

2. On the redeployed cluster, run the following CLI command on the CVM to recover Prism Central from the S3
bucket where Prism Central data was backed up:
nutanix@cvm$ pcdr-cli recover -b S3_bucket -r AWS_region -n PC-Subnet
Replace the variables with their appropriate values as follows:

• S3_bucket: the S3 bucket configured for Prism Central backup.


• AWS_region: the AWS region where the S3 bucket is created.
• PC-Subnet: the AWS private subnet configured for Prism Central.
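For example, using the placeholder values that appear elsewhere in this guide (replace the bucket, region, and subnet with your own values):
nutanix@cvm$ pcdr-cli recover -b nutanix-clusters-xxxx-pcdr-3node -r us-west-2 -n PC-Subnet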

3. Track the Prism Central recovery status in the Tasks section of the Prism Element console of the recreated cluster.

Note: The Prism Central recovery might take approximately four hours. Also, the recovered Prism Central and
original Prism Central are of the same version.

What to do next
After you recover Prism Central, register any newly created NC2 clusters with the recovered Prism Central. If the
clusters that were registered with Prism Central prior to the recovery of Prism Central did not suffer any failure, they
will be auto-registered with the recovered Prism Central.

Note: After the cluster recovery is complete, the failed Prism Element remains registered with the recovered Prism
Central. To remove this Prism Element, unregister it from Prism Central. For detailed instructions, see
KB article 000004944.

Redeploying Multicloud Snapshot Technology


After recreating NC2 clusters and redeploying Prism Central, you must redeploy Multicloud Snapshot Technology
before recovering UVM and Volume Groups data from the S3 bucket where they were backed up.

Note: The configuration data for the recovery Prism Central must be recovered from the Prism Central S3 bucket
before recovering the UVM data on the recovery clusters. For more information, see Recovering Prism Central and MST.

To redeploy Multicloud Snapshot Technology:

Procedure

1. Sign in to the Prism Central VM using the credentials provided while installing Prism Central.



2. Run the following CLI command:
nutanix@pcvm$ clustermgmt-cli deploy-cloudSnapEngine --recover -b S3_bucket -r
AWS_region -i IP1,IP2,IP3 -s PC-Subnet
Replace the variables with their appropriate values as follows:

• S3_bucket: the S3 bucket where you want to protect the user VMs data.
• AWS_region: the AWS region where the S3 bucket is created.
• IP1,IP2,IP3: the static IPs reserved for MST.

Note: These IPs can be different from the IPs used earlier while deploying MST prior to the cluster failure.

• PC-Subnet: the AWS private subnet configured for the recovery Prism Central.
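For example, with placeholder values (substitute your own bucket name, region, reserved IPs, and subnet):
nutanix@pcvm$ clustermgmt-cli deploy-cloudSnapEngine --recover -b nutanix-clusters-xxxxx-xxxx-xxxxx -r us-west-2 -i 10.0.xxx.11,10.0.xxx.12,10.0.xxx.13 -s PC-Subnet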

Recovering UVM and Volume Groups Data


Before recovering the UVM and Volume Groups data from the S3 bucket where they were backed up, ensure the
following requirements are met:

• NC2 clusters are recreated. For more information, see Recreating a Cluster.
• Prism Central is redeployed.

Note: The configuration data for the recovery Prism Central must be recovered from the Prism Central S3 bucket
before recovering the UVM data on the recovery clusters. For more information, see Recovering Prism
Central and MST.

• Multicloud Snapshot Technology is redeployed. For more information, see Recovering Prism Central and
MST.
• Disaster Recovery must be enabled.

Note: The UVM subnet names on the failed and recovered clusters must be the same for the correct mapping of
subnets in the recovery plan. If the names do not match correctly, the cluster recovery might proceed, but the VMs are
recovered without the UVM subnet attached. You can manually attach the subnet post-recovery. If there are multiple
UVM subnets, then all UVM subnets must be recreated with the same names for the correct mapping of subnets
between failed and recovered clusters.

Follow these steps to recover UVM and Volume Groups data:

Procedure

1. Sign in to the Prism Central VM using the credentials provided while installing Prism Central.



2. Run the following command to get a list of NC2 clusters that were registered with the recovery Prism Central
prior to failure:
nutanix@pcvm$ nuclei cluster.list
A list of the protected Prism Elements that failed and the recovery Prism Elements created by the NC2 console
is displayed. The following figure shows an example of a list of Prism Element UUIDs on the recovery Prism
Central.

Figure 109: List of Failed and Recovered Clusters

3. Recreate subnets on the recovery Prism Elements:

a. Run the following command to list all the subnets associated with the protected Prism Elements:
nutanix@pcvm$ clustermgmt-cli list-recovery-info -u UUID_OldPE
Replace UUID_OldPE with the UUID of the old NC2 cluster.
A list of subnets is displayed.

NAME       GATEWAY-IP  CIDR           TYPE  IP-POOL-RANGES
PC-Subnet  10.0.xxx.1  10.0.xxx.0/24  VLAN  [{"begin":"10.0.xxx.50","end":"10.0.xxx.200"}]

b. Recreate these subnets on the recovery Prism Elements in the same way they were created in the first place.
For more information, see Creating a UVM Network.

4. Run the following command to create a Recovery Plan to restore UVM data from the S3 buckets.
nutanix@pcvm$ clustermgmt-cli create-recovery-plan -o UUID_OldPE -n UUID_NewPE
Replace the variables with their appropriate values as follows:

• UUID_OldPE: the UUID of the old NC2 cluster.


• UUID_NewPE: the UUID of the new NC2 cluster.

Note: You must perform this step for each NC2 cluster you want to recover.
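For example, with placeholder UUIDs:
nutanix@pcvm$ clustermgmt-cli create-recovery-plan -o 0xxxxxx6-cxxc-dxxx-8xxf-dxxxxxxxxx99 -n 0xxxxxxe-dxxd-fxxx-fxxe-cxxxxxxxxxe5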



5. Execute the recovery plans:

a. Sign in to Prism Central using the credentials provided while installing Prism Central.
b. Go to Data Protection > Recovery Plans.
You can identify the appropriate recovery plan to use by looking at the recovery plan name. It is in the format:
s3-recovery-plan-UUID_OldPE

Figure 110: View Recovery Plans


c. Start the recovery plan execution by triggering a failover.

Figure 111: Create a Failover

When the failover is complete, your UVM and Volume Groups data is recovered on the recovery Prism Element and
your VMs are available there.



6. Go to Compute & Storage > VMs to see the list of recovered VMs.

Figure 112: View Recovered VMs

7. Run the following command for each NC2 cluster that is recovered after the cluster failure to remove the category
values and protection policies associated with the old clusters that no longer exist.
nutanix@pcvm$ clustermgmt-cli finalize-recovery -u UUID_OldPE
Replace UUID_OldPE with the UUID of the old NC2 cluster.
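For example, with a placeholder UUID:
nutanix@pcvm$ clustermgmt-cli finalize-recovery -u 0xxxxxxx-cxxc-dxxx-8xxx-dxxxxxxxxxx9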

What to do next
Manually turn on all UVMs.

Reprotecting Clusters and Prism Central


After the cluster recovery is finalized, you can protect the newly recovered cluster again. You can verify from the
Prism Central UI that the new cluster is unprotected by looking at the VM protection status. For more information,
see Finding the Protection Policy of an Entity.

Figure 113: View Cluster Protection Status



Run the following command on the Prism Central VM to reprotect the cluster:
nutanix@pcvm$ clustermgmt-cli protect-cluster -u UUID_NewPE
Replace UUID_NewPE with the UUID of the new NC2 cluster.
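For example, with a placeholder UUID, and with the optional -r (RPO in minutes) and -l (local snapshot retention count) flags shown at their default values for illustration:
nutanix@pcvm$ clustermgmt-cli protect-cluster -u 00xxxxxe-dxxx-fxxx-fxxe-cxxxxxxxxxe5 -r 60 -l 2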

CLI Commands Library


The following table lists the CLI commands you can use for the Cluster Protect feature end-to-end
workflow. While these commands and their examples are already listed in the respective procedures, this
table provides additional reference information.

Table 22: CLI Commands for Cluster Protect

Purpose: Get Prism Central deployment information from the backup source.
Command (available on CVM): nutanix@cvm$ pcdr-cli deployment-info [flags]
For example:
nutanix@cvm$ pcdr-cli deployment-info -b nutanix-clusters-xxxx-pcdr -r us-west-2
Flags:
• -h, --help: Help for the deployment-info command.
• -b, --bucket string: Name of the S3 bucket.
• -o, --output string: Supported output formats: ['default', 'json'] (default "default"). The output format 'default' is the same as 'json'.
• -c, --pc_cluster_uuid string: UUID of the primary Prism Central cluster.
• -r, --region string: Name of the AWS region where the provided S3 bucket exists.

Purpose: Recover Prism Central from a backup source, such as the NC2 cluster or S3 bucket endpoint.
Command (available on CVM): nutanix@cvm$ pcdr-cli recover [flags]
For example:
nutanix@cvm$ pcdr-cli recover -b nutanix-clusters-xxxx-pcdr-3node -r us-west-2 -n PC-Subnet -g 10.0.xxx.1 -m 255.255.255.0
Flags:
• -h, --help: Help for the recover command.
• -v, --pc_vip string: Virtual IP of the Prism Central cluster. If the VIP is not provided, the older VIP is used.
• -i, --pc_vm_ips strings: Comma-separated list of Prism Central VM IPs. If VM IPs are not provided, the older VM IPs are used.
• -s, --storage_ctr string: Name or UUID of the storage container. If the name or UUID is not specified, then the container whose name starts with default is used for the Prism Central cluster.
• -g, --subnet_gateway string: Gateway address for the provided subnet. If not specified, the gateway address of the old subnet is used.
• -m, --subnet_mask string: Subnet mask address for the provided subnet. If not specified, the subnet mask address of the old subnet is used.
• -n, --subnet_name string: Name of the subnet on which the Prism Central cluster must be deployed.
• -b, --bucket string: Name of the S3 bucket.
• -o, --output string: Supported output formats: ['default', 'json'] (default "default")
• -c, --pc_cluster_uuid string: UUID of the primary Prism Central cluster.
• -r, --region string: Name of the AWS region where the provided S3 bucket exists.

Purpose: Get a list of all protection targets with the time elapsed since their last sync.
Command (available on Prism Central VM): nutanix@pcvm$ pcdr-cli list-protection-targets [flags]
For example:
nutanix@pcvm$ pcdr-cli list-protection-targets
Flags:
• -h, --help: Help for the list-protection-targets command.
• --verbose: Print all details of protection targets. If the verbose flag is not specified, only the important details, such as the protection target's name, UUID, the time elapsed since the last backup, type, and is_backup_paused, are returned.
• -o, --output string: Supported output formats: ['default', 'json'] (default "default")

Purpose: Protect Prism Central to the backup target.
Command (available on Prism Central VM): nutanix@pcvm$ pcdr-cli protect [flags]
For example:
nutanix@pcvm$ pcdr-cli protect -b nutanix-clusters-xxxx-pcdr-3node -r us-west-2
Flags:
• -b, --bucket string: Name of the S3 bucket where the Prism Central config data would be backed up.
• -h, --help: Help for the protect command.
• -r, --region string: Name of the AWS region where the provided S3 bucket exists.
• -o, --output string: Supported output formats: ['default', 'json'] (default "default")

Purpose: Unprotect Prism Central by removing the specified protection target.
Note: If the failure is not AZ-wide and the Prism Central of the impacted cluster is hosted on another cluster and that Prism Central is not impacted, then you can restore the impacted cluster from that existing Prism Central.
Command (available on Prism Central VM): nutanix@pcvm$ pcdr-cli unprotect [flags]
For example:
nutanix@pcvm$ pcdr-cli unprotect -u 8xxxxxx5-3xx4-3xx1-bxxc-dbxxxxxxx0b6
Flags:
• -h, --help: Help for the unprotect command.
• -u, --protection_target_uuid string: Entity UUID of the protection target, which must be removed from the target list.
• -o, --output string: Supported output formats: ['default', 'json'] (default "default")

Purpose: Create a recovery plan, which can be executed from the Prism Central UI to recover a cluster.
Command (available on Prism Central VM): nutanix@pcvm$ clustermgmt-cli create-recovery-plan [flags]
For example:
nutanix@pcvm$ clustermgmt-cli create-recovery-plan -o 0xxxxxx6-cxxc-dxxx-8xxf-dxxxxxxxxx99 -n 0xxxxxxe-dxxd-fxxx-fxxe-cxxxxxxxxxe5
Flags:
• -h, --help: Help for the create-recovery-plan command.
• -n, --new_cluster_uuid string: UUID of the new recovery NC2 cluster.
• -o, --old_cluster_uuid string: UUID of the old, failed NC2 cluster.
• --output string: Supported output formats: ['default', 'json'] (default "default")

Purpose: Deploy MST, which can be used to protect NC2 clusters.
Command (available on Prism Central VM): nutanix@pcvm$ clustermgmt-cli deploy-cloudSnapEngine [flags]
For example:
nutanix@pcvm$ clustermgmt-cli deploy-cloudSnapEngine -b nutanix-clusters-xxxxx-xxxx-xxxxx -r us-west-2 -i 10.0.xxx.11,10.0.xxx.12,10.0.xxx.13 -s PC-Subnet
Flags:
• -b, --bucket string: Name of the S3 bucket that will be used to store the backup of NC2 clusters.
• -h, --help: Help for the deploy-cloudSnapEngine command.
• --recover: Deploys MST using old configuration data, if available on Prism Central. If old configuration data is unavailable, the …
• -r, --region string: Name of the AWS region where the provided S3 bucket exists.
• -i, --static_ips strings: Comma-separated list of 3 static IPs that are part of the same subnet specified by the subnet_name flag.
• -s, --subnet_name string: Name of the subnet which can be used for MST VMs.
• --output string: Supported output formats: ['default', 'json'] (default "default")

Purpose: Delete or clean up the failed MST deployments.
Note: If the MST deployment is deleted, clusters cannot be recovered in the event of a failure, such as Availability Zone failures.
Command (available on Prism Central VM): nutanix@pcvm$ clustermgmt-cli delete-cloudSnapEngine [flags]
For example:
nutanix@pcvm$ clustermgmt-cli delete-cloudSnapEngine
Flags:
• -f, --force: Force delete the MST.
• -h, --help: Help for the delete-cloudSnapEngine command.
• --output string: Supported output formats: ['default', 'json'] (default "default")

Purpose: Mark completion of recovery of a cluster.
Command (available on Prism Central VM): nutanix@pcvm$ clustermgmt-cli finalize-recovery [flags]
For example:
nutanix@pcvm$ clustermgmt-cli finalize-recovery -u 0xxxxxxx-cxxc-dxxx-8xxx-dxxxxxxxxxx9
Flags:
• -u, --cluster_uuid string: UUID of the old NC2 cluster.
• -h, --help: Help for the finalize-recovery command.
• --output string: Supported output formats: ['default', 'json'] (default "default")

Purpose: Get a list of recovery information, such as subnets that were available on the original (failed) NC2 cluster.
Command (available on Prism Central VM): nutanix@pcvm$ clustermgmt-cli list-recovery-info [flags]
For example:
nutanix@pcvm$ clustermgmt-cli list-recovery-info -u 00xxxxxb-0xxd-8xxx-6xx4-3xxxxxxxxx7d
Flags:
• -u, --cluster_uuid string: UUID of the NC2 cluster.
• -h, --help: Help for the list-recovery-info command.
• --verbose: With the verbose flag, a detailed JSON output is returned. If the verbose flag is not specified, only the important fields, such as subnet name, IP pool ranges, and CIDR, are returned.
• --output string: Supported output formats: ['default', 'json'] (default "default")

Purpose: Protect clusters against AZ failures by backing up the clusters in AWS S3.
Command (available on Prism Central VM): nutanix@pcvm$ clustermgmt-cli protect-cluster [flags]
For example:
nutanix@pcvm$ clustermgmt-cli protect-cluster -u 00xxxxxe-dxxx-fxxx-fxxe-cxxxxxxxxxe5
Flags:
• -u, --cluster_uuid string: NC2 cluster UUID.
• -h, --help: Help for the protect-cluster command.
• -l, --local_snapshot_count int: Local snapshot retention count. The default count is 2.
• -r, --rpo int: Protection RPO in minutes. The default RPO is 60 minutes.
• --output string: Supported output formats: ['default', 'json'] (default "default")

Purpose: Unprotect a cluster.
Command (available on Prism Central VM): nutanix@pcvm$ clustermgmt-cli unprotect-cluster [flags]
For example:
nutanix@pcvm$ clustermgmt-cli unprotect-cluster -u 000xxxx6-cxxx-dxx0-8xxx-dxxxxxxxx999
Flags:
• -u, --cluster_uuid string: UUID of the NC2 cluster.
• -h, --help: Help for the unprotect-cluster command.
• --output string: Supported output formats: ['default', 'json'] (default "default")


NC2 MANAGEMENT CONSOLES
Use the following management consoles to operate and manage your NC2 clusters:

• NC2 Console: Use the NC2 console to create, hibernate, resume, update, and terminate an NC2 cluster running on
AWS.
• Prism Element Web Console: Use the Prism Element web console to manage routine Nutanix tasks in a single
console. For example, creating a user VM. Unlike Prism Central, Prism Element is used to manage a specific
Nutanix cluster.
For more information on how to sign into the Prism Element web console, see Logging into a Cluster by Using
the Prism Element Web Console.
For more information on how to manage Nutanix tasks, see Prism Web Console Guide.
• Prism Central Web Console: Use the Prism Central web console to manage multiple Nutanix clusters.
For more information on how to sign into the Prism Central web console, see Logging Into Prism Central.
For more information on how to manage multiple NC2 clusters, see the Prism Central Infrastructure Guide.

NC2 Console
The NC2 console displays information about clusters, organizations, and customers.
The following sections explain the tasks you can perform and the information you can view from this console.

Figure 114: NC2 Console

Main Menu
The following options are displayed in the main menu at the top of the NC2 console:

Navigation Menu
The navigation menu has three tabs: Clusters, Organizations, and Customers. The selected tab is displayed in the
top-left corner. For more information, see Navigation Menu.



Tasks

• The circle icon displays ongoing actions that take a while to complete, for example, creating a cluster or
changing cluster capacity.
The circle icon also displays the progress of each ongoing task; a success message appears when a task completes,
and an error message appears if a task fails.
• The gear icon displays the source details of each task performed, for example, the account, organization, or
customer.

Notifications

• The bell icon displays notifications when an event occurs in the system or when you need to act to resolve an
existing issue.

Warning: You can choose to Dismiss notifications from the Notification Center. However, the dismissed
notifications no longer appear to you or any other user.

• The gear icon displays source details and a tick mark to acknowledge notifications.
• The drop-down arrow to the right of each notification displays more information about the notification.

Note: If you want to receive notifications about a cluster that is not created by you, you must be an organization
administrator and subscribe to notifications of respective clusters in the Notification Center. The cluster creator is
subscribed to notifications by default.

User Menu
The Profile user name option from the drop-down list provides the following options:

• General: Edit your First name, Last name, Email, and Change password from this screen. This screen
also displays various roles assigned.
• Preferences: Displays enable or disable slider options based on your preference.
• Storage providers: Displays the storage options with various storage providers.
• Advanced: Displays various assertion fields and values.
• Notification Center: Displays the list of Tasks, Notifications, and Subscriptions.

Navigation Menu
The navigation menu has three tabs at the top (Clusters, Organizations, and Customers) and two tabs at the bottom
(Documentation and Support).

Clusters

• Displays the Create Cluster option to create a new cluster.


• Provides a search bar to Search clusters.
• Displays a list of all active clusters by default.
A filter button appears to the right of the search bar. Click Active or Terminated to switch between the lists of
active and terminated clusters.



• Displays details of each cluster, such as Name, Organization, Cloud, Created On, Capacity, Use-Case, and
Status.
The last created cluster is at the top of the list by default. To change the order, click the Name heading to change
the value and direction by which the entries are ordered.
• The ellipsis icon against each cluster displays the following options:

• Audit Trail: Displays the activity log of all actions performed by the user on a specific cluster.
• Users: Displays the screens for user management like User Invitations, Permissions, Authentication
Providers.
• Notification Center: Displays the complete list of all the tasks and notifications.
• Update Configuration: Displays the screens to update the settings of clusters.
• Update Capacity: Displays a screen to update the resource allocation of clusters.
• Hibernate: Opens a dialog box for Cluster Hibernation or a Resume option appears if the cluster is
already hibernated.
• Terminate: Displays a screen to delete the cluster.

Figure 115: Clusters Console

Organizations

• Displays the Create Organization option to create a new organization.


• Provides a search bar to Search organizations.
• Displays a list of active organizations by default.
Displays a filter button to the right of the search bar. Click Active or Terminated to switch the list of currently
visible active or terminated organizations, respectively.
• Displays the details of each organization such as Name, Customer, Description, URL Name, and Status.
The last created organization is on the top of the list by default. To change the order, click the Name heading to
change the value and direction by which the entries are ordered.



• The ellipsis icon against each organization displays the following options:

• Audit Trail: Displays the activity log of all actions performed on a specific organization.
• Users: Displays the screens for user management like User Invitations, Permissions, Authentication
Providers.
• Sessions: Displays the basic details of the organization and information about terminating the cluster.
• Notification Center: Displays the complete list of all Tasks and Notifications.
• Cloud accounts: Displays the status of the Cloud Account if it is active (A-Green) or inactive (I-Red).
The ellipsis icon against each cloud account displays the following options:

• Add regions: Select this option to update the regions to which the cloud account can deploy clusters.
• Update: Select this option to create a new stack or update an existing stack.
• Deactivate: Select this option to deactivate the cloud account.
• Update: Displays the options to update settings of organizations.

Figure 116: Organizations Console

Customers

• Displays a search bar to Search Customers.


• Displays a list of active customers by default.
Displays a filter button to the right of the search bar. Click Active or Terminated to switch the list of currently
visible active or terminated customers, respectively.
• Displays details of each customer, such as Customer Name, Description, URL Name, Billing, and Status.
The last created customer is at the top by default. To change the order, click the Name heading to change the
value and direction by which the entries are ordered.



• The ellipsis icon against each customer displays the following options:

• Audit Trail: Displays the activity log of all actions performed on a specific customer.
• Users: Displays the screens for user management like User Invitations, Permissions, Authentication
Providers.
• Notification Center: Displays the complete list of all tasks and notifications.
• Cloud accounts: Displays the status of the Cloud Account if it is active (A-Green) or inactive (I-Red).
• Update: Displays the options to update settings of customers.

Figure 117: Customers Console

Documentation
Directs you to the documentation section of NC2.

Support
Directs you to the Nutanix Support portal.

Audit Trail
Administrators can monitor user activity using the Audit Trail. Audit Trail provides administrators with an audit
log to track and search through account actions. Account activity can be audited at all levels of the NC2 console
hierarchy.
You can access the Audit Trail page for an Organization or Customer entity from the menu button to the right of the
desired entity.



Figure 118: Audit Trail

The following figure illustrates the Audit Trail at the organization level.

Figure 119: Audit Trail - Download CSV

Under the Audit Trail section header, you can search the audit trail by first name, last name, and email address. You
can also click the column titles to sort the Audit Trail by ascending or descending order.
If you want to search for audit events within a certain period, click the date range in the upper right corner of the
section. Set your desired period by clicking on the starting and ending dates in the calendar view.
You can filter your results by specific account action using the filter icon in the top right corner.
You can download the details of your Audit Trail in CSV format by clicking the Download CSV link in the upper
right corner. The CSV will provide all Audit Trail details for the period specified to the left of the download link.

Notification Center
Admins can easily stay up to date regarding their NC2 resources with the Notification Center. Real-time notifications
are displayed in a Notification Center widget at the top of the NC2 console. The Notification Center displays two
different types of information: tasks and notifications. The information displayed in the Notification Center can be for
organizations or customer entities.

Note: Customer Administrators can see notifications for all organizations and accounts associated with the tenant by
navigating to the Customer or Organization dashboard from the initial NC2 console view and clicking Notification
Center.

Notification Center Widget


The Notification Center splits information into two categories: Tasks (bullet list icon) and notifications (bell icon).
Clicking these icons from the NC2 console view will display a list of pending tasks or notifications to which the
current user is subscribed.



Figure 120: Notification Center

Tasks
Tasks (bullet list icon) show the status of various changes made within the platform. For example, creating an
account, changing capacity settings, and so on trigger a task notification informing the admin that an event has
started, is in progress, or has been completed.
Notifications
Notifications (bell icon) differ from tasks; notifications alert administrators when specific events happen, for
example, resource limits or cloud provider communication issues. There are three types of notifications: info,
warning, and error.
info, warning, or error.
Dismiss Tasks and Notifications
You can dismiss tasks or notifications from the Notification Center widget by selecting the task or notification icon
and clicking the dismiss (x) button inside the event.
Dismissing an event only dismisses the task or notification for your console view; other subscribed admins still see
the event.
Acknowledge Notifications
You can click the check mark icon to acknowledge and dismiss a notification for all users subscribed to that resource.
Acknowledging a notification removes it from the widget, but the notification is still available on the Notification
Center page.

Note: Acknowledging a notification will dismiss it for all administrators subscribed to the same resource.

Configuring Email Notifications for Alerts


Administrators can subscribe or unsubscribe to notification emails from the NC2 console for specific clusters
or organizations to ensure that they are in the loop when changes are made or alerts are triggered.
A few example scenarios where automated email notifications are sent include:

• An NC2 cluster is successfully created.


• A user is added to an organization or customer account.
• The cluster is ready for the customer to start using it.
Follow these steps to configure email notifications:

Procedure

1. Sign in to the NC2 console: https://cloud.nutanix.com



2. On the Clusters page, click the ellipsis icon against the desired cluster for which you want to configure email
notifications.

Note: If you want to set email notifications for an organization or customer entity, select the Organizations or
Customers tab.

Figure 121: Cluster Notification Center

3. Click Notification Center.

4. On the Notification Center page, click the Settings tab.

Figure 122: Notification Settings



5. Under Notification Settings, specify the following:

• Receive email notifications: To enable automatic email notifications, turn on the Receive email
notifications toggle.
• Severity:

• Info: Receive emails for informational notifications


• Warning: Receive emails for warning notifications
• Critical: Receive emails for critical notifications
• Recipients: Enter the email address of the recipient. To add more recipients, click Add Recipient and then
provide the email address.

6. Click Save.



NC2 USER MANAGEMENT
NC2 provides access control through which you can assign roles to users that you add to your My Nutanix account.
You have account administrator permissions by default when you sign up for a My Nutanix account. You can add two
more users with the account administrator role to the same account. Therefore, one account can have only three users
with the account administrator role at any given time. To add users to your account, you can either integrate your
organization's SAML authentication solution (Active Directory, Okta, and others) with My Nutanix or invite specific
users to access the NC2 console.
While the administrators can remove users from a tenant using the Global Admin Center, the users, tenant owner,
and administrators can choose to leave a tenant themselves on specific conditions using the Tenant Details feature,
which is available under My Nutanix > Profile > Profile Settings. When there are no users in the tenant or there
is no active subscription in the Billing Center, then the tenant owner can leave and close the tenant. When a tenant
is closed, all subscriptions and services associated with that tenant are erased. As the user's tenant-related data also
gets deleted, users will not be able to rejoin that tenant. If you must retain the data, then instead of closing the tenant,
invite a new account administrator from the Admin Center before leaving the tenant. For more information on the
Leave Tenant feature, see the Nutanix Cloud Services Administration Guide.

User Roles
The NC2 console uses a hierarchical approach to organizing administration and access to accounts.
The NC2 console has the following entities:

• Customer: This entity is the highest business entity in the NC2 platform. You create multiple organizations
under a customer and then create clusters within an organization. When you sign up for NC2, a Customer
entity is created for you. You can then create an Organization, add a cloud (Azure or AWS) account to that
organization, and create clusters in that organization. You cannot create a new Customer entity in your NC2
platform.
• Organization: This entity allows you to set up unique environments for different departments within your
company. You can create multiple clusters within an organization. You can separate your clusters based on your
specific requirements. For example, create an organization Finance and then create a cluster in the Finance
organization to run only your finance-related applications.
Users can be added from the Cluster, Organization, and Customer entities. However, the user roles that are available
while adding users vary based on whether the users are invited from the Cluster, Organization, and Customer entities.
Administrators can grant permissions based on their own level of access. For example, while a customer administrator
can assign any role to any cluster or organization under that customer entity, an organization administrator can only
grant roles for that organization and the clusters within that organization.
The following user roles are available in NC2.

Table 23: User Roles in NC2

• Customer Administrator: Highest level of access. Customer administrators can create and manage multiple
organizations and clusters. Customer administrators can also modify permissions for any of the user roles.
• Customer Auditor: Customer Auditor users have read-only access to functionality at the customer, organization,
and account levels.
• Customer Security Administrator: Customer Security Administrator users can only access the Audit Trail
and Users functions at the customer level to manage all authentication providers (such as Basic (username/password),
Google, SAML2, and API), configure SAML2 providers, manage SAML2 permissions, and manage users for all
organizations and accounts.
• Organization Administrator: Organization administrators can manage any organizations assigned to them by the
Customer administrator, along with those organizations' accounts. Organization administrators can only be created
by Customer administrators.
• Organization Auditor: Organization Auditor users have read-only access to the organization and the clusters
under the organization.
• Organization Security Administrator: Organization Security Administrator users can only access the Audit Trail
and Users functions at the specified organization level to manage all authentication providers (such as Basic
(username/password), Google, SAML2, and API), configure SAML2 providers, manage SAML2 permissions, and add users
for all accounts under the specified organization.
• Cluster Administrator: Cluster Administrators can access and manage any clusters assigned to them by the
Organization or Customer administrators. Cluster Administrators can also open, close, or extend a support tunnel
for the Nutanix Support team.
• Cluster Super Admin: Cluster Super Admins can open, close, or extend a support tunnel for the Nutanix Support
team.
• Cluster Auditor: Cluster Auditor users have read-only access to the clusters under the organization.
• Cluster User: Cluster Users can access a specific cluster assigned to them by the Cluster, Organization, or
Customer Administrator.

See the Local User Management section of the Nutanix Cloud Services Administration Guide for more
information about the following:

• Invite additional My Nutanix administrators.


• Remove the My Nutanix administrator.
• Resend or cancel the invite for a My Nutanix administrator.

Note: The user roles described in the Local User Management section of the Nutanix Cloud Services
Administration Guide are not applicable to NC2. For the user roles in NC2, see the user roles described in this
section.

See the Nutanix Cloud Services Administration Guide for more information about authentication mechanisms,
such as multi-factor authentication and SAML authentication.

Adding Users from the NC2 Console


The NC2 Customer and Organization Security Administrators can enforce the authentication settings for
your NC2 account. Cluster administrators and users can add other users and assign roles based on their
own level of access to the NC2 resources. Users can be added at the customer account, organization, and
cluster level.



Note: Users can be added from the Cluster, Organization, and Customer entities. However, the user roles that
are available while adding users vary based on whether the users are invited from the Cluster, Organization, and
Customer entities. Administrators can grant permissions based on their own level of access. For example, while a
customer administrator can assign any role to any cluster or organization under that customer entity, an organization
administrator can only grant roles for that organization and the clusters within that organization.

Perform the following to add users to NC2:

Procedure

1. Sign in to the NC2 console.

2. Click the Customers tab.

3. Click the ellipsis icon against the desired customer entity, and click Users.
The Authentication tab displays the identity authentication providers that are currently enabled for your
account, and the relevant tabs for the enabled authentication providers are displayed. The NC2 account
administrator must have first unlocked the Enforce settings slider.

Figure 123: User Authentication Enforcement

Perform the following steps to invite users based on the authentication provider.



4. To invite users with the basic authentication method, where a username and password are used:

a. Click the Basic (username/password) tab.

Figure 124: Invite Users with Basic Authentication


b. Click Invite Users.

Figure 125: Invite Users - Basic Authentication


c. Enter a comma-separated list of the email addresses of the users you want to add to NC2. You can invite up to
100 users at a time.
d. Select the desired user role for the invited user from the Roles list, and then select the desired customer entity.
Click Add to add more entries to assign more user roles.
e. Click Invite to invite the users to NC2.



5. To invite users with the My Nutanix authentication method:

a. Click the My Nutanix tab.

Figure 126: My Nutanix Administrator Access


b. Enable or disable access to My Nutanix using the Allow Nutanix Admins on MyNutanix to administer
this customer slider. If you lock the My Nutanix slider, then the slider cannot be unlocked at the
Organization entity level.



6. To invite users with Google authentication:

a. Click the Google tab.

Figure 127: Invite Users with Google Authentication


b. Click Add.

Figure 128: Add Google Authentication


c. Enter an email address or domain of the user you want to add to NC2.
d. Click Add Recipient to add more users.
e. Select the desired user role for the invited user from the Roles list, and then select the entity that the role
applies to. Click Add to add more entries to assign more user roles.
f. Click Add.



7. To add users with SAML2 authentication, first add the SAML2 provider and then add the desired permissions:
To add a SAML2 provider:

a. Click the SAML 2 Providers tab.

Figure 129: Adding SAML 2 Provider


b. Click Add SAML 2 Provider. The Add A SAML 2 Identity Provider dialog appears.



Figure 130: Adding a SAML 2 Identity Provider
c. Enter or select the following details:

• Application Id
• Auth provider metadata: URL or XML
• Metadata URL or Metadata XML
• Integration Name
• Custom Label
• Authentication token expiration
• Signed response
• Signed assertion
d. Click Add.
To add SAML 2 Permission:

a. Click the SAML 2 Permission tab. The SAML 2 Permissions dialog appears.
b. Click Add Permission. The Create A SAML2 Permission dialog appears.



Figure 131: Creating A SAML2 Permission
c. Enter or select the following details:

• For provider: Select the SAML2 Provider you are designating permissions for.
• Allow Access:

• Always: Once the user is authenticated, they have access to the role you specify – no conditions
required.
• When all conditions are satisfied: The user must meet all conditions specified by the
administrator to be granted access to the role specified.
• When any condition is satisfied: The user can meet any conditions specified by the administrator
to be granted access to the role specified.
• Conditions: Specify your assertion claims and their values which correspond with the roles you wish to
grant.
• Grant roles: Select the desired roles you wish to grant to your users. You can add multiple role sets using
the Add button.
d. Click Save.
e. To update the SAML 2 permissions of the users in your account, click the SAML 2 Permissions tab. The
SAML 2 Permissions page displays the list of all users in your account.
f. Click the ellipsis icon against the user you want to edit the SAML 2 permissions for, and then click Update.
The Update a rule dialog appears.



Figure 132: Updating a SAML 2 Permission Rule
g. Edit the details, such as roles.
h. Click Save.



8. To invite users with API authentication:

a. Click the API tab. The APIs dialog appears.


b. Click Add API.

Figure 133: Adding an API


c. Enter a name for the API and select the desired role.
d. Click Add.
e. To update an API, click the API tab. The APIs page displays the list of all APIs in your account. Click the
ellipsis icon against the API you want to edit. You get Update, Delete, and Manage options. To update the
API, click Update. The Update API dialog appears.

Figure 134: Updating an API


f. Enter details, such as Roles.



g. Click Save.
To manage API credentials, click Manage. The Manage API Credentials dialog appears. Click the trash icon
if you want to delete the API key.
To delete an API, click Delete.

9. To invite users with Secure Anonymous: You can create many users without email invitation or activation.
Mass user creation can be used to deliver training and certification tests to end users who are guest users (not
employees, but clients or anonymous users). This solution does not rely on any existing identity provider
integration.

a. Click the Secure Anonymous tab.

Figure 135: Anonymous Access Provider


b. Click Add Provider. The Add Anonymous Access Provider dialog appears.

Figure 136: Adding Anonymous Access Provider


c. Enter or select Name, Description, Token Duration, and Roles. Click Save.



d. After you have created a Secure Anonymous Token Provider and set the desired token duration and roles,
click the ellipsis next to your Anonymous Access Provider and click Playground.
e. Specify the number of tokens you need, enable the Embed token in a URL toggle, and then click Generate
Anonymous Tokens.
f. All tokens and their pre-constructed URLs are copied to your clipboard. You can now distribute these URLs to
your end users to give them access to your NC2 environment.

Managing Support Authorization


NC2 specialists, a group within the Nutanix Support team, can view customer names, organization names, and
cluster details. However, they cannot access these entities or make any changes to them.
When you report any issue with your NC2 cluster, the specialists can request admin-level access to your cluster
entities to be able to view the cluster details and aid you in the troubleshooting process. If the specialist makes any
changes, they are logged in the audit trail. Admin-level access requests are granted by default; however,
NC2 provides a way to manage support authorization so that you can allow complete access, allow partial access, or
block access to your entities.
Perform the following steps to manage support authorization:

Procedure

1. Sign in to the NC2 console.

2. Click the Organizations tab.

3. Click the ellipsis icon against the organization entity, and then click Users.

4. Click the Support tab. The Support Options page appears.


Under Support Options, you can specify how much control you would like to grant NC2 support engineers.

Figure 137: Support Authorization



5. Select the required option under Support Authorization:

• Full access to this organization and its accounts: Grants NC2 support engineers the same level of
access as a Customer Administrator.
• Full access without ability to start sessions and manage users: NC2 support engineers may not
start sessions to your workload VMs.
• No Access: NC2 support engineers have no access to your customer and organization(s).

6. If you choose to give full access, you can grant it to specific NC2 specialists. Click Add Personnel and then
enter the email address of the NC2 specialist.
To revoke access, click the trashcan symbol listed to the right of the Nutanix staff member you would like to
remove from the Authorized Nutanix Personnel list. Click Save to apply your changes.



API KEY MANAGEMENT FOR NC2
You can create API keys that can be used to assign roles to NC2 users.
Follow these steps to create an API key for NC2:



Procedure

1. Create an API key:

a. Sign in to https://my.nutanix.com with your My Nutanix account.

Note: Ensure that you select the correct workspace from the Workspace list on the My Nutanix dashboard. For
more information on workspaces, see Workspace Management.

b. In the My Nutanix dashboard, go to the API Key Management tile and click Launch.
If you have previously created API keys, a list of keys is displayed.
c. Click Create API Keys to create a new key.
The Create API Key dialog appears.

Figure 138: Creating an API Key


d. Select or enter the following details:

• Name: Enter a unique name for your API key to help you identify the key.
• Scope: Select the NC2 scope category under Cloud from the Scope list.

• Role: Select the NC2 role for which the API Key authorization/permissions will be used. You can select
one of these roles:

• Admin: Create or delete a cluster and all permissions that are assigned to the User role.
• User: Manage clusters, hibernate and resume a cluster, update cluster capacity, and all permissions that
are assigned to the Viewer role.
• Viewer: View account, organization, cluster, and tasks on the NC2 console.
e. Click Create.
The Created API dialog is displayed.

Figure 139: Created API Key


f. Copy the API Key and Key ID field values and store them securely for use. You can use the clipboard button
to copy the value to your clipboard.

Note: You cannot recover the generated API key and key ID after you close this dialog.

For more details on API Key management, see the API Key Management section in the Licensing Guide.



2. Generate a JSON Web Token (JWT) for authentication to call the REST APIs.
You can clone the script from https://github.com/nutanix/generate-jwt-key and update it as needed.

Note: This step uses Python to generate a JWT. You can use other programming languages, such as
JavaScript and Golang.

a. Run the following command to install the PyJWT package:

pip install PyJWT==2.3.0

b. Replace the API Key and Key ID values in the following Python script and then run it to generate a JWT.
You can also specify, in seconds, how long the JWT remains valid. In the requesterip attribute,
enter the requester IP.

from datetime import datetime
from datetime import timedelta
import base64
import hmac
import hashlib
import jwt  # PyJWT

api_key = "enter the API Key"  # API_KEY
key_id = "enter the Key ID"    # KEY_ID
aud_url = "https://apikeys.nutanix.com"

def generate_jwt():
    curr_time = datetime.utcnow()
    payload = {
        "aud": aud_url,
        "iat": curr_time,
        # Adjust the timedelta to change how long the token stays valid.
        "exp": curr_time + timedelta(seconds=120),
        "iss": key_id,
        "metadata": {
            "reason": "fetch usages",
            "requesterip": "enter the requester IP",
            "date-time": curr_time.strftime("%m/%d/%Y, %H:%M:%S"),
            "user-agent": "datamart"
        }
    }
    # Derive the signing secret from the API key and key ID.
    signature = base64.b64encode(hmac.new(bytes(api_key, 'UTF-8'), bytes(key_id, 'UTF-8'),
                                          digestmod=hashlib.sha512).digest())
    # Sign the payload with HS512; the key ID travels in the JWT header.
    token = jwt.encode(payload, signature, algorithm='HS512',
                       headers={"kid": key_id})
    print("Token (Validate): {}".format(token))

generate_jwt()

c. A JWT is generated. Copy the JWT to your system for further use. The JWT can be used as the
Authorization header when validating the API call. The JWT remains valid for the duration that you
specified.
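As an illustrative sketch only, the generated token would typically be sent in the Authorization header of the REST call; the endpoint below is a placeholder, not a documented NC2 API path, so substitute the actual API URL you are calling:
curl -H "Authorization: Bearer <JWT token>" "https://<NC2-API-endpoint>"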



NC2 PLANNING GUIDANCE
This section describes how you can plan costs, sizing, and capacity for your Nutanix Cloud Clusters (NC2)
infrastructure.

Costs
Costs for deploying an NC2 infrastructure include the following:
1. AWS EC2 bare-metal instances: AWS sets the cost for EC2 bare-metal instances. Engage with AWS or see their
documentation about how your EC2 bare-metal instances are billed. For more information, see the following
links:

• EC2 Pricing
• AWS Pricing Calculator
2. NC2 on AWS: Nutanix sets the costs for running Nutanix clusters in AWS. Engage with your Nutanix sales
representatives to understand the costs associated with running Nutanix clusters on AWS.

Sizing
You can use the Nutanix Sizer tool to create the optimal Nutanix solution for your needs. See the Sizer
User Guide for more information.

Capacity Optimizations
The Nutanix enterprise cloud offers capacity optimization features that improve storage utilization and performance.
The two key features are compression and deduplication.

Compression
Nutanix systems currently offer the following two types of compression policies:

Inline
The system compresses data synchronously as it is written to optimize capacity and to maintain high performance
for sequential I/O operations. Inline compression only compresses sequential I/O to avoid degrading performance for
random write I/O.

Post-Process
For random workloads, data is written to the SSD tier uncompressed for high performance. Compression occurs after
cold data migrates to lower-performance storage tiers. Post-process compression acts only when data and compute
resources are available, so it does not affect normal I/O operations.
Nutanix recommends that you carefully consider the advantages and disadvantages of compression for your specific
applications. For further information on compression, see the Nutanix Data Efficiency tech note.



COST ANALYTICS
NCM Cost Governance (formerly Beam) enables you to gain visibility into your Nutanix Cloud Clusters
(NC2) spend in AWS. Cost Governance provides visibility into your cloud consumption and, in turn, helps you
optimize and control the usage of your Nutanix clusters running in AWS.
If you use Cost Governance with NC2, you can analyze the cost of EC2 bare-metal instances, the cost per instance
type, the cost of EC2 outbound traffic and network interfaces, and the S3 bucket spend during hibernation of the
Nutanix clusters in AWS.

NC2 applies the following tag to the AWS resources of each cluster:

• Key: nutanix:clusters:cluster-uuid
• Value: UUID of the cluster created in AWS
You must add and activate the nutanix:clusters:cluster-uuid tag as a cost allocation tag in AWS so that Cost
Governance can successfully display the cost analytics of Nutanix clusters in AWS.
For more information about setting up and using Cost Governance, see the NCM Cost Governance documentation.

Integrating Cost Governance with NC2


To integrate Cost Governance with NC2, you must have a Cost Governance subscription and configure
Cost Governance with your AWS account.

About this task


Perform the following tasks to integrate Cost Governance with NC2.

Procedure

1. Subscribe to the Cost Governance service.


See the Cost Governance (Beam) section in the Cloud Services Administration Guide for instructions about
how to subscribe to the Cost Governance service.

2. Configure Cost Governance with your AWS account.


See Adding AWS Account (Cost Governance) in the NCM Cost Governance User Guide for instructions
about how to perform this task.

3. In AWS, add and activate the NC2 tag nutanix:clusters:cluster-uuid as a user-defined tag.
See the Activating User-Defined Cost Allocation Tags section in the AWS documentation.
The tag becomes active after approximately 24 hours.

Note: Add and activate the tag by using the payer account of your organization in AWS.
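If you prefer to script this step, the following is a hedged sketch that activates the tag through the AWS
Cost Explorer API by using a recent version of boto3. It assumes the tag has already appeared in your billing
data and that you run it with credentials for the payer account.

import boto3

ce = boto3.client("ce")  # AWS Cost Explorer

# Activate the NC2 cluster UUID tag as a cost allocation tag.
response = ce.update_cost_allocation_tags_status(
    CostAllocationTagsStatus=[
        {"TagKey": "nutanix:clusters:cluster-uuid", "Status": "Active"}
    ]
)
print(response)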

4. Sign in to the Cost Governance console to see the cost analytics of your Nutanix clusters in AWS.

Displaying Cost Analytics in the Cost Governance Console


In the Cost Governance console, you can display the cost of EC2 bare-metal instances, the cost per instance
type, the cost of EC2 outbound traffic and network interfaces, and the S3 bucket spend during hibernation of
the Nutanix clusters in AWS.

About this task


In the Cost Governance console, perform the following.



Note: For the up-to-date instructions about how to perform the following tasks, see the NCM Cost Governance
documentation.

Procedure

1. Sign in to the Cost Governance console.

2. Select Cost Governance in the application selection menu.

3. Select AWS and your AWS account in the cloud and account selection menu.

4. Click Analyze to display the Cost Analytics screen.

5. To display the cost of EC2 instances, do the following:

a. In the Cost Analytics screen, go to Compute > Instances.


b. In the Filters pane, in the Tag Key field, select the nutanix:clusters:cluster-uuid and, in the Tag Value
field, select the UUID of the cluster for which you want to view the spend.
c. Click Apply.

6. To display the cost for each instance type, do the following:

a. In the Cost Analytics screen, go to Compute > Instance Types.


b. In the Filters pane, in the Tag Key field, select the nutanix:clusters:cluster-uuid and, in the Tag Value
field, select the UUID of the cluster for which you want to view the spend.
c. Click Apply.

7. To display the cost of the EC2 outbound traffic, do the following:

a. In the Cost Analytics screen, go to Data Transfer.


b. In the Filters pane, in the Tag Key field, select the nutanix:clusters:cluster-uuid and, in the Tag Value
field, select the UUID of the cluster for which you want to view the spend.
c. Click Apply.

8. To display the cost of network interfaces, do the following:

a. In the Cost Analytics screen, go to Compute > Subservices.


b. In the Filters pane, in the Tag Key field, select the nutanix:clusters:cluster-uuid and, in the Tag Value
field, select the UUID of the cluster for which you want to view the spend.
c. Click Apply.

9. To display the S3 bucket spend during hibernation, do the following:

a. In the Cost Analytics screen, go to Storage.


b. In the Filters pane, in the Tag Key field, select the nutanix:clusters:cluster-uuid and, in the Tag Value
field, select the UUID of the cluster for which you want to view the spend.
c. Click Apply.



FILE ANALYTICS
Nutanix clusters running in AWS support File Analytics if you are using the Files feature of Nutanix. File
Analytics runs as a VM in Prism Element and not as a separate EC2 instance.
Install the File Analytics VM from the Prism Element web console.

Note: Nutanix Cloud Clusters (NC2) supports File Analytics versions 2.2.0 and later.

See the Files Analytics documentation on the Nutanix Support portal for more information about File Analytics.
In the Prism Element web console, go to the Files page and click File Analytics.
If you are accessing the VM from inside the VPC, you can access it by using the File Analytics IP address. If
you want to access the File Analytics VM from outside the VPC, you must configure a load balancer that has a
public IP address.

Note: Nutanix recommends that you enable File Analytics for the desired file server before you add a load balancer to the
File Analytics VM.

Configure a load balancer with the following settings (a configuration sketch follows this section):


1. The target type must be the IP address of the File Analytics VM.
2. The target group must contain only one IP address, that is, the File Analytics VM IP address, because File
Analytics is a single-node VM.
3. Allow traffic only on port 3000 as part of the listener definition.
4. The protocol must be HTTPS.
5. Configure health checks at "/".
After you add a load balancer, you cannot access the File Analytics VM from the Files page. Use the following link to
directly access the File Analytics VM:
<Load balancer link>:3000/#dashboard?fs_id=<file server uuid>&user_name=<user name of
the user>
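The following is a minimal boto3 sketch of the target group configuration described above. The target group
name, VPC ID, and File Analytics VM IP address are placeholders; adjust them, and the listener setup, to your
environment.

import boto3

elbv2 = boto3.client("elbv2")

# Target group of type "ip" for the single File Analytics VM,
# serving HTTPS on port 3000 with health checks at "/".
tg = elbv2.create_target_group(
    Name="file-analytics-tg",        # placeholder name
    Protocol="HTTPS",
    Port=3000,
    VpcId="vpc-0123456789abcdef0",   # placeholder VPC ID
    TargetType="ip",
    HealthCheckProtocol="HTTPS",
    HealthCheckPath="/",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Register the File Analytics VM IP as the only target.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "10.0.0.50", "Port": 3000}],  # placeholder VM IP
)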



DISASTER RECOVERY AND BACKUP
This section explains the Disaster Recovery and backup options that NC2 supports.

Disaster Recovery
NC2 supports Asynchronous and NearSync replication. NearSync replication is supported with AOS 6.7.1.5 and later,
while Asynchronous replication is supported with all supported AOS versions. NearSync replication is supported only
when clusters run AHV; NC2 does not support cross-hypervisor disaster recovery. For more information on Nutanix
Disaster Recovery capabilities, see Nutanix Disaster Recovery Guide.
You can pair the Prism Central of the Nutanix cluster running in AWS with the Prism Central of the Nutanix cluster
running in your on-premises datacenter. You must configure connectivity between your on-prem datacenter and the
AWS VPC by using either AWS VPN or AWS Direct Connect. You must also ensure that the ports required for
replication are open on the management security group. The existing best practices listed in the Nutanix Disaster
Recovery Guide apply.
See the AWS documentation at AWS Site-to-Site VPN to connect to the AWS VPC by using a VPN.
See the AWS documentation at Connect Your Data Center to AWS to connect to the AWS VPC by using Direct
Connect.
If you want to use protection policies and recovery plans to protect applications across multiple Nutanix clusters,
set up Nutanix Disaster Recovery (formerly Leap) from Prism Central. Nutanix Disaster Recovery allows you to
stage your application to be restored in the right order. You can also use protection policies to failback to on-prem if
required.
NC2 on AWS, when using Prism Central 2022.9 or later, also supports disaster recovery from on-prem to AWS over
layer 2 stretched subnets. Layer 2 subnet extension assumes that the reachability between on-prem and AWS is over
a VPN or AWS Direct Connect. NC2 on AWS supports partial failover (with Layer 2 stretch) and complete failover
(with or without Layer 2 stretch) while maintaining IP reachability.
IP addresses of VMs can be maintained while the VMs are migrated between:

• On-prem and NC2 on AWS clusters


• Two NC2 on AWS clusters

Note: For IPs to be maintained, ensure that there are no IP conflicts prior to the creation of a recovery plan.

For more information on disaster recovery without the Layer 2 stretch, see Disaster Recovery Without Layer 2
Stretch.
For more information on disaster recovery over the Layer 2 stretch, see Disaster Recovery Over Layer 2 Stretch.

Disaster Recovery Without Layer 2 Stretch


For more information on disaster recovery, see Disaster Recovery Between On-Prem AZ and Nutanix Cloud
Cluster (NC2) and Disaster Recovery Between Two Nutanix Cloud Clusters.

Disaster Recovery Over Layer 2 Stretch


Ensure that you complete the following prerequisites before configuring disaster recovery from on-prem to AWS
VPC over layer 2 stretched subnets:

• Understand how layer 2 virtual network extension works. For details, see the AHV Administration Guide.
• Understand how to use Nutanix Disaster Recovery. For details, see Nutanix Disaster Recovery Guide.



• Ensure that you have configured network connectivity for user VMs and set up VPN or AWS Direct Connect
between the on-prem cluster and clusters running in AWS. For more information, see User VM Network
Management.
• The clusters must be running Prism Central 2022.9 at a minimum.
• The following ports must be open (a security group sketch follows this list):

• UDP port 500


• UDP port 4500
• UDP port 4789 (for VTEP)
The following protocols must also be available:

• ESP (Encapsulating Security Payload for VPN)


• ICMP
• SSH
• The Layer 2 stretch requires non-overlapping management CIDRs; the workload CIDRs can overlap.
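As an illustration only, the following boto3 sketch opens the listed ports and protocols on an existing AWS
security group. The security group ID and the peer CIDR are placeholders that you must replace with your own
values.

import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[
        # IKE and IPsec NAT traversal for the VPN
        {"IpProtocol": "udp", "FromPort": 500, "ToPort": 500,
         "IpRanges": [{"CidrIp": "192.0.2.0/24"}]},
        {"IpProtocol": "udp", "FromPort": 4500, "ToPort": 4500,
         "IpRanges": [{"CidrIp": "192.0.2.0/24"}]},
        # VxLAN for the VTEP gateway
        {"IpProtocol": "udp", "FromPort": 4789, "ToPort": 4789,
         "IpRanges": [{"CidrIp": "192.0.2.0/24"}]},
        # ESP (IP protocol number 50) for the VPN
        {"IpProtocol": "50",
         "IpRanges": [{"CidrIp": "192.0.2.0/24"}]},
        # ICMP (all types) and SSH
        {"IpProtocol": "icmp", "FromPort": -1, "ToPort": -1,
         "IpRanges": [{"CidrIp": "192.0.2.0/24"}]},
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "192.0.2.0/24"}]},
    ],
)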
Perform the following steps to configure Layer 2 stretched network connectivity for disaster recovery:

Note: The following steps cover both VPN and VTEP gateway. The fields vary based on your selection for VPN or
VTEP gateway.

Procedure

1. Pair the Prism Central at the primary AZ with the Prism Central at the recovery AZ.
The Availability Zone Type must be selected as Physical Location. Ensure that the availability zone is reachable.
The primary AZ and the recovery AZ can be:

• On-prem and NC2 on AWS clusters


• NC2 on AWS and NC2 on AWS clusters
For more information, see Pairing Availability Zones.

2. Create a subnet for Prism Central on the primary AZ.


You can also use an existing subnet if that subnet is not used for user VMs.
You can skip the IP Address Management and DHCP Settings fields for VLAN.
For more information, see Creating a Subnet.

3. Create a local gateway on the recovery AZ.


You can choose either VPN or VTEP gateway.
If you have selected the VPN gateway service: This gateway creates the Nutanix VPN VM on NC2 on AWS. You
must select the Gateway Attachment as VLAN.
If you have selected the VTEP gateway service: The VxLAN (UDP) port must be kept as the default 4789. If any
other port is used, ensure that the port is open.
For more information, see Creating a Network Gateway.



4. Create a local gateway on the primary AZ subnet.
You can choose either VPN or VTEP gateway.
If you have selected the VPN gateway service: You must select the Gateway Attachment based on the primary
AZ subnet. The static IP address for the VPN comes from the subnet created in Step 2. This gateway creates the
Nutanix VPN/VTEP VM on the primary AZ.
If you have selected the VTEP gateway service, the VxLAN (UDP) port must be kept as the default 4789. If any
other port is used, ensure that the port is open.
For more information, see Creating a Network Gateway.

5. Create a remote gateway on the recovery AZ.


You can choose either a VPN or VTEP gateway.
If you have selected the VPN gateway service: Enter the public IP address of the remote endpoint.
If you have selected the VTEP gateway service: The VxLAN (UDP) port must be kept as the default 4789. If any
other port is used, ensure that the port is open.
The VTEP IP Addresses field specifies the IP addresses of the remote endpoints for which you want to create the gateway.
For more information, see Creating a Network Gateway.

6. Create a remote gateway on the primary AZ.


You can choose either VPN or VTEP gateway.
If you have selected the VPN gateway service, you must use the NC2 on AWS local gateway IP address for the
Public IP Address option.
If you have selected the VTEP gateway service: The VxLAN (UDP) port must be kept as the default 4789. The VTEP IP
Addresses field specifies the IP addresses of the remote endpoints for which you want to create the gateway.
For more information, see Creating a Network Gateway.

7. If you must extend the subnet over VPN, then perform these additional steps:

• Create a VPN connection on the primary AZ as an initiator.


• Create a VPN connection on the recovery AZ as an acceptor.
For more information, see Creating a VPN Connection.

8. Extend the subnet over VPN or VTEP.

Note: Ensure that you perform the subnet extension steps from Prism Central using the Networking & Security
> Connectivity > Subnet Extension option.
You must not perform these steps using the Network and Security > Subnets > List > Actions
> Manage Extensions option or the Virtual Private Cloud > Subnet > Manage Extension
option.

• To extend a subnet over VPN, see Layer 2 Virtual Subnet Extension Over VPN.
• To extend a subnet over VTEP, see Layer 2 Virtual Subnet Extension Over VTEP.



9. Configure disaster recovery.

Note: Ensure that you have installed Nutanix Guest Tools (NGT) on the user VMs for static IP address mapping of
user VMs between source and target virtual networks and static IP address preservation after failover.

For more information on the typical tasks that you would perform, see Nutanix Disaster Recovery Guide.

Preserving UVM IP Addresses During Disaster Recovery


When using NC2 on AWS for a Disaster Recovery use case, you might want to preserve the IP addresses of UVMs
when you fail over. NC2 on AWS supports the following scenarios while maintaining IP reachability:

• Partial subnet failover with Layer 2 stretch


• Full subnet failover with or without Layer 2 stretch
NC2 on AWS automatically assigns AWS ENI IPs in a way that avoids conflict with the UVM IPs and maintains the
VM IP address while the VMs are migrated between:

• On-prem and NC2 on AWS clusters


• Two NC2 on AWS clusters

Note: For IPs to be maintained, ensure that there are no IP conflicts between the UVMs on the primary site and UVMs
and ENI IPs on the recovery site prior to the creation of a Recovery Plan.

For more information, see Disaster Recovery with NC2 on AWS.

Partial Subnet Failover


Disaster recovery over a Layer 2 stretched network enables partial failover of workloads while retaining their IP
addresses when VMs are migrated.
With Layer 2 VTEP stretch, you can connect NC2 on AWS VMs to VMs running in on-prem servers in the same
stretched subnet. For Layer 2 connectivity, you can use VPN (encrypted) or VTEP (unencrypted) connectivity across
sites based on business needs.
Partial failover can be used, for example, when the Primary AZ needs maintenance, but some of the UVMs must not
have any downtime. In this case, you can fail over some of the UVMs to the Remote AZ while keeping the other
UVMs running at the Primary AZ.
In the following example, the on-prem cluster and NC2 on AWS cluster have a non-overlapping Cluster Management
subnet and Prism Central subnet and an overlapping subnet for UVMs. The Layer 2 stretched network is established
using Gateway appliances, while Layer 3 connectivity is achieved using a VPN.
The On-prem cluster has two UVMs - UVM2 with 10.YY.101.50 and UVM3 with 10.YY.101.51, while the NC2
on AWS cluster has UVM1 with 10.YY.101.150 running on the overlapping subnet stretched over Layer 2 network.
When UVM2 is failed over from On-prem to NC2 on AWS, its IP address is retained post-migration as long as there
are no IP conflicts between the UVMs on the primary site and UVMs and ENI IPs on the recovery site prior to the
creation of a Recovery Plan.



Figure 140: Partial Subnet Failover Example

For more information, see Disaster Recovery Over Layer 2 Stretch.

Full Subnet Failover


You might use NC2 on AWS for Disaster Recovery, with or without Layer 2 stretch, to achieve full subnet failover
from on-prem to NC2 on AWS, or vice versa, and retain the UVM IP address while migrating.
With full subnet failover, you can bring up the whole subnet from the Primary AZ in the Remote AZ.
In the following example, the on-prem cluster and NC2 on AWS cluster have a non-overlapping Cluster Management
subnet and Prism Central subnet. Connectivity between on-prem and NC2 on AWS can be established using a Site-to-
Site VPN or AWS Direct Connect. For more information, see AWS Documentation.
The On-prem cluster includes a Protected subnet with two UVMs - UVM2 with 10.YY.101.50 and UVM3 with
10.YY.101.51. NC2 on AWS has an overlapping Recovery subnet 1 and a non-overlapping Recovery subnet 2.
With full failover, UVM3 and UVM2 are failed over to the overlapping Recovery subnet 1, where the complete IP
addresses of UVMs are retained, and to non-overlapping Recovery subnet 2, where only the offset is retained.



Figure 141: Full Subnet Failover Example

Integration with Third-Party Backup Solutions


Deploying a single cluster in AWS is well suited to more ephemeral workloads where you want to take advantage of
performance improvements and reuse the automation pipelines you use on-prem.
Nutanix recommends that you use backup products compatible with AHV and target S3 as the backup destination.
Nutanix qualifies most backup products that are compatible with AHV. For example, HYCU and Veeam are
compatible with AHV and are qualified as backup solutions in an NC2 environment. See the HYCU and
Veeam documentation for instructions about how to implement and configure these solutions.



SYSTEM MAINTENANCE
This section describes the system and operational features of NC2 that enable you to configure data
protection, perform routine and emergency maintenance, monitor the health of the cluster through health
checks, and access support services.

Health Check
Nutanix provides robust mechanisms to monitor the health of your clusters by using Nutanix Cluster Check and
health monitoring through the Prism Element web console.
You can use the NC2 console to check the status of the cluster and view notifications and logs that the NC2 console
provides.

Figure 142: Check status and notifications for a cluster

For more information on how to assess and monitor the health of your cluster, see Health Monitoring.

Routine Maintenance
This section describes routine maintenance activities such as monitoring certificates, updating software, and
managing licenses and system credentials.

Monitoring Certificates
You must monitor your certificates for expiration. Nutanix does not provide a process for monitoring certificate
expiration, but AWS provides an AWS CloudFormation template that can help you set up alarms.
See acm-certificate-expiration-check for more information. Follow the AWS best practices for certificate renewals.
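If you use AWS Config, the following boto3 sketch deploys the acm-certificate-expiration-check managed rule
as one possible alarm mechanism; the rule name and the expiration threshold shown are placeholders.

import json

import boto3

config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "acm-certificate-expiration-check",  # placeholder name
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "ACM_CERTIFICATE_EXPIRATION_CHECK",
        },
        # Flag certificates within 45 days of expiring (placeholder threshold).
        "InputParameters": json.dumps({"daysToExpiration": "45"}),
    }
)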

Nutanix Software Updates


You can track and manage the software versions of all the entities in your Nutanix cluster by using methods described
in Life Cycle Manager Guide and Acropolis Upgrade Guide.
For an on-prem cluster, Life Cycle Manager (LCM) allows you to perform upgrades of the BMC, BIOS, and
any hardware component firmware. However, these components are not applicable to clusters deployed in AWS.
Therefore, LCM does not list these items in the upgrade inventory.



Managing Nutanix Licenses
After you log on to the Nutanix Support portal at https://portal.nutanix.com and click the Licenses link on the
portal home page, you can expand Clusters in the left pane to manage the licenses.
The Clusters page includes the following category pages depending on the license type used for your NC2 cluster:

• Licensed Clusters. Displays a table of licensed clusters including the cluster name, cluster UUID, license tier,
and license metric. NC2 clusters with AOS and NCI licensing appear under Licensed Clusters.
• Cloud Clusters. Displays a table of licensed Nutanix Cloud Clusters including the cluster name, cluster UUID,
billing mode, and status. NC2 clusters with AOS licensing appear under Cloud Clusters. NCI-licensed clusters
do not appear under Cloud Clusters.
To purchase and manage the software licenses for your Nutanix clusters, see the License Manager Guide.

System Credentials
See the AWS documentation to manage your AWS accounts and their permissions.
For NC2 credentials, see the NC2 Payment Methods and User Management.

Managing Access Keys and AWS Service Limits


Nutanix recommends that you follow the AWS best practices to manage access keys and service limits.

Emergency Maintenance
The NC2 software can automatically perform emergency maintenance if you configure redundancy factor 2 (RF2) or
RF3 on your cluster to protect against rack failures and synchronous or asynchronous replication to protect against
AZ failures. For node failures, NC2 detects a node failure and replaces the failed node with a new node.
Hosts in a cluster are deployed by using a partition placement group with seven partitions. A placement group is
created for each host type, and the hosts are balanced within the placement group. The placement group, along with
the partition number, is translated into a rack ID for the node. This enables AOS Storage to place metadata and data
replicas in different fault domains.

Figure 143: NC2 on AWS Partition Placement (Multi)
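NC2 creates and manages these placement groups for you. The following boto3 sketch only illustrates, at the
AWS API level, what a seven-partition placement group of this kind looks like; the group name is a placeholder.

import boto3

ec2 = boto3.client("ec2")

# A partition placement group with seven partitions; each partition
# maps to a distinct rack, which AOS uses as a fault domain.
ec2.create_placement_group(
    GroupName="nc2-example-partition-group",  # placeholder name
    Strategy="partition",
    PartitionCount=7,
)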

A redundancy factor 2 (RF2) configuration of the cluster protects data against a single-rack failure and an RF3
configuration protects against a two-rack failure. Additionally, to protect against multiple correlated failures within a



data center and an entire AZ failure, Nutanix recommends that you set up synchronous replication to a second cluster
in a different AZ in the same Region or an asynchronous replication to an AZ in a different Region.
See Data Protection and Recovery with Prism Element for more information.

Automatic Node Failure Detection


If a node failure occurs, the NC2 software detects the failure and automatically condemns the node, adds a new
node to the cluster, and ultimately removes the failed node. Depending on the type of failure, the workload on the
failed node is either migrated or restarted on the remaining nodes.

Note: NC2 detects a node failure in a few minutes and brings a replaced node online in approximately one hour; this
duration varies depending on the time taken for data replication, the customer’s specific setup, and so on.

Troubleshooting Deployment Issues


Nutanix provides knowledge base articles to address errors that users might encounter while deploying and using
NC2 on AWS. You can find the most recent KBs here. You can also get a list of known issues in the NC2 on AWS
Release Notes.
For example, a recently observed issue is that the cluster resume workflow hangs when S3 connectivity is lost on one
of the CVMs; for details, see KB Article 000013499.

Documentation Support and Feedback


Nutanix continuously strives to improve its product documentation to ensure that users get the information they
need. With feedback, you can indicate whether you found the documentation helpful and highlight articles that need
improvement. Nutanix reviews the feedback it receives and incorporates it to improve documentation quality and
the user experience.
To share your feedback for documentation:

Procedure

1. When accessing a document on https://portal.nutanix.com/, navigate to the Feedback dialog displayed at the
bottom of the page.

Figure 144: Documentation Feedback

2. Select one to five stars to rate the page you referred to. Here, a single star means poor, and five stars mean
excellent.



3. Select the predefined feedback messages that are presented based on the number of stars selected.

Figure 145: Submit Documentation Feedback

4. Enter your suggestion on how this section can be improved.

5. Enter your email address and click Submit.

Nutanix Support
You can access the technical support services in a variety of ways to troubleshoot issues with your Nutanix cluster.
See the Nutanix Support Portal Help for more information.
Nutanix offers a support tier called Production Support for NC2.
See Product Support Programs under Cloud Services Support for more information about the Production
Support tier and SLAs.

AWS Support
Nutanix recommends that you sign up for an AWS Support Plan subscription for technical support of the AWS
entities such as Amazon EC2 Instances, VPC, and more. See AWS Support Plan Offerings for more information.



RELEASE NOTES
Nutanix recommends that you follow the NC2 on AWS Release Notes to learn more about:

• Changes or enhancements
• Known Issues
• Fixes and workarounds
• Software compatibility



COPYRIGHT
Copyright 2024 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix and the Nutanix logo are registered trademarks of Nutanix, Inc. in the United States and/or other
jurisdictions. All other brand and product names mentioned herein are for identification purposes only and may be
trademarks of their respective holders.

