Command Ref AOS v5 10
Command Reference
July 7, 2020
Contents
pulse-config: Pulse Configuration.................................................................................................. 170
rackable-unit: Rackable unit..............................................................................................................171
remote-site: Remote Site.................................................................................................................... 172
rsyslog-config: RSyslog Configuration.........................................................................................177
smb-server: Nutanix SMB server.......................................................................................................180
snapshot: Snapshot.................................................................................................................................. 181
snmp: SNMP.................................................................................................................................................. 183
software: Software.................................................................................................................................. 187
ssl-certificate: SSL Certificate..................................................................................................... 188
storagepool: Storage Pool.................................................................................................................. 189
storagetier: Storage Tier..................................................................................................................... 191
task: Tasks................................................................................................................................................... 192
user: User.....................................................................................................................................................193
vdisk: Virtual Disk.................................................................................................................................... 197
virtual-disk: Virtual Disk................................................................................................................. 200
virtualmachine: Virtual Machine.................................................................................................... 200
volume-group: Volume Groups..........................................................................................................203
vstore: VStore.......................................................................................................................................... 208
vzone: vZone..............................................................................................................................................209
3. Controller VM Commands................................................................................212
Specifying Credentials......................................................................................................................................... 212
cluster..........................................................................................................................................................................212
diagnostics.py......................................................................................................................................................... 267
genesis.......................................................................................................................................................................274
ncc.............................................................................................................................................................................. 280
setup_hyperv.py......................................................................................................................................................318
Copyright................................................................................................................. 321
License........................................................................................................................................................................321
Conventions.............................................................................................................................................................. 321
Default Cluster Credentials................................................................................................................................ 321
Version....................................................................................................................................................................... 322
1. ACROPOLIS COMMAND-LINE INTERFACE (ACLI)
Acropolis provides a command-line interface for managing hosts, networks, snapshots, and
VMs.
ads
Operations
core
Operations
ha
Operations
host
Operations
image
Operations
Create an image
Optionally, a checksum may also be specified when creating an image from a source_url, in
order to verify the correctness of the image.
<acropolis> image.create name [ annotation="annotation" ][
architecture="architecture" ][ clone_from_vmdisk="clone_from_vmdisk" ][
compute_checksum="{ true | false }" ][ container="container" ][ image_type="{raw|vhd|vmdk|
vdi|iso|qcow2|vhdx}" ][ product_name="product_name" ][ product_version="product_version"
][ sha1_checksum="sha1_checksum" ][ sha256_checksum="sha256_checksum" ][
source_url="source_url" ][ wait="{ true | false }" ]
Required arguments
name
Comma-delimited list of image names
Type: list of strings with expansion wildcards
Optional arguments
annotation
Image description
Type: string
architecture
Disk image CPU architecture
Type: image architecture
clone_from_vmdisk
UUID of the source vmdisk
Type: VM disk
compute_checksum
If True, we will compute the checksum of the image
Type: boolean
Default: false
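For example, to create an ISO image from a remote URL and verify it against a known checksum (the name, container, URL, and checksum shown are placeholders):
<acropolis> image.create myiso container=ctr1 image_type=iso source_url=http://example.com/install.iso sha256_checksum=<expected_sha256>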
Delete one or more images
<acropolis> image.delete image_list
Required arguments
image_list
Image identifiers
Type: list of images
Update an image
<acropolis> image.update [ annotation="annotation" ][ architecture="architecture"
][ image="image" ][ image_type="{raw|vhd|vmdk|vdi|iso|qcow2|vhdx}" ][ name="name" ][
product_name="product_name" ][ product_version="product_version" ]
Required arguments
None
Optional arguments
annotation
Image description
Type: string
architecture
Disk image CPU architecture
Type: image architecture
image
Image identifier
Type: image
image_type
Image type
Type: image type
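For example, to rename an image and update its annotation (the identifiers shown are placeholders):
<acropolis> image.update image=myiso name=myiso-v2 annotation="updated installer image"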
iscsi_client
Operations
microseg
Operations
net
2. Create cluster_vswitch configuration for bridge br0 and bond 100G with lacp
<acropolis> net.create_cluster_vswitch br0 uplink_grouping=kAll100G
nic_team_policy=kBalanceTcp lacp=true
3. Create cluster_vswitch configuration for bridge br0 and bond 100G with lacp and
host override for host1:<uuid1> with uplinks=eth0,eth1 and for host2:<uuid2> with
uplinks=eth2,eth3
<acropolis> net.create_cluster_vswitch br0 uplink_grouping=kAll100G
nic_team_policy=kBalanceSlb lacp=true host_override=[uuid1:eth0:eth1],[uuid2:eth2:eth3]
Deletes a network
To determine which VMs are on a network, use net.list_vms.
<acropolis> net.delete network
Required arguments
network
Network identifier
Type: network
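For example, to check a network for attached VMs and then delete it (the network name shown is a placeholder):
<acropolis> net.list_vms vlan10
<acropolis> net.delete vlan10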
nf
Operations
parcel
snapshot
Operations
vg
2. Attach an external client, identified by the network ID 10.1.1.1, to the VG vg2:
<acropolis> vg.attach_external vg2 initiator_network_id=10.1.1.1
2. Clone two VGs vg1 and vg2 with target_secrets vg1_target_secret and vg2_target_secret
<acropolis> vg.clone vg1,vg2 clone_from_vg=source-vg
target_secret_list=vg1_target_secret,vg2_target_secret
2. Clone a disk from the ADSF file /ctr/plan9.iso, and add it to the first open slot
<acropolis> vg.disk_create my_vg clone_from_adsf_file=/ctr/plan9.iso
3. Clone a disk from the existing vmdisk, and add it to the first open slot
<acropolis> vg.disk_create my_vg clone_from_vmdisk=0b4fc60b-cc56-41c6-911e-67cc8406d096
vm
Operations
Clones a VM
The following suffixes are valid: M=2^20, G=2^30.
<acropolis> vm.clone name_list [ clone_affinity="{ true
| false }" ][ clone_from_snapshot="clone_from_snapshot" ][
clone_from_vm="clone_from_vm" ][ clone_ip_address="clone_ip_address"
][ memory="memory" ][ num_cores_per_vcpu="num_cores_per_vcpu" ][
num_threads_per_core="num_threads_per_core" ][ num_vcpus="num_vcpus" ]
Required arguments
name_list
Comma-delimited list of VM names
Type: list of strings with expansion wildcards
Optional arguments
clone_affinity
Clone source VM's affinity rules.
Type: boolean
clone_from_snapshot
Snapshot from which to clone
Type: snapshot
clone_from_vm
VM from which to clone
Type: VM
clone_ip_address
IP addresses to assign to clones
Type: list of IPv4 addresses
memory
Memory size
Type: size with MG suffix
num_cores_per_vcpu
Number of cores per vCPU
Type: int
num_threads_per_core
Number of threads per core
Type: int
num_vcpus
Number of vCPUs
Type: int
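For example, to create two clones of an existing VM with more memory and vCPUs than the source (the names and sizes shown are placeholders):
<acropolis> vm.clone web1,web2 clone_from_vm=web-gold memory=4G num_vcpus=2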
2. Clone a disk from the ADSF file /ctr/plan9.iso, and use it as the backing image for a
newly-created CD-ROM drive on the first available IDE slot.
<acropolis> vm.disk_create my_vm clone_from_adsf_file=/ctr/plan9.iso cdrom=1
3. Clone a disk from the existing vmdisk, and attach it to the first available SCSI slot.
<acropolis> vm.disk_create my_vm clone_from_vmdisk=0b4fc60b-cc56-41c6-911e-67cc8406d096
5. Create a new empty CD-ROM drive, and attach it to the first available IDE slot.
<acropolis> vm.disk_create my_vm empty=1 cdrom=1
2. Replace the disk at IDE:0 with a clone of /ctr/plan9.iso. Note that if IDE:0 is a CD-ROM drive,
it remains such.
<acropolis> vm.disk_update my_vm disk_addr=ide.0 clone_from_adsf_file=/ctr/plan9.iso
2. Create a consistent snapshot across several VMs, using the default naming scheme.
<acropolis> vm.snapshot_create vm1,vm2,vm3
vm_group
Operations
Tip: Refer to KB 1661 for the default credentials of all cluster components.
Procedure
1. Verify that your system has Java Runtime Environment (JRE) version 5.0 or higher.
To check which version of Java is installed on your system or to download the latest version,
go to http://www.java.com/en/download/installed.jsp.
Procedure
1. On your local system, open a command prompt (such as bash for Linux or CMD for Windows).
2. At the command prompt, start the nCLI by using one of the following commands.
• Replace management_ip_addr with the IP address of any Nutanix Controller VM in the cluster.
• Replace username with the name of the user (if not specified, the default is admin).
• (Optional) Replace user_password with the password of the user.
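A typical invocation looks like the following (shown for illustration; the exact command depends on where the nCLI is installed on your system):
ncli -s management_ip_addr -u 'username' -p 'user_password'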
Table 1: Troubleshooting

Error: ncli not found or not recognized as a command
Explanation/Resolution: The Windows %PATH% or Linux $PATH environment variable is not set.

Error: Bad credentials
Explanation/Resolution: The admin user password has been changed from the default and you
did not specify the correct password. Type exit and start the nCLI again with the correct
password.
Results
The Nutanix CLI is now in interactive mode. To exit this mode, type exit at the ncli> prompt.
action can be replaced by any valid action for the preceding entity. Each entity has a unique
set of actions, but a common action across all entities is list. For example, you can type the
following command to request a list of all storage pools in the cluster.
ncli> storagepool list
Some actions require parameters at the end of the command. For example, when creating an
NFS datastore, you need to provide both the name of the datastore as it will appear to the
hypervisor and the name of the source storage container.
ncli> datastore create name="NTNX-NFS" ctr-name="nfs-ctr"
Parameter-value pairs can be listed in any order, as long as they are preceded by a valid entity
and action.
Tip: To avoid syntax errors, surround all string values with double-quotes, as demonstrated in the
preceding example. This is particularly important when specifying parameters that accept a list of
values.
Embedded Help
The nCLI provides assistance on all entities and actions. By typing help at the command line,
you can request additional information at one of three levels of detail.
help
Provides a list of entities and their corresponding actions
entity help
Provides a list of all actions and parameters associated with the entity, as well as which
parameters are required, and which are optional
entity action help
Provides a list of all parameters associated with the action, as well as a description of
each parameter
The nCLI provides additional details at each level. To control the scope of the nCLI help output,
add the detailed parameter, which can be set to either true or false.
For example, type the following command to request a detailed list of all actions and
parameters for the cluster entity.
ncli> cluster help detailed=true
You can also type the following command if you prefer to see a list of parameters for the
cluster edit-params action without descriptions.
ncli> cluster edit-params help detailed=false
nCLI Entities
alerts: An Alert
authconfig: Configuration information used to authenticate users
cloud: Manage AWS or AZURE Cloud
cluster: A Nutanix Complete Cluster
container: A Storage Container is a container for virtual disks
alerts: Alert
Description An Alert
Alias alert
Operations
• Acknowledge Alerts : acknowledge | ack
• Update Alert Configuration : edit-alert-config | update-alert-config
• List Alert Configuration : get-alert-config
• List history of Alerts : history
• List of unresolved Alerts : list | ls
• Resolve Alerts : resolve
Acknowledge Alerts
ncli> alerts { acknowledge | ack } ids="ids"
Required arguments
ids
A comma-separated list of ids of the Alerts
Resolve Alerts
ncli> alerts { resolve } ids="ids"
Required arguments
ids
A comma-separated list of ids of the Alerts
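For example, to acknowledge and then resolve two Alerts (the IDs shown are placeholders):
ncli> alerts ack ids="8162,8163"
ncli> alerts resolve ids="8162,8163"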
Comma-separated list of values to be removed from the existing directory role mapping
ncli> authconfig { remove-from-role-mapping-values } name="name" role="role"
entity-type="entity_type" entity-values="entity_values"
Required arguments
name
Name
role
Role
entity-type
Entity Type
entity-values
Values
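For example, to remove a group value from an existing role mapping (the directory, role, and group names shown are placeholders):
ncli> authconfig remove-from-role-mapping-values name="corp-ldap" role="ROLE_CLUSTER_ADMIN" entity-type="GROUP" entity-values="storage-admins"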
cloud: Cloud
Description Manage AWS or AZURE Cloud
Alias
Operations
• Add AWS or AZURE credentials : add-credentials
• Clear all cloud credentials : clear-all-credentials
• Deploy and configure a Nutanix CVM on cloud, and create a Remote Site on
the local cluster which points to the new CVM : deploy-remote-site
• Destroy a cloud remote site : destroy-remote-site
• List AWS credentials : ls-credentials
• List AWS CVM images : ls-cvm-images
• List AWS CVMs : ls-cvms
• List AWS VPC subnets : ls-subnets
• Remove AWS credentials : remove-credentials
• Set default AWS credentials : set-default-credentials
Deploy and configure a Nutanix CVM on cloud, and create a Remote Site on the local cluster
which points to the new CVM
ncli> cloud { deploy-remote-site } cloud-type="cloud_type" region="region"
remote-site-name="remote_site_name" local-ctr-name="local_ctr_name" connectivity-
type="connectivity_type" [ instance-name="instance_name" ][ credential-
name="credential_name" ][ image-id="image_id" ][ image-name="image_name" ][ admin-
password="admin_password" ][ remote-sp-name="remote_sp_name" ][ remote-ctr-
name="remote_ctr_name" ][ subnet-id="subnet_id" ][ ssh-tunnel-port="ssh_tunnel_port"
][ azure-virtual-network="azure_virtual_network" ][ enable-proxy="{ true | false }" ][
enable-on-wire-compression="{ true | false }" ][ max-bandwidth="max_bandwidth" ][
instance-type="instance_type" ]
Required arguments
cloud-type
Type of the cloud service
region
Name of the region, e.g., us-east-1 | eu-west-1 | East Asia | Brazil South
remote-site-name
Name of the Remote Site on the local cluster
local-ctr-name
Name of a local Storage Container to be backed-up to the deployed CVM
connectivity-type
The platform to use for the cloud instance. Choose between 'vpn'(recommended)
and 'ssh-tunnel'
Optional arguments
instance-name
Prefix for the name of the instance deployed in the cloud
credential-name
Given name of the credentials
image-id
ID of the CVM image to use for deployment
image-name
Name of the CVM image to use for deployment
admin-password
Password for the nutanix user on the CVM deployed in the cloud
cluster: Cluster
Description A Nutanix Complete Cluster
Alias
• Get configuration of SMTP Server used for transmitting alerts and report
emails to Nutanix support : get-smtp-server
• Join the Nutanix storage cluster to the Windows AD domain specified in
the cluster name : join-domain
• Get the list of public keys configured in the cluster : list-public-keys | ls-
Add the configured node to the cluster. For a compute-only node, the CVM IP corresponds to
the host IP
ncli> cluster { add-node } node-uuid="node_uuid" [ server-certificate-
list="server_certificate_list" ]
Required arguments
node-uuid
UUID of the new node
Optional arguments
server-certificate-list
Comma-separated list of the key management server uuid and corresponding
certificate file path. List should be of format <server_uuid:path_to_certificate>
Clear configuration of SMTP Server used for transmitting alerts and report emails to Nutanix
support
ncli> cluster { clear-smtp-server }
Required arguments
None
Configure discovered node with IP addresses (Hypervisor, CVM and IPMI addresses)
ncli> cluster { configure-node } node-uuid="node_uuid" [ cvm-ip="cvm_ip" ][
hypervisor-ip="hypervisor_ip" ][ ipmi-ip="ipmi_ip" ][ ipmi-netmask="ipmi_netmask" ][
ipmi-gateway="ipmi_gateway" ]
Required arguments
node-uuid
UUID of the new node
Optional arguments
cvm-ip
IP address of the controller VM
hypervisor-ip
IP address of the Hypervisor Host
ipmi-ip
IPMI address of the node
ipmi-netmask
IPMI netmask of the node
ipmi-gateway
IPMI gateway of the node
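For example, to assign CVM, hypervisor, and IPMI addresses to a discovered node (all values shown are placeholders):
ncli> cluster configure-node node-uuid="c1f8f3a2-0000-0000-0000-000000000000" cvm-ip="10.1.1.21" hypervisor-ip="10.1.1.11" ipmi-ip="10.1.1.31" ipmi-netmask="255.255.255.0" ipmi-gateway="10.1.1.1"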
Generates and downloads the CSR from a discovered node, based on certificate information
from the cluster
ncli> cluster { generate-csr-for-discovered-node } cvm-ip="cvm_ip" file-
path="file_path"
Required arguments
cvm-ip
IPv6 address of the controller VM of discovered node
file-path
Path where csr from the discovered node needs to be downloaded
Get configuration of SMTP Server used for transmitting alerts and report emails to Nutanix
support
ncli> cluster { get-smtp-server }
Required arguments
None
Delete public key with the specified name from the cluster
ncli> cluster { remove-public-key | rm-public-key } name="name"
Required arguments
name
Name of the cluster public key
Set configuration of SMTP Server used for transmitting alert and report emails to Nutanix
support
ncli> cluster { set-smtp-server } address="address" [ port="port" ][
username="username" ][ password="password" ][ security-mode="security_mode" ][ from-
email-address="from_email_address" ]
Required arguments
address
Fully Qualified Domain Name (FQDN) or IPv4 address of the SMTP Server
Optional arguments
port
Port number of the SMTP Server. By default, port 25 is used
username
Username to access the SMTP Server
password
Password to access the SMTP Server
security-mode
Security mode used by the SMTP Server for data encryption and authentication.
The SMTP Server in a Nutanix cluster can be configured with one of the following
modes: 'none', 'ssl', or 'starttls'
Default: none
from-email-address
From email address to be used while sending emails (Set to '-' to clear the
existing value)
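For example, to configure an authenticated SMTP server that uses STARTTLS (the host name, port, and credentials shown are placeholders):
ncli> cluster set-smtp-server address="smtp.example.com" port="587" security-mode="starttls" username="alerts" password="secret" from-email-address="[email protected]"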
• Set the down-migrate times (in minutes) for a Storage Tier in a Storage
Container : set-down-migrate-times | set-dm-times
Get the down-migrate times (in minutes) for Storage Tiers in a Storage Container
ncli> container { get-down-migrate-times | get-dm-times }[ id="id" ][
name="name" ]
Required arguments
None
Optional arguments
id
ID of the Storage Container
name
Name of the Storage Container
Set the down-migrate times (in minutes) for a Storage Tier in a Storage Container
ncli> container { set-down-migrate-times | set-dm-times } tier-
names="tier_names" [ id="id" ][ name="name" ][ time-in-min="time_in_min" ]
Required arguments
tier-names
A comma-separated list of Storage Tiers
Optional arguments
id
ID of the Storage Container
name
Name of the Storage Container
time-in-min
Time in minutes after which to down-migrate data in a given Storage Tier in a
Storage Container
Default: 30
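For example, to down-migrate data from the SSD tier of a Storage Container after 60 minutes (the container and tier names shown are placeholders):
ncli> container set-dm-times name="ctr1" tier-names="SSD-SATA" time-in-min="60"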
List of results of the certificate tests that were performed against key management servers
ncli> data-at-rest-encryption { get-recent-certificate-test-results }[ host-
ids="host_ids" ][ key-management-server-names="key_management_server_names" ]
Required arguments
None
Optional arguments
host-ids
List of Host ids
key-management-server-names
List of Key Management Server names
Assigns new passwords to encryption-capable disks when the cluster is password protected. If
disk IDs are not given, rekey is performed on all disks in the cluster
ncli> data-at-rest-encryption { rekey-disks }[ disk-ids="disk_ids" ]
Required arguments
None
Optional arguments
disk-ids
IDs of the Physical Disks
• Delete ca certificate with the specified name from the cluster : remove-ca-
certificate | rm-ca-certificate
datastore: Datastore
Description An NFS Datastore
Alias
Operations
• Create a new NFS datastore on the Physical Hosts using the Storage
Container (ESX only) : create | add
• Delete the NFS datastore on the Physical Hosts : delete | remove | rm
• List NFS Datastores : list | ls
List Physical Disks that are not assigned to any Storage Pool
ncli> disk { list-free | ls-free }
Required arguments
None
events: Event
Description An Event
Alias event
Operations
• Acknowledge Events : acknowledge | ack
• List history of Events : history
• List of unacknowledged Events : list | ls
Acknowledge Events
ncli> events { acknowledge | ack } ids="ids"
Required arguments
ids
A comma-separated list of ids of the Events
Add a Share
ncli> file-server { add-share } uuid="uuid" name="name" [ description="description"
][ enable-windows-previous-version="{ true | false }" ][ share-type="share_type" ][
share-size-gib="share_size_gib" ][ default-quota-limit-gib="default_quota_limit_gib"
][ quota-enforcement-type="quota_enforcement_type" ][ send-quota-
notifications-to-user="send_quota_notifications_to_user" ][ enable-access-based-
enumeration="{ true | false }" ][ protocol-type="protocol_type" ][ secondary-protocol-
type="secondary_protocol_type" ][ enable-concurrent-reads="{ true | false }" ][ enable-
case-sensitive-namespace="{ true | false }" ][ enable-symlink-creation="{ true |
false }" ][ enable-simultaneous-access="{ true | false }" ][ share-path="share_path"
][ parent-share-uuid="parent_share_uuid" ][ share-auth-type="share_auth_type"
][ default-share-access-type="default_share_access_type" ][ client-with-
read-write-access="client_with_read_write_access" ][ client-with-read-only-
access="client_with_read_only_access" ][ client-with-no-access="client_with_no_access"
][ anonymous-uid="anonymous_uid" ][ anonymous-gid="anonymous_gid" ][ squash-
type="squash_type" ]
Required arguments
uuid
uuid of the File Server
name
Name of the Share
Optional arguments
description
Description of the Share
enable-windows-previous-version
Enable self service restore flag
share-type
Type of Share. Homes or General (General Purpose)
share-size-gib
Share size in GiB
default-quota-limit-gib
Default quota limit in GiB (Quota applies to all users of the share)
quota-enforcement-type
Quota enforcement type (Hard or Soft)
send-quota-notifications-to-user
Send quota notifications to user
enable-access-based-enumeration
Enable access based enumeration flag
protocol-type
Primary protocol type (SMB or NFS)
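For example, to add a 100 GiB general-purpose SMB share with a soft 10 GiB per-user quota (the UUID and names shown are placeholders):
ncli> file-server add-share uuid="2b6a1f00-0000-0000-0000-000000000000" name="share1" share-type="General" share-size-gib="100" default-quota-limit-gib="10" quota-enforcement-type="Soft" protocol-type="SMB"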
Add a user
ncli> file-server { add-user } uuid="uuid" user="user" [
password="password" ]
Required arguments
uuid
Uuid of the file server
user
File server user name.
Optional arguments
password
The password for the above file server user.
Join and unjoin the File Server to the Windows AD domain or bind and unbind from LDAP.
ncli> file-server { configure-name-services }
uuid="uuid" [ windows-ad-username="windows_ad_username" ][ organizational-
unit="organizational_unit" ][ windows-ad-password="windows_ad_password" ][
overwrite="overwrite" ][ add-user-as-afs-admin="add_user_as_afs_admin" ][ rfc-
enabled="rfc_enabled" ][ use-ad-credential-for-dns="use_ad_credential_for_dns"
][ preferred-domain-controller="preferred_domain_controller" ][ ad-protocol-
type="ad_protocol_type" ][ ldap-protocol-type="ldap_protocol_type" ][ local-
protocol-type="local_protocol_type" ][ nfsv4-domain="nfsv4_domain" ][ ldap-server-
Delete a Share
ncli> file-server { delete-share } uuid="uuid" share-uuid="share_uuid" [
force="force" ]
Required arguments
uuid
uuid of the File Server
share-uuid
uuid of the FileServer share
Optional arguments
force
Force delete the Share
Delete a user
ncli> file-server { delete-user } uuid="uuid" user="user"
Required arguments
uuid
Uuid of the file server that user is associated with
user
Name of the user
List users
ncli> file-server { list-user } uuid="uuid"
Required arguments
uuid
uuid of the file server
Update a Share
ncli> file-server { update-share | edit-share } uuid="uuid" share-
uuid="share_uuid" [ name="name" ][ enable-windows-previous-version="{ true | false }" ][
description="description" ][ share-size-gib="share_size_gib" ][ default-quota-limit-
gib="default_quota_limit_gib" ][ quota-enforcement-type="quota_enforcement_type" ][ send-
quota-notifications-to-user="send_quota_notifications_to_user" ][ enable-access-based-
enumeration="{ true | false }" ][ protocol-type="protocol_type" ][ secondary-protocol-
type="secondary_protocol_type" ][ enable-concurrent-reads="{ true | false }" ][ enable-
case-sensitive-namespace="{ true | false }" ][ enable-symlink-creation="{ true | false }"
][ enable-simultaneous-access="{ true | false }" ][ share-auth-type="share_auth_type"
][ default-share-access-type="default_share_access_type" ][ client-with-
read-write-access="client_with_read_write_access" ][ client-with-read-only-
access="client_with_read_only_access" ][ client-with-no-access="client_with_no_access"
][ anonymous-uid="anonymous_uid" ][ anonymous-gid="anonymous_gid" ][ squash-
type="squash_type" ]
Required arguments
uuid
uuid of the File Server
share-uuid
Uuid of the Share
Optional arguments
name
Name of the Share
Update a user
ncli> file-server { update-user } uuid="uuid" [ user="user" ][ password="password"
]
Required arguments
uuid
Uuid of the file server that user is associated with
Optional arguments
user
File server user name.
password
The password for the above file server user.
Operations
• Edit a Health Check : edit | update
• List Health Checks : list | ls
• Reset the default location used for storing the virtual machine
configuration files and the virtual hard disk files to its factory setting : reset-
default-vm-vhd-location
• Set the default location to be used for storing the virtual machine
configuration files and the virtual hard disk files : set-default-vm-vhd-
location
Add the configured node to the cluster. For a compute-only node, the CVM IP corresponds to
the host IP
ncli> host { add-node } node-uuid="node_uuid" [ server-certificate-
list="server_certificate_list" ]
Required arguments
node-uuid
UUID of the new node
Optional arguments
server-certificate-list
Comma-separated list of the key management server uuid and corresponding
certificate file path. List should be of format <server_uuid:path_to_certificate>
Configure discovered node with IP addresses (Hypervisor, CVM and IPMI addresses)
ncli> host { configure-node } node-uuid="node_uuid" [ cvm-ip="cvm_ip" ][
hypervisor-ip="hypervisor_ip" ][ ipmi-ip="ipmi_ip" ][ ipmi-netmask="ipmi_netmask" ][
ipmi-gateway="ipmi_gateway" ]
Generates and downloads the CSR from a discovered node, based on certificate information
from the cluster
ncli> host { generate-csr-for-discovered-node } cvm-ip="cvm_ip" file-
path="file_path"
Required arguments
cvm-ip
IPv6 address of the controller VM of discovered node
file-path
Path where csr from the discovered node needs to be downloaded
Join one or more hosts to a domain. This operation is only valid for hosts running Hyper-V.
ncli> host { join-domain } domain="domain" logon-name="logon_name"
restart="restart" [ name-server-ip="name_server_ip" ][ host-name-
prefix="host_name_prefix" ][ password="password" ][ host-ids="host_ids" ][ host-
names="host_names" ][ ou-path="ou_path" ][ cps-prefix="cps_prefix" ]
Required arguments
domain
Full name of the domain
logon-name
Logon name (domain\username) of a domain user/administrator account that
has privileges to perform the operation
Reset the default location used for storing the virtual machine configuration files and the
virtual hard disk files to its factory setting. This operation is only valid for hosts running
Hyper-V.
ncli> host { reset-default-vm-vhd-location } host-ids="host_ids"
Required arguments
host-ids
A comma-separated list of the ids of the Physical Hosts
Set the default location to be used for storing the virtual machine configuration files and the
virtual hard disk files. This operation is only valid for hosts running Hyper-V.
ncli> host { set-default-vm-vhd-location } ctr-for-vm-config="ctr_for_vm_config"
ctr-for-vhd-files="ctr_for_vhd_files" [ host-ids="host_ids" ]
Required arguments
ctr-for-vm-config
Name of the Storage Container to be used for storing VM configuration files.
ctr-for-vhd-files
Name of the Storage Container to be used for storing virtual hard disk files.
Optional arguments
host-ids
A comma-separated list of the ids of the Physical Hosts
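For example, to store VM configuration files and virtual hard disk files in separate Storage Containers on two Hyper-V hosts (the container names and host ids shown are placeholders):
ncli> host set-default-vm-vhd-location ctr-for-vm-config="vm-config-ctr" ctr-for-vhd-files="vhd-ctr" host-ids="11,12"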
license: License
Description License for a Nutanix cluster
Alias
Operations
• Apply a license file to the cluster : apply-license
• Download cluster info as a file : download-cluster-info
• Get cluster info from the cluster : generate-cluster-info
• Read allowances for features as listed in the license : get-allowances
• Read license file from the cluster : get-license
Operations
• Add a new Management Server : add
• Add a new Management Server : edit | update
• List Management Servers : list | ls
• Returns a list of information for management servers which are used for
managing the cluster : list-management-server-info
• Create and register a management server extension for Nutanix : register
• Delete a Management Server : remove | rm
• Unregister the management server extension for Nutanix : unregister
Returns a list of information for management servers which are used for managing the cluster.
ncli> managementserver { list-management-server-info }
Required arguments
None
multicluster: Multicluster
Description A Nutanix Management Console to manage multiple clusters
Alias
Operations
• Add to multicluster : add-to-multicluster
• Get cluster state : get-cluster-state
Add to multicluster
ncli> multicluster { add-to-multicluster } external-ip-address-or-svm-
ips="external_ip_address_or_svm_ips" username="username" password="password"
Required arguments
external-ip-address-or-svm-ips
External IP address or list of SVM IP addresses
username
Username
password
Password
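For example, to register this cluster with a multicluster management console (the IP address and credentials shown are placeholders):
ncli> multicluster add-to-multicluster external-ip-address-or-svm-ips="10.1.1.50" username="admin" password="secret"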
network: Network
Description Network specific commands
Alias net
Operations
• Delete Nutanix Guest Tools : delete
• Disable Nutanix Guest Tools : disable
• Disable Applications in Nutanix Guest Tools : disable-applications
• Enable Nutanix Guest Tools : enable
• Enable Applications in Nutanix Guest Tools : enable-applications
• Get Nutanix Guest Tools : get
• List Nutanix Guest Tools : list
• List applications supported by Nutanix Guest Tools : list-applications
• Mount Nutanix Guest Tools : mount
• Unmount Nutanix Guest Tools : unmount
Create a new out of band snapshot schedule in a Protection domain to take a snapshot at a
specified time
ncli> protection-domain { add-one-time-snapshot | create-one-time-snapshot
} name="name" [ snap-time="snap_time" ][ remote-sites="remote_sites" ][ retention-
time="retention_time" ][ app-consistent-snapshots="app_consistent_snapshots" ]
Required arguments
name
Name of the Protection domain
Optional arguments
snap-time
Specify time in format MM/dd/yyyy [HH:mm:ss [z]] at which snapshot is to be
taken. If not specified, snapshot will be taken immediately
remote-sites
Comma-separated list of Remote Site to which snapshots are replicated. If not
specified, remote replication is not performed
retention-time
Number of seconds to retain the snapshot. Aged snapshots will be garbage
collected. By default, snapshot is retained forever
app-consistent-snapshots
Whether Consistency group created for Virtual Machine performs application
consistent snapshots. Such special Consistency group can contain one and only
one Virtual Machine
Default: false
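The snap-time argument above must follow the MM/dd/yyyy [HH:mm:ss [z]] format. A minimal sketch of assembling this invocation as a string, with the date format checked up front (build_one_time_snapshot_cmd is a hypothetical helper, not part of ncli; the optional trailing time zone is not validated here):

```python
from datetime import datetime

def build_one_time_snapshot_cmd(name, snap_time=None, remote_sites=None,
                                retention_time=None, app_consistent=False):
    """Assemble an ncli add-one-time-snapshot invocation (string only)."""
    if snap_time is not None:
        # Accept either a bare date or a date with a time component.
        for fmt in ("%m/%d/%Y %H:%M:%S", "%m/%d/%Y"):
            try:
                datetime.strptime(snap_time, fmt)
                break
            except ValueError:
                continue
        else:
            raise ValueError("snap-time must be MM/dd/yyyy [HH:mm:ss]")
    parts = ['ncli protection-domain add-one-time-snapshot', f'name="{name}"']
    if snap_time:
        parts.append(f'snap-time="{snap_time}"')
    if remote_sites:
        # remote-sites takes a comma-separated list of Remote Site names.
        parts.append('remote-sites="%s"' % ",".join(remote_sites))
    if retention_time is not None:
        parts.append(f'retention-time="{retention_time}"')
    if app_consistent:
        parts.append('app-consistent-snapshots="true"')
    return " ".join(parts)
```

Omitting snap-time, as documented, means the snapshot is taken immediately.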
Mark Protection domain as inactive and failover to the specified Remote Site
ncli> protection-domain { migrate } name="name" remote-site="remote_site" [ skip-
vm-mobility-check="{ true | false }" ]
Required arguments
name
Name of the Protection domain
remote-site
Remote Site to be used for planned failover
Optional arguments
skip-vm-mobility-check
Skip the vm mobility check while migrating a Protection domain
Mark a Protection domain for removal. Protection domain will be removed from the appliance
when all outstanding operations on it are cancelled
ncli> protection-domain { remove | rm } name="name" [ skip-remote-check="{ true |
false }" ]
Required arguments
name
Name of the Protection domain
Optional arguments
skip-remote-check
Skip checking the remote Protection domain
Default: false
Mark Virtual Machines and NFS files for removal from a given Protection domain. They will be
removed when all outstanding operations on them are completed/cancelled
ncli> protection-domain { unprotect } name="name" [ files="files" ][ vm-
names="vm_names" ][ vm-ids="vm_ids" ][ volume-group-uuids="volume_group_uuids" ]
Required arguments
name
Name of the Protection domain
Optional arguments
files
Comma-separated list of NFS files to be removed from Protection domain
vm-names
Comma-separated list of Virtual Machine names to be removed from Protection
domain
vm-ids
Comma-separated list of Virtual Machine IDs to be removed from Protection
domain
volume-group-uuids
UUIDs of the Volume Groups
Operations
• Edit a Rackable unit : edit | update
• List Rackable unit : list | ls
• Remove a Rackable unit : remove | rm
Operations
• Add bandwidth policy : add-bandwidth-schedule
• Add a network mapping : add-network-mapping
• Create a new Remote Site : create | add
• Edit a Remote Site : edit | update
• List Remote Sites : list | ls
• List schedules for bandwidth throttling : list-bandwidth-schedules
• List network mapping(s) corresponding to a remote site : list-network-mapping
Mark a Remote Site for removal. Site will be removed from the appliance when all outstanding
operations that are using the remote site are cancelled
ncli> remote-site { remove | rm } name="name"
Required arguments
name
Name of the Remote Site
Disable Kerberos security services in the SMB server. This operation is only valid for clusters
having hosts running Hyper-V.
ncli> smb-server { disable-kerberos } logon-name="logon_name" [
password="password" ]
Required arguments
logon-name
Logon name (domain\username) of a domain user/administrator account that
has privileges to perform the operation
Optional arguments
password
Password for the account specified by the logon account name
Enable Kerberos security services in the SMB server. This operation is only valid for clusters
having hosts running Hyper-V.
ncli> smb-server { enable-kerberos } logon-name="logon_name" [
password="password" ]
Required arguments
logon-name
Logon name (domain\username) of a domain user/administrator account that
has privileges to perform the operation
Get the status of Kerberos for the SMB server. This operation is only valid for clusters having
hosts running Hyper-V.
ncli> smb-server { get-kerberos-status }
Required arguments
None
snapshot: Snapshot
Description Snapshot of a Virtual Disk
Alias snap
Operations
• Create a (fast) clone based on a Snapshot : clone
• Create a new Snapshot of a Virtual Disk or a NFS file : create | add
• List Snapshots : list | ls
• Get stats data for Snapshots : list-stats | ls-stats
• Delete a Snapshot : remove | rm
List Snapshots
ncli> snapshot { list | ls }[ name="name" ][ vdisk-name="vdisk_name" ]
Required arguments
None
Optional arguments
name
Name of the Snapshot
vdisk-name
Name of the corresponding Virtual Disk
Delete a Snapshot
ncli> snapshot { remove | rm } name="name"
Required arguments
name
Name of the Snapshot
• List all the configured trap sinks along with their user information : list-
traps | ls-traps
• Lists all the snmp users along with their properties like authentication and
privacy information : list-users | ls-users
• Remove a transport from the list of snmp transports : remove-transport |
delete-transport
Add a transport to the list of snmp transports. Each transport is a protocol:port pair
ncli> snmp { add-transport } protocol="protocol" port="port"
Required arguments
protocol
Protocol for the snmp agent or trap sink. Currently supported protocols are UDP,
TCP and UDP_6
port
Port number on which an snmp agent listens for requests or on which a trap sink
is waiting for traps
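Since each transport is a protocol:port pair with a fixed set of supported protocols, a small validation sketch can make the rule concrete (parse_transport is a hypothetical helper, not part of ncli):

```python
SUPPORTED_PROTOCOLS = {"UDP", "TCP", "UDP_6"}  # per the add-transport docs

def parse_transport(transport):
    """Split a protocol:port pair and check it against the documented rules."""
    protocol, sep, port = transport.partition(":")
    if not sep:
        raise ValueError("transport must be a protocol:port pair")
    protocol = protocol.upper()
    if protocol not in SUPPORTED_PROTOCOLS:
        raise ValueError(f"unsupported protocol: {protocol}")
    port = int(port)
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return protocol, port
```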
Add a trap sink to the list of trap sinks. Each trap sink is a combination of trap sink address,
username and authentication information
ncli> snmp { add-trap } address="address" [ username="username" ][ port="port"
][ protocol="protocol" ][ version="version" ][ community="community" ][ engine-
id="engine_id" ][ inform="inform" ]
Required arguments
address
Address of an snmp trap sink. This should be an IP address or FQDN
Add an snmp user along with its authentication and privacy keys
ncli> snmp { add-user } username="username" auth-key="auth_key" auth-
type="auth_type" [ priv-key="priv_key" ][ priv-type="priv_type" ]
Required arguments
username
Identity of an snmp user. It is required for version snmpv3. It is not used for
version snmpv2c.
auth-key
Authentication key for an snmp user
auth-type
Authentication type for snmp user. Can be SHA
Optional arguments
priv-key
Encryption key for an snmp user
priv-type
Encryption type for an snmp user. Can be AES
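The add-user entry lists SHA as the authentication type and AES as the privacy type, with the privacy pair optional. A minimal argument check expressing those constraints (validate_snmp_user is illustrative only, and it assumes SHA and AES are the only accepted values, as the reference implies):

```python
def validate_snmp_user(username, auth_key, auth_type,
                       priv_key=None, priv_type=None):
    """Check an add-user argument set against the documented constraints."""
    if not username:
        raise ValueError("username is required for snmpv3")
    if auth_type != "SHA":
        raise ValueError("auth-type: only SHA is listed as supported")
    # priv-key and priv-type are optional, but they go together.
    if (priv_key is None) != (priv_type is None):
        raise ValueError("priv-key and priv-type must be supplied together")
    if priv_type is not None and priv_type != "AES":
        raise ValueError("priv-type: only AES is listed as supported")
    return True
```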
Edit one of the trap sinks from the list of trap sinks. Editable properties are username,
authentication and privacy settings and protocol
ncli> snmp { edit-trap | update-trap } address="address" [ port="port" ][
protocol="protocol" ][ version="version" ][ community="community" ][ engine-
id="engine_id" ][ inform="inform" ][ username="username" ]
List all the configured trap sinks along with their user information.
ncli> snmp { list-traps | ls-traps }[ address="address" ]
Required arguments
None
Optional arguments
address
Address of an snmp trap sink. This should be an IP address or FQDN
Lists all the snmp users along with their properties like authentication and privacy information
ncli> snmp { list-users | ls-users }[ username="username" ]
Required arguments
None
Optional arguments
username
Identity of an snmp user. It is required for version snmpv3. It is not used for
version snmpv2c.
software: Software
Description NOS Software Release
Alias
Operations
• Toggle automatic download of a Software : automatic-download
• Download a Software : download
• List Software : list | ls
• Pause Downloading / Uploading a Software : pause
• Delete a Software : remove | rm | delete
• Upload a Software : upload
Download a Software
ncli> software { download } name="name" software-type="software_type"
Required arguments
name
Name of the software
software-type
Type of the software ( NOS | HYPERVISOR | FIRMWARE_DISK | NCC |
FILE_SERVER | PRISM_CENTRAL_DEPLOY)
List Software
ncli> software { list | ls }[ name="name" ][ software-type="software_type" ]
Required arguments
None
Optional arguments
name
Name of the software
Delete a Software
ncli> software { remove | rm | delete } name="name" software-type="software_type"
Required arguments
name
Name of the software
software-type
Type of the software ( NOS | HYPERVISOR | FIRMWARE_DISK | NCC |
FILE_SERVER | PRISM_CENTRAL_DEPLOY)
Upload a Software
ncli> software { upload } file-path="file_path" software-type="software_type" [
hypervisor-type="hypervisor_type" ][ meta-file-path="meta_file_path" ]
Required arguments
file-path
Path to the software to be uploaded
software-type
Type of the software ( NOS | HYPERVISOR | FIRMWARE_DISK | NCC |
FILE_SERVER | PRISM_CENTRAL_DEPLOY)
Optional arguments
hypervisor-type
Type of the Hypervisor
meta-file-path
Path to the metadata file of the software to be uploaded
Generates an SSL Certificate with a cipher strength of 2048 bits and replaces the existing certificate
ncli> ssl-certificate { generate }
Required arguments
None
Import SSL Certificate, key and CA certificate or chain file. This import replaces the existing
certificate
ncli> ssl-certificate { import } certificate-path="certificate_path" cacertificate-
path="cacertificate_path" key-path="key_path" key-type="key_type"
Required arguments
certificate-path
Path of the SSL certificate
cacertificate-path
Path of the CA certificate or chain file
key-path
Path of the private key
key-type
Type of Private key. Must be either RSA_2048 or ECDSA_256 or ECDSA_384 or
ECDSA_521
Operations
• Create a new Storage Pool : create | add
• Edit a Storage Pool : edit | update
• List Storage Pools : list | ls
• Get stats data for Storage Pools : list-stats | ls-stats
Operations
• List the (global) default I/O priority order of Storage Tiers : get-default-io-
priority-order | get-def-io-pri
task: Tasks
Description A Task
Alias
Operations
• Inspect Task : get
• List all Tasks : list | ls
• Poll Task to completion : wait-for-task
Inspect Task
ncli> task { get } taskid="taskid" [ include-entity-names="{ true | false }" ]
Required arguments
taskid
Id of the task
Optional arguments
include-entity-names
Include entity names
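The wait-for-task operation above polls a task to completion. A generic polling loop of the same shape (the wait_for_task helper and the terminal state names are hypothetical, not the ncli implementation):

```python
import time

def wait_for_task(get_status, timeout_secs=600, poll_interval_secs=5,
                  clock=time.monotonic, sleep=time.sleep):
    """Poll get_status() until it returns a terminal state or the timeout expires."""
    TERMINAL = {"kSucceeded", "kFailed", "kAborted"}  # assumed state names
    deadline = clock() + timeout_secs
    while True:
        status = get_status()
        if status in TERMINAL:
            return status
        if clock() >= deadline:
            raise TimeoutError(f"task still {status} after {timeout_secs}s")
        sleep(poll_interval_secs)
```

Injecting clock and sleep keeps the loop testable without real waiting.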
user: User
Description A User
Alias
Delete a User
ncli> user { delete | remove | rm } user-name="user_name"
Required arguments
user-name
User name of the user
Disable a User
ncli> user { disable } user-name="user_name"
Required arguments
user-name
User name of the user
Edit a User
ncli> user { edit | update } user-name="user_name" [ first-name="first_name" ][
last-name="last_name" ][ middle-initial="middle_initial" ][ email-id="email_id" ]
Required arguments
user-name
User name of the user
Optional arguments
first-name
First name of the user
last-name
Last name of the user
middle-initial
Middle Initial of the user
email-id
Email address of the user
Enable a User
ncli> user { enable } user-name="user_name"
Required arguments
user-name
User name of the user
Get the IP Addresses and browser details of a user who is currently logged in
ncli> user { get-logged-in-user } username="username"
Get a list of all users who are currently logged in to the system along with their IP Addresses
and browser details
ncli> user { get-logged-in-users }
Required arguments
None
List Users
ncli> user { list | ls }[ user-name="user_name" ]
Required arguments
None
Optional arguments
user-name
User name of the user
List Snapshots
ncli> vdisk { list-snapshots | ls-snaps }[ name="name" ]
Required arguments
None
Optional arguments
name
Name of the Virtual Disk or Snapshot
vstore: VStore
Description A file namespace in a Storage Container
Alias
List VStores
ncli> vstore { list | ls }[ id="id" ][ name="name" ]
Required arguments
None
Optional arguments
id
ID of a VStore
name
Name of a VStore
Protect a VStore. Files in a protected VStore are replicated to a Remote Site at a defined
frequency and these protected files can be recovered in the event of a disaster
ncli> vstore { protect }[ id="id" ][ name="name" ]
Required arguments
None
Optional arguments
id
ID of a VStore
name
Name of a VStore
Unprotect a VStore
ncli> vstore { unprotect }[ id="id" ][ name="name" ]
Required arguments
None
Optional arguments
id
ID of a VStore
name
Name of a VStore
vzone: vZone
Description A vZone
Alias
List vZones
ncli> vzone { list | ls }[ name="name" ]
Required arguments
None
Optional arguments
name
Name of the vZone
Delete a vZone
ncli> vzone { remove | rm } name="name"
• To display all user name and password options for diagnostics.py, type /home/nutanix/
diagnostics/diagnostics.py --help | egrep -A1 'password|user'
--hypervisor_password: Default hypervisor password.
(default: 'nutanix/4u')
--ipmi_password: The password to use when logging into the local IPMI device.
(default: 'ADMIN')
--ipmi_username: The username to use when logging into the local IPMI device.
(default: 'ADMIN')
• You can find all user name and password options for cluster, genesis, and setup_hyperv.py
by also typing --help | egrep -A1 'password|user' as part of the command. For example,
setup_hyperv.py --help | egrep -A1 'password|user'
cluster
Usage
Usage: /usr/local/nutanix/cluster/bin/cluster [flags] [command]
commands:
add_public_key
convert_cluster
create
destroy
disable_auto_install
enable_auto_install
firmware_upgrade
foundation_upgrade
host_upgrade
/usr/local/nutanix/cluster/bin/cluster
--add_dependencies
Include Dependencies.
Default: false
--backplane_netmask
Backplane netmask
--backplane_network
Backplane network config
Default: false
--backplane_subnet
Backplane subnet
--backplane_vlan
Backplane VLAN id
Default: -1
--block_aware
Set to True to enable block awareness.
Default: false
--bundle
Bundle for upgrading host in cluster.
--clean_debug_data
If 'clean_debug_data' is True, then when we destroy a cluster we will also remove
the logs, binary logs, cached packages, and core dumps on each node.
Default: false
--cluster_external_ip
Cluster ip to manage the entire cluster.
--cluster_function_list
List of functions of the cluster (use with create). Accepted functions are
['minerva', 'multicluster', 'two_node_cluster', 'jump_box_vm', 'ags_cluster',
'one_node_cluster', 'xi_vm', 'iam_cluster', 'ndfs', 'extension_store_vm',
'witness_vm', 'cloud_data_gateway']
Default: ndfs
cluster.ce_helper
--ce_version_map_znode_path
Zookeeper node containing the CE version mapping.
Default: /appliance/logical/community_edition/version_map
cluster.consts
--allow_hetero_sed_node
Flag that can be set by an SRE to let a node have a mix of SED and non-SED disks.
Default: true
--app_deployment_progress_zknode
Zknode to use for deployment state machine
Default: /appliance/logical/app_deployment_progress
--app_deployment_proto_zknode
Zknode to use for deployment state machine
Default: /appliance/logical/app_deployment_info
--authorized_certs_file
Path to file containing list of permitted SSL certs.
Default: /home/nutanix/ssh_keys/AuthorizedCerts.txt
--auxiliary_config_json_path
Path to the auxiliary_config.json file
Default: /etc/nutanix/auxiliary_config.json
--build_last_commit_date_path
Path to the file that contains the local release version's last commit date.
Default: /etc/nutanix/build_last_commit_date
--cassandra_health_znode
Zookeeper node where each cassandra creates an ephemeral node indicating it is
currently available.
Default: /appliance/logical/health-monitor/cassandra
--cluster_disabled_services
Zookeeper node where a service profile is represented as the set of services to
disable.
Default: /appliance/logical/cluster_disabled_services
--command_timeout_secs
Number of seconds to spend retrying an RPC request.
Default: 180
--compute_only_enabled
Boolean signifying CO feature support in current NOS
Default: true
--convert_cluster_zknode
Holds information about cluster conversion operations and current status for
each node.
cluster.container.docker.utils
--default_volume_plugin_name
Name of default docker nutanix volume plugin
Default: pc/nvp
--default_volume_plugin_type
Default docker volume plugin type
Default: default
--docker_systemd_service
Name of the systemd docker service
Default: docker-latest
--docker_volume_plugin_binary_path
Path to volume plugin install script
Default: /home/nutanix/bin/create_plugin_from_tar.sh
--docker_volume_plugin_image_path
Path to docker volume plugin image
Default: /usr/local/nutanix/volume-plugin/dvp.tar.gz
--volume_plugin_install_timeout_secs
Timeout in secs for volume plugin installation
Default: 60
--volume_plugin_version_znode
Zookeeper node that tracks the docker volume plugin version
Default: /appliance/logical/genesis/volume_plugin
cluster.deployment.deployment_utils
--default_password_reset_timeout
Timeout in seconds for executing the password reset script on the PC VM.
cluster.disk_flags
--clean_disk_log_path
Path to the logs from the clean_disks script.
Default: /home/nutanix/data/logs/clean_disks.log
--clean_disk_script_path
Path to the clean_disks script.
Default: /home/nutanix/cluster/bin/clean_disks
--disk_partition_margin
Limit for the number of bytes we will allow to be unpartitioned on a disk.
Default: 2147483648
--disk_size_threshold_percent
Percentage of available disk space to be allocated to stargate
Default: 95
--enable_all_ssds_for_oplog
DEPRECATED: Use all ssds attached to this node for oplog storage.
Default: true
--enable_fio_realtime_scheduling
Use realtime scheduling policy for fusion io driver.
Default: false
--fio_realtime_priority
Priority for fusion io driver, when realtime scheduling policy is being used.
Default: 10
--format_fusion_percent
The percentage of total capacity of fusion-io drives that should be formatted as
usable
Default: 60
--max_ssds_for_oplog
Maximum number of ssds used for oplog per node. If value is -1, use all ssds
available. If only_select_nvme_disks_for_oplog gflag is true and NVMe disks are
present, only NVMe disks are used for selecting oplog disks.
Default: 8
--metadata_maxsize_GB
Maximum size of metadata in GB
Default: 30
--only_select_nvme_disks_for_oplog
If true and NVMe disks are present, only use NVMe disks for selecting oplog disks.
Default: true
cluster.esx_upgrade_helper
--esx_vib_extraction_dir
Directory where ESXi vibs are extracted on CVM before copying to host.
Default: /home/nutanix/tmp/.esx_upgrade
--foundation_esx_vib_path
Path in foundation package where ESX VIBs are stored.
Default: /home/nutanix/foundation/lib/driver/esx/vibs
--poweroff_uvms
Power off UVMs during hypervisor upgrade if vMotion is not enabled or vCenter
is not configured for the cluster.
Default: false
--update_foundation_vibs
Update VIBS which are present in foundation during ESX hypervisor upgrade.
Default: true
cluster.firewall.consts
--cluster_function_temp_file
Path to temporary file which has the cluster function
Default: /home/nutanix/tmp/cluster_function
--consider_salt_framework
Whether to consider salt framework or not
Default: true
--execute_concurrent_salt_call
Indicate if salt call should be executed concurrently.
Default: true
cluster.genesis.breakfix.host_bootdisk_graceful
--clone_bootdisk_default_timeout
The default timeout for completion of cloning of bootdisk.
Default: 28800
--restore_bootdisk_default_timeout
The default timeout for completion of restore of bootdisk.
Default: 14400
--wait_for_phoenix_boot_timeout
The maximum amount of time for which the state machine waits after cloning for
the node, to be booted in phoenix environment.
Default: 36000
cluster.genesis.breakfix.host_bootdisk_utils
--host_boot_timeout
The maximum amount of time for which the state machine waits for host to be
up.
Default: 36000
cluster.genesis.breakfix.ssd_breakfix
--ssd_repair_copy_svmrescue_timeout
Timeout for copying svmrescue.iso from CVM to host
Default: 600
cluster.genesis.breakfix.ssd_breakfix_esx_helper
--svm_regex
Regular expression used to find the SVM vmx name.
Default: ServiceVM
cluster.genesis.compute_only.client
--configured_marker_file
Path to the marker file containing cluster id if node is part of a cluster
Default: /root/configured
cluster.genesis.compute_only.consts
--factory_config_json_path_on_host
Path to factory_config.json on the CO host
Default: /root/factory_config.json
--hardware_config_json_path_on_host
Path to hardware_config.json on the CO host
Default: /root/hardware_config.json
cluster.genesis.convert_cluster.utils
--cluster_conversion_preserve_mac
Preserve MAC addresses of VM NICs in conversion
Default: true
--convert_cluster_blacklisted_vms
List of VM UUIDs which won't be converted during cluster conversion
Default: /appliance/logical/genesis/convert_cluster/blacklisted_vms
cluster.genesis.convert_cluster.vm_migration
--disable_vm_migration
Disable VM migration for the node. This is used for error injection and testing.
Default: false
cluster.genesis.expand_cluster.expand_cluster
--node_up_retries
Number of retries for node genesis rpcs to be up after reboot
Default: 40
cluster.genesis.expand_cluster.utils
--nos_packages_file
File containing packages present in the nos software
Default: install/nutanix-packages.json
--nos_tar_timeout_secs
Timeout in secs for tarring nos package
Default: 3600
cluster.genesis.la_jolla.la_jolla_utils
--nfs_buf_size
NFS buffer size
Default: 8388608
cluster.genesis.network_segmentation_helper
--disable_wait_time
Time to wait (in seconds), between removing network segmentation
configuration in zeus and removing the interface configuration on cvms
Default: 5
--ns_state_machine_timeout
The timeout for completion of network segmentation state machine.
Default: 600
--retry_count_zk_map_publish
Retry count for publishing new zk mapping.
Default: 3
--revert_ns_config_on_failure
Revert the network segmentation configuration in the case of a failure.
Default: true
cluster.genesis.node_manager
--auto_discovery_interval_secs
Number of seconds to sleep when local node can't join any discovered cluster.
Default: 5
--co_nodes_unconfigure_marker
Path to marker file to indicate that node has to unconfigure CO nodes as part of
unconfiguring itself. The marker file contains space-separated IPs
of the CO nodes to unconfigure
Default: /home/nutanix/.co_nodes_unconfigure
--download_staging_area
Directory where we will download directories from other SVMs.
Default: /home/nutanix/tmp
--firmware_disable_auto_upgrade_marker
Path to marker file to indicate that automatic firmware upgrade should not be
performed on this node.
Default: /home/nutanix/.firmware_disable_auto_upgrade
cluster.genesis.rdma_helper
--check_rdma_switch_config_script
Script to check RDMA interface and port config
Default: /usr/local/nutanix/cluster/bin/check_rdma_switch_config
--mellanox_tc_wrap
Path to the Mellanox's tc_wrap.py script
Default: /usr/local/nutanix/bin/tc_wrap.py
--rdma_nic_config_file
Path to json containing the mac of the nic to be used for rdma
Default: /etc/nutanix/nic_config.json
cluster.genesis.resource_management.rm_helper
--common_pool_map
Mapping of node with its common pool memory in kb
Default: /appliance/logical/genesis/common_pool_map
--common_pool_mem_for_low_mem_nodes_gb
Common pool memory reservation for nodes with CVM memory less than 20 GB
Default: 8
--default_common_pool_memory_in_gb
Stargate default common pool memory reservation
Default: 12
--memory_update_history
File containing history of memory update on node
Default: /home/nutanix/config/memory_update.history
--memory_update_resolution
Minimum amount of memory difference for update
Default: 2097152
--rolling_restart_memory_update_reason
Reason set in rolling restart for memory update
Default: cvm_memory_update
--target_memory_zknode
CVM target memory map zk node
cluster.genesis.resource_management.rm_prechecks
--cushion_memory_in_kb
Cushion Memory required in nodes before update
Default: 2097152
--delta_memory_for_nos_upgrades_kb
Amount of CVM memory to be increased during NOS upgrade
Default: 4194304
--host_memory_threshold_in_kb
Minimum host memory for memory update, set to 62 GB
Default: 65011712
--max_cvm_memory_upgrade_kb
Maximum allowed CVM memory for update during upgrade
Default: 31457280
cluster.genesis.resource_management.rm_tasks
--cvm_reconfig_component
Component for CVM reconfig
Default: kGenesis
--cvm_reconfig_operation
Operation for CVM reconfig
Default: kCvmreconfig
cluster.genesis.service_management.service_mgmt_utils
--core_services_managed
This flag will be used to force service mgmt to enable/disable core services.
Default: false
cluster.genesis_utils
--orion_config_path
Path to orion config
Default: /appliance/logical/orion/config
--svm_default_login
User name for logging into SVM.
Default: nutanix
--timeout_HA_route_verification
Timeout for setting HA route.
Default: 180
--timeout_zk_operation
Timeout for zk operation like write
Default: 120
cluster.hades.client
--hades_jsonrpc_url
URL of the JSON RPC handler on the Hades HTTP server.
Default: /jsonrpc
--hades_port
Port that Hades listens on.
Default: 2099
--hades_rpc_timeout_secs
Timeout for each Hades RPC.
Default: 30
cluster.hades.disk_diagnostics
--hades_retry_count
Default retry count.
Default: 5
--max_disk_offline_count
Maximum error count for disk after which disk is to be removed.
Default: 3
--max_disk_offline_timeout
Maximum time value where disk offline events are ignored.
Default: 3600
cluster.hades.disk_manager
--aws_cores_partition
Partition in which core files are stored on AWS.
Default: /dev/xvdb1
--boot_part_size
The size of a regular boot partition in 512-byte sectors.
Default: 20969472
--device_mapper_name
A name of the device mapper that is to be created in case striped devices are
discovered.
Default: dm0
--disk_unmount_retry_count
Number of times to retry unmounting the disk.
Default: 60
cluster.hades.raid_utils
--min_raid_sync_speed
Minimum raid sync speed
Default: 50000
cluster.host_upgrade_common
--host_poweroff_uvm_file
File containing list of UVM which are powered off for host upgrade.
Default: /home/nutanix/config/.host_poweroff_vm_list
--upgrade_delay
Seconds to wait before running upgrade script on an ESX host.
Default: 0
cluster.host_upgrade_helper
--host_disable_auto_upgrade_marker
Path to marker file to indicate that automatic host upgrade should not be
performed on this node.
Default: /home/nutanix/.host_disable_auto_upgrade
--hypervisor_upgrade_history_file
File path where hypervisor upgrade history is recorded.
Default: /home/nutanix/config/hypervisor_upgrade.history
--hypervisor_upgrade_info_znode
Location in a zookeeper where we keep the Hypervisor upgrade information.
Default: /appliance/logical/upgrade_info/hypervisor
cluster.hyperv_upgrade
--vmms_setting_info
Location where the vmms settings are stored
Default: /appliance/logical/genesis/vmms_setting_info
cluster.ipv4config
--end_linklocal_ip
End of the range of link local IP4 addresses.
Default: 169.254.254.255
--esx_cmd_timeout_secs
Default timeout for running a remote command on an ESX host.
Default: 120
--hyperv_cmd_timeout_secs
Default timeout for running a remote command on a Hyper-V host.
Default: 120
--ipmi_apply_config_retries
Number of times to try applying an IPMI IPv4 configuration before failing.
Default: 6
--kvm_cmd_timeout_secs
Default timeout for running a remote command on a KVM host.
Default: 120
--kvm_external_network_interface
Default name of the network device for KVM's external network.
Default: br0
--linklocal_netmask
Netmask for the range of link local IP4 addresses.
Default: 255.255.0.0
--start_linklocal_ip
Start of the range of link local IP4 addresses.
Default: 169.254.1.0
--xen_external_network_interface
Default name of the network device for Xen's external network.
Default: xapi1
cluster.kvm_upgrade_helper
--ahv_enter_maintenance_mode_retry_max_delay_secs
Max delay time for exponential backoff retries to enter maintenance mode
Default: 720
cluster.license_config
--license_config_file
Zookeeper path where license configuration is stored.
Default: /appliance/logical/license/configuration
--license_config_proto_file
License configuration file shipped with NOS.
Default: configuration.cfg
--license_dir
License feature set files directory shipped with NOS.
Default: /home/nutanix/serviceability/license
--license_public_key
Zookeeper path where the license public key is stored.
Default: /appliance/logical/license/public_key
--license_public_key_str
License public key string shipped with NOS.
Default: public_key.pub
--zookeeper_license_root_path
Zookeeper path where license information is stored.
Default: /appliance/logical/license
cluster.lite_upgrade.core.consts
--cluster_sync_path
Path to cluster_sync command on SVMs.
Default: /home/nutanix/cluster/bin/cluster_sync
--hades_path
Path to hades command on SVMs.
Default: /home/nutanix/cluster/bin/hades
cluster.lite_upgrade.interfaces.genesis_interface
--genesis_lu_intent_zknode
Lite upgrade zk node, set with target version.
Default: /appliance/logical/genesis/lite_upgrade/genesis_intent
cluster.multihome_utils
--multihome_zkpath
Marker to indicate if any node of cluster is multihome.
Default: /appliance/logical/genesis/multihome
cluster.ncc_upgrade_helper
--cluster_health_shutdown_max_retries
Max number of retries to shutdown cluster health.
Default: 5
--cluster_health_shutdown_threshold_ms
Time threshold (ms) between cluster health shutdown retries.
Default: 2000
--ncc_installation_path
Location where NCC is installed on a CVM.
Default: /home/nutanix/ncc
--ncc_num_nodes_to_upload
Number of nodes to upload the NCC installer directory to.
Default: 2
--ncc_uncompress_path
Location for uncompressing nutanix NCC binaries.
Default: /home/nutanix/data/ncc/installer
--ncc_upgrade_info_znode
Location in a zookeeper where we keep the Upgrade node information.
Default: /appliance/logical/upgrade_info/ncc
--ncc_upgrade_params_znode
Zookeeper location to store NCC upgrade parameters.
Default: /appliance/logical/upgrade_info/ncc_upgrade_params
--ncc_upgrade_status
Location in Zookeeper where we store upgrade status of nodes.
Default: /appliance/logical/genesis/ncc_upgrade_status
--ncc_upgrade_timeout_secs
Timeout in seconds for the NCC upgrade module.
cluster.preupgrade_checks
--arithmos_binary_path
Path to the arithmos binary.
Default: /home/nutanix/bin/arithmos
--cluster_external_state
Cluster external state zk path
Default: /appliance/physical/clusterexternalstate
--connected_cluster_path
Connected cluster zk path
Default: /appliance/physical/zeusconfig
--license_config_path
The path to the license configuration.
Default: /appliance/logical/license/configuration
--min_disk_space_for_upgrade
Minimum space (KB) required on /home/nutanix for upgrade to proceed.
Default: 3600000
--min_replication_factor
Minimum replication factor required per container.
Default: 2
--minimum_memory_for_prism_pro_in_kb
Minimum memory needed for prism pro features.
Default: 16777216
--mountsfile
Path to the mounts file in proc.
Default: /proc/mounts
--prism_gateway_port
The port on which prism gateway is running.
Default: 9440
--role_mapping_path
Role mapping zk path
Default: /appliance/logical/prism/rolemapping
--signature_file_extension
Extension of the signature file.
Default: .asc
cluster.preupgrade_checks_ncc_helper
--ncc_temp_location
Location to extract NCC.
Default: /home/nutanix/ncc_preupgrade
cluster.rsyslog_helper
--lock_dir
Default path for nutanix lock files.
Default: /home/nutanix/data/locks/
--log_dir
Default path for nutanix log files.
Default: /home/nutanix/data/logs
--module_level
Level of syslog used for sending module logs
Default: local0
--rsyslog_conf_file
Default Configuration file for Rsyslog service.
Default: /etc/rsyslog.d/rsyslog-nutanix.conf
--rsyslog_configure_queue
Boolean indicating whether to configure action queue for rsyslog.
Default: true
--rsyslog_queue_memory_size
Size of rsyslog remote logging action queue in bytes.
Default: 104857600
--rsyslog_rule_header
Nutanix specified rsyslog rules are appended only below this marker.
Default: # Nutanix remote server rules
--rsyslog_rule_header_end
Nutanix specified rsyslog rules are added above this marker.
Default: # Nutanix remote server rules end
--rsyslog_work_dir
Default path for syslog state files. This stores the state of rsyslog across restarts.
Default: /var/lib/rsyslog
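Many of the size defaults in this flag listing are raw byte counts (for example, --rsyslog_queue_memory_size defaults to 104857600 bytes, i.e. 100 MiB, and --disk_partition_margin to 2147483648 bytes, i.e. 2 GiB). A quick conversion helper makes them readable (human_size is illustrative only):

```python
def human_size(num_bytes):
    """Render a byte count using binary units, as the flag defaults imply."""
    for unit in ("B", "KiB", "MiB", "GiB"):
        if num_bytes < 1024 or unit == "GiB":
            return f"{num_bytes:g} {unit}"
        num_bytes /= 1024

# human_size(104857600)  -> "100 MiB"  (--rsyslog_queue_memory_size)
# human_size(2147483648) -> "2 GiB"    (--disk_partition_margin)
```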
cluster.service.cluster_config_service
--cluster_config_path
Path to the Cluster Config binary.
Default: /home/nutanix/bin/cluster_config
--cluster_config_server_rss_mem_limit
Maximum amount of resident memory Cluster Config may use on an SVM with an
8 GB memory configuration.
Default: 268435456
cluster.service.curator_service
--curator_config_json_file
JSON file with curator configuration
Default: curator_config.json
--curator_data_dir_size
Curator data directory size in MB (80 GB).
Default: 81920
--curator_data_dir_symlink
Path to curator data directory symlink.
Default: /home/nutanix/data/curator
--curator_data_disk_subdir
Path to curator subdirectory in data disk.
Default: curator
--curator_oom_score
If -1, OOM is disabled for this component. If in [1, 1000], it is taken as the OOM
score to be applied for this component.
Default: -1
--curator_path
Path to the curator binary.
Default: /home/nutanix/bin/curator
--curator_rss_mem_limit
Maximum amount of resident memory Curator may use on an SVM with an 8 GB
memory configuration.
Default: 536870912
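The --curator_oom_score semantics above (-1 disables OOM handling, a value in [1, 1000] is used as the score) can be sketched with a hypothetical helper; the function name and error handling are assumptions, not product code:

```python
def resolve_oom_score(flag_value: int):
    """Return None when OOM handling is disabled, else the score to apply."""
    if flag_value == -1:
        return None          # OOM disabled for this component (the default)
    if 1 <= flag_value <= 1000:
        return flag_value    # taken as the OOM score to apply
    raise ValueError(f"invalid oom score flag: {flag_value}")

print(resolve_oom_score(-1))   # None
print(resolve_oom_score(500))  # 500
```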
cluster.service.foundation_service
--foundation_path
Path to the foundation service script.
Default: /home/nutanix/foundation/bin/foundation
cluster.service.ha_service
--def_stargate_stable_interval
Default number of seconds a stargate has to be alive to be considered as stable
and healthy.
Default: 30
--hyperv_internal_switch_health_timeout
Timeout for how long we should wait before waking up the thread that monitors
internal switch health on HyperV.
Default: 30
--num_worker_threads
The number of worker threads to use for running tasks.
Default: 8
--old_stop_ha_zk_node
When this node is created, the old HA should not take any actions on the cluster.
Default: /appliance/logical/genesis/ha_stop
--stargate_aggressive_monitoring_secs
Default number of seconds a stargate is aggressively monitored after it is down.
Default: 3
--stargate_aggressive_monitoring_two_node_multiplier
Multiplier to the --stargate_aggressive_monitoring_secs for the two node cluster.
Default: 4
--stargate_exit_handler_aggressive_timeout_secs
Aggressive timeout for accessing the Stargate exit handler page during an
unplanned failover.
Default: 3
--stargate_exit_handler_timeout_secs
Default timeout for accessing the Stargate exit handler page.
Default: 10
--stargate_health_watch_timeout
Timeout for how long we should wait before waking up the thread that monitors
stargate health.
Default: 30
--stargate_initialization_secs
Number of seconds to wait for stargate to initialize.
Default: 30
--stop_ha_zk_node
When this node is created, HA 2.0 should not take any actions on the cluster.
cluster.service.kafka_service
--kafka_bootstrap_binary
Path to binary which will start kafka on PC in docker container
Default: /usr/local/nutanix/bin/bootstrap_kafka
--kafka_data_volume_mode
Volume mode used to provide the data volume to kafka. If host attached, a vmdisk is
attached to the PC VM. Else, the docker volume plugin is used to connect to the remote PE
cluster iSCSI endpoint.
Default: host_attached
--kafka_disk_size_factor_wrt_data_disk
Kafka disk size = data disk size / <this constant>
Default: 7
--kafka_disks_base_dir
Base directory of kafka data disks. Each disk is mounted with directory name
being disk serial on this base dir. Applicable to host attached disks
Default: /home/nutanix/data/kafka/disks/
--kafka_pc_deployed_disk_scsi_id
When PC is deployed, kafka disk is attached at this scsi id
Default: 4
--kafka_rss_mem_limit
Maximum amount of resident memory Kafka may use on an SVM with an 8 GB
memory configuration.
Default: 268435456
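The formula given for --kafka_disk_size_factor_wrt_data_disk (Kafka disk size = data disk size / factor, default factor 7) works out as follows; the helper function and integer-division behavior are illustrative assumptions:

```python
def kafka_disk_size_bytes(data_disk_size_bytes: int, factor: int = 7) -> int:
    """Kafka disk size = data disk size / <factor> (default factor: 7)."""
    return data_disk_size_bytes // factor

# e.g. a 1 TiB data disk with the default factor yields roughly 146 GiB
print(kafka_disk_size_bytes(1 << 40))  # 157073089682
```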
cluster.service.service_utils
--auto_set_cloud_gflags
If true, recommended gflags will be automatically set for cloud instances.
Default: true
--cgroup_subsystems
Default subsystems used for cgroup creation.
Default: cpu,cpuacct,memory,freezer,net_cls
cluster.sshkeys_helper
--authorized_keys_file
Path to file containing list of permitted RSA keys
Default: /home/nutanix/.ssh/authorized_keys2
--authorized_keys_file_admin
Path to file containing list of permitted RSA keys for admin.
Default: /home/admin/.ssh/authorized_keys2
--id_rsa_path
Nutanix default SSH key used for logging into SVM.
Default: /home/nutanix/.ssh/id_rsa
cluster.time_manager.time_manager_utils
--ntpdate_timeout_secs
Timeout to wait for ntp server to return a valid time.
Default: 10
cluster.two_node.cluster_manager
--minimum_pass_pings_to_allow_fails
Default minimum number of pings to peer node that must pass before we allow
some pings to fail (controlled by number_of_ping_fails_to_reset_pass_counter
flag) and not reset pass counter.
Default: 10
--minimum_uptime_mins_before_starting_pings
Minimum system uptime before starting pinging the peer node.
Default: 4
--minutes_of_successful_pings_for_transition
Default number of minutes for successful pings before considering peer node
stable and moving to kSwitchToTwoNode mode.
Default: 15
--node_health_check_ping_interval_secs
Default time (in seconds) for which health thread sleeps before checking on
another node.
Default: 2
cluster.two_node.state_transitions
--witness_state_history_size
Maximum length of witness state history to keep in zeus configuration.
Default: 10
cluster.upgrade_helper
--arithmos_rpc_timeout
Timeout for arithmos RPC retries.
Default: 180
--cluster_name_update_timeout
Default timeout for updating the cluster name in zeus.
Default: 5
--num_nodes_to_upload
Number of nodes to upload the installer directory to.
Default: 2
--nutanix_packages_json_basename
Base file name of the JSON file that contains the list of packages to expect in the
packages directory.
Default: nutanix-packages.json
--uncompress_buffer_ratio
Space buffer ratio to uncompress a compressed file.
Default: 0.2
--upgrade_genesis_restart
Location in Zookeeper where we store if genesis restart is required or not.
Default: /appliance/logical/upgrade_info/upgrade_genesis_restart
cluster.utils.device_mapper_utils
--stripe_size_sectors
Stripe chunk size in number of 512-byte sectors.
Default: 128
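The default --stripe_size_sectors of 128, at 512 bytes per sector, corresponds to a 64 KiB stripe chunk; a one-line conversion (illustrative helper, not product code):

```python
def stripe_chunk_bytes(sectors: int = 128, sector_bytes: int = 512) -> int:
    """Convert a stripe size in 512-byte sectors to bytes."""
    return sectors * sector_bytes

print(stripe_chunk_bytes())  # 65536 (64 KiB)
```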
cluster.utils.foundation_rest_client
--foundation_ipv6_interface
IPv6 interface corresponding to eth0.
Default: 2
cluster.utils.foundation_utils
--foundation_root_dir
Root directory for foundation in CVM
cluster.utils.hyperv_ha_utils
--default_internal_switch_monitor_interval
Default polling period for monitoring internal switch health
Default: 30
--default_ping_success_percent
Default percentage of success which is used to determine switch health
Default: 100
--default_total_ping_count
Default number of pings sent to determine switch health
Default: 10
cluster.utils.hypervisor_ha
--internal_nutanix_portgroup_name
Name of the internal Nutanix portgroup configured on ESX.
Default: vmk-svm-iscsi-pg
--terminate_connection_timeout_secs
Timeout in seconds for the NfsTerminateConnection RPC issued to stargate on a
failback.
Default: 15
cluster.utils.new_node_nos_upgrade
--stand_alone_upgrade_log
Log file for stand-alone upgrade.
Default: /home/nutanix/data/logs/stand_alone_upgrade.out
cluster.xen_upgrade_helper
--xen_maintenance_mode_retries
Seconds to delay before rebooting the Xen host.
Default: 5
--xen_reboot_delay
Seconds to delay before rebooting the Xen host.
Default: 30
--xen_uvm_no_migration_counter
Number of retries to wait for UVMs to migrate from the Xen host after it is put in
maintenance mode.
Default: 7
--xen_webserver_port
Port for the webserver that will serve files required during Xen upgrades.
Default: 8999
util.infrastructure.cluster
--infra_service_vm_config_json_path
Path to the service_vm_config.json file with the svm id.
Default: /home/nutanix/data/stargate-storage/service_vm_config.json
diagnostics.py
Usage
Usage: /home/nutanix/diagnostics/diagnostics.py [command]
commands:
cleanup
/home/nutanix/diagnostics/diagnostics.py
--add_vms_to_pd
Whether to add Diagnostic VMs to pd.
Default: false
--cluster_external_data_services_ip
Cluster external data services IP
--collect_cassandra_latency_stats
Collect cassandra latency stats for each test.
Default: true
--collect_fio_logs
Collects fio bw and iops logs
Default: false
--collect_illuminati_stats
Grab collect_perf stats to be uploaded to illuminati during every test.
Default: false
--collect_iostat_info
Reads and writes to disk
Default: false
--collect_sched_stats
Collect stats related to Linux scheduling in SVM
Default: false
--collect_stargate_stats
Grab snapshot of 2009 stargate stats page before and after every test.
Default: false
--collect_stargate_stats_interval
Interval in secs to collect stargate stats.
Default: 10
--collect_stargate_stats_timeout
Max timeout in secs for stargate stats collection.
Default: 720
--collect_top_stats
Collect top stats for each test.
Default: false
--collect_uvm_stats
Collects uvm cpu and latency stats.
genesis
Usage
Usage: /usr/local/nutanix/cluster/bin/genesis start|stop [all|<service1> [<service2> ...]]|
restart|status
/usr/local/nutanix/cluster/bin/genesis
--foreground
Run Genesis in foreground.
Default: false
--genesis_debug_stack
Flag to indicate whether a signal handler needs to be registered for debugging
greenlet stacks.
Default: true
--genesis_self_monitoring
Genesis to do self monitoring.
Default: true
--genesis_upgrade
Flag to indicate that genesis restarted because it is upgrading itself.
Default: false
--help
show this help
Default: 0
--helpshort
show usage only for this module
Default: 0
--helpxml
like --help, but generates XML output
Default: false
cluster.genesis.compute_only.consts
--factory_config_json_path_on_host
Path to factory_config.json on the CO host
Default: /root/factory_config.json
--hardware_config_json_path_on_host
Path to hardware_config.json on the CO host
Default: /root/hardware_config.json
cluster.genesis.convert_cluster.utils
--cluster_conversion_preserve_mac
Preserve MAC addresses of VM NICs in conversion
Default: true
--convert_cluster_blacklisted_vms
List of VM UUIDs which won't be converted during cluster conversion
Default: /appliance/logical/genesis/convert_cluster/blacklisted_vms
--convert_cluster_disable_marker
Marker file to disable hypervisor conversion on node.
Default: /home/nutanix/.convert_cluster_disable
--convert_cluster_node_ids
List of node ids which will be converted to target hypervisor
Default: /appliance/logical/genesis/convert_cluster/converting_node_ids
--converting_vm_info
Path to zk node where the reg info of all VMs undergoing conversion is stored
Default: /appliance/logical/genesis/convert_cluster/converting_vm
--default_vcenter_port
Default port to register with vCenter.
Default: 443
--fail_vm_uuids_conversion
Comma separated list of VM UUIDs which will fail vm conversion
--fail_vm_uuids_power_off
Comma separated list of VM UUIDs which will fail vm power off operation during
conversion
--fail_vm_uuids_power_on
Comma separated list of VM UUIDs which will fail vm power on operation during
conversion
cluster.genesis.convert_cluster.vm_migration
--disable_vm_migration
Disable VM migration for the node. This is used for error injection and testing.
Default: false
cluster.genesis.la_jolla.la_jolla_utils
--nfs_buf_size
NFS buffer size
Default: 8388608
cluster.genesis.node_manager
--auto_discovery_interval_secs
Number of seconds to sleep when local node can't join any discovered cluster.
Default: 5
--co_nodes_unconfigure_marker
Path to marker file to indicate that the node has to unconfigure CO nodes as part of
unconfiguring itself. The marker file contains space-separated IPs
of the CO nodes to unconfigure.
Default: /home/nutanix/.co_nodes_unconfigure
--download_staging_area
Directory where we will download directories from other SVMs.
Default: /home/nutanix/tmp
--firmware_disable_auto_upgrade_marker
Path to marker file to indicate that automatic firmware upgrade should not be
performed on this node.
Default: /home/nutanix/.firmware_disable_auto_upgrade
--foundation_disable_auto_upgrade_marker
Path to marker file to indicate that automatic foundation upgrade should not be
performed on this node.
Default: /home/nutanix/.foundation_disable_auto_upgrade
--genesis_restart_required_path
Marker file to indicate that genesis restart is required during upgrade.
Default: /home/nutanix/.genesis_restart_required_path
--genesis_restart_timeout
Time we wait for the genesis to restart.
Default: 120
--gold_image_version_path
Path to the file that contains the version of the gold image.
cluster.genesis.resource_management.rm_helper
--common_pool_map
Mapping of node to its common pool memory in KB.
Default: /appliance/logical/genesis/common_pool_map
cluster.genesis.resource_management.rm_prechecks
--cushion_memory_in_kb
Cushion Memory required in nodes before update
Default: 2097152
--delta_memory_for_nos_upgrades_kb
Amount of CVM memory to be increased during NOS upgrade
Default: 4194304
--host_memory_threshold_in_kb
Minimum host memory for memory update, set to 62 GB.
Default: 65011712
--max_cvm_memory_upgrade_kb
Maximum allowed CVM memory for update during upgrade
Default: 31457280
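The memory defaults in this section are expressed in KB; a small conversion helper (illustrative, using 1 GiB = 1048576 KB) makes the listed defaults easier to read:

```python
def kb_to_gib(kb: int) -> float:
    """Convert a KB-denominated gflag default to GiB (1 GiB = 1048576 KB)."""
    return kb / 1048576

print(kb_to_gib(2097152))   # 2.0  -> --cushion_memory_in_kb
print(kb_to_gib(4194304))   # 4.0  -> --delta_memory_for_nos_upgrades_kb
print(kb_to_gib(65011712))  # 62.0 -> --host_memory_threshold_in_kb
print(kb_to_gib(31457280))  # 30.0 -> --max_cvm_memory_upgrade_kb
```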
cluster.genesis.resource_management.rm_tasks
--cvm_reconfig_component
Component for CVM reconfig
Default: kGenesis
--cvm_reconfig_operation
Operation for CVM reconfig.
Default: kCvmreconfig
cluster.genesis_utils
--orion_config_path
Path to orion config
Default: /appliance/logical/orion/config
--svm_default_login
User name for logging into SVM.
Default: nutanix
--timeout_HA_route_verification
Timeout for setting HA route.
Default: 180
--timeout_zk_operation
Timeout for zk operation like write
Default: 120
--upgrade_fail_marker
Marker to indicate upgrade has failed.
Default: /appliance/logical/genesis/upgrade_failed
ncc
Usage
nutanix@cvm$ /home/nutanix/ncc/bin/ncc [flags]
/home/nutanix/ncc/bin/ncc
--generate_plugin_config_template
Generate plugin config for the plugin
Default: false
--health_checks_and_log_collector
Flag to run health_checks and log_collector together.
Default: false
--help
show this help
Default: 0
cluster.ncc_upgrade_helper
--cluster_health_shutdown_max_retries
Max number of retries to shutdown cluster health.
Default: 5
--cluster_health_shutdown_threshold_ms
Time threshold (ms) between cluster health shutdown retries.
Default: 2000
--ncc_installation_path
Location where NCC is installed on a CVM.
Default: /home/nutanix/ncc
--ncc_num_nodes_to_upload
Number of nodes to upload the NCC installer directory to.
Default: 2
--ncc_uncompress_path
Location for uncompressing nutanix NCC binaries.
Default: /home/nutanix/data/ncc/installer
--ncc_upgrade_info_znode
Location in Zookeeper where we keep the upgrade node information.
ncc.cluster_checker
--auto_log_coll
If true, ncc runs the periodic log collector and panacea.
Default: false
--auto_log_coll_output_location
The location where periodic log collector stores the log files.
Default: /home/nutanix/data/periodic_log_collector
--ignore_frequency_check
If set, NCC frequency check is ignored.
Default: false
--max_ncc_lock_checks
Maximum number of times to check if ncc lock is acquired.
Default: 5
--ncc_email_subject
Subject of the email to be sent.
Default: NCC Email Digest
--ncc_failed_plugins_file
Zookeeper path where the list of last failed plugins is stored.
Default: /appliance/logical/serviceability/last_failed_plugins
--ncc_lock_sleep
Sleep for this many seconds if the ncc lock is not acquired.
Default: 5
--ncc_send_email
If true, ncc tries to send an email from the Zookeeper leader if configured time
constraints are met.
Default: false
ncc.ncc_utils.cluster_utils
--hcl_file_path
The path of file hcl.json on CVM.
Default: /etc/nutanix/hcl.json
--hyperv_ncc_cmd_timeout_secs
Timeout seconds for some commands run on HyperV
Default: 120
--ipmiutil_lock_file
ipmiutil.exe does not behave correctly when multiple parallel instances are
running at the same time. This file will be used for flock around ipmiutil.exe
invocations to protect against concurrent NCC and cluster_health instances.
Default: /tmp/.ipmiutil_lock
--ipmiutil_max_num_retries
Max number of times ipmiutil is run to get error free output
Default: 10
--ipmiutil_retry_interval
Time in seconds after which ipmiutil is run again after an unsuccessful run
Default: 10
--prism_secure_key_zk_path
Znode containing the PrismSecureKey.
Default: /appliance/logical/prismsecurekey
ncc.ncc_utils.esx_utils
--fix_high_perf_policy
Fix high performance policy.
Default: false
--temp_cvm_vmx
Temporary path to store the CVM vmx file.
Default: /tmp/ServiceVM.vmx
ncc.ncc_utils.hyperv_utils
--ncc_server_port
Port that the HTTP server started by NCC listens on.
Default: 2101
ncc.ncc_utils.hypervisor_logs
--copy_from_host_timeout
Timeout used while copying log files from host.
Default: 300
ncc.ncc_utils.hypervisor_utils
--nic_link_down_job_state
Path to the job state for nic status.
Default: /home/nutanix/data/serviceability/check_cvm_health_job_state.json
--nic_link_down_timeout
Nic link down timeout. If a nic link's down time exceeds this timeout, the nic is
regarded as disconnected and is removed from the nic status list saved in the nic
status file; it is no longer checked until its status becomes Up, at which point it
reenters the nic status list. The default timeout is one day. If the timeout is 0,
the nic always stays in the nic status list and is always checked, as before.
Default: 86400
--raw_sel_log_file_path
Path to file storing raw sel log information.
Default: /tmp/raw_sel.log
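The --nic_link_down_timeout behavior described above can be sketched as follows; the function name and dict-based status list are assumptions for illustration, not the product's data structures:

```python
def nics_to_keep(down_secs_by_nic: dict, timeout_secs: int = 86400) -> list:
    """Return the NICs that stay in the status list under the given timeout.

    A timeout of 0 means NICs are never dropped from the list.
    """
    if timeout_secs == 0:
        return sorted(down_secs_by_nic)          # always keep every NIC
    return sorted(nic for nic, down in down_secs_by_nic.items()
                  if down <= timeout_secs)

status = {"eth0": 0, "eth1": 90000}              # eth1 down for more than a day
print(nics_to_keep(status))                      # ['eth0']
print(nics_to_keep(status, timeout_secs=0))      # ['eth0', 'eth1']
```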
ncc.plugins.log_collector.alerts_collector
--max_query_alerts
The maximum number of alerts that should be queried at a time.
Default: 100
ncc.plugins.log_collector.binary_log_collector
--binary_log_tool
The binary log analyzer tool to retrieve binary logs.
Default: /home/nutanix/bin/binary_log_analyzer
--chunk_time
The number of hours for which binary logs are collected at once. Binary logs are
collected in chunks to avoid overfilling the memory.
Default: 1
ncc.plugins.log_collector.component_data_collector
--chronos_master_port
The port on which chronos master activity traces should be collected.
Default: 2011
--chronos_node_port
The port on which chronos node activity traces should be collected.
Default: 2012
--component_data_acropolis_port
The port on which acropolis activity traces should be collected.
Default: 2030
--component_data_cerebro_port
The port on which cerebro activity traces should be collected.
Default: 2020
--component_data_curator_port
The port on which curator activity traces should be collected.
Default: 2010
--component_data_ip
The IP address from where activity traces should be collected.
Default: http://127.0.0.1
--component_data_stargate_port
The port on which stargate activity traces should be collected.
Default: 2009
ncc.plugins.log_collector.cvm_kernel_logs_collector
--kernel_logs_subcomponent_list
Comma-separated list of subcomponents for which kernel logs are to
be collected. The valid values are: ['audit', 'dmesg', 'secure', 'dmesg.old',
'salt_minion', 'journal', 'messages', 'salt_master', 'wtmp']
Default: all
ncc.plugins.log_collector.cvm_logs_collector
--collect_zookeeper_transaction_logs
Flag to specify if zookeeper transaction logs should be collected.
Default: false
ncc.plugins.log_collector.fileserver_logs_collector
--end_time_epoch
End Time in epoch for minerva log collection.
Default: 0
--fsvm_cmd_timeout_secs
Command timeout in seconds for FSVM.
Default: 15
--fsvm_log_copy_timeout_secs
File Server VM log copy timeout in seconds.
Default: 300
--minerva_collect_cores
Whether to collect cores
Default: true
--minerva_collect_sysstats
Whether to collect sysstats logs
Default: true
--minerva_timeout_secs
Timeout in secs for minerva log collection.
Default: 1800
ncc.plugins.log_collector.log_utils.hyperv_log
--hyperv_cluster_logs
Collect HyperV cluster logs
Default: false
ncc.plugins.log_collector.vpn_log_collector
--vpn_log_files_filter
Log files to capture expressed as <folder-path>:<shell-file-pattern>
Default: /var/log/ocx-vpn:vpn.log*,/var/log:charon.log,/var/log:auth.log,/var/log/
nginx/:error.log
--vpn_timeout_secs
Timeout in secs for vpn log collection.
Default: 1800
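The --vpn_log_files_filter value is a comma-separated list of <folder-path>:<shell-file-pattern> entries; a minimal parser sketch (the helper is hypothetical, splitting each entry on its last colon so folder paths containing colons would still parse):

```python
def parse_log_filter(value: str) -> list:
    """Parse '<folder-path>:<pattern>,...' into (folder, pattern) pairs."""
    entries = []
    for item in value.split(","):
        folder, _, pattern = item.rpartition(":")
        entries.append((folder, pattern))
    return entries

default = "/var/log/ocx-vpn:vpn.log*,/var/log:charon.log"
print(parse_log_filter(default))
# [('/var/log/ocx-vpn', 'vpn.log*'), ('/var/log', 'charon.log')]
```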
util.ncc.config_module.config
--min_nos_version_with_impact_type
Minimum NOS version for which we move category_list members to
impact_type_list, classification_list.
Default: 5.0
--template_checks_max_id
The last ID to be used for template checks.
Default: 220000
--template_checks_min_id
The first ID to be used for template checks.
Default: 210000
util.ncc.data_access.insights_data_access
--batch_entity_cnt
Batch count of entities read and written to Entity DB.
Default: 50
--batch_entity_cnt_long_term
Batch count of entities read and written to Entity DB for long-term data.
util.ncc.ncc_async_task_service.async_task_interface.async_task_base_manager
--use_ergon_interface
Flag to specify if the code should use ergon as the task interface.
Default: true
util.ncc.ncc_logger
--enable_plugin_wise_logging
Enabling this flag will add plugin name to log record during plugin run
Default: true
util.ncc.ncc_utils.arithmos_interface
--use_new_arithmos_interface
If true, use the new arithmos interface.
Default: true
util.ncc.ncc_utils.arithmos_interface.arithmos_interface_new
--arithmos_update_interval
Time in seconds between updates to arithmos.
Default: 30
--cos_update_interval
Interval measured in arithmos cycles during which check.overall_score is
updated.
Default: 2
util.ncc.ncc_utils.arithmos_utils
--cluster_cpu_usage_sampling_interval_sec
Sampling interval for CPU usage for a cluster.
Default: 300
--cluster_memory_usage_sampling_interval_sec
Sampling interval for Memory usage for a cluster.
Default: 300
util.ncc.ncc_utils.checkpoint
--ignore_checkpoints
Whether to force execution even when previous checkpoints exist.
Default: true
util.ncc.ncc_utils.cluster_utils
--cmd_timeout_secs
Timeout seconds for commands run on ESX and CVMs
Default: 60
--copy_timeout_secs
Timeout seconds for file copy operations.
Default: 600
--cvm_timeout_cmd
Timeout for any command executed on CVMs via SSH.
Default: /usr/bin/timeout -s 9 %d
--cvm_uname
Username used to authenticate with cluster CVMs
Default: nutanix
util.ncc.ncc_utils.gflags_definition
--alert_severity_list
Comma separated list of severity types for alerts to retrieve. The valid values are:
['kInfo', 'kWarning', 'kCritical', 'kAudit', 'kAll']
Default: kAll
--anonymize_output
Flag to specify if the output of log collector should be anonymized.
Default: false
--auto_log_coll_frequency_hour
If set, the log collector and panacea will run periodically, e.g. to run every 1 hr:
--auto_log_coll_frequency_hour=1
Default: 0
--case_number
The case number for which logs are collected. If specified, logs are stored in this
directory on FTP server.
--collect_activity_traces
Boolean flag to specify whether to collect the activity traces.
Default: 1
--collect_all_hypervisor_logs
Flag to specify if all /var/log/* files should be collected.
Default: true
--collect_binary_logs
Boolean flag to specify whether to collect binary logs or not.
Default: 0
--collect_boot_log
Flag to specify if the /var/log/boot.log should be collected.
Default: true
util.ncc.ncc_utils.globals
--insights_rpc_server_ip
The IP address of Insights RPC server.
Default: 127.0.0.1
--insights_rpc_server_port
The port where the Insights RPC server listens.
Default: 2027
util.ncc.ncc_utils.progress_monitor
--progress_sleep_duration_sec
Time duration for which progress monitor should sleep between retries.
Default: 4
--rsync_max_retry_count
Number of retries when rsync fails to fetch remote progress file.
Default: 3
util.ncc.plugins.base_plugin
--accumulate_pagination_results
If we want to collect results from the pagination calls.
Default: false
--default_cluster_uuid
This flag specifies the default_cluster_uuid.
Default: Unknown
--is_run_all
Is this a run_all run?
Default: false
--ncc_plugin_para_file_dir
Directory path where ncc plugin parameter files are kept
Default: /home/nutanix/ncc/plugin_config/plugin_param
--waiting_plugin_sleep_time
This flag specifies the amount of time the waiting plugins should sleep before
checking for idle threads.
Default: 0.05
util.ncc.plugins.consts
--HDD_latency_threshold_ms
HDD await threshold (ms/command) to determine Disk issues.
Default: 500
--SSD_latency_threshold_ms
SSD await threshold (ms/command) to determine Disk issues.
Default: 50
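The two await thresholds above (500 ms/command for HDD, 50 ms/command for SSD by default) imply a per-tier latency check along these lines; the classification function is an assumed sketch, not the actual NCC check logic:

```python
# Default await thresholds from --HDD_latency_threshold_ms and
# --SSD_latency_threshold_ms.
THRESHOLD_MS = {"HDD": 500, "SSD": 50}

def latency_is_issue(tier: str, await_ms: float) -> bool:
    """Flag a disk issue when measured await exceeds the tier's threshold."""
    return await_ms > THRESHOLD_MS[tier]

print(latency_is_issue("SSD", 75))   # True  (above the 50 ms SSD threshold)
print(latency_is_issue("HDD", 75))   # False (below the 500 ms HDD threshold)
```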
--autocomplete
If true, recreate autocomplete bash script.
Default: false
util.ncc.plugins.firstimport
--ncc_base_dir
The base NCC directory
Default: /home/nutanix/ncc
--neuron_base_dir
The base NEURON directory
Default: /home/nutanix/neuron
util.ncc.plugins.log_collector.flags
--append_ftp_cluster_metadata
If true, cluster metadata is appended to the ftp_target_location. The dir structure
in this case is: FLAGS.ftp_target_location/<cluster-uuid>/timestamp/<node-id>
Default: false
--append_ftp_location
If true, 'LogCollector' is appended to the target ftp location before uploading the
log bundle. If false, the log bundle is uploaded directly to
FLAGS.ftp_target_location.
Default: true
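The directory structure described for --append_ftp_cluster_metadata can be sketched with a hypothetical path helper (the function name and arguments are illustrative; only the layout FLAGS.ftp_target_location/<cluster-uuid>/timestamp/<node-id> comes from the flag description):

```python
import posixpath

def ftp_upload_dir(target: str, cluster_uuid: str, timestamp: str,
                   node_id: str, append_metadata: bool) -> str:
    """Build the upload dir, appending cluster metadata when requested."""
    if not append_metadata:
        return target
    return posixpath.join(target, cluster_uuid, timestamp, node_id)

print(ftp_upload_dir("/upload", "uuid-1", "20200707", "node-2", True))
# /upload/uuid-1/20200707/node-2
```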
--authenticate_using_keys
Boolean flag to specify if the authentication should happen using keys. Keys
authentication is only supported via SFTP.
Default: false
--auto_log_coll_slave
If true, ncc is running the periodic log collector on slave
Default: false
--cache_reset_limit_mb
The disk space cache reset limit.
Default: 512
--curr_logfile_timestamp
The timestamp for the current tarball generated
Default: 0
--gz_extension
The extension of compressed tarfile.
Default: .tar.gz
--http_timeout
Timeout for http request.
Default: 60
--hypervisor_log_input_dir
If this flag is mentioned the logs on the hypervisor will be collected from the
given directory, otherwise they will be collected from the default configured
directory.
--iam_svc_acc_zkpath
IAM service account Zk Path.
Default: /appliance/logical/iam/mgmt_plane/service_account_config
setup_hyperv.py
commands:
register_shares
setup_metro
setup_scvmm
/usr/local/nutanix/bin/setup_hyperv.py
--configure_library_share
Whether a library share should be configured
Default: None
--default_host_group_path
The default SCVMM host group
Default: All Hosts
--help
Print detailed help
Default: false
--library_share_name
The name of the container that will be registered as a library share in SCVMM
--metro_smb_account_password
Password for the new metro cluster pair fqdn.
--metro_smb_name
This is the name that identifies a unique pair of AOS clusters. This name must
be used for the name of the SMB server when provisioning virtual machines and
virtual hard disks on any metro container stretched between the pair of AOS
clusters.
--ncli_password
Password to be used when running ncli
--nutanix_management_share
The storage container for nutanix cluster management
Default: NutanixManagementShare
--password
Domain account password for the host
--scvmm_host_group_path
Host Group to which this cluster should be added
--scvmm_password
SCVMM account password - defaults to <password>
--scvmm_server_name
Name of the server running SCVMM
--scvmm_username
SCVMM account username (with the FQDN) - defaults to <host_fqdn>\<username>
License
The provision of this software to you does not grant any licenses or other rights under any
Microsoft patents with respect to anything other than the file server implementation portion of
the binaries for this software, including no licenses or any other rights in any hardware or any
devices or software that are used to communicate with or in connection with this software.
Conventions
Convention Description
root@host# command The commands are executed as the root user in the vSphere or
Acropolis host shell.
> command The commands are executed in the Hyper-V host shell.
AOS Version | Interface | Target | Username | Password
Last modified: July 7, 2020 (2020-07-07T10:45:03+05:30)