CVM Commands
To verify the names, speed, and connectivity status of all AHV host interfaces, use
the manage_ovs show_interfaces command; to view the bond and uplink configuration
for each bridge, use manage_ovs show_uplinks.
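For example, run the following on the CVM (output omitted here; show_interfaces
typically lists each interface with its link state and speed, while show_uplinks
lists the bond membership per bridge):
nutanix@CVM$ manage_ovs show_interfaces
nutanix@CVM$ manage_ovs --bridge_name br0 show_uplinks
To list the configured networks and the VMs attached to each network, use acli
net.list and net.list_vms, as in the following example: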
nutanix@CVM$ acli
<acropolis> net.list
Network name Network UUID Type Identifier
Production ea8468ec-c1ca-4220-bc51-714483c6a266 VLAN 27
vlan.0 a1850d8a-a4e0-4dc9-b247-1849ec97b1ba VLAN 0
<acropolis> net.list_vms vlan.0
VM UUID VM name MAC address
7956152a-ce08-468f-89a7-e377040d5310 VM1 52:54:00:db:2d:11
47c3a7a2-a7be-43e4-8ebf-c52c3b26c738 VM2 52:54:00:be:ad:bc
501188a6-faa7-4be0-9735-0e38a419a115 VM3 52:54:00:0c:15:35
Prism Uplink Configuration and Virtual Switches
From AOS 5.19 on, you can manage interfaces and load balancing for bridges and
bonds from the Prism web interface using virtual switches. This method
automatically changes all hosts in the cluster and performs the appropriate
maintenance mode and VM migration. You don't need to enter or exit maintenance mode
manually, which saves configuration time. Refer to the AHV Host Network
Management documentation for complete instructions and more information on virtual
switches.
Nutanix recommends using the Prism web interface exclusively for all network
management from 5.19 on. For AOS versions from 5.11 through 5.18, use the Prism
uplink configuration to configure networking for systems with a single bridge and
bond. For versions prior to 5.11 or for versions from 5.11 through 5.18 in systems
with multiple bridges and bonds, use the CLI to modify the network configuration
instead.
Don't use any manage_ovs commands to make changes once you've used the Prism
virtual switch or uplink configuration, because this configuration automatically
reverts changes made using manage_ovs.
Follow the steps in this section on one node at a time to make CLI network changes
to a Nutanix cluster that is connected to a production network.
Use SSH to connect to the first CVM you want to update. Check its name and IP to
make sure you're connected to the correct CVM. Verify failure tolerance and don't
proceed if the cluster can't tolerate at least one node failure.
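One way to verify failure tolerance is the following ncli command; confirm that the
reported fault tolerance is at least 1 before proceeding (a sketch, assuming
node-level fault domains):
nutanix@CVM$ ncli cluster get-domain-fault-tolerance-status type=node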
Verify that the target AHV host can enter maintenance mode.
nutanix@CVM$ acli host.enter_maintenance_mode_check <host ip>
Put the AHV host in maintenance mode.
nutanix@CVM$ acli host.enter_maintenance_mode <host ip>
Find the <host ID> in the output of the command ncli host list.
nutanix@CVM$ ncli host list
Id : 00058977-c18c-af17-0000-000000006f89::2872
?- "2872" is the host ID
Uuid : ddc9d93b-68e0-4220-85f9-63b73d08f0ff
...
Enable maintenance mode for the CVM on the target AHV host. You may skip this step
if the CVM services aren't running or the cluster state is stopped.
nutanix@CVM$ ncli host edit id=<host ID> enable-maintenance-mode=true
Because network changes can disrupt host connectivity, use IPMI to connect to the
host console and perform the desired network configuration changes. Once you have
changed the configuration, ping the default gateway and another Nutanix node to
verify connectivity.
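For example, from the AHV host console (placeholder addresses; adjust to your
environment):
root@AHV# ping <default gateway IP>
root@AHV# ping <other Nutanix node IP>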
After all tests have completed successfully, remove the CVM and AHV host from
maintenance mode.
From a different CVM, run the following command to take the affected CVM out of
maintenance mode:
nutanix@cvm$ ncli host edit id=<host ID> enable-maintenance-mode=false
Exit host maintenance mode to restore VM locality, migrating VMs back to their
original AHV host.
nutanix@cvm$ acli host.exit_maintenance_mode <host ip>
Move to the next node in the Nutanix cluster and repeat these steps to enter
maintenance mode, make the desired changes, and exit maintenance mode. Repeat this
process until you have made the changes on all hosts in the cluster.
To change the interfaces in a bond, run the manage_ovs update_uplinks command from
the CVM. Add the --require_link=false flag to update the bond even when some of the
listed interfaces don't have an active link.
Note: If you do not enter a bridge_name, the command runs on the default bridge,
br0.
nutanix@CVM$ manage_ovs --bridge_name <bridge> --interfaces <interfaces>
update_uplinks
nutanix@CVM$ manage_ovs --bridge_name <bridge> --interfaces <interfaces>
--require_link=false update_uplinks
When you change the bond members or the load balancing algorithm, the manage_ovs
update_uplinks command deletes the existing bond and recreates it with the new
parameters. Unless you specify the correct bond mode parameter, update_uplinks
recreates the bond with the default load balancing configuration. If you use
active-backup load balancing, update_uplinks can cause a short network interruption.
If you use balance-slb or balance-tcp (LACP) load balancing and don't specify the
correct bond mode parameter, update_uplinks resets the configuration to
active-backup. At this point, the host stops responding to keepalives and network
links that rely on LACP go down.
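To preserve the existing load balancing mode when you update uplinks, pass the bond
mode explicitly. The following is a sketch; the bridge, bond, interface, and mode
values are examples and must match your current configuration:
nutanix@CVM$ manage_ovs --bridge_name br0 --bond_name br0-up --interfaces 10g --bond_mode balance-slb update_uplinks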
Note: With AHV clusters that run versions 5.10.x prior to 5.10.4, using manage_ovs
to make configuration changes on an OVS bridge configured with a single uplink may
cause a network loop. If you configured the AHV node with a single interface in the
bridge, upgrade AOS to 5.10.4 or later before you make any changes. If you have a
single interface in the bridge, engage Nutanix Support if you can't upgrade AOS and
must change the bond configuration.
Compute-Only Node Network Configuration
The manage_ovs command runs from a CVM and makes network changes to the local AHV
host where it runs. Because Nutanix AHV compute-only nodes don't run a CVM, you use
a slightly different process to configure their networks.
Follow the steps in the Production Network Changes section to put the target AHV
host in maintenance mode.
Run the manage_ovs command from any other CVM in the cluster with the --host flag.
nutanix@CVM$ manage_ovs --host <compute-only-node-IP> --bridge_name <br-name>
--bond_name <bond-name> --interfaces 10g update_uplinks
Exit maintenance mode after you complete the network changes, then repeat these
steps on the next compute-only node.
Perform the following steps on each Nutanix node in the cluster to achieve the
configuration shown in the previous figure:
In AOS 5.5 or later, manage_ovs handles bridge creation. On each CVM, add bridge
br1. Bridge names must not exceed six characters. We suggest using the name br1.
Note: When you create a bridge, ensure that it's created on every host in the
cluster. Failure to add bridges to all hosts can lead to VM migration errors.
nutanix@CVM$ manage_ovs --bridge_name br1 create_single_bridge
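If your change window allows it, you can run the same command on every CVM at once
by wrapping it in allssh, as used elsewhere in this section (a convenience sketch):
nutanix@CVM$ allssh "manage_ovs --bridge_name br1 create_single_bridge"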
From the CVM, remove eth0 and eth1 from the default bridge br0 on all CVMs. Run the
following show commands to make sure that all interfaces are in a good state before
you perform the update:
nutanix@CVM$ allssh "manage_ovs show_interfaces"
nutanix@CVM$ allssh "manage_ovs --bridge_name br0 show_uplinks"
The output from these show commands should verify that the 10 GbE and 1 GbE
interfaces have connectivity to the upstream switches; look for the columns labeled
link and speed.
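Assuming the 10 GbE interfaces stay in the default bond on br0 (br0-up is a common
bond name), a sketch of the command that removes the 1 GbE interfaces from br0 looks
like this:
nutanix@CVM$ manage_ovs --bridge_name br0 --bond_name br0-up --interfaces 10g update_uplinks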
Note: You can use the --require_link=false flag to create the bond even if all the
1 GbE adapters aren't connected.
nutanix@CVM$ manage_ovs --bridge_name br1 --bond_name br1-up --interfaces 1g
--require_link=false update_uplinks
Exit maintenance mode and repeat the previous update_uplinks steps for every host
in the cluster.
Creating Networks
In AOS versions 5.19 and later, you can create networks on any virtual switch and
bridge using a drop-down in the Prism UI.
In clusters with AOS versions from 5.11 through 5.18 with a single bridge and bond,
use the Uplink Configuration UI instead of the CLI to configure active-backup mode
by selecting Active-Backup (shown in the following figure).
For all clusters with versions 5.19 and later, even with multiple bridges, use the
Prism Virtual Switch UI instead of the CLI to select active-backup mode
(configuration shown in the following figure).
Figure. Virtual Switch Configuration for Active-Backup
Note: Don't use the following CLI instructions unless you're using an AOS version
prior to 5.11 or a version from 5.11 through 5.18 with multiple bridges and bonds.
Active-backup mode is enabled by default, but you can also configure it with the
following ovs-vsctl command on the CVM:
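A sketch, assuming the default bond name br0-up and the internal host address used
elsewhere in this document:
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port br0-up bond_mode=active-backup"
To confirm the resulting bond settings, run manage_ovs show_uplinks on the CVM;
output similar to the following indicates active-backup mode:
nutanix@CVM$ manage_ovs --bridge_name br0 show_uplinks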
Bridge: br0
Bond: br0-up
bond_mode: active-backup
interfaces: eth3 eth2
lacp: off
lacp-fallback: false
lacp_speed: slow
For more detailed bond information such as the currently active adapter, use the
following ovs-appctl command on the CVM:
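A sketch, assuming bond name br0-up:
nutanix@CVM$ ssh [email protected] "ovs-appctl bond/show br0-up"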
Don't use IGMP snooping on physical switches connected to Nutanix servers that use
balance-slb. Balance-slb forwards inbound multicast traffic on only a single active
adapter and discards multicast traffic from other adapters. Switches with IGMP
snooping may discard traffic to the active adapter and only send it to the backup
adapters. This mismatch leads to unpredictable multicast traffic behavior. Disable
IGMP snooping or configure static IGMP groups for all switch ports connected to
Nutanix servers using balance-slb.
Each individual VM NIC uses only a single bond member interface at a time, but a
hashing algorithm distributes multiple VM NICs (multiple source MAC addresses)
across bond member interfaces. As a result, it's possible for a Nutanix AHV node
with two 10 GbE interfaces to use up to 20 Gbps of network throughput, while an
individual VM may have a maximum throughput of 10 Gbps, the speed of a single
physical interface.
In clusters with AOS versions from 5.11 through 5.18 with a single bridge and bond,
use the Uplink Configuration UI instead of the CLI to configure balance-slb mode by
selecting Active-Active with MAC pinning (shown in the following figure).
For all clusters with versions 5.19 and later, even with multiple bridges, use the
Prism Virtual Switch UI instead of the CLI to configure balance-slb mode by
selecting Active-Active with MAC pinning (shown in the following figure).
Note: Don't use the following CLI instructions unless you're using an AOS version
prior to 5.11 or a version from 5.11 through 5.18 with multiple bridges and bonds.
The default rebalance interval is 10 seconds, but Nutanix recommends setting this
interval to 30 seconds to avoid excessive movement of source MAC address hashes
between upstream switches. Nutanix has tested this configuration using two separate
upstream switches with AHV. No additional configuration (such as link aggregation)
is required on the switch side, as long as the upstream switches are interconnected
physically or virtually and both uplinks allow the same VLANs.
Note: Don't use link aggregation technologies such as LACP with balance-slb. The
balance-slb algorithm assumes that upstream switch links are independent layer 2
interfaces and handles broadcast, unknown, and multicast (BUM) traffic accordingly,
selectively listening for this traffic on only a single active adapter in the bond.
Tip: In a production environment, Nutanix strongly recommends that you make changes
on one node at a time, after you verify that the cluster can tolerate a node
failure. Follow the steps in the Production Network Changes section when you make
changes.
After you enter maintenance mode for the desired CVM and AHV host, configure the
balance-slb algorithm for the bond with the following commands:
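A sketch, assuming bond name br0-up; the rebalance interval is set in milliseconds
(30000 ms = 30 seconds):
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port br0-up bond_mode=balance-slb"
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port br0-up other_config:bond-rebalance-interval=30000"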
Verify the proper bond mode on each CVM with the following command:
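For example, assuming the default bridge br0:
nutanix@CVM$ manage_ovs --bridge_name br0 show_uplinks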
Nutanix and OVS require dynamic link aggregation with LACP instead of static link
aggregation on the physical switch. Do not use static link aggregation such as
EtherChannel with AHV.
Nutanix recommends that you enable LACP on the AHV host with fallback to active-
backup, then configure the connected upstream switches. Different switch vendors
may refer to link aggregation as port channel or LAG. Using multiple upstream
switches may require additional configuration, such as a multichassis link
aggregation group (MLAG) or virtual PortChannel (vPC). Configure switches to fall
back to active-backup mode in case LACP negotiation fails (sometimes called
fallback or no suspend-individual). This switch setting assists with node imaging
and initial configuration where LACP may not yet be available on the host.
For clusters with AOS versions from 5.11 through 5.18 with a single bridge and
bond, use the Uplink Configuration UI to configure balance-tcp with LACP mode by
selecting Active-Active with link aggregation (shown in the following figure).
For all clusters with versions 5.19 and later, even with multiple bridges, use the
Prism Virtual Switch UI instead of the CLI to configure balance-tcp with LACP mode
by selecting Active-Active (shown in the following figure).
The Prism GUI Active-Active mode configures all AHV hosts with the fast setting for
LACP speed, causing the AHV host to request LACP control packets at the rate of one
per second from the physical switch. In addition, the Prism GUI configuration sets
LACP fallback to active-backup on all AHV hosts. You can't modify these default
settings after you've configured them from the GUI, even by using the CLI.
Note: Don't use the following CLI instructions unless you're using an AOS version
prior to 5.11 or a version from 5.11 through 5.18 with multiple bridges and bonds.
Tip: In a production environment, Nutanix strongly recommends that you make changes
on one node at a time, after you verify that the cluster can tolerate a node
failure. Follow the steps in the Production Network Changes section when you make
changes.
After you enter maintenance mode for the desired AHV host and CVM, configure link
aggregation with LACP and balance-tcp using the commands in the following
paragraphs.
Note: Upstream physical switch LACP settings such as timers should match the AHV
host settings for configuration consistency.
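The following commands are a sketch of the CLI configuration, assuming bond name
br0-up; the lacp-time setting mirrors the fast timer that the Prism GUI applies:
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port br0-up lacp=active"
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port br0-up bond_mode=balance-tcp"
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port br0-up other_config:lacp-time=fast"
Combine these changes with the fallback setting described in the next paragraph so
that the bond continues to pass traffic if LACP negotiation fails.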
If upstream LACP negotiation fails, the default AHV host CLI configuration disables
the bond, blocking all traffic. The following command allows fallback to active-
backup bond mode in the AHV host in the event of LACP negotiation failure:
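A sketch, assuming bond name br0-up:
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port br0-up other_config:lacp-fallback-ab=true"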
Tip: In a production environment, Nutanix strongly recommends that you make changes
on one node at a time, after you verify that the cluster can tolerate a node
failure. Follow the steps in the Production Network Changes section when you make
changes.
After you enter maintenance mode on the desired AHV host and CVM, configure a
bonding mode that doesn't require LACP (such as active-backup):
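A sketch, assuming bond name br0-up; the second command turns off LACP negotiation:
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port br0-up bond_mode=active-backup"
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port br0-up lacp=off"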
Note: All CVMs and hypervisor hosts must be on the same subnet and broadcast
domain. No systems other than the CVMs and hypervisor hosts should be on this
network, which should be isolated and protected.
Figure. Default Untagged VLAN for CVM and AHV Host
The setup depicted in the previous figure works well for situations where the
switch administrator can set the CVM and AHV VLAN to untagged. However, if you
don't want to send untagged traffic to the AHV host and CVM or if your security
policy doesn't allow this configuration, you can add a VLAN tag to the host and the
CVM with the procedure shown in the next image.
Note: Ensure that IPMI console access is available for recovery before you start
this configuration.
Tip: In a production environment, Nutanix strongly recommends that you make changes
on one node at a time, after you verify that the cluster can tolerate a node
failure. Follow the steps in the Production Network Changes section when you make
changes.
After you enter maintenance mode on the target AHV host and CVM, configure VLAN
tags on the AHV host:
Note: Use port br0 in the following command. Port br0 is the internal port assigned
to AHV on bridge br0. Do not use any value other than br0.
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port br0 tag=10"
nutanix@CVM$ ssh [email protected] "ovs-vsctl list port br0"
Configure VLAN tags for the CVM:
nutanix@CVM$ change_cvm_vlan 10
Exit maintenance mode and repeat these steps on every node to configure the entire
cluster.
To remove the VLAN tags and return the CVM and AHV host to untagged traffic, enter
maintenance mode on the target AHV host and CVM, then run the following command for
the CVM:
nutanix@CVM$ change_cvm_vlan 0
Run the following command for the AHV host:
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port br0 tag=0"
Exit maintenance mode and repeat the previous steps on every node to configure the
entire cluster.
When you have multiple bridges, putting the bridge name in the network name is
helpful when you view the network in the Prism UI. In this example, Prism shows a
network named "br1_vlan99" to indicate that this network sends VM traffic over VLAN
99 on bridge br1.
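For example, assuming the acli net.create command with the vswitch_name and vlan
parameters, you could create that network as follows:
nutanix@CVM$ acli net.create br1_vlan99 vswitch_name=br1 vlan=99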
To assign a VM NIC to a different VLAN, change the network that the NIC connects to.
If you are running a version of AOS prior to 5.10, turn off the VM first; otherwise,
leave the VM on.
Create a new network with a different VLAN ID that you want to assign to the VM
NIC.
Run the following command on any CVM:
nutanix@cvm$ acli vm.nic_update <VM_name> <NIC MAC address> network=<new network>
VM NIC VLAN Modes
VM NICs on AHV can operate in three modes:
Access
Trunked
Direct
Access mode is the default for VM NICs, where a single VLAN travels to and from the
VM as untagged but is encapsulated with the appropriate VLAN tag on the physical
NIC. VM NICs in trunked mode allow multiple tagged VLANs and a single untagged VLAN
on a single NIC for VLAN-aware VMs. You can only add a NIC in trunked mode using
the aCLI; you can't distinguish between access and trunked NIC modes in the Prism
UI. Direct-mode NICs connect to brX and bypass the bridge chain; don't use them
unless advised by Nutanix Support to do so.
Run the following command on any CVM in the cluster to add a new trunked NIC:
nutanix@CVM~$ acli vm.nic_create <vm name> network=<network name>
trunked_networks=<comma separated list of allowed VLAN IDs> vlan_mode=kTrunked
The native VLAN for the trunked NIC is the VLAN assigned to the network specified
in the network parameter. Additional tagged VLANs are designated by the
trunked_networks parameter.
Run the following command on any CVM in the cluster to change a trunked VM NIC back
to access mode:
nutanix@CVM~$ acli vm.nic_update <vm name> <vm nic mac address> vlan_mode=kAccess
update_vlan_trunk_info=true
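To verify the resulting NIC mode, one option is to inspect the VM's NIC details with
acli (a sketch; output format varies by AOS version):
nutanix@CVM~$ acli vm.get <vm name>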
CVM Network Segmentation
The optional backplane LAN creates a dedicated interface in a separate VLAN on all
CVMs and AHV hosts in the cluster for exchanging storage replication traffic. The
backplane network shares the same physical adapters on bridge br0 by default but
uses a different nonroutable VLAN. From AOS 5.11.1 on, you can create the backplane
network in a new bridge (such as br1). If you place the backplane network on a new
bridge, ensure that this bridge has redundant network adapters with at least 10
Gbps throughput and use a fault-tolerant load balancing algorithm.
Use the backplane network only if you need to separate CVM management traffic (such
as Prism) from storage replication traffic. The section Securing Traffic Through
Network Segmentation in the Nutanix Security Guide includes diagrams and
configuration instructions.
From AOS 5.11 on, you can also separate iSCSI traffic for Nutanix Volumes onto a
dedicated virtual network interface on the CVMs using the Create New Interface
dialog. The new iSCSI virtual network interface can use a shared or dedicated
bridge. Ensure that the selected bridge uses multiple redundant uplinks.
Jumbo Frames
The Nutanix CVM uses the standard Ethernet MTU (maximum transmission unit) of 1,500
bytes for all the network interfaces by default. The standard 1,500-byte MTU
delivers excellent performance and stability. Nutanix doesn't support configuring
the MTU on a CVM's network interfaces to higher values.
You can enable jumbo frames (MTU of 9,000 bytes) on the physical network interfaces
of AHV, ESXi, or Hyper-V hosts and user VMs if the applications on your user VMs
require them. If you choose to use jumbo frames on hypervisor hosts, enable them
end to end in the desired network and consider both the physical and virtual
network infrastructure impacted by the change.
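As an end-to-end check from a Linux user VM, you could send a ping with the
don't-fragment flag and a payload sized for a 9,000-byte MTU (8,972 bytes leaves
room for the 20-byte IP and 8-byte ICMP headers); this is a generic Linux example
rather than a Nutanix-specific tool:
user@uvm$ ping -M do -s 8972 <destination IP>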