CVM Commands


View Physical NIC Status from the CVM

To verify the names, speed, and connectivity status of all AHV host interfaces, use
the manage_ovs show_uplinks and show_interfaces commands.

nutanix@CVM$ manage_ovs --bridge_name br0 show_uplinks


Uplink ports: br0-up
Uplink ifaces: eth3 eth2
nutanix@CVM$ manage_ovs show_interfaces
name mode link speed
eth0 1000 True 1000
eth1 1000 True 1000
eth2 10000 True 10000
eth3 10000 True 10000
View OVS Bridge and Bond Status from the CVM
Verify bridge and bond details on the AHV host using the show_uplinks command.

nutanix@CVM$ manage_ovs show_uplinks


Bridge: br0
Bond: br0-up
bond_mode: active-backup
interfaces: eth3 eth2
lacp: off
lacp-fallback: true
lacp_speed: off
Bridge: br1
Bond: br1-up
bond_mode: active-backup
interfaces: eth1 eth0
lacp: off
lacp-fallback: true
lacp_speed: off
View VM Network Configuration from a CVM Using the aCLI
Connect to any CVM in the Nutanix cluster to launch the aCLI and view cluster-wide
VM network details.

nutanix@CVM$ acli
<acropolis> net.list
Network name Network UUID Type Identifier
Production ea8468ec-c1ca-4220-bc51-714483c6a266 VLAN 27
vlan.0 a1850d8a-a4e0-4dc9-b247-1849ec97b1ba VLAN 0
<acropolis> net.list_vms vlan.0
VM UUID VM name MAC address
7956152a-ce08-468f-89a7-e377040d5310 VM1 52:54:00:db:2d:11
47c3a7a2-a7be-43e4-8ebf-c52c3b26c738 VM2 52:54:00:be:ad:bc
501188a6-faa7-4be0-9735-0e38a419a115 VM3 52:54:00:0c:15:35
Prism Uplink Configuration and Virtual Switches
From AOS 5.19 on, you can manage interfaces and load balancing for bridges and
bonds from the Prism web interface using virtual switches. This method
automatically changes all hosts in the cluster and performs the appropriate
maintenance mode and VM migration. You don't need to manually enter or exit
maintenance mode, which saves configuration time. Refer to the AHV Host Network
Management documentation for complete instructions and more information on virtual
switches.

Nutanix recommends using the Prism web interface exclusively for all network
management from 5.19 on. For AOS versions from 5.11 through 5.18, use the Prism
uplink configuration to configure networking for systems with a single bridge and
bond. For versions prior to 5.11 or for versions from 5.11 through 5.18 in systems
with multiple bridges and bonds, use the CLI to modify the network configuration
instead.

Don't use any manage_ovs commands to make changes once you've used the Prism
virtual switch or uplink configuration, because this configuration automatically
reverts changes made using manage_ovs.

Production Network Changes


Note: Exercise caution when you make changes that impact the network connectivity
of Nutanix nodes.
When you use the CLI, we strongly recommend that you perform changes on one node
(AHV host and CVM) at a time after you ensure that the cluster can tolerate a
single-node outage. To prevent network and storage disruption, place the AHV host
and CVM of each node in maintenance mode before you make CLI network changes. While
in maintenance mode, the system migrates VMs off the AHV host and directs storage
services to another CVM.

Follow the steps in this section on one node at a time to make CLI network changes
to a Nutanix cluster that is connected to a production network.

Use SSH to connect to the first CVM you want to update. Check its name and IP to
make sure you're connected to the correct CVM. Verify failure tolerance and don't
proceed if the cluster can't tolerate at least one node failure.
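One way to verify failure tolerance is with ncli; a fault tolerance value of at
least 1 for each component indicates that the cluster can tolerate a single-node
outage:
nutanix@CVM$ ncli cluster get-domain-fault-tolerance-status type=node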
Verify that the target AHV host can enter maintenance mode.
nutanix@CVM$ acli host.enter_maintenance_mode_check <host ip>
Put the AHV host in maintenance mode.
nutanix@CVM$ acli host.enter_maintenance_mode <host ip>
Find the <host ID> in the output of the command ncli host list.
nutanix@CVM$ ncli host list
Id : 00058977-c18c-af17-0000-000000006f89::2872
?- "2872" is the host ID
Uuid : ddc9d93b-68e0-4220-85f9-63b73d08f0ff
...
Enable maintenance mode for the CVM on the target AHV host. You may skip this step
if the CVM services aren't running or the cluster state is stopped.
nutanix@CVM$ ncli host edit id=<host ID> enable-maintenance-mode=true
Because network changes can disrupt host connectivity, use IPMI to connect to the
host console and perform the desired network configuration changes. Once you have
changed the configuration, ping the default gateway and another Nutanix node to
verify connectivity.
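For example, from the AHV host console (the addresses shown are placeholders for
your environment):
root@AHV# ping <default gateway IP>
root@AHV# ping <other AHV host IP>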
After all tests have completed successfully, remove the CVM and AHV host from
maintenance mode.
From a different CVM, run the following command to take the affected CVM out of
maintenance mode:
nutanix@cvm$ ncli host edit id=<host ID> enable-maintenance-mode=false
Exit host maintenance mode to restore VM locality, migrating VMs back to their
original AHV host.
nutanix@cvm$ acli host.exit_maintenance_mode <host ip>
Move to the next node in the Nutanix cluster and repeat these steps to enter
maintenance mode, make the desired changes, and exit maintenance mode. Repeat this
process until you have made the changes on all hosts in the cluster.

OVS Command Line Configuration


To view the OVS configuration from the CVM command line, use the AHV-specific
manage_ovs command. To run a single view command on every Nutanix CVM in a cluster,
use the allssh shortcut described in the AHV Command Line appendix.

Note: In a production environment, we only recommend using the allssh shortcut to
view information. Don't use the allssh shortcut to make changes in a production
environment. When you make network changes, only use the allssh shortcut in a
nonproduction or staging environment.
Note: The order in which flags and actions pass to manage_ovs is critical. The flag
must come before the action. Any flag passed after an action isn't parsed.
nutanix@CVM$ manage_ovs --helpshort
USAGE: manage_ovs [flags] <action>
To list all physical interfaces on all nodes, use the show_interfaces command. The
show_uplinks command returns the details of a bonded adapter for a single bridge.

nutanix@CVM$ allssh "manage_ovs show_interfaces"


nutanix@CVM$ allssh "manage_ovs --bridge_name <bridge> show_uplinks"
The update_uplinks command configures a comma-separated list of interfaces into a
single uplink bond in the specified bridge. If the bond doesn't have at least one
interface with a physical connection, the manage_ovs command issues a warning and
exits without configuring the bond. To avoid this error and provision members of
the bond even if they are not connected, use the require_link=false flag.

Note: If you do not enter a bridge_name, the command runs on the default bridge,
br0.
nutanix@CVM$ manage_ovs --bridge_name <bridge> --interfaces <interfaces>
update_uplinks
nutanix@CVM$ manage_ovs --bridge_name <bridge> --interfaces <interfaces>
--require_link=false update_uplinks
The manage_ovs update_uplinks command deletes an existing bond and creates it with
the new parameters when you need to make a change to the bond members or load
balancing algorithm. Unless you specify the correct bond mode parameter, using
manage_ovs to update uplinks deletes the bond, then recreates it with the default
load balancing configuration. If you use active-backup load balancing,
update_uplinks can cause a short network interruption. If you use balance-slb or
balance-tcp (LACP) load balancing and don't specify the correct bond mode
parameter, update_uplinks resets the configuration to active-backup. At this point,
the host stops responding to keepalives and network links that rely on LACP go
down.

Note: Don't use the command allssh manage_ovs update_uplinks in a production
environment. If you haven't set the correct flags and parameters, this command can
cause a cluster outage.
You can use the manage_ovs command for host uplink configuration in various common
scenarios, including initial cluster deployment, cluster expansion, reimaging a
host during boot disk replacement, or general host network troubleshooting.

Note: With AHV clusters that run versions 5.10.x prior to 5.10.4, using manage_ovs
to make configuration changes on an OVS bridge configured with a single uplink may
cause a network loop. If you configured the AHV node with a single interface in the
bridge, upgrade AOS to 5.10.4 or later before you make any changes. If you have a
single interface in the bridge, engage Nutanix Support if you can't upgrade AOS and
must change the bond configuration.
Compute-Only Node Network Configuration
The manage_ovs command runs from a CVM and makes network changes to the local AHV
host where it runs. Because Nutanix AHV compute-only nodes don't run a CVM, you use
a slightly different process to configure their networks.

Follow the steps in the Production Network Changes section to put the target AHV
host in maintenance mode.
Run the manage_ovs command from any other CVM in the cluster with the --host flag.
nutanix@CVM$ manage_ovs --host <compute-only-node-IP> --bridge_name <br-name>
--bond_name <bond-name> --interfaces 10g update_uplinks
Exit maintenance mode after you complete the network changes, then repeat these
steps on the next compute-only node.
Perform the following steps on each Nutanix node in the cluster to achieve the
configuration shown in the previous figure:

In AOS 5.5 or later, manage_ovs handles bridge creation. On each CVM, add bridge
br1. Bridge names must not exceed six characters. We suggest using the name br1.
Note: When you create a bridge, ensure that it's created on every host in the
cluster. Failure to add bridges to all hosts can lead to VM migration errors.
nutanix@CVM$ manage_ovs --bridge_name br1 create_single_bridge
From the CVM, remove eth0 and eth1 from the default bridge br0 on all CVMs. Run the
following show commands to make sure that all interfaces are in a good state before
you perform the update:
nutanix@CVM$ allssh "manage_ovs show_interfaces"
nutanix@CVM$ allssh "manage_ovs --bridge_name br0 show_uplinks"
The output from these show commands should verify that the 10 GbE and 1 GbE
interfaces have connectivity to the upstream switches; look for the columns labeled
link and speed.

The following sample output of the manage_ovs show_interfaces command verifies
connectivity:

nutanix@CVM$ manage_ovs show_interfaces


name mode link speed
eth0 1000 True 1000
eth1 1000 True 1000
eth2 10000 True 10000
eth3 10000 True 10000
The following sample output of the manage_ovs show_uplinks command verifies bond
configuration:

nutanix@CVM$ manage_ovs show_uplinks


Bridge: br0
Bond: br0-up
bond_mode: active-backup
interfaces: eth3 eth2 eth1 eth0
lacp: off
lacp-fallback: true
lacp_speed: off
Once you've placed the AHV host and CVM into maintenance mode and confirmed
connectivity, update the bond to include only 10 GbE interfaces. This command
removes all other interfaces from the bond.

nutanix@CVM$ manage_ovs --bridge_name br0 --bond_name br0-up --interfaces 10g update_uplinks
Add the eth0 and eth1 uplinks to br1 from the CVM using the 1g interface shortcut.

Note: You can use the --require_link=false flag to create the bond even if some or
all of the 1 GbE adapters aren't connected.
nutanix@CVM$ manage_ovs --bridge_name br1 --bond_name br1-up --interfaces 1g
--require_link=false update_uplinks
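Before you exit maintenance mode, you can confirm the new bond membership with the
same show command used earlier:
nutanix@CVM$ manage_ovs --bridge_name br1 show_uplinks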
Exit maintenance mode and repeat the previous update_uplinks steps for every host
in the cluster.

Creating Networks
In AOS versions 5.19 and later, you can create networks on any virtual switch and
bridge using a drop-down in the Prism UI.

Figure. Create Network on Additional Virtual Switch


In AOS versions prior to 5.19, you must use the aCLI to create networks on bridges
other than br0. Once you've created the networks in the aCLI, you can view them by
network name in the Prism GUI, so it's helpful to include the bridge name in the
network name.

nutanix@CVM$ acli net.create <net_name> vswitch_name=<br_name> vlan=<vlan_num>


For example:

nutanix@CVM$ acli net.create br1_production vswitch_name=br1 vlan=1001


nutanix@CVM$ acli net.create br2_production vswitch_name=br2 vlan=2001
Load Balancing in Bond Interfaces
AHV hosts use a bond containing multiple physical interfaces that each connect to a
physical switch. To build a fault-tolerant network connection between the AHV host
and the rest of the network, connect each physical interface in a bond to a
separate physical switch.

A bond distributes traffic between multiple physical interfaces according to the
bond mode.

Table. Load Balancing Use Cases

active-backup: Recommended. Default configuration; transmits all traffic over a
single active adapter. Maximum VM NIC throughput*: 10 Gbps. Maximum host
throughput*: 10 Gbps.
balance-slb: Has caveats for multicast traffic. Increases host bandwidth
utilization beyond a single 10 GbE adapter. Places each VM NIC on a single adapter
at a time. Don't use with link aggregation such as LACP. Maximum VM NIC
throughput*: 10 Gbps. Maximum host throughput*: 20 Gbps.
LACP and balance-tcp: LACP and link aggregation required. Increases host and VM
bandwidth utilization beyond a single 10 GbE adapter by balancing VM NIC TCP and
UDP sessions among adapters. Also used when network switches require LACP
negotiation. Maximum VM NIC throughput*: 20 Gbps. Maximum host throughput*: 20 Gbps.
* Assuming 2x 10 GbE adapters. Simplex speed.
Active-Backup
The recommended and default bond mode is active-backup, where one interface in the
bond is randomly selected at boot to carry traffic and other interfaces in the bond
are used only when the active link fails. Active-backup is the simplest bond mode,
easily allowing connections to multiple upstream switches without additional switch
configuration. The limitation is that traffic from all VMs uses only the single
active link in the bond at one time. All backup links remain unused until the
active link fails. In a system with dual 10 GbE adapters, the maximum throughput of
all VMs running on a Nutanix node is limited to 10 Gbps, or the speed of a single
link.

Figure. Active-Backup Fault Tolerance



In clusters with AOS versions from 5.11 through 5.18 with a single bridge and bond,
use the Uplink Configuration UI instead of the CLI to configure active-backup mode
by selecting Active-Backup (shown in the following figure).

Figure. Uplink Configuration for Active Backup



For all clusters with versions 5.19 and later, even with multiple bridges, use the
Prism Virtual Switch UI instead of the CLI to select active-backup mode
(configuration shown in the following figure).
Figure. Virtual Switch Configuration for Active-Backup

Note: Don't use the following CLI instructions unless you're using an AOS version
prior to 5.11 or a version from 5.11 through 5.18 with multiple bridges and bonds.
Active-backup mode is enabled by default, but you can also configure it with the
following ovs-vsctl command on the CVM:

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0-up bond_mode=active-backup"
View the bond mode with the following CVM command:

nutanix@CVM$ manage_ovs show_uplinks


In the active-backup configuration, this command returns a variation of the
following output, where eth2 and eth3 are marked as interfaces used in the bond
br0-up:

Bridge: br0
Bond: br0-up
bond_mode: active-backup
interfaces: eth3 eth2
lacp: off
lacp-fallback: false
lacp_speed: slow
For more detailed bond information such as the currently active adapter, use the
following ovs-appctl command on the CVM:

nutanix@CVM$ ssh [email protected] "ovs-appctl bond/show"


Balance-SLB
Nutanix doesn't recommend balance-slb because of the multicast traffic caveats
noted in this section. To combine the bandwidth of multiple links, consider using
link aggregation with LACP and balance-tcp instead of balance-slb. Don�t use
balance-slb unless you verify that the multicast limitations described here aren't
present in your network.

Don�t use IGMP snooping on physical switches connected to Nutanix servers that use
balance-slb. Balance-slb forwards inbound multicast traffic on only a single active
adapter and discards multicast traffic from other adapters. Switches with IGMP
snooping may discard traffic to the active adapter and only send it to the backup
adapters. This mismatch leads to unpredictable multicast traffic behavior. Disable
IGMP snooping or configure static IGMP groups for all switch ports connected to
Nutanix servers using balance-slb.

Note: IGMP snooping is often enabled by default on physical switches.


The balance-slb bond mode in OVS takes advantage of all links in a bond and uses
measured traffic load to rebalance VM traffic from highly used to less-used
interfaces. When the configurable bond-rebalance interval expires, OVS uses the
measured load for each interface and the load for each source MAC hash to spread
traffic evenly among links in the bond. Traffic from some source MAC hashes may
move to a less active link to more evenly balance bond member utilization.
Perfectly even balancing may not always be possible, depending on the number of
source MAC hashes and their stream sizes.

Each individual VM NIC uses only a single bond member interface at a time, but a
hashing algorithm distributes multiple VM NICs (multiple source MAC addresses)
across bond member interfaces. As a result, it's possible for a Nutanix AHV node
with two 10 GbE interfaces to use up to 20 Gbps of network throughput, while an
individual VM may have a maximum throughput of 10 Gbps, the speed of a single
physical interface.

Figure. Balance-SLB Load Balancing



In clusters with AOS versions from 5.11 through 5.18 with a single bridge and bond,
use the Uplink Configuration UI instead of the CLI to configure balance-slb mode by
selecting Active-Active with MAC pinning (shown in the following figure).

Figure. Uplink Configuration for Balance-SLB



For all clusters with versions 5.19 and later, even with multiple bridges, use the
Prism Virtual Switch UI instead of the CLI to configure balance-slb mode by
selecting Active-Active with MAC pinning (shown in the following figure).

Figure. Virtual Switch Configuration for Balance-SLB



Note: Don't use the following CLI instructions unless you're using an AOS version
prior to 5.11 or a version from 5.11 through 5.18 with multiple bridges and bonds.
The default rebalance interval is 10 seconds, but Nutanix recommends setting this
interval to 30 seconds to avoid excessive movement of source MAC address hashes
between upstream switches. Nutanix has tested this configuration using two separate
upstream switches with AHV. No additional configuration (such as link aggregation)
is required on the switch side, as long as the upstream switches are interconnected
physically or virtually and both uplinks allow the same VLANs.

Note: Don�t use link aggregation technologies such as LACP with balance-slb. The
balance-slb algorithm assumes that upstream switch links are independent layer 2
interfaces and handles broadcast, unknown, and multicast (BUM) traffic accordingly,
selectively listening for this traffic on only a single active adapter in the bond.
Tip: In a production environment, Nutanix strongly recommends that you make changes
on one node at a time, after you verify that the cluster can tolerate a node
failure. Follow the steps in the Production Network Changes section when you make
changes.
After you enter maintenance mode for the desired CVM and AHV host, configure the
balance-slb algorithm for the bond with the following commands:

nutanix@CVM$ ssh [email protected] "ovs-vsctl set port br0-up bond_mode=balance-slb"


nutanix@CVM$ ssh [email protected] "ovs-vsctl set port br0-up other_config:bond-
rebalance-interval=30000"
Exit maintenance mode for this node and repeat this configuration on all nodes in
the cluster.

Verify the proper bond mode on each CVM with the following command:

nutanix@CVM$ manage_ovs show_uplinks


Bridge: br0
Bond: br0-up
bond_mode: balance-slb
interfaces: eth3 eth2
lacp: off
lacp-fallback: false
lacp_speed: slow
The output shows that the bond_mode selected is balance-slb.
For more detailed information, such as the source MAC hash distribution, use
ovs-appctl.

LACP and Link Aggregation


Link aggregation is required to take full advantage of the bandwidth provided by
multiple links. In OVS, it's accomplished through dynamic link aggregation with
LACP and load balancing using balance-tcp.

Nutanix and OVS require dynamic link aggregation with LACP instead of static link
aggregation on the physical switch. Do not use static link aggregation such as
EtherChannel with AHV.

Nutanix recommends that you enable LACP on the AHV host with fallback to active-
backup, then configure the connected upstream switches. Different switch vendors
may refer to link aggregation as port channel or LAG. Using multiple upstream
switches may require additional configuration, such as a multichassis link
aggregation group (MLAG) or virtual PortChannel (vPC). Configure switches to fall
back to active-backup mode in case LACP negotiation fails (sometimes called
fallback or no suspend-individual). This switch setting assists with node imaging
and initial configuration where LACP may not yet be available on the host.
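
As an illustration only, on a Cisco Nexus switch this fallback behavior corresponds
to disabling suspend-individual on the port channel; the channel number below is
hypothetical, and syntax varies by vendor and software version:
switch(config)# interface port-channel 10
switch(config-if)# no lacp suspend-individual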

With link aggregation negotiated by LACP, multiple links to separate physical
switches appear as a single layer 2 link. A traffic-hashing algorithm such as
balance-tcp can split traffic between multiple links in an active-active fashion.
Because the uplinks appear as a single layer 2 link, the algorithm can balance
traffic among bond members without any regard for switch MAC address tables.
Nutanix recommends using balance-tcp when you have LACP and link aggregation
configured, because each TCP or UDP stream from a single VM can potentially use a
different uplink in this configuration. The balance-tcp algorithm hashes traffic
streams by source IP, destination IP, source port, and destination port. With link
aggregation, LACP, and balance-tcp, a single user VM with multiple TCP or UDP
streams could use up to 20 Gbps of bandwidth in an AHV node with two 10 GbE
adapters.

Figure. LACP and Balance-TCP Load Balancing



For clusters with AOS versions from 5.11 through 5.18 with a single bridge and
bond, use the Uplink Configuration UI to configure balance-tcp with LACP mode by
selecting Active-Active with link aggregation (shown in the following figure).

Figure. Uplink Configuration for Balance-TCP with LACP



For all clusters with versions 5.19 and later, even with multiple bridges, use the
Prism Virtual Switch UI instead of the CLI to configure balance-tcp with LACP mode
by selecting Active-Active (shown in the following figure).

Figure. Virtual Switch Configuration for Balance-TCP with LACP



The Prism GUI Active-Active mode configures all AHV hosts with the fast setting for
LACP speed, causing the AHV host to request LACP control packets at the rate of one
per second from the physical switch. In addition, the Prism GUI configuration sets
LACP fallback to active-backup on all AHV hosts. You can't modify these default
settings after you've configured them from the GUI, even by using the CLI.

Note: Don't use the following CLI instructions unless you're using an AOS version
prior to 5.11 or a version from 5.11 through 5.18 with multiple bridges and bonds.
Tip: In a production environment, Nutanix strongly recommends that you make changes
on one node at a time, after you verify that the cluster can tolerate a node
failure. Follow the steps in the Production Network Changes section when you make
changes.
After you enter maintenance mode for the desired AHV host and CVM, configure link
aggregation with LACP and balance-tcp using the commands in the following
paragraphs.

Note: Upstream physical switch LACP settings such as timers should match the AHV
host settings for configuration consistency.
If upstream LACP negotiation fails, the default AHV host CLI configuration disables
the bond, blocking all traffic. The following command allows fallback to active-
backup bond mode in the AHV host in the event of LACP negotiation failure:

nutanix@CVM$ ssh [email protected] "ovs-vsctl set port br0-up other_config:lacp-


fallback-ab=true"
In the AHV host CLI and on most switches, the default OVS LACP speed configuration
is slow, or 30 seconds. This value, which is independent of the switch timer
setting, determines how frequently the AHV host requests LACPDUs from the connected
physical switch. The fast setting (1 second) requests LACPDUs from the connected
physical switch every second, which helps you detect interface failures more
quickly. Failure to receive three LACPDUs (in other words, after 3 seconds with the
fast setting) shuts down the link in the bond. Nutanix recommends setting lacp-time
to fast on the AHV host and physical switch to decrease link failure detection time
from 90 seconds to 3 seconds.

nutanix@CVM$ ssh [email protected] "ovs-vsctl set port br0-up other_config:lacp-


time=fast"
Next, enable LACP negotiation and set the hash algorithm to balance-tcp:

nutanix@CVM$ ssh [email protected] "ovs-vsctl set port br0-up lacp=active"


nutanix@CVM$ ssh [email protected] "ovs-vsctl set port br0-up bond_mode=balance-tcp"
Enable LACP on the upstream physical switches for this AHV host with matching timer
and load balancing settings. Confirm LACP negotiation using ovs-appctl commands,
looking for the word "negotiated" in the status lines.

nutanix@CVM$ ssh [email protected] "ovs-appctl bond/show br0-up"


nutanix@CVM$ ssh [email protected] "ovs-appctl lacp/show br0-up"
Exit maintenance mode and repeat these steps for each node and every connected
switch port one node at a time until you've configured the entire cluster and all
connected switch ports.

Disable LACP on the Host


To safely disable the LACP configuration so you can use another load balancing
algorithm, perform the following steps.

Tip: In a production environment, Nutanix strongly recommends that you make changes
on one node at a time, after you verify that the cluster can tolerate a node
failure. Follow the steps in the Production Network Changes section when you make
changes.
After you enter maintenance mode on the desired AHV host and CVM, configure a
bonding mode that doesn't require LACP (such as active-backup):

nutanix@CVM$ ssh [email protected] "ovs-vsctl set port br0-up bond_mode=active-


backup"
Turn off LACP on the connected physical switch ports.

Turn off LACP on the hosts:

nutanix@CVM$ ssh [email protected] "ovs-vsctl set port br0-up lacp=off"


Disable LACP fallback on the host:

nutanix@CVM$ ssh root@192.168.5.1 "ovs-vsctl set port br0-up other_config:lacp-fallback-ab=false"
Exit maintenance mode and repeat these steps for each node and connected switch
port one node at a time until you've configured the entire cluster and all
connected switch ports.

Storage Traffic between CVMs


Using active-backup or any other OVS load balancing method, you can't select the
active adapter for the CVM such that it persists between host reboots. When
multiple uplinks from the AHV host connect to multiple switches, ensure that
adequate bandwidth exists between these switches to support Nutanix CVM replication
traffic between nodes. Nutanix recommends redundant 40 Gbps or faster connections
between switches. A leaf-spine configuration or direct interswitch link can satisfy
this recommendation. Review the Physical Networking best practices guide for more
information.

VLANs for AHV Hosts and CVMs


The recommended VLAN configuration is to place the CVM and AHV host in the untagged
VLAN (sometimes called the native VLAN), as shown in the following figure. Neither
the CVM nor the AHV host requires special configuration with this option. Configure
the switch to allow tagged VLANs for user VM networks to the AHV host using
standard 802.1Q VLAN tags. Also, configure the switch to send and receive traffic
for the CVM and AHV host's VLAN as untagged. Choose any VLAN on the switch other
than 1 as the native untagged VLAN on ports facing AHV hosts.
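
As an illustration only, a Cisco-style configuration for a switch port facing an
AHV host might resemble the following; the interface and VLAN numbers are
placeholders for your environment:
interface Ethernet1/1
  switchport mode trunk
  switchport trunk native vlan 2
  switchport trunk allowed vlan 2,27,99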

Note: All CVMs and hypervisor hosts must be on the same subnet and broadcast
domain. No systems other than the CVMs and hypervisor hosts should be on this
network, which should be isolated and protected.
Figure. Default Untagged VLAN for CVM and AHV Host

The setup depicted in the previous figure works well for situations where the
switch administrator can set the CVM and AHV VLAN to untagged. However, if you
don't want to send untagged traffic to the AHV host and CVM or if your security
policy doesn't allow this configuration, you can add a VLAN tag to the host and the
CVM with the procedure shown in the next image.

Figure. Tagged VLAN for CVM and AHV Host



Note: Ensure that IPMI console access is available for recovery before you start
this configuration.
Tip: In a production environment, Nutanix strongly recommends that you make changes
on one node at a time, after you verify that the cluster can tolerate a node
failure. Follow the steps in the Production Network Changes section when you make
changes.
After you enter maintenance mode on the target AHV host and CVM, configure VLAN
tags on the AHV host:
Note: Use port br0 in the following command. Port br0 is the internal port assigned
to AHV on bridge br0. Do not use any value other than br0.
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port br0 tag=10"
nutanix@CVM$ ssh [email protected] "ovs-vsctl list port br0"
Configure VLAN tags for the CVM:
nutanix@CVM$ change_cvm_vlan 10
Exit maintenance mode and repeat these steps on every node to configure the entire
cluster.

Removing VLAN Configuration


Use the following steps to remove VLAN tags and revert to the default untagged VLAN
configuration.

After you enter maintenance mode on the target AHV host and CVM, run the following
command for the CVM:
nutanix@CVM$ change_cvm_vlan 0
Run the following command for the AHV host:
nutanix@CVM$ ssh [email protected] "ovs-vsctl set port br0 tag=0"
Exit maintenance mode and repeat the previous steps on every node to configure the
entire cluster.

VLAN for User VMs


The VLAN for a VM is assigned using the VM network. In Prism, br0 is the default
bridge for new networks. Create networks for user VMs in bridge br0 in either the
Prism UI or the aCLI. In versions prior to AOS 5.19, you must use the aCLI to add
networks for bridges other than br0, such as br1. In AOS versions 5.19 and later,
use the Prism UI to create networks in any bridge and virtual switch from a drop-
down menu.

When you have multiple bridges, putting the bridge name in the network name is
helpful when you view the network in the Prism UI. In this example, Prism shows a
network named "br1_vlan99" to indicate that this network sends VM traffic over VLAN
99 on bridge br1.

nutanix@CVM$ acli net.create br1_vlan99 vswitch_name=br1 vlan=99


Tip: From AOS 5.10 onward, it's no longer necessary or recommended to turn off the
VM to change the network.
Perform the following steps to change the VLAN tag of a VM NIC without deleting and
recreating the vNIC:

If you are running a version of AOS prior to 5.10, turn off the VM. Otherwise,
leave the VM on.
Create a new network with a different VLAN ID that you want to assign to the VM
NIC.
Run the following command on any CVM:
nutanix@cvm$ acli vm.nic_update <VM_name> <NIC MAC address> network=<new network>
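For example, to move VM1 from the earlier net.list_vms output onto the br1_vlan99
network (the VM name, MAC address, and network name are illustrative):
nutanix@cvm$ acli vm.nic_update VM1 52:54:00:db:2d:11 network=br1_vlan99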
VM NIC VLAN Modes
VM NICs on AHV can operate in three modes:

Access
Trunked
Direct
Access mode is the default for VM NICs, where a single VLAN travels to and from the
VM as untagged but is encapsulated with the appropriate VLAN tag on the physical
NIC. VM NICs in trunked mode allow multiple tagged VLANs and a single untagged VLAN
on a single NIC for VLAN-aware VMs. You can only add a NIC in trunked mode using
the aCLI; you can't distinguish between access and trunked NIC modes in the Prism
UI. Direct-mode NICs connect to brX and bypass the bridge chain; don't use them
unless advised by Nutanix Support to do so.

Run the following command on any CVM in the cluster to add a new trunked NIC:
nutanix@CVM~$ acli vm.nic_create <vm name> network=<network name>
trunked_networks=<comma separated list of allowed VLAN IDs> vlan_mode=kTrunked
The native VLAN for the trunked NIC is the VLAN assigned to the network specified
in the network parameter. Additional tagged VLANs are designated by the
trunked_networks parameter.
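
For example, a command along these lines (with an illustrative VM name and the
vlan.0 network from earlier) would create a trunked NIC with tagged VLANs 3, 4, and
5, similar to the sample output shown later in this section:
nutanix@CVM~$ acli vm.nic_create testvm network=vlan.0 trunked_networks=3,4,5 vlan_mode=kTrunked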

Run the following command on any CVM in the cluster to verify the VM NIC mode:

nutanix@CVM~$ acli vm.get <vm name>


Sample output:

nutanix@CVM~$ acli vm.get testvm


testvm {
config {
...
nic_list {
ip_address: "X.X.X.X"
mac_addr: "50:6b:8d:8a:46:f7"
network_name: "network"
network_type: "kNativeNetwork"
network_uuid: "6d8f54bb-4b96-4f3c-a844-63ea477c27e1"
trunked_networks: 3 <--- list of allowed VLANs
trunked_networks: 4
trunked_networks: 5
type: "kNormalNic"
uuid: "9158d7da-8a8a-44c8-a23a-fe88aa5f33b0"
vlan_mode: "kTrunked" <--- mode
}
...
}
...
}
To change the VM NIC's mode from Access to Trunked, use the command acli vm.get <vm
name> to find its MAC address. Using this MAC address, run the following command on
any CVM in the cluster:

nutanix@CVM~$ acli vm.nic_update <vm name> <vm nic mac address>
trunked_networks=<comma separated list of allowed VLAN IDs>
update_vlan_trunk_info=true
Note: The update_vlan_trunk_info=true parameter is mandatory. If you don't specify
this parameter, the command appears to run successfully but the trunked_networks
setting doesn't change.
To change the mode of a VM NIC from trunked to access, find its MAC address in the
output from the acli vm.get <vm name> command and run the following command on any
CVM in the cluster:

nutanix@CVM~$ acli vm.nic_update <vm name> <vm nic mac address> vlan_mode=kAccess
update_vlan_trunk_info=true
CVM Network Segmentation
The optional backplane LAN creates a dedicated interface in a separate VLAN on all
CVMs and AHV hosts in the cluster for exchanging storage replication traffic. The
backplane network shares the same physical adapters on bridge br0 by default but
uses a different nonroutable VLAN. From AOS 5.11.1 on, you can create the backplane
network in a new bridge (such as br1). If you place the backplane network on a new
bridge, ensure that this bridge has redundant network adapters with at least 10
Gbps throughput and use a fault-tolerant load balancing algorithm.

Use the backplane network only if you need to separate CVM management traffic (such
as Prism) from storage replication traffic. The section Securing Traffic Through
Network Segmentation in the Nutanix Security Guide includes diagrams and
configuration instructions.

Figure. Prism UI CVM Network Interfaces



From AOS 5.11 on, you can also separate iSCSI traffic for Nutanix Volumes onto a
dedicated virtual network interface on the CVMs using the Create New Interface
dialog. The new iSCSI virtual network interface can use a shared or dedicated
bridge. Ensure that the selected bridge uses multiple redundant uplinks.

Jumbo Frames
The Nutanix CVM uses the standard Ethernet MTU (maximum transmission unit) of 1,500
bytes for all the network interfaces by default. The standard 1,500-byte MTU
delivers excellent performance and stability. Nutanix doesn't support configuring
the MTU on a CVM's network interfaces to higher values.

You can enable jumbo frames (MTU of 9,000 bytes) on the physical network interfaces
of AHV, ESXi, or Hyper-V hosts and user VMs if the applications on your user VMs
require them. If you choose to use jumbo frames on hypervisor hosts, enable them
end to end in the desired network and consider both the physical and virtual
network infrastructure impacted by the change.
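
One common way to validate end-to-end jumbo frame support from a Linux user VM is a
do-not-fragment ping sized just under the 9,000-byte MTU (8,972 bytes of ICMP
payload after the 20-byte IP and 8-byte ICMP headers); the destination address is a
placeholder:
user@uvm$ ping -M do -s 8972 <destination IP>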
