Change CVM IP Address

1. The document describes the procedure to change the IP addresses of Controller VMs (CVMs) in a Nutanix cluster. This involves stopping the cluster, running a script to reconfigure the external IP addresses, restarting the CVMs, and verifying the new IP configuration before starting the cluster again. 2. Some checks are recommended after completing the IP address change, like running NCC health checks and updating any remote site configurations. 3. Planning for guest VM downtime is required as the cluster must be stopped during the IP change procedure. Network segmentation settings may also need to be adjusted if the new IP scheme impacts the backplane network.

Uploaded by Vishal Idge

© All Rights Reserved

Changing the CVM VLAN and IP Address

Assigning the Controller VM to a VLAN

By default, the public interface (eth0) of a Controller VM is assigned to VLAN 0. To assign the
Controller VM to a different VLAN, change the VLAN ID of its public interface. After the change, you
can access the public interface from a device that is on the new VLAN.

Note: Perform the following procedure during a scheduled downtime. Before you begin, stop the
cluster. Once the process begins, hosts and CVMs partially lose network access to each other and
VM data or storage containers become unavailable until the process completes.

Note: To avoid losing connectivity to the Controller VM, do not change the VLAN ID when you are
logged on to the Controller VM through its public interface. To change the VLAN ID, log on to the
internal interface that has IP address 192.168.5.254.

Procedure

1. Log on to the AHV host with SSH.

2. Put the AHV host and the Controller VM in maintenance mode.

Perform the following steps to put the node into maintenance mode.

Procedure: Entering a node into maintenance mode


1. Use SSH to log on to a Controller VM in the cluster.

2. Determine the IP address of the node you want to put into maintenance mode.

nutanix@cvm$ acli host.list

Note the value of Hypervisor IP for the node you want to put in maintenance mode.

3. Put the node into maintenance mode.

nutanix@cvm$ acli host.enter_maintenance_mode 10.x.x.x

Replace 10.x.x.x with the Hypervisor IP noted in the previous step.

4. Verify that the host is in maintenance mode.

nutanix@cvm$ acli host.get host-ip

Replace host-ip with the Hypervisor IP of the host.

5. Put the CVM into maintenance mode.

nutanix@cvm$ ncli host edit id=host-ID enable-maintenance-mode=true

Replace host-ID with the ID of the host. This step prevents the CVM services from being affected by any connectivity issues. Determine the ID of the host by running the following command:

nutanix@cvm$ ncli host list
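Scripting the host-ID lookup can reduce copy/paste errors. The sketch below runs against a hard-coded sample of `ncli host list` output, because the real field layout varies by AOS version; both the sample text and the assumption that the usable ID is the part after the `::` separator should be verified against your own cluster before use.

```shell
# Minimal sketch: extract a host ID from sample `ncli host list` output.
# The sample text and the cluster-UUID::host-ID format are assumptions;
# verify them against the actual output on your cluster.
sample_output='    Id                        : 00056abc-1234-5678-9abc-def012345678::1234
    Name                      : NTNX-Block-A
    Hypervisor Address        : 10.1.64.11'

# ncli typically shows the ID as <cluster-uuid>::<host-id>; the part after
# "::" is what `ncli host edit id=...` expects.
host_id=$(printf '%s\n' "$sample_output" | awk -F'::' '/Id .*::/ {print $2; exit}')
echo "$host_id"
```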

3. Check the Controller VM status on the host.

root@host# virsh list

An output similar to the following is displayed. Ensure that the Controller VM is in the running state.

 Id   Name        State
 ------------------------
 1    NTNX-CVM    running

4. Log on to the Controller VM.


root@host# ssh nutanix@192.168.5.254

Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.

5. Assign the public interface of the Controller VM to a VLAN.

nutanix@cvm$ change_cvm_vlan vlan_id

Replace vlan_id with the ID of the VLAN to which you want to assign the Controller VM.

For example, add the Controller VM to VLAN 201.

nutanix@cvm$ change_cvm_vlan 201

6. Confirm VLAN tagging on the Controller VM.

root@host# virsh dumpxml cvm_name

Replace cvm_name with the CVM name or CVM ID to view the VLAN tagging information.
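If you want to confirm the tag non-interactively rather than reading the full XML, the VLAN element can be filtered out with grep. The XML snippet below is a trimmed stand-in for libvirt's `<vlan><tag id=.../></vlan>` structure, not real CVM output; pipe the actual `virsh dumpxml` output through the same filter.

```shell
# Sketch: pull the VLAN tag out of dumpxml-style output. The sample XML is a
# minimal illustration of libvirt's <vlan> element, not real CVM output.
sample_xml='<interface type="bridge">
  <vlan>
    <tag id="201"/>
  </vlan>
</interface>'

vlan_id=$(printf '%s\n' "$sample_xml" | grep -o 'tag id="[0-9]\+"' | grep -o '[0-9]\+')
echo "$vlan_id"
```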

7. Restart the network service.

nutanix@cvm$ sudo service network restart

8. Verify connectivity to the Controller VM's external IP address by performing a ping test from the same subnet. For example, perform a ping from another Controller VM or directly from the host itself.

9. Exit the AHV host and the Controller VM from maintenance mode.

Procedure: Exiting a node from maintenance mode


A. Remove the CVM from maintenance mode.

a. From any other CVM in the cluster, run the following command to exit the CVM from maintenance mode.

nutanix@cvm$ ncli host edit id=host-ID enable-maintenance-mode=false

Replace host-ID with the ID of the host.

Note: The command fails if you run it from the CVM that is in maintenance mode.

b. Verify that all processes on all the CVMs are in the UP state.

nutanix@cvm$ cluster status | grep -v UP
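The `grep -v UP` filter works by inverting the match: only lines that do not contain the literal string UP survive, so in a healthy cluster the command prints little more than the per-CVM header lines. A quick sketch against made-up status text:

```shell
# Sketch: `grep -v UP` prints only the lines that do NOT contain "UP", so any
# service line left after the filter is a service that is not up. The status
# text below is illustrative, not real cluster output.
sample_status='CVM: 10.1.64.60 Up
  Zeus       UP [3704, 3727]
  Stargate   DOWN []
  Curator    UP [5987, 6017]'

not_up=$(printf '%s\n' "$sample_status" | grep -v UP)
printf '%s\n' "$not_up"
```

Note that the header line survives because its `Up` differs in case from `UP`; that is expected and conveniently groups any remaining lines per CVM.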

B. Remove the AHV host from maintenance mode.

a. From any CVM in the cluster, run the following command to exit the AHV host from maintenance mode.

nutanix@cvm$ acli host.exit_maintenance_mode host-ip

Replace host-ip with the new IP address of the host. This command live-migrates all the VMs that were previously running on the host back to the host.

b. Verify that the host has exited maintenance mode.

nutanix@cvm$ acli host.get host-ip

In the output that is displayed, ensure that node_state equals kAcropolisNormal and schedulable equals True.
============================================================================

Changing the Controller VM IP Addresses in your Nutanix Cluster (CLI Script)

Before you begin

1) Guest VM downtime is necessary for this change because the Nutanix cluster must be in a
stopped state. Plan the guest VM downtime accordingly.
2) Verify if your cluster is using the network segmentation feature.
nutanix@cvm$ network_segment_status
3) The network segmentation feature enables the backplane network for the CVMs in your cluster
(eth2 interface). The backplane network is always a non-routable subnet and/or VLAN that is
distinct from the one used by the external interfaces (eth0) of your CVMs and the
management network on your hypervisor.
Typically, you do not need to change the IP addresses of the backplane interface (eth2) if
you are updating the CVM or host IP addresses.

If you have enabled network segmentation on your cluster, check that the VLAN and subnet
in use by the backplane network will still be valid once you move to the new IP scheme. If
not, change the subnet or VLAN. See the Prism Web Console Guide for your version of AOS
for instructions on disabling the network segmentation feature (see the Disabling Network
Segmentation topic) before you change the CVM and host IP addresses. After you have
updated the CVM and host IP addresses by following the steps outlined later in this
document, you can then re-enable network segmentation. Follow the instructions in the
Prism Web Console Guide, which describe how to designate the new VLAN or subnet for the
backplane network.
4) If you have configured remote sites for data protection, either wait until any ongoing
replications are complete or abort them. After you successfully reconfigure the IP addresses,
update the reconfigured IP addresses at the remote sites before you resume the replications.
5) Log on to a Controller VM in the cluster and check that all hosts are part of the metadata
store.
nutanix@cvm$ ncli host ls | grep "Metadata store status"
For every host in the cluster, "Metadata store enabled on the node" is displayed.
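This check can also be scripted by counting matches. The sketch below uses hard-coded sample output and an assumed three-node cluster; on a real system you would feed it the actual `ncli host ls` output and the real host count.

```shell
# Sketch: confirm every host reports "Metadata store enabled". The sample
# text and the host count are assumptions for illustration only.
sample='    Metadata store status     : Metadata store enabled on the node
    Metadata store status     : Metadata store enabled on the node
    Metadata store status     : Metadata store enabled on the node'

host_count=3   # assumed cluster size for this sketch
enabled_count=$(printf '%s\n' "$sample" | grep -c "Metadata store enabled")

if [ "$enabled_count" -eq "$host_count" ]; then
  echo "all $host_count hosts are in the metadata store"
else
  echo "WARNING: only $enabled_count of $host_count hosts enabled"
fi
```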

Procedure to change the IP addresses

1) Log on to any Controller VM in the cluster (vSphere or AHV).

root@host# ssh nutanix@192.168.5.254
2) Stop the Nutanix cluster
nutanix@cvm$ cluster stop
3) Run the external IP address reconfiguration script (external_ip_reconfig) from any one
Controller VM in the cluster.
nutanix@cvm$ external_ip_reconfig
4) Follow the prompts to type the new netmask, gateway, and external IP addresses.
A message like the following is displayed after the reconfiguration is successfully completed:
External IP reconfig finished successfully.
Restart all the CVMs and start the cluster.
5) Restart each Controller VM in the cluster.
nutanix@cvm$ sudo reboot
6) Run the following commands on every CVM in the cluster.
a. Display the CVM IP addresses.
nutanix@cvm$ svmips
b. Display the hypervisor IP addresses.
nutanix@cvm$ hostips
c. From any one CVM in the cluster, verify that the following outputs show the
new IP address scheme and that the Zookeeper IDs are mapped correctly.
Note: Never edit the following files manually. Contact Nutanix Support for
assistance.
nutanix@cvm$ allssh sort -k2 /etc/hosts
nutanix@cvm$ allssh sort -k2 data/zookeeper_monitor/zk_server_config_file
nutanix@cvm$ zeus_config_printer | grep -B 20 myid | egrep -i "myid|external_ip"
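As a cross-check on the verification commands above, the entries can be screened for stragglers from the old subnet. The subnet prefix and the sample hosts-file lines below are assumptions for illustration; as the note above says, never edit the real files by hand.

```shell
# Sketch: flag any entry that does not fall in the new subnet. The prefix and
# sample lines are placeholders; run the same read-only filter over the
# `sort -k2 /etc/hosts` output on a real cluster (never edit the file).
new_prefix='10.2.64.'
sample_hosts='10.2.64.60 ntnx-cvm-1
10.2.64.61 ntnx-cvm-2
10.2.64.62 ntnx-cvm-3'

stale=$(printf '%s\n' "$sample_hosts" | grep -v "^${new_prefix}" || true)
if [ -z "$stale" ]; then
  echo "all entries use the new subnet"
else
  echo "stale entries found:"
  printf '%s\n' "$stale"
fi
```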
7) Start the Nutanix cluster.
nutanix@cvm$ cluster start
If the cluster starts properly, output similar to the following is displayed for each node in the
cluster:

CVM: 10.1.64.60 Up
    Zeus                 UP [3704, 3727, 3728, 3729, 3807, 3821]
    Scavenger            UP [4937, 4960, 4961, 4990]
    SSLTerminator        UP [5034, 5056, 5057, 5139]
    Hyperint             UP [5059, 5082, 5083, 5086, 5099, 5108]
    Medusa               UP [5534, 5559, 5560, 5563, 5752]
    DynamicRingChanger   UP [5852, 5874, 5875, 5954]
    Pithos               UP [5877, 5899, 5900, 5962]
    Stargate             UP [5902, 5927, 5928, 6103, 6108]
    Cerebro              UP [5930, 5952, 5953, 6106]
    Chronos              UP [5960, 6004, 6006, 6075]
    Curator              UP [5987, 6017, 6018, 6261]
    Prism                UP [6020, 6042, 6043, 6111, 6818]
    AlertManager         UP [6070, 6099, 6100, 6296]
    Arithmos             UP [6107, 6175, 6176, 6344]
    SysStatCollector     UP [6196, 6259, 6260, 6497]
    Tunnel               UP [6263, 6312, 6313]
    ClusterHealth        UP [6317, 6342, 6343, 6446, 6468, 6469, 6604, 6605, 6606, 6607]
    Janus                UP [6365, 6444, 6445, 6584]
    NutanixGuestTools    UP [6377, 6403, 6404]

What to do next

• Run the following NCC checks to verify the health of the Zeus configuration. If any of these checks
report a failure or you encounter issues, contact Nutanix Support.

• nutanix@cvm$ ncc health_checks system_checks zkalias_check_plugin

• nutanix@cvm$ ncc health_checks system_checks zkinfo_check_plugin

• If you have configured remote sites for data protection, you must update the new IP addresses on
both the sites by using the Prism Element web console.

• Configure the network settings on the cluster such as DNS, DHCP, NTP, SMTP, and so on.

• Power on the guest VMs and configure the network settings in the new network domain.

• After you verify that the cluster services are up and that there are no alerts informing that the
services are restarting, you can change the IPMI IP addresses at this stage, if necessary. For
instructions about how to change the IPMI addresses, see the Configuring the Remote Console IP
Address (Command Line) topic in the Acropolis Advanced Setup Guide.
