FusionCompute V100R005C10 Host and Cluster Management Guide 01
V100R005C10
Issue 01
Date 2015-11-11
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the
customer. All or part of the products, services and features described in this document may not be within the
purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information,
and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: http://e.huawei.com
Purpose
This document describes how to create, configure, adjust, and reclaim host and cluster resources on FusionCompute.
Intended Audience
This document is intended for:
l Technical support engineers
l Maintenance engineers
Symbol Conventions
The symbols that may be found in this document are defined as follows:
Symbol Description
Change History
Changes between document issues are cumulative. The latest document issue contains all the
changes made in earlier issues.
Issue 01 (2015-11-11)
This issue is the first official release.
Contents
3 Host Management
3.1 Adding Hosts
3.2 Querying Host Information
3.3 Modifying the Name and Description of a Host
3.4 Modifying the Host name in OS
3.5 Changing the Switching Mode of a Network Port
3.6 Updating the Number of Virtual Ports of the SR-IOV-enabled Network Port
3.7 Setting a Host to the Maintenance Mode
3.8 Changing the Multipathing Type of a Host
3.9 Setting Time Synchronization on a Host
3.10 Performing Forcible Time Synchronization
3.11 Configuring Host BMC Parameters
3.12 Sharing the GPU on a Host
3.13 Configuring the Host GPU Mode
3.14 Enabling or Disabling LLDP for a Host
3.15 Enabling the Host Antivirus Function
3.16 Moving a Host
A Appendix
A.1 FAQ
A.1.1 How to Configure Static Routes for Storage Ports on a Host
A.1.2 How to Identify Server Ports
A.1.3 Failed to Execute the Policy for Scheduling Computing Resources in a Cluster
A.1.4 Background Information
A.1.5 Compatibility
A.2 Parameter Reference
A.2.1 Cluster Parameters
A.2.2 Computing Resource Scheduling Rules Parameters
A.2.3 HA Parameters
A.2.4 VM Startup Policy Parameters
A.2.5 Host Parameters
A.2.6 Time Synchronization Parameters
A.2.7 BMC Parameters
A.2.8 Network Ports Parameters
A.2.9 Storage Port Parameters
A.2.10 System Port Parameters
A.3 Parameters in Advanced Settings for Computing Resource Scheduling
Summary
FusionCompute resources include host and cluster resources, network resources, and storage
resources. Host and cluster management involves the following operations on FusionCompute: creating a cluster or host and adjusting host or cluster configurations.
(Figure: virtualized computing resource pool. Virtual CPUs and virtual memory for a VM are allocated from the CPU resource pool and memory resource pool provided by the virtualization layer of a host (physical server).)
Virtual CPU and memory: When the system creates a VM, the system automatically allocates the required memory space and virtual CPUs from the resource pool to the VM according to the specified VM specifications.
NOTE
The virtual CPU and memory resources used by a VM must be provided by the same host. If this host fails, the system automatically assigns another host to the VM to provide computing resources. Therefore, the resources actually used by a VM cannot exceed the specifications of the hardware resources on the host.
(Figure: resource creation flowchart. Network: add an uplink, (optional) add a VLAN pool, (optional) add a subnet. Storage: add a storage port to the host, associate the storage resource with hosts, scan storage devices, change the storage multipathing mode if a Huawei SAN device or an FC SAN device is used, and add a data store.)
Procedure
Table 1-2 describes the creation procedure.
2 Cluster Management
Scenarios
On FusionCompute, create a user cluster based on the data plan.
Prerequisites
Conditions
You have logged in to FusionCompute.
Procedure
Switch to the page for creating a cluster.
1 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
2 In the navigation tree on the left, right-click the site and choose Create Cluster.
The Create Cluster page is displayed.
Configure the basic information.
3 Configure the name and description of the cluster.
4 Click Next.
The Basic Configuration page is displayed.
5 If you need to use the host memory overcommitment function, set Host memory
overcommitment to Enable.
If the host memory overcommitment function is enabled, the memory provided by the
host for VMs can be greater than the physical memory of the host, and therefore the VM
density supported by the host can be increased.
NOTE
– After host memory overcommitment is enabled, the overcommitment ratio for a VM can be
controlled by specifying the Reserved (MB) parameter in the VM QoS settings area.
n If the configured VM memory is greater than or equal to 16 GB, the VM Reserved
(MB) parameter must be set to its maximum value to ensure the optimal VM
performance. In this case, host memory overcommitment does not take effect for the
VM.
n If the configured VM memory is less than 16 GB, the VM Reserved (MB) value can be set to 70% of the specified VM memory size so that more VMs than the host physically supports can use the host memory resources. If the monitored VM memory usage remains above 40% for several hours, set the VM Reserved (MB) parameter to its maximum value. In this case, host memory overcommitment does not take effect for the VM.
– After host memory overcommitment is enabled, the total available memory capacity equals the total memory capacity in the virtualization domain multiplied by 120%. The total memory capacity in the virtualization domain equals the server memory capacity minus the memory required by virtualization management. You can choose Computing Pool > Site > Cluster > Host > Summary > Monitoring Information to view the total memory capacity in the virtualization domain.
– After host memory overcommitment is enabled, plan VMs on hosts based on the total memory capacity. If VMs that have consumed a large amount of memory exist, some VMs may fail to start even after the used memory has been released. This startup failure occurs because the virtualization layer does not know that the memory has been released.
– After host memory overcommitment is enabled, VMs cannot be hibernated and memory
snapshots cannot be created for them.
6 Configure the VM startup policy.
– Assign automatically: The system starts a VM on a host that has available
resources in the cluster.
– Assign based on load balancing: The system starts VMs on the host with the
largest available CPU capacity.
7 If you need the guest non-uniform memory access (NUMA) function, set GuestNUMA
to Enable.
Guest NUMA presents a topology view of memory and CPU resources on each host to VMs. Based on this topology, the VM user can configure VM CPUs and memory using third-party software (such as Eclipse) so that VMs obtain the memory that is easiest to access, thereby reducing access latency and improving VM performance.
NOTE
Ensure that the following conditions are met for the Guest NUMA function to take effect:
– The number of VM CPUs supported by a host in the cluster must be a multiple of the physical
CPU quantity on the host or a multiple of the thread quantity of a single CPU of the host. (To
check the number of physical CPUs on a host or the number of CPU threads of a host, switch
to the host hardware information page and choose Hardware > CPU, as described in
Querying Host Information.)
The Guest NUMA function may fail on the host that accommodates the VM if the host CPU
quantity changes (due to VM live migration or VM startup on another host) or VM CPU
specifications change.
– The memory overcommitment function or the host CPU resource mode is disabled for the
cluster.
– The NUMA function is enabled for hosts in the cluster. For example, to enable the NUMA
function for an RH2288H V2 server, choose Advanced > Advanced Processor in the advanced
basic input/output system (BIOS) settings of the server, and set NUMA mode to Enabled.
– VMs on the hosts in the cluster are restarted after the Guest NUMA function is enabled for the
cluster.
The HA function can be enabled for VMs in a cluster only when the cluster also has the
HA function enabled.
– If yes, go to 12.
– If no, click Next and go to 14.
12 Select Enable.
13 Configure the HA function.
– HA resource reservation: The system reserves the specified amount of CPU and
memory resources for the cluster. The reserved resources can only be used to
implement the VM HA function.
– Tolerate cluster host failures: The system allows the specified number of hosts to become faulty in the cluster. The system also periodically checks whether sufficient resources are available in the cluster to support service switchover for the VMs on these faulty hosts. If sufficient resources are not available, an alarm is generated to notify users to ensure sufficient resources in the cluster for VM service switchover.
A slot is a basic unit for allocating CPU and memory resources, and the slot value
can be set to Automatic or Custom.
n Automatic: The system sets the slot size based on the maximum amount of
VM CPU and memory resources required by the cluster.
n Custom: Users can set custom VM CPU and memory resource size based on
the service requirements.
14 Click Next.
The Configure Resource page is displayed.
Configure computing resource scheduling policies.
15 Determine whether to enable the computing resource scheduling function.
– If yes, go to 16.
– If no, click Next and go to 27.
16 Select Enable computing resource scheduling.
17 Select an automation level.
Automation levels include:
– Manual: The user is prompted to migrate VMs based on the suggestions provided
on the Computing Resource Scheduling page.
– Automatic: The system automatically migrates VMs to maximize resource
utilization when the resource load is heavy.
– Conservative: The system does not power off hosts by default. It powers on
available hosts in the cluster only when the average host resource usage in the
cluster is higher than the heavy-load threshold.
– Slightly conservative, Medium and Slightly radical: The system powers on
available hosts in the cluster when the average host resource usage in the cluster is
higher than the heavy-load threshold. It powers off some hosts when the average
host resource usage in the cluster is lower than the light-load threshold.
– Radical: The system does not power on available hosts by default. It powers off
some hosts in the cluster only when the average host resource usage in the cluster is
lower than the light-load threshold.
The default value is Medium for all periods.
Table 2-1 lists the light-load and heavy-load threshold value of each power management
threshold.
Table 2-1 Light-load and heavy-load threshold value of each power management
threshold
Threshold Name | Heavy-Load Threshold Value | Light-Load Threshold Value
Radical | - | 63%
– Merom
– Penryn
– Nehalem
– Westmere
– Sandy Bridge
– Ivy Bridge
NOTE
After setting an IMC mode for a cluster, you need to enable the Execute Disable Bit function, which is also known as the No eXecute bit (NX) or eXecute Disable (XD) function, in the BIOS advanced CPU options for existing hosts in the cluster and the hosts to be added to the cluster.
28 Set the IMC mode and description.
29 Click Next.
The Confirm page is displayed.
Finish cluster creation.
30 Click Create.
An information dialog box is displayed.
31 Click OK.
The cluster creation task is complete.
Follow-up Procedure
Add hosts to the cluster.
----End
Prerequisites
Conditions
You have logged in to FusionCompute.
Data
You have obtained the name of the cluster to be queried.
Procedure
----End
Scenarios
On FusionCompute, modify the name and description of a cluster.
Prerequisites
Conditions
You have logged in to FusionCompute.
Procedure
Step 1 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
Step 2 In the navigation tree on the left, select the site and cluster.
The Getting Started page is displayed.
Step 3 In the Basic Information area on the Summary page, click in the Name and
Description rows.
A dialog box is displayed.
Step 4 Enter the new name and description of the cluster.
----End
Scenarios
On FusionCompute, create cluster folders, and move existing clusters into them or create clusters in them, so that clusters running different services can be managed by folder.
Prerequisites
Conditions
You have logged in to FusionCompute.
Procedure
Create a cluster folder.
1 On FusionCompute, click Computing Pool.
2 In the navigation tree on the left, right-click the target cluster and choose Create Cluster
Folder.
3 Enter the cluster folder name and click OK.
An information dialog box is displayed.
4 Click OK.
The cluster folder is created.
Create a cluster in a cluster folder.
5 In the navigation tree, right-click the cluster folder and choose Create Cluster.
6 Set the cluster parameters as instructed.
For details about the parameters, see Creating a Cluster.
Move a cluster.
7 In the navigation tree, right-click the cluster and choose Move.
active/standby mode to ensure that the VMs run on different hosts. This improves system
reliability.
l The time-based scheduling settings allow the system to schedule resources in different time periods based on service requirements. VM migration requires certain system overhead. Therefore, set a conservative policy for a heavy-traffic scenario and a medium or radical policy for a light-traffic scenario to ensure optimal service performance.
If resources in a cluster cannot be scheduled, rectify the problem by following the operations provided in Failed to Execute the Policy for Scheduling Computing Resources in a Cluster.
NOTE
To maximize resource scheduling efficiency, make sure that hosts in the same cluster use the same CPU,
memory, network, and storage configurations, so that VMs can be migrated across the hosts in the
cluster when required.
VM Scheduling Customization
The VM scheduling customization function allows users to set a scheduling automation level
for each VM in a dynamic resource scheduling (DRS) cluster to meet diverse VM scheduling
automation level requirements.
The automation level customized for a VM prevails over the automation level set for the
cluster of the VM. You can set different automation levels for a VM and its cluster. If the
automation level of a VM is set to Disabled, the system will not migrate this VM or provide
any migration suggestions for it. If the VM scheduling customization function is disabled or
the automation level of a VM is set to Default, the VM uses the same automation level as the
cluster.
The system will not migrate VMs that do not meet migration requirements. For details about
the VM migration requirements, see Appendix > FAQ > VM Migration Requirements in
the FusionCompute V100R005C10 Virtual Machine Management Guide.
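The following minimal Python sketch illustrates the precedence rule described above; it is not part of FusionCompute, the function name is hypothetical, and only the level names are taken from the GUI:
def effective_level(cluster_level, vm_level="Default", customization_enabled=True):
    # A VM-level automation level, when set, overrides the cluster-level one.
    if not customization_enabled or vm_level == "Default":
        return cluster_level      # the VM follows the cluster setting
    if vm_level == "Disabled":
        return None               # the VM is never migrated and receives no suggestions
    return vm_level               # the VM-level setting prevails

print(effective_level("Automatic"))               # Automatic
print(effective_level("Automatic", "Manual"))     # Manual
print(effective_level("Automatic", "Disabled"))   # None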
Automatic Power Management
The automatic power management function enables the system to periodically check resource
usage on hosts in a cluster. If resources in the cluster are sufficient but service load on each
host is light, the system migrates VMs on redundant hosts and powers off these hosts to
reduce power consumption. If the in-service hosts are overloaded, the system powers on
offline hosts in the cluster to balance load among hosts.
l When the automatic power management function is enabled, automatic computing
resource scheduling must be enabled so that the system automatically balances VM load
on hosts after the hosts are powered on.
l The time-based power management settings allow the system to manage power in
different time periods based on service requirements. When services are running stably,
set automatic power management to a low level to prevent adverse impact on services.
l With the automatic power management function enabled, the system checks resource usage in the cluster, and powers off some lightly loaded hosts only when the resource utilization stays below the light-load threshold for the specified time period (the default value is 40 minutes). Similarly, the system powers on some hosts only when the resource utilization stays above the heavy-load threshold for the specified time period (the default value is 5 minutes). You can customize the time periods used to evaluate the thresholds for powering hosts on or off.
For example, the system administrator can enable automatic power management for a holiday
or off-duty period that requires less resources than usual. The system will power off some
hosts in this period to reduce energy consumption. When users start to use VMs on working
days, the system will automatically power on hosts to meet increasing resource requirements.
NOTE
A several-minute delay is required before a host is powered on. If you perform an operation that requires
a large amount of resources during this delay period, for example, start VMs, the operation may fail due
to resource insufficiency. In this case, perform the operation again after several minutes.
The default resource scheduling interval for both computing resource scheduling and automatic power management is 10 minutes, and administrators can change it. For details, see Configuring the Resource Scheduling Interval.
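As an illustration only, the following Python sketch models the evaluation logic described above. The threshold values are assumptions; the real values come from the cluster power management settings, and the actual scheduler runs inside FusionCompute:
LIGHT_LOAD = 40    # percent, example light-load threshold
HEAVY_LOAD = 80    # percent, example heavy-load threshold
OFF_WINDOW = 40    # minutes below LIGHT_LOAD before hosts may be powered off (default)
ON_WINDOW = 5      # minutes above HEAVY_LOAD before hosts are powered on (default)

def decide(usage_history, interval=10):
    # usage_history: average cluster resource usage (%) per scheduling interval, newest last
    # interval: resource scheduling interval in minutes (10 by default)
    def sustained(window, predicate):
        needed = max(1, window // interval)          # samples covering the window
        recent = usage_history[-needed:]
        return len(recent) >= needed and all(predicate(u) for u in recent)

    if sustained(ON_WINDOW, lambda u: u > HEAVY_LOAD):
        return "power on offline hosts"
    if sustained(OFF_WINDOW, lambda u: u < LIGHT_LOAD):
        return "migrate VMs and power off lightly loaded hosts"
    return "no action"

print(decide([35, 34, 33, 32]))   # light load sustained for 40 minutes -> power off some hosts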
Balancing Group
Users can add VMs in the current cluster to a balancing group. After resources are scheduled, resources in the cluster are balanced among hosts, and the balancing group ensures that the VMs in the group are evenly distributed across the hosts in the cluster.
Advanced Settings
Before you set any options related to advanced settings, contact Huawei technical support engineers and have them evaluate and confirm the settings to ensure the resource scheduling efficiency and effects.
Prerequisites
Conditions
l Computing, storage, and network resources on hosts in a cluster are shared.
l You have logged in to FusionCompute.
l You have obtained the name of the cluster for which the resource scheduling rules are to
be configured.
l You have configured the baseboard management controller (BMC) for each host before
configuring automated power management. For details, see Configuring Host BMC
Parameters.
Procedure
Configure the basic information.
1 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
2 In the navigation tree on the left, select the site and cluster.
The Getting Started page is displayed.
3 Click Configuration and click Control Cluster Resource on the right pane of the page.
A configuration page is displayed.
4 Click Configure Computing Resource Scheduling.
The Configure Computing Resource Scheduling page is displayed.
5 Select Enable computing resource scheduling.
Table 2-2 Light-load and heavy-load threshold value of each power management
threshold
Threshold Name | Heavy-Load Threshold Value | Light-Load Threshold Value
n By month: The thresholds take effect only on the specified days in a month.
Configure a resource group.
To allow VMs in a specified VM group to be migrated only within a specific host group, the
VM group and the host group must be associated and the VM group members must be able to
run on the specified hosts in the host group. Therefore, configure VM group and host group
settings on the Resource Group page in advance based on the data plan.
7 Check whether the required VM group is displayed on the Resource Group page.
If yes, go to 14.
If no, go to 8.
8 On the Resource Group page, click Add in the VM Group area.
After adding VMs in a cluster to a VM group, you can set a VM migration range for the
VM group by configuring a VMs to hosts rule.
The Add VM Group page is displayed, as shown in Figure 2-2.
11 Select the VMs to be added to the VM group and click to add them to the list
in the right pane.
12 Click OK.
A dialog box is displayed.
13 Click OK.
The VM group is added.
To modify or delete the VM group, locate the row that contains the VM group on the Resource Group page, and click Modify or Delete.
14 Check whether the required host group is displayed on the Resource Group page.
If yes, go to 22.
If no, go to 15.
15 On the Resource Group page, click Add in the Host Group area.
After adding hosts in a cluster to a host group, you can set a VM migration range for
VMs by configuring a VMs to hosts rule.
A dialog box shown in Figure 2-3 is displayed.
18 Select the hosts to be added to the host group and click to add them to the list
in the right pane.
19 Click OK.
A dialog box is displayed.
20 Click OK.
The host group is added.
To modify or delete the host group, locate the row that contains the host group on the Resource Group page, and click Modify or Delete.
Configure scheduling rules.
21 Perform the corresponding operation based on the rule specified.
– To keep the selected VMs running on the same host, go to 32.
– To keep the selected VMs running on different hosts, go to 27.
– To limit the migration range of VMs in a specified VM group to a specific host
group, go to 22.
Keep VMs together: This rule keeps the selected VMs always running on the same host.
One VM can be added to only one keep-VMs-together rule.
Mutually exclusive VMs: This rule keeps the selected VMs running on different hosts.
One VM can be added to only one VM-mutually-exclusive rule.
VMs to hosts: This rule associates a VM group with a host group so that the VMs in the VM group can be configured to run only on the specified hosts in the host group.
22 Click Add on the Rule Group page.
A dialog box shown in Figure 2-4 is displayed.
To modify or delete the rule, locate the row that contains the rule on the Rule Management page, and click Modify or Delete.
Go to 37 after this step.
27 Click Add on the Rule Management page.
A dialog box is displayed.
28 Enter a rule name in Name and set Type to Mutually exclusive VMs.
Mutually exclusive VMs: This rule keeps the selected VMs running on different hosts.
One VM can be added to only one VM-mutually-exclusive rule.
29 In the Available VMs area, select two VMs and click to move them to
Selected VMs.
30 Click OK.
A dialog box is displayed.
31 Click OK.
The Mutually exclusive VMs rule is added.
To modify or delete the rule, locate the row that contains the rule on the Rule Management page, and click Modify or Delete.
After this step, go to 37.
32 Click Add on the Rule Management page.
A dialog box is displayed.
33 Enter a rule name in Name and set Type to Keep VMs together.
Keep VMs together: This rule keeps the selected VMs always running on the same host.
One VM can be added to only one keep-VMs-together rule.
34 In the Available VMs area, select two VMs and click to move them to
Selected VMs.
35 Click OK.
A dialog box is displayed.
36 Click OK.
The Keep VMs together rule is added.
To modify or delete the rule, locate the row that contains the rule on the Rule Management page, and click Modify or Delete.
Configure balancing groups.
Users can add some VMs in the current cluster to a balancing group. The workload on these VMs is then further balanced in addition to the overall cluster workload balancing. For example, if two services are deployed on VMs in the same cluster, the VMs can be added to a balancing group to ensure that the workload of the two services is balanced among these VMs.
37 On the Balancing Group page, click Add.
A dialog box is displayed.
38 Set a name for the group.
39 Select VMs from the Available VMs area and click to move the selected VMs
to the Selected VMs area.
40 Click OK.
Scenarios
On FusionCompute, configure the high availability (HA) policy for a cluster to ensure that
sufficient resources are available in the cluster to fail over VMs on a faulty host if any.
Prerequisites
Conditions
You have logged in to FusionCompute.
Procedure
Step 1 On FusionCompute, click Computing Pool.
Step 2 In the navigation tree on the left, select the site and cluster.
The Getting Started page is displayed.
Step 3 Choose Configuration > Configure HA and click Control Cluster Resource.
A dialog box is displayed, as shown in Figure 2-5.
Step 5 Select HA resource reservation, Use dedicated failover hosts, or Tolerate cluster host
failures.
l HA resource reservation: specifies whether the system reserves preset CPU and
memory resources in the cluster to implement VM HA.
l Use dedicated failover hosts: specifies whether the system reserves some hosts as dedicated failover hosts. VMs that are working properly cannot be started, woken up, or restored using a snapshot on a failover host, or migrated to it.
If an HA-enabled VM fails, the system restarts the VM on a common host or a failover
host based on resource usage on the hosts.
If Migrate all VMs on the dedicated failover host is selected, the system periodically
migrates VMs on a failover host to other common hosts that have sufficient resources to
reserve resources on the failover host.
l Tolerate cluster host failures: specifies the number of tolerated faulty hosts in a cluster. The system periodically checks whether resources in the cluster are sufficient for failover of the VMs on these hosts. If the resources are insufficient, the system generates an alarm to notify the user.
A slot is a logical unit of memory and CPU resources. You can set slot sizes in
Automatic or Custom mode.
– Automatic: The system selects the maximum reserved CPU resources (MHz) and
memory resources (MB/GB) on all running VMs in the cluster as the CPU slot size
and the memory slot size.
– Custom: You need to set a custom CPU slot size and a memory slot size based on
site requirements.
Step 10 To allow all VMs to be migrated from the failover host periodically, select Migrate all VMs
on the dedicated failover host.
After this step, go to Step 12.
Step 11 Set the number of tolerated faulty hosts and the slot size based on the following configuration
principles:
1. Calculate the slot sizes.
A slot is a logical unit of memory and CPU resources.
– If Automatic is selected, the system selects the maximum CPU resources (MHz)
and memory resources (MB) on all running VMs in the cluster as the CPU slot size
and the memory slot size.
– If Custom is selected, you need to set a custom CPU slot size and a memory slot
size based on site requirements. This mode is recommended when the cluster
contains any VMs that have much larger reservations than the others.
To set proper slot sizes, perform the following operations:
Enter the maximum CPU resources (MHz) and memory resources (MB) on all
running VMs in the cluster as the CPU slot size and the memory slot size, click
Calculate, and view how many VMs require multiple slots. Modify the slot sizes
based on the calculated value.
2. Calculate the total capacity of the cluster.
The total capacity indicates the maximum number of slots that can be supported in the
cluster.
Capacity of a host = Rounddown [Min (Host CPU capacity/CPU slot size, Host memory
capacity/Memory slot size)]
The total capacity of a cluster is the sum of the capacity of all hosts in the cluster.
3. Calculate the reserved capacity of the cluster for VM failover.
Reserved capacity of a cluster for VM failover = Total capacity of the cluster – Total capacity of the first N hosts that have the largest capacities (N is the number of tolerated faulty hosts)
The reserved capacity of a cluster for VM failover must be greater than the number of
running VMs in the cluster. You can adjust the number of tolerated faulty hosts based on
this formula.
For example, a cluster has three hosts, H1 (9 GHz CPU, 9 GB memory), H2 (9 GHz CPU, 6 GB memory), and H3 (6 GHz CPU, 6 GB memory), and five running VMs, of which the largest reserved CPU is 2 GHz and the largest reserved memory is 3 GB. In this case, the maximum CPU resources and memory resources are 2 GHz and 3 GB. The
capacity of H1 is 4 (Rounddown {Min [9/2, 9/2]}); the capacity of H2 is 3 (Rounddown {Min
[9/2, 6/2]}); the capacity of H3 is 3 (Rounddown {Min [6/2, 6/2]}). Therefore, the total
capacity of the cluster is 10 (4 + 3+3). If the number of tolerated faulty hosts is set to 1, the
reserved capacity of a cluster for VM failover is 6 (10 – 4). 6 is greater than the number of
running VMs (5). If the number of tolerated faulty hosts is set to 2, the reserved capacity of a
cluster for VM failover is 3 (10 – 4 – 3). 3 is less than the number of running VMs (5).
Therefore, 1 is recommended for the number of tolerated faulty hosts.
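As a cross-check of this example, the following Python sketch reproduces the slot and failover-capacity formulas with the figures used above; it only illustrates the calculation and is not part of FusionCompute:
from math import floor

cpu_slot, mem_slot = 2.0, 3.0    # GHz and GB: the largest VM reservations in the cluster
hosts = {"H1": (9.0, 9.0), "H2": (9.0, 6.0), "H3": (6.0, 6.0)}   # (CPU GHz, memory GB)

def host_capacity(cpu, mem):
    # Capacity of a host = Rounddown[Min(Host CPU capacity/CPU slot size, Host memory capacity/Memory slot size)]
    return floor(min(cpu / cpu_slot, mem / mem_slot))

capacities = sorted((host_capacity(c, m) for c, m in hosts.values()), reverse=True)
total_capacity = sum(capacities)                     # 4 + 3 + 3 = 10 slots

def reserved_failover_capacity(n_faulty_hosts):
    # Total capacity of the cluster minus the capacity of the N largest hosts
    return total_capacity - sum(capacities[:n_faulty_hosts])

running_vms = 5
for n in (1, 2):
    reserved = reserved_failover_capacity(n)
    print(n, reserved, "sufficient" if reserved > running_vms else "insufficient")
# N=1 -> 6 slots, greater than 5 running VMs; N=2 -> 3 slots, less than 5 running VMs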
----End
Scenarios
On FusionCompute, configure the VM processing policy upon data store faults for a cluster.
The policy can be set to:
If the data store type is FusionStorage, the cluster does not support the data store
troubleshooting policy configuration.
l (Default) No processing: If a data store becomes faulty, the system does not handle VMs
that use the data store. The VMs will be automatically started after the fault is rectified.
l Stop VM: If a data store becomes faulty, the system stops all VMs that use the data
store. This policy does not apply to data stores of the FusionStorage type.
Prerequisites
Conditions
You have logged in to FusionCompute.
Procedure
Step 1 On FusionCompute, click Computing Pool.
Step 2 In the navigation tree on the left, select the site and cluster.
The Getting Started page is displayed.
Step 3 Select Configuration > Basic Configuration > Control Cluster Resource.
A dialog box is displayed, as shown in Figure 2-6.
Step 4 Set VM processing policy upon data store faults: to No processing or Stop VM.
l (Default) No processing: If a data store becomes faulty, the system does not handle VMs
that use the data store. The VMs will be automatically started after the fault is rectified.
l Stop VM: If a data store becomes faulty, the system stops all VMs that use the data
store. This policy does not apply to data stores of the FusionStorage type.
----End
Scenarios
On FusionCompute, enable the host memory overcommitment function in the cluster.
Memory overcommitment allows VMs on a host to use more memory than the host physically provides, improving VM density on the host. The enabled Guest NUMA function becomes unavailable if host memory overcommitment is enabled or the host CPU resource mode is set to isolated.
The memory overcommitment function cannot be enabled for a cluster if the cluster contains
hosts that use intelligent network cards (iNICs).
Prerequisites
Conditions
You have logged in to FusionCompute.
Procedure
Step 1 On FusionCompute, click Computing Pool.
Step 2 In the navigation tree on the left, select the site and cluster.
The Getting Started page is displayed.
Step 3 Click the Configuration tab and then click Control Cluster Resource.
A dialog box shown in Figure 2-7 is displayed.
NOTE
l After host memory overcommitment is enabled, the overcommitment ratio for a VM can be
controlled by specifying the Reserved (MB) parameter in the VM QoS settings area.
– If the configured VM memory is greater than or equal to 16 GB, the VM Reserved (MB)
parameter must be set to its maximum value to ensure the optimal VM performance. In this
case, host memory overcommitment does not take effect for the VM.
– If the configured VM memory is less than 16 GB, the VM Reserved (MB) value can be set to 70% of the specified VM memory size so that more VMs than the host physically supports can use the host memory resources. If the monitored VM memory usage remains above 40% for several hours, set the VM Reserved (MB) parameter to its maximum value. In this case, host memory overcommitment does not take effect for the VM.
l After host memory overcommitment is enabled, the total available memory capacity equals the total memory capacity in the virtualization domain multiplied by 120%. The total memory capacity in the virtualization domain equals the server memory capacity minus the memory required by virtualization management. You can choose Computing Pool > Site > Cluster > Host > Summary > Monitoring Information to view the total memory capacity in the virtualization domain.
l After host memory overcommitment is enabled, plan VMs on hosts based on the total memory capacity. If VMs that have consumed a large amount of memory exist, some VMs may fail to start even after the used memory has been released. This startup failure occurs because the virtualization layer does not know that the memory has been released.
l After host memory overcommitment is enabled, VMs cannot be hibernated and memory snapshots
cannot be created for them.
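To make the figures in this note concrete, the following Python sketch works through the calculation with assumed host values; read the real memory sizes from Monitoring Information on the host:
server_memory_gb = 256        # assumed physical memory of the host
virt_mgmt_memory_gb = 12      # assumed memory required by virtualization management

virt_domain_memory = server_memory_gb - virt_mgmt_memory_gb   # total memory in the virtualization domain
available_with_overcommit = virt_domain_memory * 1.2          # total available memory after overcommitment (x 120%)

vm_memory_gb = 8                                              # assumed VM with less than 16 GB memory
reserved_mb = int(vm_memory_gb * 1024 * 0.7)                  # Reserved (MB) set to 70% of the VM memory size

print(virt_domain_memory, available_with_overcommit, reserved_mb)   # 244, 292.8, 5734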
----End
Scenarios
On FusionCompute, configure the VM startup policy, which allows the system to select a
proper host in a cluster to start a VM.
If computing resource scheduling is enabled for the cluster, the system uses the default VM
startup policy Assign based on load balancing.
Prerequisites
Conditions
You have logged in to FusionCompute.
Procedure
Step 1 On FusionCompute, click Computing Pool.
Step 2 In the navigation tree on the left, select the site and cluster.
The Getting Started page is displayed.
Step 3 Click the Configuration tab and then click Control Cluster Resource.
A dialog box shown in Figure 2-8 is displayed.
l Assign automatically: The system starts a VM on a host that has available resources in
the cluster.
l Assign based on load balancing: The system starts a VM on a host that has lower CPU
and memory usage.
----End
Scenarios
On FusionCompute, configure the Guest Non-Uniform Memory Access (NUMA) function for hosts in a cluster. Guest NUMA presents a topology view of memory and CPU resources on each host to VMs. Based on this topology, the VM user can configure VM CPUs and memory using third-party software (such as Eclipse) so that VMs obtain the memory that is easiest to access, thereby reducing access latency and improving VM performance.
The enabled Guest NUMA function becomes unavailable if host memory overcommitment is enabled or the host CPU resource mode is set to isolated.
NOTE
Ensure that the following conditions are met for the Guest NUMA function to take effect:
l The number of VM CPUs supported by a host in the cluster must be a multiple of the physical CPU
quantity on the host or a multiple of the thread quantity of a single CPU of the host. (To check the
number of physical CPUs on a host or the number of CPU threads of a host, switch to the host
hardware information page and choose Hardware > CPU, as described in Querying Host
Information.)
The Guest NUMA function may fail on the host that accommodates the VM if the host CPU
quantity changes (due to VM live migration or VM startup on another host) or VM CPU
specifications change.
l The memory overcommitment function or the host CPU resource mode is disabled for the cluster.
l The NUMA function is enabled for hosts in the cluster. For example, to enable the NUMA function
for an RH2288H V2 server, choose Advanced > Advanced Processor in the advanced basic input/
output system (BIOS) settings of the server, and set NUMA mode to Enabled.
l VMs on the hosts in the cluster are restarted after the Guest NUMA function is enabled for the
cluster.
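A minimal Python sketch of the vCPU condition in this note is shown below; the host figures are assumptions, and the real values are shown under Hardware > CPU on the host page:
host_physical_cpus = 2     # assumed number of physical CPUs (sockets) on the host
threads_per_cpu = 16       # assumed number of threads of a single CPU

def guest_numa_vcpu_ok(vm_vcpus):
    # The VM CPU count must be a multiple of the physical CPU quantity
    # or a multiple of the thread quantity of a single CPU.
    return vm_vcpus % host_physical_cpus == 0 or vm_vcpus % threads_per_cpu == 0

for vcpus in (4, 6, 16, 7):
    print(vcpus, guest_numa_vcpu_ok(vcpus))   # 4, 6, and 16 satisfy the condition; 7 does not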
Prerequisites
Conditions
l You have logged in to FusionCompute.
l You have enabled the NUMA function in the BIOS of the hosts in the cluster.
Procedure
Step 1 On FusionCompute, click Computing Pool.
Step 2 In the navigation tree on the left, select the site and cluster.
The Getting Started page is displayed.
Step 3 Click Advanced Settings.
A dialog box shown in Figure 2-9 is displayed.
Step 8 Restart VMs in the cluster to make the configurations take effect.
For details, see Restarting a VM in VM Operation Management in the FusionCompute
V100R005C10 Virtual Machine Management Guide.
----End
Scenarios
On FusionCompute, configure the incompatible migration cluster (IMC) function for a cluster
to enable VM migration across hosts that use CPUs of different performance baselines in the
cluster.
Currently, the IMC function applies to the CPUs from Intel only.
The IMC function allows the hosts in a cluster to present the same CPU function set to VMs
running on them. This function ensures successful VM migration across these hosts even
when the hosts physically use CPUs of different performance baselines.
Ensure that the following conditions are met for the IMC function if the cluster contains hosts
and VMs:
l The CPU generations of the hosts in the cluster are the same as or later than the target
IMC mode.
l The CPU generations of the running or hibernated VMs in the cluster are the same as or
earlier than the target IMC mode. If any VM in the cluster does not meet this
requirement, the VM must be stopped or migrated to another cluster.
NOTE
After setting an IMC mode for a cluster, you need to enable the Execute Disable Bit function, which is also known as the No eXecute bit (NX) or eXecute Disable (XD) function, in the BIOS advanced CPU options for existing hosts in the cluster and the hosts to be added to the cluster.
Prerequisites
Conditions
You have logged in to FusionCompute.
Procedure
Check the IMC mode of each host in the cluster.
3 On the Host page, click on the right part of the host list, and select Select Column.
A dialog box is displayed.
4 Select IMC and Latest Supported IMC CPU Generation and click OK.
The host list displays the IMC and Latest Supported IMC CPU Generation columns.
IMC specifies the IMC mode currently used by a host, and Latest Supported IMC CPU Generation specifies the latest IMC CPU generation supported by a host.
5 Identify the lowest IMC mode in the Latest Supported IMC CPU Generation column.
FusionCompute supports the following Intel CPU generations, which are listed in ascending order of performance:
– Merom
– Penryn
– Nehalem
– Westmere
– Sandy Bridge
– Ivy Bridge
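The following minimal Python sketch shows how the lowest mode in the Latest Supported IMC CPU Generation column can be identified; the host names and values are assumptions:
GENERATIONS = ["Merom", "Penryn", "Nehalem", "Westmere", "Sandy Bridge", "Ivy Bridge"]

latest_supported = {              # assumed values read from the host list
    "host01": "Ivy Bridge",
    "host02": "Sandy Bridge",
    "host03": "Westmere",
}

# The cluster IMC mode must not be newer than the oldest generation supported by any host.
cluster_imc_mode = min(latest_supported.values(), key=GENERATIONS.index)
print(cluster_imc_mode)           # Westmere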
Set the IMC mode for the cluster.
6 Click Advanced Settings on the Getting Started tab.
A dialog box shown in Figure 2-10 is displayed.
7 Click Enable IMC and set IMC Mode to the lowest IMC mode identified in the Latest
Supported IMC CPU Generation column.
The IMC mode of a cluster must be set to the CPU generation that exposes the minimum
function set in the cluster or an earlier version of the CPU generation.
8 Click OK.
A dialog box is displayed.
9 Click OK.
The IMC mode is configured for the cluster.
After the configuration, you can check the IMC settings of the cluster in the IMC
column of the host on the Host page.
Follow-up Procedure
After setting an IMC mode for a cluster, you need to enable the Execute Disable Bit function, which is also known as the No eXecute bit (NX) or eXecute Disable (XD) function, in the BIOS advanced CPU options for existing hosts in the cluster and the hosts to be added to the cluster.
After setting an IMC mode for a cluster, you can migrate VMs across hosts that use CPUs of
different performance baselines. For details, see Migrating a VM in VM Operation
Management in the FusionCompute V100R005C10 Virtual Machine Management Guide.
----End
Scenarios
On FusionCompute, remove an unwanted user cluster.
Prerequisites
Conditions
l You have logged in to FusionCompute.
l The hosts in the cluster have been removed.
Data
The name of the cluster to be removed is available.
Procedure
Step 1 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
Step 2 In the navigation tree on the left, select the site and cluster.
----End
3 Host Management
When you add hosts to a cluster, the system automatically binds the management ports in
active/standby mode. Therefore, you need to delete the Eth-Trunk configuration on the access
switch. Otherwise, network communication is interrupted.
After hosts are added to a cluster, add other planned management network ports to the bound
network port to improve reliability of the management plane network.
Prerequisites
Conditions
l You have logged in to the FusionCompute.
l The required operating system (OS) is installed on the server carrying the host. For
details, see the FusionCompute V100R005C10 Software Installation Guide.
l The Virtualization Resource Management (VRM) management plane has been initialized after FusionCompute installation is complete, and the communication between the host and the VRM management plane is normal. Only one management plane can be configured for the VRM node.
l The CPU generation of the host is the same as or later than the incompatible migration
cluster (IMC) mode configured for the cluster if the target cluster has IMC mode
enabled. The Execute Disable Bit function, which is also known as the No eXecute bit (NX) or eXecute Disable (XD) function, has been enabled in the advanced CPU options for the BIOS of the host.
Procedure
Determine the method of adding hosts.
NOTICE
The Connect to OpenStack option can be configured only when no host is added. If
hosts have been added to the FusionCompute system, you are not allowed to change the
Connect to OpenStack option.
Add a host.
4 Set the following parameters for the host:
– Name
– IP address
– BMC IP
– Username
– Password
– Use the site time sync policy: synchronizes the configured cluster system time to
the host.
5 Click OK.
An information dialog box is displayed.
6 Click OK.
The host is added to the cluster.
After this step is complete, no further action is required.
Obtain the template for adding hosts.
Prerequisites
Conditions
You have logged in to FusionCompute.
Data
You have obtained the name of the host to be queried.
Procedure
3 In the navigation tree on the left, select the site, the cluster, and the host.
The Getting Started page is displayed.
4 Query the general information about the host on the Summary page.
No further action is required.
Query the information about VMs on the host.
5 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
6 In the navigation tree on the left, select the site, the cluster, and the host.
The Getting Started page is displayed.
7 Query the information about VMs on the host on the VM page.
No further action is required.
Query the information about devices used by the host.
8 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
9 In the navigation tree on the left, select the site, the cluster, and the host.
The Getting Started page is displayed.
10 Query the information about devices used by the host on the Device page.
No further action is required.
Query the information about data stores associated with the host.
11 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
12 In the navigation tree on the left, select the site, the cluster, and the host.
The Getting Started page is displayed.
13 Query the information about data stores associated with the host on the Storage page.
No further action is required.
Query the information about hardware used by the host.
14 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
15 In the navigation tree on the left, select the site, the cluster, and the host.
The Getting Started page is displayed.
16 Query the information about hardware used by the host on the Hardware page.
No further action is required.
----End
Scenarios
On FusionCompute, modify the name and description of a host.
Prerequisites
Conditions
You have logged in to FusionCompute.
Procedure
Search for the target host.
1 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
2 In the navigation tree on the left, select the target site.
The Getting Started page is displayed.
3 On the Host page, enter the search criteria and click Search.
The host that meets the search criteria is displayed.
Search criteria can be the host name.
Modify the host information.
4 Click the name of the target host in the host list.
5 In the Basic Information area on the Summary page, click in the Name and
Description rows.
A dialog box is displayed.
6 Enter the new name and description of the host.
7 Click OK.
The host information is modified.
----End
Scenarios
On FusionCompute, modify the Host name in OS.
NOTE
Historical logs of a host cannot be identified or backed up after the host name recorded in the OS is
changed. Therefore, back up the logs before changing the name or delete them if they are no longer
used.
Prerequisites
Conditions
You have logged in to FusionCompute.
Procedure
Search for the target host.
1 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
5 In the Basic Information area on the Summary page, click in the Host name in
OS row.
A dialog box is displayed.
6 Enter the new host name in OS.
7 Click OK.
A dialog box is displayed.
8 Click OK.
The host name in OS is modified.
----End
Scenarios
On FusionCompute, change the switching mode of a network port between Standard and
SR-IOV-enabled when Intel 82599 network interface cards (NICs) are used. If the MZ510 or
MZ512 NICs are used, change the switching mode of a network port by following the steps
provided in Manually Changing the Switching Mode of a Network Port.
Prerequisites
Conditions
l The VT-x/VT-d support and the SR-IOV function have been enabled for the host
accommodating the network port for which the switching mode is to be changed. For
details about how to enable the support and the function, see Enabling the VT-x/VT-d
Support and SR-IOV Function.
l The network port for which the switching mode is to be changed is not in use.
l You have obtained the name of the network port for which the switching mode is to be
changed and the name of the host accommodating the network port.
l You have logged in to FusionCompute.
Procedure
2 In the navigation tree on the left, select the site, cluster, and host.
The Getting Started page is displayed.
3 Click Device.
4 Select PCI Device.
5 Click Change Switching Mode.
The Change Switching Mode dialog box is displayed.
6 Select the network port for which the switching mode is to be changed.
7 Click OK.
A dialog box is displayed.
8 Click OK.
The task is complete after the host restarts.
NOTE
----End
Additional Information
Related Tasks
If a VM uses the distributed virtual switch (DVS) in SR-IOV-enabled mode, perform the
following steps to install the required NIC driver after the VM starts:
The following operations use the Windows operating system (OS) as an example:
1. Download the required NIC driver.
– If the Intel 82599 NIC is used, visit https://downloadcenter.intel.com/
Default.aspx?lang=zho to download the NIC driver.
– If the MZ510 or MZ512 NIC is used, visit http://www.emulex.com/downloads/
emulex.html to download the NIC driver.
2. On the Windows OS, double-click the NIC driver and install it as prompted.
Scenarios
On FusionCompute, update the number of virtual ports of a network port for which the
switching mode is set to SR-IOV-enabled when Intel 82599 network interface cards (NICs)
are used.
Prerequisites
Conditions
l The network port for which the number of virtual ports needs to be updated is not in use.
l You have obtained the name of the network port for which the number of virtual ports
needs to be updated and the name of the host accommodating the network port.
l You have logged in to FusionCompute.
Procedure
----End
Scenarios
On FusionCompute, set a host to maintenance mode. The host in maintenance mode is
isolated from the entire system, so that some maintenance operations, such as parts
replacement, power-off, or restart, can be performed on the host without affecting system
services.
NOTE
After a host enters maintenance mode, stop or migrate all VMs on the host, and then perform
maintenance operations.
When a host works in the maintenance mode, the VMs on the host do not participate in computing resource scheduling in the cluster to which the VMs belong.
Prerequisites
Conditions
You have logged in to FusionCompute.
Procedure
Search for the target host.
1 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
2 In the navigation tree on the left, select the target site.
The Getting Started page is displayed.
3 On the Host page, enter the search criteria and click Search.
The host that meets the search criteria is displayed.
Search criteria can be the host name.
Set the host to the maintenance mode.
4 Click the name of the target host in the host list.
5 In the upper part of the page, click Operation and select Enter Maintenance.
A confirmation dialog box is displayed.
6 Select Migrate All VMs if VMs running on the host need to be migrated to other hosts.
7 Click OK.
An information dialog box is displayed.
8 Click OK.
The host is set to the maintenance mode.
To have the host exit the maintenance mode, click Operation in the upper part of the
page and select Exit Maintenance.
----End
Scenarios
On FusionCompute, change the multipathing type of a host that uses Huawei storage area
network (SAN) devices. The default multipathing type is Universal, which indicates that non-
Huawei storage devices are used.
Prerequisites
Conditions
You have logged in to FusionCompute.
Procedure
Search for the target host.
1 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
2 In the navigation tree on the left, select the target site.
The Getting Started page is displayed.
3 On the Host page, enter the search criteria and click Search.
The host that meets the search criteria is displayed.
Search criteria can be the host name.
Migrate VMs on the host.
4 Click the name of the target host in the host list.
The Getting Started page is displayed.
5 On the VM page, migrate the VMs on the target host to other hosts.
For details, see VM Operation Management > Migrating a VM in the FusionCompute
V100R005C10 Virtual Machine Management Guide.
Change the multipathing type of the host.
6 In the navigation tree on the left, click the name of the target host.
7 Click Configure Storage Multipathing.
A dialog box is displayed, as shown in Figure 3-3.
NOTE
This operation is not allowed when VMs exist on the host. Try again after the VMs are migrated or deleted.
10 Click OK.
The multipathing type of the host is changed.
----End
Scenarios
On FusionCompute, configure an external clock source and the time synchronization
function for a host so that the host provides precise time by periodically synchronizing time
from the external clock source.
NOTE
If a host is set to the internal clock source, this configuration causes the service process on the host to restart. If more than 40 VMs run on the host, the service process restart takes a long time, causing VMs running on the host to trigger High Availability (HA) tasks. However, the VMs will not be migrated to another host. After the service process restarts, the HA tasks are automatically canceled.
Prerequisites
Conditions
l You have logged in to FusionCompute.
l All the Network Time Protocol (NTP) servers use the same upper-layer clock source so
that the system times of the NTP servers are the same if multiple NTP servers are to be
deployed.
l The domain name server (DNS) is configured if time synchronization is configured using
a domain name. For details, see System Configuration > Configuring the DNS Server
in the FusionCompute V100R005C10 Configuration Management Guide.
Procedure
Search for the target host.
1 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
2 In the navigation tree on the left, select the target site.
The Getting Started page is displayed.
3 On the Host page, enter the search criteria and click Search.
The host that meets the search criteria is displayed.
NOTICE
If multiple NTP servers are to be deployed, ensure that all the NTP servers use the same
upper-layer clock source so that the system times of the NTP servers are the same.
Configure the host time synchronization parameters, including NTP server and
Synchronization interval(s).
Note the configuration requirements for the following parameters:
– NTP server: specifies the IP address or domain name of an NTP server. Enter one
to three IP addresses of the NTP servers. If time synchronization is configured
using a domain name, the DNS server must be configured in advance.
– Synchronization interval(s): specifies the interval, in seconds, at which the host synchronizes time from the NTP server.
7 Click OK.
A confirmation dialog box is displayed.
8 Click OK.
An information dialog box is displayed.
9 Click OK.
The time synchronization function is configured on the host.
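To verify the configuration, you can optionally log in to the host and query the NTP peer status. The following check is only a sketch and assumes that the standard ntpq utility is available on the host:
ntpq -p
In the command output, an asterisk (*) before a server entry indicates that the host has selected that NTP server as its current time source.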
----End
NOTE
If a host is set to the internal clock source, the configuration causes the restart of the service process on
the host. If more than 40 VMs are running on the host, the service process restart takes a long time,
causing the VMs running on the host to trigger High Availability (HA) tasks. However, the VMs will
not be migrated to another host. After the service process restarts, the HA tasks are automatically
canceled.
Prerequisites
Conditions
You have logged in to FusionCompute.
Data
You have obtained the name of the host on which the system time needs to be synchronized.
Procedure
Search for the target host.
1 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
2 In the navigation tree on the left, select the target site.
The Getting Started page is displayed.
3 On the Host page, enter the search criteria and click Search.
The host that meets the search criteria is displayed.
Search criteria can be Name.
Set time synchronization on the host.
4 Click the name of the target host in the host list.
5 Click Forcibly Synchronize Time.
A dialog box is displayed.
6 Click OK.
An information dialog box is displayed.
7 Click OK.
----End
Scenarios
On the FusionCompute, configure the Baseboard Management Controller (BMC) IP address,
username, and password for a host so that the system can power the host on or off when
scheduling resources.
Prerequisites
Conditions
You have logged in to the FusionCompute.
Procedure
Step 1 On the FusionCompute, choose Computing Pool.
Step 2 In the navigation tree on the left, choose Site name > Cluster name > Host name.
The Getting Started page is displayed.
Step 4 Enter the BMC parameters, including the IP address, username, and password for logging into
the BMC.
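To verify that the BMC parameters are correct and that the BMC is reachable, you can optionally test it from a maintenance terminal. The following check is only a sketch and assumes that the ipmitool utility is installed on the terminal and that the terminal can reach the BMC plane; the IP address, username, and password are placeholders:
ipmitool -I lanplus -H <BMC IP address> -U <username> -P <password> chassis status
If the command returns the chassis power state, the BMC parameters are correct.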
----End
Scenarios
On FusionCompute, host memory sharing is employed to share the Graphics Processing Unit
(GPU) of a host so that the GPU is accessible to multiple VMs.
GPU sharing is supported only by the FusionCloud Desktop Solution.
NOTE
The antivirus virtualization function and GPU sharing function cannot be enabled on a host at the same
time.
Prerequisites
Conditions
You have logged in to FusionCompute.
Procedure
Search for the target host.
1 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
2 In the navigation tree on the left, select the target site.
The Getting Started page is displayed.
3 On the Host page, enter the search criteria and click Search.
The host that meets the search criteria is displayed.
Search criteria can be Name.
Share the host GPU.
4 Click the name of the target host in the host list.
5 On the Getting Started page, click Configure GPU Sharing.
A dialog box is displayed.
6 Select Enable GPU sharing.
7 From the drop-down list, select the number of VMs that can access the shared host GPU.
– Number of GPU clients: specifies the number of the VMs that can access the shared
host GPU.
– Shared memory for data sending by GPU clients (MB): specifies the memory size
of the GPU client VM that sends data to the GPU service VM.
– Shared memory for data receiving by GPU clients (MB): specifies the memory size
of the GPU service VM that sends data to the GPU client VM.
NOTE
Note the following constraint when configuring the number of VMs: (Shared memory for data sending by
GPU clients + Shared memory for data receiving by GPU clients) x Number of GPU clients ≤
1552 MB.
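For example (illustrative values only), if Shared memory for data sending by GPU clients is 64 MB and Shared memory for data receiving by GPU clients is 32 MB, the number of GPU clients cannot exceed 1552/(64 + 32) = 16.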
8 Click OK.
An information dialog box is displayed.
9 Click OK.
The host GPU is shared.
The sharing settings take effect only after the host is restarted on the
FusionCompute web client.
To modify the sharing settings, repeat 5 to 9. The modification takes effect after the host is restarted.
Ensure that the number of VMs that can access the shared host GPU is greater than
or equal to the number of available GPU clients.
----End
Scenarios
On FusionCompute, configure the graphics processing unit (GPU) mode for a GPU on a host.
The modes include passthrough mode and virtualization mode.
A host GPU in passthrough mode can be attached to only one VM. A host GPU in
virtualization mode can be shared by multiple VMs if the GPU supports virtualization.
Prerequisites
Conditions
l You have logged in to FusionCompute.
l The host GPU has not been attached to any VM.
l The GPU model is Nvidia Quadro 2000, Nvidia Quadro 4000, Nvidia GRID K1, Nvidia
GRID K2, Nvidia Quadro K2000, or Nvidia Quadro K4000 if you determine to set the
GPU to passthrough mode.
l The GPU model is NVIDIA GRID K1 or NVIDIA GRID K2 if you determine to set the
GPU to virtualization mode.
l You have obtained the host GPU driver if you determine to set the GPU to virtualization
mode. To obtain the GPU driver, contact Technical Support.
Procedure
Search for the target host.
1 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
2 In the navigation tree on the left, choose the site containing the target host.
The Getting Started page is displayed.
3 On the Host page, enter the search criteria and click Search.
The hosts that meet the search criteria are displayed.
Search criteria can be the host name.
Install the host GPU driver.
4 Check the GPU mode to be set.
– If it is the passthrough mode, go to 11.
– If it is the virtualization mode, go to 5.
5 Obtain the IP address of the target host in the list.
6 Decompress the obtained GPU driver package NVIDIA-vgx-uvp-xxx-xxx.x86_64.rar.
NVIDIA-vgx-uvp-xxx-xxx.x86_64.rpm is obtained.
7 Use WinSCP to upload NVIDIA-vgx-uvp-xxx-xxx.x86_64.rpm to the /home directory
on the host.
8 Use PuTTY to log in to the host as user gandalf and run the su - root command to
switch to user root.
9 Run the cd /home command to switch to the /home directory.
10 Run the following command to install the host GPU driver:
rpm -ivh NVIDIA-vgx-uvp-xxx-xxx.x86_64.rpm
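After the installation, you can optionally verify that the driver package is installed and that the host detects the GPU. The following commands are only a sketch and assume that the standard rpm and lspci tools are available on the host:
rpm -qa | grep -i NVIDIA
lspci | grep -i nvidia
If the driver package name and the GPU device are listed, the driver installation is complete.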
Configure the GPU mode.
11 Click the name of the target host in the host list on FusionCompute, click the Device
tab, and click GPU.
12 Locate the row containing the target GPU, and click Modify in the Operation column.
A dialog box is displayed, as shown in Figure 3-6.
– If the host GPU does not support virtualization, you can only select
Passthrough(1:1).
– If the host GPU supports virtualization (only NVIDIA GRID K1 and NVIDIA
GRID K2 support GPU virtualization), you can set any of the modes listed in Table
3-1 besides Passthrough(1:1).
– If the performance of a virtual GPU (vGPU) provided by a host GPU does not meet
service requirements, set the GPU mode to Passthrough(1:1).
14 Click OK.
An information dialog box is displayed.
15 Click OK.
The host GPU mode is set.
To change the GPU mode of other GPUs on the host, repeat 12 to 15.
----End
Scenarios
FusionCompute supports the host Link Layer Discovery Protocol (LLDP) service, which
enables a host to report the host name, port names, MAC address, and IP address to the
switch. This service is disabled by default. You can use the FusionCompute web client to
enable or disable the LLDP service for a host.
Prerequisites
Conditions
l You have obtained the target host name and the port name.
NOTE
The LLDP service cannot be enabled for a host that uses intelligent network interface cards
(iNICs).
l You have logged in to FusionCompute.
Procedure
2 In the navigation tree on the left, select the site, cluster, and host.
The Getting Started page is displayed.
3 Click Device.
4 Select PCI Device.
5 Determine whether to enable or disable LLDP for the host.
– To enable LLDP, go to 6.
– To disable LLDP, go to 9.
6 Locate the row that contains the target host port and click Enable LLDP.
A dialog box is displayed, asking you whether to enable the LLDP service.
7 Click OK.
A dialog box is displayed.
8 Click OK.
No further action is required.
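To verify that the switch receives the LLDP information reported by the host, you can optionally check the LLDP neighbors on the upstream switch. The following command is only a sketch that assumes the upstream device is a Huawei S-series switch; use the equivalent command for other switch models:
display lldp neighbor brief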
9 Locate the row that contains the target host port and click Disable LLDP.
A dialog box is displayed, asking you whether to disable the LLDP service.
10 Click OK.
A dialog box is displayed.
11 Click OK.
No further action is required.
----End
Scenarios
On FusionCompute, enable the antivirus function for a host to provide VMs on it with virus
scan, removal, and real-time monitoring services. For details about the antivirus function and
its deployment plans, see VM Antivirus Management in the FusionCompute V100R005C10
Virtual Machine Management Guide.
NOTE
The antivirus virtualization function and GPU sharing function cannot be enabled on a host at the same
time.
Prerequisites
Conditions
You have logged in to FusionCompute.
Procedure
Step 1 On FusionCompute, click Computing Pool.
Step 2 In the navigation tree on the left, select the site, cluster, and host.
The Getting Started page is displayed.
Step 5 Enter the number of the required secure user VMs on the host.
NOTE
Each host supports a maximum of 50 secure user VMs. However, the number of supported VMs on a
host varies depending on the antivirus software. If the number exceeds the upper limit of the software,
the antivirus functions become invalid.
----End
Prerequisites
Conditions
l You have logged in to the FusionCompute.
l You have stopped or migrated the VMs on the host.
l You have migrated or deleted the VMs using the local storage of the host.
l The CPU generation of the host to be moved to the IMC-enabled cluster is the same as or
later than the IMC mode configured for the cluster. The Execute Disable Bit function, which
is also known as the No eXecute (NX) bit or eXecute Disable (XD) function, has been
enabled in the advanced CPU options for the BIOS of the host.
Procedure
Step 1 On the FusionCompute, choose Computing Pool.
The Computing Pool page is displayed.
Step 2 In the navigation tree on the left, choose Site name > Cluster name > Host name.
The Getting Started page is displayed.
Step 3 Right-click the host to be moved, and choose Move.
A dialog box is displayed, as shown in Figure 3-8.
----End
Prerequisites
Conditions
l You have logged in to the FusionCompute.
l If the host uses FusionStorage, the host has been deleted from the FusionStorage
management system before being removed from the cluster. This is to prevent host
removal from adversely affecting the FusionStorage system. For details, see the
FusionStorage V100R003C30SPC200 Capacity Adjustment Guide.
l The following conditions are met for removing a host in Remove mode:
– The VMs on the host have been deleted or migrated to other hosts.
– The host upstream link connected to the DVS has been removed. For details, see
Upstream Link Group Management > Deleting an Upstream Link in the
FusionCompute V100R005C10 Network Management Guide.
– The host has been disassociated from the associated data store and disconnected
from all storage resources.
– Data stores whose type is Local on the host have been destroyed.
– Heartbeat Communication Between the Host and VRM Interrupted is not
generated for the target host.
l The following conditions are met for removing a host in Forcibly Remove mode:
– The host is in the fault or poweroff state.
– The VMs running on the host have been stopped forcibly.
– If the host is associated with a shared storage resource, the shared storage resource
has been associated with multiple hosts, including the one to be removed.
Data
The following data is available:
l The name of the host to be removed
l The name of the cluster from which the host is to be removed
Procedure
5 Click OK.
The host is removed from the cluster.
Follow-up Procedure
Power off the host immediately after removing it in Forcibly Remove mode. Otherwise,
some functions, such as storage device attaching, on other normal hosts in the same site may
fail.
----End
NOTE
If the host uses intelligent network interface cards (iNICs), bind the uplink network ports on the host together.
Otherwise, the broadcast suppression function of the port group may be adversely affected.
Prerequisites
Conditions
l You have logged in to FusionCompute.
l The host has been added to a cluster.
Procedure
Determine the method of binding network ports.
1 Determine the method of binding network ports.
– To bind network ports in batches, go to 10.
This method is recommended when multiple bound ports are to be added to hosts or
the hosts are large in number.
– To bind network ports one by one, go to 2.
This method is recommended when the hosts are small in number and each host
requires a few bound ports.
Bind network ports one by one.
2 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
3 In the navigation tree on the left, select the site, the cluster, and the host.
The Getting Started page is displayed.
4 Choose Configuration > System Port > Bind Network Port.
The Bind Network Port page is displayed, as shown in Figure 4-1.
5 In the Network Port list, select the physical network ports to be bound.
6 In the middle of the page, set Name and Binding Mode for the network ports.
NOTICE
– In all load sharing modes, aggregation must be configured on the switch to which
network ports are connected, that is, the ports to be bound must be configured on the
same Eth-trunk port on the switch. Otherwise, network exceptions may occur.
– In the Link Aggregation Control Protocol (LACP) mode, create an Eth-trunk in
LACP mode on the switch to which network ports are connected, configure ports to
be bound on the same Eth-trunk, and enable the bridge protocol data unit (BPDU)
protocol packet forwarding function on the Eth-trunk. For example, if the switch is
Huawei S5300, run the following commands:
<S5352_01>sys
[S5352_01]interface Eth-Trunk x
[S5352_01-Eth-Trunkx]mode lacp-static
[S5352_01-Eth-Trunkx]bpdu enable
For details about how to configure port aggregation on a switch, see the switch user
guide.
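After the Eth-Trunk is configured, you can optionally verify its member ports and state on the switch. The following command is only a sketch that assumes a Huawei S-series switch, where x is the Eth-Trunk ID used in the preceding example:
display eth-trunk x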
The following binding modes are available for common network interface cards (NICs):
– Active-backup: applies to scenarios where two network ports are to be bound. This
mode provides high reliability. The bandwidth of the bound port in this mode equals
that of a member port.
– Round-robin: applies to scenarios where two or more network ports are to be
bound. The bandwidth of the bound port in this mode is higher than that of a
member port, because the member ports share workloads in sequence.
This mode may result in data packet disorder because traffic is evenly sent to each
port. Therefore, MAC address-based load balancing is preferred over round-robin
among the load sharing modes.
– IP address and port-based load balancing: applies to scenarios where two or
more network ports are to be bound. The bandwidth of the bound port in this mode
is higher than that of a member port, because the member ports share workloads
based on the source-destination-port-based load sharing algorithm.
Source-destination-port-based load balancing algorithm: When the packets
contain IP addresses and ports, the member ports share loads based on the source
and destination IP addresses, ports, and MAC addresses. When the packets contain
IP addresses, the member ports share loads based on the IP addresses and MAC
addresses. When the packets contain only MAC addresses, the member ports share
loads based on the MAC addresses.
This mode is recommended when the virtual extensible LAN (VXLAN) function is
enabled. This mode allows network traffic to be evenly distributed based on the
source and destination port information in the packets.
– MAC address-based load balancing: applies to scenarios where two or more
network ports are to be bound. The bandwidth of the bound port in this mode is
higher than that of a member port, because the member ports share workloads based
on the MAC addresses of the source and destination ports.
This mode is recommended when most network traffic is on the layer 2 network.
This mode allows network traffic to be evenly distributed based on the MAC
addresses.
– MAC address-based LACP: This mode is developed based on the MAC address-based
load balancing mode. In MAC address-based LACP mode, the bound port uses LACP
to automatically detect link-layer faults and trigger a switchover if a link fails.
– IP address-based LACP: applies to scenarios where two or more network ports are
to be bound. The bandwidth of the bound port in this mode is higher than that of a
member port, because the member ports share workloads based on the source-
destination-IP-address-based load sharing algorithm. When the packets contain IP
addresses, the member ports share loads based on the IP addresses and MAC
addresses. When the packets contain only MAC addresses, the member ports share
loads based on the MAC addresses. In this mode, the bound port also uses LACP to
automatically detect link-layer faults and trigger a switchover if a link fails.
This mode is recommended when most network traffic goes across layer 2 and layer
3 networks.
The following binding modes are available for intelligent network interface cards
(iNICs):
– Active-backup: applies to scenarios where two network ports are to be bound. This
mode provides high reliability. The bandwidth of the bound port in this mode equals
that of a member port.
– Source MAC address-based load balancing: applies to scenarios where two or
more network ports are to be bound. The bandwidth of the bound port in this mode
is higher than that of a member port, because the member ports share workloads
based on the MAC address of the source port.
– Destination MAC address-based load balancing: applies to scenarios where two
or more network ports are to be bound. The bandwidth of the bound port in this
mode is higher than that of a member port, because the member ports share
workloads based on the MAC address of the destination port.
This mode is recommended when most network traffic is on the layer 2 network.
This mode allows network traffic to be evenly distributed based on the MAC
addresses.
– Source IP address-based load balancing: applies to scenarios where two or more
network ports are to be bound. The bandwidth of the bound port in this mode is
higher than that of a member port, because the member ports share workloads based
on the IP address of the source port.
– Destination IP address-based load balancing: applies to scenarios where two or
more network ports are to be bound. The bandwidth of the bound port in this mode
is higher than that of a member port, because the member ports share workloads
based on the IP address of the destination port.
This mode is recommended when most network traffic is on the layer 3 network.
This mode allows network traffic to be evenly distributed based on the destination
IP addresses.
7 Click Bind.
An information dialog box is displayed.
8 Click OK.
The Bind Network Port page is displayed.
To change the binding mode of a bound port, locate the row that contains the bound port,
click Operation, and click Modifying binding mode. You can change the binding mode
in the displayed dialog box.
NOTICE
– Switching between different load sharing modes or between different LACP modes
interrupts network communication of the bound network port for 2 or 3 seconds.
– If the binding mode is changed from the active/standby mode to a load sharing mode,
port aggregation must be configured on the switch to which the network ports are
connected. If the binding mode is changed from a load sharing mode to the active/
standby mode, the aggregation configured on the switch must be canceled.
Otherwise, network exceptions may occur.
– If the binding mode is changed from the LACP mode to another mode, the port
configuration must be changed on the switch to which the network ports are connected. If
the binding mode is changed from another mode to the LACP mode, port aggregation
in LACP mode must be configured on the switch. Otherwise, network exceptions may occur.
Configuration operations on the switch may interrupt the network communication. After
the configurations are complete, the network communication is automatically restored. If
the network communication is not restored, perform either of the following methods to
troubleshoot the network:
– Ping the destination IP address from the switch to trigger a MAC table update.
– Select a member port in port aggregation, disable other ports on the switch, change
the binding mode, and enable those ports.
9 Click OK.
The network ports on the host are bound.
After this step is complete, no further action is required.
Bind network ports in batches.
10 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
11 In the navigation tree on the left, select the site and cluster.
The Getting Started page is displayed.
12 In the Operation list of the cluster, click Bind Network Ports in Batches.
The Bind Network Ports in Batches page is displayed, as shown in Figure 4-2.
----End
Additional Information
Related Tasks
Removing a Default Network Port from a Bound Port
The default network port is the first port added to the bound port. To remove the default port,
unbind the bound port and then bind the other non-default member ports. If the bound port is
used by any services, you must remove the services first.
1. Locate the row that contains the bound port, choose More > Unbind Port, and check
whether the message "The network port aggregation is in use" is displayed.
The host does not need any storage port to connect to a local hard disk, Fibre Channel storage
area network (FC SAN) device, or a local RAM disk.
Prerequisites
Conditions
l You have logged in to FusionCompute.
l The host has been added to a cluster.
l Operations provided in Binding Network Ports have been performed on the host if the
host uses multiple network ports to connect to a storage device.
Procedure
Determine the method of adding storage ports to hosts.
1 Determine the method of adding storage ports to hosts.
5 Select the network port to which the storage plane network interface card (NIC)
connects, and click Next.
PORTX indicates network port ethX on the host. To identify the ports on common
Huawei servers, see How to Identify Server Ports.
The Connection Settings page is displayed, as shown in Figure 4-5.
6 Set the following parameters:
– Storage IP address: Enter the storage port IP address.
n If the storage plane uses a layer 2 network, set it to an idle IP address that
communicates with the storage plane. For example, if the storage IP address of
the storage device is 172.20.100.100 and the subnet mask is 255.255.0.0, set the IP
address to 172.20.XXX.XXX.
n If the storage plane uses a layer 3 network, set it to an IP address that
communicates with the storage IP address of the storage device.
– Subnet mask: Enter the subnet mask of the storage plane.
– VLAN ID: Enter the VLAN ID of the storage plane.
– Switching mode: specifies the data exchange mode of the storage plane. The value
can be Linux subinterface or OVS forwarding.
NOTE
If SAN devices are used and multipathing is required, configure multiple storage
interfaces for a single storage network port.
For example, assume that a storage device has four storage paths that use VLAN4, VLAN5,
VLAN6, and VLAN7, respectively, and that network port eth2 on the host is to
intercommunicate with VLAN4 and VLAN5 while eth3 is to intercommunicate with
VLAN6 and VLAN7. In this case, configure storage interfaces for VLAN4 and VLAN5 on
eth2, and for VLAN6 and VLAN7 on eth3.
7 Click Next.
The Confirm page is displayed.
8 Ensure that all the information is correct and click Add.
A dialog box is displayed.
9 Check whether all the planned storage ports have been added.
– If yes, go to 11.
– If no, go to 10.
10 Click Continue, and perform 5 to 8 for each storage port to be added.
11 Click OK.
The storage ports are added to the host.
After this step is complete, no further action is required.
Add storage ports to hosts in batches.
12 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
13 In the navigation tree on the left, select the site and cluster.
The Getting Started page is displayed.
14 Choose Operation > Add Storage Ports in Batches at the top of the page.
The Add Storage Ports in Batches page is displayed, as shown in Figure 4-6.
17 Open the template, click the Host Storage Port sheet, locate the row that contains
information about storage ports to be added, and copy the information in the row to the
Config sheet.
PORTX indicates network port ethX on the host. To identify the ports on common
Huawei servers, see How to Identify Server Ports.
Storage port information includes Host IP Address, Host ID, Network Port Name, and
Network Port ID.
18 On the Config sheet, set the following parameters for the storage ports:
– Storage Port Name
– Storage Port Description
– Storage IP Address: Enter the storage port IP address. It must be an idle IP address
in the same network segment as the storage device IP address.
n If the storage plane uses a layer 2 network, set it to an idle IP address that
communicates with the storage plane. For example, if the storage IP address of
the storage device is 172.20.100.100 and subnet mask is 255.255.0.0, set the IP
address to 172.20.XXX.XXX.
n If the storage plane uses a layer 3 network, set it to an IP address that
communicates with the storage IP address of the storage device.
– Subnet Mask: Enter the subnet mask of the storage plane.
– VLAN ID: Enter the VLAN ID of the storage plane.
– Switching mode: specifies the data exchange mode of the storage plane. The value
can be Linux subinterface or OVS forwarding.
NOTE
For details about the parameters, see the help sheet in the template.
19 Save and close the template.
20 Click Browse to the right of Import template file on the Add Storage Ports in Batches
page.
A dialog box is displayed.
21 Select the template and click Open.
22 Click OK.
An information dialog box is displayed.
23 Click OK.
You can choose System > Tasks and Logs > Task Center to view the task progress.
Follow-up Procedure
After the storage ports are added to the hosts, if the hosts use shared storage resources, add
the storage resources to the site, associate them with the hosts, and then scan storage devices.
For details, see Adding Storage Resources to a Site, Associating Storage Resources with a
Host, and Scanning Storage Devices in the FusionCompute V100R005C10 Storage Management
Guide.
----End
If a host is added to a cluster and does not have any service management port added, the
traffic for the special services and management services is carried by the management port by
default. After service management ports are added to a host and are enabled, the service
management ports carry and manage the traffic for the special services.
To enable the live migration or host-based replication function for a cluster, you are advised to
enable or disable the corresponding function for service management ports on all hosts in the
cluster.
A host supports a maximum of four service management ports. A host that uses intelligent
network interface cards (iNICs) does not support service management ports.
Prerequisites
Conditions
l You have logged in to FusionCompute.
l Hosts have been added to a cluster.
l You have enabled an independent network plane that carries the virtualized SAN storage
traffic if you want to add a service management port that manages the virtualized SAN
storage heartbeat traffic. For details, see Changing the Network Plane that Carries
Procedure
Determine the method of adding service management ports to hosts.
1 Determine the method of adding service management ports to hosts.
– To add service management ports to hosts in batches, go to 10.
This method is recommended when the system has a large number of hosts.
– To add service management ports to hosts one by one, go to 2.
This method is recommended when the system has a small number of hosts.
Manually add a service management port to a host.
5 Select the host network port that connects to the service plane, and click Next.
PORTX indicates network port ethX on the host. For details about common network
ports on Huawei servers, see How to Identify Server Ports.
The Connection Settings page is displayed, as shown in Figure 4-8.
NOTE
This IP address and IP addresses of other ports of the host must be on different network
segments.
– Subnet mask: specifies the subnet mask of the planned service management plane.
– VLAN ID: specifies the VLAN ID of the planned service management plane.
– Routing info: specifies the information about the route to the peer host. This
parameter is required when the service management port on the local host and the
service management ports on other hosts in the cluster do not belong to the same
network segment and the service management port on the local host is enabled.
n Gateway: specifies the gateway on the network segment to which the service
management port on the destination host belongs.
n Network destination: specifies the start IP address of the network segment to
which the service management port on the destination host belongs, for
example, 192.168.0.0.
n Netmask: specifies the subnet mask of the network segment to which the
service management port on the destination host belongs.
– Available Services
n Use this port for VM live migration: makes the service management port
carry the VM live migration traffic.
n Use this port for VM DR: makes the service management port carry the host-
based replication DR traffic.
n Use this port for virtualized SAN storage traffic: makes the service port
carry the VIMS heartbeat traffic.
– Outbound Traffic Shaping
n Average send bandwidth (Mbit/s): specifies the average number of bits per
second to allow across a port during a certain period of time.
If a common NIC is used, the port traffic remains close to the configured
average bandwidth when no burst of traffic occurs. If an iNIC is used, the
average bandwidth is equal to the minimum bandwidth when no congestion
occurs on the network. If the burst send size is set to an excessively small value, the
network bandwidth decreases.
n Peak send bandwidth (Mbit/s): specifies the maximum number of bits per
second to allow across a port when it is sending a burst of traffic.
The peak send bandwidth must be greater than or equal to the average
send bandwidth. A proper peak send bandwidth set for a service prevents
network congestion on other VM networks when the traffic of this service is
too heavy. When an iNIC is used, the peak send bandwidth is equal to the
maximum bandwidth after the burst of traffic disappears, and in the idle
period, the bandwidth remains around the peak send bandwidth.
n Burst send size (Mbits): specifies the maximum amount of data, in Mbits, to allow in a
burst.
7 Click Next.
The Confirm page is displayed.
8 Confirm the information and click Add.
The Finish page is displayed.
9 Click Close.
You can view the task progress on the Task Tracing page.
– Use this port for VM DR: makes the service management port carry the host-based
replication DR traffic.
– Use this port for virtualized SAN storage traffic: makes the service management
port carry the VIMS heartbeat traffic.
– Outbound Traffic Shaping: specifies whether to limit the bandwidth on the port.
Select Enable or Disable from the drop-down menu.
– Average Send Bandwidth (Mbit/s)
– Peak Send Bandwidth (Mbit/s)
– Burst Send Size (Mbit)
For details about the parameters, see the help sheet in the template.
17 After information about all service management ports is configured, save and close the
template file.
18 Click Browse on the right of Import template on the Add Service Management Port
in Batches page.
A dialog box is displayed.
19 Select the configured template file and click Open.
20 Click OK.
An information dialog box is displayed.
21 Click OK.
You can choose System > Tasks and Logs > Task Center to view the task progress.
----End
Prerequisites
Conditions
You have logged in to FusionCompute.
Procedure
Step 1 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
Step 2 In the navigation tree on the left, select the site, cluster, and host.
The Getting Started page is displayed.
Step 3 Choose Configuration > System Port.
All the system ports provided by the host are displayed.
Step 4 Locate the row that contains the system port and click .
All routes of the system port are displayed in the list, as shown in the following figure.
----End
Prerequisites
Conditions
Procedure
Step 1 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
Step 2 In the navigation tree on the left, select the site, cluster, and host.
The Getting Started page is displayed.
Step 3 Choose Configuration > System Port.
All the system ports provided by the host are displayed.
----End
NOTE
The management port cannot be deleted.
Prerequisites
Conditions
You have logged in to the FusionCompute.
Procedure
Step 1 On the FusionCompute, choose Computing Pool.
The Computing Pool page is displayed.
Step 2 In the navigation tree on the left, choose Site name > Cluster name > Host name.
The Getting Started page is displayed.
Step 3 Choose Configuration > System Port.
All the system ports provided by the host are displayed.
----End
NOTE
The management port cannot be deleted.
Prerequisites
Conditions
You have logged in to FusionCompute.
Procedure
Step 1 On FusionCompute, click Computing Pool.
The Computing Pool page is displayed.
Step 2 In the navigation tree on the left, select the site, cluster, and host.
The Getting Started page is displayed.
Step 3 Choose Configuration > System Port.
All the system ports provided by the host are displayed.
----End
A Appendix
A.1 FAQ
A.2 Parameter Reference
A.3 Parameters in Advanced Settings for Computing Resource Scheduling
A.1 FAQ
Scenarios
Configure static routes for storage ports on a host if the storage plane requires layer 3
communication, so that the storage IP addresses of storage devices and host storage ports that
do not belong to the same network segment can communicate with each other.
Prerequisites
Conditions
l You have installed the host operating system (OS).
l You have obtained the password for user root of the host.
Data
Data preparation is not required for this operation.
Procedure
Log in to the host.
1 Use PuTTY to log in to the host.
Ensure that the management IP address and username gandalf are used to establish the
connection.
The default password for user gandalf to log in to the host is Huawei@CLOUD8.
2 Run the following command and enter the password of user root to switch to user root:
su - root
3 Run the following command to disable logout on timeout:
TMOUT=0
Configure static routes for storage ports on the host.
4 Run the following command to switch to the /opt/galax/eucalyptus/ecs_scripts
directory:
cd /opt/galax/eucalyptus/ecs_scripts/
5 Run the following command to open the route parameter file using the visual interface
(vi) editor:
vi routes_param
6 Press i to enter editing mode.
– If the command output contains the following information, the OS is installed
on a USB flash drive on the host. Go to 13.
sys_type=udisk
Verification
Run the route command to query all routes of the host:
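The following command is only a sketch of this verification and assumes the standard route tool on the host:
route -n
Check that the command output contains the storage plane network segments and gateways configured in the routes_param file.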
Additional Information
Related Tasks
Delete routes from host storage ports.
Perform the following operations to delete routes from host storage ports:
1. Log in to the host and run the following command to delete the scheduled task for routes:
sed -i '/add_static_routes.sh/d' /etc/crontab
2. In the /opt/galax/eucalyptus/ecs_scripts directory, open the routes_param file using
the vi editor and delete unwanted routes.
3. Run the following command to delete information about the routes that have taken
effect:
route del -net Storage plane network segment/Subnet mask gw Gateway address
For example, if the route information to be deleted is 172.20.0.0 16 172.25.1.1, run the
following command:
route del -net 172.20.0.0/16 gw 172.25.1.1
4. Query the host OS installation location. If the OS is installed on a USB flash drive, route
information backup is required. For details about how to back up the information, see 11
to 13.
NOTE
By default, MM1, located on the left of the subrack, works as the active management module of the
E6000, and MM2, on the right of the subrack, works as the standby management module. You can
determine the active and standby management modules based on the indicators on the front panels of the
management modules.
l After the active management module is powered on, the ACT indicator is steady green.
l After the standby management module is powered on, the ACT indicator blinks green at 0.5 Hz.
(Figure: E6000 front view showing slots A1, A2, B1, B2, C1, and C2, and power modules PWR4, PWR5, and PWR6.)
Figure A-8 shows port 23 of switch module A1 in the E6000. The NX112 switch module is
used as an example.
If the NX113 switch module is used in the E6000, connect the local computer to the port on the
switch. Ensure that the connected port on the switch and the host management plane belong to
the same VLAN.
Possible Cause
l The system does not automatically migrate VMs that do not meet migration
requirements, for example:
– The Keep VMs together or Mutually exclusive VMs rule is configured for VMs.
– The dynamic resource scheduling (DRS) function is disabled for VMs.
– A device is mounted to one or more VMs so that the dynamic resource scheduler
(DRS) cannot migrate these VMs. Therefore, the system cannot implement load
balancing.
– The VMs are incompatible with the destination host to which the DRS migrates the
VMs.
– The VM is bound to its host.
– The memory size configured for the VM is greater than 4 GB.
l The system does not migrate VMs if balanced CPU and memory usage cannot be
achieved after the migration. For example:
– When CPU and memory is selected, some hosts have high CPU capacity usage,
and the others have high memory usage.
– When CPU or Memory is selected, migrating a VM cannot solve the problem of
unbalanced CPU and memory usage.
Storage Resources
The FusionCompute can use storage resources provided by dedicated storage devices or local
disks on hosts. Dedicated storage devices are connected to hosts through network cables or
fiber cables.
Data Store
A data store is a storage unit that is converted from a storage resource by FusionCompute.
After a data store is associated with a host, the data store can be used to create virtual disks
for VMs.
Raw device mapping (RDM) allows logical unit numbers (LUNs) on SAN devices to serve
as data stores without creating virtual disks. This technology applies to large disk capacity
scenarios, for example, database server construction. RDM can be used only for VMs that
run certain operating systems (OSs). For details about the supported OS list, see
Compatibility. If RDM storage is used to deploy application cluster services, such as Oracle
RAC, you are advised not to create VM snapshots or restore a VM using a snapshot. If you
use a snapshot to restore a VM, the application cluster service may become faulty.
After storage resources are converted to data stores, the differences between virtual disks
created using different resources are hidden from VM OSs.
Storage resources that can be converted to data stores are:
l LUNs on SAN devices, including Internet Small Computer Systems Interface (iSCSI)
storage devices and Fibre Channel (FC) storage devices.
l File systems on NAS devices.
l Storage pools on FusionStorage
l Local hard disks on hosts
l Local RAM disk on hosts
Table A-1 shows the relationship between storage devices, storage resources, and data stores.
Table A-1 Storage devices, storage resources, and data stores supported by FusionCompute
Storage Device | Storage Resource | Data Store | Storage Space Required by Data Store
Local RAM disk | N/A | Local RAM disk (non-virtualization) | Local RAM disk: [16 GB, 512 GB]
(Figure: relationships among hosts, NAS devices, LUNs, file systems, data stores, virtual disks, and VMs.)
Storage Port
A storage port on a host connects the host to a storage device. One physical NIC or a group of
physical NICs that are bound together on the host can be set as a storage port.
If iSCSI storage devices are used, two physical NICs on a host can be connected to multiple
storage NICs on the storage devices, working in multipathing mode. Binding of physical
NICs is not required in this mode.
If NAS devices are to be used, you are advised to bind the storage plane NICs in active/
standby mode and set the storage port to connect to the NAS devices to enhance reliability.
iSCSI Storage
An iSCSI storage device is connected to a host through network cables. The host accesses the
storage device using the TCP/IP protocol.
To ensure efficient access to an iSCSI storage device, configure an iSCSI initiator using the
world wide name (WWN) generated after the storage device is associated with the host.
Typical iSCSI storage devices include IP SAN devices and OceanStor 18000 series storage
devices.
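If you need to check iSCSI connectivity from the host CLI, the following commands are only a sketch that assumes the host uses the standard open-iscsi tools, which may differ on FusionCompute hosts; the target IP address is a placeholder:
cat /etc/iscsi/initiatorname.iscsi
iscsiadm -m discovery -t sendtargets -p <storage IP address>
The first command shows the initiator name of the host, and the second command lists the iSCSI targets provided by the storage device.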
FC Storage
An FC storage device is connected to the FC host bus adapter (HBA) on a host through
optical cables, which provide high data transmission rates.
To ensure efficient access to an FC storage device, configure an FC initiator using the WWN
generated after the storage device is connected to the host FC HBA.
Typical FC storage devices include FC SAN devices and OceanStor 18000 series storage
devices.
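To check the WWNs of the FC HBA ports on a host from the CLI, the following command is only a sketch that assumes a Linux host with the FC HBA drivers loaded:
cat /sys/class/fc_host/host*/port_name
Each value returned is the port WWN of one FC HBA port on the host.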
Multipathing
Multipathing is a storage access mechanism that provides more than one physical path to
connect a network storage device to one or more host network interface cards (NICs),
enabling load sharing for data flows and thereby enhancing reliability for storage access.
Usually, multipathing is supported by storage devices using iSCSI and Fibre Channel (FC),
such as IP SAN devices, FC SAN devices, and OceanStor 18000 series storage devices.
Multipathing supports the Huawei and universal multipathing modes. If the universal
multipathing mode is used and the VM uses raw device mapped disks, the MSCS cluster
cannot be deployed for the Windows Server OS. However, you can deploy the iSCSI network on
the VM to deploy the MSCS cluster.
(Figure: multipathing example in which host port Eth2, with storage IP addresses 172.20.100.100 and 172.30.100.100, connects to controller A through VLAN 4 (172.20.10.10), VLAN 5 (172.30.10.10), VLAN 6 (172.40.10.10), and VLAN 7 (172.50.10.10).)
NAS Storage
NAS storage devices use the Network File System (NFS) protocol to provide shared folders
over a network.
A NAS storage device is connected to a host through network cables. A host accesses the
storage device using TCP/IP.
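To check which shared folders a NAS device exports, the following command is only a sketch that assumes the showmount tool from the NFS client utilities is available on the host; the NAS IP address is a placeholder:
showmount -e <NAS IP address>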
Local Storage
Disks on hosts provide local storage resources.
The FusionCompute can identify the following local storage resources:
l Free space on the disk on which the host operating system is installed
NOTE
The remaining space on the local disk where the host OS is installed can be added as a data store
of the local storage type. If the space of the disk is greater than 2 TB, the system identifies the disk
only as a 2 TB disk during the host OS installation process. Therefore, the remaining space on this
disk after the OS installation is less than 2 TB. Other local disks on the host are not affected by the
OS installation. Therefore, they can provide all of their space.
l Bare disks or unpartitioned redundant array of independent disks (RAID) on the host
Local storage can be provided only to the host housing the disks.
For details about adding local RAM disks on a host, see Creating Local RAM Disks on a
Host.
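To check which local disks a host provides, the following command is only a sketch that assumes you have logged in to the host CLI as user root, as described in the preceding procedures:
fdisk -l
The command lists all local disks on the host, including the disk on which the host OS is installed and any bare disks or unpartitioned RAID devices.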
A.1.5 Compatibility
For details about the compatibility of servers, I/O devices, storage devices, and operating
systems (OSs), log in to the compatibility check assistant.
Name (Rule group): Specifies the name of the rule for VM migration. This parameter is valid only when Enable computing resource scheduling is selected. This parameter is optional. Modifiable: Yes. Example value: rule01.
A.2.3 HA Parameters
Table A-5 Parameter description
Parameter Description Modifiable Example Value
When the packets contain IP addresses and ports, the member ports share loads based on the source and destination IP addresses, ports, and MAC addresses. When the packets contain IP addresses, the member ports share loads based on the IP addresses and MAC addresses. When the packets contain only MAC addresses, the member ports share loads based on the MAC addresses. This mode is recommended when the virtual extensible LAN (VXLAN) function is enabled. This mode allows network traffic to be evenly distributed based on the source and destination port information in the packets.
l MAC address-based load balancing: applies to scenarios where two or more network ports are to be bound. The bandwidth of the bound port in this mode is higher than that of a member port, because the member ports share workloads based on the MAC addresses of the source and destination ports. This mode is recommended when most network traffic is on the layer 2 network. This mode allows network traffic to be evenly distributed based on the MAC addresses.
Subnet mask: Specifies the subnet mask of the system port. This parameter is optional. Modifiable: Yes. Example value: 255.255.255.0.
Outbound Traffic Shaping: Specifies the limit for the send traffic on the system port. Value range: Average send bandwidth (Mbit/s): 1 to 10000; Peak send bandwidth (Mbit/s): Average send bandwidth to 10000; Burst send size (Mbit/s): Peak send bandwidth to 10000. This parameter is optional. Modifiable: Yes. Example value: -.
NOTE
FusionCompute defines several types of VM memory specifications (such as tiny, small-sized, medium-
sized, and large-sized VMs) and migration durations. The DRS function calculates the VM migration
costs based on the VM memory specification and migration duration and does not migrate VMs with
high migration costs.
If a cluster accommodates VMs with different memory specifications and migration durations, you can
change the value ranges of the VM memory specifications and migration duration.