Installation Guide
Modified on 29 MAY 2020
VMware NSX-T Data Center 3.0
NSX-T Data Center Installation Guide
You can find the most up-to-date technical documentation on the VMware website at:
https://docs.vmware.com/
VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com
© Copyright 2020 VMware, Inc. All rights reserved. Copyright and trademark information.
The NSX-T Data Center Installation Guide describes how to install the VMware NSX-T™ Data Center
product. The information includes step-by-step configuration instructions and suggested best practices.
Intended Audience
This information is intended for anyone who wants to install or use NSX-T Data Center. This information
is written for experienced system administrators who are familiar with virtual machine technology and
network virtualization concepts.
Related Documentation
You can find the VMware NSX® Intelligence™ documentation at https://docs.vmware.com/en/VMware-NSX-Intelligence/index.html. The NSX Intelligence 1.0 content was initially included and released with the NSX-T Data Center 2.5 documentation set.
1 Overview of NSX-T Data Center
In the same way that server virtualization programmatically creates and manages virtual machines, NSX-T Data Center network virtualization programmatically creates and manages software-based virtual networks.
With network virtualization, the functional equivalent of a network hypervisor reproduces the complete set
of Layer 2 through Layer 7 networking services (for example, switching, routing, access control,
firewalling, QoS) in software. As a result, these services can be programmatically assembled in any
arbitrary combination, to produce unique, isolated virtual networks in a matter of seconds.
NSX-T Data Center works by implementing three separate but integrated planes: management, control, and data. These planes are implemented as a set of processes, modules, and agents residing on two types of nodes: NSX Manager and transport nodes.
n Every node hosts a management plane agent.
n NSX Manager nodes host API services and the management plane cluster daemons.
n NSX Controller nodes host the central control plane cluster daemons.
n Transport nodes host local control plane daemons and forwarding engines.
NSX Manager provides three-node clustering support, which merges the policy manager, management, and central control services on a cluster of nodes. NSX Manager clustering provides high availability of the user interface and API. The convergence of management and control plane nodes reduces the number of virtual appliances that must be deployed and managed by the NSX-T Data Center administrator.
The NSX Manager appliance is available in three different sizes for different deployment scenarios:
n A small appliance for lab or proof-of-concept deployments.
n A medium appliance for deployments of up to 64 hosts.
n A large appliance for large-scale environments.
See NSX Manager VM and Host Transport Node System Requirements and the VMware Configuration Maximums tool.
n Key Concepts
Key Concepts
The following are common NSX-T Data Center concepts that are used in the documentation and user interface.
Compute Manager
A compute manager is an application that manages resources such as hosts and VMs. One example
is vCenter Server.
Control Plane
Computes runtime state based on configuration from the management plane. Control plane
disseminates topology information reported by the data plane elements, and pushes stateless
configuration to forwarding engines.
Data Plane
Performs stateless forwarding or transformation of packets based on tables populated by the control
plane. Data plane reports topology information to the control plane and maintains packet level
statistics.
External Network
A physical network or VLAN not managed by NSX-T Data Center. You can link your logical network or
overlay network to an external network through an NSX Edge. For example, a physical network in a
customer data center or a VLAN in a physical environment.
Outbound network traffic leaving the VM or logical network is called egress because the traffic leaves the virtual network and enters the data center.
Inbound network traffic leaving the data center and entering the VM is called ingress traffic.
Logical Router Port
Logical network port to which you can attach a logical switch port or an uplink port to a physical
network.
Logical Switch
Entity that provides virtual Layer 2 switching for VM interfaces and Gateway interfaces. A logical
switch gives tenant network administrators the logical equivalent of a physical Layer 2 switch,
allowing them to connect a set of VMs to a common broadcast domain. A logical switch is a logical
entity independent of the physical hypervisor infrastructure and spans many hypervisors, connecting
VMs regardless of their physical location.
In a multi-tenant cloud, many logical switches might exist side-by-side on the same hypervisor
hardware, with each Layer 2 segment isolated from the others. Logical switches can be connected
using logical routers, and logical routers can provide uplink ports connected to the external physical
network.
Logical Switch Port
Logical switch attachment point to establish a connection to a virtual machine network interface or a logical router interface. The logical switch port reports applied switching profile, port state, and link status.
Management Plane
Provides single API entry point to the system, persists user configuration, handles user queries, and
performs operational tasks on all of the management, control, and data plane nodes in the system.
Management plane is also responsible for querying, modifying, and persisting user configuration.
NSX Edge Cluster
Collection of NSX Edge node appliances that have the same settings as protocols involved in high-availability monitoring.
NSX Edge Node
Component whose functional goal is to provide computational power to deliver IP routing and IP services functions.
NSX Managed Virtual Distributed Switch (N-VDS)
The NSX managed virtual distributed switch (N-VDS, previously known as hostswitch) or OVS is used for shared NSX Edge and compute cluster. N-VDS is required for overlay traffic configuration.
An N-VDS has two modes: standard and enhanced datapath. An enhanced datapath N-VDS has the performance capabilities to support NFV (Network Functions Virtualization) workloads.
NSX Manager
Node that hosts the API services, the management plane, and the agent services. NSX Manager is
an appliance included in the NSX-T Data Center installation package. You can deploy the appliance
in the role of NSX Manager or nsx-cloud-service-manager. Currently, the appliance only supports
one role at a time.
Open vSwitch (OVS)
Open source software switch that acts as a virtual switch within XenServer, Xen, KVM, and other Linux-based hypervisors.
Overlay Logical Network
Logical network implemented using Layer 2-in-Layer 3 tunneling such that the topology seen by VMs is decoupled from that of the physical network.
Segment
Entity that provides virtual Layer 2 switching for VM interfaces and Gateway interfaces. A segment
gives tenant network administrators the logical equivalent of a physical Layer 2 switch, allowing them
to connect a set of VMs to a common broadcast domain. A segment is a logical entity independent of
the physical hypervisor infrastructure and spans many hypervisors, connecting VMs regardless of
their physical location. A segment is also known as a logical switch.
In a multi-tenant cloud, many segments might exist side-by-side on the same hypervisor hardware,
with each Layer 2 segment isolated from the others. Segments can be connected using gateways,
which can provide connectivity to the external physical network.
Tier-0 Gateway
The Tier-0 Gateway in the Networking tab interfaces with the physical network and can be realized as an active-active or active-standby cluster. The Tier-0 gateway runs BGP and peers with physical routers. In active-standby mode the gateway can also provide stateful services.
Tier-1 Gateway
The Tier-1 Gateway in the Networking tab connects to one Tier-0 gateway for northbound connectivity and one or more overlay networks for southbound connectivity. A Tier-1 gateway can be an active-standby cluster that provides stateful services.
Transport Zone
Collection of transport nodes that defines the maximum span for logical switches. A transport zone
represents a set of similarly provisioned hypervisors and the logical switches that connect VMs on
those hypervisors. It also has been registered with the NSX-T Data Center management plane and
has NSX-T Data Center modules installed. For a hypervisor host or NSX Edge to be part of the NSX-
T Data Center overlay, it must be added to the NSX-T Data Center transport zone.
Transport Node
A node capable of participating in an NSX-T Data Center overlay or NSX-T Data Center VLAN
networking. For a KVM host, you can preconfigure the N-VDS, or you can have NSX Manager
perform the configuration. For an ESXi host, NSX Manager always configures the N-VDS.
Uplink Profile
Defines policies for the links from hypervisor hosts to NSX-T Data Center logical switches or from
NSX Edge nodes to top-of-rack switches. The settings defined by uplink profiles might include
teaming policies, active/standby links, the transport VLAN ID, and the MTU setting. The transport
VLAN set in the uplink profile tags overlay traffic only and the VLAN ID is used by the TEP endpoint.
VM Interface (vNIC)
Network interface on a virtual machine that provides connectivity between the virtual guest operating
system and the standard vSwitch or vSphere distributed switch. The vNIC can be attached to a
logical port. You can identify a vNIC based on its Unique ID (UUID).
Virtual Tunnel Endpoint (VTEP)
Each hypervisor has a Virtual Tunnel Endpoint (VTEP) responsible for encapsulating the VM traffic inside a VLAN header and routing the packet to a destination VTEP for further processing. Traffic can be routed to another VTEP on a different host or the NSX Edge gateway to access the physical network.
The NSX Manager interface provides two modes for configuring resources:
n By default, if your environment contains only objects created through Policy mode, your user interface
is in Policy mode and you do not see the Policy and Manager buttons.
n By default, if your environment contains any objects created through Manager mode, you see the
Policy and Manager buttons in the top-right corner.
These defaults can be changed by modifying the user interface settings. See Configure User Interface
Settings for more information.
The same System tab is used in the Policy and Manager interfaces. If you modify Edge nodes, Edge clusters, or transport zones, it can take up to 5 minutes for those changes to be visible in Policy mode. You can synchronize immediately using POST /policy/api/v1/infra/sites/default/enforcement-points/default?action=reload.
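A minimal sketch of that call using curl is shown below (authentication with the admin account and the -k certificate handling are assumptions; adjust them to your environment):

# Trigger an immediate synchronization of Manager mode changes into Policy mode
curl -k -u admin:'<password>' -X POST \
'https://<nsx-manager-ip>/policy/api/v1/infra/sites/default/enforcement-points/default?action=reload'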
n If you are deploying a new NSX-T Data Center environment, using Policy mode to create and
manage your environment is the best choice in most situations.
n Some features are not available in Policy mode. If you need these features, use Manager mode
for all configurations.
n If you plan to use Federation, use Policy mode to create all objects. Global Manager supports only
Policy mode.
n If you are upgrading from an earlier version of NSX-T Data Center and your configurations were
created using the Advanced Networking & Security tab, use Manager mode.
The menu items and configurations that were found under the Advanced Networking & Security tab
are available in NSX-T Data Center 3.0 in Manager mode.
Important If you decide to use Policy mode, use it to create all objects. Do not use Manager mode to create objects.
Similarly, if you need to use Manager mode, use it to create all objects. Do not use Policy mode to create objects.
Table 1-1. When to Use Policy Mode or Manager Mode

Policy Mode
n Most new deployments should use Policy mode.
n Federation supports only Policy mode. If you want to use Federation, or might use it in the future, use Policy mode.
n NSX Cloud deployments.
n Networking features available in Policy mode only:
n DNS Services and DNS Zones
n VPN
n Forwarding policies for NSX Cloud
n Security features available in Policy mode only:
n Endpoint Protection
n Network Introspection (East-West Service Insertion)
n Context Profiles
n L7 applications
n FQDN
n New Distributed Firewall and Gateway Firewall Layout
n Categories
n Auto service rules
n Drafts

Manager Mode
n Deployments which were created using the advanced interface, for example, upgrades from versions before Policy mode was available.
n Deployments which integrate with other plug-ins, for example, NSX Container Plug-in, OpenStack, and other cloud management platforms.
n Networking features available in Manager mode only:
n Forwarding up timer
n Security features available in Manager mode only:
n Bridge Firewall
For more information about using the Policy API, see the NSX-T Policy API Getting Started Guide.
If present, you can use the Policy and Manager buttons to switch between the Policy and Manager
modes. Switching modes controls which menus items are available to you.
n By default, if your environment contains only objects created through Policy mode, your user interface
is in Policy mode and you do not see the Policy and Manager buttons.
n By default, if your environment contains any objects created through Manager mode, you see the
Policy and Manager buttons in the top-right corner.
You can use the User Interface Settings to modify these defaults.
See Overview of the NSX Manager for more information about the modes.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.
2 Navigate to System > User Interface Settings and click Edit.
3 Modify the user interface settings: Toggle Visibility and Default Mode.
Visible to All Users: If Manager mode objects are present, the mode buttons are visible to all users.
Visible to Users with the Enterprise Admin Role: If Manager mode objects are present, the mode buttons are visible to users with the Enterprise Admin role.
Hidden from All Users: Even if Manager mode objects are present, the mode buttons are hidden from all users.
2 NSX-T Data Center Installation Workflows
You can install NSX-T Data Center on vSphere or KVM hosts. You can also configure a bare metal server
to use NSX-T Data Center.
To install or configure any of the hypervisors or bare metal, follow the recommended tasks in the
workflows.
This chapter describes the installation workflows for vSphere (ESXi) hosts, KVM hosts, and bare metal servers. The following workflow applies to vSphere hosts.
1 Review the NSX Manager installation requirements. See Chapter 4 NSX Manager Installation.
2 Configure the necessary ports and protocols. See Ports and Protocols.
3 Install the NSX Manager. See Install NSX Manager and Available Appliances.
4 Log in to the newly created NSX Manager. See Log In to the Newly Created NSX Manager .
6 Deploy additional NSX Manager nodes to form a cluster. See Deploy NSX Manager Nodes to Form a
Cluster from UI.
7 Review the NSX Edge installation requirements. See NSX Edge Installation Requirements.
8 Install NSX Edges. See Install an NSX Edge on ESXi Using the vSphere GUI.
11 Create host transport nodes. See Create a Standalone Host or Bare Metal Server Transport Node or
Configure a Managed Host Transport Node.
The following workflow applies to KVM hosts.
2 Review the NSX Manager installation requirements. See Chapter 4 NSX Manager Installation.
3 Configure the necessary ports and protocols. See Ports and Protocols.
5 Log in to the newly created NSX Manager. See Log In to the Newly Created NSX Manager .
6 Configure third-party packages on the KVM host. See Install Third-Party Packages on a KVM Host.
7 Deploy additional NSX Manager nodes to form a cluster. See Deploy NSX Manager Nodes to Form a
Cluster Using CLI .
8 Review the NSX Edge installation requirements. See NSX Edge Installation Requirements.
12 Create host transport nodes. See Create a Standalone Host or Bare Metal Server Transport Node.
A virtual switch is created on each host. The management plane sends the host certificates to the
control plane, and the management plane pushes control plane information to the hosts. Each host
connects to the control plane over SSL presenting its certificate. The control plane validates the
certificate against the host certificate provided by the management plane. The controllers accept the
connection upon successful validation.
Post-Installation
When the hosts are transport nodes, you can create transport zones, logical switches, logical routers, and
other network components through the NSX Manager UI or API at any time. When NSX Edges and hosts
join the management plane, the NSX-T Data Center logical entities and configuration state are pushed to
the NSX Edges and hosts automatically.
For more information, see the NSX-T Data Center Administration Guide.
The following workflow applies to bare metal servers.
1 Review the bare metal requirements. See Bare Metal Server System Requirements.
2 Configure the necessary ports and protocols. See Ports and Protocols.
4 Configure third-party packages on the bare metal server. See Install Third-Party Packages on a Bare
Metal Server.
5 Create host transport nodes. See Create a Standalone Host or Bare Metal Server Transport Node.
A virtual switch is created on each host. The management plane sends the host certificates to the
control plane, and the management plane pushes control plane information to the hosts. Each host
connects to the control plane over SSL presenting its certificate. The control plane validates the
certificate against the host certificate provided by the management plane. The controllers accept the
connection upon successful validation.
6 Create an application interface for bare metal server workload. See Create Application Interface for
Bare Metal Server Workloads.
3 Preparing for Installation
Before installing NSX-T Data Center, make sure your environment is prepared.
n System Requirements
System Requirements
Before you install NSX-T Data Center, your environment must meet specific hardware and resource
requirements.
Note To avoid memory errors on a hypervisor host running vSphere ESXi version 7.x.x, ensure that at least 16 GB of memory is available before deploying NSX Manager.
For ESXi hosts, NSX-T Data Center supports the Host Profiles and Auto Deploy features on vSphere 6.7
U1 or higher. See Understanding vSphere Auto Deploy in the VMware ESXi Installation and Setup
documentation for more information.
Caution On RHEL and Ubuntu, the yum update command might update the kernel version and break compatibility with NSX-T Data Center. The kernel version must not be greater than 4.14.x. Disable the automatic kernel update when you run yum update. Also, after running yum install, verify that NSX-T Data Center supports the kernel version.
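As a sketch for RHEL, you can exclude kernel packages during the update and then confirm the running kernel version afterwards (the exclusion pattern is an assumption; adapt it to your repository configuration):

yum update --exclude="kernel*"   # update packages without changing the kernel
uname -r                         # confirm the running kernel version is supported by NSX-T Data Center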
Tip To quickly identify compatible cards in the Compatibility Guide, apply the following criteria:
n Optionally, to use supported GENEVE encapsulation, under Features, select the GENEVE options.
n Optionally, to use Enhanced Data Path, select N-VDS Enhanced Data Path.
Note NSX Manager provides multiple roles which previously required separate appliances. This includes the policy role, the management plane role, and the central control plane role. The central control plane role was previously provided by the NSX Controller appliance.
n You can use the Extra Small VM resource size only for the Cloud Service Manager appliance (CSM).
Deploy CSM in the Extra Small VM size or higher, as required. See Overview of Deploying NSX
Cloud for more information.
n The NSX Manager Small VM appliance size is suitable for lab and proof-of-concept deployments, and
must not be used in production.
n The NSX Manager Medium VM appliance size is suitable for typical production environments. An NSX-T management cluster formed using this appliance size can support up to 64 hypervisors.
n The NSX Manager Large VM appliance size is suitable for large-scale deployments. An NSX-T
management cluster formed using this appliance size can support more than 64 hypervisors.
For maximum scale using the NSX Manager Large VM appliance size, go to the VMware Configuration Maximums tool at https://configmax.vmware.com/guest and select NSX-T Data Center from the product list.
Language Support
NSX Manager has been localized into multiple languages: English, German, French, Japanese,
Simplified Chinese, Korean, Traditional Chinese, and Spanish.
Note
n Internet Explorer is not supported.
n Language support: NSX Manager has been localized into multiple languages: English, German,
French, Japanese, Simplified Chinese, Korean, Traditional Chinese, and Spanish. However, because
NSX Manager localization utilizes the browser language settings, ensure that your settings match the
desired language. There is no language preference setting within the NSX Manager interface itself.
The maximum network latency between NSX Managers and Transport Nodes is 150ms.
Storage Requirements
n The maximum disk access latency is under 10 ms.
n Storage must be highly available so that a storage outage does not cause all NSX Manager file systems to be placed into read-only mode.
Consult the documentation for your storage technology on how to best design a highly available storage solution.
Note The following conditions apply to the hosts for the NSX Edge nodes:
n NSX Edge nodes are supported only on ESXi-based hosts with Intel-based and AMD-based chipsets.
Otherwise, vSphere EVC mode may prevent NSX Edge nodes from starting, showing an error
message in the console.
n If vSphere EVC mode is enabled for the host for the NSX Edge VM, the CPU must be Haswell or later generation.
n Only the VMXNET3 vNIC is supported for the NSX Edge VM.
NSX Cloud Note If using NSX Cloud, note that the NSX Public Cloud Gateway(PCG) is deployed in a
single default size for each supported public cloud. See Deploy the NSX Public Cloud Gateway for
details.
NSX Edge Medium: 8 GB memory, 4 vCPU, 200 GB disk, VM hardware version 11 or later (vSphere 6.0 or later). The NSX Edge Medium appliance size is suitable for typical production environments.
NSX Edge Large: 32 GB memory, 8 vCPU, 200 GB disk, VM hardware version 11 or later (vSphere 6.0 or later). The NSX Edge Large appliance size is suitable for environments with load balancing. See Scaling Load Balancer Resources in the NSX-T Data Center Administration Guide.
NSX Edge Extra Large: 64 GB memory, 16 vCPU, 200 GB disk, VM hardware version 11 or later (vSphere 6.0 or later). The NSX Edge Extra Large appliance size is suitable for environments with load balancing. See Scaling Load Balancer Resources in the NSX-T Data Center Administration Guide.
Bare Metal NSX Edge minimum requirements: 32 GB memory, 8 CPU cores, 200 GB disk space.
Bare Metal NSX Edge recommended requirements: 256 GB memory, 24 CPU cores, 200 GB disk space.
For the list of NICs supported for NSX Edge, see the VMware Compatibility Guide.
Note For all the supported NICs listed above, verify that the media adapters and cables you use follow
the vendor's supported media types. Any media adapter or cables not supported by the vendor can result
in unpredictable behavior, including the inability to boot up due to an unrecognized media adapter. See
the NIC vendor documentation for information about supported media adapters and cables.
Important The user performing the installation may require sudo command permissions for some of the
procedures. See Install Third-Party Packages on a Bare Metal Server.
Note Hosts running Ubuntu 18.04.2 LTS must be upgraded from 16.04 or freshly installed.
Configure the ports and protocols required to be open on both the physical and the host hypervisor
firewalls in NSX-T Data Center. Refer to https://ports.vmware.com/home/NSX-T-Data-Center for more
details.
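As a quick sanity check, you can probe a required port from another host before installation, for example TCP 443 toward an NSX Manager (a sketch; substitute the address and port you need to verify):

nc -zv <nsx-manager-ip> 443   # reports whether the TCP connection can be established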
By default, all certificates are self-signed certificates. The northbound GUI and API certificates and private
keys can be replaced by CA signed certificates.
There are internal daemons that communicate over the loopback or UNIX domain sockets:
Note To get access to NSX-T Data Center nodes, you must enable SSH on these nodes.
NSX Cloud Note See Enable Access to Ports and Protocols for a list of ports required for deploying
NSX Cloud.
4 NSX Manager Installation
NSX Manager provides a graphical user interface (GUI) and REST APIs for creating, configuring, and
monitoring NSX-T Data Center components such as logical switches, logical routers, and firewalls.
NSX Manager provides a system view and is the management component of NSX-T Data Center.
For high availability, NSX-T Data Center supports a management cluster of three NSX Managers. For a
production environment, deploying a management cluster is recommended. For a proof-of-concept
environment, you can deploy a single NSX Manager.
The following vCenter Server functions are supported for the NSX Manager VM:
n vCenter Server can use the vMotion function to live migrate NSX Manager across hosts and clusters.
n vCenter Server can use the Storage vMotion function to live migrate NSX Manager across hosts and
clusters.
n vCenter Server can use the Distributed Resource Scheduler function to rebalance NSX Manager
across hosts and clusters.
n vCenter Server can use the Anti-affinity function to manage NSX Manager across hosts and clusters.
Requirements Description
Supported platforms See NSX Manager VM and Host Transport Node System Requirements.
On ESXi, it is recommended that the NSX Manager appliance be installed on shared
storage.
IP address An NSX Manager must have a static IP address. You cannot change the IP address after
installation. Only IPv4 addresses are supported.
Requirements Description
Note For more details on Linux PAM module to check the password against
dictionary words, refer to the man page.
Hostname When installing NSX Manager, specify a hostname that does not contain invalid
characters such as an underscore or special characters such as dot ".". If the hostname
contains any invalid character or special characters, after deployment the hostname will
be set to nsx-manager.
For more information about hostname restrictions, see https://tools.ietf.org/html/rfc952 and https://tools.ietf.org/html/rfc1123.
VMware Tools The NSX Manager VM running on ESXi has VMTools installed. Do not remove or upgrade
VMTools.
Requirements Description
System n Verify that the system requirements are met. See System Requirements.
n Verify that the required ports are open. See Ports and Protocols.
n Verify that a datastore is configured and accessible on the ESXi host.
n Verify that you have the IP address and gateway, DNS server IP addresses, domain
search list, and the NTP server IP address for the NSX Manager to use.
n If you do not already have one, create the target VM port group network. Place the
NSX-T Data Center appliances on a management VM network.
If you have multiple management networks, you can add static routes to the other
networks from the NSX-T Data Center appliance.
n Plan your NSX Manager IPv4 IP addressing scheme.
OVF Privileges Verify that you have adequate privileges to deploy an OVF template on the ESXi host.
A management tool that can deploy OVF templates, such as vCenter Server or the
vSphere Client. The OVF deployment tool must support configuration options to allow for
manual configuration.
OVF tool version must be 4.0 or later.
Note On an NSX Manager fresh install, reboot, or after an admin password change when prompted on
first login, it might take several minutes for the NSX Manager to start.
n If you specify a user name for the admin or audit user, the name must be unique. If you specify the same name, it is ignored and the default names (admin and audit) are used.
n If the password for the admin user does not meet the complexity requirements, you must log in to
NSX Manager through SSH or at the console as the admin user with the password default. You are
prompted to change the password.
n If the password for the audit user does not meet the complexity requirements, the user account is
disabled. To enable the account, log in to NSX Manager through SSH or at the console as the admin
user and run the command set user audit to set the audit user's password (the current password
is an empty string).
n If the password for the root or admin user does not meet the complexity requirements, you must log
in to NSX Manager through SSH or at the console as root with password vmware and admin with
password default. You are prompted to change the password.
Caution Changes made to the NSX-T Data Center while logged in with the root user credentials might
cause system failure and potentially impact your network. You can only make changes using the root
user credentials with the guidance of VMware Support team.
Note The core services on the appliance do not start until a password with sufficient complexity is set.
After you deploy NSX Manager from an OVA file, you cannot change the VM's IP settings by powering off
the VM and modifying the OVA settings from vCenter Server.
You enable FQDN usage by publishing the FQDNs of the NSX Managers.
Note Enabling FQDN usage (DNS) on NSX Managers is required for Multisite Lite and NSX Cloud deployments. (It is optional for all other deployment types.) See Multisite Deployment of NSX-T Data Center in the NSX-T Data Center Administration Guide and Chapter 12 Getting Started with NSX Cloud in this guide.
Important It is highly recommended that you configure both the forward and reverse lookup entries for
the NSX Managers' FQDN with a short TTL, for example, 600 seconds.
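One way to verify both lookup entries is with dig, reusing the example FQDN and IP address shown later in this section (a sketch; substitute your own values):

dig +short nsxmgr.corp.com     # forward lookup must return the NSX Manager IP address
dig +short -x 192.168.60.5     # reverse lookup must return the NSX Manager FQDN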
In addition, you must also enable publishing the NSX Manager FQDNs using the NSX-T API.
{
"publish_fqdns": true,
"_revision": 0
}
Example response:
{
"publish_fqdns": true,
"_revision": 1
}
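As a sketch, the request and response above correspond to the NSX Manager configuration endpoint; the following curl commands assume the standard /api/v1/configs/management path and admin credentials (verify against the NSX-T Data Center API Guide):

# Read the current configuration to obtain the current _revision value
curl -k -u admin:'<password>' https://<nsx-manager-ip>/api/v1/configs/management

# Enable FQDN publication, supplying the _revision value returned by the previous call
curl -k -u admin:'<password>' -X PUT https://<nsx-manager-ip>/api/v1/configs/management \
-H 'Content-Type: application/json' \
-d '{"publish_fqdns": true, "_revision": 0}'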
Note After publishing the FQDNs, validate access by the transport nodes as described in the next
section.
Using SSH, log into a transport node such as a hypervisor or Edge node, and run the get controllers
CLI command.
Example response:
Controller IP Port SSL Status Is Physical Master Session State Controller FQDN
192.168.60.5 1235 enabled connected true up nsxmgr.corp.com
If the password expires, you will be unable to log in and manage components. Additionally, any task or
API call that requires the administrative password to execute will fail. If your password expires, see
Knowledge Base article 70691 NSX-T admin password expired.
Procedure
You can set the expiration period for between 1 and 9999 days.
Note Alternatively, you can use API commands to set the admin password expiration period.
3 (Optional) You can disable password expiry so that the password never expires.
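For example, the following NSX Manager CLI commands manage the admin password expiration (a sketch; the command names are assumptions based on the standard NSX-T CLI, so verify them in the NSX-T Command-Line Interface Reference):

set user admin password-expiration 120    # expire the admin password after 120 days
clear user admin password-expiration      # disable password expiration for the admin user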
5 Installing NSX-T Data Center on vSphere
You can install the NSX-T Data Center components, NSX Manager and NSX Edge using the UI or CLI.
Make sure that you have the supported vSphere version. See vSphere support.
Cloud Service Manager is a virtual appliance that uses NSX-T Data Center components and integrates
them with your public cloud.
Prerequisites
n Verify that the system requirements are met. See System Requirements.
n Verify that the required ports are open. See Ports and Protocols.
n Verify that you have the IP address and gateway, DNS server IP addresses, domain search list, and
the NTP server IP address for the NSX Manager to use.
n If you do not already have one, create the target VM port group network. Place the NSX-T Data Center appliances on a management VM network.
If you have multiple management networks, you can add static routes to the other networks from the
NSX-T Data Center appliance.
1 Locate the NSX-T Data Center OVA file on the VMware download portal.
2 In the vSphere Client, select the host or host cluster on which to install NSX-T Data Center.
3 Right-click and select Deploy OVF template to start the installation wizard.
4 Enter the download OVA URL or navigate to the OVA file, and click Next.
5 Enter a name and a location for the NSX Manager VM, and click Next.
The name you enter appears in the vSphere and vCenter Server inventory.
6 Select a compute resource for the NSX Manager appliance, and click Next.
u To install on an ESXi host managed by vCenter, select a host on which to deploy the NSX Manager appliance.
u To install on a standalone ESXi host, select the host on which to deploy the NSX Manager
appliance.
7 Review and verify the OVF template details, and click Next.
The Description panel on the right side of the wizard shows the details of selected configuration.
d Click Next.
11 Select the port group or destination network for the NSX Manager.
13 Click Next.
The following steps are all located in the Customize Template section of the Deploy OVF Template
wizard.
14 In the Application section, enter the system root, CLI admin, and audit passwords for the NSX
Manager. The root and admin credentials are mandatory fields.
n At least 12 characters
n Default password complexity rules are enforced by the following Linux PAM module arguments:
n retry=3: The maximum number of times a new password can be entered, for this argument
at the most 3 times, before returning with an error.
n minlen=12: The minimum acceptable size for the new password. In addition to the number of
characters in the new password, credit (of +1 in length) is given for each different kind of
character (other, upper, lower and digit).
n difok=0: The minimum number of bytes that must be different in the new password. Indicates
similarity between the old and new password. With a value 0 assigned to difok, there is no
requirement for any byte of the old and new password to be different. An exact match is
allowed.
n lcredit=1: The maximum credit for having lower case letters in the new password. If you
have less than or 1 lower case letter, each letter will count +1 towards meeting the current
minlen value.
n ucredit=1: The maximum credit for having upper case letters in the new password. If you
have less than or 1 upper case letter each letter will count +1 towards meeting the current
minlen value.
n dcredit=1: The maximum credit for having digits in the new password. If you have less than
or 1 digit, each digit will count +1 towards meeting the current minlen value.
n ocredit=1: The maximum credit for having other characters in the new password. If you have
less than or 1 other characters, each character will count +1 towards meeting the current
minlen value.
Note For more details on Linux PAM module to check the password against dictionary words,
refer to the man page.
15 In the Optional parameters section, leave the password fields blank. This is to avoid the risk of compromising passwords set for VMC roles by a user who has access to the vCenter Server. When deploying VMC for NSX-T Data Center, this field is used internally to set passwords for the Cloud Admin and Cloud Audit roles.
16 In the Network Properties section, enter the hostname of the NSX Manager.
Note The host name must be a valid domain name. Ensure that each part of the host name (domain/subdomain) that is separated by a dot starts with an alphabetic character.
17 Select a Rolename for the appliance. The default role is NSX Manager.
n To install a Global Manager appliance for a Federation deployment, select the NSX Global
Manager role.
n To install a Cloud Service Manager (CSM) appliance for an NSX Cloud deployment, select the
nsx-cloud-service-manager role.
18 (Required fields) Enter the default gateway, management network IPv4, and management network
netmask.
Important If you leave the Management Network IPv4 field blank without entering a static IP
address, no IP address is assigned to the NSX Manager during deployment of the appliance. You
cannot access the NSX Manager when it powers on. The workaround is to re-deploy the NSX
Manager appliance.
19 In the DNS section, enter the DNS Server list and Domain Search list.
Optionally, you can enable SSH service and allow root SSH login. (Not recommended.)
21 Verify that all your custom OVF template specification is accurate and click Finish to initiate the
installation.
22 From the vSphere Client, open NSX Manager VM console to track the boot process.
23 After the NSX Manager boots, log in to the CLI as admin and run the get interface eth0
command to verify that the IP address was applied as expected.
24 Enter the get services command to verify that all the services are running.
n The NSX Manager can ping the hypervisor hosts that are in the same network as the NSX
Manager using the management interface.
n The NSX Manager can ping its DNS server and its NTP server.
n If you enabled SSH, make sure that you can SSH to your NSX Manager.
If connectivity is not established, make sure that the network adapter of the virtual appliance is in the
proper network or VLAN.
What to do next
Log in to the NSX Manager from a supported web browser. See Log In to the Newly Created NSX
Manager .
By default, nsx_isSSHEnabled and nsx_allowSSHRootLogin are both disabled for security reasons. When
they are disabled, you cannot SSH or log in to the NSX Manager command line. If you enable
nsx_isSSHEnabled but not nsx_allowSSHRootLogin, you can SSH to NSX Manager but you cannot log in
as root.
Prerequisites
n Verify that the system requirements are met. See System Requirements.
n Verify that the required ports are open. See Ports and Protocols.
n Verify that you have the IP address and gateway, DNS server IP addresses, domain search list, and
the NTP server IP address for the NSX Manager to use.
n If you do not already have one, create the target VM port group network. Place the NSX-T Data
Center appliances on a management VM network.
If you have multiple management networks, you can add static routes to the other networks from the
NSX-T Data Center appliance.
Procedure
The process depends on whether the host is standalone or managed by vCenter Server.
Note On a standalone host, if you enter an incorrect role in the nsx_role property, then the
appliance is deployed in the NSX Manager role.
n Windows example:
--network=<network name> \
--acceptAllEulas \
--noSSLVerify \
--diskMode=thin \
--powerOn \
--prop:"nsx_role=NSX Manager" \
--prop:"nsx_ip_0=10.168.110.75" \
--prop:"nsx_netmask_0=255.255.255.0" \
--prop:"nsx_gateway_0=10.168.110.1" \
--prop:"nsx_dns1_0=10.168.110.10" \
--prop:"nsx_domain_0=corp.local" \
--prop:"nsx_ntp_0=10.168.110.10" \
--prop:"nsx_isSSHEnabled=<True|False>" \
--prop:"nsx_allowSSHRootLogin=<True|False>" \
--prop:"nsx_passwd_0=<password>" \
--prop:"nsx_cli_passwd_0=<password>" \
--prop:"nsx_cli_audit_passwd_0=<password>" \
--prop:"nsx_hostname=nsx-manager" \
<nsx-unified-appliance-release>.ova \
vi://root:<password>@10.168.110.51
Note The above Windows code block uses the backslash (\) to indicate the continuation of
the command line. In actual use, omit the backslash and put the entire command in a single
line.
Note In the above example, 10.168.110.51 is the IP address of the host machine where
NSX Manager is to be deployed.
n Linux example:
mgrformfactor="small"
ipAllocationPolicy="fixedPolicy"
mgrdatastore="QNAP-Share-VMs"
mgrnetwork="Management-VLAN-210"
mgrname01="nsx-manager-01"
mgrhostname01="nsx-manager-01"
mgrip01="192.168.210.121"
mgrnetmask="255.255.255.0"
mgrgw="192.168.210.254"
mgrdns="192.168.110.10"
mgrntp="192.168.210.254"
mgrpasswd="<password>"
mgrssh="<True|False>"
mgrroot="<True|False>"
logLevel="trivia"
mgresxhost01="192.168.110.113"
--acceptAllEulas \
--allowExtraConfig \
--ipProtocol=IPv4 \
--ipAllocationPolicy=$ipAllocationPolicy \
--datastore=$mgrdatastore \
--network=$mgrnetwork \
--name=$mgrname01 \
--prop:nsx_hostname=$mgrhostname01 \
--prop:nsx_role="NSX Manager" \
--prop:nsx_ip_0=$mgrip01 \
--prop:nsx_netmask_0=$mgrnetmask \
--prop:nsx_gateway_0=$mgrgw \
--prop:nsx_dns1_0=$mgrdns \
--prop:nsx_ntp_0=$mgrntp \
--prop:nsx_passwd_0=$mgrpasswd \
--prop:nsx_cli_passwd_0=$mgrpasswd \
--prop:nsx_cli_audit_passwd_0=$mgrpasswd \
--prop:nsx_isSSHEnabled=$mgrssh \
--prop:nsx_allowSSHRootLogin=$mgrroot \
--X:logFile=nsxt-manager-ovf.log \
--X:logLevel=$logLevel \
/home/<user/nsxt-autodeploy/<nsx-unified-appliance-release>.ova \
vi://root:<password>@$mgresxhost01
n Windows example:
C:\Users\Administrator\Downloads>ovftool
--name=nsx-manager \
--X:injectOvfEnv \
--X:logFile=ovftool.log \
--allowExtraConfig \
--datastore=ds1 \
--network="management" \
--acceptAllEulas \
--noSSLVerify \
--diskMode=thin \
--powerOn \
--prop:"nsx_role=NSX Manager" \
--prop:"nsx_ip_0=10.168.110.75" \
--prop:"nsx_netmask_0=255.255.255.0" \
--prop:"nsx_gateway_0=10.168.110.1" \
--prop:"nsx_dns1_0=10.168.110.10" \
--prop:"nsx_domain_0=corp.local" \
--prop:"nsx_ntp_0=10.168.110.10" \
--prop:"nsx_isSSHEnabled=<True|False>" \
--prop:"nsx_allowSSHRootLogin=<True|False>" \
--prop:"nsx_passwd_0=<password>" \
--prop:"nsx_cli_passwd_0=<password>" \
--prop:"nsx_hostname=nsx-manager" \
<nsx-unified-appliance-release>.ova \
vi://[email protected]:<password>@10.168.110.24/?ip=10.168.110.51
Note The above Windows code block uses the backslash (\) to indicate the continuation of
the command line. In actual use, omit the backslash and put the entire command in a single
line.
n Linux example:
mgrformfactor="small"
ipAllocationPolicy="fixedPolicy"
mgrdatastore="QNAP-Share-VMs"
mgrnetwork="Management-VLAN-210"
mgrname01="nsx-manager-01"
mgrhostname01="nsx-manager-01"
mgrip01="192.168.210.121"
mgrnetmask="255.255.255.0"
mgrgw="192.168.210.254"
mgrdns="192.168.110.10"
mgrntp="192.168.210.254"
mgrpasswd="<password>"
mgrssh="<True|False>"
mgrroot="<True|False>"
logLevel="trivia"
vcadmin="[email protected]"
vcpass="<password>"
vcip="192.168.110.151"
mgresxhost01="192.168.110.113"
--prop:nsx_ip_0=$mgrip01 \
--prop:nsx_netmask_0=$mgrnetmask \
--prop:nsx_gateway_0=$mgrgw \
--prop:nsx_dns1_0=$mgrdns \
--prop:nsx_ntp_0=$mgrntp \
--prop:nsx_passwd_0=$mgrpasswd \
--prop:nsx_cli_passwd_0=$mgrpasswd \
--prop:nsx_cli_audit_passwd_0=$mgrpasswd \
--prop:nsx_isSSHEnabled=$mgrssh \
--prop:nsx_allowSSHRootLogin=$mgrroot \
--X:logFile=nsxt-manager-ovf.log \
--X:logLevel=$logLevel \
/home/<user/nsxt-autodeploy/<nsx-unified-appliance-release>.ova \
vi://$vcadmin:$vcpass@$vcip/?ip=$mgresxhost01
2 For optimal performance, reserve memory for the NSX Manager appliance.
Set the reservation to ensure that NSX Manager has sufficient memory to run efficiently. See NSX
Manager VM and Host Transport Node System Requirements.
3 From the vSphere Client, open NSX Manager VM console to track the boot process.
4 After the NSX Manager boots, log in to the CLI as admin and run the get interface eth0
command to verify that the IP address was applied as expected.
n The NSX Manager can ping the hypervisor hosts that are in the same network as the NSX
Manager using the management interface.
n The NSX Manager can ping its DNS server and its NTP server.
n If you enabled SSH, make sure that you can SSH to your NSX Manager.
If connectivity is not established, make sure that the network adapter of the virtual appliance is in the
proper network or VLAN.
What to do next
Log in to the NSX Manager from a supported web browser. See Log In to the Newly Created NSX
Manager .
Important If this configuration is not performed after deploying the appliance and you forget the root, admin, or audit password, resetting it is not possible.
Procedure
2 Change the value for the parameter GRUB_HIDDEN_TIMEOUT in the /etc/default/grub file.
GRUB_HIDDEN_TIMEOUT=2
update-grub
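A sketch of this change from a root shell on the NSX Manager appliance, assuming the GRUB_HIDDEN_TIMEOUT parameter already exists in /etc/default/grub:

sed -i 's/^GRUB_HIDDEN_TIMEOUT=.*/GRUB_HIDDEN_TIMEOUT=2/' /etc/default/grub   # show the GRUB menu for 2 seconds at boot
update-grub                                                                    # regenerate the GRUB configuration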
After you install NSX Manager, you can join the Customer Experience Improvement Program (CEIP) for
NSX-T Data Center. See Customer Experience Improvement Program in the NSX-T Data Center
Administration Guide for more information about the program, including how to join or leave the program
later.
Prerequisites
Verify that NSX Manager is installed. See Install NSX Manager and Available Appliances.
Procedure
3 Select whether to join the VMware Customer Experience Improvement Program (CEIP).
4 Click Save.
NSX-T Data Center polls compute managers to collect cluster information from vCenter Server.
When you add a vCenter Server compute manager, you must provide a vCenter Server user's
credentials. You can provide the vCenter Server administrator's credentials, or create a role and a user
specifically for NSX-T Data Center and provide this user's credentials. This role must have the following
vCenter Server privileges:
Extension.Register extension
Extension.Unregister extension
Extension.Update extension
Sessions.Message
Sessions.Validate session
Host.Configuration.Maintenance
Host.Configuration.NetworkConfiguration
Tasks
Scheduled task
Global.Cancel task
Virtual Machine.Configuration
Virtual Machine.Provisioning
Virtual Machine.Inventory
Network.Assign network
vApp
For more information about vCenter Server roles and privileges, see the vSphere Security document.
Prerequisites
n Verify that you use the supported vSphere version. See Supported vSphere version.
n Verify that you use the recommended number of compute managers. See https://configmax.vmware.com/home.
Note NSX-T Data Center does not support the same vCenter Server to be registered with more than
one NSX Manager.
Procedure
1 From your browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.
2 Select System > Fabric > Compute Managers > Add.
Option Description
Name and Description Type the name to identify the vCenter Server.
You can optionally describe any special details such as, the number of clusters in
the vCenter Server.
HTTPS Port of Reverse Proxy The default port is 443. If you use another port, verify that the port is open on all
the NSX Manager appliances.
Set the reverse proxy port to register the compute manager in NSX-T.
SHA-256 Thumbprint Type the vCenter Server SHA-256 thumbprint algorithm value.
Enable Trust Supported only on vCenter Server 7.0 and later versions.
Enable this field to trust compute manager for authentication.
If you left the thumbprint value blank, you are prompted to accept the server provided thumbprint.
After you accept the thumbprint, it takes a few seconds for NSX-T Data Center to discover and
register the vCenter Server resources.
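If you prefer to supply the thumbprint yourself instead of accepting the prompted value, one way to obtain the vCenter Server SHA-256 certificate thumbprint is with openssl (a sketch; the vCenter address is a placeholder):

echo | openssl s_client -connect <vcenter-fqdn>:443 2>/dev/null | openssl x509 -noout -fingerprint -sha256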
Note If the FQDN, IP, or thumbprint of the compute manager changes after registration, edit the compute manager and enter the new values.
4 If the progress icon changes from In progress to Not registered, perform the following steps to
resolve the error.
a Select the error message and click Resolve. One possible error message is the following:
Results
It takes some time to register the compute manager with vCenter Server and for the connection status to
appear as UP.
You can click the compute manager's name to view the details, edit the compute manager, or to manage
tags that apply to the compute manager.
After the vCenter Server is successfully registered, do not power off and delete the NSX Manager VM
without deleting the compute manager first. Otherwise, when you deploy a new NSX Manager, you will
not be able to register the same vCenter Server again. You will get the error that the vCenter Server is
already registered with another NSX Manager.
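The compute manager can also be registered through the NSX-T API instead of the UI. A minimal sketch using curl follows; the endpoint and body fields reflect the standard compute-manager API, and the addresses and credentials are placeholders:

# Register a vCenter Server as a compute manager
curl -k -u admin:'<password>' -X POST https://<nsx-manager-ip>/api/v1/fabric/compute-managers \
-H 'Content-Type: application/json' \
-d '{
"server": "<vcenter-fqdn>",
"origin_type": "vCenter",
"credential": {
"credential_type": "UsernamePasswordLoginCredential",
"username": "[email protected]",
"password": "<password>",
"thumbprint": "<vcenter-sha256-thumbprint>"
}
}'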
Note After a vCenter Server (VC) compute manager is successfully added, it cannot be removed if you
successfully performed any of the following actions:
n Transport nodes are prepared using VDS that is dependent on the VC.
n Service VMs deployed on a host or a cluster in the VC using NSX service insertion.
n You use the NSX Manager UI to deploy Edge VMs, NSX Intelligence VM, or NSX Manager nodes on
a host or a cluster in the VC.
If you try to perform any of these actions and you encounter an error (for example, installation failed), you
can remove the VC if you have not successfully performed any of the actions listed above.
If you have successfully prepared any transport node using VDS that is dependent on the VC or deployed
any VM, you can remove the VC after you have done the following:
n Unprepare all transport nodes. If uninstalling a transport node fails, you must force delete the
transport node.
n Undeploy all service VMs, any NSX Intelligence VM, all NSX Edge VMs and all NSX Manager nodes.
The undeployment must be successful or in a failed state.
This restriction applies to a fresh installation of NSX-T Data Center 3.0 as well as an upgrade.
After the new nodes are deployed, these nodes connect to the NSX Manager node to form a cluster. Ensure that the number of clustered NSX Manager nodes is three.
Note Deploying multiple NSX Manager nodes using the UI is supported only on ESXi hosts managed by
vCenter Server.
All the repository details and the password of the first deployed NSX Manager node are synchronized
with the newly deployed nodes in the cluster.
Prerequisites
n Verify that an NSX Manager node is installed. See Install NSX Manager and Available Appliances.
n Verify that the system requirements are met. See System Requirements.
n Verify that the required ports are open. See Ports and Protocols.
n Verify that you have the IP address and gateway, DNS server IP addresses, domain search list, and
the NTP server IP address for the NSX Manager to use.
n If you do not already have one, create the target VM port group network. Place the NSX-T Data
Center appliances on a management VM network.
If you have multiple management networks, you can add static routes to the other networks from the
NSX-T Data Center appliance.
Procedure
2 Select System > Appliances > Overview > Add NSX Appliance.
Option Description
Host Name or FQDN Enter a name for the NSX Manager node.
Management Gateway Enter a gateway IP address to be used by the NSX Manager node.
DNS Servers Enter DNS server IP addresses to be used by the NSX Manager node.
NTP Server Enter an NTP server IP address to be used by the NSX Manager node.
Node Size Select the form factor to deploy the NSX Manager node from the following options:
n Small (4 vCPU, 16 GB RAM, 300 GB storage)
n Medium (6 vCPU, 24 GB RAM, 300 GB storage)
n Large (12 vCPU, 48 GB RAM, 300 GB storage)
Option Description
Compute Manager Select the vCenter Server to provision compute resources for deploying the NSX
Manager node.
Resource Pool Select either a resource pool or a host for the node from the drop-down menu.
Host If you did not select a resource pool, select the host to deploy the NSX Manager
VIBs.
Datastore Select a datastore for the node files from the drop-down menu.
Option Description
Virtual Disk Format n For NFS datastores, select a virtual disk format from the available provisioned
policies on the underlying datastore.
n With hardware acceleration, Thin Provision, Thick Provision Lazy
Zeroed, and Thick Provision Eager Zeroed formats are supported.
n Without hardware acceleration, only Thin Provision format is supported.
n For VMFS datastores, Thin Provision, Thick Provision Lazy Zeroed, and
Thick Provision Eager Zeroed formats are supported.
n For vSAN datastores, you cannot choose a virtual disk format because the
format is defined by the VM storage policy.
n The actual disk format is determined by the vSAN storage policies. The
default virtual disk format for vSAN is Thin Provision. You can change the
vSAN storage policies to set a percentage of the virtual disk that must be
thick provisioned.
By default, the virtual disk for an NSX Manager node is prepared in the Thin
Provision format.
Note You can provision each NSX Manager node with a different disk format
based on which policies are provisioned on the datastore.
Option Description
Enable SSH Toggle the button to allow an SSH login to the new NSX Manager node.
Enable Root Access Toggle the button to allow the root access to the new NSX Manager node.
Option Description
CLI User name and Password Set the CLI password and password confirmation for the new node.
Confirmation Your password must comply with the password strength restrictions.
n At least 12 characters
n At least one lower-case letter
n At least one upper-case letter
n At least one digit
n At least one special character
n At least five different characters
n Default password complexity rules are enforced by the following Linux PAM
module arguments:
n retry=3: The maximum number of times a new password can be entered,
for this argument at the most 3 times, before returning with an error.
n minlen=12: The minimum acceptable size for the new password. In addition
to the number of characters in the new password, credit (of +1 in length) is
given for each different kind of character (other, upper, lower and digit).
n difok=0: The minimum number of bytes that must be different in the new
password. Indicates similarity between the old and new password. With a
value 0 assigned to difok, there is no requirement for any byte of the old
and new password to be different. An exact match is allowed.
n lcredit=1: The maximum credit for having lower case letters in the new
password. If you have less than or 1 lower case letter, each letter will count
+1 towards meeting the current minlen value.
n ucredit=1: The maximum credit for having upper case letters in the new
password. If you have less than or 1 upper case letter each letter will count
+1 towards meeting the current minlen value.
n dcredit=1: The maximum credit for having digits in the new password. If
you have less than or 1 digit, each digit will count +1 towards meeting the
current minlen value.
n ocredit=1: The maximum credit for having other characters in the new
password. If you have less than or 1 other characters, each character will
count +1 towards meeting the current minlen value.
n enforce_for_root: The password is set for the root user.
Note For more details on Linux PAM module to check the password against
dictionary words, refer to the man page.
The CLI user name is already set to admin.
Option Description
Root Password and Password Set the root password and password confirmation for the new node.
Confirmation Your password must comply with the password strength restrictions.
n At least 12 characters
n At least one lower-case letter
n At least one upper-case letter
n At least one digit
n At least one special character
n At least five different characters
n Default password complexity rules are enforced by the following Linux PAM
module arguments:
n retry=3: The maximum number of times a new password can be entered,
for this argument at the most 3 times, before returning with an error.
n minlen=12: The minimum acceptable size for the new password. In addition
to the number of characters in the new password, credit (of +1 in length) is
given for each different kind of character (other, upper, lower and digit).
n difok=0: The minimum number of bytes that must be different in the new
password. Indicates similarity between the old and new password. With a
value 0 assigned to difok, there is no requirement for any byte of the old
and new password to be different. An exact match is allowed.
n lcredit=1: The maximum credit for having lower case letters in the new
password. If you have less than or 1 lower case letter, each letter will count
+1 towards meeting the current minlen value.
n ucredit=1: The maximum credit for having upper case letters in the new
password. If you have less than or 1 upper case letter each letter will count
+1 towards meeting the current minlen value.
n dcredit=1: The maximum credit for having digits in the new password. If
you have less than or 1 digit, each digit will count +1 towards meeting the
current minlen value.
n ocredit=1: The maximum credit for having other characters in the new
password. If you have less than or 1 other characters, each character will
count +1 towards meeting the current minlen value.
n enforce_for_root: The password is set for the root user.
Note For more details on Linux PAM module to check the password against
dictionary words, refer to the man page.
DNS Servers Enter the DNS server IP address available in the vCenter Server.
6 Click Save.
8 Click Finish.
The new nodes are deployed. You can track the deployment process on the System > Appliances >
Overview page or the vCenter Server.
9 Wait for 10-15 minutes for the deployment, cluster formation, and repository synchronization to
complete.
All the repository details and the password of the first deployed NSX Manager node are synchronized
with the newly deployed nodes in the cluster.
Note If the master node reboots while the deployment of a new node is in progress, the new node might fail to register with the cluster and display the Failed to Register message on its thumbnail. In that case, delete the node and redeploy it manually on the cluster.
10 After the NSX Manager boots, log in to the CLI as admin and run the get interface eth0
command to verify that the IP address was applied as expected.
11 Enter the get services command to verify that all the services are running.
If the services are not running, wait for all the services to start running.
Note The following services are not running by default: liagent, migration-coordinator, and snmp. You can start them from the NSX CLI as needed.
Note Start the migration-coordinator service on only one NSX Manager node. See the NSX-T Data Center Migration Coordinator Guide.
n For SNMPv1/SNMPv2, configure and start the snmp service.
n For SNMPv3, configure an SNMPv3 user before starting the service. The maximum character limit for user_name is 32. Ensure that your passwords meet PAM constraints. If you want to change the default engine ID, use the corresponding CLI command.
NSX-T Data Center supports SHA1 and AES128 as the authentication and privacy protocols. You can also use API calls to set up SNMPv3. For more information, see the NSX-T Data Center API Guide.
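The specific commands are not reproduced above; as a minimal sketch, the NSX CLI start service command (run on the NSX Manager, and assuming these service names) brings up the optional services:

start service liagent
start service snmp
start service migration-coordinator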
12 Log in to the first deployed NSX Manager node and enter the get cluster status command to
verify that the nodes are successfully added to the cluster.
Verify that the new NSX Manager nodes have the required connectivity:
n The NSX Manager can ping the hypervisor hosts that are in the same network as the NSX Manager using the management interface.
n The NSX Manager can ping its DNS server and its NTP server.
n If you enabled SSH, make sure that you can SSH to your NSX Manager.
If connectivity is not established, make sure that the network adapter of the virtual appliance is in the proper network or VLAN.
What to do next
Configure NSX Edge. See Install an NSX Edge on ESXi Using the vSphere GUI.
Prerequisites
Procedure
3 On the NSX Manager node, run the get certificate api thumbprint command.
The command output is a string of numbers that is unique to this NSX Manager.
4 Run the get cluster config command to get the first deployed NSX Manager cluster ID.
Note You must run the join command on the newly deployed NSX Manager node.
n Cluster ID
n User name
n Password
n Certificate thumbprint
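As a hedged sketch, the join command run on the newly deployed node takes the first manager's IP address, the cluster ID, the admin credentials, and the API thumbprint (all values are placeholders):

join <first-manager-ip> cluster-id <cluster-id> username admin password <admin-password> thumbprint <api-thumbprint>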
Repeat step 5.
7 Verify the cluster status by running the get cluster status command on your hosts.
8 (NSX Manager UI) Select System > Appliances > Overview and verify the cluster connectivity.
What to do next
Create a transport zone. See Create a Standalone Host or Bare Metal Server Transport Node.
NSX Managers in a cluster become part of an HTTPS group to service API and UI requests. The leader node of the cluster assumes ownership of the virtual IP (VIP) that is set for the cluster and services any API and UI request. Any API and UI request coming in from clients is directed to the leader node.
Note When assigning Virtual IP, all the NSX Manager VMs in the cluster must be configured in the same
subnet.
If the leader node that owns the VIP becomes unavailable, NSX-T elects a new leader. The new leader owns the VIP and sends out a gratuitous ARP packet advertising the new VIP-to-MAC address mapping. After a new leader node is elected, new API and UI requests are sent to the new leader node.
Failover of VIP to a new leader node of the cluster might take a few minutes to become functional. If the
VIP fails over to a new leader node because the previous leader node became unavailable,
reauthenticate credentials so that API requests are directed to the new leader node.
Note VIP is not designed to serve as a load-balancer and you cannot use it if you enable the vIDM
External Load Balancer Integration from System > Users > Configuration. Do not set up a VIP if you
want to use the External Load Balancer from vIDM. See Configure VMware Identity Manager Integration
in the NSX-T Data Center Administration Guide for more details.
Procedure
4 Enter the VIP for the cluster. Ensure that VIP is part of the same subnet as the other management
nodes.
5 Click Save.
6 To verify the cluster status and the API leader of the HTTPS group, enter the NSX Manager CLI
command get cluster status verbose in the NSX Manager console or over SSH.
Members:
    UUID                                   FQDN          IP                STATUS
    cdb93642-ccba-fdf4-8819-90bf018cd727   nsx-manager   192.196.197.84    UP
    51a13642-929b-8dfc-3455-109e6cc2a7ae   nsx-manager   192.196.198.156   UP
    d0de3642-d03f-c909-9cca-312fd22e486b   nsx-manager   192.196.198.54    UP
Leaders:
    SERVICE   LEADER                                 LEASE VERSION
    api       cdb93642-ccba-fdf4-8819-90bf018cd727   8
Results
Any API request to NSX-T Data Center is directed to the virtual IP address of the cluster, which is owned by the leader node. The leader node then forwards the request to the other components of the appliance.
6
Installing NSX-T Data Center on
KVM
NSX-T Data Center supports KVM in two ways: as a host transport node and as a host for NSX Manager.
Make sure that you have the supported KVM versions. See NSX Manager VM and Host Transport Node
System Requirements.
n Set Up KVM
Set Up KVM
If you plan to use KVM as a transport node or as a host for NSX Manager guest VM, but you do not
already have a KVM setup, you can use the procedure described here.
Note The Geneve encapsulation protocol uses UDP port 6081. You must allow this port access in the
firewall on the KVM host.
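For example, a minimal way to open the port, assuming firewalld on RHEL/CentOS or ufw on Ubuntu is managing the host firewall:

# RHEL/CentOS with firewalld
firewall-cmd --permanent --add-port=6081/udp
firewall-cmd --reload

# Ubuntu with ufw
ufw allow 6081/udp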
Procedure
3 Add "kernel* redhat-release*" to the YUM exclude configuration to prevent any unsupported RHEL upgrades.
If you plan to run NSX-T Data Center Container Plug-in, which has specific compatibility
requirements, exclude the container-related modules as well.
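As a sketch, the exclusion is typically added to /etc/yum.conf; preserve any packages already present on the exclude line:

# /etc/yum.conf
[main]
exclude=kernel* redhat-release*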
SUSE Linux Enterprise Server Start YaST and select Virtualization > Install Hypervisor and Tools. YaST allows you to automatically enable and configure the network bridge.
5 For NSX Manager to automatically install NSX software packages on the KVM host, prepare the network configuration of the uplink (data) interface.
The KVM host can have multiple network interfaces. For the network interface that you plan to
provide as an uplink interface (data interface) for NSX-T purposes, it is important to have network
configuration files correctly populated. NSX-T looks at these network configuration files to create
NSX-T specific network devices. On Ubuntu, populate /etc/network/interfaces file. On RHEL,
CentOS, or SUSE, populate the /etc/sysconfig/network-scripts/ifcfg-$uplinkdevice file.
In the following examples, interface "ens32" is the uplink device (data interface). Depending on your
deployment environment, this interface can use DHCP or static IP settings.
Linux Distribution Network Configuration
Ubuntu Edit /etc/network/interfaces:
auto eth0
iface eth0 inet manual
auto ens32
iface ens32 inet manual
SUSE Linux Enterprise Server If a SLES host already exists, verify that the data interface is already configured on the host. If you do not have a pre-configured SLES host, see the following reference configuration for the management and data interface.
Edit /etc/sysconfig/network/ifcfg-ens32:
DEVICE="ens32"
NAME="ens32"
UUID="<UUID>"
BOOTPROTO="none"
LLADDR="<HWADDR>"
STARTMODE="yes"
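For RHEL or CentOS, a comparable hedged sketch of /etc/sysconfig/network-scripts/ifcfg-ens32 (values are placeholders; DHCP or static settings depend on your environment):

DEVICE="ens32"
TYPE="Ethernet"
NAME="ens32"
UUID="<UUID>"
BOOTPROTO="none"
HWADDR="<HWADDR>"
ONBOOT="yes"
NM_CONTROLLED="no"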
6 Restart the networking service (systemctl restart network) or reboot the Linux server for the networking changes to take effect.
7 After the KVM host is configured as a transport node, the bridge interface 'nsx-vtep0.0' is
automatically created by NSX-T.
In RHEL, the host NSX agent (nsxa) creates a configuration file named ifcfg-nsx-vtep0.0 that has
entries such as the following:
DEVICE=nsx-vtep0.0
BOOTPROTO=static
NETMASK=<subnet mask>
IPADDR=<IP address>
MTU=1600
ONBOOT=yes
USERCTL=no
NM_CONTROLLED=no
In SUSE,
DEVICE=nsx-vtep0.0
BOOTPROTO=static
NETMASK=255.255.255.0
IPADDR=192.168.13.119
MACADDR=ae:9d:b7:ca:20:4a
MTU=1600
USERCTL=no
STARTMODE=auto
KVM guest VM management is beyond the scope of this guide. However, here are some simple KVM CLI
commands to get you started.
To manage your guest VMs in the KVM CLI, use the virsh commands. Following are some common
virsh commands. Refer to the KVM documentation for additional information.
# List running
virsh list
# List all
virsh list --all
# Control instances
virsh start <instance>
virsh shutdown <instance>
virsh destroy <instance>
virsh undefine <instance>
virsh suspend <instance>
virsh resume <instance>
In the Linux CLI, the ifconfig command shows the vnetX interface, which represents the interface
created for the guest VM. If you add additional guest VMs, additional vnetX interfaces are added.
ifconfig
...
The QCOW2 installation procedure uses guestfish, a Linux command-line tool to write virtual machine
settings into the QCOW2 file.
Prerequisites
n Verify that the password in the guestinfo adheres to the password complexity requirements so that
you can log in after installation. See Chapter 4 NSX Manager Installation.
n Familiarize yourself with the NSX Manager resource requirements. See NSX Manager VM and Host
Transport Node System Requirements.
n If you plan to install Ubuntu OS, it is recommended to install Ubuntu version 18.04 before installing NSX Manager on the KVM host.
Important
n If you are deploying an NSX Manager on a KVM host running Ubuntu 18.04 for a production environment, ensure that the KVM host is not running as a virtual machine on an ESXi host. However, if you want to deploy an NSX Manager in a nested KVM environment for proof-of-concept purposes, deploy the NSX Manager in the QEMU user space by using the qemu virt type (virt-type qemu).
n Do not deploy NSX Manager on a single disk. If you install NSX Manager on a single disk, some
startup services might fail to come up.
Procedure
1 Download NSX Manager QCOW2 images (for primary and secondary disk) from the nsx-unified-
appliance > exports > kvm folder.
2 Make three copies of the images on the KVM machine that is going to run the NSX Manager, using SCP or rsync.
4 In the same directory where you saved the QCOW2 images, create a guestinfo.xml file for each primary disk image (three files in total) and populate it with the NSX Manager VM's properties. You do not need to create any files for the secondary disk image.
Property Description
n nsx_cli_passwd_0
n nsx_cli_audit_passwd_0
n nsx_passwd_0
Your passwords must comply with the same password strength and complexity restrictions described for the CLI and root passwords in Chapter 4 NSX Manager Installation: at least 12 characters, at least one lowercase letter, one uppercase letter, one digit, one special character, and at least five different characters, with the default complexity rules enforced by the Linux PAM module.
nsx_hostname Enter the host name for the NSX Manager. The host name must be a valid domain name. Ensure that each dot-separated part of the host name (domain/subdomain) starts with an alphabetic character.
nsx_role n nsx-manager: Select this role to install the NSX Manager appliance.
n nsx-cloud-service-manager: Select this role to install the Cloud Service
Manager appliance for NSX Cloud. See Overview of Deploying NSX Cloud for
details.
n nsx-global-manager: Select this role to install the Global Manager appliance for
Federation. See Chapter 13 Getting Started with Federation for details.
nsx_isSSHEnabled You can enable or disable this property. If enabled, you can log in to the NSX
Manager using SSH.
nsx_allowSSHRootLogin You can enable or disable this property. If enabled, you can log in to the NSX
Manager using SSH as the root user. To be able to use this property,
nsx_isSSHEnabled must be enabled.
n nsx_dns1_0
n nsx_ntp_0
n nsx_domain_0
n nsx_gateway_0
n nsx_netmask_0
n nsx_ip_0
Enter the values for the default gateway, management network IPv4 address, management network netmask, DNS server, domain search list, and NTP IP address.
For example:
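A hedged sketch of such a guestinfo.xml, assuming the OVF environment XML format that the appliance reads; all property values shown are placeholders that you must replace for your environment:

<?xml version="1.0" encoding="UTF-8"?>
<Environment
   xmlns="http://schemas.dmtf.org/ovf/environment/1"
   xmlns:oe="http://schemas.dmtf.org/ovf/environment/1">
  <PropertySection>
    <Property oe:key="nsx_cli_passwd_0" oe:value="REPLACE_WITH_CLI_PASSWORD"/>
    <Property oe:key="nsx_cli_audit_passwd_0" oe:value="REPLACE_WITH_AUDIT_PASSWORD"/>
    <Property oe:key="nsx_passwd_0" oe:value="REPLACE_WITH_ROOT_PASSWORD"/>
    <Property oe:key="nsx_hostname" oe:value="nsx-manager1.example.com"/>
    <Property oe:key="nsx_role" oe:value="nsx-manager"/>
    <Property oe:key="nsx_isSSHEnabled" oe:value="True"/>
    <Property oe:key="nsx_allowSSHRootLogin" oe:value="True"/>
    <Property oe:key="nsx_dns1_0" oe:value="192.168.110.10"/>
    <Property oe:key="nsx_ntp_0" oe:value="192.168.110.10"/>
    <Property oe:key="nsx_domain_0" oe:value="example.com"/>
    <Property oe:key="nsx_gateway_0" oe:value="192.168.110.1"/>
    <Property oe:key="nsx_netmask_0" oe:value="255.255.255.0"/>
    <Property oe:key="nsx_ip_0" oe:value="192.168.110.19"/>
  </PropertySection>
</Environment>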
Note In the example, nsx_isSSHEnabled and nsx_allowSSHRootLogin are both enabled. When they
are disabled, you cannot SSH or log in to the NSX Manager command line. If you enable
nsx_isSSHEnabled but not nsx_allowSSHRootLogin, you can SSH to NSX Manager but you cannot log
in as root.
5 Use guestfish to write the guestinfo.xml file into the QCOW2 image.
Note After the guestinfo information is written into a QCOW2 image, the information cannot be
overwritten.
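A minimal sketch of the guestfish invocation; the image file name and the in-guest destination path /config/guestinfo are assumptions, so adjust them to your downloaded image:

sudo guestfish --rw -i -a nsx-unified-appliance-<version>.qcow2 upload guestinfo.xml /config/guestinfo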
The vCPU and RAM values are suitable for a large VM. The network name and portgroup name are
specific to your environment. The model must be virtio.
sudo virt-install \
--import \
--ram 48000 \
--vcpus 12 \
--name <manager-name> \
--disk path=<manager-qcow2-file-path>,bus=virtio,cache=none \
--disk path=<secondary-qcow2-file-path>,bus=virtio,cache=none \
--network [bridge=<bridge-name> or network=<network-name>],
portgroup=<portgroup-name>,model=virtio \
--noautoconsole \
--cpu mode=host-passthrough
Starting install...
Domain installation still in progress. Waiting for installation to complete.
sudo virt-install \
--import \
--ram 48000 \
--vcpus 12 \
--name <manager-name> \
--disk path=<manager-qcow2-file-path>,bus=virtio,cache=none \
--disk path=<secondary-qcow2-file-path>,bus=virtio,cache=none \
--network [bridge=<bridge-name> or network=<network-name>],
portgroup=<portgroup-name>,model=virtio \
--noautoconsole \
--cpu mode=host-passthrough,cache.mode=passthrough
Starting install...
Domain installation still in progress. Waiting for installation to complete.
Id Name State
---------------------------------
18 nsx-manager1 running
virsh console 18
Connected to domain nsx-manager1
Escape character is ^]
9 After the NSX Manager boots, log in to the CLI as admin and run the get interface eth0
command to verify that the IP address was applied as expected.
n The NSX Manager can ping the hypervisor hosts that are in the same network as the NSX
Manager using the management interface.
n The NSX Manager can ping its DNS server and its NTP server.
n If you enabled SSH, make sure that you can SSH to your NSX Manager.
If connectivity is not established, make sure that the network adapter of the virtual appliance is in the
proper network or VLAN.
To exit the NSX Manager console, press Ctrl+].
What to do next
Log in to the NSX Manager. See Log In to the Newly Created NSX Manager .
Prerequisites
n (RHEL and CentOS Linux) Before you install the third-party packages, run the following commands to
install the virtualization packages.
If you are not able to install the packages, you can manually install them by running the yum install glibc.i686 nspr command on a new installation.
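The exact package set is not reproduced above; a hedged sketch of installing the KVM virtualization packages on RHEL/CentOS (the group names are assumptions and may vary by release):

yum groupinstall "Virtualization Hypervisor" "Virtualization Client" \
    "Virtualization Platform" "Virtualization Tools"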
n (Ubuntu) Before you install the third-party packages, run the following commands to install the
virtualization packages.
apt install -y \
qemu-kvm \
libvirt-bin \
virtinst \
virt-manager \
virt-viewer \
ubuntu-vm-builder \
bridge-utils
n (SUSE Linux Enterprise Server) Before you install the third-party packages, run the following
commands to install the virtualization packages.
libcap-progs
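A hedged sketch for SLES; the kvm_server and kvm_tools patterns are assumptions based on standard SLES package patterns:

zypper install -t pattern kvm_server kvm_tools
zypper install libcap-progs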
Procedure
u On Ubuntu 18.04.2 LTS, run apt-get install <package_name> to install the following third-party
packages manually.
traceroute
python-mako
python-simplejson
python-unittest2
python-yaml
python-netaddr
python-netifaces
dkms
libc6-dev
libelf-dev
u On RHEL and CentOS Linux, run yum install <package_name> to install the third-party packages
manually.
If you manually prepare the host that is already registered to RHEL or CentOS, you do not need to
install third-party packages on the host.
RHEL and CentOS:
wget
PyYAML
libunwind
python-gevent
python-mako
python-netaddr
python-netifaces
redhat-lsb-core
tcpdump
u On SUSE, run zypper install <package_name> to install the third-party packages manually.
python-simplejson
python-PyYAML
python-netaddr
python-netifaces
lsb-release
libcap-progs
Procedure
1 Verify the current version of the Open vSwitch installed on the host.
ovs-vswitchd --version
Important If the existing Open vSwitch packages are a later or an earlier version than the supported one, you must replace them with the supported version.
b Download and copy the nsx-lcp file into the /tmp directory.
cd nsx-lcp-rhel77_x86_64/
ls | grep openvswitch
6 If the Open vSwitch packages are successfully deleted on the host, proceed to the NSX Manager UI
to prepare the KVM host as a transport node. If the Open vSwitch packages cannot be deleted, then
proceed to the next step.
7 If packages cannot be deleted, upgrade the existing Open vSwitch package version with the
supported one.
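As a hedged sketch on RHEL/CentOS (the package file names depend on the nsx-lcp bundle you extracted):

# List the currently installed Open vSwitch packages
rpm -qa | grep openvswitch

# Remove them, or force-upgrade to the packages shipped in the nsx-lcp bundle
rpm -e <openvswitch-package-name>
rpm -Uvh --nodeps --force openvswitch*.rpm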
Procedure
1 Verify the current version of the Open vSwitch installed on the host.
ovs-vswitchd --version
Important If the existing Open vSwitch packages are a later or an earlier version than the supported one, you must replace them with the supported version.
b Download and copy the nsx-lcp file into the /tmp directory.
nsx-lcp-3.0.0.0.0.14335404-linux64-sles12sp3.tar.gz
nsx-lcp-linux64-sles12sp3/
ls | grep openvswitch
6 If the Open vSwitch packages are successfully deleted on the host, proceed to the NSX Manager UI
to prepare the KVM host as a transport node. If the Open vSwitch packages cannot be deleted, then
proceed to the next step.
7 If packages cannot be deleted, upgrade the existing Open vSwitch package version with the
supported one.
Procedure
1 Verify the current version of the Open vSwitch installed on the host.
ovs-vswitchd --version
Important If the existing Open vSwitch packages are a later or an earlier version than the supported one, you must replace them with the supported version.
b Download and copy the nsx-lcp file into the /tmp directory.
cd nsx-lcp-xenial_amd64 OR
cd nsx-lcp-bionic_amd64
ls | grep openvswitch
6 If the Open vSwitch packages are successfully deleted on the host, proceed to the NSX Manager UI
to prepare the KVM host as a transport node. If the Open vSwitch packages cannot be deleted, then
proceed to the next step.
7 If packages cannot be deleted, upgrade the existing Open vSwitch package version with the
supported one.
Prerequisites
Procedure
3 On the NSX Manager node, run the get certificate api thumbprint command.
The command output is a string of numbers that is unique to this NSX Manager.
4 Run the get cluster config command to get the first deployed NSX Manager cluster ID.
Note You must run the join command on the newly deployed NSX Manager node.
n Cluster ID
n User name
n Password
n Certificate thumbprint
Repeat step 5.
7 Verify the cluster status by running the get cluster status command on your hosts.
8 (NSX Manager UI) Select System > Appliances > Overview and verify the cluster connectivity.
What to do next
Create a transport zone. See Create a Standalone Host or Bare Metal Server Transport Node.
7
Configuring Bare Metal Server to
Use NSX-T Data Center
To use NSX-T Data Center on a bare metal (BM) server, you must install supported third-party packages.
n Application - represents the actual application running on the bare metal server, such as a web server or a database server.
n Application Interface - represents the network interface card (NIC) which the application uses for
sending and receiving traffic. One application interface per bare metal server is supported.
n Management Interface - represents the NIC which manages the bare metal server.
n VIF - the peer of the application interface which is attached to the logical switch. This is similar to a
VM vNIC.
NSX-T Data Center supports the bare metal server in two ways: as a host transport node and as a host for NSX Manager.
Make sure that you have the supported bare metal server versions. See Bare Metal Server System
Requirements.
Note If your NSX Edges are in VM form factor and you intend to use the NSX DHCP service (deployed
on VLAN-based logical switch), you must set the forged transmits option to Accept on the bare metal
hosts on which the NSX Edges are deployed. See Forged Transmits in the vSphere product
documentation.
Prerequisites
n Verify that the user performing the installation has administrative permission to do the following
actions, some of which may require sudo permissions:
Procedure
Ubuntu 18.04:
traceroute
python-mako
python-netaddr
python-simplejson
python-unittest2
python-yaml
python-openssl
dkms
libvirt0
libelf-dev
python-netifaces
Ubuntu 16.04:
libunwind8
libgflags2v5
libgoogle-perftools4
traceroute
python-mako
python-simplejson
python-unittest2
python-yaml
python-netaddr
python-openssl
libboost-filesystem1.58.0
libboost-chrono1.58.0
libgoogle-glog0v5
dkms
libboost-date-time1.58.0
python-protobuf
python-gevent
libsnappy1v5
libleveldb1v5
libboost-program-options1.58.0
libboost-thread1.58.0
libboost-iostreams1.58.0
libvirt0
libelf-dev
python-netifaces
RHEL 7.7, 7.6, and 7.5 and CentOS 7.7, 7.6, and 7.5:
tcpdump
boost-filesystem
PyYAML
boost-iostreams
boost-chrono
python-mako
python-netaddr
python-six
gperftools-libs
libunwind
libelf-dev
snappy
boost-date-time
c-ares
redhat-lsb-core
wget
net-tools
yum-utils
lsof
python-gevent
libev
python-greenlet
libvirt-libs
python-netifaces
u On SUSE, run zypper install <package_name> to install the third-party packages manually.
net-tools
tcpdump
python-simplejson
python-netaddr
python-PyYAML
python-six
libunwind
wget
lsof
libcap-progs
libvirt-libs
python-netifaces
NSX-T Data Center does not support Linux OS interface bonding. You must use Open vSwitch (OVS)
bonding for Bare Metal Server Transport Nodes. See Knowledge Base article 67835 Bare Metal Server
supports OVS bonding for Transport Node configuration in NSX-T.
Procedure
See https://ports.vmware.com/home/NSX-T-Data-Center.
3 Add a bare metal server to the NSX-T Data Center fabric and create a transport node.
See https://github.com/vmware/bare-metal-server-integration-with-nsxt.
Ansible support modes are a set of automated scripts that set up the application interface for bare metal
servers.
n Migrate Mode - This mode supports management and application sharing the same IP address. Also
called underlay mode or VLAN-0.
2 Run the command ansible --version, and check that the Ansible version is 2.4.3 or later.
3 Download and extract the Bare Metal Server integration with NSX-T Data Center from GitHub: https://github.com/vmware/bare-metal-server-integration-with-nsxt
In this procedure, establish connectivity between the workloads and NSX Manager. Then, configure DFW
rules to secure ingress and egress traffic running between virtual or physical and Windows Server 2016
bare metal workloads.
Procedure
1 Enable Windows Remote Management (WinRM) on Windows Server 2016 to allow the Windows server to interoperate with third-party software and hardware. Enable the WinRM service with a self-signed certificate.
2 Configure WinRM to use HTTPS. The default port used for HTTPS is 5986.
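A minimal PowerShell sketch, run as Administrator; using a self-signed certificate and enabling basic authentication are assumptions suitable for a lab setup, not a security recommendation:

# Create a self-signed certificate for the HTTPS listener
New-SelfSignedCertificate -DnsName $env:COMPUTERNAME -CertStoreLocation Cert:\LocalMachine\My

# Configure WinRM with an HTTPS listener (default port 5986)
winrm quickconfig -transport:https

# Allow basic authentication for the remote connection
winrm set winrm/config/service/auth '@{Basic="true"}'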
See https://github.com/vmware/bare-metal-server-integration-with-nsxt.
4 Using Ansible, create a segment and segment ports on NSX Manager over the WinRM channel.
The application virtual interface, NSX Manager, and Windows Remote Management are
synchronized with the segment and ports created for the application virtual interface.
5 In Windows, customize the OVSIM driver for the Windows server to create two new network adapters: the application virtual interface and the virtual tunnel endpoint (VTEP) for overlay-backed workloads.
$:> Get-NetAdapter
vEthernet1-VTEP: Used for overlay-backed VTEP interface. Not needed for a VLAN-backed
workload.
vEthernet1-VIF1: Used for virtual interface or application interface of the bare metal Windows
server.
7 Add the bare metal server as a standalone transport node. In the NSX Manager, go to System →
Fabric → Nodes → Host Transport Nodes.
8 In the Managed by drop-down menu, select Standalone Hosts and click + Add.
9 During transport node creation, ensure the following conditions are met:
n In the Uplink field, map an uplink to an available IP address on the Windows server. Run Get-
NetAdapter on the Windows Server to know which ethernet address to configure as uplinks for
the bare metal transport node.
n On an overlay-backed workload, select among IP Pool, Static IP, or DHCP as the mode for
assigning a VTEP address to the host.
The NSX Manager automatically installs required NSX-T VIBs and NSX-T agent on the bare metal
server. For more details, see Create a Standalone Host or Bare Metal Server Transport Node.
10 Alternatively, you can manually prepare the bare metal server as a transport node or follow the
interactive mode to prepare the server as a transport node.
a Download and install the NSX-T LCP Windows bundle using the wget command.
c To interactively install NSX-T, double click the setup.exe file and follow the wizard to complete
installation.
d Run the nsxcli command to join the Windows Server to the NSX Manager.
e Configure the host as a transport node either from the NSX Manager UI or REST APIs. For more
details, see Create a Standalone Host or Bare Metal Server Transport Node.
11 Verify whether OVS bridges are created on the Windows server. The OVS bridge connects the
application virtual interface to the nsx switch on the transport node.
ovs-vsctl show
The output must show the bridges created from the nsxswitch and the nsx managed host component. The nsxswitch bridge is for the transport node that was created. The nsx managed bridge is created for the application virtual interface on the Windows host. These bridge entries indicate that a communication channel is established between the nsx switch and the Windows remote listener.
12 Using the Ansible client, configure a static IP address for the overlay or VLAN-backed segment.
Run vi win_hosts
13 Launch Ansible playbook for the service to configure Windows bare metal server.
n The static IP address is reflected as the IP address of the overlay segment to which the Windows
Server workload is connected.
n The GENEVE tunnels are created between the nsx switch and the nsx managed host component
on the Windows host.
Note Likewise, on a VLAN-backed transport node, verify that the static IP address is reflected as the
IP address of the VLAN segment to which the Windows Server workload is connected.
15 Verify connectivity between the application, Windows bare metal server, and NSX Manager .
16 Add and publish L2 or L3 DFW rules for the overlay or VLAN-backed bare metal workload.
17 Verify ingress and egress traffic between virtual or physical workloads and bare metal workloads is
flowing as per the DFW rules published.
8
NSX Manager Cluster
Requirements
The following subsections describe the NSX Manager cluster requirements and provide recommendations for specific site deployments. They also describe how you can use vSphere HA (High Availability) with NSX-T Data Center to enable quick recovery if the host running the NSX Manager node fails.
n NSX Manager Cluster Requirements for Single, Dual, and Multiple Sites
You can use vSphere HA with NSX-T Data Center to enable quick recovery if the host running the NSX
Manager node fails.
Note See Creating and Using vSphere HA Clusters in the vSphere product documentation.
Each cluster member should be placed on a unique hypervisor host with three physical hypervisor
hosts in total. This is required to avoid a single physical hypervisor host failure impacting the NSX
control plane. It is recommended you apply anti-affinity rules to ensure that all three cluster members
are running on different hosts.
The normal production operating state is a three-node NSX Manager cluster. However, you can add
additional, temporary NSX Manager nodes to allow for IP address changes.
Important As of NSX-T Data Center 2.4, the NSX Manager contains the NSX Central Control Plane
process. This service is critical for the operation of NSX. If there is a complete loss of NSX Managers,
or if the cluster is reduced from three NSX Managers to one NSX Manager, you will not be able to
make topology changes to your environment, and vMotion of machines depending on NSX will fail.
n For lab and proof-of-concept deployments where there are no production workloads, you can run a
single NSX Manager to save resources. NSX Manager nodes can be deployed on either ESXi or
KVM. However, mixed deployments of managers on both ESXi and KVM are not supported.
n It is recommended that you place your NSX Managers on different hosts to avoid a single host failure
impacting multiple managers.
n You can place NSX Managers in different vSphere clusters or in a common vSphere cluster.
n It is recommended that you place NSX Managers in different management subnets or a shared management subnet. When using vSphere HA, it is recommended to use a shared management subnet so that NSX Managers that are recovered by vSphere can preserve their IP addresses.
n It is recommended that you place NSX Managers on shared storage also. For vSphere HA, please
review the requirements for that solution.
You can also use vSphere HA with NSX-T to provide recovery of a lost NSX Manager when the host
where the NSX Manager is running fails.
Scenario example:
n vSphere HA is configured to recover any lost NSX Manager (e.g., nsxmgr-01) from any host (e.g.,
Host-01) to Host-04.
Thus, upon the loss of any hosts where a NSX Manager is running, vSphere recovers the lost NSX
Manager on Host-04.
n It is not recommended to deploy NSX Managers in a dual-site scenario without vSphere HA. In this scenario, one site requires the deployment of two NSX Managers, and the loss of that site will impact the operation of NSX-T Data Center.
n Deployment of NSX Managers in a dual site scenario with vSphere HA can be done with the following
considerations:
n A single stretched vSphere cluster contains all the hosts for NSX Managers.
n All three NSX Managers are deployed to a common management subnet/VLAN to allow IP
address preservation upon recovery of a lost NSX Managers.
Scenario example:
n The vSphere cluster consists of six or more hosts, with three hosts in Site A and three hosts in Site B.
n The three NSX Managers are deployed to distinct hosts with additional hosts for placement of
recovered NSX Managers:
Site A:
Site B:
n vSphere HA is configured to recover any lost NSX Manager (e.g., nsxmgr-01) from any host (e.g.,
Host-01) in Site A to one of the hosts in Site B.
Thus, upon failure of Site A, vSphere HA will recover all NSX Managers to hosts in site B.
Important You must properly configure anti-affinity rules to prevent NSX Managers from being recovered to the same common host.
In a scenario with three or more sites, you can deploy NSX Managers with or without vSphere HA.
n It is recommended that you use separate management subnets or VLANs per site.
Failure scenarios:
n Single site failure: Two remaining NSX Managers in other sites continue to operate. NSX-T Data
Center is in a degraded state but still operational. It is recommended you manually deploy a third
NSX Manager to replace the lost cluster member.
n Two site failure: Loss of quorum and therefore impact to NSX-T Data Center operations.
Recovery of NSX Managers may take as long as 20 minutes depending on environmental conditions
such as CPU speed, disk performance, and other deployment factors.
9
Installing NSX Edge
Install NSX Edge on ESXi using the NSX-T UI, the vSphere web client, or the command-line OVF tool.
Note There can be only one tier-0 router per NSX Edge node. However, multiple tier-1 logical routers
can be hosted on one NSX Edge node. NSX Edge VMs of different sizes can be combined in the same
cluster; however, it is not recommended.
Table 9-1. NSX Edge Deployment, Platforms, and Installation Requirements (continued)
Requirements Description
PXE installation The root and admin user password strings must be hashed with the SHA-512 algorithm (see the example after this table).
Hostname When installing NSX Edge, specify a hostname that does not
contain invalid characters such as an underscore. If the
hostname contains any invalid character, after deployment the
hostname will be set to localhost. For more information about
hostname restrictions, see https://tools.ietf.org/html/rfc952 and
https://tools.ietf.org/html/rfc1123.
VMware Tools The NSX Edge VM running on ESXi has VMTools installed. Do
not remove or upgrade VMTools.
System Verify that the system requirements are met. See NSX Edge VM
System Requirements.
Ports Verify that the required ports are open. See Ports and Protocols.
IP Addresses If you have multiple management networks, you can add static
routes to the other networks from the NSX-T Data Center
appliance.
Plan your NSX Edge IPv4 or IPv6 IP addressing scheme.
OVF Template n Verify that you have adequate privileges to deploy an OVF
template on the ESXi host.
n Verify that hostnames do not include underscores.
Otherwise, the hostname is set to localhost.
n A management tool that can deploy OVF templates, such as
vCenter Server or the vSphere Client.
NTP Server The same NTP server must be configured on all NSX Edge VMs
or Bare Metal Edges in an Edge cluster.
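For the PXE installation requirement above, one hedged way to generate an SHA-512 password hash for the preseed or kickstart configuration, assuming OpenSSL 1.1.1 or later or a Python 3 runtime is available on your build host:

openssl passwd -6 'MySecretPassword'
# or
python3 -c "import crypt; print(crypt.crypt('MySecretPassword', crypt.mksalt(crypt.METHOD_SHA512)))"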
Intel-based Chipsets
NSX Edge nodes are supported on ESXi-based hosts with Intel chipsets. If an unsupported chipset type
is used, vSphere EVC mode may prevent Edge nodes from starting, showing an error message in the
console. See NSX Edge VM System Requirements.
AMD EPYC
NSX Edge nodes are also supported on AMD-based chipsets. NSX Edge nodes can now be deployed on
AMD EPYC series chipsets. See NSX Edge VM System Requirements.
n If you specify a user name for the admin or audit user, the name must be unique. If you specify the same name, it is ignored and the default names (admin and audit) are used.
n If the password for the admin user does not meet the complexity requirements, you must log in to
NSX Edge through SSH or at the console as the admin user with the password default. You are
prompted to change the password.
n If the password for the audit user does not meet the complexity requirements, the user account is
disabled. To enable the account, log in to NSX Edge through SSH or at the console as the admin
user and run the command set user audit to set the audit user's password (the current password
is an empty string).
n If the password for the root user does not meet the complexity requirements, you must log in to NSX
Edge through SSH or at the console as root with the password vmware. You are prompted to change
the password.
Caution Changes made to NSX-T Data Center while logged in with the root user credentials might cause system failure and potentially impact your network. You can only make changes using the root user credentials with the guidance of the VMware Support team.
Note The core services on the appliance do not start until a password with sufficient complexity has
been set.
After you deploy NSX Edge from an OVA file, you cannot change the VM's IP settings by powering off the
VM and modifying the OVA settings from vCenter Server.
[Figure: NSX Edge transport node architecture, showing the tier-0 and tier-1 logical routers, each with a services router (SR) and a distributed router (DR); the intra-tier transit links (169.254.0.0/28); the tier-0 to tier-1 transit links; the management plane (MP), central control plane (CCP), and local control plane (LCP) connections; and the N-VDS with bonded physical ports (pnic1, pnic2) exposed as fp-ethX interfaces managed by the datapath.]
When you first deploy an NSX Edge, you can think of it as an empty container. The NSX Edge does not
do anything until you create logical routers. The NSX Edge provides the compute backing for tier-0 and
tier-1 logical routers. Each logical router contains a services router (SR) and a distributed router (DR).
When we say that a router is distributed, we mean that it is replicated on all transport nodes that belong to
the same transport zone. In the figure, the host transport node contains the same DRs contained on the
tier-0 and tier-1 routers. A services router is required if the logical router is going to be configured to
perform services, such as NAT. All tier-0 logical routers have a services router. A tier-1 router can have a
services router if needed based on your design considerations.
By default, the links between the SR and the DR use the 169.254.0.0/28 subnet. These intra-router transit
links are created automatically when you deploy a tier-0 or tier-1 logical router. You do not need to
configure or modify the link configuration unless the 169.254.0.0/28 subnet is already in use in your
deployment. On a tier-1 logical router, the SR is present only if you select an NSX Edge cluster when
creating the tier-1 logical router.
The default address space assigned for the tier-0-to-tier-1 connections is 100.64.0.0/10. Each tier-0-to-
tier-1 peer connection is provided a /31 subnet within the 100.64.0.0/10 address space. This link is
created automatically when you create a tier-1 router and connect it to a tier-0 router. You do not need to
configure or modify the interfaces on this link unless the 100.64.0.0/10 subnet is already in use in your
deployment.
Each NSX-T Data Center deployment has a management plane cluster (MP) and a control plane cluster
(CCP). The MP and the CCP push configurations to each transport zone's local control plane (LCP).
When a host or NSX Edge joins the management plane, the management plane agent (MPA) establishes
connectivity with the host or NSX Edge, and the host or NSX Edge becomes an NSX-T Data Center
fabric node. When the fabric node is then added as a transport node, LCP connectivity is established with
the host or NSX Edge.
Lastly, the figure shows an example of two physical NICs (pNIC1 and pNIC2) that are bonded to provide
high availability. The datapath manages the physical NICs. They can serve as either VLAN uplinks to an
external network or as tunnel endpoint links to internal NSX-T Data Center-managed VM networks.
It is a best practice to allocate at least two physical links to each NSX Edge that is deployed as a VM.
Optionally, you can overlap the port groups on the same pNIC using different VLAN IDs. The first network
link found is used for management. For example, on an NSX Edge VM, the first link found might be vnic1.
On a bare-metal installation, the first link found might be eth0 or em0. The remaining links are used for
the uplinks and tunnels. For example, one might be for a tunnel endpoint used by NSX-T Data Center-
managed VMs. The other might be used for an NSX Edge-to-external TOR uplink.
You can view the physical link information of the NSX Edge, by logging in to the CLI as an administrator
and running the get interfaces and get physical-ports commands. In the API, you can use the GET
fabric/nodes/<edge-node-id>/network/interfaces API call. Physical links are discussed in more detail
in the next section.
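For example, a hedged curl invocation of that API; replace the manager address and Edge node ID, and note that -k skips certificate verification and is only suitable for a lab setup:

curl -k -u admin https://<nsx-manager>/api/v1/fabric/nodes/<edge-node-id>/network/interfaces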
Whether you install NSX Edge as a VM appliance or on bare metal, you have multiple options for the
network configuration, depending on your deployment.
n Overlay for internal NSX-T Data Center tunneling between transport nodes.
An NSX Edge can belong to zero VLAN transport zones or many. For zero VLAN transport zones, the
NSX Edge can still have uplinks because the NSX Edge uplinks can use the same N-VDS installed for
the overlay transport zone. You might do this if you want each NSX Edge to have only one N-VDS.
Another design option is for the NSX Edge to belong to multiple VLAN transport zones, one for each
uplink.
The most common design choice is three transport zones: One overlay and two VLAN transport zones for
redundant uplinks.
To use the same VLAN ID both for a transport network carrying overlay traffic and for VLAN traffic, such as a VLAN uplink, configure the ID on two different N-VDS, one for VLAN and the other for overlay.
When you create the NSX Edge transport node, you can select fp-ethX interfaces to associate with the
uplinks and the overlay tunnel. You can decide how to use the fp-ethX interfaces.
On the vSphere distributed switch or vSphere Standard switch, you must allocate at least two vmnics to
the NSX Edge: One for NSX Edge management and one for uplinks and tunnels.
In the following sample physical topology, fp-eth0 is used for the NSX-T Data Center overlay tunnel. fp-
eth1 is used for the VLAN uplink. fp-eth2 and fp-eth3 are not used. vNIC1 is assigned to the management
network.
Figure 9-2. One Suggested Link Setup for NSX Edge VM Networking
[Figure: on the ESXi host, the Edge VM's overlay N-VDS maps fp-eth0 to vNIC2 (overlay/tunnel port group) and the VLAN uplink N-VDS maps fp-eth1 to vNIC3 (uplink port group); fp-eth2/vNIC4 is unused; the VDS/VSS uses vmnic1 as its physical uplink.]
The NSX Edge shown in this example belongs to two transport zones (one overlay and one VLAN) and
therefore has two N-VDS, one for tunnel and one for uplink traffic.
This screenshot shows the virtual machine port groups, nsx-tunnel, and vlan-uplink.
During deployment, you must specify the network names that match the names configured on your VM
port groups. For example, to match the VM port groups in the example, your network ovftool settings can
be as follows if you were using the ovftool to deploy NSX Edge:
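A hedged sketch of the corresponding ovftool network mappings; the source network names on the left depend on the names defined in the NSX Edge OVF, and the targets are the port groups shown above:

--net:"Network 0=Mgmt" \
--net:"Network 1=nsx-tunnel" \
--net:"Network 2=vlan-uplink" \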
The example shown here uses the VM port group names Mgmt, nsx-tunnel, and vlan-uplink. You can use
any names for your VM port groups.
The tunnel and uplink VM port groups configured for the NSX Edge do not need to be associated with
VMkernel ports or given IP addresses. This is because they are used at Layer 2 only. If your deployment
uses DHCP to provide an address to the management interface, make sure that only one NIC is assigned
to the management network.
Notice that the VLAN and tunnel port groups are configured as trunk ports. This is required. For example, on a standard vSwitch, you configure trunk ports as follows: Host > Configuration > Networking > Add Networking > Virtual Machine > VLAN ID All (4095).
If you are using an appliance-based or VM NSX Edge, you can use standard vSwitches or vSphere
distributed switches.
NSX Edge VM can be installed on an NSX-T Data Center prepared host and configured as a transport
node. There are two types of deployment:
n NSX Edge VM can be deployed using VSS/VDS port groups where VSS/VDS consume separate
pNIC(s) on the host. Host transport node consumes separate pNIC(s) for N-VDS installed on the
host. N-VDS of the host transport node co-exists with a VSS or VDS, both consuming separate
pNICs. Host TEP (Tunnel End Point) and NSX Edge TEP can be in the same or different subnets.
n NSX Edge VM can be deployed using VLAN-backed logical switches on the N-VDS of the host
transport node. Host TEP and NSX Edge TEP must be in different subnets.
Optionally, you can install multiple NSX Edge appliances/VMs on a single host, and the same
management, VLAN, and tunnel endpoint port groups can be used by all installed NSX Edges.
With the underlying physical links up and the VM port groups configured, you can install the NSX Edge.
When you create the NSX Edge transport node, you can select fp-ethX interfaces to associate with the
uplinks and the overlay tunnel.
You can decide how to use the fp-ethX interfaces. In the following sample physical topology, fp-eth0 and
fp-eth1 are bonded and used for the NSX-T Data Center overlay tunnel. fp-eth2 and fp-eth3 are used as
redundant VLAN uplinks to TORs.
Figure 9-3. One Suggested Link Setup for Bare-Metal NSX Edge Networking
[Figure: on the bare metal NSX Edge transport node, eth0 and eth1 (physical NICs 1 and 2) form the management bond; fp-eth0 and fp-eth1 (physical NICs 3 and 4) form the overlay bond on the overlay N-VDS; fp-eth2 (physical NIC 5) and fp-eth3 (physical NIC 6) serve VLAN uplink N-VDS 1 and VLAN uplink N-VDS 2.]
When you have two ECMP VLAN uplinks, you must also have two TOR switches for high availability and
fully meshed connectivity. Each VLAN logical switch has an associated VLAN ID.
When you add an NSX Edge to a VLAN transport zone, a new N-VDS is installed. For example, if you
add an NSX Edge node to four VLAN transport zones, as shown in the figure, four N-VDS get installed on
the NSX Edge.
Figure 9-4. One Suggested ECMP VLAN Setup for NSX Edges to TORs
[Figure: a tier-0 logical router on each NSX Edge connects over ECMP VLAN uplinks to two ToR switches.]
Note For an Edge VM deployed on an ESXi host that has the vSphere Distributed Switch (vDS) and not
N-VDS, you must do the following:
n Enable promiscuous mode for the Edge VM to receive unknown unicast packets because MAC
learning is disabled by default. This is not necessary for vDS 6.6 or later, which has MAC learning
enabled by default.
NSX Manager (recommended method to install an NSX Edge VM appliance only)
n Ensure NSX Edge network requirements are met. See NSX Edge Installation Requirements.
n Create an NSX Edge transport node. See Create an NSX Edge Transport Node.
n Create an NSX Edge cluster. See Create an NSX Edge Cluster.
vSphere web client or vSphere command-line OVF tool
n Ensure NSX Edge network requirements are met. See NSX Edge Installation Requirements.
n Choose the vSphere web client or the vSphere command-line OVF tool to install NSX Edge.
n (Web Client) Install NSX Edge on ESXi. See Install an NSX Edge on ESXi Using the vSphere GUI.
n (Command-line OVF tool) Install NSX Edge on ESXi. See Install NSX Manager on ESXi Using the Command-Line OVF Tool.
n Join NSX Edge with the Management Plane. See Join NSX Edge with the Management Plane.
n Configure an NSX Edge as a transport node. See Configure an NSX Edge as a Transport Node.
n Create an NSX Edge cluster. See Create an NSX Edge Cluster.
(Bare Metal Server) ISO (Automated or Interactive mode via ISO file) or as an NSX Edge VM appliance
You can configure automated installation of NSX Edge on a bare metal server or install NSX Edge as a VM appliance using PXE. Note that the PXE boot installation procedure is not supported for NSX Manager.
n Ensure NSX Edge network requirements are met. See NSX Edge Installation Requirements.
n Prepare the PXE server. See Prepare the PXE Server for NSX Edge. Choose one of the supported installation methods:
n (Automated installation) Install NSX Edge via ISO file on bare metal. See Install NSX Edge Automatically via ISO File.
n (Automated installation) Install NSX Edge via ISO file as a virtual appliance. See Install NSX Edge via ISO File as a Virtual Appliance.
n (Manual installation) Manually install NSX Edge via ISO file. See Install NSX Edge Interactively via ISO File.
n Join NSX Edge with the Management Plane. See Join NSX Edge with the Management Plane.
n Configure an NSX Edge as a transport node. See Configure an NSX Edge as a Transport Node.
n Create an NSX Edge cluster. See Create an NSX Edge Cluster.
An NSX Edge Node is a transport node that runs the local control plane daemons and forwarding engines
implementing the NSX-T data plane. It runs an instance of the NSX-T virtual switch called the NSX Virtual
Distributed Switch, or N-VDS. The Edge Nodes are service appliances dedicated to running centralized
network services that cannot be distributed to the hypervisors. They can be instantiated as a bare metal
appliance or in virtual machine form factor. They are grouped in one or several clusters, representing a
pool of capacity.
An NSX Edge can belong to one overlay transport zone and multiple VLAN transport zones. An NSX
Edge belongs to at least one VLAN transport zone to provide the uplink access.
Note If you plan to create transport nodes from a template VM, make sure that there are no certificates
on the host in /etc/vmware/nsx/. nsx-proxy does not create a certificate if a certificate already exists.
Prerequisites
n An uplink profile must be configured or you can use the default uplink profile for NSX Edge nodes.
See Create an Uplink Profile.
n An IP pool must be configured or must be available in the network deployment. See Create an IP
Pool for Tunnel Endpoint IP Addresses.
Procedure
2 Select System > Fabric > Nodes > Edge Transport Nodes > Add Edge VM.
6 To customize the CPU and memory allocated to an NSX Edge VM appliance, tune the following parameters. However, for maximum performance the NSX Edge VM appliance must be assigned 100% of the available resources.
Caution If you customize resources allocated to the NSX Edge VM, set the reservation back to 100% later to get maximum performance.
Option Description
Memory Reservation (%) Reservation percentage is relative to the pre-defined value in the form factor.
100 indicates 100% of memory is reserved for the NSX Edge VM.
If you enter 50, it indicates that 50% of the allocated memory is reserved for the
Edge transport node.
CPU Reservation Priority Select the number of shares to be allocated to an NSX Edge VM relative to other
VMs that are contending for shared resources.
The following shares are for an NSX Edge VM in Medium form factor:
n Low - 2000 shares
n Normal - 4000 shares
n High - 8000 shares
n Extra High - 10000 shares
CPU Reservation (MHz) Caution Unless you need fine grained control over CPU reservations, do not use
this field. Instead, change CPU reservations from the CPU Reservation Priority
field.
The maximum CPU reservation value must not exceed the number of vCPUs
multiplied by the normal CPU operation rate of the physical CPU core.
If the MHz value entered exceeds the maximum CPU capacity of the physical CPU
cores, the NSX Edge VM might fail to start even though the allocation was
accepted.
For example, consider a system with two Intel Xeon E5-2630 CPUs. Each CPU
contains ten cores running at 2.20 GHz. The maximum CPU allocation for a VM
configured with two vCPUs is 2 x 2200 MHz = 4400 MHz. If CPU reservation is
specified as 8000 MHz, the reconfiguration of the VM completes successfully.
However, the VM fails to power on.
n Specify the CLI and the root passwords for the NSX Edge. Your passwords must comply with the
password strength restrictions.
n At least 12 characters
n No dictionary words
n No palindromes
n To enable SSH for an administrator, toggle the Allow SSH Login button.
n To enable SSH for a root user, toggle the Allow Root SSH Login button.
n Enter credentials for the Audit role. If you do not enter credentials in the Audit Credentials
section, the audit role remains disabled.
Option Description
Compute Manager Select the compute manager from the drop-down menu.
The compute manager is the vCenter Server registered in the Management Plane.
Cluster Designate the cluster the NSX Edge is going to join from the drop-down menu.
Resource Pool or Host Assign either a resource pool or a specific host for the NSX Edge from the drop-
down menu.
Datastore Select a datastore for the NSX Edge files from the drop-down menu.
Option Description
IP Assignment The IP address assigned to the NSX Edge node, which is required to communicate with NSX Manager and NSX Controller.
Select DHCP or Static IP.
If you select Static, enter the values for:
n Management IP: Enter IP address of NSX Edge in the CIDR notation.
n Default gateway: Enter the gateway IP address of NSX Edge.
Management Interface Select the management network interface from the drop-down menu. This interface must either be reachable from NSX Manager or must be in the same management network as NSX Manager and NSX Controller.
The NSX Edge management interface establishes communication with the NSX
Manager management interface.
Search Domain Names Enter domain names in the format 'example.com' or enter an IP address.
Option Description
Edge Switch Name Select a VLAN or Overlay switch from the drop-down menu.
Transport Zone Select the transport zones that this transport node belongs to. An NSX Edge
transport node belongs to at least two transport zones, an overlay for NSX-T Data
Center connectivity and a VLAN for uplink connectivity.
Note NSX Edge Nodes support multiple overlay tunnels (multi-TEP) when the following prerequisites are met:
n TEP configuration must be done on one N-VDS only.
n All TEPs must use same transport VLAN for overlay traffic.
n All TEP IPs must be in same subnet and use same default gateway.
Uplink Profile Select the uplink profile from the drop-down menu.
The available uplinks depend on the configuration in the selected uplink profile.
IP Assignment The IP address assigned to the configured NSX Edge switch. It is used to route packets on an overlay or VLAN network.
Select Use IP Pool or Use Static IP List for the overlay N-VDS.
n If you select Use Static IP List, specify:
n Static IP List: Enter a list of comma-separated IP addresses to be used by
the NSX Edge switch.
n Gateway: Enter the default gateway IP address, which is used to route
packets between NSX Edge transport nodes in an overlay network.
n Subnet mask: Enter the subnet mask for the configured gateway.
n If you selected Use IP Pool for IP assignment, specify the IP pool name.
DPDK Fastpath Interfaces / Virtual NICs Select the data path interface name for the uplink interface.
Note If the uplink profile applied to the Edge node is using a Named Teaming
policy, ensure the following condition is met:
n All uplinks in the Default Teaming policy must be mapped to the physical
network interfaces on the Edge VM for traffic to flow through a logical switch
that uses the Named Teaming policies.
Note
n LLDP profile is not supported on an NSX Edge VM appliance.
n Uplink interfaces are displayed as DPDK Fastpath Interfaces if the NSX Edge is installed using
NSX Manager or on a Bare Metal server.
n Uplink interfaces are displayed as Virtual NICs if the NSX Edge is installed manually using
vCenter Server.
After adding the NSX Edge as a transport node, the connection status changes to Up in 10-12
minutes.
14 If you see inaccurate resource configuration details, such as compute, datastore, or network details
that were configured when you installed the NSX Edge node, refresh the configuration details by
running the API command.
POST api/v1/transport-nodes/<transport-node-id>?action=refresh_node_configuration&resource_type=EdgeNode
NSX Manager displays inaccurate or stale NSX Edge node details if one of these conditions is true:
n On the NSX Edge node, you change settings such as SSH, NTP, or DNS from the NSX Edge command line interface.
n If the NSX Edge VM appliance is moved to another host using vCenter, the NSX Manager may
have stale values of compute, datastore, and network configuration based on the configuration of
the new host.
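For example, a minimal invocation of this call with curl might look like the following; the manager address, admin password, and transport node UUID are placeholders that you replace with your own values.

curl -k -u admin:'<password>' -X POST \
  "https://<nsx-mgr>/api/v1/transport-nodes/<transport-node-id>?action=refresh_node_configuration&resource_type=EdgeNode"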
What to do next
Add the NSX Edge node to an NSX Edge cluster. See Create an NSX Edge Cluster.
To create a tier-0 logical router or a tier-1 router with stateful services such as NAT or load balancing, you must associate it with an NSX Edge cluster. Therefore, even if you have only one NSX Edge, it must still belong to an NSX Edge cluster to be useful.
An NSX Edge transport node can be added to only one NSX Edge cluster.
After creating the NSX Edge cluster, you can later edit it to add additional NSX Edges.
Prerequisites
n Verify that the NSX Edge node is stable, with all services up and running and all groups stable, before joining the node to the cluster.
n Optionally, create an NSX Edge cluster profile for high availability (HA). You can also use the default
NSX Edge cluster profile.
Procedure
2 Select System > Fabric > Nodes > Edge Clusters > Add.
5 In Member Type drop-down menu, select either Edge Node if the virtual machine is deployed on-
premises or Public Cloud Gateway if the virtual machine is deployed in a public cloud.
6 From the Available column, select NSX Edges and click the right-arrow to move them to the
Selected column.
What to do next
You can now build logical network topologies and configure services. See the NSX-T Data Center
Administration Guide.
Note Starting in NSX-T Data Center 2.5.1, the NSX Edge VM supports vMotion.
Prerequisites
Procedure
1 Locate the NSX Edge node appliance OVA file on the VMware download portal.
Either copy the download URL or download the OVA file onto your computer.
2 In the vSphere Client, select the host on which to install NSX Edge node appliance.
3 Right-click and select Deploy OVF template to start the installation wizard.
4 Enter the download OVA URL or navigate to the saved OVA file, and click Next.
5 Enter a name and location for the NSX Edge node VM, and click Next.
The name you type appears in the vCenter Server and vSphere inventory.
6 Select a compute resource for the NSX Edge node appliance, and click Next.
7 For optimal performance, reserve memory for the NSX Edge appliance.
Set the reservation to ensure that NSX Edge has sufficient memory to run efficiently. See NSX Edge
VM System Requirements.
8 Review and verify the OVF template details, and click Next.
9 Select a deployment configuration, Small, Medium, Large, or XLarge and click Next.
The Description panel on the right side of the wizard shows the details of selected configuration.
10 Select storage for the configuration and disk files, and click Next.
c Specify the datastore to store the NSX Edge node appliance files.
b For networks 1, 2, and 3, select the previously configured VDS trunk portgroups.
13 Click Next.
The following steps are all located in the Customize Template section of the Deploy OVF Template
wizard.
14 Enter the NSX Edge node system root, CLI admin, and audit passwords.
Note In the Customize Template window, ignore the message All properties have valid values that is displayed even before you have entered values in any of the fields. This message is displayed because all parameters are optional, so validation passes even when no values have been entered.
When you log in for the first time, you are prompted to change the password. This password change
method has strict complexity rules, including the following:
n At least 12 characters
n No dictionary words
n No palindromes
Important The core services on the appliance do not start until a password with sufficient
complexity has been set.
15 (Optional) If you have an available NSX Manager and want to register the NSX Edge with the
management plane during the OVA deployment, complete the Manager IP, Username, Password, and
Thumbprint.
Note Do not register the NSX Edge with the virtual IP (VIP) address of the management plane
during the OVA deployment.
n Node ID: Leave the field blank. The Node UUID field is only for internal use.
16 (Optional) If you want to deploy the NSX Edge as an autonomous edge in a L2 VPN topology, enable
the option. An autonomous edge is not managed by NSX-T Data Center. Do not enable the option if
you want to deploy an NSX Edge that provides centralized edge services to host transport nodes in
an NSX-T Data Center topology.
Note The fields in the External and HA sections are required only when you configure an autonomous edge node.
18 Enter the default gateway, management network IPv4, and management network netmask address.
19 Enter the DNS Server, the Domain Search list, and the NTP Server list.
20 (Optional) Do not enable SSH if you prefer to access NSX Edge using the console. However, if you
want root SSH login and CLI login to the NSX Edge command line, enable the SSH option.
21 Verify that all your custom OVA template specifications are accurate and click Finish to initiate the installation.
22 Open the console of the NSX Edge node VM to track the boot process.
If the console window does not open, make sure that pop-ups are allowed.
23 After the NSX Edge node VM starts, log in to the CLI with admin credentials.
Note After the NSX Edge node VM starts, if you do not log in with admin credentials for the first time, the data plane service does not start automatically on the NSX Edge node VM.
24 Run the get interface eth0 (without VLAN) or get interface eth0.<vlan_ID> (with a VLAN)
command to verify that the IP address was applied as expected.
Interface: eth0.100
Address: 192.168.110.37/24
MAC address: 00:50:56:86:62:4d
MTU: 1500
Default gateway: 192.168.110.1
Broadcast address: 192.168.110.255
...
Note When bringing up NSX Edge VMs on a non-NSX managed host, verify that the MTU setting is set to 1600 (instead of 1500) on the physical host switch for the data NIC.
25 Run the get managers command to verify that the NSX Edge is registered.
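The output lists the NSX Manager nodes that the NSX Edge is registered with and their connection state. A sketch of the expected output, where the address is a placeholder and the exact format can vary slightly by version:

- <nsx-manager-ip>    Connected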
26 If NSX Edge is not registered with the management plane, see Join NSX Edge with the Management
Plane.
27 Verify that the NSX Edge node VM has the required connectivity.
If you enabled SSH, make sure that you can SSH to your NSX Edge node VM and verify the
following:
n From the NSX Edge node VM, you can ping the node's default gateway.
n From the NSX Edge node VM, you can ping the hypervisor hosts that are either in the same
network or a network reachable through routing.
n From the NSX Edge node VM, you can ping the DNS server and NTP server.
Note If connectivity is not established, make sure the VM network adapter is in the proper network
or VLAN.
By default, the NSX Edge node VM datapath claims all virtual machine NICs except the management
NIC (the one that has an IP address and a default route). If you incorrectly assigned a NIC as the
management interface, follow these steps to use DHCP to assign management IP address to the
correct NIC.
a Log in to the CLI and run the stop service dataplane command.
c Place the interface into the DHCP network and wait for an IP address to be assigned to that interface.
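A minimal sketch of the sequence on the NSX Edge CLI, assuming eth0 is the NIC that should carry the management IP (the interface name is an example only):

stop service dataplane
set interface eth0 dhcp plane mgmt
start service dataplane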
The datapath fp-ethX ports used for the VLAN uplink and the tunnel overlay are shown in the get
interfaces and get physical-port commands on the NSX Edge node VM.
What to do next
Configure NSX Edge as a transport node. See Configure an NSX Edge as a Transport Node.
Prerequisites
n Verify that the system requirements are met. See System Requirements.
n Verify that the required ports are open. See Ports and Protocols.
n Verify that you have the IP address and gateway, DNS server IP addresses, domain search list, and
the NTP server IP address for the NSX Manager to use.
n If you do not already have one, create the target VM port group network. Place the NSX-T Data
Center appliances on a management VM network.
If you have multiple management networks, you can add static routes to the other networks from the
NSX-T Data Center appliance.
n Verify that you have adequate privileges to deploy an OVF template on the ESXi host.
n Verify that hostnames do not include underscores. Otherwise, the hostname is set to localhost.
n Know the parameters that you can use to deploy an NSX Edge VM and join it to the management plane.
Procedure
u For a standalone host, run the ovftool command with the appropriate parameters.
C:\Users\Administrator\Downloads>ovftool
--name=nsx-edge-1
--deploymentOption=medium
--X:injectOvfEnv
--X:logFile=ovftool.log
--allowExtraConfig
--datastore=ds1
--net:"Network 0=Mgmt"
--net:"Network 1=nsx-tunnel"
--net:"Network 2=vlan-uplink"
--net:"Network 3=vlan-uplink"
--acceptAllEulas
--noSSLVerify
--diskMode=thin
--powerOn
--prop:nsx_ip_0=192.168.110.37
--prop:nsx_netmask_0=255.255.255.0
--prop:nsx_gateway_0=192.168.110.1
--prop:nsx_dns1_0=192.168.110.10
--prop:nsx_domain_0=corp.local
--prop:nsx_ntp_0=192.168.110.10
--prop:nsx_isSSHEnabled=True
--prop:nsx_allowSSHRootLogin=True
--prop:nsx_passwd_0=<password>
--prop:nsx_cli_passwd_0=<password>
--prop:nsx_hostname=nsx-edge
--prop:mpIp=<NSXManager-IP>
--prop:mpToken=<NSXManager-Token>
--prop:mpThumbprint=<NSXManager-Thumbprint>
--prop:is_autonomous_edge=False
<path/url to nsx component ova>
vi://root:<password>@192.168.110.51
u For a host managed by vCenter Server, run the ovftool command with the appropriate parameters.
C:\Users\Administrator\Downloads>ovftool
--name=nsx-edge-1
--deploymentOption=medium
--X:injectOvfEnv
--X:logFile=ovftool.log
--allowExtraConfig
--datastore=ds1
--net:"Network 0=Mgmt"
--net:"Network 1=nsx-tunnel"
--net:"Network 2=vlan-uplink"
--net:"Network 3=vlan-uplink"
--acceptAllEulas
--noSSLVerify
--diskMode=thin
--powerOn
--prop:nsx_ip_0=192.168.110.37
--prop:nsx_netmask_0=255.255.255.0
--prop:nsx_gateway_0=192.168.110.1
--prop:nsx_dns1_0=192.168.110.10
--prop:nsx_domain_0=corp.local
--prop:nsx_ntp_0=192.168.110.10
--prop:nsx_isSSHEnabled=True
--prop:nsx_allowSSHRootLogin=True
--prop:nsx_passwd_0=<password>
--prop:nsx_cli_passwd_0=<password>
--prop:nsx_hostname=nsx-edge
--prop:mpIp=<NSXManager-IP>
--prop:mpToken=<NSXManager-Token>
--prop:mpThumbprint=<NSXManager-Thumbprint>
--prop:is_autonomous_edge=False
<path/url to nsx component ova>
vi://[email protected]:<password>@192.168.110.24/?ip=192.168.210.53
u For optimal performance, reserve memory for the NSX Manager appliance.
Set the reservation to ensure that NSX Manager has sufficient memory to run efficiently. See NSX
Manager VM and Host Transport Node System Requirements.
u Open the console of the NSX Edge node VM to track the boot process.
u After the NSX Edge node VM starts, log in to the CLI with admin credentials.
u Run the get interface eth0 (without VLAN) or get interface eth0.<vlan_ID> (with a VLAN)
command to verify that the IP address was applied as expected.
Interface: eth0.100
Address: 192.168.110.37/24
MAC address: 00:50:56:86:62:4d
MTU: 1500
Default gateway: 192.168.110.1
Broadcast address: 192.168.110.255
...
Note When bringing up NSX Edge VMs on a non-NSX managed host, verify that the MTU setting is set to 1600 (instead of 1500) on the physical host switch for the data NIC.
u Verify that the NSX Edge node VM has the required connectivity.
If you enabled SSH, make sure that you can SSH to your NSX Edge node VM and verify the
following:
n From the NSX Edge node VM, you can ping the node's default gateway.
n From the NSX Edge node VM, you can ping the hypervisor hosts that are either in the same
network or a network reachable through routing.
n From the NSX Edge node VM, you can ping the DNS server and NTP server.
Note If connectivity is not established, make sure the VM network adapter is in the proper network
or VLAN.
By default, the NSX Edge node VM datapath claims all virtual machine NICs except the management
NIC (the one that has an IP address and a default route). If you incorrectly assigned a NIC as the
management interface, follow these steps to use DHCP to assign management IP address to the
correct NIC.
a Log in to the CLI and run the stop service dataplane command.
c Place the interface into the DHCP network and wait for an IP address to be assigned to that interface.
The datapath fp-ethX ports used for the VLAN uplink and the tunnel overlay are shown in the get
interfaces and get physical-port commands on the NSX Edge node VM.
What to do next
If you did not join the NSX Edge with the management plane, see Join NSX Edge with the Management
Plane.
Important The NSX-T Data Center component virtual machine installations include VMware Tools.
Removal or upgrade of VMware Tools is not supported for NSX-T Data Center appliances.
Prerequisites
Procedure
1 Go to your MyVMware account (myvmware.com) and navigate to VMware NSX-T Data Center >
Downloads.
4 Select Files > Upload Files > Upload a File to a Datastore, browse to the ISO file, and upload.
If you are using a self-signed certificate, open the IP address in a browser, accept the certificate, and then upload the ISO file again.
5 In the vSphere Client inventory, select the host to which you uploaded the ISO file.
10 Select the supported ESXi operating systems for your NSX Edge VM.
You must click Connect to bind the NSX Edge ISO file to the VM.
13 During ISO boot, open the VM console and choose Automated installation.
During installation, the installer prompts you to enter a VLAN ID for the management interface. Select Yes and enter a VLAN ID to create a VLAN subinterface for the network interface. Select No if you do not want to configure VLAN tagging for management traffic.
During power-on, the VM requests a network configuration via DHCP. If DHCP is not available in your
environment, the installer prompts you for IP settings.
By default, the root login password is vmware, and the admin login password is default.
When you log in for the first time, you are prompted to change the password. This password change
method has strict complexity rules, including the following:
n At least 12 characters
n No dictionary words
n No palindromes
Important The core services on the appliance do not start until a password with sufficient
complexity has been set.
14 For optimal performance, reserve memory for the NSX Edge appliance.
Set the reservation to ensure that NSX Edge has sufficient memory to run efficiently. See NSX Edge
VM System Requirements.
15 After the NSX Edge node VM starts, log in to the CLI with admin credentials.
Note After the NSX Edge node VM starts, if you do not log in with admin credentials for the first time, the data plane service does not start automatically on the NSX Edge node VM.
Note If the server uses Mellanox NIC cards, do not configure the Edge with an in-band management interface.
n Tagged interface.
n In-band interface.
18 Run the get interface eth0 (without VLAN) or get interface eth0.<vlan_ID> (with a VLAN)
command to verify that the IP address was applied as expected.
Interface: eth0.100
Address: 192.168.110.37/24
MAC address: 00:50:56:86:62:4d
MTU: 1500
Default gateway: 192.168.110.1
Broadcast address: 192.168.110.255
...
Note When bringing up NSX Edge VMs on a non-NSX managed host, verify that the MTU setting is set to 1600 (instead of 1500) on the physical host switch for the data NIC.
19 (Tagged interface and In-band interface) Any existing VLAN management interface must be cleared
before creating a new one.
20 Verify that the NSX Edge node VM has the required connectivity.
If you enabled SSH, make sure that you can SSH to your NSX Edge node VM and verify the
following:
n From the NSX Edge node VM, you can ping the node's default gateway.
n From the NSX Edge node VM, you can ping the hypervisor hosts that are either in the same
network or a network reachable through routing.
n From the NSX Edge node VM, you can ping the DNS server and NTP server.
Note If connectivity is not established, make sure the VM network adapter is in the proper network
or VLAN.
By default, the NSX Edge node VM datapath claims all virtual machine NICs except the management
NIC (the one that has an IP address and a default route). If you incorrectly assigned a NIC as the
management interface, follow these steps to use DHCP to assign management IP address to the
correct NIC.
a Log in to the CLI and run the stop service dataplane command.
c Place the interface into the DHCP network and wait for an IP address to be assigned to that interface.
The datapath fp-ethX ports used for the VLAN uplink and the tunnel overlay are shown in the get
interfaces and get physical-port commands on the NSX Edge node VM.
What to do next
If you did not join the NSX Edge with the management plane, see Join NSX Edge with the Management
Plane.
Note PXE boot installation is not supported for NSX Manager. You also cannot configure networking
settings, such as the IP address, gateway, network mask, NTP, and DNS.
DHCP dynamically distributes IP settings to NSX-T Data Center components, such as NSX Edge. In a
PXE environment, the DHCP server allows NSX Edge to request and receive an IP address
automatically.
TFTP is a file-transfer protocol. The TFTP server is always listening for PXE clients on the network. When
it detects any network PXE client asking for PXE services, it provides the NSX-T Data Center component
ISO file and the installation settings contained in a preseed file.
Prerequisites
n A PXE server must be available in your deployment environment. The PXE server can be set up on
any Linux distribution. The PXE server must have two interfaces, one for external communication and
another for providing DHCP IP and TFTP services.
If you have multiple management networks, you can add static routes to the other networks from the
NSX-T Data Center appliance.
n Verify that the preseeded configuration file has the parameters net.ifnames=0 and biosdevname=0
set after -- to persist after reboot.
Procedure
1 (Optional) Use a kickstart file to set up a new TFTP or DHCP services on an Ubuntu server.
A kickstart file is a text file that contains CLI commands that you run on the appliance after the first
boot.
Name the kickstart file based on the PXE server it is pointing to. For example:
nsxcli.install
The file must be copied to your Web server, for example at /var/www/html/nsx-edge/
nsxcli.install.
In the kickstart file, you can add CLI commands. For example, to configure the IP address of the
management interface:
stop dataplane
set interface eth0 <ip-cidr-format> plane mgmt
start dataplane
If you specify a password in the preseed.cfg file, use the same password in the kickstart file.
Otherwise, use the default password, which is "default".
2 Create two interfaces, one for management and another for DHCP and TFTP services.
Make sure that the DHCP/TFTP interface is in the same subnet that the NSX Edge resides in.
For example, if the NSX Edge management interfaces are going to be in the 192.168.210.0/24
subnet, place eth1 in that same subnet.
4 Edit the /etc/default/isc-dhcp-server file, and add the interface that provides the DHCP
service.
INTERFACES="eth1"
5 (Optional) If you want this DHCP server to be the official DHCP server for the local network,
uncomment the authoritative; line in the /etc/dhcp/dhcpd.conf file.
...
authoritative;
...
6 In the /etc/dhcp/dhcpd.conf file, define the DHCP settings for the PXE network.
For example:
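A minimal sketch of such a dhcpd.conf block, assuming the 192.168.210.0/24 management subnet used elsewhere in this chapter; the address range, gateway, DNS server, and TFTP server address (next-server) are all hypothetical values that you replace with your own:

subnet 192.168.210.0 netmask 255.255.255.0 {
    range 192.168.210.90 192.168.210.100;
    option routers 192.168.210.1;
    option domain-name-servers 192.168.110.10;
    option subnet-mask 255.255.255.0;
    option broadcast-address 192.168.210.255;
    next-server 192.168.210.10;
    filename "pxelinux.0";
}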
9 Install Apache, TFTP, and other components that are required for PXE booting.
RUN_DAEMON="yes"
OPTIONS="-l -s /var/lib/tftpboot"
14 Copy or download the NSX Edge installer ISO file to a temporary folder.
15 Mount the ISO file and copy the install components to the TFTP server and the Apache server.
You can use a Linux tool such as mkpasswd to create a password hash.
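For example, the mkpasswd utility from the Ubuntu whois package can generate a SHA-512 hash; it prompts for the password and prints the hash shown below:

mkpasswd --method=sha-512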
Password:
$6$SUFGqs[...]FcoHLijOuFD
a To modify the root password, edit /var/www/html/nsx-edge/preseed.cfg and search for the following line:
You do not need to escape any special character such as $, ', ", or \.
c Add the usermod command to preseed.cfg to set the password for root, admin, or both.
For example, search for the echo 'VMware NSX Edge' line and add the following command.
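A sketch of such commands; the hash values are placeholders that you replace with your own hash, with special characters escaped as described below:

usermod --password \$6\$<escaped-hash> root
usermod --password \$6\$<escaped-hash> admin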
The hash string is an example. You must escape all special characters. The root password in the
first usermod command replaces the password that is set in d-i passwd/root-password-crypted
password $6$tgm....
If you use the usermod command to set the password, the user is not prompted to change the
password at the first login. Otherwise, the user must change the password at the first login.
label nsxedge
kernel ubuntu-installer/amd64/linux
ipappend 2
allow booting;
allow bootp;
Note If an error is returned, for example: "stop: Unknown instance: start: Job failed to start", run
sudo /etc/init.d/isc-dhcp-server stop and then sudo /etc/init.d/isc-dhcp-server
start. The sudo /etc/init.d/isc-dhcp-server start command returns information about the
source of the error.
What to do next
Install NSX Edge on bare metal using an ISO file. See Install NSX Edge Automatically via ISO File.
Prerequisites
Procedure
1 Go to your MyVMware account (myvmware.com) and navigate to VMware NSX-T Data Center >
Downloads.
2 Locate and download the ISO file for NSX Edge for Bare Metal.
3 Log in to the out-of-band management interface (for example, Integrated Lights-Out (ILO) on HP
servers) of the bare metal.
6 Select Virtual Media > Map CD/DVD and browse to the ISO file.
During power-on, the installer requests a network configuration via DHCP. If DHCP is not available in
your environment, the installer prompts you for IP settings.
By default, the root login password is vmware, and the admin login password is default.
11 Open the console of the NSX Edge node VM to track the boot process.
If the console window does not open, make sure that pop-ups are allowed.
12 After the NSX Edge node VM starts, log in to the CLI with admin credentials.
Note After the NSX Edge node VM starts, if you do not log in with admin credentials for the first time, the data plane service does not start automatically on the NSX Edge node VM.
13 After the reboot, you can log in with either admin or root credentials. The default root password is
vmware.
Note If the server uses Mellanox NIC cards, do not configure the Edge with an in-band management interface.
n Tagged interface.
n In-band interface.
n (Optional) Create a bond0 interface as a management HA interface with multiple member interfaces.
You can configure a bond management interface on an NSX Edge using the following CLI command. Use the console to clear the existing management IP before you create a bond and add an interface to it.
Note Only active-backup mode is allowed on a bond interface. A bond interface does not allow you to configure a VLAN, so configure the VLAN as an access VLAN on the adjacent physical switch.
set interface bond0 ip x.x.x.x/mask gateway x.x.x.x plane mgmt mode active-backup members eth0, eth1 primary eth0
15 Run the get interface eth0 (without VLAN) or get interface eth0.<vlan_ID> (with a VLAN)
command to verify that the IP address was applied as expected.
Interface: eth0.100
Address: 192.168.110.37/24
MAC address: 00:50:56:86:62:4d
MTU: 1500
Default gateway: 192.168.110.1
Broadcast address: 192.168.110.255
...
Note When bringing up NSX Edge VMs on a non-NSX managed host, verify that the MTU setting is set to 1600 (instead of 1500) on the physical host switch for the data NIC.
16 (Tagged interface and In-band interface) Any existing VLAN management interface must be cleared
before creating a new one.
17 Set physical NICs to be used by NSX-T Data Center dataplane from the list of available PCI devices.
d get physical-port
After selecting physical NICs, restart NSX-T Data Center dataplane services for changes to take
effect.
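A sketch of the CLI sequence, assuming the dataplane should claim two specific devices; the device names are placeholders that you take from the get physical-port output:

get physical-port
set dataplane device list <device-name-1>,<device-name-2>
restart service dataplane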
18 To avoid network configuration errors, verify that the physical NICs selected match the NICs
configured in the transport node profiles.
19 Before creating NSX Edge as a transport node, reset the NIC list on dataplane.
20 Verify that the NSX Edge node VM has the required connectivity.
If you enabled SSH, make sure that you can SSH to your NSX Edge node VM and verify the
following:
n From the NSX Edge node VM, you can ping the node's default gateway.
n From the NSX Edge node VM, you can ping the hypervisor hosts that are either in the same
network or a network reachable through routing.
n From the NSX Edge node VM, you can ping the DNS server and NTP server.
Note If connectivity is not established, make sure the VM network adapter is in the proper network
or VLAN.
By default, the NSX Edge node VM datapath claims all virtual machine NICs except the management
NIC (the one that has an IP address and a default route). If you incorrectly assigned a NIC as the
management interface, follow these steps to use DHCP to assign management IP address to the
correct NIC.
a Log in to the CLI and run the stop service dataplane command.
c Place the interface into the DHCP network and wait for an IP address to be assigned to that interface.
The datapath fp-ethX ports used for the VLAN uplink and the tunnel overlay are shown in the get
interfaces and get physical-port commands on the NSX Edge node VM.
What to do next
If you did not join the NSX Edge with the management plane, see Join NSX Edge with the Management
Plane.
Prerequisites
Procedure
1 Go to your MyVMware account (myvmware.com) and navigate to VMware NSX-T Data Center >
Downloads.
2 Locate and download the ISO file for NSX Edge for Bare Metal.
6 Select Virtual Media > Map CD/DVD and browse to the ISO file.
10 In the Configure the keyboard window, select Yes if the installer must auto-detect the keyboard, or select No if you do not want the console to detect the keyboard.
12 In the Configure the network window, select the applicable primary network interface.
13 Enter the host name that connects to the selected primary interface and click Ok.
During power-on, the installer requests a network configuration via DHCP. If DHCP is not available in
your environment, the installer prompts you for IP settings.
By default, the root login password is vmware, and the admin login password is default.
n Enter the URL of the NSX kickstart config file if you want to automate NSX configuration on the
bare metal server.
n Leave the field blank if you want to manually configure NSX on the bare metal server.
n Select Yes if you want to unmount existing partitions so that new partitions can be created on
disks.
16 After the NSX Edge node VM starts, log in to the CLI with admin credentials.
Note After the NSX Edge node VM starts, if you do not log in with admin credentials for the first time, the data plane service does not start automatically on the NSX Edge node VM.
17 Run the get interface eth0 (without VLAN) or get interface eth0.<vlan_ID> (with a VLAN)
command to verify that the IP address was applied as expected.
Interface: eth0.100
Address: 192.168.110.37/24
MAC address: 00:50:56:86:62:4d
MTU: 1500
Default gateway: 192.168.110.1
Broadcast address: 192.168.110.255
...
Note When bringing up NSX Edge VMs on a non-NSX managed host, verify that the MTU setting is set to 1600 (instead of 1500) on the physical host switch for the data NIC.
Note If connectivity is not established, make sure the VM network adapter is in the proper network
or VLAN.
By default, the NSX Edge node VM datapath claims all virtual machine NICs except the management
NIC (the one that has an IP address and a default route). If you incorrectly assigned a NIC as the
management interface, follow these steps to use DHCP to assign management IP address to the
correct NIC.
a Log in to the CLI and run the stop service dataplane command.
c Place the interface into the DHCP network and wait for an IP address to be assigned to that interface.
The datapath fp-ethX ports used for the VLAN uplink and the tunnel overlay are shown in the get
interfaces and get physical-port commands on the NSX Edge node VM.
What to do next
If you did not join the NSX Edge with the management plane, see Join NSX Edge with the Management
Plane.
The QAT feature is enabled by default if the NSX Edge is deployed on a bare metal server with an installed Intel QuickAssist PCIe card that is based on the C62x chipset (Intel QuickAssist Adapter 8960 or 8970). The single root I/O virtualization (SR-IOV) interface must be enabled in the BIOS firmware.
To check the status of the QAT feature, enter the following command on the NSX Edge bare metal server
CLI.
The possible responses you might receive are listed in the following table.
QAT present, enabled, running The QAT feature is enabled and running.
QAT present, enabled, not running The QAT feature has been enabled, but the dataplane service
must be restarted for the status change to take effect.
QAT present, disabled, running The QAT feature has been disabled, but the dataplane service
must be restarted for the status change to take effect.
QAT not present The bare metal server on which you ran the CLI command does
not have a QAT device installed.
To disable or enable the use of an installed QAT device, use the following CLI commands. The expected
responses are also shown.
Important You must enter the restart service dataplane command at the CLI prompt for the QAT
feature status change to take effect.
Prerequisites
Verify that you have admin privileges to log in to the NSX Edges and NSX Manager appliance.
Procedure
1 Open an SSH session or console session to one of the NSX Manager appliances.
2 Open an SSH session or console session to the NSX Edge node VM.
3 On the NSX Manager appliance, run the get certificate api thumbprint command.
The command output is a string of alphanumeric characters that is unique to this NSX Manager.
For example:
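A hypothetical SHA-256 thumbprint, shown only to illustrate the format:

7baf8cb6a0025ea25368432bd6f2a2e7d0c7a66c42f5b4a3b9a2e4f1c8d9e0ab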
4 On the NSX Edge node VM, run the join management-plane command.
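A sketch of the command, with placeholder values for the NSX Manager address and the thumbprint obtained in the previous step; the command then prompts for the admin password:

join management-plane <nsx-manager-ip> username admin thumbprint <nsx-manager-thumbprint>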
5 Verify the result by running the get managers command on your NSX Edge node VMs.
6 In the NSX Manager UI, navigate to System > Fabric > Nodes > Edge Transport Nodes.
n The Configuration State column displays Configure NSX. Click Configure NSX to begin
configuration on the node. If the NSX Version column does not display the version number
installed on the node, try refreshing the browser window.
n Before you configure NSX on the NSX Edge node, the Node Status and Tunnel Status columns
display state Not Available. The Transport Zones and N-VDS switches columns display value
0, indicating there are no transport zones attached or N-VDS switches configured on the NSX
Edge node.
What to do next
When installing NSX Edge using NSX Manager, see Create an NSX Edge Transport Node.
When installing NSX Edge manually, see Configure an NSX Edge as a Transport Node.
A transport node is a node that is capable of participating in an NSX-T Data Center overlay or NSX-T
Data Center VLAN networking. Any node can serve as a transport node if it contains an N-VDS. Such
nodes include but are not limited to NSX Edges.
An NSX Edge can belong to one overlay transport zone and multiple VLAN transport zones. If a VM
requires access to the outside world, the NSX Edge must belong to the same transport zone that the
VM's logical switch belongs to. Generally, the NSX Edge belongs to at least one VLAN transport zone to
provide the uplink access.
Prerequisites
n An uplink profile must be configured or you can use the default uplink profile for bare-metal NSX
Edge nodes.
n At least one unused physical NIC must be available on the host or NSX Edge node.
Procedure
2 Select System > Fabric > Nodes > Edge Transport Nodes > Edit Edge.
Option Description
Edge Switch Name Select a VLAN switch from the drop-down menu.
Transport Zone Select the transport zones that this transport node belongs to. An NSX Edge
transport node belongs to at least two transport zones, an overlay for NSX-T Data
Center connectivity and a VLAN for uplink connectivity.
Note Multiple VTEPs in a transport zone must be configured to the same network
segment. If VTEPs in a transport zone are configured to different network
segments, BFD sessions cannot be established between the VTEPs.
Uplink Profile Select the uplink profile from the drop-down menu.
The available uplinks depend on the configuration in the selected uplink profile.
Option Description
IP Assignment Select Use IP Pool or Use Static IP List for the overlay N-VDS. These IP
addresses are assigned as VTEPs to the NSX Edge transport node. Multiple
VTEPs on an NSX Edge must be in the same subnet.
n If you select Use Static IP List, you must specify a list of comma-separated IP
addresses, a gateway, and a subnet mask.
n If you selected Use IP Pool for IP assignment, specify the IP pool name.
DPDK Fastpath Interfaces / Virtual NICs Select the data path interface name for the uplink interface.
Note To ensure that traffic flows through logical switches configured with named
teaming policies, map all uplinks in the default teaming policy to physical network
interfaces on the NSX Edge VM.
After adding the NSX Edge as a transport node, the connection status changes to Up in 10-12
minutes.
8 If you see inaccurate resource configuration details, such as compute, datastore, or network details
that were configured when you installed NSX Edge node, refresh the configuration details by running
the API command.
POST api/v1/transport-nodes/<transport-node-id>?action=refresh_node_configuration&resource_type=EdgeNode
NSX Manager displays inaccurate or stale NSX Edge node details if one of these conditions is true:
n On the NSX Edge node, you change settings such as SSH, NTP, or DNS from the NSX Edge command line interface.
n If the NSX Edge VM appliance is moved to another host using vCenter, the NSX Manager may
have stale values of compute, datastore, and network configuration based on the configuration of
the new host.
What to do next
Add the NSX Edge node to an NSX Edge cluster. See Create an NSX Edge Cluster.
Transport zones and transport nodes are important concepts in NSX-T Data Center.
n Configuring Profiles
n VLAN Micro-Segmentation
An NSX-T Data Center environment can contain one or more transport zones based on your
requirements. A host can belong to multiple transport zones. A logical switch can belong to only one
transport zone.
NSX-T Data Center does not allow connection of VMs that are in different transport zones in the Layer 2
network. The span of a logical switch is limited to a transport zone, so virtual machines in different
transport zones cannot be on the same Layer 2 network.
The overlay transport zone is used by both host transport nodes and NSX Edges. When a host is added
to an overlay transport zone, you can configure an N-VDS or a VDS switch on the host. When an NSX
Edge transport node is added to an overlay transport zone, you can only configure an N-VDS switch.
The VLAN transport zone is used by the NSX Edge and host transport nodes for its VLAN uplinks. When
an NSX Edge is added to a VLAN transport zone, a VLAN N-VDS is installed on the NSX Edge.
Procedure
5 Enter one or more uplink teaming policy names. These named teaming policies can be used by
logical switches attached to the transport zone. If the logical switches do not find a matching named
teaming policy, then the default uplink teaming policy is used.
6 After you add the transport zone, go to the Transport Zones page and view the newly added
transport zone.
7 (Optional) You can also view the new transport zone with the GET https://<nsx-mgr>/api/v1/
transport-zones API call.
{
"cursor": "00369b661aed-1eaa-4567-9408-ccbcfe50b416tz-vlan",
"result_count": 2,
"results": [
{
"resource_type": "TransportZone",
"description": "comp overlay transport zone",
"id": "efd7f38f-c5da-437d-af03-ac598f82a9ec",
"display_name": "tz-overlay",
"host_switch_name": "overlay-hostswitch",
"transport_type": "OVERLAY",
"transport_zone_profile_ids": [
{
"profile_id": "52035bb3-ab02-4a08-9884-18631312e50a",
"resource_type": "BfdHealthMonitoringProfile"
}
],
"_create_time": 1459547126454,
"_last_modified_user": "admin",
"_system_owned": false,
"_last_modified_time": 1459547126454,
"_create_user": "admin",
"_revision": 0,
"_schema": "/v1/schema/TransportZone"
},
{
"resource_type": "TransportZone",
"description": "comp vlan transport zone",
"id": "9b661aed-1eaa-4567-9408-ccbcfe50b416",
"display_name": "tz-vlan",
"host_switch_name": "vlan-uplink-hostwitch",
"transport_type": "VLAN",
"transport_zone_profile_ids": [
{
"profile_id": "52035bb3-ab02-4a08-9884-18631312e50a",
"resource_type": "BfdHealthMonitoringProfile"
}
],
"_create_time": 1459547126505,
"_last_modified_user": "admin",
"_system_owned": false,
"_last_modified_time": 1459547126505,
"_create_user": "admin",
"_revision": 0,
"_schema": "/v1/schema/TransportZone"
}
]
}
What to do next
Optionally, create a custom transport-zone profile and bind it to the transport zone. You can create
custom transport-zone profiles using the POST /api/v1/transportzone-profiles API. There is no UI
workflow for creating a transport-zone profile. After the transport-zone profile is created, you can bind it to the transport zone with the PUT /api/v1/transport-zones/<transport-zone-id> API.
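A sketch of the profile creation call with curl; the resource type matches the BfdHealthMonitoringProfile entries shown in the transport zone output above, but the enabled and probe_interval fields are assumptions that you should verify against the NSX-T API reference before use:

curl -k -u admin:'<password>' -X POST \
  https://<nsx-mgr>/api/v1/transportzone-profiles \
  -H 'Content-Type: application/json' \
  -d '{
        "resource_type": "BfdHealthMonitoringProfile",
        "display_name": "custom-bfd-profile",
        "enabled": true,
        "probe_interval": 1000
      }'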
Create a transport node. See Create a Standalone Host or Bare Metal Server Transport Node.
If you are using both ESXi and KVM hosts, one design option might be to use two different subnets for the
ESXi tunnel endpoint IP pool (sub_a) and the KVM tunnel endpoint IP Pool (sub_b). In this case, on the
KVM hosts a static route to sub_a must be added with a dedicated default gateway.
An example of the resulting routing table on an Ubuntu host where sub_a = 192.168.140.0 and sub_b =
192.168.150.0. (The management subnet, for example, might be 192.168.130.0).
The route can be added in at least two different ways. Of these two methods, the route persists after host
reboot only if you add the route by editing the interface. Adding a route using the route add command
does not persist after a host reboot.
In /etc/network/interfaces before "up ifconfig nsx-vtep0.0 up" add this static route:
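A sketch using the example subnets above and a hypothetical gateway of 192.168.150.1 on the KVM tunnel endpoint subnet:

post-up route add -net 192.168.140.0 netmask 255.255.255.0 gw 192.168.150.1

The equivalent non-persistent command, which is lost after a reboot, would be route add -net 192.168.140.0 netmask 255.255.255.0 gw 192.168.150.1.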
Procedure
Gateway 192.168.200.1
Results
You can also use the GET https://<nsx-mgr>/api/v1/pools/ip-pools API call to view the IP pool list.
What to do next
The N-VDS switch can be configured in the enhanced data path mode only on an ESXi host. ENS also
supports traffic flowing through Edge VMs.
In the enhanced data path mode, both traffic modes are supported:
n Overlay traffic
n VLAN traffic
See VMware Compatibility Guide to know NIC cards that support enhanced data path.
On the VMware Compatibility Guide page, under the IO devices category, select ESXi 6.7, IO device
Type as Network, and feature as N-VDS Enhanced Datapath.
2 Download and install the latest NIC drivers from the My VMware page.
VMware ESXi 6.7 ixgben-ens 1.1.3 NIC Driver for Intel Ethernet Controllers
82599, x520, x540, x550, and x552 family
VMware ESXi 6.7 i40en-ens 1.1.3 NIC Driver for Intel Ethernet Controllers
X710, XL710, XXV710, and X722 family
c To use the host as an ENS host, at least one ENS capable NIC must be available on the system.
If there are no ENS capable NICs, the management plane will not allow hosts to be added to ENS
transport zones.
esxcfg-nics -e
...
vmnic3  ixgben  True   False  e4:43:4b:7b:d2:e3  Intel(R) Ethernet Controller X550
vmnic4  i40en   True   False  3c:fd:fe:7c:47:40  Intel(R) Ethernet Controller X710/X557-AT 10GBASE-T
vmnic5  i40en   True   False  3c:fd:fe:7c:47:41  Intel(R) Ethernet Controller X710/X557-AT 10GBASE-T
vmnic6  i40en   True   False  3c:fd:fe:7c:47:42  Intel(R) Ethernet Controller X710/X557-AT 10GBASE-T
vmnic7  i40en   True   False  3c:fd:fe:7c:47:43  Intel(R) Ethernet Controller X710/X557-AT 10GBASE-T
wget <DriverInstallerURL>
h Reboot the host to load the driver. Proceed to the next step.
vmkload_mod -u i40en
ps | grep vmkdevmgr
ps | grep vmkdevmgr
j To uninstall the ENS driver, run esxcli software vib remove --vibname=i40en-ens --force --no-live-install.
Note ENS transport zones configured for overlay traffic: For a Microsoft Windows virtual machine running a VMware Tools version earlier than 11.0.0 with a VMXNET3 vNIC, ensure that the MTU is set to 1500. For a Microsoft Windows virtual machine running vSphere 6.7 U1 and VMware Tools version 11.0.0 or later, ensure that the MTU is set to a value less than 8900. For virtual machines running other supported OSes, ensure that the virtual machine MTU is set to a value less than 8900.
5 Create a host transport node. Configure the N-VDS or VDS switch in Enhanced Datapath mode and specify the logical cores and NUMA nodes.
If the NUMA node location of either the VM or the physical NIC is not available, then the Load Balanced
Source teaming policy does not consider NUMA awareness to align VMs and NICs.
The teaming policy functions without NUMA awareness in the following conditions:
n The LAG uplink is configured with physical links from multiple NUMA nodes.
n The ESXi host failed to define NUMA information for either VM or physical links.
The number of physical NICs available as uplinks for an overlay or a VLAN network determines how many redundant network paths are available for a VM to send traffic to the target VM. The redundant paths are used when the pNIC pinned to a logical switch fails. The enhanced data path switch provides redundant network paths between the hosts.
2 Prepare VLAN or Overlay Transport Zone with two N-VDS switches in Enhanced Data Path mode.
The N-VDS in enhanced data path mode ensures that if pNIC1 becomes unavailable, then traffic from VM
1 is routed through the redundant path - vNIC 1 → tunnel endpoint 2 → pNIC 2 → VM 2.
Configuring Profiles
Profiles allow you to consistently configure identical capabilities for network adapters across multiple
hosts or nodes.
Profiles are containers for the properties or capabilities that you want your network adapters to have.
Instead of configuring individual properties or capabilities for each network adapter, you can specify the
capabilities in the profiles, which you can then apply across multiple hosts or nodes.
An uplink profile defines policies for the uplinks. The settings defined by uplink profiles can include
teaming policies, active and standby links, transport VLAN ID, and MTU setting.
Configuring uplinks for VM appliance-based NSX Edge nodes and Host Transport nodes:
n If the Failover teaming policy is configured for an uplink profile, then you can only configure a single
active uplink in the teaming policy. Standby uplinks are not supported and must not be configured in
the failover teaming policy. When you install NSX Edge as a virtual appliance or host transport node,
use the default uplink profile.
n If the Load Balanced Source teaming policy is configured for an uplink profile, then you can configure
multiple active uplinks on the same N-VDS. Each uplink is associated with one physical NIC with a
distinct name and IP address. The IP address assigned to an uplink endpoint is configurable using IP
Assignment for the N-VDS.
You must use the Load Balanced Source teaming policy for traffic load balancing.
Prerequisites
n Each uplink in the uplink profile must correspond to an up and available physical link on your
hypervisor host or on the NSX Edge node.
For example, your hypervisor host has two physical links that are up: vmnic0 and vmnic1. Suppose
vmnic0 is used for management and storage networks, while vmnic1 is unused. This might mean that
vmnic1 can be used as an NSX-T Data Center uplink, but vmnic0 cannot. To do link teaming, you
must have two unused physical links available, such as vmnic1 and vmnic2.
For an NSX Edge, tunnel endpoint and VLAN uplinks can use the same physical link. For example,
vmnic0/eth0/em0 might be used for your management network and vmnic1/eth1/em1 might be used
for your fp-ethX links.
Procedure
2 Select System > Fabric > Profiles > Uplink Profiles > Add.
Option Description
LAGs (Optional) In the LAGs section, click Add for Link aggregation groups (LAGs) using
Link Aggregation Control Protocol (LACP) for the transport network.
The active and standby uplink names you create can be any text to represent
physical links. These uplink names are referenced later when you create transport
nodes. The transport node UI/API allows you to specify which physical link
corresponds to each named uplink.
Possible LAG hashing mechanism options:
n Source MAC address
n Destination MAC address
n Source and destination MAC address
n Source and destination IP address and VLAN
n Source and destination MAC address, IP address, and TCP/UDP port
Teamings In the Teaming section, you can either enter a default teaming policy or you can
choose to enter a named teaming policy. Click Add to add a named teaming
policy. A teaming policy defines how N-VDS uses its uplink for redundancy and
traffic load balancing. You can configure a teaming policy in the following modes:
n Failover Order: An active uplink is specified along with an optional list of
standby uplinks. If the active uplink fails, the next uplink in the standby list
replaces the active uplink. No actual load balancing is performed with this
option.
n Load Balance Source: A list of active uplinks is specified, and each interface
on the transport node is pinned to one active uplink. This configuration allows
use of several active uplinks at the same time.
Note
n On KVM hosts: Only Failover Order teaming policy is supported, whereas Load
Balance Source and Load Balance Source MAC teaming policies are not
supported.
n On NSX Edge: For default teaming policy, Load Balance Source and Failover
Order teaming policies are supported. For named teaming policy, only Failover
Order policy is supported.
n On ESXi hosts: Load Balance Source MAC, Load Balance Source, and
Failover Order teaming policies are supported.
(ESXi hosts and NSX Edge) You can define the following policies for a transport
zone:
n A Named teaming policy for every VLAN-based logical switch or segment.
n A Default teaming policy for the entire N-VDS.
Option Description
Named teaming policy: A named teaming policy means that for every VLAN-based
logical switch or segment, you can define a specific teaming policy mode and
uplinks names. This policy type gives you the flexibility to select specific uplinks
depending on the traffic steering policy, for example, based on bandwidth
requirement.
n If you define a named teaming policy, N-VDS uses that named teaming policy if
it is attached to the VLAN-based transport zone and finally selected for specific
VLAN-based logical switch or segment in the host.
n If you do not define any named teaming policies, N-VDS uses the default
teaming policy.
4 Enter a Transport VLAN value. The transport VLAN set in the uplink profile tags overlay traffic only
and the VLAN ID is used by the TEP endpoint.
The global physical uplink MTU configures the MTU value for all the N-VDS instances in the NSX-T
Data Center domain. If the global physical uplink MTU value is not specified, the MTU value is
inferred from the uplink profile MTU if configured or the default 1600 is used. The uplink profile MTU
value can override the global physical uplink MTU value on a specific host.
The global logical interface MTU configures the MTU value for all the logical router interfaces. If the
global logical interface MTU value is not specified, the MTU value is inferred from the tier-0 logical
router. The logical router uplink MTU value can override the global logical interface MTU value on a specific port.
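As a sketch of how these global values might be changed through the API, assuming the SwitchingGlobalConfig and RoutingGlobalConfig global-config resources and their physical_uplink_mtu and logical_uplink_mtu fields (verify both against the NSX-T API reference for your version): first GET the object to obtain its current _revision, then PUT it back with the new MTU.

GET https://<nsx-mgr>/api/v1/global-configs/SwitchingGlobalConfig

PUT https://<nsx-mgr>/api/v1/global-configs/SwitchingGlobalConfig
{
  "resource_type": "SwitchingGlobalConfig",
  "physical_uplink_mtu": 1700,
  "_revision": <current-revision>
}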
Results
In addition to the UI, you can also view the uplink profiles with the API call GET /api/v1/host-switch-
profiles.
What to do next
NIOC profile introduces a mechanism to reserve bandwidth for the system traffic based on the capacity of
the physical adapters on a host. Version 3 of the Network I/O Control feature offers improved network
resource reservation and allocation across the entire switch.
Network I/O Control version 3 for NSX-T Data Center supports the resource management of the system
traffic related to virtual machines and to infrastructure services, such as vSphere Fault Tolerance. System
traffic is strictly associated with an ESXi host.
n NFS Traffic: traffic related to file transfers in the network file system.
vCenter Server propagates the allocation from the distributed switch to each physical adapter on the
hosts that are connected to the switch.
n Shares: Shares, from 1 to 100, reflect the relative priority of a system traffic type against the other
system traffic types that are active on the same physical adapter. The relative shares assigned to a
system traffic type and the amount of data transmitted by other system features determine the
available bandwidth for that system traffic type.
n Reservation: The minimum bandwidth, in Mbps, that must be guaranteed on a single physical
adapter. The total bandwidth reserved among all system traffic types cannot exceed 75 percent of the
bandwidth that the physical network adapter with the lowest capacity can provide. Reserved
bandwidth that is unused becomes available to other types of system traffic. However, Network I/O
Control does not redistribute the capacity that system traffic does not use to virtual machine
placement.
n Limit: The maximum bandwidth, in Mbps or Gbps, that a system traffic type can consume on a single
physical adapter.
Note You can reserve no more than 75 percent of the bandwidth of a physical network adapter.
For example, if the network adapters connected to an ESXi host are 10 GbE, you can only allocate 7.5
Gbps bandwidth to the various traffic types. You might leave more capacity unreserved. The host can
allocate the unreserved bandwidth dynamically according to shares, limits, and use. The host reserves
only the bandwidth that is enough for the operation of a system feature.
Configure Network I/O Control and Bandwidth Allocation for System Traffic
on an N-VDS
To guarantee the minimum bandwidth to the system traffic running on NSX-T Data Center hosts, enable
and configure a network resource management on an N-VDS.
Procedure
2 Select System > Fabric > Profiles > NIOC Profiles > Add.
Option Description
Status Toggle to enable the bandwidth allocations listed in the traffic resources.
Host Infra Traffic Resource You can accept the default listed traffic resources.
Click Add and enter your traffic resource to customize the NIOC profile.
(Optional) Select an existing traffic type and click Delete to remove the resource
from the NIOC profile.
Configure Network I/O Control and Bandwidth Allocation for System Traffic
on an N-VDS Using APIs
You can use NSX-T Data Center APIs to configure the network and bandwidth for applications running on
the host.
Procedure
1 Query the host to display both system-defined and user-defined host switch profiles.
2 GET https://<nsx-mgr>/api/v1/host-switch-profiles?include_system_owned=true.
The sample response displays the NIOC profile that is applied to the host.
{
"description": "This profile is created for Network I/O Control (NIOC).",
"extends": {
"$ref": "BaseHostSwitchProfile"+
},
"id": "NiocProfile",
"module_id": "NiocProfile",
"polymorphic-type-descriptor": {
"type-identifier": "NiocProfile"
},
"properties": {
"_create_time": {
"$ref": "EpochMsTimestamp"+,
"can_sort": true,
"description": "Timestamp of resource creation",
"readonly": true
},
"_create_user": {
"description": "ID of the user who created this resource",
"readonly": true,
"type": "string"
},
"_last_modified_time": {
"$ref": "EpochMsTimestamp"+,
"can_sort": true,
"description": "Timestamp of last modification",
"readonly": true
},
"_last_modified_user": {
"description": "ID of the user who last modified this resource",
"readonly": true,
"type": "string"
},
"_links": {
"description": "The server will populate this field when returning the resource. Ignored on PUT
and POST.",
"items": {
"$ref": "ResourceLink"+
},
"readonly": true,
"title": "References related to this resource",
"type": "array"
},
"_protection": {
"description": "Protection status is one of the following:
PROTECTED - the client who retrieved the entity is not allowed to modify it.
NOT_PROTECTED - the client who retrieved the entity is allowed to modify it
REQUIRE_OVERRIDE - the client who retrieved the entity is a super user and can modify it,
but only when providing the request header X-Allow-Overwrite=true.
UNKNOWN - the _protection field could not be determined for this entity.",
"readonly": true,
"title": "Indicates protection status of this resource",
"type": "string"
},
"_revision": {
"description": "The _revision property describes the current revision of the resource.
To prevent clients from overwriting each other's changes, PUT operations must include the
current _revision of the resource,
which clients should obtain by issuing a GET operation.
If the _revision provided in a PUT request is missing or stale, the operation
will be rejected.",
"readonly": true,
"title": "Generation of this resource config",
"type": "int"
},
"_schema": {
"readonly": true,
"title": "Schema for this resource",
"type": "string"
},
"_self": {
"$ref": "SelfResourceLink"+,
"readonly": true,
"title": "Link to this resource"
},
"_system_owned": {
"description": "Indicates system owned resource",
"readonly": true,
"type": "boolean"
},
"description": {
"can_sort": true,
"maxLength": 1024,
"title": "Description of this resource",
"type": "string"
},
"display_name": {
"can_sort": true,
"description": "Defaults to ID if not set",
"maxLength": 255,
"title": "Identifier to use when displaying entity in logs or GUI",
"type": "string"
},
"enabled": {
"default": true,
"description": "The enabled property specifies the status of NIOC feature.
When enabled is set to true, NIOC feature is turned on and the bandwidth allocations
specified for the traffic resources are enforced.
When enabled is set to false, NIOC feature is turned off and no bandwidth allocation is
guaranteed.",
"nsx_feature": "Nioc",
"required": false,
"title": "Enabled status of NIOC feature",
"type": "boolean"
},
"host_infra_traffic_res": {
"description": "host_infra_traffic_res specifies bandwidth allocation for various traffic
resources.",
"items": {
"$ref": "ResourceAllocation"+
},
"nsx_feature": "Nioc",
"required": false,
"title": "Resource allocation associated with NiocProfile",
"type": "array"
},
"id": {
"can_sort": true,
"readonly": true,
"title": "Unique identifier of this resource",
"type": "string"
},
"required_capabilities": {
"help_summary":
"List of capabilities required on the fabric node if this profile is
used.
The required capabilities is determined by whether specific features are enabled in the
profile.",
"items": {
"type": "string"
},
"readonly": true,
"required": false,
"type": "array"
},
"resource_type": {
"$ref": "HostSwitchProfileType"+,
"required": true
},
"tags": {
"items": {
"$ref": "Tag"+
},
"maxItems": 30,
"title": "Opaque identifiers meaningful to the API user",
"type": "array"
}
},
"title": "Profile for Nioc",
"type": "object"
}
POST https://<nsx-mgr>/api/v1/host-switch-profiles
{
"description": "Specify limit, shares and reservation for all kinds of traffic.
Values for limit and reservation are expressed in percentage. And for shares,
the value is expressed as a number between 1-100.\nThe overall reservation among all traffic
types should not exceed 75%.
Otherwise, the API request will be rejected.",
"id": "ResourceAllocation",
"module_id": "NiocProfile",
"nsx_feature": "Nioc",
"properties": {
"limit": {
"default": -1.0,
"description": "The limit property specifies the maximum bandwidth allocation for a given
traffic type and is expressed in percentage. The default value for this
field is set to -1 which means the traffic is unbounded for the traffic
type. All other negative values for this property is not supported\nand will be rejected by
the API.",
"maximum": 100,
"minimum": -1,
"required": true,
"title": "Maximum bandwidth percentage",
"type": "number"
},
"reservation": {
"default": 0.0,
"maximum": 75,
"minimum": 0,
"required": true,
"title": "Minimum guaranteed bandwidth percentage",
"type": "number"
},
"shares": {
"default": 50,
"maximum": 100,
"minimum": 1,
"required": true,
"title": "Shares",
"type": "int"
},
"traffic_type": {
"$ref": "HostInfraTrafficType"+,
"required": true,
"title": "Resource allocation traffic type"
}
}
}
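As a point of reference, the following is a minimal sketch of a request body for the POST call above. The display name, traffic types (vmotion and vsan), and allocation values are illustrative assumptions; adjust them to your environment. The two reservations total 35 percent, which stays below the 75 percent overall limit described in the schema.
POST https://<nsx-mgr>/api/v1/host-switch-profiles
{
    "resource_type": "NiocProfile",
    "display_name": "nioc-profile-example",
    "enabled": true,
    "host_infra_traffic_res": [
        {
            "traffic_type": "vmotion",
            "limit": -1.0,
            "reservation": 10,
            "shares": 50
        },
        {
            "traffic_type": "vsan",
            "limit": -1.0,
            "reservation": 25,
            "shares": 100
        }
    ]
}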
4 Update the transport node configuration with the NIOC profile ID of the newly created NIOC profile.
PUT https://<nsx-mgr>/api/v1/transport-nodes/<TN-id>
{
"resource_type": "TransportNode",
"description": "Updated NSX configured Test Transport Node",
"id": "77816de2-39c3-436c-b891-54d31f580961",
"display_name": "NSX Configured TN",
"host_switch_spec": {
"resource_type": "StandardHostSwitchSpec",
"host_switches": [
{
"host_switch_profile_ids": [
{
"value": "e331116d-f59e-4004-8cfd-c577aefe563a",
"key": "UplinkHostSwitchProfile"
},
{
"value": "9e0b4d2d-d155-4b4b-8947-fbfe5b79f7cb",
"key": "LldpHostSwitchProfile"
},
{
"value": "b0185099-8003-4678-b86f-edd47ca2c9ad",
"key": "NiocProfile"
}
],
"host_switch_name": "nsxvswitch",
"pnics": [
{
"device_name": "vmnic1",
"uplink_name": "uplink1"
}
],
"ip_assignment_spec": {
"resource_type": "StaticIpPoolSpec",
"ip_pool_id": "ecddcdde-4dc5-4026-ad4f-8857995d4c92"
}
}
]
},
"transport_zone_endpoints": [
{
"transport_zone_id": "e14c6b8a-9edd-489f-b624-f9ef12afbd8f",
"transport_zone_profile_ids": [
{
"profile_id": "52035bb3-ab02-4a08-9884-18631312e50a",
"resource_type": "BfdHealthMonitoringProfile"
}
]
}
],
"host_switches": [
{
"host_switch_profile_ids": [
{
"value": "e331116d-f59e-4004-8cfd-c577aefe563a",
"key": "UplinkHostSwitchProfile"
},
{
"value": "9e0b4d2d-d155-4b4b-8947-fbfe5b79f7cb",
"key": "LldpHostSwitchProfile"
}
],
"host_switch_name": "nsxvswitch",
"pnics": [
{
"device_name": "vmnic1",
"uplink_name": "uplink1"
}
],
"static_ip_pool_id": "ecddcdde-4dc5-4026-ad4f-8857995d4c92"
}
],
"node_id": "41a4eebd-d6b9-11e6-b722-875041b9955d",
"_revision": 0
}
5 Verify that the NIOC profile parameters are updated in the com.vmware.common.respools.cfg file.
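As a hedged example of this verification step, you can inspect the realized resource pool values on the ESXi host with the net-dvs diagnostic utility; the configuration section name comes from the step above, and the output lines shown here are illustrative and vary by release:
[root@esxi:~] net-dvs -l | grep -A 5 "com.vmware.common.respools.cfg"
        com.vmware.common.respools.cfg:
                netsched.pools.persist.vmotion:0:50:-1:255
                netsched.pools.persist.vsan:0:100:-1:255
                ...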
Results
NIOC profile is configured with a pre-defined bandwidth allocation for applications running on NSX-T Data
Center hosts.
Prerequisites
Procedure
2 Select System > Fabric > Profiles > Edge Cluster Profiles > Add.
Option Description
Prerequisites
Procedure
2 Select Networking → Segments → Edge Bridge Profiles → Add Edge Bridge Profile.
Option Description
Name and Description Enter an NSX Edge bridge profile name.
You can optionally enter profile details, such as the primary and backup node
details.
Edge Cluster Select the NSX Edge cluster that you want to use.
Primary Node Designate the preferred NSX Edge node from the cluster.
Backup Node Designate the backup NSX Edge node to use if the primary node fails.
Note Transport node profiles are only applicable to hosts. They cannot be applied to NSX Edge transport
nodes.
Transport node creation begins when a transport node profile is applied to a vCenter Server cluster. NSX
Manager prepares the hosts in the cluster and installs the NSX-T Data Center components on all the
hosts. Transport nodes for the hosts are created based on the configuration specified in the transport
node profile.
On a cluster prepared with a transport node profile, these outcomes are true:
n When you move an unprepared host into a cluster applied with a transport node profile, NSX-T Data
Center automatically prepares the host as a transport node using the transport node profile.
n When you move a transport node from the cluster to an unprepared cluster or directly as a
standalone host under the data center, first the transport node configuration applied to the node is
removed and then NSX-T Data Center VIBs are removed from the host. See Triggering Uninstallation
from the vSphere Web Client.
To delete a transport node profile, you must first detach the profile from the associated cluster. The
existing transport nodes are not affected. New hosts added to the cluster are no longer automatically
converted into transport nodes.
n You can add a maximum of four N-VDS or VDS switches for each configuration: enhanced N-VDS or
VDS created for VLAN transport zone, standard N-VDS or VDS created for overlay transport zone,
enhanced N-VDS or VDS created for overlay transport zone.
n There is no limit on the number of standard N-VDS switches created for VLAN transport zone.
n In a single host cluster topology running multiple standard overlay N-VDS switches and edge VM on
the same host, NSX-T Data Center provides traffic isolation such that traffic going through the first N-
VDS is isolated from traffic going through the second N-VDS and so on. The physical NICs on each
N-VDS must be mapped to the edge VM on the host to allow the north-south traffic connectivity with
the external world. Packets moving out of a VM on the first transport zone must be routed through an
external router or an external VM to the VM on the second transport zone.
n Each N-VDS switch name must be unique. NSX-T Data Center does not allow use of duplicate switch
names.
n Each transport zone ID associated with each N-VDS or VDS host in a transport node configuration or
transport node profile configuration must be unique.
Prerequisites
n Verify that a cluster is available. See Deploy NSX Manager Nodes to Form a Cluster from UI.
n Verify that an IP pool is configured, or DHCP must be available in the network deployment. See
Create an IP Pool for Tunnel Endpoint IP Addresses.
Procedure
2 Select System > Fabric > Profiles > Transport Node Profiles > Add.
You can optionally add the description about the transport node profile.
4 In the Add Transport Node Profile panel, expand New Node Switch.
5 In the Type field, select between N-VDS and VDS as the host switch type to prepare the transport
node.
6 In the Mode field, depending upon the workload requirements, select the appropriate mode:
n Standard mode that applies to all the supported hosts. It is used for regular workloads.
n Enhanced Datapath is a networking stack mode that applies only to ESXi host transport nodes,
version 6.7 and later, that can belong to a transport zone. It is used for telecom workloads
that require relatively higher throughput and performance.
7 Select the available transport zones and click the > button to include the transport zones in the
transport node profile.
8 Select N-VDS as the host switch type and enter the switch details. Skip to the next step to select VDS
as the host switch.
Option Description
Transport Zones Shows the transport zones that are realized by the associated host switches. You
cannot add a transport zone if it is not realized by any N-VDS in the transport node
profile.
NIOC Profile Select the NIOC profile from the drop-down menu.
The bandwidth allocations specified in the profile for the traffic resources are
enforced.
Uplink Profile Select an existing uplink profile from the drop-down menu or create a custom
uplink profile.
You can also use the default uplink profile.
LLDP Profile By default, NSX-T only receives LLDP packets from a LLDP neighbor.
However, NSX-T can be set to send LLDP packets to and receive LLDP packets
from a LLDP neighbor.
IP Assignment Select Use DHCP, Use IP Pool, or Use Static IP List to assign an IP address to
virtual tunnel endpoints (VTEPs) of the transport node.
If you select Use Static IP List, you must specify a list of comma-separated IP
addresses, a gateway, and a subnet mask. All the VTEPs of the transport node
must be in the same subnet, otherwise the Bidirectional Forwarding Detection (BFD) session is not
established.
IP Pool If you selected Use IP Pool for an IP assignment, specify the IP pool name.
Option Description
Physical NICs Add physical NICs to the transport node. You can use the default uplink or assign
an existing uplink from the drop-down menu.
Click Add PNIC to configure additional physical NICs to the transport node.
Note Migration of the physical NICs that you add in this field depends on how you
configure PNIC only Migration, Network Mappings for Install, and Network
Mappings for Uninstall.
PNIC only Migration Before setting this field, consider the following points:
n Know whether the physical NIC defined is a used NIC or a free NIC.
n Determine whether VMkernel interfaces of a host need to be migrated along
with physical NICs.
Set the field:
n Enable PNIC only Migration if you only want to migrate physical NICs from a
VSS or DVS switch to an N-VDS switch.
n Disable PNIC only Migration if you want to migrate a used physical NIC and
its associated VMkernel interface mapping. A free or available physical NIC is
attached to the N-VDS switch when a VMkernel interface migration mapping is
specified.
On a host with multiple host switches:
n If all host switches are to migrate only PNICs, then you can migrate the PNICs
in a single operation.
n If some host switches are to migrate VMkernel interfaces and the remaining
host switches are to migrate only PNICs:
1 In the first operation, migrate only PNICs.
2 In the second operation, migrate VMkernel interfaces. Ensure that PNIC
only Migration is disabled.
Both PNIC only migration and VMkernel interface migration are not supported at
the same time across multiple hosts.
Option Description
Network Mappings for Install To migrate VMkernels to N-VDS switch during installation, map VMkernels to an
existing logical switch. The NSX Manager migrates the VMkernel to the mapped
logical switch on N-VDS.
Caution Ensure that the management NIC and management VMkernel interface
are migrated to a logical switch that is connected to the same VLAN that the
management NIC was connected to before migration. If vmnic<n> and
VMkernel<n> are migrated to a different VLAN, then connectivity to the host is lost.
Caution For pinned physical NICs, ensure that the host switch mapping of
physical NIC to a VMkernel interface matches the configuration specified in the
transport node profile. As part of the validation procedure, NSX-T Data Center
verifies the mapping and if the validation passes migration of VMkernel interfaces
to an N-VDS switch is successful. It is also mandatory to configure the network
mapping for uninstallation because NSX-T Data Center does not store the mapping
configuration of the host switch after migrating the VMkernel interfaces to the N-
VDS switch. If the mapping is not configured, connectivity to services, such as
vSAN, can be lost after migrating back to the VSS or VDS switch.
Network Mappings for Uninstall To revert the migration of VMkernels attached to an N-VDS switch during
uninstallation, map VMkernels to port groups on VSS or DVS, so that NSX
Manager knows which port group the VMkernel must be migrated back to on the
VSS or DVS. For a DVS switch, ensure that the port group is of the type
Ephemeral.
To revert the migration of VMkernels attached to a NSX-T port group created on a
vSphere Distributed Switch (VDS) during uninstallation, map VMkernels to port
groups on VSS or DVS, so that NSX Manager knows which port group the
VMkernel must be migrated back to on the VSS or DVS. For a DVS switch, ensure
that the port group is of the type Ephemeral.
Caution For pinned physical NICs, ensure that the transport node profile mapping
of physical NIC to VMkernel interface matches the configuration specified in the
host switch. It is mandatory to configure the network mapping for uninstallation
because NSX-T Data Center does not store the mapping configuration of the host
switch after migrating the VMkernel interfaces to the N-VDS switch. If the mapping
is not configured, connectivity to services, such as vSAN, can be lost after
migrating back to the VSS or VDS switch.
9 Select VDS as the host switch type and enter the switch details.
Option Description
Name (Hosts managed by a vSphere cluster) Select the vCenter Server that manages the
host switch.
Select the VDS that is created in vCenter Server.
Transport Zones Shows the transport zones that are realized by the associated host switches. You
cannot add a transport zone if it is not realized by any host switch.
Option Description
Uplink Profile Select an existing uplink profile from the drop-down menu or create a custom
uplink profile.
Note Ensure that the MTU value entered in the NSX-T Data Center uplink profile and on the VDS
switch is set to at least 1600. If the MTU value in vCenter Server for the VDS
switch is lower than the MTU value entered in the uplink profile, then NSX-T Data
Center displays an error asking you to enter an appropriate MTU value in the
vCenter Server.
IP Assignment Select Use DHCP, Use IP Pool, or Use Static IP List to assign an IP address to
virtual tunnel endpoints (VTEPs) of the transport node.
If you select Use Static IP List, you must specify a list of comma-separated IP
addresses, a gateway, and a subnet mask. All the VTEPs of the transport node
must be in the same subnet, otherwise the Bidirectional Forwarding Detection (BFD) session is not
established.
IP Pool If you selected Use IP Pool for an IP assignment, specify the IP pool name.
Teaming Policy Switch Mapping Map the uplinks defined in the NSX-T uplink profile with the VDS switch uplinks.
Alternatively, NSX-T uplinks can also be mapped to LAGs configured on the VDS
switch.
To configure or view the VDS switch uplinks, go to vCenter Server → vSphere
Distributed Switch. Click Actions → Settings → Edit Settings.
Note For a VDS switch, Uplinks/LAGs, NIOC profile, LLDP profile can be defined only in vSphere
ESXi host. These configurations are not available in NSX Manager. In addition, in NSX Manager, you
cannot configure networking mapping for install and uninstall if the host switch is a VDS switch. To
manage VMkernel adapters on a VDS switch, go to vCenter Server to attach VMkernel adapters to
Distributed Virtual port groups or NSX port groups.
10 If you have selected multiple transport zones, click ADD SWITCH again to configure the switch for
the other transport zones.
What to do next
Apply the transport node profile to an existing vSphere cluster. See Configure a Managed Host Transport
Node.
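If you prefer the API over the UI, the following sketch attaches a transport node profile to a vCenter Server cluster through the transport node collections endpoint. The IDs are placeholders; verify the exact payload against the NSX-T Data Center API Guide.
POST https://<nsx-mgr>/api/v1/transport-node-collections
{
    "resource_type": "TransportNodeCollection",
    "display_name": "TNC-Compute-Cluster",
    "compute_collection_id": "<compute-collection-id>",
    "transport_node_profile_id": "<transport-node-profile-id>"
}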
configure uninstall network-mapping (map logical ports to VMkernel interface) in the transport node profile
to be realized during uninstallation.
During migration physical NICs currently in use are migrated to an N-VDS switch, while available or free
physical NICs are attached to the N-VDS switch after migration.
Note Transport node profiles are applied to all member hosts of a cluster. But if you want to limit
migration of VMkernel interfaces on specific hosts, you can directly configure the host. After migration, N-
VDS handles traffic on the VLAN and overlay network for those interfaces attached to the N-VDS switch.
Important Configurations done to individual hosts are marked with the Overridden flag. Any further
updates to the transport node profile are not applied to these overridden hosts. These hosts remain in
overridden state until NSX-T Data Center is uninstalled.
In the following figure, if a host has only two physical NICs, you might want to assign both of those NICs
and their associated VMkernel interfaces to the N-VDS for redundancy, so that the interfaces do not lose
connectivity with the host.
(Figure: The VMkernel interfaces vmk0, vmk1, and vmk2 are migrated from vSwitch0 on the ESXi host to the
N-VDS, where they attach to the Management-VLAN-Management, Storage-VLAN-Storage, and
vMotion-VLAN-vMotion logical switches.)
Before migration, the ESXi host has two uplinks derived from the two physical ports - vmnic0 and vmnic1.
Here, vmnic0 is configured to be in an active state, attached to a VSS, whereas vmnic1 is unused. In
addition, there are three VMkernel interfaces: vmk0, vmk1, and vmk2.
You can migrate VMkernel interfaces by using the NSX-T Data Center Manager UI or NSX-T Data Center
APIs. See NSX-T Data Center API Guide.
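For example, when a transport node is updated with PUT https://<nsx-mgr>/api/v1/transport-nodes/<TN-id>, an install-time VMkernel mapping can be expressed per host switch. The device_name and destination_network fields follow the mapping format shown in the error table later in this section; the vmk_install_migration property name is an assumption to confirm against the NSX-T Data Center API Guide.
"vmk_install_migration": [
    {
        "device_name": "vmk1",
        "destination_network": "<logical-switch-id>"
    }
]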
Post migration, the vmnic0, vmnic1, and their VMkernel interfaces are migrated to the N-VDS switch. Both
vmnic0 and vmnic1 are connected over VLAN and overlay transport zones.
n PNIC and VMkernel migration: Before you migrate pinned physical NICs and associated VMkernel
interfaces to an N-VDS switch, make a note of the network-mapping (physical NICs to port group
mapping) on the host switch.
n PNIC only migration: If you plan to migrate only PNICs, ensure that the management physical NIC
connected to the management VMkernel interface is not migrated, because migrating it without its
VMkernel interface results in loss of connectivity with the host. For more details, see the PNIC only
Migration field in Add a Transport Node Profile.
n Revert migration: Before you plan to revert migrate VMkernel interfaces to the VSS or DVS host
switch for pinned physical NICs, ensure that you make a note of the network-mapping (physical NIC
to port group mapping) on the host switch. It is mandatory to configure the transport node profile with
the host switch mapping in the Network Mapping for Uninstallation field. Without this mapping,
NSX-T Data Center does not know which port groups the VMkernel interfaces must be migrated back
to. This situation can lead to loss of connectivity to the vSAN network.
n vCenter Server registration before migration: If you plan to migrate a VMkernel or PNIC connected to
a DVS switch, ensure that a vCenter Server is registered with the NSX Manager.
n Match VLAN ID: After migration, the management NIC and management VMkernel interface must be
on the same VLAN the management NIC was connected to before migration. If vmnic0 and vmk0 are
connected to the management network and migrated to a different VLAN, then connectivity to the
host is lost.
n Migration to VSS switch: You cannot migrate two VMkernel interfaces back to the same port group of a
VSS switch.
n vMotion: Perform vMotion to move VM workloads to another host before VMkernel and/or PNIC
migration. If migration fails, then workload VMs are not impacted.
n vSAN: If the vSAN traffic is running on the host, place the host in maintenance mode through vCenter
Server and move VMs out of the host using vMotion functionality before VMkernel and/or PNIC
migration.
n Migration: If a VMkernel is already connected to a target switch, it can still be selected to be migrated
into the same switch. This property makes the VMK and/or PNIC migration operation idempotent. It
helps when you want to migrate only PNICs into a target switch. As migration always requires at least
one VMkernel and a PNIC, you select a VMkernel that is already migrated to a target switch when
you migrate only PNICs into a target switch. If no VMkernel needs to be migrated, create a temp
VMkernel through a vCenter Server in either the source switch or target switch. Then migrate it
together with the PNICs, and delete the temp VMkernel through vCenter Server after the migration is
finished.
n MAC sharing: If a VMkernel interface and a PNIC share the same MAC and they are in the same
switch, they must be migrated together to the same target switch if they will both be used after
migration. Always keep vmk0 and vmnic0 in the same switch.
Check the MACs used by all VMKs and PNICs in the host by running the following commands:
esxcfg-vmknic -l
esxcfg-nics -l
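Illustrative, abridged output from the two commands, showing vmk0 and vmnic0 sharing the same MAC address and therefore needing to be migrated to the same target switch; interface names, addresses, and columns are examples only:
# esxcfg-vmknic -l
Interface  Port Group/DVPort   IP Family  IP Address      Netmask        MAC Address        MTU
vmk0       Management Network  IPv4       192.168.210.53  255.255.255.0  00:50:56:01:2a:3b  1500
# esxcfg-nics -l
Name    PCI           Driver  Link  Speed     Duplex  MAC Address        MTU
vmnic0  0000:0b:00.0  ntg3    Up    1000Mbps  Full    00:50:56:01:2a:3b  1500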
n VIF logical ports created after migration: After you migrate VMkernel from a VSS or DVS switch to an
N-VDS switch, a logical switch port of the type VIF is created on the NSX Manager. You must not
create distributed firewall rules on these VIF logical switch ports.
2 Power off VMs on the host from which VMkernel interfaces and PNICs are migrated to an N-VDS
switch.
3 Configure a transport node profile with a network mapping that is used to migrate the VMkernel
interfaces during the creation of transport nodes. Network mapping means mapping a VMkernel
interface to a logical switch.
4 Verify that the network adapter mappings in vCenter Server reflect a new association of the VMkernel
interfaces with an N-VDS switch. In case of pinned physical NICs, verify that the mapping in NSX-T Data
Center reflects any VMkernels pinned to a physical NIC in the vCenter Server.
5 In NSX Manager, go to Networking → Segments. On the Segments page, verify that the VMkernel
interface is attached to the segment through a newly created logical port.
6 Go to System > Nodes > Host Transport Node. For each transport node, verify the status on the
Node Status column is Success to confirm that the transport node configuration is successfully
validated.
7 On the Host Transport Node page, verify the status on the Configuration State is Success to
confirm that the host is successfully realized with the specified configuration.
After you migrate VMkernel interfaces and PNICs from a VDS to an N-VDS switch using the NSX-T UI or
the transport node API, vCenter Server displays warnings for the VDS. If the host does not need to be
connected to the VDS, remove the host from the VDS. vCenter Server then no longer displays any warning
for the VDS.
For details on errors that you might encounter during migration, see VMkernel Migration Errors.
1 On the ESXi host, power off VMs connected to the logical ports that hosts the VMkernel interface
after migration.
2 Configure the transport node profile with network mapping that is used to migrate the VMkernel
interfaces during the uninstallation process. Network mapping during uninstallation maps the
VMkernel interfaces to a port group on VSS or DVS switch on the ESXi host.
Note When reverting migration of a VMkernel to a port group on a DVS switch, ensure that the port group
type is set to Ephemeral.
3 Verify that the network adapter mappings in vCenter Server reflect a new association of the VMkernel
interfaces with a port group of the VSS or DVS switch.
4 In NSX Manager, go to Networking → Segments. On the Segments page, verify that the segments
containing the VMkernel interfaces are deleted.
For details on errors that you might encounter during migration, see VMkernel Migration Errors
Important
n Stateful hosts: Add and Update operations are supported. To update an existing mapping, you can
add a new VMkernel interface entry to the network-mapping configuration. If you update the network
mapping configuration of a VMkernel interface that is already migrated to the N-VDS switch, the
updated network mapping is not realized on the host.
n Stateless hosts: Add, Update, and Remove operations are supported. Any changes you make to the
network-mapping configuration are realized after the host reboots.
To update the VMkernel interfaces to a new logical switch, you can edit the transport node profile to
apply the network mappings at a cluster level. If you only want the updates to be applied to a single
host, configure the transport node using host-level APIs.
Note After you update the transport node configuration for an individual host, any new updates applied
through the transport node profile are not applied to that host. The host state changes to Overridden.
1 To update all hosts in a cluster, edit the Network Mapping during Installation field to update the
VMkernel mapping to logical switches.
2 Save the changes. Changes made to a transport node profile are automatically applied to all the
member hosts of the cluster, except hosts that are marked with the Overridden state.
3 Similarly, to update an individual host, edit the VMkernel mapping in the transport node configuration.
Note If you update the Network Mapping during Installation field with a new VMkernel mapping, then
the same VMkernel interface must be added to the Network Mapping during uninstallation field.
For details on errors that you might encounter during migration, see VMkernel Migration Errors
1 Prepare and configure a host as a reference host using transport node APIs.
3 In the vCenter Server, apply the host profile to the stateless cluster.
4 In NSX-T Data Center, apply the transport node profile to the stateless cluster.
The cluster hosts might take several minutes to realize the updated states.
n If the migration still continues to fail, the host performs a rollback to the earlier configuration by
retaining VMkernel connectivity with the management physical NIC, vmnic0.
n If the rollback also fails and the VMkernel configured on the management physical NIC is lost, you
must reset the host.
n VMkernel interfaces from two different VSS or DVS switches are migrated at the same time.
n On stateful hosts, network mapping is updated to map VMkernel interface to another logical switch.
For example, before migration the VMkernel is mapped to Logical Switch 1, and the VMkernel
interface is mapped to Logical Switch 2.
8224 Unable to find the host switch specified by the transport node configuration.
Problem: The host switch ID cannot be found.
Resolution: Ensure that the transport zone is created with the host switch name and then create the transport node. Ensure that a valid host switch is used in the transport node configuration.
8225 VMkernel migration is in progress.
Problem: Migration is in progress.
Resolution: Wait for the migration to complete before performing another action.
8226 VMkernel migration is only supported on an ESXi host.
Problem: Migration is only valid for ESXi hosts.
Resolution: Ensure that the host is an ESXi host before you initiate migration.
8227 VMkernel interface is not appended with the host switch name.
Problem: On a host with multiple host switches, NSX-T Data Center cannot identify the association of each VMkernel interface with its host switch.
Resolution: If the host has multiple N-VDS host switches, ensure the VMkernel interface is appended with the host switch name of the N-VDS the host is connected to. For example, the network mapping for uninstallation of a host with N-VDS host switch name nsxvswitch1 and VMkernel1 and another N-VDS host switch name nsxvswitch2 and VMkernel2 must be defined as follows: device_name: VMkernel1@nsxvswitch1, destination_network: DPortGroup.
8228 Host switch used in the device_name field not found on the host.
Problem: Incorrect host switch name.
Resolution: Enter the correct host switch name.
8229 Transport node did not specify the transport zone of the logical switch.
Problem: Transport zone not added.
Resolution: Add the transport zone to the transport node configuration.
8230 No physical NIC on the host switch.
Problem: There must be at least one physical NIC on the host switch.
Resolution: Specify at least one physical NIC to an uplink profile and the VMkernel network mapping configuration to a logical switch.
8231 Host switch name does not match.
Problem: The host switch name used in vmk1@host_switch does not match the host switch name used by the destination logical switch of the interface.
Resolution: Ensure that the host switch name specified in the network mapping configuration matches the name used by the logical switch of the interface.
8232 Logical switch not realized on the host.
Problem: Realization of the logical switch on the host was unsuccessful.
Resolution: Synchronize the host with the NSX Manager.
8233 Unexpected logical switch in the network interface mapping.
Problem: The network interface mapping for installation and uninstallation lists both logical switches and port groups.
Resolution: Network mapping for installation must only contain logical switches as destination targets. Similarly, network mapping for uninstallation must only contain port groups as destination targets.
8294 Logical switch does not exist in the network interface mapping.
Problem: Logical switches not specified.
Resolution: Ensure that logical switches are specified in the network interface mapping configuration.
8296 Host switch mismatch.
Problem: The network interface mapping for uninstallation is configured with the incorrect host switch name.
Resolution: Ensure that the host switch name used in the mapping configuration matches the name entered on the host switch where the VMkernel interfaces reside.
8297 Duplicate VMkernel.
Problem: Duplicate VMkernels are specified for migration.
Resolution: Ensure that no duplicate VMkernel interfaces are specified in the installation or uninstallation mapping configuration.
8298 Mismatch of number of VMkernel interfaces and destinations.
Problem: Incorrect configuration.
Resolution: Ensure that each VMkernel interface has a corresponding destination specified in the configuration.
8299 Cannot delete transport node as the VMkernel interface is using ports on N-VDS.
Problem: VMkernel interfaces are using ports from the N-VDS switch.
Resolution: Revert the migration of all VMkernel interfaces from the N-VDS switch to a VSS/DVS switch. Then attempt to delete the transport node.
9412 VMkernel cannot be migrated from one N-VDS to another N-VDS.
Problem: Unsupported action.
Resolution: Revert the migration of the VMkernel interface to a VSS or DVS switch. Then, you can migrate the VMkernel interface to another N-VDS switch.
9413 VMkernel interfaces cannot be migrated to a different logical switch.
Problem: On stateful hosts, a VMkernel connected to a logical switch cannot be migrated to another logical switch.
Resolution: Revert the migration of the VMkernel from the logical switch to a VSS/DVS switch. Then, migrate the VMkernel to another logical switch on the N-VDS.
9414 Duplicate VMkernel interfaces.
Problem: Duplicate VMkernel interfaces are mapped in the installation and uninstallation mapping configuration.
Resolution: Ensure that each VMkernel interface is unique in the installation and uninstallation mappings.
9415 Powered on VMs on the host.
Problem: With powered on VMs, migration does not proceed.
Resolution: Power off the VMs on the host before you initiate migration of VMkernel interfaces.
9416 VMkernel cannot be found on the host.
Problem: The network mapping configuration did not specify a VMkernel that exists on the host.
Resolution: Specify a VMkernel that exists in the network mapping configuration.
9417 Port group not found.
Problem: The network mapping configuration did not specify a port group that exists on the host.
Resolution: Specify a port group that exists in the network mapping configuration.
9419 Logical switch not found during migration.
Problem: The logical switch defined in the network interface mapping configuration was not found.
Resolution: Specify a logical switch that exists in the network interface mapping configuration.
9420 Logical port not found during migration.
Problem: During migration, NSX-T Data Center does not find the ports created on the logical switch.
Resolution: Ensure that no logical ports are deleted from the logical switch for migration to be successful.
9421 Host information missing to validate the migration process.
Problem: Unable to retrieve host information from inventory.
Resolution: Retry the migration process.
9423 Pinned physical NICs to a VMkernel interface are not migrated to the correct host switch.
Problem: A pinned physical NIC was found in the environment, but the VMkernel and physical NIC are not being migrated to the same host switch.
Resolution: A physical NIC pinned to a VMkernel interface must have a transport node configuration that maps the physical NIC with the VMkernel on the same host switch.
600 Object not found.
Problem: The specified transport zone used by the logical switch does not exist, or the logical switch found in the VMK mapping destination cannot be found.
Resolution: Specify a transport zone which exists in the environment. Create the desired logical switch or use an existing VLAN logical switch.
8310 The logical switch type is incorrect.
Problem: The logical switch type is Overlay.
Resolution: Create a VLAN logical switch.
9424 Cannot migrate if both PNIC only Migration and Network Mapping for install or uninstall settings are configured at the same time.
Problem: Migration progresses only when one of these settings is configured.
Resolution: Ensure that either the PNIC only Migration or the Network Mapping for install or uninstall setting is configured.
For a host or bare metal server to be part of the NSX-T Data Center overlay, it must first be added to the
NSX-T Data Center fabric.
A transport node is a node that participates in an NSX-T Data Center overlay or NSX-T Data Center
VLAN networking.
For a KVM host or bare metal server, you can preconfigure the N-VDS, or you can have NSX Manager
perform the configuration. For an ESXi host, NSX Manager always configures the N-VDS.
Note If you plan to create transport nodes from a template VM, make sure that there are no certificates
on the host in /etc/vmware/nsx/. nsx-proxy does not create a certificate if a certificate exists.
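A quick check before converting the prepared VM to a template, based on the directory named in the note above (file names, if any, vary by release); remove any certificate files found so that nsx-proxy generates a fresh certificate on each clone:
# On the template host, list any leftover NSX certificates
ls -l /etc/vmware/nsx/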
Bare metal server supports an overlay and VLAN transport zone. You can use the management interface
to manage the bare metal server. The application interface allows you to access the applications on the
bare metal server.
Single physical NICs provide an IP address for both the management and application IP interfaces.
Dual physical NICs provide a physical NIC and a unique IP address for the management interface. Dual
physical NICs also provide a physical NIC, and a unique IP address for the application interface.
Multiple physical NICs in a bonded configuration provide dual physical NICs, and a unique IP address for
both the management interface and the application interface.
You can add a maximum of four N-VDS switches for each configuration:
In a single host cluster topology running multiple standard overlay N-VDS switches and edge VM on the
same host, NSX-T Data Center provides traffic isolation such that traffic going through the first N-VDS is
isolated from traffic going through the second N-VDS. The physical NICs on each N-VDS must be
mapped to the edge VM on the host to allow the north-south traffic connectivity with the external world.
Packets moving out of a VM on the first transport zone must be routed through an external router, or an
external VM to the VM on the second transport zone.
Prerequisites
n The host must be joined with the management plane, and connectivity must be Up.
n An uplink profile must be configured, or you can use the default uplink profile.
n At least one unused physical NIC must be available on the host node.
n Hostname
n Management IP address
n User name
n Password
Procedure
1 (Optional) Retrieve the hypervisor thumbprint so that you can provide it when adding the host to the
fabric.
b Retrieve the SHA-256 thumbprint from a KVM hypervisor, run the command in the KVM host.
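One way to compute the SHA-256 thumbprint of the KVM host is to hash its SSH host key; this sketch assumes the default RSA host key path and standard Linux tools:
# Run on the KVM host
awk '{print $2}' /etc/ssh/ssh_host_rsa_key.pub | base64 -d | sha256sum -b | sed 's/ .*$//' | xxd -r -p | base64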
2 Select System > Fabric > Nodes > Host Transport Nodes.
3 From the Managed by field, select Standalone Hosts and click + Add.
4 Enter the standalone host or bare metal server details to add to the fabric.
Option Description
Name and Description Enter the name to identify the standalone host or bare metal server.
You can optionally add the description of the operating system used for the host or
bare metal server.
Operating System Select the operating system from the drop-down menu.
Depending on your host or bare metal server, you can select any of the supported
operating systems. See System Requirements.
Important Among the different flavors of Linux supported, you must know the
distinction between a bare metal server running a Linux distribution versus using a
Linux distribution as a hypervisor host. For example, selecting Ubuntu Server as
the operating system means setting up a bare metal server running a Linux server,
whereas selecting Ubuntu KVM means the Linux hypervisor deployed is Ubuntu.
Username and Password Enter the host user name and password.
5 (Required) For a KVM host or bare metal server, select the N-VDS type.
Option Description
For an ESXi host, the N-VDS type is always set to NSX Created.
6 If you select the N-VDS switch to operate in Standard (All hosts) mode, enter values in the following
fields. You can configure multiple N-VDS switches on a single host.
Option Description
Transport Zone From the drop-down menu, select the transport zone that this transport node joins.
NIOC Profile From the drop-down menu, select an NIOC profile for the ESXi host or create a
custom NIOC profile.
You can also select the default NIOC profile.
Uplink Profile Select an existing uplink profile from the drop-down menu or create a custom
uplink profile.
You can also use the default uplink profile.
LLDP Profile By default, NSX-T only receives LLDP packets from a LLDP neighbor.
However, NSX-T can be set to send LLDP packets to and receive LLDP packets
from a LLDP neighbor.
IP Pool If you selected Use IP Pool for IP assignment, specify the IP pool name.
Teaming Policy Switch Mapping Add physical NICs to the transport node. You can use the default uplink or assign
an existing uplink from the drop-down menu.
Option Description
PNIC only Migration Before setting this field, consider the following points:
n Know whether the physical NIC defined is a used NIC or a free NIC.
n Determine whether VMkernel interfaces of a host need to be migrated along
with physical NICs.
Set the field:
n Enable PNIC only Migration if you only want to migrate physical NICs from a
VSS or DVS switch to an N-VDS switch.
n Disable PNIC only Migration if you want to migrate a used physical NIC and
its associated VMkernel interface mapping. A free or available physical NIC is
attached to the N-VDS switch when a VMkernel interface migration mapping is
specified.
On a host with multiple host switches:
n If all host switches are to migrate only PNICs, then you can migrate the PNICs
in a single operation.
n If some host switches are to migrate VMkernel interfaces and the remaining
host switches are to migrate only PNICs:
1 In the first operation, migrate only PNICs.
2 In the second operation, migrate VMkernel interfaces. Ensure that PNIC
only Migration is disabled.
Both PNIC only migration and VMkernel interface migration are not supported at
the same time across multiple hosts.
Option Description
Network Mappings for Install To migrate VMkernels to N-VDS switch during installation, map VMkernels to an
existing logical switch. The NSX Manager migrates the VMkernel to the mapped
logical switch on N-VDS.
Caution Ensure that the management NIC and management VMkernel interface
are migrated to a logical switch that is connected to the same VLAN that the
management NIC was connected to before migration. If vmnic<n> and
VMkernel<n> are migrated to a different VLAN, then connectivity to the host is lost.
Caution For pinned physical NICs, ensure that the host switch mapping of
physical NIC to a VMkernel interface matches the configuration specified in the
transport node profile. As part of the validation procedure, NSX-T Data Center
checks the mapping and if the validation passes migration of VMkernel interfaces
to an N-VDS switch is successful. At the same time it is mandatory to configure the
network mapping for uninstallation because NSX-T Data Center does not store the
mapping configuration of the host switch after migrating the VMkernel interfaces to
the N-VDS switch. If the mapping is not configured, connectivity to services, such
as vSAN, can be lost after migrating back to the VSS or VDS switch.
Network Mappings for Uninstall To revert the migration of VMkernels during uninstallation, map VMkernels to port
groups on VSS or DVS, so that NSX Manager knows which port group the
VMkernel must be migrated back to on the VSS or DVS. For a DVS switch, ensure
the port group is of the type Ephemeral.
To revert the migration of VMkernels attached to a NSX-T port group created on a
vSphere Distributed Virtual Switch 7.0 during uninstallation, map VMkernels to port
groups on VSS or DVS, so that NSX Manager knows which port group the
VMkernel must be migrated back to on the VSS or DVS. For a DVS switch, ensure
that the port group is of the type Ephemeral.
Caution For pinned physical NICs, ensure that the transport node profile mapping
of physical NIC to VMkernel interface matches the configuration specified in the
host switch. It is mandatory to configure the network mapping for uninstallation
because NSX-T Data Center does not store the mapping configuration of the host
switch after migrating the VMkernel interfaces to the N-VDS switch. If the mapping
is not configured, connectivity to services, such as vSAN, can be lost after
migrating back to the VSS or VDS switch.
7 If you select the N-VDS switch to operate in Performance mode, enter values in the following
additional fields. You can configure multiple N-VDS switches on a single host.
Option Description
(CPU Config) NUMA Node Index In the NUMA Node Index drop-down menu, select the NUMA node that you want to
assign to an N-VDS switch. The first NUMA node present on the node is
represented with the value 0.
You can find out the number for NUMA nodes on your host by running the esxcli
hardware memory get command.
Note If you want to change the number of NUMA nodes that have affinity with an
N-VDS switch, you can update the NUMA Node Index value.
(CPU Config) LCores per NUMA Nodes In the Lcore per NUMA node drop-down menu, select the number of logical cores
that must be used by enhanced datapath.
You can find out the maximum number of logical cores that can be created on the
NUMA node by running the esxcli network ens maxLcores get command.
Note If you exhaust the available NUMA nodes and logical cores, any new switch
added to the transport node cannot be enabled for ENS traffic.
Option Description
N-VDS External ID Must be the same as the N-VDS name of the transport zone that this node belongs
to.
9 View the connection status on the Host Transport Nodes page. During the configuration process,
each transport node displays the percentage of progress of the installation process. If installation fails
at any stage, you can restart the process by clicking the Resolve link that is available against the
failed stage of the process.
After adding the host or bare metal server to the fabric, the connection to NSX Manager displays as UP
only after the host is successfully created as a transport node.
u For ESXi, enter the esxcli network ip connection list | grep 1234 command.
u For KVM, enter the command netstat -anp --tcp | grep 1234.
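The expected result of either command is an established TCP connection from the host to the NSX Manager on port 1234; the output below is illustrative and abridged:
# esxcli network ip connection list | grep 1234
tcp   0   0   192.168.210.53:38022   192.168.110.31:1234   ESTABLISHED   ...   nsx-proxy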
11 Verify that the NSX-T Data Center modules are installed on your host or bare metal server.
As a result of adding a host or bare metal server to the NSX-T Data Center fabric, a collection of
NSX-T Data Center modules are installed on the host or bare metal server.
n On RHEL or CentOS, enter the command yum list installed or rpm -qa.
n On Windows, open Task Manager. Or, from the command line, enter tasklist /V | findstr "nsx ovs".
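For example, on RHEL or CentOS you can filter the package list for NSX-T modules; the package names and versions below are illustrative only:
# rpm -qa | grep -i nsx
nsx-agent-3.0.0-<build>.el7.x86_64
nsx-cli-3.0.0-<build>.el7.x86_64
nsx-opsagent-3.0.0-<build>.el7.x86_64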
12 (Optional) Change the polling intervals of certain processes, if you have 500 hypervisors or more.
The NSX Manager might experience high CPU use and performance problems if there are more than
500 hypervisors.
a Use the NSX-T Data Center CLI command copy file or the API POST /api/v1/node/file-
store/<file-name>?action=copy_to_remote_file to copy the aggsvc_change_intervals.py
script to a host.
b Run the script, which is located in the NSX-T Data Center file store.
Results
Note For an NSX-T Data Center created N-VDS, after the transport node is created, if you want to
change the configuration, such as IP assignment to the tunnel endpoint, you must do it through the NSX
Manager GUI and not through the CLI on the host.
What to do next
Migrate network interfaces from a vSphere Standard Switch to an N-VDS switch. See VMkernel Migration
to an N-VDS Switch.
High-level tasks to configure a cluster or a standalone managed host using a VDS switch.
Important To create a VDS switch supporting NSX-T networking, the following conditions must be met:
Prerequisites
n Verify that ESXi hosts have the required number of physical NICs to meet networking requirements.
For example, if you plan to configure teaming policies and remote span port mirroring, ensure that a
free physical NIC is available to avoid uplink conflicts.
Procedure
1 In a vCenter Server, create a VDS. For more information about creating a VDS, see the vSphere
Networking documentation.
n Connect the switch to hosts that you want to prepare for NSX-T networking.
2 In NSX-T, add an uplink profile that defines a teaming policy mapping NSX-T uplinks with VDS
uplinks.
At the end of the configuration, the host is prepared as NSX-T transport node with VDS as the host
switch.
What to do next
Configure the host as a transport node. See Configure a Managed Host Transport Node.
This task is only to prepare individual ESXi nodes as transport nodes. If you want to prepare a cluster so
that all hosts in that cluster are prepared as transport nodes, apply a transport node profile to the cluster.
See Add a Transport Node Profile.
Prerequisites
n Verify that all hosts in the vCenter Server are powered on.
n Verify that the system requirements are met. See System Requirements.
n Verify that a transport node profile is configured. See Add a Transport Node Profile.
n (Host in lockdown mode) If your exception list for vSphere lockdown mode includes expired user
accounts, NSX-T Data Center installation on vSphere fails. Ensure that you delete all expired user
accounts before you begin installation. For more information on accounts with access privileges in
lockdown mode, see Specifying Accounts with Access Privileges in Lockdown Mode in the vSphere
Security Guide.
n With VMware vSphere® Lifecycle Manager enabled on a cluster, these limitations apply:
n You cannot prepare an individual host that is part of the cluster as a transport node. Even though
a standalone prepared host can be included in the cluster, you must avoid moving such a
transport node to the cluster. This is because, when an administrator remediates the host, NSX-T VIBs
are removed from the transport node. See the vSphere Lifecycle Manager Guide.
n You cannot apply transport node profile to the cluster to prepare cluster hosts as transport nodes.
Procedure
2 Select System > Fabric > Nodes > Host Transport Nodes.
The page lists the available vSphere clusters and/or ESXi hosts from the selected vCenter Server.
You may need to expand a cluster to view the ESXi hosts.
4 Select a single host from the list and click Configure NSX.
c Select the available transport zones and click the > button to include the transport zones in the
transport node profile.
5 Verify the host name in the Host Details panel, and click Next.
7 In the Type field, select between N-VDS and VDS as the host switch type to prepare the transport
node.
8 In the Mode field, depending upon the workload requirements, select the appropriate mode:
n Standard mode that applies to all the supported hosts. It is used for regular workloads.
n Enhanced Datapath is a networking stack mode that applies only to ESXi host transport nodes,
version 6.7 and later, that can belong to a transport zone. It is used for telecom workloads
that require relatively higher throughput and performance.
9 Select N-VDS as the host switch type and enter the switch details. Skip to the next step to select VDS
as the host switch.
Option Description
Transport Zones Shows the transport zones that are realized by the associated host switches. You
cannot add a transport zone if it is not realized by any N-VDS in the transport node
profile.
NIOC Profile Select the NIOC profile from the drop-down menu.
The bandwidth allocations specified in the profile for the traffic resources are
enforced.
Uplink Profile Select an existing uplink profile from the drop-down menu or create a custom
uplink profile.
You can also use the default uplink profile.
LLDP Profile By default, NSX-T only receives LLDP packets from a LLDP neighbor.
However, NSX-T can be set to send LLDP packets to and receive LLDP packets
from a LLDP neighbor.
IP Assignment Select Use DHCP, Use IP Pool, or Use Static IP List to assign an IP address to
virtual tunnel endpoints (VTEPs) of the transport node.
If you select Use Static IP List, you must specify a list of comma-separated IP
addresses, a gateway, and a subnet mask. All the VTEPs of the transport node
must be in the same subnet, otherwise the Bidirectional Forwarding Detection (BFD) session is not
established.
IP Pool If you selected Use IP Pool for an IP assignment, specify the IP pool name.
Physical NICs Add physical NICs to the transport node. You can use the default uplink or assign
an existing uplink from the drop-down menu.
Click Add PNIC to configure additional physical NICs to the transport node.
Note Migration of the physical NICs that you add in this field depends on how you
configure PNIC only Migration, Network Mappings for Install, and Network
Mappings for Uninstall.
Option Description
PNIC only Migration Before setting this field, consider the following points:
n Know whether the physical NIC defined is a used NIC or a free NIC.
n Determine whether VMkernel interfaces of a host need to be migrated along
with physical NICs.
Set the field:
n Enable PNIC only Migration if you only want to migrate physical NICs from a
VSS or DVS switch to an N-VDS switch.
n Disable PNIC only Migration if you want to migrate a used physical NIC and
its associated VMkernel interface mapping. A free or available physical NIC is
attached to the N-VDS switch when a VMkernel interface migration mapping is
specified.
On a host with multiple host switches:
n If all host switches are to migrate only PNICs, then you can migrate the PNICs
in a single operation.
n If some host switches are to migrate VMkernel interfaces and the remaining
host switches are to migrate only PNICs:
1 In the first operation, migrate only PNICs.
2 In the second operation, migrate VMkernel interfaces. Ensure that PNIC
only Migration is disabled.
Both PNIC only migration and VMkernel interface migration are not supported at
the same time across multiple hosts.
Option Description
Network Mappings for Install To migrate VMkernels to N-VDS switch during installation, map VMkernels to an
existing logical switch. The NSX Manager migrates the VMkernel to the mapped
logical switch on N-VDS.
Caution Ensure that the management NIC and management VMkernel interface
are migrated to a logical switch that is connected to the same VLAN that the
management NIC was connected to before migration. If vmnic<n> and
VMkernel<n> are migrated to a different VLAN, then connectivity to the host is lost.
Caution For pinned physical NICs, ensure that the host switch mapping of
physical NIC to a VMkernel interface matches the configuration specified in the
transport node profile. As part of the validation procedure, NSX-T Data Center
verifies the mapping and if the validation passes migration of VMkernel interfaces
to an N-VDS switch is successful. It is also mandatory to configure the network
mapping for uninstallation because NSX-T Data Center does not store the mapping
configuration of the host switch after migrating the VMkernel interfaces to the N-
VDS switch. If the mapping is not configured, connectivity to services, such as
vSAN, can be lost after migrating back to the VSS or VDS switch.
Network Mappings for Uninstall To revert the migration of VMkernels attached to an N-VDS switch during
uninstallation, map VMkernels to port groups on VSS or DVS, so that NSX
Manager knows which port group the VMkernel must be migrated back to on the
VSS or DVS. For a DVS switch, ensure that the port group is of the type
Ephemeral.
To revert the migration of VMkernels attached to a NSX-T port group created on a
vSphere Distributed Switch (VDS) during uninstallation, map VMkernels to port
groups on VSS or DVS, so that NSX Manager knows which port group the
VMkernel must be migrated back to on the VSS or DVS. For a DVS switch, ensure
that the port group is of the type Ephemeral.
Caution For pinned physical NICs, ensure that the transport node profile mapping
of physical NIC to VMkernel interface matches the configuration specified in the
host switch. It is mandatory to configure the network mapping for uninstallation
because NSX-T Data Center does not store the mapping configuration of the host
switch after migrating the VMkernel interfaces to the N-VDS switch. If the mapping
is not configured, connectivity to services, such as vSAN, can be lost after
migrating back to the VSS or VDS switch.
10 Select VDS as the host switch type and enter the switch details.
Option Description
Name (Hosts managed by a vSphere cluster) Select the vCenter Server that manages the
host switch.
Select the VDS that is created in vCenter Server.
Transport Zones Shows the transport zones that are realized by the associated host switches. You
cannot add a transport zone if it is not realized by any host switch.
Option Description
Uplink Profile Select an existing uplink profile from the drop-down menu or create a custom
uplink profile.
Note Ensure that the MTU value entered in the NSX-T Data Center uplink profile and on the VDS
switch is set to at least 1600. If the MTU value in vCenter Server for the VDS
switch is lower than the MTU value entered in the uplink profile, then NSX-T Data
Center displays an error asking you to enter an appropriate MTU value in the
vCenter Server.
IP Assignment Select Use DHCP, Use IP Pool, or Use Static IP List to assign an IP address to
virtual tunnel endpoints (VTEPs) of the transport node.
If you select Use Static IP List, you must specify a list of comma-separated IP
addresses, a gateway, and a subnet mask. All the VTEPs of the transport node
must be in the same subnet, otherwise the Bidirectional Forwarding Detection (BFD) session is not
established.
IP Pool If you selected Use IP Pool for an IP assignment, specify the IP pool name.
Teaming Policy Switch Mapping Map the uplinks defined in the NSX-T uplink profile with the VDS switch uplinks.
Alternatively, NSX-T uplinks can also be mapped to LAGs configured on the VDS
switch.
To configure or view the VDS switch uplinks, go to vCenter Server → vSphere
Distributed Switch. Click Actions → Settings → Edit Settings.
Note For a VDS switch, Uplinks/LAGs, NIOC profile, LLDP profile can be defined only in vSphere
ESXi host. These configurations are not available in NSX Manager. In addition, in NSX Manager, you
cannot configure networking mapping for install and uninstall if the host switch is a VDS switch. To
manage VMkernel adapters on a VDS switch, go to vCenter Server to attach VMkernel adapters to
Distributed Virtual port groups or NSX port groups.
11 If you have selected multiple transport zones, click ADD SWITCH again to configure the switch for
the other transport zones.
14 From the Host Transport Node page, verify that the NSX Manager connectivity status of hosts in the
cluster is Up and NSX-T Data Center configuration state is Success. During the configuration
process, each transport node displays the percentage of progress of the installation process. If
installation fails at any stage, you can restart the process by clicking the Resolve link that is available
against the failed stage of the process.
You can also see that the transport zone is applied to the hosts in the cluster.
Note If you reconfigure a host that is part of a cluster already prepared by a transport node profile,
the configuration state of the node shows Configuration Mismatch.
a Select one or more hosts and click Actions > Remove NSX.
The uninstallation takes up to three minutes. Uninstallation of NSX-T Data Center removes the
transport node configuration on hosts and the host is detached from the transport zone(s) and N-VDS
switch. Similar to the installation process, you can follow the percentage of the uninstallation process
completed on each transport node. If uninstallation fails at any stage, you can restart the process by
clicking the Resolve link that is available against the failed stage of the process.
a Select a single transport node and click Actions > Remove from Transport Zone.
What to do next
You can create transport zones, logical switches, logical routers, and other network components through
the NSX Manager UI or API at any time. When NSX Edge nodes and hosts join the management plane, the
NSX-T Data Center logical entities and configuration state are pushed to them automatically. When the
hosts are transport nodes, these entities get realized on the hosts.
Create a logical switch and assign logical ports. See the Advanced Switching section in the NSX-T Data
Center Administration Guide.
Prerequisites
n Familiarize yourself with the steps to create an uplink profile. See Create an Uplink Profile.
n Familiarize yourself with the steps to create a host transport node. See Create a Standalone Host or
Bare Metal Server Transport Node.
Procedure
2 Select System > Fabric > Profiles > Uplink Profiles > Add.
6 In the Active Uplinks field, enter the name of the LAG that you added in step 4. In this example,
the name is lag1.
10 Select Fabric > Nodes > Host Transport Nodes > Add.
11 In the Host Details tab, enter IP address, OS name, admin credentials, and SHA-256 thumbprint of
the host.
12 In the N-VDS tab, select the uplink profile uplink-profile1 that was created in step 3.
13 In the Physical NICs field, the physical NICs and uplinks dropdown list reflects the new NICs and
uplink profile. Specifically, the uplinks lag1-0 and lag1-1, corresponding to the LAG lag1 that was
created in step 4 are displayed. Select a physical NIC for lag1-0 and a physical NIC for lag1-1.
After creating a host transport node, the N-VDS gets installed on the host.
Procedure
2 Navigate to the Transport Node page and view the N-VDS status.
3 Alternatively, view the N-VDS on ESXi with the esxcli network ip interface list command.
On ESXi, the command output should include a vmk interface (for example, vmk10) with a VDS name
that matches the name you used when you configured the transport zone and the transport node.
vmk10
Name: vmk10
MAC Address: 00:50:56:64:63:4c
Enabled: true
Portset: DvsPortset-1
Portgroup: N/A
Netstack Instance: vxlan
VDS Name: overlay-hostswitch
VDS UUID: 18 ae 54 04 2c 6f 46 21-b8 ae ef ff 01 0c aa c2
VDS Port: 10
VDS Connection: 10
Opaque Network ID: N/A
Opaque Network Type: N/A
External ID: N/A
MTU: 1600
TSO MSS: 65535
Port ID: 67108895
...
If you are using the vSphere Client, you can view the installed N-VDS in the UI by selecting host
Configuration > Network Adapters.
The KVM command to verify the N-VDS installation is ovs-vsctl show. Note that on KVM, the N-
VDS name is nsx-switch.0. It does not match the name in the transport node configuration. This is by
design.
# ovs-vsctl show
...
Bridge "nsx-switch.0"
Port "nsx-uplink.0"
Interface "em2"
Port "nsx-vtep0.0"
tag: 0
Interface "nsx-vtep0.0"
type: internal
Port "nsx-switch.0"
Interface "nsx-switch.0"
type: internal
ovs_version: "2.4.1.3340774"
The vmk10 interface receives an IP address from the NSX-T Data Center IP pool or DHCP, as shown
here:
In KVM, you can verify the tunnel endpoint and IP allocation with the ifconfig command.
# ifconfig
...
nsx-vtep0.0 Link encap:Ethernet HWaddr ba:30:ae:aa:26:53
inet addr:192.168.250.4 Bcast:192.168.250.255 Mask:255.255.255.0
...
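You can also check the tunnel endpoint realization through the NSX-T API. The response shown below is the kind of output you might see from the transport node state call (for example, GET /api/v1/transport-nodes/<transport-node-id>/state); the exact request is not reproduced in this guide, only the response.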
{
"state": "success",
"host_switch_states": [
{
"endpoints": [
{
"default_gateway": "192.168.250.1",
"device_name": "vmk10",
"ip": "192.168.250.104",
"subnet_mask": "255.255.255.0",
"label": 69633
}
],
"transport_zone_ids": [
"efd7f38f-c5da-437d-af03-ac598f82a9ec"
],
"host_switch_name": "overlay-hostswitch",
"host_switch_id": "18 ae 54 04 2c 6f 46 21-b8 ae ef ff 01 0c aa c2"
}
],
"transport_node_id": "2d030569-5769-4a13-8918-0c309c63fdb9"
}
Prerequisites
n Ensure that the host has at least one free physical adapter.
n Ensure that VMkernel adapters and port groups exist on the host.
Procedure
3 Select a transport node and click Actions > Migrate ESX VMkernel and Physical Adapters.
4 In the Migrate ESX VMkernel and Physical Adapters, enter the following details.
Field Description
Select Switch Select the switch from which you want to migrate the
VMkernel adapters and physical adapters. You can select
from the available switches.
Select VMkernel Adapters to Migrate Click Add to enter the VMkernel adapter name and select
destination as a logical switch or port group depending on
where you want to migrate to.
Edit Physical Adapters in N-VDS Click Add to enter the physical adapter name and map it to
an uplink on the host switch.
Results
The updated VMkernel adapters and physical adapters are migrated to the N-VDS switch, or migrated
back to the VSS or VDS switch on the ESXi host.
To put a transport node in NSX Maintenance Mode, select the node, click Actions → NSX Maintenance
Mode.
When you put a host in NSX Maintenance Mode, the transport node cannot participate in networking.
Also, VMs running on other transport nodes that have N-VDS or vSphere Distributed Switch as the host
switch cannot be vMotioned to this transport node. In addition, logical networks cannot be configured on
ESXi or KVM hosts.
n If a host has hardware or software issues that are unrelated to NSX-T, but you want to retain the node
and its configurations in NSX-T, place the host in NSX Maintenance Mode.
n A transport node is automatically put in NSX Maintenance Mode when an upgrade on that transport
node fails.
Any transport node put in the NSX Maintenance Mode is not upgraded.
Procedure
2 Select System > Fabric > Nodes > Host Transport Nodes.
6 Select an N-VDS.
NSX-T visually represents uplink profiles connected to VMs, logical ports associated with VMs, and logical
switches connected to a transport zone.
7 To view uplink profiles connected to a VM and the logical port to which a VM is connected, select a
VM.
8 To view which VMs are connected to an uplink profile, select the uplink profile.
9 To view logical ports associated with a VM, expand the logical switch and click the VM.
Note The admin status of a logical port is displayed in the dialog box. If the operational status is
Down, it is not displayed in the dialog box.
VLAN or MTU configuration mismatch is a common configuration error that can lead to connectivity
outage.
Note
n Health check results are only indicators of possible network configuration errors. For example, a health
check run on hosts from different L2 domains reports untrunked VLAN IDs. This result cannot be
considered a configuration error, because the hosts must be in the same L2 domain for the health check
tool to give correct results.
n After a health check finishes, NSX-T Data Center preserves that result on the system only for 24
hours.
In a health check operation, the NSX-T Data Center opsAgent sends probe packets from one transport node
to another to verify that the VLAN ID range you specified and the MTU value on the
transport node are compatible with the corresponding settings on the physical switch.
As the number of VLAN IDs to be verified increases, the waiting time increases.
Number of VLAN IDs to Verify    Approximate Waiting Time (seconds)
[3073, 4095]                    150
[513, 1024]                     80
[128, 512]                      60
[64, 127]                       30
[1, 63]                         20
Prerequisites
n At least two uplinks must be configured on the N-VDS for the VLAN and MTU check to work.
Procedure
POST https://<NSXManager_IP>/api/v1/manual-health-checks
Example Request:
POST https://<nsx-mgr>/api/v1/manual-health-checks
{
"resource_type": "ManualHealthCheck",
"display_name": "Manual HealthCheck 002",
"transport_zone_id": "7754341c-8f3c-443f-9c1a-2d635d5b0d1c",
"vlans":{
"vlan_ranges":[{
"start": 0,
"end": 6
}]
}
}
Example Response:
{
"operation_status": "FINISHED",
"transport_zone_id": "7754341c-8f3c-443f-9c1a-2d635d5b0d1c",
"vlans": {
"vlan_ranges": [
{
"start": 0,
"end": 6
}
]
},
"result": {
"vlan_mtu_status": "UNTRUNKED",
"results_per_transport_node": [
{
"transport_node_id": "dfcabffa-8839-11e9-b30e-6f45344d8a04",
"result_on_host_switch": {
"host_switch_name": "nsxvswitch",
"results_per_uplink": [
{
"uplink_name": "uplink1",
"vlan_and_mtu_allowed": [
{
"start": 0,
"end": 0
}
],
"mtu_disallowed": [],
"vlan_disallowed": [
{
"start": 1,
"end": 6
}
]
}
]
}
},
{
"transport_node_id": "a300ea62-8839-11e9-a94e-31732bb71949",
"result_on_host_switch": {
"host_switch_name": "nsxvswitch",
"results_per_uplink": [
{
"uplink_name": "uplink1",
"vlan_and_mtu_allowed": [
{
"start": 0,
"end": 0
}
],
"mtu_disallowed": [],
"vlan_disallowed": [
{
"start": 1,
"end": 6
}
]
}
]
}
}
]
},
"resource_type": "ManualHealthCheck",
"id": "8a56ed9e-a31b-479e-987b-2dbfbde07c38",
"display_name": "mc1",
"_create_user": "admin",
"_create_time": 1560149933059,
"_last_modified_user": "system",
"_last_modified_time": 1560149971220,
"_system_owned": false,
"_protection": "NOT_PROTECTED",
"_revision": 0
}
2 To get a list of all manual health check operations initiated, make the API call.
GET https://<NSXManager_IP>/api/v1/manual-health-checks
3 To delete a manual health check, make the API call.
DELETE https://<NSXManager_IP>/api/v1/manual-health-checks/<Health-check-ID>
4 To get a single health check initiated manually, make the API call.
GET https://<NSXManager_IP>/api/v1/manual-health-checks/<Health-check-ID>
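If you prefer to run these calls from a shell, the following is a minimal sketch using curl with basic authentication against the admin account (adjust credentials and certificate handling for your environment):
curl -k -u 'admin:<password>' https://<NSXManager_IP>/api/v1/manual-health-checks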
Results
The API response section contains the health check results. The NSX Ops agent waits for an
acknowledgement packet from the destination transport node to retrieve VLAN ID ranges supported on
the physical switch.
n Untrunked: Lists the VLAN ID ranges that are not compatible with a physical switch. The VLAN ID
ranges that are compatible with the physical switch are also listed.
n Trunked: Lists the VLAN ID ranges that are compatible with a physical switch.
n Unknown: There is no valid result for some or all uplinks because of infrastructure issues or
unsupported platform types such as KVM and Edge.
n mtu_disallowed: Lists the VLAN ID ranges for which the MTU value is not compatible with a physical
switch.
n vlan_disallowed: Lists the VLAN ID ranges that are not compatible with a physical switch.
What to do next
n In an overlay-based transport zone, update both the VLAN ID and MTU configuration in the uplink profile on the N-
VDS. Likewise, update the VLAN or MTU on the physical switch.
n In a VLAN-based transport zone, update the MTU configuration in the uplink profile, and update the VLAN configuration on
the logical switches of that transport zone. Likewise, update the VLAN or MTU on the physical switch.
Both Host Transport nodes (standalone and hosts registered to a vCenter) and Edge nodes display the
tunnel status. BFD packets support both GENEVE and STT encapsulation. GENEVE is the default
encapsulation.
Procedure
2 Navigate to System > Fabric > Nodes > Host Transport Nodes.
The Monitor page displays the status of tunnel, BFD diagnostic code, remote node UUID,
encapsulation on BFD packets, and tunnel name.
The tunnel BFD diagnostic code indicates the reason for the change in the session state.
Code Description
0 No Diagnostic
5 Path Down
7 Administratively Down
Results
If the BFD status is down, use the diagnostic code to establish connectivity between transport nodes.
Note You cannot manually install NSX-T Data Center kernel modules on a bare metal server.
You can download the NSX-T Data Center VIBs manually and make them part of the host image. The
download paths can change for each release of NSX-T Data Center. Always check the NSX-T Data
Center downloads page to get the appropriate VIBs.
Procedure
[root@host:~]: cd /tmp
3 Download and copy the nsx-lcp file into the /tmp directory.
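The install command itself is not reproduced in this excerpt. A minimal sketch, assuming the downloaded bundle was copied to /tmp as nsx-lcp-<version>.zip (the actual file name varies by release):
[root@host:~]: esxcli software vib install -d /tmp/nsx-lcp-<version>.zip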
Depending on what was already installed on the host, some VIBs might be installed, some might be
removed, and some might be skipped. A reboot is not required unless the command output says
Reboot Required: true.
Results
As a result of adding an ESXi host to the NSX-T Data Center fabric, the following VIBs get installed on
the host.
nsx-adf
(Automated Diagnostics Framework) Collects and analyzes performance data to produce both local
(at host) and central (across datacenter) diagnoses of performance issues.
nsx-aggservice
Provides host-side libraries for NSX-T Data Center aggregation service. NSX-T Data Center
aggregation service is a service that runs in the management-plane nodes and fetches runtime state
from NSX-T Data Center components.
nsx-cfgagent
Provides communication between the central control plane and hypervisors. Receives logical
networking state from the central control plane and programs this state in the data plane.
nsx-cli-libs
Provides the NSX-T Data Center CLI on hypervisor hosts.
nsx-common-libs
Provides utility classes such as AES, SHA-1, UUID, bitmap, and others.
nsx-context-mux
Provides NSX Guest Introspection relay functionality. Allows VMware Tools guest agents to relay
guest context to inhouse and registered third-party partner appliances.
nsx-esx-datapath
nsx-exporter
Provides host agents that report runtime state to the aggregation service running in the management
plane.
nsx-host
Provides metadata for the VIB bundle that is installed on the host.
nsx-metrics-libs
nsx-mpa
nsx-nestdb-libs
NestDB is a database that stores NSX configurations related to the host (desired/runtime state, etc).
nsx-opsagent
Communicates operations agent executions (transport node realization, Link Layer Discovery
Protocol - LLDP, traceflow, packet capture, and so on) with the management plane.
nsx-platform-client
Provides a common CLI execution agent, for centralized CLI and audit log collecting.
nsx-profiling-libs
Provides profiling functionality based on gpeftool, which is used for daemon process profiling.
nsx-proxy
Provides the only northbound contact point agent, which talks to the central control plane and
management plane.
nsx-python-gevent
nsx-python-greenlet
nsx-python-logging
nsx-python-protobuf
nsx-rpc-libs
nsx-sfhc
Service fabric host component (SFHC). Provides a host agent for managing the lifecycle of the
hypervisor as a fabric host in the management plane's inventory. This provides a channel for
operations such as NSX-T Data Center upgrade and uninstall and monitoring of NSX-T Data Center
modules on hypervisors.
nsx-shared-libs
nsx-upm-libs
Provides unified profile management functionality for flattening client-side configuration and avoiding
duplicate data transmission.
nsx-vdpi
Provides Deep Packet Inspection capabilities for NSX-T Data Center Distributed Firewall.
nsx-vsipfwlib
nsxcli
To verify, you can run the esxcli software vib list | grep nsx or esxcli software vib list | grep <yyyy-
mm-dd> command on the ESXi host, where the date is the day that you performed the installation.
What to do next
Add the host to the NSX-T Data Center management plane. See Deploy NSX Manager Nodes to Form a
Cluster Using CLI .
You can download the NSX-T Data Center DEBs manually and make them part of the host image. Be
aware that download paths can change for each release of NSX-T Data Center. Always check the NSX-T
Data Center downloads page to get the appropriate DEBs.
Prerequisites
n Verify that the required third-party packages are installed. See Install Third-Party Packages on a KVM
Host.
Procedure
cd /tmp
3 Download and copy the nsx-lcp file into the /tmp directory.
cd nsx-lcp-trusty_amd64/
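The package installation step is not reproduced in this excerpt. A minimal sketch, assuming all DEB packages in the extracted directory are to be installed:
sudo dpkg -i *.deb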
/etc/init.d/openvswitch-switch force-reload-kmod
If the hypervisor uses DHCP on OVS interfaces, restart the network interface on which DHCP is
configured. You can manually stop the old dhclient process on the network interface and restart a new
dhclient process on that interface.
The installed packages in the output must match the packages in the nsx-lcp-trusty_amd64
directory.
Any errors are most likely caused by incomplete dependencies. The apt-get install -f command
can attempt to resolve dependencies and re-run the NSX-T Data Center installation.
What to do next
Add the host to the NSX-T Data Center management plane. See Deploy NSX Manager Nodes to Form a
Cluster Using CLI .
This allows you to build the NSX-T Data Center control-plane and management-plane fabric. NSX-T Data
Center kernel modules packaged in RPM files run within the hypervisor kernel and provide services such
as distributed routing, distributed firewall, and bridging capabilities.
You can download the NSX-T Data Center RPMs manually and make them part of the host image. Be
aware that download paths can change for each release of NSX-T Data Center. Always check the NSX-T
Data Center downloads page to get the appropriate RPMs.
Prerequisites
Procedure
2 Download and copy the nsx-lcp file into the /tmp directory.
cd nsx-lcp-rhel74_x86_64/
When you run the yum install command, any NSX-T Data Center dependencies are resolved,
assuming the RHEL or CentOS hosts can reach their respective repositories.
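The install command itself is not reproduced in this excerpt. A minimal sketch, run from the extracted nsx-lcp package directory:
sudo yum install *.rpm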
/usr/share/openvswitch/scripts/ovs-systemd-reload force-reload-kmod
If the hypervisor uses DHCP on OVS interfaces, restart the network interface on which DHCP is
configured. You can manually stop the old dhclient process on the network interface and restart a new
dhclient process on that interface.
7 To verify, you can run the rpm -qa | egrep 'nsx|openvswitch' command.
The installed packages in the output must match the packages in the nsx-rhel74 or nsx-centos74
directory.
What to do next
Add the host to the NSX-T Data Center management plane. See Deploy NSX Manager Nodes to Form a
Cluster Using CLI .
This allows you to build the NSX-T Data Center control-plane and management-plane fabric. NSX-T Data
Center kernel modules packaged in RPM files run within the hypervisor kernel and provide services such
as distributed routing, distributed firewall, and bridging capabilities.
You can download the NSX-T Data Center RPMs manually and make them part of the host image. Be
aware that download paths can change for each release of NSX-T Data Center. Always check the NSX-T
Data Center downloads page to get the appropriate RPMs.
Prerequisites
Procedure
2 Download and copy the nsx-lcp file into the /tmp directory.
cd nsx-lcp-linux64-sles12sp3
When you run the zypper install command, any NSX-T Data Center dependencies are resolved,
assuming the SUSE hosts can reach their respective repositories.
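The install command itself is not reproduced in this excerpt. A minimal sketch, run from the extracted nsx-lcp package directory:
sudo zypper install *.rpm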
/usr/share/openvswitch/scripts/ovs-systemd-reload force-reload-kmod
If the hypervisor uses DHCP on OVS interfaces, restart the network interface on which DHCP is
configured. You can manually stop the old dhclient process on the network interface and restart a new
dhclient process on that interface.
7 To verify, you can run the zypper packages --installed-only | grep System | egrep
'openvswitch|nsx' command.
The installed packages in the output must match the packages in the nsx-lcp-linux64-sles12sp3
directory.
What to do next
Add the host to the NSX-T Data Center management plane. See Deploy NSX Manager Nodes to Form a
Cluster Using CLI .
Important Deploy the fully collapsed single vSphere cluster topology starting with NSX-T 2.4.2 or 2.5
release.
Figure 10-4. Topology: Single N-VDS Switch Managing Host Communication with NSX Edge
and Guest VMs
Prerequisites
n Verify on the vCenter Server that shared storage is available to be used by the hosts.
n The host TEP IP and NSX Edge TEP IP must be in different VLANs. North-south traffic from host
workloads is encapsulated in GENEVE and sent to an NSX Edge node with the host TEP as the source IP
and the NSX Edge TEP as the destination IP. Because these TEPs must sit in different VLANs or subnets, this
traffic must be routed through top-of-rack (ToR) switches. In this example, the transport VLAN used for hosts is VLAN
200 and the transport VLAN used for NSX Edge is VLAN 600.
Procedure
1 Prepare four ESXi hosts with vmnic0 on a vSS or vDS and vmnic1 free.
2 On Host 1, install vCenter Server, configure a vSS/vDS port group, and install NSX Manager on the
port group created on the host.
a Create VLAN transport zone and overlay transport zone with a named teaming policy. See Create
Transport Zones.
b Create an IP pool or DHCP for tunnel endpoint IP addresses for the hosts. See Create an IP Pool
for Tunnel Endpoint IP Addresses.
c Create an IP pool or DHCP for tunnel endpoint IP addresses for the Edge node. See Create an IP
Pool for Tunnel Endpoint IP Addresses.
d Create an uplink profile with a named teaming policy. See Create an Uplink Profile.
e Configure hosts as transport nodes by applying a transport node profile. In this step, the transport
node profile only migrates vmnic1 (unused physical NIC) to the N-VDS switch. After the transport
node profile is applied to the cluster hosts, the N-VDS switch is created and vmnic1 is connected
to the N-VDS switch. See Add a Transport Node Profile.
The vmnic1 NIC on all hosts is added to the N-VDS switch. So, out of the two physical NICs, one is migrated
to the N-VDS switch. The vmnic0 interface is still connected to the vSS or vDS switch, which ensures that
connectivity to the host is available.
4 In the NSX Manager UI, create VLAN-backed segments for NSX Manager, vCenter Server, and NSX
Edge. Ensure that you select the correct teaming policy for each of the VLAN-backed segments. Do not use
a VLAN trunk logical switch as the target. When creating the target segments in the NSX Manager UI, in
the Enter List of VLANs field, enter only one VLAN value.
5 On Host 2, Host 3, and Host 4, you must migrate the vmk0 adapter and vmnic0 together from the
VSS/VDS to the N-VDS switch. Update the NSX-T configuration on each host. While migrating, ensure
that the vmk0 adapter is mapped to the management segment created earlier.
6 In the vCenter Server, go to Host 2, Host 3, and Host 4, and verify that the vmk0 adapter is connected to
the vmnic0 physical NIC on the N-VDS and is reachable.
7 In the NSX Manager UI, go to Host 2, Host 3, and Host 4, and verify that both pNICs are on the N-VDS
switch.
8 On Host 2 and Host 3, from the NSX Manager UI, install NSX Manager and attach NSX Manager to
the segment. Wait for approximately 10 minutes for the cluster to form and verify that the cluster has
formed.
9 Power off the first NSX Manager node. Wait for approximately 10 minutes.
10 Reattach the NSX Manager and vCenter Server to the previously created logical switch. On host 4,
power on the NSX Manager. Wait for approximately 10 minutes to verify that the cluster is in a stable
state. With the first NSX Manager powered off, perform cold vMotion to migrate the NSX Manager
and vCenter Server from host 1 to host 4.
11 From the NSX Manager UI, go to Host 1, migrate vmk0 and vmnic0 together from VSS to N-VDS
switch.
12 In the Network Mapping for Install field, ensure that the vmk0 adapter is mapped to the Edge
Management Segment on the N-VDS switch.
13 On Host 1, install the NSX Edge VM from the NSX Manager UI.
15 To establish the north-south traffic connectivity, configure NSX Edge VM with an external router.
16 Verify north-south traffic connectivity between the NSX Edge VM and the external router.
17 If there is a power failure scenario where the whole cluster is rebooted, the NSX-T management
component might not come up and communicate with N-VDS. To avoid this scenario, perform the
following steps:
Caution Any API command that is incorrectly run results in a loss of connectivity with the NSX
Manager.
Note In a single cluster configuration, management components are hosted as VMs on an N-VDS
switch. The N-VDS port to which a management component connects is, by default, initialized as a
blocked port for security reasons. If there is a power failure requiring all four hosts to
reboot, the management VM port is initialized in a blocked state. To avoid this circular dependency,
it is recommended to create a port on the N-VDS in the unblocked state. An unblocked port ensures that
when the cluster is rebooted, the NSX-T management component can communicate with the N-VDS to
resume normal function.
n UUID of the NSX Manager VM and migrates it to the static logical port which is in an unblocked
state.
If all the hosts are powered off and on, or if an NSX Manager VM moves to another host, then
after the NSX Manager comes back up it is attached to the unblocked port, which prevents loss of
connectivity with the management component of NSX-T.
a In the NSX Manager UI, go to the Manager Mode > Networking > Logical Switches tab (3.0 and
later releases). Search for the Segment Compute VM segment. Select the Overview tab, then find
and copy the UUID. The UUID used in this example is c3fd8e1b-5b89-478e-abb5-d55603f04452.
n In the JSON payload, create logical ports with initialization status in UNBLOCKED_VLAN state
by replacing the value for logical_switch_id with the UUID of the previously created Edge
Management Segment.
n In the payload for each NSX Manager, the attachment_type_id and display_name values
will be different.
Important Repeat this step to create a total of four JSON files - three for NSX Managers and
one for vCenter Server Appliance (VCSA).
port1.json
{
"admin_state": "UP",
"attachment": {
"attachment_type": "VIF",
"id": "nsxmgr-port-147"
},
"display_name": "NSX Manager Node 147 Port",
"init_state": "UNBLOCKED_VLAN",
"logical_switch_id": "c3fd8e1b-5b89-478e-abb5-d55603f04452"
}
Where,
n attachment_type: Must be set to VIF. All VMs are connected to NSX-T switch ports using a
VIF ID.
n id: This is the VIF ID. It must be unique for each NSX Manager. If you have three NSX
Managers, there will be three payloads, and each one of them must have a different VIF ID.
To generate a unique UUID, log in to the root shell of the NSX Manager and run
/usr/bin/uuidgen.
n display_name: It must be unique to help NSX admin identify it from other NSX Manager
display names.
n init_state: With the value set to UNBLOCKED_VLAN, NSX unblocks the port for NSX Manager,
even if the NSX Manager is not available.
c If there are three NSX Managers deployed, you need to create three payloads, one for each
logical port of an NSX Manager. For example, port1.json, port2.json, and port3.json.
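The API call that creates these ports from the payload files is not reproduced in this excerpt. A minimal sketch, assuming the standard logical-ports endpoint and basic authentication (adjust credentials for your environment):
curl -k -X POST -u 'admin:<password>' -H 'Content-Type: application/json' -d @port1.json https://<NSXManager_IP>/api/v1/logical-ports
Repeat the call with port2.json, port3.json, and the vCenter Server Appliance payload. The lines that follow appear to be the tail end of an example response to such a call.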
"_last_modified_time" : 1574716624192,
"_system_owned" : false,
"_protection" : "NOT_PROTECTED",
"_revision" : 0
e Find out the VM instance ID for each NSX Manager. You can retrieve the instance ID from
Inventory → Virtual Machines: select the NSX Manager VM, select the Overview tab, and
copy the instance ID. Alternatively, search for the instance ID in the managed object browser
(MOB) of vCenter Server. Add :4000 to the ID to get the VNIC hardware index of an NSX
Manager VM.
f Find out the transport node ID that hosts each NSX Manager. If you have three NSX Managers, each
hosted on a different transport node, note down the transport node IDs. For example, the three
transport node IDs are:
tn1: 12d19875-90ed-4c78-a6bb-a3b1dfe0d5ea
tn2: 4b6e182e-0ee3-403f-926a-fb7c8408a9b7
tn3: d7cec2c9-b776-4829-beea-1258d8b8d59b
g Retrieve the transport node configuration that is to be used as payloads when migrating the NSX
Manager to the newly created port.
For example,
curl -k -u '<user>:<password>' https://nsxmgr/api/v1/transport-nodes/12d19875-90ed-4c78-a6bb-a3b1dfe0d5ea > tn1.json
h Migrate the NSX Manager from the previous port to the newly created unblocked logical port on
the Edge Management Segment. The VIF-ID value is the attachment ID of the port created
previously for the NSX Manager.
For example,
root@nsx-mgr-147:/var/CollapsedCluster# curl -k -X PUT -u 'admin:VMware1!VMware1!' -H 'Content-Type:application/json' -d @mgr.json 'https://localhost/api/v1/transport-nodes/11161331-11f8-45c7-8747-34e7218b687f?vnic=5028d756-d36f-719e-3db5-7ae24aa1d6f3:4000&vif=nsxmgr-port-147'
VLAN Micro-Segmentation
VLAN micro-segmentation walks you through selecting clusters, assigning uplinks, and migrating
VMkernel (VMK) ports.
n A transport node profile is created and assigned to the selected cluster. The profile name can be
edited after deployment.
n Users select the uplink to physical NIC (pNIC) mapping for each cluster. Both vSphere 7 and the
NSX-T Virtual Distributed Switch (N-VDS) are supported.
n Optional - NSX-T migrates selected VMkernel (VMK) ports to the N-VDS that is automatically created.
The user specifies a VLAN segment for each of the VMK interfaces. Power off all virtual machines
before migration.
n For ESXi hosts that are version 6.7 and earlier, an N-VDS is created and the uplink to physical NIC
mapping is displayed.
n If you have both version 7.0 ESXi hosts and version 7.0 VDS, the uplink mapping is displayed. These
uplinks are then mapped onto the VDS.
Before going through the wizard for VLAN backed micro-segmentation, you must have a compute
manager.
1 Navigate to System > Get Started. To prepare clusters for micro-segmentation using VLAN, click Get
Started - Prepare Clusters for VLAN Micro-segmentation.
2 Select the cluster you want to prepare from the list of available clusters. Only clusters which do not
have a transport node profile applied are listed. Expand each cluster to view the hosts it contains. No
stand-alone hosts are listed.
3 To filter the cluster by cluster name, host IP address, host name, or host version, click the filter icon
on the right of the screen.
4 Click Next.
5 Select the physical NICs (pNICs) that are assigned as uplinks for the N-VDS, or VDS on each cluster.
pNICs that are common across the cluster are listed. Both vSphere 7 and VDS are supported.
6 Click Next.
7 (Optional) To configure the interfaces for VMkernel migration, where virtual NICs are migrated to
VLAN segments, click Select. Power off all virtual machines before migration.
8 (Optional) Select an existing segment, or create a segment by entering a segment name and the
VLAN ID.
10 Click Finish.
Integrate host profiles extracted from an ESXi host with NSX-T to deploy ESXi and NSX-T VIBs on
stateful and stateless servers.
n Stateful Servers
This section helps you to set up a stateless cluster using vSphere Auto Deploy and NSX-T Transport
Node Profile to reprovision a host with a new image profile that contains a different version of ESXi and
NSX-T. Hosts that are set up for vSphere Auto Deploy use an auto-deploy server and vSphere host
profiles to customize hosts. These hosts can also be set up for NSX-T Transport Node Profile to configure
NSX-T on the hosts.
So, a stateless host can be set up for vSphere Auto Deploy and NSX-T Transport Node Profile to
reprovision a host with a custom ESXi and NSX-T version.
2 (Reference host) Create a Custom Image Profile. See Create a Custom Image Profile for Stateless
Hosts.
3 (Reference and Target hosts) Associate the Custom Image Profile. See Associate the Custom Image
with the Reference and Target Hosts.
4 (Reference host) Set up Network Configuration in ESXi. See Set Up Network Configuration on the
Reference Host.
5 (Reference host) Configure as a Transport Node in NSX. See Configure the Reference Host as a
Transport Node in NSX-T.
6 (Reference host) Extract and Verify Host Profile. See Extract and Verify the Host Profile.
7 (Reference and Target hosts) Verify the Host Profile Association with Stateless Cluster. See Verify the
Host Profile Association with Stateless Cluster.
9 (Target hosts) Trigger Auto Deployment. See Trigger Auto Deployment on Target Hosts.
a Before applying Transport Node Profile. See Reboot Hosts Before Applying TNP.
c After applying Transport Node Profile. See Reboot Hosts After Applying TNP.
10 Troubleshoot Host Profile and Transport Node Profile. See Troubleshoot Host Profile and Transport
Node Profile.
Supported Workflows
n With Image Profile and HostProfile
Prerequisites
n Only homogeneous clusters (all hosts within a cluster must be either stateless or stateful) are
supported.
The first time the reference host starts up, ESXi associates the default rule with the reference host. In this
procedure, we add a custom image profile (ESXi and NSX VIBs) and associate the reference host
with the new custom image. An image profile with the NSX-T image significantly reduces the installation
time. The same custom image is associated with the target hosts in the stateless cluster.
Note Alternatively, you can add only an ESXi image profile to the reference and target stateless cluster.
The NSX-T VIBs are downloaded when you apply the transport node profile on the stateless cluster. See
Add a Software Depot.
Prerequisites
Ensure that the auto-deploy service and image builder service are enabled. See Using vSphere Auto
Deploy to Reprovision Hosts.
Procedure
a Log in to https://my.vmware.com.
b On the Download VMware NSX-T Data Center page, select the NSX-T version.
c In the Product Downloads page, search NSX-T Kernel Modules for a specific VMware ESXi
version.
The vSphere Web Client displays two depots created on the reference host.
4 Create a custom software depot to clone previously imported ESXi image and nsx-lcp packages.
a Select the ESXi Image profile from the ESXi software depot created in the earlier step.
b Click Clone.
c In the Clone Image Profile wizard, enter a name for the custom image to be created.
d Select the custom software depot where the cloned image ( ESXi) must be available.
e In the Select software packages window, set the Acceptance level to VMware Certified. The
ESXi VIBs are preselected.
f Identify and select the NSX-T packages manually from the list of packages and click Next.
g In the Ready to complete screen, verify the details and click Finish to create the cloned image
containing ESXi and NSX-T packages into the custom software depot.
What to do next
Associate the custom image with the reference and target hosts. See Associate the Custom Image with
the Reference and Target Hosts.
Associate the Custom Image with the Reference and Target Hosts
To start the reference host and target hosts with the new custom image containing ESXi and NSX
packages, associate the custom image profile.
At this point in the procedure, the custom image is only being associated to the reference and target
hosts but NSX installation does not happen.
Important Perform this custom image association procedure on both reference and target hosts.
Prerequisites
Procedure
1 On the ESXi host, navigate to Menu > Auto Deploy > Deployed Hosts.
2 To associate the custom image profile with a host, select the custom image.
4 In the Edit Image Profile Association wizard, click Browse and select the custom depot and select the
custom image profile.
6 Click Ok.
Results
What to do next
Set up Network Configuration on the Reference Host. See Set Up Network Configuration on the
Reference Host.
This network configuration is captured in the host profile which is extracted from the reference host.
During a stateless deployment, the host profile replicates this network configuration setting on each target
host.
Procedure
1 On the ESXi host, configure a vSphere Standard Switch (VSS) or Distributed Virtual switch (DVS) by
adding a VMkernel adapter.
2 Verify that the newly added VSS/DVS switch is displayed in the VMkernel adapters page.
What to do next
Configure the Reference Host as a Transport Node in NSX-T. See Configure the Reference Host as a
Transport Node in NSX-T.
Procedure
2 To locate the reference host, navigate to System -> Nodes -> Host Transport Node.
3 Create a VLAN transport zone to define the span of the virtual network. The span is defined by
attaching N-VDS switches to the transport zone. Based on this attachment, N-VDS can access
segments defined within the transport zone. See Create a Transport Zone.
4 Create a VLAN segment on the transport zone. The created segment is displayed as a logical switch.
d Click Save.
5 Create an uplink profile for the reference host that defines how an N-VDS or VDS switch connects to
the physical network. See, Create an Uplink Profile.
6 Configure the reference host as a transport node. See Configure a Managed Host Transport Node.
b (On a N-VDS switch) Click Configure NSX and select the previously created transport zone, N-
VDS, uplink profile.
c (On a VDS switch) Click Configure NSX and select the previously created transport zone, VDS,
uplink profile.
7 In the Network Mappings to Install section, click Add Mapping to add the VMkernel to Segment/
Logical switch mapping.
(On a VDS switch) After installation, configuration status of the reference host is displayed as
Success. In the vCenter Server, the VDS switch is displayed as NSX switch.
(On an N-VDS switch) During installation, VMkernel adapters and physical NICs are migrated from a
VSS or DVS switch to an N-VDS switch. After installation, configuration status of the reference host is
displayed as Success.
9 In vCenter Server, verify that the PNICs and VMkernels adapters on the VSS switch are migrated and
connected to the N-VDS switch.
Note On a VDS switch, VMkernel adapter and physical NIC migration is not supported. Attaching a
VMkernel adapter to an NSX Distributed Virtual port group is not supported.
What to do next
Extract and Verify the Host Profile. See Extract and Verify the Host Profile.
Procedure
1 To extract the host profile, see Extract and Configure Host Profile from the Reference Host.
Results
The host profile contains configuration related to ESXi and NSX as the host was prepared for both
environments.
What to do next
Verify the Host Profile Association with Stateless Cluster. See Verify the Host Profile Association with
Stateless Cluster.
Without the host profile associated to the stateless cluster, new nodes joining the cluster cannot be auto
deployed with ESXi and NSX VIBs.
Procedure
1 Attach or Detach Host Profile to Stateless Cluster. See Attach or Detach Entities from a Host Profile.
2 In the Deployed Hosts tab, verify that the existing stateless host is associated with the correct image
and associated with the host profile.
3 If the host profile association is missing, select the target host and click Remediate Host Associations
to force update the image and host profile to the target host.
What to do next
Procedure
1 After attaching the host profile to the target cluster, if the hosts are not updated with custom values,
the system displays the following message.
2 To update host customizations, navigate to the host profile, click Actions -> Edit Host
Customizations.
3 For ESXi versions 67ep6, 67ep7, 67u2, enter the MUX user password.
4 Verify that all the required fields are updated with appropriate values.
What to do next
Trigger Auto Deployment on Target Hosts. See Trigger Auto Deployment on Target Hosts.
There are two ways to prepare hosts to trigger auto-deployment of ESXi and NSX-T VIBs to be
configured.
If you want to migrate VMkernel adapters when installing NSX-T on the hosts, see:
What to do next
Reboot hosts before applying TNP to the stateless cluster. See Reboot Hosts Before Applying TNP.
Procedure
1 Reboot hosts.
The target host starts with the ESXi image. After starting, the target host remains in maintenance
mode until the TNP profile is applied to the target host and NSX-T installation is complete. Profiles
are applied on hosts in the following order:
2 On the ESXi host, the VMkernel adapter is attached to a temporary segment named <N-LogicalSegment>
because the host is not yet a transport node. After NSX-T is installed, the temporary
switch is replaced with the actual N-VDS switch and logical segment.
ESXi VIBs are applied to all the rebooted hosts. A temporary NSX switch is created on each ESXi host. When TNP
is applied to the hosts, the temporary switch is replaced by the actual NSX-T switch.
What to do next
Apply TNP to the stateless cluster. See Apply TNP on Stateless Cluster.
Procedure
1 Note down the settings extracted in the Host Profile from the reference host. The corresponding
entities in the TNP profile must have the same value. For example, the N-VDS name used in the Host
Profile and TNP must be the same.
For more information on extracted host profile settings, see Extract and Verify the Host Profile.
3 Add a TNP by entering all required fields. See Add a Transport Node Profile.
Ensure that values of the following parameters are the same on both the new TNP profile and the
existing Host Profile.
Note On a VDS switch, migration of VMkernel adapters and physical NIC migration is not supported.
n Transport Zone: Ensure transport zone referenced in Host Profile and TNP is the same.
n N-VDS Name: Ensure N-VDS name referenced in Host Profile and TNP is the same.
n Uplink Profile: Ensure uplink profile referenced in Host Profile and TNP is the same.
n Teaming Policy:
n (On a VDS switch) In vCenter Server, when creating VDS uplinks, verify the NIC used in the
Host Profile and map that physical NIC to the VDS uplink. In NSX-T, you map NSX-T uplinks
to VDS uplinks. So, verify the configuration on the VDS switch in vCenter Server.
n (On an N-VDS switch) When mapping a physical NIC to an uplink profile, first verify the NIC
used in the Host Profile and map that physical NIC to the uplink profile.
n Network mapping for install: When mapping network during installation, first verify the VMkernel
to logical switch mapping on the Host Profile and add the same mapping in TNP.
n Network mapping for uninstall: When mapping network during uninstallation, first verify the
VMkernel to VSS/DVS switch mapping on the Host Profile and add the same mapping in TNP.
After applying TNP on target nodes, if the TNP configuration does not match Host Profile
configuration, the node might not come up because of compliance errors.
6 Verify that the TNP profile is successfully applied to the target cluster. It means that NSX is
successfully configured on all nodes of the cluster.
7 In vSphere, verify that the physical NICs or VMkernel adapters are attached to the N-VDS switch.
8 In NSX, verify that the ESXi host is configured successfully as a transport node.
What to do next
Alternatively, you can reboot a target host after applying TNP to the cluster. See Reboot Hosts After
Applying TNP.
Procedure
1 Apply TNP to the stateless cluster that is already prepared with host profile. See Create and Apply
TNP on Stateless Cluster.
2 Reboot hosts.
After applying TNP profile to the stateless cluster, when you reboot any new node joining the cluster
that node is automatically configured with NSX-T on the host.
What to do next
Ensure that you reboot any new node joining the cluster to automatically deploy and configure ESXi and
NSX-T on the rebooted node.
To troubleshoot issues related to host profile and transport node profile when configuring auto-
deployment, see Troubleshoot Host Profile and Transport Node Profile.
n (On an N-VDS switch) Migration of vmk0 adapter from VSS/DVS to N-VDS is not supported on NSX-
T 2.4 and NSX-T 2.4.1.
n (On an N-VDS switch) Migration of vmk0 adapter from VSS/DVS to N-VDS is supported on NSX-T
2.5.
The following scenarios describe how target hosts are auto deployed, depending on the target host and reference host configuration.

Target Host: The target host has the vmk0 adapter configured. The vmk0 adapter is attached to a vSwitch.
Reference Host Configuration: The host profile extracted from the reference host has vmk0 configured on an N-VDS switch. In NSX-T, the TNP has only a vmk0 migration mapping configured.
Steps To Auto Deploy Target Hosts:
1 Attach the host profile to the target host.
2 Update host customizations, if required.
3 Reboot the host. The host profile is applied to the host. vmk0 is attached to a temporary switch.
4 Apply TNP.

Target Host: The target host has the vmk0 adapter configured. The vmk0 adapter is attached to a vSwitch, but vmk1 is not realized on any switch.
Reference Host Configuration: The host profile extracted from the reference host has vmk0 on a vSwitch and vmk1 on an N-VDS switch. In NSX-T, the TNP has only a vmk1 migration mapping configured.
Steps To Auto Deploy Target Hosts:
1 Attach the host profile to the target host.
2 Update host customizations, if required.
3 Reboot the host.

Target Host: The target host has the vmk0 adapter configured. The vmk0 adapter is attached to a vSwitch, but vmk1 is not realized on any switch.
Reference Host Configuration: The host profile extracted from the reference host has vmk0 configured on a vSwitch and vmk1 configured on an N-VDS switch. In NSX-T, the TNP has vmk0 and vmk1 migration mappings configured.
Steps To Auto Deploy Target Hosts:
1 Attach the host profile to the target host.
2 Update host customizations, if required.
3 Reboot the host. The vmk0 adapter is attached to a vSwitch and vmk1 is attached to a temporary NSX switch.
4 Apply TNP.
5 (Optional) If the host remains non-compliant with the host profile, reboot the host to make the host compliant.
The target host is successfully deployed with ESXi and NSX-T VIBs.

Target Host: The target host has the vmk0 and vmk1 adapters configured. The vmk0 and vmk1 adapters are attached to a vSwitch.
Reference Host Configuration: The host profile extracted from the reference host has vmk0 on a vSwitch and vmk1 configured on an N-VDS switch. In NSX-T, the TNP has a vmk1 migration mapping configured.
Steps To Auto Deploy Target Hosts:
1 Attach the host profile to the target host.
2 Update host customizations, if required.
3 Reboot the host.
4 Apply TNP.

Target Host: The target host has the vmk0 and vmk1 adapters configured. The vmk0 and vmk1 adapters are attached to a vSwitch.
Reference Host Configuration: The host profile extracted from the reference host has vmk0 and vmk1 configured on an N-VDS switch. In NSX-T, the TNP has vmk0 and vmk1 migration mappings configured.
Steps To Auto Deploy Target Hosts:
1 Attach the host profile to the target host.
2 Update host customizations, if required.
3 Reboot the host.
4 Apply TNP.
n (On an N-VDS switch) Migration of vmk0 adapter from VSS/DVS to N-VDS is not supported on NSX-
T 2.4 and NSX-T 2.4.1.
n (On an N-VDS switch) Migration of vmk0 adapter from VSS/DVS to N-VDS is supported on NSX-T
2.5.
The following scenarios describe how target hosts are auto deployed, depending on the target host state and reference host configuration.

Target Host State: The host is in a powered-off state (first-time start). It is later added to the cluster. The default auto-deploy rule is configured for the target cluster and associated with the host profile.
Reference Host Configuration: The host profile extracted from the reference host has VMkernel adapter 0 (vmk0) on a vSwitch and VMkernel adapter 1 (vmk1) configured on an N-VDS switch. In NSX-T, the TNP has only a vmk1 migration mapping configured.
Steps To Auto Deploy Target Hosts:
1 Power on the host. After the host is powered on:
n The host gets added to the cluster.
n The host profile is applied on the target host.
n The vmk0 adapter is on the vSwitch and the vmk1 adapter is on a temporary switch.

Target Host State: The host is in a powered-off state (first-time start). It is later added to the cluster. The default auto-deploy rule is configured for the target cluster and associated with the host profile.
Reference Host Configuration: The host profile extracted from the reference host has VMkernel adapter 0 (vmk0) and VMkernel adapter 1 (vmk1) configured on an N-VDS switch. In NSX-T, the TNP has vmk0 and vmk1 migration mappings configured.
Steps To Auto Deploy Target Hosts:
1 Power on the host. After the host is powered on:
n The host gets added to the cluster.
n The host profile is applied on the target host.
n The vmk0 and vmk1 adapters are on a temporary switch.

Target Host State: The host is in a powered-on state. It is later added to the cluster. The default auto-deploy rule is configured for the target cluster and associated with the host profile. The target host only has a vmk0 adapter configured on it.
Reference Host Configuration: The host profile extracted from the reference host has VMkernel adapter 0 (vmk0) on a vSwitch and VMkernel adapter 1 (vmk1) configured on an N-VDS switch. In NSX-T, the TNP has a vmk1 migration mapping configured.
Steps To Auto Deploy Target Hosts:
1 Move the host to be part of the cluster.
2 Reboot the host. After the host is rebooted, the host profile is applied on the target host.
n The vmk0 adapter is attached to a vSwitch, whereas the vmk1 adapter is attached to a temporary NSX switch.
n TNP is triggered.
n vmk1 is migrated to the N-VDS switch.
3 (Optional) If the host remains non-compliant with the host profile, reboot the host to make the host compliant.
The host is successfully deployed with ESXi and NSX-T VIBs.

Target Host State: The host is in a powered-on state. It is later added to the cluster. The default auto-deploy rule is configured for the target cluster and associated with the host profile. The target host only has a vmk0 adapter configured on it.
Reference Host Configuration: The host profile extracted from the reference host has VMkernel adapter 0 (vmk0) and VMkernel adapter 1 (vmk1) configured on an N-VDS switch. In NSX-T, the TNP has vmk0 and vmk1 migration mappings configured.
Steps To Auto Deploy Target Hosts:
1 Move the host to be part of the cluster.
2 Reboot the host. After rebooting the host, the host profile is applied to the target host.
n The vmk0 and vmk1 adapters are attached to a temporary NSX switch.
n TNP is triggered.

Target Host State: The host is in a powered-on state. It is later added to the cluster. The default auto-deploy rule is configured for the target cluster and associated with the host profile. The target host has vmk0 and vmk1 network mapping configured.
Reference Host Configuration: The host profile extracted from the reference host has VMkernel adapter 0 (vmk0) on a vSwitch and VMkernel adapter 1 (vmk1) configured on an N-VDS switch. In NSX-T, the TNP has a vmk1 migration mapping configured.
Steps To Auto Deploy Target Hosts:
1 Move the host to be part of the cluster.
2 Reboot the host. After the host is rebooted, the host profile is applied on the target host.
n The vmk0 adapter is attached to a vSwitch, whereas the vmk1 adapter is attached to a temporary NSX switch.
n TNP is triggered.
n vmk1 is migrated to the N-VDS switch.
3 (Optional) If the host remains non-compliant with the host profile, reboot the host to make the host compliant.
The host is successfully deployed with ESXi and NSX-T VIBs.

Target Host State: The host is in a powered-on state. It is later added to the cluster. The default auto-deploy rule is configured for the target cluster and associated with the host profile. The host has vmk0 and vmk1 network mapping configured.
Reference Host Configuration: In the reference host, the host profile has VMkernel adapter 0 (vmk0) and VMkernel adapter 1 (vmk1) configured on an N-VDS switch. In NSX-T, the TNP has vmk0 and vmk1 migration mappings configured.
Steps To Auto Deploy Target Hosts:
1 Move the host to be part of the cluster.
2 Reboot the host. After the host is rebooted, the host profile is applied on the target host.
n The vmk0 and vmk1 adapters are attached to a temporary NSX switch.
n TNP is triggered.
n The vmk0 and vmk1 adapters are migrated to the N-VDS switch.
The host is successfully deployed with ESXi and NSX-T VIBs.
The following scenarios describe known issues and workarounds when using host profiles with NSX-T.

Scenario: Multiple VMkernel adapters that are enabled to support Management, vMotion, and other traffic are migrated to the same logical switch. The VMkernel adapters are migrated to the logical switch after reboot, but the service on one VMkernel adapter is enabled on a different adapter.
Description: For example, before migration, vmk0 is enabled to support Management traffic and vmk1 is enabled for vMotion traffic. After host reboot, vmk0 supports vMotion traffic and vmk1 supports Management traffic. This results in a non-compliance error after reboot.
Workaround: None. There is no impact as both VMkernel adapters are on the same logical switch.

Scenario: Host preparation progress is stuck at 60% while the node status displays UP.
Description: When a TNP is applied on a cluster, NSX-T is successfully installed on the host and the node status displays UP, but the GUI still shows 60% progress.
Workaround: Reapply the TNP or TN configuration without any change in the configuration. This fixes the status to 100% in the GUI.

Scenario: Even though VMkernel migration is successful, there was a validation error on the TN before host switches are removed.
Description: When you migrate vmk0, the management interface, from a vSwitch to a logical switch, NSX-T is successfully installed on the host. VMkernel migration is successful, but the TN status shows Partial Success with the error:
Validation before host switches removal failed: [error: No management vmk will have PNIC after ['vmk1'] in ['9a bb eb c1 04 81 40 e2-bc 3f 3e aa bd 14 62 1e'] lose all PNICs.]; LogicalSwitch full-sync: LogicalSwitch full-sync realization query skipped.
Workaround: None. Ignore the error message as VMkernel migration is successful.

Scenario: Reapplying a TNP where the Network Mapping for Install lists vmk0 results in the host losing connectivity.
Description: When a TNP configuration consists of vmk0 in the Network Mapping for Install, the host loses connectivity.
Workaround: Instead of reapplying the TNP, reboot the host with the necessary configurations in the TNP.

Scenario: Cannot apply the host profile because the MUX user password policy and password were not reset.
Description: Only on hosts running versions earlier than vSphere 6.7 U3. Host remediation and host profile application on hosts might fail unless the mux_user password is reset.
Workaround: Under Policies & Profiles, edit the host profile to modify the mux_user password policy and reset the mux_user password.

Scenario: The host profile is not portable.
Description: None of the vCenter Servers can use the host profile containing the NSX-T configuration.
Workaround: None.

Scenario: Auto Deploy Rule Engine.
Description: The host profile cannot be used in auto deploy rules to deploy new clusters. If new clusters are deployed, the hosts get deployed with basic networking and remain in maintenance mode.
Workaround: Prepare each cluster from the NSX-T GUI. See Apply TNP on Stateless Cluster.

Scenario: Check compliance errors.
Description: Host profile remediation cannot fix the compliance errors related to the NSX-T configuration:
n Physical NICs configured on the Host Profile and TNP are different.
n Mapping between vNIC and logical switch. The Host Profile finds a mismatch in the logical switch to vNIC mapping with the TNP profile.
n VMkernel connected to N-VDS mismatch on the Host Profile and TNP.
n Opaque switch mismatch on the Host Profile and TNP.
Workaround: Ensure that the NSX-T configuration matches on the Host Profile and TNP. Reboot the host to realize the configuration changes. The host comes up.

Scenario: Remediation.
Description: If there are any NSX-T specific compliance errors, host profile remediation on that cluster is blocked. Incorrect configuration:
n Mapping between vNIC and logical switch
n Mapping of physical NICs
Workaround: Ensure that the NSX-T configuration matches on the Host Profile and TNP. Reboot the host to realize the configuration changes. The host comes up.

Scenario: Update.
Description: If the user has changed the NSX-T configuration in the cluster, then extract a new host profile. Update the host profile manually for all the settings that were lost.
Workaround: None.

Scenario: Host-level transport node configuration.
Description: After a transport node was auto-deployed, it acts as an individual entity. Any update to that transport node might not match the TNP.
Workaround: Update the cluster. Any update in a standalone transport node cannot persist its migration specification. The migration might fail after the reboot.

Scenario: PeerDNS configuration is not supported on the VMkernel adapter selected for migration to the N-VDS switch.
Description: If a VMkernel adapter selected for migration to the N-VDS is peer-DNS enabled, then host profile application fails.
Workaround: Edit the extracted host profile by disabling the peer-DNS setting on the VMkernel adapter that must be migrated to an N-VDS switch. Alternatively, ensure that you do not migrate peer-DNS enabled VMkernel adapters to an N-VDS switch.

Scenario: The DHCP address of the VMkernel NIC is not retained.
Description: If the reference host is stateful, then any stateless hosts using a profile extracted from the stateful reference host cannot retain their VMkernel management MAC address derived from the PXE-started MAC. This results in DHCP addressing issues.
Workaround: Edit the extracted host profile of the stateful host and modify 'Determine how MAC address for vmknic should be decided' to 'Use the MAC address from which the system was PXE started'.

Scenario: Host profile application failure in vCenter can lead to NSX configuration errors on the host.
Description: If host profile application fails in vCenter, the NSX configuration might also fail.
Workaround: In vCenter, verify that the host profile was successfully applied. Fix the errors and try again.

Scenario: LAGs are not supported on stateless ESXi hosts.
Description: An uplink profile configured with LAGs in NSX is not supported on a stateless ESXi host managed by a vCenter Server or in NSX.
Workaround: None.
Stateful Servers
Integrate host profiles of an ESXi host with NSX-T on stateful servers.
A stateful host is a host that retains all configurations and the installed VIBs even after it is rebooted.
While an auto-deploy server is needed for stateless hosts because the boot up files required to bring up a
stateless hosts are stored on the auto-deploy server, a stateful host does not need a similar infrastructure.
Because the boot up files required to bring up a stateful host is stored on its hard drive.
In this procedure, the reference host is outside of the stateful cluster and the target hosts in the cluster. A
target host can be within a cluster or a standalone host outside of the cluster. Prepare a cluster by
applying host profile and transport node profile (TN profile) , so that any new target hosts joining the
cluster is automatically prepared with NSX-T VIBs. Configure the target host as a transport node.
Similarly, for a standalone host, apply the host profile and configure NSX-T to install NSX-T VIBs and
when NSX-T configuration is complete, it becomes a transport node.
Note NSX-T VIBs are installed from TN profile and ESXi host configurations are applied by the Host
Profiles.
During the configuration of a target host into a transport node, VMkernel adapters and vmnics or physical
network interfaces that are attached to the VSS or VDS switch can be migrated and get connected to the
NSX-T virtual distributed switch, N-VDS switch.
Version Name 67ep6 67U2 67U3 67ep7 67U2C 6.5U3 6.5p03 7.0.0.1
NSX-T 3.0 Yes Yes Yes Yes Yes Yes Yes Yes
You can select a host either within the cluster or outside of the cluster to be the reference host. You need
to create a reference host because the host profile from the reference host is extracted and applied to a
target host. Both N-VDS and VDS switch types support migration of VMkernel adapters.
In this procedure, as an example, the instructions migrate vmk0 (management traffic) and vmk1
(vMotion traffic) to an N-VDS switch.
Prerequisites
Procedure
a In vSphere, add vmk1 adapter. vmk0 is already present to serve management traffic.
a Using vSphere Web Client, before migrating vmk0 and vmk1, ensure a logical switch is created in
NSX-T.
b (Optional) Using NSX-T Manager UI, configure NSX such that after installation of NSX-T, vmk1
adapter mapped to a logical switch is migrated to the N-VDS switch.
c (Optional) Using NSX-T Manager UI, configure NSX-T such that after installation of NSX-T, vmk0
adapter mapped to a logical switch is migrated to the N-VDS switch.
d Using vSphere Web Client, ensure that vmk0 and vmk1 are connected to a logical switch on N-
VDS switch.
4 In your environment, there might be a number of VMkernel adapters that need to be migrated to the N-VDS switch. However, before migrating VMkernel adapters from a VSS/VDS switch to an N-VDS switch, ensure that the configuration parameters on the target host match those on the reference host (see the command sketch after these steps).
b Manually configure NSX-T on the host. When configuring the host as a transport node, ensure that its NSX-T configuration matches the reference host from which the host profile was extracted. Specifically, ensure the following conditions are met.
d The vmk1 adapter must be connected to the same logical switch that is used by the reference host.
e The target host must use the same IP pool that is used by the reference host.
f The uplink profile, LLDP, NIOC, network mapping for install, and N-VDS configured on the target host must be the same as configured on the reference host.
g Manually add the VMkernel adapter vmk1 and vmnic1 so that they are migrated from the VSS/VDS switch to the N-VDS switch. See the vmk1 migration scenarios that follow.
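One way to compare the VMkernel adapter layout on the reference and target hosts before and after migration is the standard ESXi CLI. This is an illustrative sketch and is not part of the documented procedure:
# List all VMkernel adapters and the switch/port group each one is attached to.
esxcli network ip interface list
# Show the IPv4 configuration (DHCP or static) of each VMkernel adapter.
esxcli network ip interface ipv4 get
Run the commands on both hosts and confirm that, after the transport node configuration is applied, vmk0 and vmk1 report the expected N-VDS logical switch attachment.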
What to do next
The following scenarios describe how VMkernel adapters are migrated with and without host profiles applied to NSX-T.
Scenario: vmk1 migration on a standalone target host by applying the reference host profile.
Problem: The target host is not configured as a transport node. As the target host does not know about any NSX-T objects, host profile application fails and remediation of the host profile on the target host fails. Error: Received SOAP response fault: generate HostConfigTask Spec.
Resolution: Before applying the reference host profile to migrate vmk1 to the logical switch on the target host, configure the target host as a transport node, which installs the NSX-T VIBs, creates an N-VDS switch, and migrates the vmk1 adapter from the VSS switch to the N-VDS switch. When configuring the host as a transport node, ensure the following conditions are met:
n The host must belong to the same transport zone.
n The vmk1 adapter must be connected to the same logical switch that is used by the reference host.
n The target host must use the same IP pool that is used by the reference host.
n The uplink profile, LLDP, NIOC, network mapping for install, and N-VDS configured on the target host must be the same as configured on the reference host.
Scenario: vmk1 migration on target hosts in a stateful cluster.
Problem: Before applying the host profile to the target host, if you prepare the cluster by applying a TN profile configured with vmk1 mapped to the logical switch, then vmk1 migration fails. Error: vmk1 missing on the host.
Resolution:
1 Apply the reference host profile to the target host that joined the cluster.
2 Remediate the host profile on the target host to create the vmk1 adapter on the target host.
3 Re-apply the TN profile to the cluster to migrate vmk1 to the target cluster.
Scenario: vmk0 and vmk1 migration on a standalone host.
Problem: When configuring NSX-T on the standalone host, if the Network Mapping for Install field does not specify vmk0 or vmk1 mappings, then migration fails.
Resolution: When configuring NSX-T on the target host, ensure that the Network Mapping for Install field is specified with vmk0 and vmk1 mapped to the same logical switch on the N-VDS.
Scenario: vmk0 and vmk1 migration on a cluster host.
Problem: When applying a TN profile to a cluster, if the Network Mapping for Install field does not specify vmk0 or vmk1 mappings, then migration fails.
Resolution: Apply the TN profile to the cluster. When configuring the TN profile for the cluster, ensure that the Network Mapping for Install field is specified with vmk0 and vmk1 mapped to a logical switch on the N-VDS.
You do not need to apply a host profile to the target host as vmk0 already exists on it. vmk0 adapter
supports management traffic on an ESXi host.
Scenario: vmk0 migration on a standalone host.
Action: When configuring NSX-T on the target host, ensure that the Network Mapping for Install field is specified with vmk0 mapped to a logical switch on the N-VDS.
Result: vmk0 is migrated to the logical switch on the target host.
Scenario: vmk0 migration on a cluster host.
Action: Apply the TN profile to the cluster. When configuring the TN profile on the cluster, ensure that the Network Mapping for Install field is specified with vmk0 mapped to a logical switch on the N-VDS.
Result: vmk0 is migrated to the logical switch on the target host.
NSX Cloud provides a single pane of glass for managing your public cloud networks.
NSX Cloud is agnostic of provider-specific networking and does not require hypervisor access in a public cloud.
n You can develop and test applications using the same network and security profiles used in the
production environment.
n Developers can manage their applications until they are ready for deployment.
n With disaster recovery, you can recover from an unplanned outage or a security threat to your public
cloud.
n If you migrate your workloads between public clouds, NSX Cloud ensures that similar security policies
are applied to workload VMs regardless of their new location.
[Figure: NSX Cloud overview showing the on-prem private cloud (NSX Manager cluster, CSM, hypervisors, and an optional firewall) connecting over the Internet to the public cloud management/infra subnet.]
Core Components
The core NSX Cloud components are:
n NSX Manager for the management plane with policy-based routing, role-based access control
(RBAC), control plane and runtime states defined.
n Cloud Service Manager (CSM) for integration with NSX Manager to provide public cloud-specific
information to the management plane.
n Public Cloud Gateway (PCG) for connectivity to the NSX management and control planes, NSX
Edge gateway services, and for API-based communications with the public cloud entities.
n NSX Tools functionality that provides NSX-managed datapath for workload VMs.
[Figure: NSX Cloud onboarding workflow. Deploy the on-prem NSX-T Data Center components, add public cloud accounts, deploy the Public Cloud Gateway, onboard workload VMs in NSX Enforced or Native Cloud Enforced Mode, and create NSX-T Data Center firewall policies.]
Note While planning your deployment, ensure that the on-prem NSX-T Data Center appliances have good connectivity with the PCG deployed in the public cloud, and that Transit VPCs/VNets are in the same region as the Compute VPCs/VNets.
What to do next
Follow instructions at Using NSX Cloud in the NSX-T Data Center Administration Guide.
Install CSM
The Cloud Service Manager (CSM) is a core component of NSX Cloud.
After installing NSX Manager, install CSM by following the same steps as for installing NSX Manager and
selecting nsx-cloud-service-manager as the VM role. See Install NSX Manager and Available
Appliances for instructions.
You can deploy CSM in the Extra Small VM size or higher, as required. See NSX Manager VM and Host
Transport Node System Requirements for details.
Prerequisites
n NSX Manager must be installed and you must have the username and password of the admin account to log in to NSX Manager.
n CSM must be installed and you must have the Enterprise Administrator role assigned in CSM.
Procedure
Option Description
NSX Manager Host Name Enter the fully qualified domain name (FQDN) of the NSX Manager, if available.
You may also enter the IP address of NSX Manager.
Admin Credentials Enter an Enterprise Administrator username and password for NSX Manager.
Manager Thumbprint Optionally, enter the NSX Manager's thumbprint value. If you leave this field blank, the system identifies the thumbprint and displays it in the next screen.
4 (Optional) If you did not provide a thumbprint value for NSX Manager, or if the value was incorrect,
the Verify Thumbprint screen appears. Select the checkbox to accept the thumbprint discovered by
the system.
5 Click Connect.
Note If you missed this setting in the setup wizard or if you want to change the associated NSX
Manager, log in to CSM, click System > Settings, and click Configure on the panel titled
Associated NSX Node.
6 (Optional) Set up the Proxy server. See instructions in (Optional) Configure Proxy Servers.
Table 12-2. Ports and Protocols Required for Public Cloud Connectivity with NSX-T Data Center
n From Cloud Service Manager (CSM) to the PCG in the Transit/Self-Managed VPC/VNet: port 80 from NSX-T Data Center 2.5.1 onwards (port 7442 for NSX-T Data Center 2.5.0 and earlier). Required for CSM REST API calls to PCG (over HTTPS).
n Between NSX Manager and PCG: port 443. Required for NSX Manager initiated NSX-RPC channel(s) to PCG.
All public cloud communication from PCG and CSM is routed through the selected proxy server.
Proxy settings for PCG are independent of proxy settings for CSM. You can choose to have none or a
different proxy server for PCG.
The following authentication options are supported for proxy servers:
n Credentials-based authentication.
n No authentication.
Procedure
1 Click System > Settings. Then click Configure on the panel titled Proxy Servers.
Note You can also provide these details when using the CSM Setup Wizard that is available when
you first install CSM.
Option Description
Default Use this radio button to indicate the default proxy server.
Authentication Optional. If you want to set up additional authentication, select this check box and provide a valid username and password.
No Proxy Select this option if you do not want to use any of the proxy servers configured.
Procedure
1 Configure vIDM for NSX Manager and CSM. See instructions at Configure VMware Identity Manager
Integration in the NSX-T Data Center Administration Guide.
2 Assign the same role to the vIDM user for NSX Manager and CSM, for example, the Enterprise Admin role assigned to the user named vIDM_admin. You must log in to NSX Manager and CSM separately and assign the same role to the same username. See Add a Role Assignment or Principal Identity in the NSX-T Data Center Administration Guide for detailed instructions.
4 Enter the vIDM user's credentials. Once you log in, you can switch between NSX Manager and CSM
by clicking the Applications icon.
Note
n Connect VPCs/VNets with on-prem using suitable methods, such as Direct Connect for AWS or
Express Route for Microsoft Azure or site-to-site VPN with any VPN endpoint on-premises and PCG
acting as the VPN endpoint in your public cloud.
n If you choose to have a Transit/Compute topology, make sure there are peering connections
established between the Transit and Compute VPCs/VNets. You can have a single PCG manage
multiple compute VPCs/VNets. You may also choose to have a flat compute VPC/VNet architecture
with a PCG pair installed in each VPC/VNet. See Deploy the NSX Public Cloud Gateway for details
on PCG deployment options.
Overview:
n NSX Cloud provides a PowerShell script to generate the Service Principal and roles that use the
managed identity feature of Microsoft Azure to manage authentication while keeping your Microsoft
Azure credentials secure. You can also include multiple subscriptions under one Service Principal
using this script.
n You have the option of reusing the Service Principal for all your subscriptions, or to create new
Service Principals as required. There is an additional script if you want to create separate Service
Principals for additional subscriptions.
n For multiple subscriptions, whether you are using a single Service Principal for all, or multiple Service
Principals, you must update the JSON files for the CSM and PCG roles to add each additional
subscription name under the section AssignableScopes.
n If you already have an NSX Cloud Service Principal in your VNet, you can update it by running the
scripts again and leaving out the Service Principal name from the parameters.
n The Service Principal name must be unique for your Microsoft Azure Active Directory. You may use
the same Service Principal in different subscriptions under the same Active Directory domain, or
different Service Principals per subscription. But you cannot create two Service Principals with the
same name.
n You must either be the owner of or have permissions to create and assign roles in all the Microsoft
Azure subscriptions.
n Scenario 1: You have a single Microsoft Azure Subscription that you want to enable with NSX
Cloud.
n Scenario 2: You have multiple Microsoft Azure Subscriptions under the same Microsoft Azure
Directory, that you want to enable with NSX Cloud, but want to use one NSX Cloud Service
Principal across all your subscriptions.
n Scenario 3: You have multiple Microsoft Azure Subscriptions under the same Microsoft Azure
Directory, that you want to enable with NSX Cloud, but want to use different NSX Cloud Service
Principal names for different subscriptions.
2 (Optional) Create Service Principals for other subscriptions you want to link.
Note If using multiple subscriptions, whether using the same or different Service Principals, you
must add each subscription separately in CSM.
Prerequisites
n You must have PowerShell 5.0+ with the AzureRM Module installed.
n You must either be the owner of or have permissions to create and assign roles in all the Microsoft
Azure subscriptions.
Note The response time from Microsoft Azure can cause the script to fail when you run it the first time. If
the script fails, try running it again.
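As a minimal sketch of preparing the PowerShell environment named in the prerequisites (the module name AzureRM comes from the prerequisite above; PowerShell Gallery access and the -Scope choice are assumptions):
# Install the AzureRM module from the PowerShell Gallery (requires PowerShell 5.0+).
Install-Module -Name AzureRM -Scope CurrentUser
# Confirm the module is available before running the NSX Cloud scripts.
Get-Module -ListAvailable -Name AzureRM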
Procedure
2 Extract the following contents of the ZIP file in your Windows system:
Script/File Description
CreateNSXRoles.ps1 The PowerShell script to generate the NSX Cloud Service Principal and managed
identity roles for CSM and PCG. This script takes the following parameters:
n -subscriptionId <the Transit_VNet's_Azure_subscription_ID>
n (optional) -servicePrincipalName <Service_Principal_Name>
n (optional) -useOneServicePrincipal
AddServicePrincipal.ps1 An optional script required if you want to add multiple subscriptions and assign
different Service Principals to each subscription. See Scenario 3 in the following
steps. This script takes the following parameters:
n -computeSubscriptionId
<the_Compute_VNet's_Azure_subscription_ID>
n -transitSubscriptionId <the
Transit_VNet's_Azure_Subscription_ID>
n -csmRoleName <CSM_Role_Name>
n -servicePrincipalName <Service_Principal_Name>
nsx_csm_role.json A JSON template for the CSM role name and permissions. This file is required as
an input to the PowerShell script and must be in the same folder as the script.
nsx_pcg_role.json A JSON template for the PCG role name and permissions. This file is required as
an input to the PowerShell script and must be in the same folder as the script.
Note The default PCG (Gateway) Role Name is nsx-pcg-role. You need to
provide this value when adding your subscription in CSM.
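For step 2, a hedged sketch of extracting the downloaded archive with built-in PowerShell cmdlets; the ZIP file name used here is a placeholder, not the actual download name, and the key point is that the script and the JSON templates must end up in the same folder:
# Extract the NSX Cloud scripts and JSON templates into one folder.
Expand-Archive -Path .\nsx-cloud-azure-scripts.zip -DestinationPath .\nsx-cloud-azure-scripts
Set-Location .\nsx-cloud-azure-scripts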
3 Scenario 1: You have a single Microsoft Azure Subscription that you want to enable with NSX Cloud.
a From a PowerShell instance, go to the directory where you downloaded the Microsoft Azure
scripts and JSON files.
b Run the script named CreateNSXRoles.ps1 with the parameter -SubscriptionId, as follows:
Note If you want to override the default Service Principal name of nsx-service-admin, you
can also use the parameter -servicePrincipalName. The Service Principal name must be
unique in your Microsoft Azure Active Directory.
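The command itself is not reproduced above; a minimal sketch based on the parameters documented in the script table (the subscription ID and the custom Service Principal name are placeholders):
# Create the NSX Cloud Service Principal and the CSM/PCG roles for a single subscription.
.\CreateNSXRoles.ps1 -subscriptionId "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
# Optionally override the default Service Principal name (nsx-service-admin).
.\CreateNSXRoles.ps1 -subscriptionId "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" -servicePrincipalName "my-nsx-sp"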
4 Scenario 2: You have multiple Microsoft Azure Subscriptions under the same Microsoft Azure
Directory, that you want to enable with NSX Cloud, but want to use one NSX Cloud Service Principal
across all your subscriptions.
a From a PowerShell instance, go to the directory where you downloaded the Microsoft Azure
scripts and JSON files.
b Edit each of the JSON files to add a list of other subscription IDs under the section titled
"AssignableScopes", for example:
"AssignableScopes": [
"/subscriptions/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
"/subscriptions/aaaaaaaa-bbbb-cccc-dddd-ffffffffffff",
"/subscriptions/aaaaaaaa-bbbb-cccc-dddd-000000000000"
Note You must use the format shown in the example to add subscription IDs: "/subscriptions/
<Subscription_ID>"
c Run the script named CreateNSXRoles.ps1 with the parameters -subscriptionID and -
useOneServicePrincipal:
Note Omit the Service Principal name here if you want to use the default name: nsx-service-
admin. If that Service Principal name already exists in your Microsoft Azure Active Directory,
running this script without a Service Principal name updates that Service Principal.
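For step c, a sketch of the invocation based on the documented parameters; the subscription ID is a placeholder, and the other subscriptions are picked up from the AssignableScopes entries in the edited JSON files:
# Create one Service Principal shared across all subscriptions listed in AssignableScopes.
.\CreateNSXRoles.ps1 -subscriptionId "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" -useOneServicePrincipal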
5 Scenario 3: You have multiple Microsoft Azure Subscriptions under the same Microsoft Azure
Directory, that you want to enable with NSX Cloud, but want to use different NSX Cloud Service
Principal names for different subscriptions.
a From a PowerShell instance, go to the directory where you downloaded the Microsoft Azure
scripts and JSON files.
b Follow steps b and c from the second scenario to add multiple subscriptions to the
AssignableScopes section in each of the JSON files.
Note Omit the Service Principal name here if you want to use the default name: nsx-service-
admin. If that Service Principal name exists in your Microsoft Azure Active Directory, running this
script without a Service Principal name updates that Service Principal.
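For the additional subscriptions that need their own Service Principal, a sketch of running the optional AddServicePrincipal.ps1 script with the parameters documented in the script/file table; every value shown is a placeholder:
# Create a separate Service Principal for a Compute subscription and tie it to the Transit subscription.
.\AddServicePrincipal.ps1 -computeSubscriptionId "aaaaaaaa-bbbb-cccc-dddd-ffffffffffff" `
    -transitSubscriptionId "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" `
    -csmRoleName "<CSM_Role_Name>" `
    -servicePrincipalName "<Service_Principal_Name>"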
6 Look for a file in the same directory where you ran the PowerShell script. It is named like:
NSXCloud_ServicePrincipal_<your_subscription_ID>_<NSX_Cloud_Service_Principal_name>. This file contains the information required to add your Microsoft Azure subscription in CSM.
n Client ID
n Client Key
n Tenant ID
n Subscription ID
Results
n an Azure Resource Manager Service Principal for the NSX Cloud application.
Note Refer to the JSON files that are used to create the CSM and PCG roles for a list of permissions
available to them after the roles are created.
What to do next
Note When enabling NSX Cloud for multiple subscriptions, you must add each separate subscription to
CSM individually, for example, if you have five total subscriptions you must add five Microsoft Azure
accounts in CSM with all other values the same but different subscription IDs.
Prerequisites
n You must have the Enterprise Administrator role in NSX-T Data Center.
n You must have the output of the PowerShell script with details of the NSX Cloud Service Principal.
n You must have the value of the PCG role you provided when running the PowerShell script to create
the roles and the Service Principal. The default value is nsx-pcg-role.
Procedure
Option Description
Name Provide a suitable name to identify this account in CSM. You may have multiple Microsoft Azure subscriptions that are associated with the same Microsoft Azure tenant ID; name each account appropriately in CSM, for example, Azure-DevOps-Account, Azure-Finance-Account, and so on.
Client ID Copy and paste this value from the output of the PowerShell script.
Key Copy and paste this value from the output of the PowerShell script.
Subscription ID Copy and paste this value from the output of the PowerShell script.
Tenant ID Copy and paste this value from the output of the PowerShell script.
Option Description
Gateway Role Name The default value is nsx-pcg-role. This value is available from the
nsx_pcg_role.json file if you changed the default.
Cloud Tags By default this option is enabled and allows your Microsoft Azure tags to be visible
in NSX Manager
4 Click Save.
CSM adds the account and you can see it in the Accounts section within three minutes.
5 Whitelist all the VMs in the VNet where you want VMs managed. This is not required, but it is highly recommended for brownfield deployments because of the impact of the Quarantine Policy when it is changed from disabled to enabled.
What to do next
Overview:
n NSX Cloud provides a shell script that you can run from the AWS CLI of your AWS account to create
the IAM profile and role, and create a trust relationship for Transit and Compute VPCs .
n Scenario 1: You want to use a single AWS account with NSX Cloud.
n Scenario 2: You want to use multiple sub-accounts in AWS that are managed by a master AWS
account.
n Scenario 3: You want to use multiple AWS accounts with NSX Cloud, designating one account
where you will install the PCG, that is a Transit VPC, and other accounts that will link to this PCG,
that is, Compute VPCs. See Deploy the NSX Public Cloud Gateway for details on PCG
deployment options.
1 Use the NSX Cloud shell script to do the following. This step requires AWS CLI configured with the
account you want to add.
n (Optional) Create a trust relationship between the AWS account hosting the Transit VPC and the
AWS account hosting the Compute VPC.
If you plan to host a Transit VPC linked to multiple Compute VPCs in two different AWS accounts, you
can use the script to create a trust relationship between these accounts.
Note The PCG (Gateway) role name is nsx_pcg_service by default. If you want a different value for
the Gateway Role Name, you can change it in the script, but make a note of this value because it is
required for adding the AWS account in CSM.
Prerequisites
You must have the following installed and configured on your Linux or compatible system before you run
the script:
n AWS CLI configured for the account and the default region.
n jq (a JSON parser).
Note If using AWS GovCloud (US) accounts, ensure that your AWS CLI is configured for the GovCloud
(US) account and the default region is specified in the AWS CLI configuration file.
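Before running the script, you can confirm the prerequisites from the same shell; a hedged, illustrative sketch:
# Verify the AWS CLI and jq are installed.
aws --version
jq --version
# Verify the CLI is configured with the intended account and a default region.
aws configure list
aws sts get-caller-identity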
Procedure
u Scenario 1: You want to use a single AWS account with NSX Cloud.
bash nsx_csm_iam_script.sh
b Enter yes when prompted with the question Do you want to create an IAM user for CSM and
an IAM role for PCG? [yes/no]
c Enter a name for the IAM user when asked What do you want to name the IAM User?
Note The IAM user name must be unique in your AWS account.
d Enter no when asked Do you want to add trust relationship for any Transit VPC account?
[yes/no]
When the script runs successfully, the IAM profile and a role for PCG are created in your AWS account. The values are saved in the output file named aws_details.txt in the same directory where you ran the script. Next, follow instructions at Add your AWS Account in CSM and then Deploy PCG in a VPC to finish the process of setting up a Transit or Self-Managed VPC.
u Scenario 2: You want to use multiple sub-accounts in AWS that are managed by one master AWS
account.
bash nsx_csm_iam_script.sh
b Enter yes when prompted with the question Do you want to create an IAM user for CSM and
an IAM role for PCG? [yes/no]
c Enter a name for the IAM user when asked What do you want to name the IAM User?
Note The IAM user name must be unique in your AWS account.
d Enter no when asked Do you want to add trust relationship for any Transit VPC account?
[yes/no]
Note With a master AWS account, if your Transit VPC has permission to view Compute VPCs in
the sub-accounts, you do not need to establish a trust relationship with your sub-accounts. If not,
follow the steps for Scenario 3 to set up multiple accounts.
When the script runs successfully, the IAM profile and a role for PCG are created in your AWS master account. The values are saved in the output file named aws_details.txt in the same directory where you ran the script. Next, follow instructions at Add your AWS Account in CSM and then Deploy PCG in a VPC to finish the process of setting up a Transit or Self-Managed VPC.
u Scenario 3: You want to use multiple AWS accounts with NSX Cloud, designating one account for
Transit VPC and other accounts for Compute VPCs. See Deploy the NSX Public Cloud Gateway for
details on PCG deployment options.
a Make a note of the 12-digit AWS account number where you want to host the Transit VPC.
b Set up the Transit VPC in the AWS account by following steps a through d for Scenario 1 and
finish the process of adding the account in CSM.
c Download and run the NSX Cloud script from a Linux or compatible system in your other AWS
account where you want to host the Compute VPCs. Alternatively, you can use AWS profiles with
different account credentials to use the same system to run the script again for your other AWS
account.
d The script poses the question: Do you want to create an IAM user for CSM and an IAM role
for PCG? [yes/no]. Use the following guidance for the appropriate response:
n If this AWS account was already added to CSM: enter no in response to Do you want to create an IAM user for CSM and an IAM role for PCG? [yes/no]
n If this account has not been added to CSM before: enter yes in response to Do you want to create an IAM user for CSM and an IAM role for PCG? [yes/no]
e (Optional) If you answered yes to creating an IAM user for CSM and PCG in the previous
question, enter a name for the IAM user when asked What do you want to name the IAM User?.
The IAM user name must be unique in your AWS account.
f Enter yes when asked Do you want to add trust relationship for any Transit VPC account?
[yes/no]
g Enter or copy-paste the 12-digit AWS account number that you noted in step 1 when asked What
is the Transit VPC account number?
An IAM Trust Relationship is established between the two AWS accounts and an ExternalID is
generated by the script.
When the script runs successfully, the IAM profile and a role for PCG are created in your AWS master account. The values are saved in the output file named aws_details.txt in the same directory where you ran the script. Next, follow instructions at Add your AWS Account in CSM and then Link to a Transit VPC or VNet to finish the process of linking to a Transit VPC.
Procedure
3 Click +Add and enter the following details using the output file aws_details.txt generated from the
NSX Cloud script:
Option Description
Discover Cloud Tags By default this option is enabled and allows your AWS tags to be visible in NSX
Manager
Gateway Role Name The default value is nsx_pcg_service. You can find this value in the output of the
script in the file aws_details.txt.
In the VPCs tab of CSM, you can view all the VPCs in your AWS account.
In the Instances tab of CSM, you can view the EC2 Instances in this VPC.
4 Whitelist all the VMs in the VPC where you want VMs managed. This is not required, but it is highly recommended for brownfield deployments because of the impact of the Quarantine Policy when it is changed from disabled to enabled.
What to do next
Familiarize yourself with the following terminology explaining the PCG's architecture and deployment
modes for workload VM-management.
Note The PCG is deployed in a single default size for each supported public cloud. For AWS, the default instance type is c5.xlarge. Some regions might not support this instance type; refer to AWS documentation for details.
Architecture
The PCG can either be a standalone gateway appliance or shared between your public cloud VPCs or
VNets to achieve a hub and spoke topology.
[Figure: Hub and spoke topology. The on-prem private cloud, with hypervisors and an optional firewall, connects to the Transit VPC/VNet over Express Route/Direct Connect, site-to-site VPN, or the Internet. The Transit VPC/VNet contains the PCG, its management and downlink subnets, an Internet gateway (IGW), and workload subnets (Web1, Web2).]
Modes of Deployment
Self-managed VPC/VNet: When you deploy the PCG in a VPC or VNet, it qualifies the VPC or VNet as
self-managed, that is, you can bring VMs hosted in this VPC or VNet under NSX management.
Transit VPC/VNet: A self-managed VPC/VNet becomes a Transit VPC/VNet when you link Compute VPCs/VNets to it.
Compute VPC/VNet: VPCs/VNets that do not have the PCG deployed on them but link to a Transit VPC/
VNet are called Compute VPCs/VNets.
n Management subnet: This subnet is used for management traffic between on-prem NSX-T Data
Center and PCG. The recommended range is /28.
n Uplink subnet: This subnet is used for north-south internet traffic. The recommended range is /24.
n Downlink subnet: This subnet encompasses the workload VM's IP address range, and should be
sized accordingly. Bear in mind that you may need to incorporate additional interfaces on the
workload VMs for debugging.
Plan your PCG deployment to align with your network addressing plan, using FQDNs for the NSX-T Data Center components and a DNS server that can resolve these FQDNs.
Note It is not recommended to use IP addresses for connecting the public cloud with NSX-T Data
Center using PCG, but if you choose that option, do not change your IP addresses.
Modes of VM-Management
NSX Enforced Mode: In this mode, workload VMs are brought under NSX management by using NSX
Tools that have to be installed on each such workload VM after the tag "nsx.network=default" is applied to
them in AWS or Microsoft Azure.
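As an illustration of applying the nsx.network=default tag mentioned above, the following AWS CLI sketch tags a workload instance; the instance ID is a placeholder, and in Microsoft Azure you can apply the equivalent tag from the portal:
# Tag a workload VM so it is brought under NSX management in the NSX Enforced Mode.
aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=nsx.network,Value=default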
Native Cloud Enforced Mode: In this mode, your workload VMs can be brought under NSX management without the use of NSX Tools.
Quarantine Policy
Quarantine Policy: This is NSX Cloud's threat detection feature that works with your public cloud security
groups.
n In the NSX Enforced Mode you can enable or disable Quarantine Policy. It is recommended to have
the Quarantine Policy disabled and all your VMs whitelisted when onboarding workload VMs.
n In the Native Cloud Enforced Mode Quarantine Policy is always enabled and cannot be disabled.
Note Once a mode is selected for a Transit or Compute VPC/VNet, you cannot change the mode. If you
want to switch modes, you must undeploy the PCG and redeploy it in the desired mode.
Note You must have already installed and connected NSX Manager with CSM in your on-prem
deployment.
Overview
n Connect your Microsoft Azure subscription with on-prem NSX-T Data Center.
n Configure your VNets with the necessary CIDR blocks and subnets required by NSX Cloud.
n Synchronize time on the CSM appliance with the Microsoft Azure Storage server or NTP.
Connect your Microsoft Azure subscription with on-prem NSX-T Data Center
Every public cloud provides options to connect with an on-premises deployment. You can choose any of
the available connectivity options that suit your requirements. Refer to Microsoft Azure Reference
documentation for details.
Note You must review and implement the applicable security considerations and best practices by
Microsoft Azure, for example, all privileged user accounts accessing the Microsoft Azure portal or API
should have Multi Factor Authentication (MFA) enabled. MFA ensures only a legitimate user can access
the portal and reduces the likelihood of access even if credentials are stolen or leaked. For more
information and recommendations, refer to Microsoft Azure Security Center Documentation.
n One management subnet with a recommended range of /28, for management traffic between on-prem NSX-T Data Center and PCG.
n One downlink subnet with a recommended range of /24, for the workload VMs.
n One, or two for HA, uplink subnets with a recommended range of /24, for routing of north-south traffic leaving from or entering the VNet.
See Deploy the NSX Public Cloud Gateway for details on how these subnets are used.
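A hedged sketch of creating such subnets with the Azure CLI; the resource group, VNet name, subnet names, and address ranges are placeholders, and the exact flag name (--address-prefixes versus --address-prefix) varies slightly between az CLI versions:
# Downlink subnet for workload VMs (recommended /24).
az network vnet subnet create --resource-group my-rg --vnet-name my-transit-vnet --name nsx-downlink --address-prefixes 10.10.1.0/24
# Uplink subnet for north-south traffic (recommended /24); create a second one if you plan to use HA.
az network vnet subnet create --resource-group my-rg --vnet-name my-transit-vnet --name nsx-uplink-1 --address-prefixes 10.10.2.0/24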
The VNet in which you deploy a PCG can act as a Transit VNet to which other VNets can connect (known
as Compute VNets). This VNet can also manage VMs and act as a self-managed VNet.
Follow these instructions to deploy a PCG. If you want to link to an existing Transit VNet, see Link to a
Transit VPC or VNet .
Prerequisites
n The VNet on which you are deploying PCG must have the required subnets appropriately adjusted for
High Availability: uplink, downlink, and management.
Procedure
Option Description
SSH Public Key Provide an SSH public key that can be validated while deploying PCG. This is
required for each PCG deployment.
Quarantine Policy on the Associated You can only change the Quarantine Policy setting if you choose to manage
VNet workload VMs using NSX Tools (NSX Enforced Mode). Quarantine Policy is always
enabled in the Native Cloud Enforced Mode.
Leave this in the default disabled mode when you first deploy PCG. You can
change this value after onboarding VMs. See Manage Quarantine Policy in the
NSX-T Data Center Administration Guide for details.
Manage with NSX Tools Leave in the default disabled state to onboard workload VMs in the Native Cloud
Enforced Mode. If you want to install NSX Tools on your workload VMs to use the
NSX Enforced Mode, enable this option.
Auto-install NSX Tools This is only available when you enable Manage with NSX Tools. If selected, NSX
Tools are auto-installed on all workload VMs in the Transit/Self-managed/linked
Compute VNet if the tag nsx.network=default is applied to them.
Local Storage Account When you add a Microsoft Azure subscription to CSM, a list of your Microsoft
Azure Storage Accounts is available to CSM. Select the Storage Account from the
drop-down menu. When proceeding with deploying PCG, CSM copies the publicly
available VHD of the PCG into this Storage Account of the selected region.
Note If the VHD image has been copied to this storage account in the region
already for a previous PCG deployment, then the image is used from this location
for subsequent deployments to reduce the overall deployment time.
VHD URL If you want to use a different PCG image that is not available from the public
VMware repository, you can enter the URL of the PCG’s VHD here. The VHD must
be present in the same account and region where this VNet is created.
Note The VHD must be in the correct URL format. We recommend that you use
the Click to copy option in Microsoft Azure.
Option Description
Proxy Server Select a proxy server to use for internet-bound traffic from this PCG. The proxy servers are configured in CSM. You can select the same proxy server as CSM, if one is configured, select a different proxy server from CSM, or select No Proxy Server.
See (Optional) Configure Proxy Servers for details on how to configure proxy servers in CSM.
Advanced The advanced DNS settings provide flexibility in selecting DNS servers for
resolving NSX-T Data Center management components.
Obtain via Public Cloud Provider's Select this option if you want to use Microsoft Azure DNS settings. This is the
DHCP default DNS setting if you do not pick either of the options to override it.
Override Public Cloud Provider's DNS Select this option if you want to manually provide the IP address of one or more
Server DNS servers to resolve NSX-T Data Center appliances as well as the workload
VMs in this VNet.
Use Public Cloud Provider's DNS Select this option if you want to use the Microsoft Azure DNS server for resolving
server only for NSX-T Data Center the NSX-T Data Center management components. With this setting, you can use
Appliances two DNS servers: one for PCG that resolves NSX-T Data Center appliances; the
other for the VNet that resolves your workload VMs in this VNet.
6 Click Next.
Option Description
Enable HA for NSX Cloud Gateway Select this option to enable High Availability.
Public IP on Mgmt NIC Select Allocate New IP address to provide a public IP address to the
management NIC. You can manually provide the public IP address if you want to
reuse a free public IP address.
Public IP on Uplink NIC Select Allocate New IP address to provide a public IP address to the uplink NIC.
You can manually provide the public IP address if you want to reuse a free public
IP address.
What to do next
Follow instructions at Using NSX Cloud in the NSX-T Data Center Administration Guide.
Note You must have already installed and connected NSX Manager with CSM in your on-prem
deployment.
Overview
n Connect your AWS account with on-prem NSX Manager appliances using any of the available options
that best suit your requirements.
n Configure your VPC with subnets and other requirements for NSX Cloud.
Connect your AWS account with your on-prem NSX-T Data Center deployment
Every public cloud provides options to connect with an on-premises deployment. You can choose any of
the available connectivity options that suit your requirements. Refer to AWS Reference Documentation for
details.
Note You must review and implement the applicable security considerations and best practices by AWS;
refer to AWS Security Best Practices for details.
1 Assuming your VPC uses a /16 network, for each gateway that needs to be deployed, set up three
subnets.
Important If using High Availability, set up three additional subnets in a different Availability Zone.
n Management subnet: This subnet is used for management traffic between on-prem NSX-T Data
Center and PCG. The recommended range is /28.
n Uplink subnet: This subnet is used for north-south internet traffic. The recommended range
is /24.
n Downlink subnet: This subnet encompasses the workload VM's IP address range, and should
be sized accordingly. Bear in mind that you may need to incorporate additional interfaces on the
workload VMs for debugging purposes.
2 Ensure you have an Internet gateway (IGW) that is attached to this VPC.
3 Ensure the routing table for the VPC has the Destination set to 0.0.0.0/0 and the Target is the
IGW attached to the VPC.
4 Ensure you have DNS resolution and DNS hostnames enabled for this VPC.
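An illustrative AWS CLI sketch of checking items 2 through 4; the VPC, route table, and IGW IDs are placeholders:
# Confirm an Internet gateway is attached to the VPC.
aws ec2 describe-internet-gateways --filters Name=attachment.vpc-id,Values=vpc-0123456789abcdef0
# Add the default route (0.0.0.0/0) pointing at the attached IGW, if it is not already present.
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0
# Verify DNS resolution and DNS hostnames are enabled for the VPC.
aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsHostnames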
The VPC in which you deploy a PCG can act as a Transit VPC to which other VPCs can connect (known
as Compute VPCs). This VPC can also manage VMs and act as a self-managed VPC.
Follow these instructions to deploy a PCG. If you want to link to an existing Transit VPC, see Link to a
Transit VPC or VNet .
Prerequisites
n The VPC on which you are deploying PCG must have the required subnets appropriately adjusted for
High Availability: uplink, downlink, and management.
n The configuration for your VPC's network ACL must include an ALLOW inbound rule.
Procedure
2 Click Clouds > AWS > <AWS_account_name> and go to the VPCs tab.
3 In the VPCs tab, select an AWS region name, for example, us-west. The AWS region must be the
same where you created the compute VPC.
Option Description
PEM File Select one of your PEM files from the drop-down menu. This file must be in the
same region where NSX Cloud was deployed and where you created your
compute VPC.
This uniquely identifies your AWS account.
Quarantine Policy on the Associated You can only change the Quarantine Policy setting if you choose to manage
VPC workload VMs using NSX Tools (NSX Enforced Mode). Quarantine Policy is always
enabled in the Native Cloud Enforced Mode
Leave this in the default disabled mode when you first deploy PCG. You can
change this value after onboarding VMs. See Manage Quarantine Policy in the
NSX-T Data Center Administration Guide for details.
Manage with NSX Tools Leave in the default disabled state to onboard workload VMs in the Native Cloud
Enforced Mode. If you want to install NSX Tools on your workload VMs to use the
NSX Enforced Mode, enable this option.
Option Description
Proxy Server Select a proxy server to use for internet-bound traffic from this PCG. The proxy servers are configured in CSM. You can select the same proxy server as CSM, if one is configured, select a different proxy server from CSM, or select No Proxy Server.
See (Optional) Configure Proxy Servers for details on how to configure proxy servers in CSM.
Override AMI ID Use this advanced feature to provide a different AMI ID for the PCG from the one
that is available in your AWS account.
Obtain via Public Cloud Provider's Select this option if you want to use AWS settings. This is the default DNS setting if
DHCP you do not pick either of the options to override it.
Override Public Cloud Provider's DNS Select this option if you want to manually provide the IP address of one or more
Server DNS servers to resolve NSX-T Data Center appliances as well as the workload
VMs in this VPC.
Use Public Cloud Provider's DNS Select this option if you want to use the AWS DNS server for resolving the NSX-T
server only for NSX-T Data Center Data Center management components. With this setting, you can use two DNS
Appliances servers: one for PCG that resolves NSX-T Data Center appliances; the other for
the VPC that resolves your workload VMs in this VPC.
7 Click Next.
Option Description
Enable HA for Public Cloud Gateway The recommended setting is Enable, which sets up a High Availability Active/Standby pair to avoid unscheduled downtime.
Primary gateway settings Select an Availability Zone such as us-west-1a, from the drop-down menu as the
primary gateway for HA.
Assign the uplink, downlink, and management subnets from the drop-down menu.
Secondary gateway settings Select another Availability Zone such as us-west-1b, from the drop-down menu
as the secondary gateway for HA.
The secondary gateway is used when the primary gateway fails.
Assign the uplink, downlink, and management subnets from the drop-down menu.
Public IP on Mgmt NIC Select Allocate New IP address to provide a public IP address to the
management NIC. You can manually provide the public IP address if you want to
reuse a free public IP address.
Public IP on Uplink NIC Select Allocate New IP address to provide a public IP address to the uplink NIC.
You can manually provide the public IP address if you want to reuse a free public
IP address.
8 Click Deploy.
9 Monitor the status of the primary (and secondary, if you selected it) PCG deployment. This process
can take 10-12 minutes.
What to do next
Follow instructions at Using NSX Cloud in the NSX-T Data Center Administration Guide.
Prerequisites
n Verify that the VPC/VNet you want to link is connected to the Transit VPC or VNet through VPN or
peering.
n Verify that the Compute VPC/VNet is in the same region as the Transit VPC/VNet.
Note In route-based IPSec VPN configuration, you must specify the IP address for the virtual tunnel
interface (VTI) port. This IP must be in a different subnet than workload VMs. This prevents workload VM
inbound traffic from being directed to the VTI port, from which it will be dropped.
Note In the public cloud, a default limit exists for the number of inbound/outbound rules per security
group and NSX Cloud creates default security groups. This affects how many Compute VPCs/VNets can
be linked to a Transit VPC/VNet. Assuming 1 CIDR block per VPC/VNet, NSX Cloud supports 10
Compute VPCs/VNets per Transit VPC/VNet. If you have more than 1 CIDR in any Compute VPC/VNet,
the number of supported Compute VPCs/VNets per Transit VPC/VNet reduces. You can adjust the default
limits by reaching out to your public cloud provider.
Procedure
2 Click Clouds > AWS / Azure > <public cloud_account_name> and go to the VPCs / VNets tab.
3 In the VPCs or VNets tab, select a region name where you are hosting one or more compute VPCs
or VNets.
Option Description
Transit VPC or VNet Select a Transit VPC or VNet from the dropdown menu. The Transit VPC or VNet
you select must be already linked with this VPC by way of VPN or peering.
Note If connecting to a Transit VNet, you must have a DNS forwarder configured
in that VNet and the tag nsx.dnsserver=<IP address of the DNS forwarder>
applied to the Transit VNet. See Microsoft Azure documentation for information on
setting up the DNS forwarder.
Default Quarantine Policy Leave this in the default disabled mode when you first deploy PCG. You can
change this value after onboarding VMs. See Manage Quarantine Policy in the
NSX-T Data Center Administration Guide for details.
Manage with NSX Tools Leave in the default disabled state to onboard workload VMs in the Native Cloud
Enforced Mode. If you want to install NSX Tools on your workload VMs to use the
NSX Enforced Mode, enable this option.
Auto-install NSX Tools This is only available when you choose to manage with NSX Tools and only for
Microsoft Azure VNets. If selected, NSX Tools are auto-installed on all workload
VMs in the Transit/Self-managed/linked Compute VNet if the tag
nsx.network=default is applied to them.
What to do next
Follow instructions at Using NSX Cloud in the NSX-T Data Center Administration Guide.
Important Do not delete any of these auto-created entities except if you are manually undeploying PCG.
See Troubleshooting PCG Undeployment for details.
System Entities
You can see the following entities under the System tab:
Edge Transport Nodes: One Edge Transport Node is created for each deployed PCG, two if deployed in high availability mode. Names: PublicCloudGatewayTN-<VPC/VNET-ID> and PublicCloudGatewayTN-<VPC/VNET-ID>-preferred. Scope: Global.
Inventory Entities
The following entities are available under the Inventory tab:
n One Group created at the Transit VPC/VNet level as a parent Group for individual segments created at the Compute VPC/VNet level: cloud-<Transit VPC/VNet ID>-all-segments. Scope: shared across all Compute VPCs/VNets.
n Two Groups for each Compute VPC/VNet. Scope: shared across all Compute VPCs/VNets.
n Network CIDR Group for all CIDRs of the Compute VPC/VNet: cloud-<Compute VPC/VNet ID>-cidr
n Local Segment Group for all managed segments within the Compute VPC/VNet: cloud-<Compute VPC/VNet ID>-local-segments
The following Groups are created for the currently supported public cloud services. Scope: shared across all PCGs.
n aws-dynamo-db-service-endpoint
n aws-elb-service-endpoint
n aws-rds-service-endpoint
n aws-s3-service-endpoint
n azure-cosmos-db-service-endpoint
n azure-load-balancer-service-endpoint
n azure-sql-service-endpoint
n azure-storage-service-endpoint
Note For PCGs deployed or linked to in the Native Cloud Enforced Mode, all the workload VMs in the
VPC/VNet become available under Virtual Machines in NSX Manager.
Security Entities
The following entities are available under the Security tab:
Distributed Firewall (East-West): Two per Transit VPC/VNet, one stateless and one stateful, named cloud-stateless-<VPC/VNet ID> and cloud-stateful-<VPC/VNet ID>. They include a stateful rule to allow traffic within local managed segments and a stateful rule to reject traffic from unmanaged VMs.
Gateway Firewall (North-South): One per Transit VPC/VNet, named cloud-<Transit VPC/VNet ID>.
Networking Entities
The following entities are created at different stages of onboarding and can be found under the
Networking tab:
Figure 12-3. Auto-created NSX-T Data Center Networking Entities After PCG is Deployed
n When a workload VM with the NSX agent installed on it is tagged with the "nsx.network:default" key:value in a subnet of a compute or self-managed VPC/VNet: a Segment is created for this specific subnet of the compute or self-managed VPC or VNet, and Hybrid ports are created for each tagged workload VM that has the NSX agent installed on it.
n When more workload VMs are tagged in the same subnet of the Compute or self-managed VPC/VNet: Hybrid ports are created for each tagged workload VM that has the NSX agent installed on it.
Forwarding Policies
The following three forwarding rules are set up for a Compute VPC/VNet, including Self-managed Transit
VPC/VNet:
n Access any CIDR of the same Compute VPC over the public cloud's network (underlay)
n Route traffic pertaining to public cloud metadata services over the public cloud's network (underlay)
n Route everything not in the Compute VPC/VNet's CIDR block, or a known service, through the NSX-T
Data Center network (overlay)
Some auto configurations are common to all public clouds and both NSX management modes. Other
configurations are specific to either the public cloud or the NSX management mode.
Specific to AWS
The following are specific to AWS:
n In the AWS VPC, a new Type A Record Set gets added with the name nsx-gw.vmware.local into a
private hosted zone in Amazon Route 53. The IP address mapped to this record matches the
Management IP address of the PCG which is assigned by AWS using DHCP and will differ for each
VPC. This DNS entry in the private hosted zone in Amazon Route 53 is used by NSX Cloud to
resolve the PCG's IP address.
Note When you use custom DNS domain names defined in a private hosted zone in Amazon Route
53, the DNS Resolution and DNS Hostnames attributes must be set to Yes for your VPC settings in
AWS.
n A secondary IP for the uplink interface for PCG is created. An AWS Elastic IP is associated with this
secondary IP address. This configuration is for SNAT.
Specific to Microsoft Azure
The following are specific to Microsoft Azure:
n A common Resource Group is created per region, per subscription. It is named like: nsx-default-<region-name>-rg, for example: nsx-default-westus-rg. All VNets in this region share this Resource Group. This Resource Group and all the NSX-created security groups named like default-<vnet-ID>-sg are not deleted from the Microsoft Azure region after you off-board a VNet in this region from NSX Cloud.
n The gw security groups are applied to the respective PCG interfaces in VPCs or VNets.
Table 12-8. Public Cloud Security Groups created by NSX Cloud for PCG interfaces
Security Group name Description
After workload VMs are matched with groups and corresponding security policies in NSX Manager,
security groups named like nsx-<GUID> are created in the public cloud for each matching security policy.
Note In AWS, Security Groups are created. In Microsoft Azure, Application Security Groups are created
corresponding to Groups in NSX Manager and Network Security Groups are created corresponding to
Security Policies in NSX Manager.
Table 12-9. Public Cloud Security Groups created by NSX Cloud for Workload VMs in the NSX Enforced Mode
Security Group name | Available in Microsoft Azure? | Available in AWS? | Description
See instructions and further details at Onboarding VMs in the NSX Enforced Mode in the NSX-T Data
Center Administration Guide.
Follow the steps relevant to the NSX management mode your PCG is deployed in.
Procedure
Go to the VPC or VNet in your public cloud and remove the nsx.network=default tag from the
managed VMs.
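In AWS, the tag can also be removed from the CLI; an illustrative sketch with a placeholder instance ID (in Microsoft Azure, remove the tag from the portal):
# Take the workload VM out of NSX management by removing the nsx.network=default tag.
aws ec2 delete-tags --resources i-0123456789abcdef0 --tags Key=nsx.network,Value=default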
With Quarantine Policy enabled, your VMs are assigned security groups in your public cloud that are
defined by NSX Cloud.
When you undeploy PCG, you must disable Quarantine Policy. Follow these steps:
2 From Actions > Edit Configurations, turn off the setting for Default Quarantine.
3 All VMs that are unmanaged or quarantined in this VPC or VNet will be assigned to the default
security group in AWS and the default-vnet-<vnet-id>-sg security group in Microsoft Azure.
4 If there are managed VMs while disabling Quarantine Policy, they retain their NSX Cloud-assigned
security groups. The first time you remove the nsx.network=default tag from such VMs to take
them out from NSX management, they are also assigned to the default security group in AWS and
the default-vnet-<vnet-id>-sg security group in Microsoft Azure.
Note The common Resource Group created in Microsoft Azure, that is named like: nsx-default-
<region-name>-rg, for example: nsx-default-westus-rg, is not removed when you undeploy
PCG. This Resource Group and all the NSX-created security groups named like default-<vnet-
ID>-sg are not deleted from the Microsoft Azure region. You can remove the NSX Cloud-specific
security group any time after the VNet is off-boarded.
See Auto-Configurations after PCG Deployment or Linking for a list of NSX Cloud security groups.
Identify entities which are associated with the PCG and delete them.
Note Do not delete the auto-created logical entities. These are deleted automatically after you click
Undeploy or Unlink from Transit VPC/VNet from CSM. See Auto-Configurations after PCG Deployment
or Linking for details.
n If using AWS, go to Clouds > AWS > VPCs. Click on the VPC on which one or a pair of PCGs is
deployed and running.
n If using Microsoft Azure, go to Clouds > Azure > VNets. Click on the VNet on which one or a pair
of PCGs is deployed and running.
The default entities created by NSX Cloud are removed automatically when the PCG is undeployed or
unlinked.
n Move all your workload VMs to a security group not created by NSX Cloud.
n Delete the NSX Cloud-created security groups in the public cloud as listed here: Auto-created
Public Cloud Configurations .
n For Microsoft Azure, also delete the NSX Cloud-created Resource Group named like
nsx-gw-<vnet ID>-rg.
n Delete the auto-created entities with the VPC/VNet ID in NSX Manager as listed here: Auto-created
NSX-T Logical Entities.
Note Do not delete the global entities that are auto-created. Only delete the ones that have the
VPC/VNet ID in their name.
To get started with Federation, you install a Global Manager appliance, configure the appliance as active,
and add locations.
Task Details
Make the Global Manager cluster active. See Make the Global Manager Active.
For further configuration tasks, such as preparing Edge clusters for stretched networking, and creating
objects from the Global Manager, see Federation in the NSX-T Data Center Administration Guide.
Procedure
1 Federation Requirements
To support Federation, your environment must meet the following requirements.
Federation Requirements
To support Federation, your environment must meet the following requirements.
n There must be a latency of 150 ms or less between locations within the Federation environment.
n The Global Manager and all Local Managers must have NSX-T Data Center 3.0 installed.
n The required ports must be open to allow communication between the Global Manager and Local
Managers. See VMware Ports and Protocols at https://ports.vmware.com/home/NSX-T-Data-Center.
n Global Manager supports only Policy Mode. Federation does not support Manager Mode. See
Overview of the NSX Manager for more information.
n For most configurations, the Local Manager cluster has the same configuration maximums as an NSX
Manager cluster. Go to VMware Configuration Maximums tool and select NSX-T Data Center.
Select the Federation category for NSX-T Data Center in the VMware Configuration Maximums tool
for exceptions and other Federation-specific values.
n For a given location, the following configurations contribute to the configuration maximum:
n Objects that were created on the Global Manager and include the location in their span.
You can view the capacity and usage on each Local Manager. See View the Usage and Capacity of
Categories of Objects in the NSX-T Data Center Administration Guide.
Installing a Global Manager appliance is similar to installing an NSX Manager appliance. The only
difference is that when you deploy the appliance, you select NSX Global Manager for the Rolename.
Important Do not configure the Global Manager node to publish its FQDN. You can restore a Global
Manager only from an IP address and not from an FQDN.
Prerequisites
n Verify that your environment meets the requirements for NSX Manager installation. See NSX
Manager VM and Host Transport Node System Requirements.
n Verify that you are installing the Global Manager appliance with NSX-T Data Center 3.0.
Procedure
n If you are installing Global Manager on vSphere, follow these instructions: Install NSX Manager
and Available Appliances.
n Select Small for the deployment configuration size. Select NSX Global Manager for the
Rolename.
n If you are installing Global Manager on KVM, follow these instructions: Install NSX Manager on
KVM.
n Select Small for the deployment configuration size. Select NSX Global Manager for the
Rolename.
2 Disable snapshots for the Global Manager appliance. See Disable Snapshots on NSX-T Data Center
Appliances.
Procedure
2 Select System > Location Manager. In the Global Manager tile, click Make Active. Provide a
descriptive name for the active Global Manager and click Save.
Add a Location
After you add a location to Global Manager, you can create objects from Global Manager that span that
location.
After you add a location to the Global Manager, the NSX Manager is called a Local Manager.
Prerequisites
n Verify that you have an NSX-T Data Center environment installed in the location you want to add.
You can add a new NSX-T Data Center environment or an NSX-T Data Center environment with an
existing configuration.
n The NSX-T Data Center environment in the new location must have three NSX Manager nodes
deployed and a cluster VIP or an external load balancer configured. See Configure a Virtual IP (VIP)
Address for a Cluster.
For a proof-of-concept environment, you can add a location that has only one NSX Manager node,
but you must still configure a cluster VIP or an external load balancer.
n Verify that the latency between the Global Manager and the location is 150 ms or less.
n Verify that the environment you are adding has NSX-T Data Center 3.0 installed.
Procedure
2 Select System > Location Manager and click Add On-Prem Location.
3 In the Add New Location dialog box, enter the Location details.
Option Description
FQDN/IP Enter the FQDN or IP address of the NSX Manager cluster VIP or external load
balancer. Do not enter an individual NSX Manager FQDN or IP.
Username and Password Provide the admin user's credentials for the NSX Manager at the location.
Option Description
SHA-256 Thumbprint There are two ways you can get the cluster SHA-256 thumbprint:
n If the NSX Manager cluster has a cluster VIP configured, you can log into any NSX Manager node in the cluster and retrieve the cluster certificate thumbprint from the NSX CLI. The output is a hash value such as: bfae1a0a...
n If the NSX Manager cluster has an external load balancer configured, you can get the thumbprint using the openssl command. Log into a system that has OpenSSL installed, such as a Linux server, and query the cluster certificate. The output looks like: SHA256 Fingerprint=BF:AE:1A:0A...
For cluster-fqdn-or-ip, use the same FQDN or IP that you provided earlier in the FQDN/IP text box.
The thumbprint is accepted with or without colons. Omit the text before the thumbprint: SHA256 Fingerprint=.
(See the command sketch after this table.)
Check Compatibility Click Check Compatibility to ensure that the location can be added. This checks
that the NSX-T Data Center version is compatible.
4 Click Save.
The location is added to the Global Manager. Information about all locations is displayed on the
System > Location Manager page.
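When the location is fronted by an external load balancer, one way to obtain the SHA-256 thumbprint referenced above is with OpenSSL. This is a hedged example rather than the exact command from this guide; lm-vip.example.com stands in for the FQDN or IP you entered in the FQDN/IP field.

# Print the SHA-256 fingerprint of the certificate presented on port 443.
# Paste the value after "SHA256 Fingerprint=" into the SHA-256 Thumbprint field (colons optional).
echo | openssl s_client -connect lm-vip.example.com:443 2>/dev/null \
  | openssl x509 -noout -fingerprint -sha256

If the location uses a cluster VIP instead, the NSX CLI on a Local Manager node can return the cluster certificate thumbprint directly; the exact command name varies by release, so check the NSX CLI reference for your version.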
The steps to uninstall NSX-T Data Center from a host transport node vary depending on the host type
and how it is configured.
The uninstall mapping determines where the interfaces are connected after the uninstall. There are
uninstall mappings for physical interfaces (vmnicX) and VMkernel interfaces (vmkX). When you uninstall,
VMkernel interfaces are moved from their current connections to the port groups specified in the uninstall
mapping. If a physical interface is included in the uninstall mapping, the physical interface is connected to
the appropriate vSphere Distributed Switch or vSphere Standard Switch based on the destination port
group of the VMkernel interfaces.
Caution Uninstalling NSX-T Data Center from an ESXi host is disruptive if the physical interfaces or
VMkernel interfaces are connected to N-VDS. If the host or cluster is participating in other applications
such as vSAN, those applications might be affected by the uninstall.
There are two places where you can configure network mappings for uninstall: in the transport node configuration applied to an individual host, and in the transport node profile applied to a cluster.
Note You must have a compute manager configured to apply a transport node profile to a cluster.
If a compute manager is configured, a host can have both a transport node configuration and a transport
node profile configuration. The most recently applied configuration is active. Verify that the network
mappings for uninstall are correctly configured on the active configuration.
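One way to see which uninstall mappings are present in a host's active configuration is to read the transport node object from the Manager API and search the JSON for uninstall migration entries. This is a hedged sketch: /api/v1/transport-nodes is the standard Manager API path, but the credentials, node UUID, and the exact field names in the output are placeholders or assumptions to verify in your environment.

# Find the host's transport node UUID, then inspect its configuration for uninstall mappings.
curl -k -u 'admin:<password>' https://nsx-mgr.example.com/api/v1/transport-nodes
curl -k -u 'admin:<password>' \
  https://nsx-mgr.example.com/api/v1/transport-nodes/<node-uuid> \
  | python3 -m json.tool | grep -i -B 2 -A 6 uninstall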
A transport node configuration on a node cannot be overridden if segments or VMs are connected to that transport node. For example, consider a two-host ESXi cluster where host-1 is configured as transport-node-1, host-2 is unprepared, and segments and VMs are connected to transport-node-1. After host-1 is prepared as a transport node (associated with transport-zone-1), if you apply a transport node profile to the cluster (associated with transport-zone-2), NSX-T does not override the transport node configuration with the transport node profile configuration. To successfully override the configuration on host-1, power off the VMs and disconnect the segment before applying the transport node profile, so that host-1 is associated with transport-zone-2 and disassociated from transport-zone-1.
For example, suppose the cluster cluster-1 has transport node profile TNP-1 applied to it and the host tn-1 displays Configuration Mismatch. This mismatch message indicates that a different configuration was applied to tn-1 after the transport node profile was applied. Transport node tn-2 uses the network mappings from the transport node profile, and transport node tn-1 uses its own configuration.
Prerequisites
n Verify that you have appropriate port groups configured to use in the uninstall mapping. You must use
vSphere Distributed Switch ephemeral port groups or vSphere Standard Switch port groups.
n Configure a compute manager if you want to use a vSphere Distributed Switch port group in the
uninstall mappings for a standalone ESXi host. See Add a Compute Manager. If there is no compute
manager configured, you must use a vSphere Standard Switch port group.
Procedure
2 Select System > Fabric > Nodes > Host Transport Nodes.
3 For each host configured with an N-VDS switch that you want to uninstall, verify that the network mapping for uninstall includes a port group for each VMkernel interface that is on the N-VDS. Add any missing mappings.
Important
n The port group in the network mapping for uninstall must be a vSphere Distributed Switch
ephemeral port group or a vSphere Standard Switch port group.
a To view VMkernel interfaces, log in to vCenter Server, select the host, and click Configure > VMkernel Adapters. You can also check switch attachment from the host's ESXi shell; see the sketch after this procedure.
b If the transport node configuration is the active configuration, select the host and click Edit (for
standalone hosts) or Configure NSX (for managed hosts). Click Next, then click Network
Mappings for Uninstall. View the mappings in the VMKNic Mappings and Physical NIC
Mappings tabs.
c If the transport node profile is the active configuration, click the name of the transport node profile
for the cluster in the NSX Configuration column and click Edit. On the Host Switch tab, click
Network Mappings for Uninstall. View the mappings in the VMKNic Mappings and Physical
NIC Mappings tabs.
The NSX Manager UI does not allow you to configure network mappings for install or uninstall when the host switch is a vSphere Distributed Switch. In that case, use vCenter Server to migrate any VMkernel adapters that are connected to NSX port groups back to a distributed virtual port group or a vSphere Standard Switch port group before uninstalling. Uninstallation fails if any VMkernel adapters remain attached to NSX port groups on the VDS.
For more information on transport node profiles, see Add a Transport Node Profile.
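Before and after the uninstall, you can confirm from the host's ESXi shell which switch each VMkernel interface and physical uplink is attached to. A short sketch using standard esxcli commands:

# List VMkernel interfaces and the portset (switch) each one is attached to.
esxcli network ip interface list
# List standard switches and distributed switches with their uplinks and port groups.
esxcli network vswitch standard list
esxcli network vswitch dvs vmware list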
Caution Uninstalling NSX-T Data Center from an ESXi host is disruptive if the physical interfaces or
VMkernel interfaces are connected to N-VDS. If the host or cluster is participating in other applications
such as vSAN, those applications might be affected by the uninstall.
If you have not used a transport node profile to install NSX-T Data Center, or if you want to remove NSX-
T Data Center from a subset of the hosts in the cluster, see Uninstall NSX-T Data Center from a Host in a
vSphere Cluster.
Note Removing a host from a cluster does not uninstall NSX-T Data Center. Follow these instructions to
uninstall NSX-T Data Center from a host in a cluster: Uninstall NSX-T Data Center from a Host in a
vSphere Cluster.
Prerequisites
n Verify that the hosts you want to uninstall have network uninstall mappings configured. See Verify
Host Network Mappings for Uninstall.
n Verify that the hosts you want to uninstall are in maintenance mode in vSphere.
Procedure
2 Select System > Fabric > Nodes > Host Transport Nodes.
4 Select the cluster you want to uninstall, and click Remove NSX.
5 Verify that the NSX-T Data Center software is removed from the host.
If the NSX-T Data Center software is successfully removed, no VIBs are listed. If any NSX VIBs
remain on the host, contact VMware Support.
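One way to check from the host itself (a hedged example; VIB names vary by release):

# After Remove NSX completes, look for leftover NSX VIBs on the ESXi host.
# An empty result means the software was removed.
esxcli software vib list | grep -i nsx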
Caution Uninstalling NSX-T Data Center from an ESXi host is disruptive if the physical interfaces or
VMkernel interfaces are connected to N-VDS. If the host or cluster is participating in other applications
such as vSAN, those applications might be affected by the uninstall.
Prerequisites
n Verify that the hosts you want to uninstall have network uninstall mappings configured. See Verify
Host Network Mappings for Uninstall.
n Verify that the hosts you want to uninstall are in maintenance mode in vSphere.
Procedure
2 Select System > Fabric > Nodes > Host Transport Nodes.
4 If the cluster has a transport node profile applied, select the cluster, and click Actions > Detach TN
Profile.
If the cluster has a transport node profile applied, the NSX Configuration column for the cluster
displays the profile name.
6 Verify that the NSX-T Data Center software is removed from the host.
If the NSX-T Data Center software is successfully removed, no VIBs are listed. If any NSX VIBs
remain on the host, contact VMware Support.
7 If the cluster had a Transport Node Profile applied, and you want to reapply it, select the cluster, click
Configure NSX, and select the profile from the Select Deployment Profile drop-down menu.
Caution Uninstalling NSX-T Data Center from an ESXi host is disruptive if the physical interfaces or
VMkernel interfaces are connected to N-VDS. If the host or cluster is participating in other applications
such as vSAN, those applications might be affected by the uninstall.
Prerequisites
If you are uninstalling NSX-T Data Center from a standalone ESXi host, verify the following settings:
n Verify that the hosts you want to uninstall have network uninstall mappings configured. See Verify
Host Network Mappings for Uninstall.
n Verify that the hosts you want to uninstall are in maintenance mode in vSphere.
Procedure
2 Select System > Fabric > Nodes > Host Transport Nodes.
4 Select the host and click Delete. In the confirmation dialog box that is displayed, make sure Uninstall
NSX Components is selected, and Force Delete is deselected. Click Delete.
The NSX-T Data Center software is removed from the host. It might take up to 5 minutes for all NSX-
T Data Center software to be removed.
5 If the uninstall fails, select the host and click Delete again. In the confirmation dialog box, deselect
Uninstall NSX Components and select Force Delete.
The host transport node is deleted from the management plane, but the host might still have NSX-T
Data Center software installed.
6 Verify that the NSX-T Data Center software is removed from the host.
b Run the appropriate command for the host operating system to check for NSX-T Data Center software packages. For example, on Red Hat Enterprise Linux and CentOS Linux:
rpm -qa | grep -E 'nsx|vsipfwlib'
If the NSX-T Data Center software is successfully removed, no packages are listed. If any NSX software packages remain on the host, contact VMware Support.
The following scenarios summarize what happens when you move or delete prepared hosts in vCenter Server.

Scenario: In vCenter Server, move an ESXi host in cluster-1 (prepared by applying a transport node profile) to the data center as a standalone host (not to another cluster).
Steps:
1 From the vSphere Web Client, log in to the vCenter Server.
2 Put the host in maintenance mode.
3 Move the host from cluster-1, which is prepared with a transport node profile, out of the cluster as a standalone managed host. NSX-T Data Center triggers uninstallation of the configuration and the NSX-T VIBs.
4 During uninstallation, the transport node is deleted and the NSX-T Data Center VIBs are uninstalled.
5 In the NSX-T Data Center UI, the uninstalled host is displayed under Other Hosts on the same vCenter Server.
Result: The host becomes a standalone managed host, which is displayed under Other Hosts, and NSX-T Data Center is uninstalled from the host. If the host is in the Configuration Mismatch state, it remains in that state after it is moved.

Scenario: In vCenter Server, move a prepared host from cluster-1 with transport node profile-1 to cluster-2 with transport node profile-2.
Steps:
1 From the vSphere Web Client, log in to the vCenter Server.
2 Put the host in maintenance mode.
3 Because cluster-1 is prepared with transport node profile-1, when a host from cluster-1 is moved to cluster-2, transport node profile-2 is applied to the host. Only the new transport node profile-2 configuration is applied; the NSX-T Data Center VIBs are not uninstalled from the host.
4 If the NSX-T Data Center host is in a failed configuration state, it is not configured after it moves to cluster-2. The host remains in the failed state.
Result: The host is moved from cluster-1 to cluster-2. A successfully configured host is applied with transport node profile-2. If the host is in the failed state, ensure that the host is successfully configured in NSX-T Data Center.

Scenario: In vCenter Server, move a host that is in the Configuration Mismatch state (an NSX-T Data Center state) from cluster-1 with transport node profile-1 to cluster-2 with transport node profile-2.
Steps:
1 From the vSphere Web Client, log in to the vCenter Server.
2 Put the host in maintenance mode.
3 Because the host is in the Configuration Mismatch state, transport node profile-2 is not applied to it even though it is moved to cluster-2.
Result: The host remains in the Configuration Mismatch state.

Scenario: In vCenter Server, move a host from cluster-1 with transport node profile-1 to cluster-3, which has no transport node profile applied.
Steps:
1 From the vSphere Web Client, log in to the vCenter Server.
2 Put the host in maintenance mode.
3 Move the host from cluster-1 to cluster-3.
Result: If the NSX-T Data Center host is successfully configured, NSX-T Data Center uninstallation begins on the host. If the host is in the failed configuration state, the node remains in the failed state after it moves to cluster-3.

Scenario: In vCenter Server, delete a host that is in the Configuration Mismatch state (an NSX-T Data Center state) because the host has two different configurations applied to it: a transport node configuration and a transport node profile configuration.
Steps:
1 From the vSphere Web Client, log in to the vCenter Server.
2 Put the host in maintenance mode.
3 Remove the host from the vCenter Server inventory.
Result: Uninstallation of NSX-T Data Center does not begin because the node configuration was in the Configuration Mismatch state. To ensure that uninstallation begins, make sure the transport node is configured with a single configuration, either at the host level or at the cluster level. After uninstallation, go to the NSX Manager UI and verify that the managed host is moved out of the cluster and becomes a standalone unmanaged host.

Scenario: In vCenter Server, move a prepared host from cluster-1 with transport node profile-1 applied to one of the following: another cluster without any transport node profile applied, the data center, or outside of the data center.
Steps:
1 From the vSphere Web Client, log in to the vCenter Server.
2 Put the host in maintenance mode.
3 Perform one of these actions:
n Move the host to another cluster without any transport node profile applied.
n Move the host as a standalone host in the data center.
n Move the host outside of the data center.
Result: NSX-T is uninstalled from the host.
Problem
On the ESXi host, the esxupdate.log file might contain a message indicating that there is not enough space in the bootbank or alt-bootbank to install the required VIBs.
Cause
Unused VIBs on the ESXi host can be relatively large in size. These unused VIBs can result in insufficient
space in the bootbank or in the alt-bootbank when installing the required VIBs.
Solution
n Uninstall the VIBs that are no longer required and free up additional disk space.
For more information on deleting the unused VIBs, see the VMware knowledge base article at https://kb.vmware.com/s/article/74864.
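As a rough illustration of the workaround, and only after reviewing the knowledge base article, unused VIBs can be listed and removed with esxcli; the VIB name below is a placeholder.

# List installed VIBs, then remove one that is no longer required to free bootbank space.
esxcli software vib list
esxcli software vib remove -n <unused-vib-name>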