NSX-T Data Center Installation Guide
Modified on 23 APR 2019
VMware NSX-T Data Center 2.3
You can find the most up-to-date technical documentation on the VMware website at:
https://docs.vmware.com/
If you have comments about this documentation, submit your feedback to
[email protected]
VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com
Copyright © 2018, 2019 VMware, Inc. All rights reserved. Copyright and trademark information.
Contents
7 Host Preparation
Install Third-Party Packages on a KVM Host or Bare Metal Server
Verify Open vSwitch Version on RHEL KVM Hosts
Add a Hypervisor Host or Bare Metal Server to the NSX-T Data Center Fabric
Manual Installation of NSX-T Data Center Kernel Modules
Join the Hypervisor Hosts with the Management Plane
The NSX-T Data Center Installation Guide describes how to install the VMware NSX-T™ Data Center
product. The information includes step-by-step configuration instructions, and suggested best practices.
Intended Audience
This information is intended for anyone who wants to install or use NSX-T Data Center. This information
is written for experienced system administrators who are familiar with virtual machine technology and
network virtualization concepts.
Overview of NSX-T Data Center 1
In much the same way that server virtualization programmatically creates, snapshots, deletes and
restores software-based virtual machines (VMs), NSX-T Data Center network virtualization
programmatically creates, deletes, and restores software-based virtual networks.
With network virtualization, the functional equivalent of a network hypervisor reproduces the complete set
of Layer 2 through Layer 7 networking services (for example, switching, routing, access control,
firewalling, QoS) in software. As a result, these services can be programmatically assembled in any
arbitrary combination, to produce unique, isolated virtual networks in a matter of seconds.
NSX-T Data Center works by implementing three separate but integrated planes: management, control,
and data. The three planes are implemented as a set of processes, modules, and agents residing on
three types of nodes: manager, controller, and transport nodes.
n Every node hosts a management plane agent.
n The NSX Manager node hosts API services. Each NSX-T Data Center installation supports a single
NSX Manager node.
n NSX Controller nodes host the central control plane cluster daemons.
n NSX Manager and NSX Controller nodes may be co-hosted on the same physical server.
n Transport nodes host local control plane daemons and forwarding engines.
[Figure: NSX-T Data Center planes. A management plane bus connects the MP agents on all nodes; the centralized control plane runs as a CCP cluster; the localized control plane runs LCP daemons on transport nodes; the data plane provides the forwarding engine.]
This chapter includes the following topics:

n Management Plane
n Control Plane
n Data Plane
n Logical Switches
n Logical Routers
n Key Concepts
Management Plane
The management plane provides a single API entry point to the system, persists user configuration,
handles user queries, and performs operational tasks on all management, control, and data plane nodes
in the system.
For NSX-T Data Center anything dealing with querying, modifying, and persisting user configuration is a
management plane responsibility, while dissemination of that configuration down to the correct subset of
data plane elements is a control plane responsibility. This means that some data belongs to multiple
planes depending on what stage of its existence it is in. The management plane also handles querying
recent status and statistics from the control plane, and sometimes directly from the data plane.
The management plane is the one and only source-of-truth for the configured (logical) system, as
managed by the user via configuration. Changes are made using either a RESTful API or the
NSX-T Data Center UI.
In NSX there is also a management plane agent (MPA) running on all controller cluster and transport
nodes. The MPA is both locally accessible and remotely accessible. On transport nodes it may perform
data plane related tasks as well.
n Input validation
n Policy management
NSX Manager
NSX Manager is a virtual appliance that provides the graphical user interface (GUI) and the REST APIs
for creating, configuring, and monitoring NSX-T Data Center components, such as logical switches, and
NSX Edge services gateways.
NSX Manager is the management plane for the NSX-T Data Center eco-system. NSX Manager provides
an aggregated system view and is the centralized network management component of
NSX-T Data Center. It provides configuration and orchestration of logical networking components (logical switching and routing), networking and Edge services, and security services and distributed firewall.
NSX Manager provides a method for monitoring and troubleshooting workloads attached to virtual
networks created by NSX-T Data Center. It allows seamless orchestration of both built-in and external
services. All security services, whether built-in or 3rd party, are deployed and configured by the
NSX-T Data Center management plane. The management plane provides a single window for viewing
services availability. It also facilitates policy based service chaining, context sharing, and inter-service
events handling. This simplifies the auditing of the security posture, streamlining application of identity-
based controls (for example, AD and mobility profiles).
NSX Manager also provides REST API entry-points to automate consumption. This flexible architecture
allows for automation of all configuration and monitoring aspects via any cloud management platform,
security vendor platform, or automation framework.
The NSX-T Data Center Management Plane Agent (MPA) is an NSX Manager component that lives on
each and every node (hypervisor). The MPA is in charge of persisting the desired state of the system and
for communicating non-flow-controlling (NFC) messages such as configuration, statistics, status and real
time data between transport nodes and the management plane.
NSX Policy Manager provides a graphical user interface (GUI) and REST APIs to specify the intent
related to networking, security, and availability.
NSX Policy Manager accepts the intent from the user in the form of a tree-based data model and
configures the NSX Manager to realize that intent. The NSX Policy Manager supports communication
intent specification that configures a distributed firewall on the NSX Manager.
CSM is a virtual appliance that provides the graphical user interface (GUI) and the REST APIs for
onboarding, configuring, and monitoring your public cloud inventory.
Control Plane
The control plane computes all ephemeral runtime state based on configuration from the management plane, disseminates topology information reported by the data plane elements, and pushes stateless configuration to forwarding engines.
The control plane is split into two parts in NSX-T Data Center, the central control plane (CCP), which runs
on the NSX Controller cluster nodes, and the local control plane (LCP), which runs on the transport
nodes, adjacent to the data plane it controls. The Central Control Plane computes some ephemeral
runtime state based on configuration from the management plane and disseminates information reported
by the data plane elements via the local control plane. The Local Control Plane monitors local link status,
computes most ephemeral runtime state based on updates from data plane and CCP, and pushes
stateless configuration to forwarding engines. The LCP shares fate with the data plane element which
hosts it.
NSX Controller
NSX Controller, also known as the Central Control Plane (CCP), is an advanced distributed state management system that controls virtual networks and overlay transport tunnels.
NSX Controller is deployed as a cluster of highly available virtual appliances that are responsible for the
programmatic deployment of virtual networks across the entire NSX-T Data Center architecture. The
NSX-T Data Center CCP is logically separated from all data plane traffic, meaning any failure in the
control plane does not affect existing data plane operations. Traffic doesn’t pass through the controller;
instead the controller is responsible for providing configuration to other NSX Controller components such
as the logical switches, logical routers, and edge configuration. Stability and reliability of data transport
are central concerns in networking. To further enhance high availability and scalability, the NSX Controller
is deployed in a cluster of three instances.
Data Plane
The data plane performs stateless forwarding and transformation of packets based on tables populated by the control plane, reports topology information to the control plane, and maintains packet-level statistics.

The data plane is the source of truth for the physical topology and status, for example, VIF location, tunnel status, and so on. If you are dealing with moving packets from one place to another, you are in the data plane. The data plane also maintains the status of and handles failover between multiple links and tunnels. Per-packet performance is paramount, with very strict latency or jitter requirements. The data plane is not necessarily fully contained in the kernel, drivers, userspace, or even specific userspace processes. The data plane is constrained to totally stateless forwarding based on tables and rules populated by the control plane.
The data plane also may have components that maintain some amount of state for features such as TCP
termination. This is different from the control plane managed state such as MAC:IP tunnel mappings,
because the state managed by the control plane is about how to forward the packets, whereas state
managed by the data plane is limited to how to manipulate payload.
NSX Edge
NSX Edge provides routing services and connectivity to networks that are external to the
NSX-T Data Center deployment.
NSX Edge is required for establishing external connectivity from the NSX-T Data Center domain, through
a Tier-0 router via BGP or static routing. Additionally, an NSX Edge must be deployed if you require
network address translation (NAT) services at either the Tier-0 or Tier-1 logical routers.
The NSX Edge gateway connects isolated, stub networks to shared (uplink) networks by providing
common gateway services such as NAT, and dynamic routing. Common deployments of NSX Edge
include in the DMZ and multi-tenant Cloud environments where the NSX Edge creates virtual boundaries
for each tenant.
Transport Zones
A transport zone is a logical construct that controls which hosts a logical switch can reach. It can span
one or more host clusters. Transport zones dictate which hosts and, therefore, which VMs can participate
in the use of a particular network.
A Transport Zone defines a collection of hosts that can communicate with each other across a physical
network infrastructure. This communication happens over one or more interfaces defined as Virtual
Tunnel Endpoints (VTEPs).
The transport nodes are the hosts running the local control plane daemons and forwarding engines that implement the NSX-T Data Center data plane. Each transport node contains an NSX-T Data Center Virtual Distributed Switch (N-VDS), which is responsible for switching packets according to the configuration of available network services.
If two transport nodes are in the same transport zone, VMs hosted on those transport nodes can "see"
and therefore be attached to NSX-T Data Center logical switches that are also in that transport zone. This
attachment makes it possible for the VMs to communicate with each other, assuming that the VMs have
Layer 2/Layer 3 reachability. If VMs are attached to switches that are in different transport zones, the VMs
cannot communicate with each other. Transport zones do not replace Layer 2/Layer 3 reachability
requirements, but they place a limit on reachability. Put another way, belonging to the same transport
zone is a prerequisite for connectivity. After that prerequisite is met, reachability is possible but not
automatic. To achieve actual reachability, Layer 2 and (for different subnets) Layer 3 networking must be
operational.
A host can serve as a transport node if it contains at least one NSX managed virtual distributed switch (N-
VDS, previously known as hostswitch). When you create a host transport node and then add the node to
a transport zone, NSX-T Data Center installs an N-VDS on the host. For each transport zone that the host
belongs to, a separate N-VDS is installed. The N-VDS is used for attaching VMs to NSX-T Data Center
logical switches and for creating NSX-T Data Center logical router uplinks and downlinks.
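As an illustration, transport zones are typically created through the NSX Manager UI or REST API. The following curl sketch assumes the /api/v1/transport-zones endpoint with the host_switch_name and transport_type fields described in the NSX-T Data Center API guide, and uses placeholder credentials and addresses:

# Create an overlay transport zone backed by an N-VDS named "nvds-1" (placeholder values)
curl -k -u admin:<password> -H "Content-Type: application/json" \
  -X POST https://<nsx-manager>/api/v1/transport-zones \
  -d '{"display_name": "tz-overlay", "host_switch_name": "nvds-1", "transport_type": "OVERLAY"}'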
Logical Switches
The logical switching capability in the NSX-T Data Center platform provides the ability to spin up isolated
logical L2 networks with the same flexibility and agility that exists for virtual machines.
A logical switch provides a representation of Layer 2 switched connectivity across many hosts with Layer
3 IP reachability between them. If you plan to restrict some logical networks to a limited set of hosts or
you have custom connectivity requirements, you may find it necessary to create additional logical
switches.
Applications and tenants require isolation from each other for security, fault isolation, and to avoid overlapping IP addressing issues. Endpoints, both virtual and physical, can connect to logical segments and establish connectivity independently of their physical location in the data center network. This is enabled through the decoupling of the network infrastructure from the logical network (that is, the underlay network from the overlay network) provided by NSX-T Data Center network virtualization.
Logical Routers
NSX-T Data Center logical routers provide North-South connectivity, thereby enabling tenants to access
public networks, and East-West connectivity between different networks within the same tenant. For East-West connectivity, logical routers are distributed across the kernel of the hosts.
With NSX-T Data Center, it is possible to create a two-tier logical router topology: the top-tier logical router is Tier-0 and the bottom-tier logical router is Tier-1. This structure gives both provider administrators and tenant administrators complete control over their services and policies. Provider administrators control and configure Tier-0 routing and services, and tenant administrators control and configure Tier-1. The north
end of Tier-0 interfaces with the physical network, and is where dynamic routing protocols can be
configured to exchange routing information with physical routers. The south end of Tier-0 connects to
multiple Tier-1 routing layer(s) and receives routing information from them. To optimize resource usage,
the Tier-0 layer does not push all the routes coming from the physical network towards Tier-1, but does
provide default information.
Southbound, the Tier-1 routing layer interfaces with the logical switches defined by the tenant
administrators, and provides one-hop routing function between them. For Tier-1 attached subnets to be
reachable from the physical network, route redistribution towards the Tier-0 layer must be enabled. However, there is no classical routing protocol (such as OSPF or BGP) running between the Tier-1 layer and the Tier-0 layer, and all the routes go through the NSX-T Data Center control plane. Note that the two-tier routing topology is not mandatory. If there is no need to separate provider and tenant, a single-tier topology can be created; in this scenario, the logical switches are connected directly to the Tier-0 layer and there is no Tier-1 layer.
A logical router consists of two optional parts: a distributed router (DR) and one or more service routers
(SR).
A DR spans hypervisors whose VMs are connected to this logical router, as well as edge nodes the
logical router is bound to. Functionally, the DR is responsible for one-hop distributed routing between
logical switches and/or logical routers connected to this logical router. The SR is responsible for delivering
services that are not currently implemented in a distributed fashion, such as stateful NAT.
A logical router always has a DR, and it has SRs if any of the following is true:
n The logical router is a Tier-0 router, even if no stateful services are configured
n The logical router is a Tier-1 router linked to a Tier-0 router and has services configured that do not have a distributed implementation (such as NAT, LB, DHCP)
The NSX-T Data Center management plane (MP) is responsible for automatically creating the structure
that connects the service router to the distributed router. The MP creates a transit logical switch and
allocates it a VNI, then creates a port on each SR and DR, connects them to the transit logical switch,
and allocates IP addresses for the SR and DR.
Key Concepts
This section describes the common NSX-T Data Center concepts that are used in the documentation and user interface.
Control Plane: Computes runtime state based on configuration from the management plane. The control plane disseminates topology information reported by the data plane elements and pushes stateless configuration to forwarding engines.

External Network: A physical network or VLAN not managed by NSX-T Data Center. You can link your logical network or overlay network to an external network through an NSX Edge. For example, a physical network in a customer data center or a VLAN in a physical environment.

Fabric Node: Host that has been registered with the NSX-T Data Center management plane and has NSX-T Data Center modules installed. For a hypervisor host or NSX Edge to be part of the NSX-T Data Center overlay, it must be added to the NSX-T Data Center fabric.

Logical Port Egress: Outbound network traffic leaving the VM or logical network is called egress because the traffic is leaving the virtual network and entering the data center.

Logical Port Ingress: Inbound network traffic leaving the data center and entering the VM is called ingress traffic.

Logical Router Port: Logical network port to which you can attach a logical switch port or an uplink port to a physical network.

Logical Switch: Entity that provides virtual Layer 2 switching for VM interfaces and Gateway interfaces. A logical switch gives tenant network administrators the logical equivalent of a physical Layer 2 switch, allowing them to connect a set of VMs to a common broadcast domain. A logical switch is a logical entity independent of the physical hypervisor infrastructure and spans many hypervisors, connecting VMs regardless of their physical location.
Logical Switch Port: Logical switch attachment point to establish a connection to a virtual machine network interface or a logical router interface. The logical switch port reports the applied switching profile, port state, and link status.

Management Plane: Provides a single API entry point to the system, persists user configuration, handles user queries, and performs operational tasks on all of the management, control, and data plane nodes in the system. The management plane is also responsible for querying, modifying, and persisting user configuration.

NSX Controller Cluster: Deployed as a cluster of highly available virtual appliances that are responsible for the programmatic deployment of virtual networks across the entire NSX-T Data Center architecture.

NSX Edge Cluster: Collection of NSX Edge node appliances that have the same settings as protocols involved in high-availability monitoring.

NSX Edge Node: Component whose functional goal is to provide computational power to deliver the IP routing and IP services functions.

NSX Managed Virtual Distributed Switch or KVM Open vSwitch: Software that runs on the hypervisor and provides traffic forwarding. The NSX managed virtual distributed switch (N-VDS, previously known as hostswitch) or OVS is invisible to the tenant network administrator and provides the underlying forwarding service that each logical switch relies on. To achieve network virtualization, a network controller must configure the hypervisor virtual switch with network flow tables that form the logical broadcast domains the tenant administrators defined when they created and configured their logical switches.

NSX Manager: Node that hosts the API services, the management plane, and the agent services.

NSX-T Data Center Unified Appliance: An appliance included in the NSX-T Data Center installation package. You can deploy the appliance in the role of NSX Manager, Policy Manager, or Cloud Service Manager. Currently, the appliance only supports one role at a time.

Open vSwitch (OVS): Open source software switch that acts as a virtual switch within XenServer, Xen, KVM, and other Linux-based hypervisors.
Overlay Logical Network: Logical network implemented using Layer 2-in-Layer 3 tunneling such that the topology seen by VMs is decoupled from that of the physical network.

Physical Interface (pNIC): Network interface on a physical server that a hypervisor is installed on.

Tier-0 Logical Router: The provider logical router, also known as the Tier-0 logical router, interfaces with the physical network. The Tier-0 logical router is a top-tier router and can be realized as an active-active or active-standby cluster of services routers. The logical router runs BGP and peers with physical routers. In active-standby mode the logical router can also provide stateful services.

Tier-1 Logical Router: Second-tier router that connects to one Tier-0 logical router for northbound connectivity and to one or more overlay networks for southbound connectivity. A Tier-1 logical router can be an active-standby cluster of services routers providing stateful services.

Transport Zone: Collection of transport nodes that defines the maximum span for logical switches. A transport zone represents a set of similarly provisioned hypervisors and the logical switches that connect VMs on those hypervisors.

Uplink Profile: Defines policies for the links from hypervisor hosts to NSX-T Data Center logical switches or from NSX Edge nodes to top-of-rack switches. The settings defined by uplink profiles might include teaming policies, active/standby links, the transport VLAN ID, and the MTU setting.

VM Interface (vNIC): Network interface on a virtual machine that provides connectivity between the virtual guest operating system and the standard vSwitch or vSphere distributed switch. The vNIC can be attached to a logical port. You can identify a vNIC based on its Unique ID (UUID).

Virtual Tunnel Endpoint (VTEP): Enables hypervisor hosts to participate in an NSX-T Data Center overlay. The NSX-T Data Center overlay deploys a Layer 2 network on top of an existing Layer 3 network fabric by encapsulating frames inside packets and transferring the packets over an underlying transport network. The underlying transport network can be another Layer 2 network or it can cross Layer 3 boundaries. The VTEP is the connection point at which the encapsulation and decapsulation take place.
Preparing for Installation 2
Before installing NSX-T Data Center, make sure your environment is prepared.
System Requirements
NSX-T Data Center has specific requirements regarding hardware resources and software versions.
Hypervisor Requirements
[Table: supported hypervisors with the required version, CPU cores, and memory.]
NSX-T Data Center supports host preparation on RHEL 7.5, RHEL 7.4, Ubuntu 16.04, and CentOS 7.4.
NSX Manager and NSX Controller deployment is not supported on RHEL 7.5 and CentOS 7.4. NSX Edge
node deployment is supported only on vSphere.
For ESXi hosts, NSX-T Data Center supports the Host Profiles and Auto Deploy features on vSphere 6.7
U1 or higher.
Caution On RHEL, the yum update command might update the kernel version and break the
compatibility with NSX-T Data Center. Disable the automatic kernel update when you run yum update.
Also, after running yum install, verify that NSX-T Data Center supports the kernel version.
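For example, a minimal sketch of keeping the kernel packages out of an update on a yum-based host (the exclude patterns follow the ones referenced later in this guide; verify them against the kernel versions that NSX-T Data Center supports):

# One-time exclusion on the command line
yum update --exclude=kernel*

# Persistent exclusion appended to /etc/yum.conf
echo "exclude=kernel* redhat-release*" >> /etc/yum.conf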
CentOS 7.4 KVM host: 4 CPU cores, 16 GB memory.
Note NSX Manager Small VM should be used in lab and proof-of-concept deployments.
The NSX Manager resource requirements apply to the NSX Policy Manager and the
Cloud Service Manager.
Note Deploy three NSX Controllers to ensure high availability and avoid any outage to the NSX-T Data Center control plane.

Each NSX Controller in the cluster must be on a separate physical hypervisor host, three separate physical hypervisor hosts in total, to avoid a single physical hypervisor host failure impacting the NSX-T Data Center control plane. See the NSX-T Data Center Reference Design guide.
For lab and proof-of-concept deployments without production workloads, you can have a single
NSX Controller to save resources.
You can only deploy small and large VM form factors from the vSphere OVF deployment user interface.
Note For NSX Manager and NSX Edge, the small appliance is for proof-of-concept deployments. The
medium appliance is suitable for a typical production environment and can support up to 64 hypervisors.
The large appliance is for large-scale deployments with more than 64 hypervisors.
Note VMXNET 3 vNIC is supported only for the NSX Edge VM.
For DPDK support, the underlying platform needs to meet the following requirements:
Note As NSX-T Data Center data plane uses network functions from Intel's Data Plane Development kit
(DPDK), only Intel-based CPUs are supported.
[Table fragments: DPDK-compatible NIC Cisco VIC 1387 (Cisco UCS Virtual Interface Card 1387, device ID 0x0043); 32 GB memory, 8 CPU cores, 200 GB storage; supported browser Safari 10.]
[Figure: NSX-T Data Center ports and protocols between vCenter, the management plane, NSX transport nodes, NSX Edge transport nodes, and the data plane (KVM/ESX): TCP 443, 80, 5671, 8080, 7777, 1100, 1200, 1300; UDP 3784, 3785 (BFD), 6081 (GENEVE), 50263 (HA/BFD), and 11000-11004.]
By default, all certificates are self-signed certificates. The northbound GUI and API certificates and private
keys can be replaced by CA signed certificates.
There are internal daemons that communicate over the loopback interface or UNIX domain sockets.
In the RMQ user database (db), passwords are hashed with a non-reversible hash function. So h(p1) is
the hash of password p1.
MP: Management plane
Note To get access to NSX-T Data Center nodes, you must enable SSH on these nodes.
NSX Cloud Note: See Enable Access to ports and protocols on CSM for Hybrid Connectivity for a list of ports required for deploying NSX Cloud.
You can use an API call or CLI command to specify custom ports for transferring files (22 is the default)
and for exporting Syslog data (514 and 6514 are the defaults). If you do, you will need to configure the
firewall accordingly.
Table 2-1. TCP and UDP Ports Used by NSX Manager

Source | Target | Port | Protocol | Description
NSX Controllers, NSX Edge nodes, Transport Nodes, vCenter Server | NSX Manager | 8080 | TCP | Install-upgrade HTTP repository
NSX Controllers, NSX Edge nodes, Transport Nodes | NSX Manager | 5671 | TCP | NSX messaging
NSX Manager | Management SCP Servers | 22 | TCP | SSH (upload support bundle, backups, etc.)
NSX Manager | vCenter Server | 443 | TCP | NSX Manager to compute manager (vCenter Server) communication, when configured
You can use an API call or CLI command to specify custom ports for transferring files (22 is the default)
and for exporting Syslog data (514 and 6514 are the defaults). If you do, you will need to configure the
firewall accordingly.
Table 2-2. TCP and UDP Ports Used by NSX Controller

Source | Target | Port | Protocol | Description
NSX Controllers | NSX Controller | 11000-11004 | UDP | Tunnels to other cluster nodes. You must open more ports if the cluster has more than 5 nodes.
NSX Controllers | NSX Controller | 11000-11004 | TCP | Tunnels to other cluster nodes. You must open more ports if the cluster has more than 5 nodes.
You can use an API call or CLI command to specify custom ports for transferring files (22 is the default)
and for exporting Syslog data (514 and 6514 are the defaults). If you do, you will need to configure the
firewall accordingly.
Table 2-3. TCP and UDP Ports Used by NSX Edge

Source | Target | Port | Protocol | Description
NSX Edge nodes, Transport Nodes | NSX Edge nodes | 3784, 3785 | UDP | BFD between the Transport Node TEP IP addresses in the datapath
NSX Agent | NSX Edge nodes | 5555 | TCP | NSX Cloud - Agent on instance communicates to NSX Cloud Gateway
NSX Edge nodes | NSX Edge nodes | 6666 | TCP | NSX Cloud - NSX Edge local communication
NSX Edge nodes | NSX Manager | 8080 | TCP | NAPI, NSX-T Data Center upgrade
NSX Edge nodes | Syslog Servers | 6514 | TCP | Syslog over TLS
TCP and UDP Ports Used by vSphere ESXi, KVM Hosts, and Bare Metal Server

vSphere ESXi hosts, KVM hosts, and bare metal servers used as transport nodes need certain TCP and UDP ports available.

Table 2-4. TCP and UDP Ports Used by vSphere ESXi and KVM Hosts

Source | Target | Port | Protocol | Description
NSX Manager | vSphere ESXi host | 443 | TCP | Management and provisioning connection
NSX Manager | KVM host | 443 | TCP | Management and provisioning connection
vSphere ESXi host | NSX Manager | 5671 | TCP | AMQP communication channel to NSX Manager
vSphere ESXi host | NSX Controller | 1235 | TCP | Control Plane - LCP to CCP communication
KVM host | NSX Controller | 1235 | TCP | Control Plane - LCP to CCP communication
vSphere ESXi host | NSX Manager | 8080 | TCP | Install and upgrade HTTP repository
KVM host | NSX Manager | 8080 | TCP | Install and upgrade HTTP repository
GENEVE Termination End Point (TEP) | GENEVE Termination End Point (TEP) | 6081 | UDP | Transport network
NSX-T Data Center transport node | NSX-T Data Center transport node | 3784, 3785 | UDP | BFD session between TEPs, in the datapath using the TEP interface
2 Install NSX Controllers, see Chapter 5 NSX Controller Installation and Clustering.
3 Join NSX Controllers with the management plane, see Join NSX Controllers with the NSX Manager.
4 Create a master NSX Controller to initialize the control cluster, see Initialize the Control Cluster to
Create a Control Cluster Master.
5 Join NSX Controllers into a control cluster, see Join Additional NSX Controllers with the Cluster
Master.
NSX Manager installs NSX-T Data Center modules after the hypervisor hosts are added.
Note Certificates are created on hypervisor hosts when NSX-T Data Center modules are installed.
6 Join hypervisor hosts with the management plane, see Join the Hypervisor Hosts with the
Management Plane.
8 Join NSX Edges with the management plane, see Join NSX Edge with the Management Plane.
9 Create transport zones and transport nodes, see Chapter 8 Transport Zones and Transport Nodes.
A virtual switch is created on each host. The management plane sends the host certificates to the
control plane, and the management plane pushes control plane information to the hosts. Each host
connects to the control plane over SSL presenting its certificate. The control plane validates the
certificate against the host certificate provided by the management plane. The controllers accept the
connection upon successful validation.
3 NSX-T Data Center modules can be installed on a hypervisor host before it joins the management
plane, or you can perform both procedures at the same time using the Fabric > Hosts > Add UI.
4 NSX Controller, NSX Edges, and hosts with NSX-T Data Center modules can join the management
plane at any time.
Post-Installation
When the hosts are transport nodes, you can create transport zones, logical switches, logical routers, and
other network components through the NSX Manager UI or API at any time. When NSX Controllers,
NSX Edges, and hosts join the management plane, the NSX-T Data Center logical entities and
configuration state are pushed to the NSX Controllers, NSX Edges, and hosts automatically.
For more information, see the NSX-T Data Center Administration Guide.
Working with KVM 3
NSX-T Data Center supports KVM in two ways: 1) as a host transport node and 2) as a host for
NSX Manager and NSX Controller.
This chapter includes the following topics:

n Set Up KVM
Set Up KVM
If you plan to use KVM as a transport node or as a host for NSX Manager and NSX Controller guest VMs, but you do not already have KVM set up, you can use the procedure described here.
Note The Geneve encapsulation protocol uses UDP port 6081. You must allow this port access in the
firewall on the KVM host.
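For example, a minimal sketch of opening the port, assuming firewalld on RHEL or ufw on Ubuntu (adjust the zone or profile for your environment):

# RHEL/CentOS with firewalld
firewall-cmd --permanent --add-port=6081/udp
firewall-cmd --reload

# Ubuntu with ufw
ufw allow 6081/udp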
Procedure
3 Add the line exclude=kernel* redhat-release* to the yum configuration so that yum avoids any unsupported RHEL upgrades.
If you plan to run NSX-T Container Plug-in, which has specific compatibility requirements, exclude the
container-related modules as well.
Ubuntu
apt-get install -y qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils \
virtinst virt-manager virt-viewer libguestfs-tools
RHEL
yum groupinstall "Virtualization Hypervisor"
yum groupinstall "Virtualization Client"
yum groupinstall "Virtualization Platform"
yum groupinstall "Virtualization Tools"
Ubuntu
kvm-ok
RHEL
lsmod | grep kvm
kvm_intel 53484 6
kvm 316506 1 kvm_intel
7 For KVM to be used as a host for NSX Manager or NSX Controller, prepare the bridge network,
management interface, and NIC interfaces.
In the following example, the first Ethernet interface (eth0 or ens32) is used for connectivity to the
Linux machine itself. Depending on your deployment environment, this interface can use DHCP or
static IP settings. Before assigning uplink interfaces to the NSX-T Data Center hosts, ensure that the interface scripts used by these uplinks are already configured. Without these interface files on the system, you cannot successfully create a host transport node.
Ubuntu
Edit /etc/network/interfaces:
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet manual
auto br0
iface br0 inet static
address 192.168.110.51
netmask 255.255.255.0
network 192.168.110.0
broadcast 192.168.110.255
gateway 192.168.110.1
dns-nameservers 192.168.3.45
dns-search example.com
bridge_ports eth0
bridge_stp off
bridge_fd 0
bridge_maxwait 0
Create a network definition xml file for the bridge. For example, create /tmp/bridge.xml with the
following lines:
<network>
<name>bridge</name>
<forward mode='bridge'/>
<bridge name='br0'/>
</network>
Define and start the bridge network with the following commands:
virsh net-define /tmp/bridge.xml
virsh net-start bridge
virsh net-autostart bridge
You can check the status of the bridge network with the virsh net-list --all command.

RHEL
Edit /etc/sysconfig/network-scripts/ifcfg-ens32:
DEVICE="ens32"
TYPE="Ethernet"
NAME="ens32"
UUID="<UUID>"
BOOTPROTO="none"
HWADDR="<HWADDR>"
ONBOOT="yes"
NM_CONTROLLED="no"
BRIDGE="br0"
Edit /etc/sysconfig/network-scripts/ifcfg-eth1:
DEVICE="eth1"
TYPE="Ethernet"
NAME="eth1"
UUID="<UUID>"
BOOTPROTO="none"
HWADDR="<HWADDR>"
ONBOOT="yes"
NM_CONTROLLED="no"
Edit /etc/sysconfig/network-scripts/ifcfg-eth2 :
DEVICE="eth2"
TYPE="Ethernet"
NAME="eth2"
UUID="<UUID>"
BOOTPROTO="none"
HWADDR="<HWADDR>"
ONBOOT="yes"
NM_CONTROLLED="no"
Edit /etc/sysconfig/network-scripts/ifcfg-br0:
DEVICE="br0"
BOOTPROTO="dhcp"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Bridge"
In the following example, the first Ethernet interface (eth0 or ens32) is used for connectivity to the
Linux machine itself. Depending on your deployment environment, this interface can use DHCP or
static IP settings.
Ubuntu
Edit /etc/network/interfaces:
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet manual
auto eth1
iface eth1 inet manual
auto br0
iface br0 inet dhcp
bridge_ports eth0
DEVICE="ens32"
TYPE="Ethernet"
NAME="ens32"
UUID="<something>"
BOOTPROTO="none"
HWADDR="<something>"
ONBOOT="yes"
NM_CONTROLLED="no"
BRIDGE="br0"
Edit /etc/sysconfig/network-scripts/ifcfg-ens33:
DEVICE="ens33"
TYPE="Ethernet"
NAME="ens33"
UUID="<something>"
BOOTPROTO="none"
HWADDR="<something>"
ONBOOT="yes"
NM_CONTROLLED="no"
Edit /etc/sysconfig/network-scripts/ifcfg-br0:
DEVICE="br0"
BOOTPROTO="dhcp"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Bridge"
After this step, once the KVM host is configured as a transport node, the bridge interface "nsx-vtep0.0" is created. In Ubuntu, corresponding entries are added to /etc/network/interfaces. In RHEL, the host NSX agent (nsxa) creates a configuration file called ifcfg-nsx-vtep0.0, which has entries such as the following:
DEVICE=nsx-vtep0.0
BOOTPROTO=static
NETMASK=<subnet mask>
IPADDR=<IP address>
MTU=1600
ONBOOT=yes
USERCTL=no
NM_CONTROLLED=no
9 To make the networking changes take effect, restart the networking service (systemctl restart network) or reboot the Linux server.
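The service name differs between distributions; a sketch assuming the defaults:

# RHEL/CentOS
systemctl restart network

# Ubuntu 16.04
systemctl restart networking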
KVM guest VM management is beyond the scope of this guide. However, here are some simple KVM CLI commands to get you started.

To manage your guest VMs in the KVM CLI, you can use virsh commands. Following are some common virsh commands. Refer to the KVM documentation for additional information.
# List running
virsh list
# List all
virsh list --all
# Control instances
virsh start <instance>
virsh shutdown <instance>
virsh destroy <instance>
In the Linux CLI, the ifconfig command shows the vnetX interface, which represents the interface
created for the guest VM. If you add additional guest VMs, additional vnetX interfaces are added.
ifconfig
...
NSX Manager Installation 4
NSX Manager provides a graphical user interface (GUI) and REST APIs for creating, configuring, and
monitoring NSX-T Data Center components such as logical switches, logical routers, and firewalls.
NSX Manager provides a system view and is the management component of NSX-T Data Center.
An NSX-T Data Center deployment can have only one instance of NSX Manager. If NSX Manager is
deployed on an ESXi host, you can use the vSphere high availability (HA) feature to ensure the
availability of NSX Manager.
Table 4-1. NSX Manager Deployment, Platform, and Installation Requirements

IP address: An NSX Manager must have a static IP address. You cannot change the IP address after installation.

Hostname: When installing NSX Manager, specify a hostname that does not contain invalid characters such as an underscore. If the hostname contains any invalid character, after deployment the hostname will be set to nsx-manager. For more information about hostname restrictions, see https://tools.ietf.org/html/rfc952 and https://tools.ietf.org/html/rfc1123.

VMware Tools: The NSX Manager VM running on ESXi has VMTools installed. Do not remove or upgrade VMTools.
System:
n Verify that the system requirements are met. See System Requirements.
n Verify that the required ports are open. See Ports and Protocols.
n If you do not already have one, create the target VM port group network. It is recommended to place NSX-T Data Center appliances on a management VM network. If you have multiple management networks, you can add static routes to the other networks from the NSX-T Data Center appliance.
n Plan your IPv4 IP address scheme. In this release of NSX-T Data Center, IPv6 is not supported.

OVF Privileges:
n Verify that you have adequate privileges to deploy an OVF template on the ESXi host.
n A management tool that can deploy OVF templates, such as vCenter Server or the vSphere Client, is required. The OVF deployment tool must support configuration options to allow for manual configuration.
n The OVF tool version must be 4.0 or later.
Note After a fresh install, a reboot, or an admin password change when prompted on first login, it might take several minutes for NSX Manager to start.
n If you specify a user name for the admin or audit user, the name must be unique. If you specify the same name, it is ignored and the default names (admin and audit) are used.
n If the password for the admin user does not meet the complexity requirements, you must log in to
NSX Manager through SSH or at the console as the admin user. You are prompted to change the
password.
n If the password for the audit user does not meet the complexity requirements, the user account is
disabled. To enable the account, log in to NSX Manager through SSH or at the console as the admin
user and run the command set user audit to set the audit user's password (the current password
is an empty string).
n If the password for the root user does not meet the complexity requirements, you must log in to
NSX Manager through SSH or at the console as root with the password vmware. You are prompted
to change the password.
Caution Changes made to the NSX-T Data Center while logged in with the root user credentials might
cause system failure and potentially impact your network. You can only make changes using the root
user credentials with the guidance of VMware Support team.
Note The core services on the appliance do not start until a password with sufficient complexity is set.
After you deploy NSX Manager from an OVA file, you cannot change the VM's IP settings by powering off
the VM and modifying the OVA settings from vCenter Server.
The NSX Policy Manager is a virtual appliance that lets you manage policies. You can configure policies
to specify rules for NSX-T Data Center components such as logical ports, IP addresses, and VMs.
NSX Policy Manager rules allow you to set high-level usage and resource access rules that are enforced
without specifying the exact details.
Cloud Service Manager is a virtual appliance that uses NSX-T Data Center components and integrates
them with your public cloud.
Note It is recommended that you use vSphere Web Client instead of vSphere Client. If you do not have
vCenter Server in your environment, use ovftool to deploy NSX Manager. See Install NSX Manager on
ESXi Using the Command-Line OVF Tool.
Procedure
1 Locate the NSX-T Data Center Unified Appliance OVA or OVF file.
Either copy the download URL or download the OVA file onto your computer.
2 In vSphere Web Client, launch the Deploy OVF template wizard and navigate or link to the .ova file.
3 Enter a name for the NSX Manager, and select a folder or datacenter.
The folder you select will be used to apply permissions to the NSX Manager.
5 If you are installing in vCenter, select a host or cluster on which to deploy the NSX Manager
appliance.
6 Select the port group or destination network for the NSX Manager.
n Select the nsx-policy-manager role from the drop-down menu to install the NSX Policy Manager
appliance.
n Select the nsx-cloud-service-manager role from the drop-down menu to install the NSX Cloud
appliance.
9 (Optional) For optimal performance, reserve memory for the NSX-T Data Center component.
A memory reservation is a guaranteed lower bound on the amount of physical memory that the host
reserves for a virtual machine, even when memory is overcommitted. Set the reservation to a level
that ensures the NSX-T Data Center component has sufficient memory to run efficiently. See System
Requirements.
10 Open the console of the NSX-T Data Center component to track the boot process.
11 After the NSX-T Data Center component boots, log in to the CLI as admin and run the get
interface eth0 command to verify that the IP address was applied as expected.
12 Verify that your NSX-T Data Center component has the required connectivity.
n The NSX-T Data Center component can ping its default gateway.
n The NSX-T Data Center component can ping the hypervisor hosts that are in the same network
as the NSX-T Data Center component using the management interface.
n The NSX-T Data Center component can ping its DNS server and its NTP server.
n If you enabled SSH, make sure that you can SSH to your NSX-T Data Center component.
If connectivity is not established, make sure the network adapter of the virtual appliance is in the
proper network or VLAN.
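A minimal sketch of these checks, using the example addresses that appear in the ovftool commands later in this chapter (replace them with your own values):

ping -c 3 192.168.110.1            # default gateway
ping -c 3 192.168.110.10           # DNS and NTP server used in the examples
nslookup nsx-manager.corp.local    # name resolution through the DNS server
ssh [email protected]          # only if SSH was enabled during deployment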
What to do next
By default, nsx_isSSHEnabled and nsx_allowSSHRootLogin are both disabled for security reasons.
When they are disabled, you cannot SSH or log in to the NSX Manager command line. If you enable
nsx_isSSHEnabled but not nsx_allowSSHRootLogin, you can SSH to NSX Manager but you cannot log in
as root.
Prerequisites
n Verify that the system requirements are met. See System Requirements.
n Verify that the required ports are open. See Ports and Protocols.
n If you do not already have one, create the target VM port group network. It is recommended to place
NSX-T Data Center appliances on a management VM network.
If you have multiple management networks, you can add static routes to the other networks from the
NSX-T Data Center appliance.
n Plan your IPv4 IP address scheme. In this release of NSX-T Data Center, IPv6 is not supported.
Procedure
n For a standalone host, run the ovftool command with the appropriate parameters.
C:\Users\Administrator\Downloads>ovftool
--name=nsx-manager
--X:injectOvfEnv
--X:logFile=ovftool.log
--allowExtraConfig
--datastore=ds1
--net="management"
--acceptAllEulas
--noSSLVerify
--diskMode=thin
--powerOn
--prop:nsx_role=nsx-manager
--prop:nsx_ip_0=192.168.110.75
--prop:nsx_netmask_0=255.255.255.0
--prop:nsx_gateway_0=192.168.110.1
--prop:nsx_dns1_0=192.168.110.10
--prop:nsx_domain_0=corp.local
--prop:nsx_ntp_0=192.168.110.10
--prop:nsx_isSSHEnabled=<True|False>
--prop:nsx_allowSSHRootLogin=<True|False>
--prop:nsx_passwd_0=<password>
--prop:nsx_cli_passwd_0=<password>
--prop:nsx_hostname=nsx-manager
nsx-<component>.ova
vi://root:<password>@192.168.110.51
n For a host managed by vCenter Server, run the ovftool command with the appropriate parameters.
For example,
C:\Users\Administrator\Downloads>ovftool
--name=nsx-manager
--X:injectOvfEnv
--X:logFile=ovftool.log
--allowExtraConfig
--datastore=ds1
--network="management"
--acceptAllEulas
--noSSLVerify
--diskMode=thin
--powerOn
--prop:nsx_role=nsx-manager
--prop:nsx_ip_0=192.168.110.75
--prop:nsx_netmask_0=255.255.255.0
--prop:nsx_gateway_0=192.168.110.1
--prop:nsx_dns1_0=192.168.110.10
--prop:nsx_domain_0=corp.local
--prop:nsx_ntp_0=192.168.110.10
--prop:nsx_isSSHEnabled=<True|False>
--prop:nsx_allowSSHRootLogin=<True|False>
--prop:nsx_passwd_0=<password>
--prop:nsx_cli_passwd_0=<password>
--prop:nsx_hostname=nsx-manager
nsx-<component>.ova
vi://[email protected]:<password>@192.168.110.24/?ip=192.168.110.51
n (Optional) For optimal performance, reserve memory for the NSX-T Data Center component.
A memory reservation is a guaranteed lower bound on the amount of physical memory that the host
reserves for a virtual machine, even when memory is overcommitted. Set the reservation to a level
that ensures the NSX-T Data Center component has sufficient memory to run efficiently. See System
Requirements.
n Open the console of the NSX-T Data Center component to track the boot process.
n After the NSX-T Data Center component boots, log in to the CLI as admin and run the get
interface eth0 command to verify that the IP address was applied as expected.
n Verify that your NSX-T Data Center component has the required connectivity.
n The NSX-T Data Center component can ping its default gateway.
n The NSX-T Data Center component can ping the hypervisor hosts that are in the same network
as the NSX-T Data Center component using the management interface.
n The NSX-T Data Center component can ping its DNS server and its NTP server.
n If you enabled SSH, make sure that you can SSH to your NSX-T Data Center component.
If connectivity is not established, make sure the network adapter of the virtual appliance is in the
proper network or VLAN.
What to do next
The QCOW2 installation procedure uses guestfish, a Linux command-line tool to write virtual machine
settings into the QCOW2 file.
Prerequisites
n Verify that the password in the guestinfo adheres to the password complexity requirements so that
you can log in after installation. See Chapter 4 NSX Manager Installation.
Procedure
1 Download the NSX Manager QCOW2 image and then copy it to the KVM machine that will run the
NSX Manager using SCP or sync.
3 In the same directory where you saved the QCOW2 image, create a file called guestinfo (with no file extension) and populate it with the NSX Manager VM's properties, using the same nsx_* properties shown in the ovftool examples (for example, nsx_ip_0, nsx_gateway_0, nsx_dns1_0, nsx_hostname, nsx_isSSHEnabled, and nsx_allowSSHRootLogin).
If nsx_isSSHEnabled and nsx_allowSSHRootLogin are both disabled, you cannot SSH or log in to the NSX Manager command line. If you enable nsx_isSSHEnabled but not nsx_allowSSHRootLogin, you can SSH to NSX Manager but you cannot log in as root.
4 Use guestfish to write the guestinfo file into the QCOW2 image.
After the guestinfo information is written into a QCOW2 image, the information cannot be
overwritten.
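A hedged sketch of the guestfish invocation; the image file name and the destination path inside the image are assumptions, so verify them for your appliance version:

# Both the QCOW2 file name and /config/guestinfo are assumptions for illustration only
sudo guestfish --rw -i -a nsx-unified-appliance.qcow2 upload guestinfo /config/guestinfo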
Starting install...
Creating domain... | 0 B 00:01
Connected to domain nsx-manager1
Escape character is ^]
nsx-manager1 login:
After the NSX Manager boots up, the NSX Manager console appears.
6 (Optional) For optimal performance, reserve memory for the NSX-T Data Center component.
A memory reservation is a guaranteed lower bound on the amount of physical memory that the host
reserves for a virtual machine, even when memory is overcommitted. Set the reservation to a level
that ensures the NSX-T Data Center component has sufficient memory to run efficiently. See System
Requirements.
7 Open the console of the NSX-T Data Center component to track the boot process.
8 After the NSX-T Data Center component boots, log in to the CLI as admin and run the get
interface eth0 command to verify that the IP address was applied as expected.
MTU: 1500
Default gateway: 192.168.110.1
Broadcast address: 192.168.110.255
...
9 Verify that your NSX-T Data Center component has the required connectivity.
n The NSX-T Data Center component can ping its default gateway.
n The NSX-T Data Center component can ping the hypervisor hosts that are in the same network
as the NSX-T Data Center component using the management interface.
n The NSX-T Data Center component can ping its DNS server and its NTP server.
n If you enabled SSH, make sure that you can SSH to your NSX-T Data Center component.
If connectivity is not established, make sure the network adapter of the virtual appliance is in the
proper network or VLAN.
To exit the NSX Manager console, press control-].
What to do next
After you install NSX Manager, you can join the Customer Experience Improvement Program (CEIP) for
NSX-T Data Center. See Customer Experience Improvement Program in the NSX-T Data Center
Administration Guide for more information about the program, including how to join or leave the program.
Prerequisites
Procedure
2 Scroll to the bottom of the EULA and accept the EULA terms.
3 Select whether to join VMware's Customer Experience Improvement Program (CEIP).

4 Click Save.
NSX Controller Installation and Clustering 5
NSX Controller is an advanced distributed state management system that provides control plane
functions for NSX-T Data Center logical switching and routing functions.
NSX Controllers serve as the central control point for all logical switches within a network and maintain information about all hosts, logical switches, and logical routers. NSX Controllers control the devices that perform packet forwarding. These forwarding devices are known as virtual switches.
Virtual switches, such as NSX managed virtual distributed switch (N-VDS, previously known as
hostswitch) and Open vSwitch (OVS), reside on ESXi and other hypervisors such as KVM.
In a production environment, you must have an NSX Controller cluster with three members to avoid any
outage to the NSX control plane. Each controller should be placed on a unique hypervisor host, three
physical hypervisor hosts in total, to avoid a single physical hypervisor host failure impacting the NSX
control plane. For lab and proof-of-concept deployments where there are no production workloads, it is
acceptable to run a single controller to save resources.
Table 5-1. NSX Controller Deployment, Platform, and Installation Requirements (Continued)

System: Verify that the system requirements are met. See System Requirements.

Ports: Verify that the required ports are open. See Ports and Protocols.
n If you specify a user name for the admin or audit user, the name must be unique. If you specify the
same name, it is ignored and the default names (admin and audit) are used.
n If the password for the admin user does not meet the complexity requirements, you must log in to
NSX Controller through SSH or at the console as the admin user. You are prompted to change the
password.
n If the password for the audit user does not meet the complexity requirements, the user account is
disabled. To enable the account, log in to NSX Controller through SSH or at the console as the admin
user and run the command set user audit to set the audit user's password (the current password
is an empty string).
n If the password for the root user does not meet the complexity requirements, you must log in to
NSX Controller through SSH or at the console as root with the password vmware. You are prompted
to change the password.
Caution Changes made to the NSX-T Data Center while logged in with the root user credentials might
cause system failure and potentially impact your network. You can only make changes using the root
user credentials with the guidance of VMware Support team.
Note
n Do not use root privileges to install daemons or applications. Using the root privileges to install
daemons or applications can void your support contract. Use root privileges only when requested by
the VMware Support team.
n The core services on the appliance do not start until a password with sufficient complexity has been
set.
After you deploy an NSX Controller from an OVA file, you cannot change the VM's IP settings by
powering off the VM and modifying the OVA settings from vCenter Server.
NSX Manager allows you to deploy additional controllers automatically to an existing cluster that is
manually deployed. However, to delete a manually added controller from the cluster, you must manually
remove it from the cluster.
Prerequisites
n The vSphere ESXi host must have enough CPU, memory, and hard disk resources to support 12 vCPUs, 48 GB RAM, and 360 GB of storage.
Procedure
2 In the NSX Manager UI, if it does not have a registered vCenter, go to the Fabric panel, click
Compute Manager, and add a Compute Manager.
4 On the Common Attributes page, enter the required values on the page.
8 (Optional) If you add a node to an existing cluster, enable Join Existing Cluster.
9 Enter and confirm the Shared Secret key that is required to initialize and form the cluster.
Note All controller nodes added to this cluster must use the same Shared Secret key.
11 Click Next.
13 Enter a valid hostname or fully qualified domain name for the controller node.
15 (Optional) Select the resource pool. The resource pool only provides a pool of compute resources to
deploy the controller nodes. Assign specific storage resources.
18 Select the management interface that is used by the host to communicate with different components
within the host itself.
19 Enter a static IP address with its prefix length (<IPAddress>/<PrefixLength>) and netmask.
20 You can add multiple controllers. Click the + button and enter the controller details before beginning
the deployment.
21 Click Finish.
The automated controller installation begins. The controllers are first registered with the NSX
Manager before forming the cluster or joining an existing cluster.
22 Verify whether the controllers are registered with the NSX Manager.
c Alternatively, from the NSX Manager UI, verify that the Manager connectivity is UP.
c Alternatively, from the NSX Manager UI, verify that the Cluster connectivity is UP.
What to do next
Configure NSX Manager to install controllers and cluster automatically using APIs. See Configure
Automated Installation of Controller and Cluster Using API.
Procedure
1 Before you trigger the automatic creation of the controller cluster, you must fetch the vCenter Server
ID, compute ID, storage ID, and network ID required as the payload of the POST API.
In a browser, open the vCenter Server Managed Object Browser at https://<vCenterServer_IPAddress>/mob.
4 On the Content properties page, go to the Value column, search for the data center entry, and click
the group link.
5 In the Group properties page, go to the Value column, and click the data center link.
6 On the Data Center properties page, copy the datastore value and the network value that you want to
use to create the controller cluster.
8 In the Group properties page, copy the cluster value that you want to use to create the controller
cluster.
9 To fetch the vCenter Server ID, go to the NSX Manager UI and copy its ID from the Compute
Manager page.
10 POST https://<nsx-manager>/api/v1/cluster/nodes/deployments
REQUEST
{
"deployment_requests": [
{
"roles": ["CONTROLLER"],
"user_settings": {
"cli_password": "CLIp4$$w4rd",
"root_password": "ROOTp4$$w4rd"
},
"deployment_config": {
"placement_type": "VsphereClusterNodeVMDeploymentConfig",
"vc_id": "69874c95-51ed-4775-bba8-e0d13bdb4fed",
"management_network_id": "network-13",
"hostname": "controller-0",
"compute_id": "domain-s9",
"storage_id": "datastore-12",
"default_gateway_addresses":[
"10.33.79.253"
],
"management_port_subnets":[
{
"ip_addresses":[
"10.33.79.64"
],
"prefix_length":"22"
}
]
}
},
{
"roles": ["CONTROLLER"],
"user_settings": {
"cli_password": "VMware$123",
"root_password": "VMware$123"
},
"deployment_config": {
"placement_type": "VsphereClusterNodeVMDeploymentConfig",
"vc_id": "69874c95-51ed-4775-bba8-e0d13bdb4fed",
"management_network_id": "network-13",
"hostname": "controller-1",
"compute_id": "domain-s9",
"storage_id": "datastore-12"
"default_gateway_addresses":[
"10.33.79.253"
],
"management_port_subnets":[
{
"ip_addresses":[
"10.33.79.65"
],
"prefix_length":"22"
}
]
}
},
{
"roles": ["CONTROLLER"],
"user_settings": {
"cli_password": "VMware$123",
"root_password": "VMware$123"
},
"deployment_config": {
"placement_type": "VsphereClusterNodeVMDeploymentConfig",
"vc_id": "69874c95-51ed-4775-bba8-e0d13bdb4fed",
"management_network_id": "network-13",
"hostname": "controller-2",
"compute_id": "domain-s9",
"storage_id": "datastore-12",
"default_gateway_addresses":[
"10.33.79.253"
],
"management_port_subnets":[
{
"ip_addresses":[
"10.33.79.66"
],
"prefix_length":"22"
}
]
}
}
],
"clustering_config": {
"clustering_type": "ControlClusteringConfig",
"shared_secret": "123456",
"join_to_existing_cluster": false
}
}
Response
{
"result_count": 2,
"results": [
{
"user_settings": {
"cli_password": "[redacted]",
"root_password": "[redacted]",
"cli_username": "admin"
},
"vm_id": "71f02260-644f-4482-aa9a-ab8570bb49a3",
"roles": [
"CONTROLLER"
],
"deployment_config": {
"placement_type": "VsphereClusterNodeVMDeploymentConfig",
"vc_id": "69874c95-51ed-4775-bba8-e0d13bdb4fed",
"management_network_id": "network-13",
"default_gateway_addresses": [
"10.33.79.253"
],
"hostname": "controller-0",
"compute_id": "domain-s9",
"storage_id": "datastore-12",
"management_port_subnets": [
{
"ip_addresses": [
"10.33.79.64"
],
"prefix_length": 22
}
]
},
"form_factor": "SMALL"
},
{
"user_settings": {
"cli_password": "[redacted]",
"root_password": "[redacted]",
"cli_username": "admin"
},
"vm_id": "38029a2b-b9bc-467f-8138-aef784e802cc",
"roles": [
"CONTROLLER"
],
"deployment_config": {
"placement_type": "VsphereClusterNodeVMDeploymentConfig",
"vc_id": "69874c95-51ed-4775-bba8-e0d13bdb4fed",
"management_network_id": "network-13",
"hostname": "controller-1",
"compute_id": "domain-s9",
"storage_id": "datastore-12"
},
"form_factor": "MEDIUM"
}
]
}
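The request body above can be submitted with any REST client. The following is a minimal sketch using curl and basic authentication, and it assumes the JSON payload has been saved to a file named deployment-request.json (a name chosen only for this example):
curl -k -u admin:<nsx-manager-password> -H "Content-Type: application/json" -X POST https://<nsx-manager>/api/v1/cluster/nodes/deployments -d @deployment-request.json
The same pattern, without the method flag and request body, applies to the status call in the next step.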
11 You can view the status of the deployment using the API call GET https://<nsx-manager>/api/v1/cluster/nodes/deployments.
{
"result_count": 2,
"results": [
{
"user_settings": {
"cli_password": "[redacted]",
"root_password": "[redacted]"
},
"vm_id": "12f563af-af9f-48f3-848e-e9257c8740b0",
"roles": [
"CONTROLLER"
],
"deployment_config": {
"placement_type": "VsphereClusterNodeVMDeploymentConfig",
"vc_id": "15145422-47a1-4c55-81da-01d953151d1f",
"management_network_id": "network-158",
"hostname": "controller-0",
"compute_id": "domain-c154",
"storage_id": "datastore-157"
},
"form_factor": "SMALL",
},
{
"user_settings": {
"cli_password": "[redacted]",
"root_password": "[redacted]"
},
"vm_id": "cc21854c-265b-42de-af5f-05448c00777a",
"roles": [
"CONTROLLER"
],
"deployment_config": {
"placement_type": "VsphereClusterNodeVMDeploymentConfig",
"vc_id": "feb17651-49a7-4ce6-88b4-41d3f624e53b",
"management_network_id": "network-158",
"hostname": "controller-0",
"compute_id": "domain-c154",
"storage_id": "datastore-157"
},
"form_factor": "MEDIUM",
VMware, Inc. 54
NSX-T Data Center Installation Guide
}
]
What to do next
Procedure
5 Click Confirm.
NSX-T Data Center detaches the NSX Controller from the cluster, unregisters it from the NSX Manager,
powers it off, and deletes the NSX Controller.
What to do next
Install an NSX Controller on a vSphere ESXi host using GUI. See Install NSX Controller on ESXi Using a
GUI.
The installation succeeds even if the password does not meet the requirements. However, when you log in for
the first time, you are prompted to change the password.
Important The core services on the appliance do not start until a password with sufficient complexity
has been set.
Important The NSX-T Data Center component virtual machine installations include VMware Tools.
Removal or upgrade of VMware Tools is not supported for NSX-T Data Center appliances.
Prerequisites
n Verify that the system requirements are met. See System Requirements.
n Verify that the required ports are open. See Ports and Protocols.
n If you do not already have one, create the target VM port group network. It is recommended to place
NSX-T Data Center appliances on a management VM network.
If you have multiple management networks, you can add static routes to the other networks from the
NSX-T Data Center appliance.
n Plan your IPv4 IP address scheme. In this release of NSX-T Data Center, IPv6 is not supported.
n Verify that you have adequate privileges to deploy an OVF template on the ESXi host.
n Verify that hostnames do not include underscores. Otherwise, the hostname is set to nsx-controller.
n A management tool that can deploy OVF templates, such as vCenter Server or the vSphere Client.
The OVF deployment tool must support configuration options to allow for manual configuration.
Procedure
Either copy the download URL or download the OVA file onto your computer.
2 In the management tool, launch the Deploy OVF template wizard and navigate or link to the .ova file.
3 Enter a name for the NSX Controller, and select a folder or datacenter.
The folder you select will be used to apply permissions to the NSX Controller.
5 If you are using vCenter, select a host or cluster on which to deploy the NSX Controller appliance.
6 Select the port group or destination network for the NSX Controller.
8 (Optional) For optimal performance, reserve memory for the NSX-T Data Center component.
A memory reservation is a guaranteed lower bound on the amount of physical memory that the host
reserves for a virtual machine, even when memory is overcommitted. Set the reservation to a level
that ensures the NSX-T Data Center component has sufficient memory to run efficiently. See System
Requirements.
9 Open the console of the NSX-T Data Center component to track the boot process.
10 After the NSX-T Data Center component boots, log in to the CLI as admin and run the get
interface eth0 command to verify that the IP address was applied as expected.
MTU: 1500
Default gateway: 192.168.110.1
Broadcast address: 192.168.110.255
...
11 Verify that your NSX-T Data Center component has the required connectivity.
n The NSX-T Data Center component can ping its default gateway.
n The NSX-T Data Center component can ping the hypervisor hosts that are in the same network
as the NSX-T Data Center component using the management interface.
n The NSX-T Data Center component can ping its DNS server and its NTP server.
n If you enabled SSH, make sure that you can SSH to your NSX-T Data Center component.
If connectivity is not established, make sure the network adapter of the virtual appliance is in the
proper network or VLAN.
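As a minimal sketch, these checks can be run from the appliance CLI. The addresses shown are the example gateway and DNS/NTP server used elsewhere in this chapter; substitute your own values:
ping 192.168.110.1
ping 192.168.110.10
If you enabled SSH, also confirm from another machine that ssh admin@<appliance-ip> succeeds.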
What to do next
Join the NSX Controller with the management plane. See Join NSX Controllers with the NSX Manager.
By default, nsx_isSSHEnabled and nsx_allowSSHRootLogin are both disabled for security reasons.
When they are disabled, you cannot SSH or log in to the NSX Controller command line. If you enable
nsx_isSSHEnabled but not nsx_allowSSHRootLogin, you can SSH to NSX Controller but you cannot log
in as root.
Prerequisites
n Verify that the system requirements are met. See System Requirements.
n Verify that the required ports are open. See Ports and Protocols.
n If you do not already have one, create the target VM port group network. It is recommended to place
NSX-T Data Center appliances on a management VM network.
If you have multiple management networks, you can add static routes to the other networks from the
NSX-T Data Center appliance.
n Plan your IPv4 IP address scheme. In this release of NSX-T Data Center, IPv6 is not supported.
Procedure
n For a standalone host, run the ovftool command with the appropriate parameters.
C:\Users\Administrator\Downloads>ovftool
--name=nsx-controller
--X:injectOvfEnv
--X:logFile=ovftool.log
--allowExtraConfig
--datastore=ds1
--network="management"
--noSSLVerify
--diskMode=thin
--powerOn
--prop:nsx_ip_0=192.168.110.210
--prop:nsx_netmask_0=255.255.255.0
--prop:nsx_gateway_0=192.168.110.1
--prop:nsx_dns1_0=192.168.110.10
--prop:nsx_domain_0=corp.local
--prop:nsx_ntp_0=192.168.110.10
--prop:nsx_isSSHEnabled=<True|False>
--prop:nsx_allowSSHRootLogin=<True|False>
--prop:nsx_passwd_0=<password>
--prop:nsx_cli_passwd_0=<password>
--prop:nsx_cli_audit_passwd_0=<password>
--prop:nsx_hostname=nsx-controller
<path/url to nsx component ova>
vi://root:<password>@192.168.110.51
n For a host managed by vCenter Server, run the ovftool command with the appropriate parameters.
C:\Users\Administrator\Downloads>ovftool
--name=nsx-controller
--X:injectOvfEnv
--X:logFile=ovftool.log
--allowExtraConfig
--datastore=ds1
--network="management"
--noSSLVerify
--diskMode=thin
--powerOn
--prop:nsx_ip_0=192.168.110.210
--prop:nsx_netmask_0=255.255.255.0
--prop:nsx_gateway_0=192.168.110.1
--prop:nsx_dns1_0=192.168.110.10
--prop:nsx_domain_0=corp.local
--prop:nsx_ntp_0=192.168.110.10
--prop:nsx_isSSHEnabled=<True|False>
--prop:nsx_allowSSHRootLogin=<True|False>
--prop:nsx_passwd_0=<password>
--prop:nsx_cli_passwd_0=<password>
--prop:nsx_cli_audit_passwd_0=<password>
--prop:nsx_hostname=nsx-controller
<path/url to nsx component ova>
vi://[email protected]:<vcenter_password>@192.168.110.24/?ip=192.168.110.51
n (Optional) For optimal performance, reserve memory for the NSX-T Data Center component.
A memory reservation is a guaranteed lower bound on the amount of physical memory that the host
reserves for a virtual machine, even when memory is overcommitted. Set the reservation to a level
that ensures the NSX-T Data Center component has sufficient memory to run efficiently. See System
Requirements.
n Open the console of the NSX-T Data Center component to track the boot process.
n After the NSX-T Data Center component boots, log in to the CLI as admin and run the get
interface eth0 command to verify that the IP address was applied as expected.
n Verify that your NSX-T Data Center component has the required connectivity.
n The NSX-T Data Center component can ping its default gateway.
n The NSX-T Data Center component can ping the hypervisor hosts that are in the same network
as the NSX-T Data Center component using the management interface.
n The NSX-T Data Center component can ping its DNS server and its NTP server.
n If you enabled SSH, make sure that you can SSH to your NSX-T Data Center component.
If connectivity is not established, make sure the network adapter of the virtual appliance is in the
proper network or VLAN.
What to do next
Join the NSX Controller with the management plane. See Join NSX Controllers with the NSX Manager.
The QCOW2 installation procedure uses guestfish, a Linux command-line tool to write virtual machine
settings into the QCOW2 file.
Prerequisites
Procedure
3 In the same directory where you saved the QCOW2 image, create a file called guestinfo (with no
file extension) and populate it with the NSX Controller VM's properties.
For example:
In the example, nsx_isSSHEnabled and nsx_allowSSHRootLogin are both enabled. When they are
disabled, you cannot SSH or log in to the NSX Controller command line. If you enable
nsx_isSSHEnabled but not nsx_allowSSHRootLogin, you can SSH to NSX Controller but you cannot
log in as root.
4 Use guestfish to write the guestinfo file into the QCOW2 image.
If you are making multiple NSX Controllers, make a separate copy of the QCOW2 image for each
controller. After the guestinfo information is written into a QCOW2 image, the information cannot be
overwritten.
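As a minimal sketch, the guestfish invocation can look like the following; the image file name and the in-image destination path /config/guestinfo are assumptions used only for illustration:
sudo guestfish --rw -i -a nsx-controller1.qcow2 upload guestinfo /config/guestinfo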
After the NSX Controller boots up, the NSX Controller console appears.
6 (Optional) For optimal performance, reserve memory for the NSX-T Data Center component.
A memory reservation is a guaranteed lower bound on the amount of physical memory that the host
reserves for a virtual machine, even when memory is overcommitted. Set the reservation to a level
that ensures the NSX-T Data Center component has sufficient memory to run efficiently. See System
Requirements.
7 Open the console of the NSX-T Data Center component to track the boot process.
8 After the NSX-T Data Center component boots, log in to the CLI as admin and run the get
interface eth0 command to verify that the IP address was applied as expected.
9 Verify that your NSX-T Data Center component has the required connectivity.
n The NSX-T Data Center component can ping its default gateway.
n The NSX-T Data Center component can ping the hypervisor hosts that are in the same network
as the NSX-T Data Center component using the management interface.
n The NSX-T Data Center component can ping its DNS server and its NTP server.
n If you enabled SSH, make sure that you can SSH to your NSX-T Data Center component.
If connectivity is not established, make sure the network adapter of the virtual appliance is in the
proper network or VLAN.
What to do next
Join the NSX Controller with the management plane. See Join NSX Controllers with the NSX Manager.
Prerequisites
n Verify that you have admin privileges to log in to the NSX Manager and NSX Controller appliances.
Procedure
4 On each of the NSX Controller appliances, run the join management-plane command.
5 Verify the result by running the get managers command on your NSX Controllers.
6 On the NSX Manager appliance, run the get management-cluster status command and make
sure the NSX Controllers are listed.
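A minimal sketch of steps 4 through 6, with placeholder values in angle brackets; the thumbprint is the NSX Manager API thumbprint, and the join command prompts for the NSX Manager admin password:
NSX-Controller1> join management-plane <nsx-manager-ip> username admin thumbprint <manager-api-thumbprint>
NSX-Controller1> get managers
NSX-Manager1> get management-cluster status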
What to do next
Initialize the control cluster. See Initialize the Control Cluster to Create a Control Cluster Master.
Prerequisites
n Verify that you have admin privileges to log in to the NSX Controller appliance.
n Assign a shared secret password. The shared secret is a user-defined string (for example,
"secret123") that all controllers in the cluster must use.
Procedure
For example:
Verify that is master and in majority are true, the status is active, and the Zookeeper Server
IP is reachable, ok.
uuid: 78d5b561-4f66-488d-9e53-089735eac1c1
is master: true
in majority: true
uuid address status
78d5b561-4f66-488d-9e53-089735eac1c1 192.168.110.34 active
xc,lzxid=0x10000017a,lresp=604618294,llat=0,minlat=0,avglat=0,maxlat=1356)
/10.0.0.1:51730[1]
(queued=0,recved=45315,sent=45324,sid=0x100000f14a10006,lop=PING,est=1459376914516,to=40000,lcxid=0
x49,lzxid=0x10000017a,lresp=604623243,llat=0,minlat=0,avglat=0,maxlat=1630)
What to do next
Add additional NSX Controllers to the control cluster. See Join Additional NSX Controllers with the Cluster
Master.
Prerequisites
n Verify that you have admin privileges to log in to the NSX Controller appliances.
n Make sure the NSX Controller nodes have joined the management plane. See Join NSX Controllers
with the NSX Manager.
n Initialize the control cluster to create a control cluster master. You only need to initialize the first
controller.
n In the join control-cluster command, you must use an IP address, not a domain name.
n If you are using vCenter and you are deploying NSX-T Data Center controllers to the same cluster,
make sure to configure DRS anti-affinity rules. Anti-affinity rules prevent DRS from migrating more
than one node to a single host.
Procedure
2 On the non-master NSX Controllers, run the set control-cluster security-model command
with a shared secret password. The shared-secret password entered for NSX-Controller2 and NSX-
Controller3 must match the shared-secret password entered on NSX-Controller1.
For example:
3 On the non-master NSX Controllers, run the get control-cluster certificate thumbprint
command.
The command output is an alphanumeric string that is unique to each NSX Controller.
For example:
n IP address with an optional port number of the non-master NSX Controllers (NSX-Controller2 and
NSX-Controller3 in the example)
Do not run the join commands on multiple controllers in parallel. Make sure each join is
complete before joining another controller.
Make sure that NSX-Controller2 has joined the cluster by running the get control-cluster
status command.
Make sure that NSX-Controller3 has joined the cluster by running the get control-cluster
status command.
5 On the two NSX Controller nodes that have joined the control cluster master, run the activate
control-cluster command.
Note Do not run the activate commands on multiple NSX Controllers in parallel. Make sure each
activation is complete before activating another controller.
For example:
On NSX-Controller2, run the get control-cluster status verbose command, and make sure
that the Zookeeper Server IP is reachable, ok.
On NSX-Controller3, run the get control-cluster status verbose command, and make sure
that the Zookeeper Server IP is reachable, ok.
The first UUID listed is for the control cluster as a whole. Each NSX Controller node has a UUID as
well.
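A hedged sketch of the command sequence in steps 2 through 5, shown for NSX-Controller2; the shared secret, IP address, and thumbprint are placeholders, and the exact syntax can vary by release:
NSX-Controller2> set control-cluster security-model shared-secret secret <shared-secret>
NSX-Controller2> get control-cluster certificate thumbprint
NSX-Controller1> join control-cluster <NSX-Controller2-IP> thumbprint <NSX-Controller2-thumbprint>
NSX-Controller2> activate control-cluster
NSX-Controller2> get control-cluster status verbose
Repeat the sequence for NSX-Controller3 only after NSX-Controller2 has joined and been activated.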
If you try to join a controller to a cluster and the command set control-cluster security-model
or join control-cluster fails, the cluster configuration files might be in an inconsistent state.
n On the NSX Controller that you try to join to the cluster, run the command deactivate
control-cluster.
n On the master controller, if the command get control-cluster status or get control-
cluster status verbose displays information about the failed controller, run the command
detach control-cluster <IP address of failed controller>.
What to do next
NSX Edge Installation 6
The NSX Edge provides routing services and connectivity to networks that are external to the
NSX-T Data Center deployment. An NSX Edge is required if you want to deploy a tier-0 router or a tier-1
router with stateful services such as network address translation (NAT), VPN and so on.
Table 6‑1. NSX Edge Deployment, Platforms, and Installation Requirements
PXE installation: The password string must be encrypted with the SHA-512 algorithm for the root and admin user passwords.
Hostname: When installing NSX Edge, specify a hostname that does not contain invalid characters such as an underscore. If the hostname contains any invalid character, after deployment the hostname is set to localhost. For more information about hostname restrictions, see https://tools.ietf.org/html/rfc952 and https://tools.ietf.org/html/rfc1123.
VMware Tools: The NSX Edge VM running on ESXi has VMware Tools installed. Do not remove or upgrade VMware Tools.
System: Verify that the system requirements are met. See System Requirements.
NSX Ports: Verify that the required ports are open. See Ports and Protocols.
If you do not already have one, create the target VM port group network. It is recommended to place NSX-T Data Center appliances on a management VM network.
IP Addresses: If you have multiple management networks, you can add static routes to the other networks from the NSX-T Data Center appliance. Plan your IPv4 IP address scheme. In this release of NSX-T Data Center, IPv6 is not supported.
OVF Template:
n Verify that you have adequate privileges to deploy an OVF template on the ESXi host.
n Verify that hostnames do not include underscores. Otherwise, the hostname is set to localhost.
n A management tool that can deploy OVF templates, such as vCenter Server or the vSphere Client.
NTP Server: The same NTP server must be configured on all NSX Edge servers in an Edge cluster.
n If you specify a user name for the admin or audit user, the name must be unique. If you specify the
same name, it is ignored and the default names (admin and audit) are used.
n If the password for the admin user does not meet the complexity requirements, you must log in to
NSX Edge through SSH or at the console as the admin user with the password vmware. You are
prompted to change the password.
n If the password for the audit user does not meet the complexity requirements, the user account is
disabled. To enable the account, log in to NSX Edge through SSH or at the console as the admin
user and run the command set user audit to set the audit user's password (the current password
is an empty string).
n If the password for the root user does not meet the complexity requirements, you must log in to
NSX Edge through SSH or at the console as root with the password vmware. You are prompted to
change the password.
Caution Changes made to NSX-T Data Center while logged in with the root user credentials might
cause system failure and potentially impact your network. Make changes using the root user
credentials only with the guidance of the VMware Support team.
Note The core services on the appliance do not start until a password with sufficient complexity has
been set.
After you deploy NSX Edge from an OVA file, you cannot change the VM's IP settings by powering off the
VM and modifying the OVA settings from vCenter Server.
Figure 6‑1. High-Level Overview of NSX Edge
(Diagram: an NSX Edge transport node within a transport zone, hosting the tier-0 SR and DR connected over the intra-tier 0 transit link 169.254.0.0/28, together with the LCP, MP, and an N-VDS; the bonded physical ports pnic1 and pnic2 are managed by the datapath and exposed as vmxnet3 fp-ethX ports.)
An NSX Edge node is the appliance that provides the physical NICs to connect to the physical infrastructure
and that hosts centralized network services which cannot be distributed to the hypervisors. These services include:
n NAT
n DHCP server
n Metadata proxy
n Edge firewall
When one of these services is configured, or when an uplink is defined on the logical router to connect to the
physical infrastructure, a service router (SR) is instantiated on the NSX Edge node. The NSX Edge node is also a
transport node, just like the compute nodes in NSX-T Data Center, and like a compute node it can connect to
more than one transport zone: one for overlay and one for North-South peering with external devices. There are
two transport zones on the NSX Edge:
Overlay Transport Zone - Any traffic that originates from a VM participating in NSX-T Data Center domain
might require reachability to external devices or networks. This is typically described as external north-
south traffic. The NSX Edge node is responsible for decapsulating the overlay traffic received from
compute nodes as well as encapsulating the traffic sent to compute nodes.
VLAN Transport Zone - In addition to encapsulating and decapsulating overlay traffic, NSX Edge nodes
also need a VLAN transport zone to provide uplink connectivity to the physical infrastructure.
By default, the links between the SR and the DR use the 169.254.0.0/28 subnet. These intra-router transit
links are created automatically when you deploy a tier-0 or tier-1 logical router. You do not need to
configure or modify the link configuration unless the 169.254.0.0/28 subnet is already in use in your
deployment. On a tier-1 logical router, the SR is present only if you select an NSX Edge when creating
the tier-1 logical router.
The default address space assigned for the tier-0-to-tier-1 connections is 100.64.0.0/10. Each tier-0-to-
tier-1 peer connection is provided a /31 subnet within the 100.64.0.0/10 address space. This link is
created automatically when you create a tier-1 router and connect it to a tier-0 router. You do not need to
configure or modify the interfaces on this link unless the 100.64.0.0/10 subnet is already in use in your
deployment.
Each NSX-T Data Center deployment has a management plane cluster (MP) and a control plane cluster
(CCP). The MP and the CCP push configurations to each transport node's local control plane (LCP).
When a host or NSX Edge joins the management plane, the management plane agent (MPA) establishes
connectivity with the host or NSX Edge, and the host or NSX Edge becomes an NSX-T Data Center
fabric node. When the fabric node is then added as a transport node, LCP connectivity is established with
the host or NSX Edge.
The High-Level Overview of NSX Edge figure shows an example of two physical NICs (pNIC1 and pNIC2)
that are bonded to provide high availability. The datapath manages the physical NICs. They can serve as
either VLAN uplinks to an external network or as tunnel endpoint links to internal NSX-T Data Center-
managed VM networks.
The best practice is to allocate at least two physical links to each NSX Edge that is deployed as a VM.
Optionally, you can overlap the port groups on the same pNIC using different VLAN IDs. The first network
link found is used for management. For example, on an NSX Edge VM, the first link found might be vnic1.
On a bare metal installation, the first link found might be eth0 or em0. The remaining links are used for
the uplinks and tunnels. For example, one might be for a tunnel endpoint used by NSX-T Data Center-
managed VMs. The other might be used for an NSX Edge-to-external TOR uplink.
To view the physical link information of the NSX Edge, log in to the CLI as an administrator and run
the get interfaces and get physical-ports commands. In the API, you can use the GET
fabric/nodes/<edge-node-id>/network/interfaces API call.
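For example, a minimal sketch of the API call using curl and basic authentication, where the edge node ID is a placeholder:
curl -k -u admin:<password> https://<nsx-manager>/api/v1/fabric/nodes/<edge-node-id>/network/interfaces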
Whether you install NSX Edge as a VM appliance or on bare metal, you have multiple options for the
network configuration, depending on your deployment.
n Overlay for internal NSX-T Data Center tunneling between transport nodes.
You might do this if you want each NSX Edge to have only one N-VDS. Another design option is for the
NSX Edge to belong to multiple VLAN transport zones, one for each uplink.
The most common design choice is three transport zones: One overlay and two VLAN transport zones for
redundant uplinks.
For more information about transport zones, see About Transport Zones.
On the vSphere distributed switch or vSphere Standard switch, you must allocate at least two vmnics to
the NSX Edge for redundancy.
In the following sample physical topology, eth0 is used for management network, fp-eth0 is used for the
NSX-T Data Center overlay traffic, fp-eth1 is used for the VLAN uplink and fp-eth2 is not used. If fp-eth2
is not used, you must disconnect it.
Figure 6‑2. One Suggested Link Setup for NSX Edge VM Networking
(Diagram: on the ESXi host, the NSX Edge VM's vNIC2 maps to fp-eth0 on the overlay/tunnel N-VDS, vNIC3 maps to fp-eth1 on the VLAN uplink N-VDS, and vNIC4, which maps to fp-eth2, is unused; the VM reaches the physical network through the host VDS/VSS and vmnic1.)
The NSX Edge shown in this example belongs to two transport zones (one overlay and one VLAN) and
therefore has two N-VDS, one for tunnel and one for uplink traffic.
In this example, the virtual machine port groups are nsx-tunnel and vlan-uplink.
During deployment, you must specify the network names that match the names configured on your VM
port groups. For example, to match the VM port groups in the example, your network ovftool settings can
be as follows if you were using the ovftool to deploy NSX Edge:
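--net:"Network 0=Mgmt"
--net:"Network 1=nsx-tunnel"
--net:"Network 2=vlan-uplink"
--net:"Network 3=vlan-uplink"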
The example shown here uses the VM port group names Mgmt, nsx-tunnel, and vlan-uplink. You can use
any names for your VM port groups.
For example, on a standard vSwitch, you configure trunk ports as follows: Host > Configuration >
Networking > Add Networking > Virtual Machine > VLAN ID All (4095).
NSX Edge VM can be installed on vSphere distributed switch or vSphere Standard switches.
NSX Edge VM can be installed on an NSX-T Data Center prepared host and configured as a transport
node. There are two types of deployment:
n NSX Edge VM can be deployed using VSS/VDS port groups, where the VSS/VDS consumes separate
pNICs on the host. The N-VDS of the host transport node co-exists with the VSS or VDS, with each
consuming its own pNICs. The host TEP (Tunnel End Point) and the NSX Edge TEP can be in the
same or different subnets.
n NSX Edge VM can be deployed using VLAN-backed logical switches on the N-VDS of the host
transport node. Host TEP and NSX Edge TEP must be in different subnets.
Multiple NSX Edge VMs can be installed on a single host, leveraging the same management, VLAN, and
overlay port groups.
For an NSX Edge VM deployed on an ESXi host that uses a vSphere Standard Switch or vSphere Distributed
Switch and not N-VDS, you must do the following:
n Enable forged transmit for the DHCP server running on this NSX Edge.
n Enable promiscuous mode for the NSX Edge VM to receive unknown unicast packets because MAC
learning is disabled by default. This is not necessary for vDS 6.6 or later, which has MAC learning
enabled by default.
When a bare metal NSX Edge node is installed, a dedicated interface is retained for management. If
redundancy is desired, two NICs can be used for management plane high availability. These
management interfaces can also be 1G.
Bare metal NSX Edge node supports a maximum of 8 physical NICs for overlay traffic and uplink traffic to
top of rack (TOR) switches. For each of these 8 physical NICs on the server, an internal interface is
created following the naming scheme "fp-ethX". These internal interfaces are assigned to the DPDK
fastpath. There is complete flexibility in assigning fp-eth interfaces for overlay or uplink connectivity.
In the following sample physical topology, fp-eth0 and fp-eth1 are bonded and used for the
NSX-T Data Center overlay tunnel. fp-eth2 and fp-eth3 are used as redundant VLAN uplinks to TORs.
Figure 6‑3. One Suggested Link Setup for Bare-Metal NSX Edge Networking
(Diagram: physical NICs eth3 and eth4 map to fp-eth2 and fp-eth3, which serve as Uplink 1 and Uplink 2 on the VLAN uplink N-VDS; fp-eth0 and fp-eth1 are bonded for the overlay tunnel as described above.)
Prerequisites
n If a vCenter Server is registered as a Compute Manager in NSX-T Data Center, you can use the
NSX Manager UI to configure an NSX Edge node and automatically deploy it on the
vCenter Server.
n Verify that the vCenter Server datastore on which the NSX Edge is being installed has a minimum of
120 GB available.
n Verify that the vCenter Server Cluster or Host has access to the specified networks and datastore in
the configuration.
Procedure
2 Select Fabric > Nodes > Edges > Add Edge VM.
6 Specify the CLI and the root passwords for the systems.
The restrictions on the root and CLI admin passwords also apply for automatic deployment.
The Compute Manager is the vCenter Server registered in the Management Plane.
8 For the Compute Manager, select a cluster from the drop-down menu or assign a resource pool.
It is recommended to add the NSX Edge in a cluster that provides network management utilities.
11 Select the host or resource pool. Only one host can be added at a time.
12 Select the IP address and type the management network IP addresses and paths on which to place
the NSX Edge interfaces. The IP address entered must be in CIDR format.
The management network must be able to access the NSX Manager. It must receive its IP address
from a DHCP server. You can change the networks after the NSX Edge is deployed.
13 Add a default gateway if the management network IP address does not belong to same Layer 2 as
the NSX Manager network.
Verify that Layer 3 connectivity is available between NSX Manager and NSX Edge management
network.
The NSX Edge deployment takes 1-2 minutes to complete. You can track the real-time status of the
deployment in the UI.
What to do next
Before you add the NSX Edge to an NSX Edge cluster or configure as a transport node, make sure that
the newly created NSX Edge node appears as Node Ready.
Prerequisites
Procedure
Either copy the download URL or download the OVA file onto your computer.
2 In the management tool, launch the Deploy OVF template wizard and navigate or link to the .ova file.
3 Enter a name for the NSX Edge, and select a folder or vCenter Server datacenter.
The folder you select is used to apply permissions to the NSX Edge.
The system requirements vary depending on the NSX Edge deployment size. See
System Requirements.
6 If you are installing in vCenter Server, select a host or cluster on which to deploy the NSX Edge
appliance.
You can change the networks after the NSX Edge is deployed.
9 (Optional) For optimal performance, reserve memory for the NSX-T Data Center component.
A memory reservation is a guaranteed lower bound on the amount of physical memory that the host
reserves for a virtual machine, even when memory is overcommitted. Set the reservation to a level
that ensures the NSX-T Data Center component has sufficient memory to run efficiently. See System
Requirements.
10 Open the console of the NSX Edge to track the boot process.
If the console window does not open, make sure that pop-ups are allowed.
11 After the NSX Edge starts, log in to the CLI with admin privileges. The user name is admin and the
password is default.
Note After NSX Edge starts, if you do not log in with admin credentials for the first time, the data
plane service does not automatically start on NSX Edge.
12 After the reboot, you can log in with either admin or root credentials. The default root password is
vmware.
13 Run the get interface eth0 command to verify that the IP address was applied as expected
Interface: eth0
Address: 192.168.110.37/24
MAC address: 00:50:56:86:62:4d
MTU: 1500
Default gateway: 192.168.110.1
Broadcast address: 192.168.110.255
...
If needed, run the set interface eth0 ip <CIDR> gateway <gateway-ip> plane mgmt
command to update the management interface. Optionally, you can start the SSH service with the
start service ssh command.
14 Verify that the NSX Edge appliance has the required connectivity.
If you enabled SSH, make sure that you can SSH to your NSX Edge.
n NSX Edge can ping the hypervisor hosts that are in the same network as the NSX Edge.
n NSX Edge can ping its DNS server and its NTP server.
Note If connectivity is not established, make sure the VM network adapter is in the proper network
or VLAN.
By default, the NSX Edge datapath claims all virtual machine NICs except the management NIC (the
one that has an IP address and a default route). If DHCP assigns the wrong NIC as management,
complete the tasks to correct the problem.
c Place eth0 into the DHCP network and wait for an IP address to be assigned to eth0.
The datapath fp-ethX ports used for the VLAN uplink and the tunnel overlay are shown in the get
interfaces and get physical-port commands on the NSX Edge.
What to do next
Join the NSX Edge with the management plane. See Join NSX Edge with the Management Plane.
Prerequisites
n Verify that the system requirements are met. See System Requirements.
n Verify that the required ports are open. See Ports and Protocols.
n If you do not already have one, create the target VM port group network. It is recommended to place
NSX-T Data Center appliances on a management VM network.
If you have multiple management networks, you can add static routes to the other networks from the
NSX-T Data Center appliance.
n Plan your IPv4 IP address scheme. In this release of NSX-T Data Center, IPv6 is not supported.
n Verify that you have adequate privileges to deploy an OVF template on the ESXi host.
n Verify that hostnames do not include underscores. Otherwise, the hostname is set to localhost.
Procedure
n For a standalone host, run the ovftool command with the appropriate parameters.
C:\Users\Administrator\Downloads>ovftool
--name=nsx-edge-1
--deploymentOption=medium
--X:injectOvfEnv
--X:logFile=ovftool.log
--allowExtraConfig
--datastore=ds1
--net:"Network 0=Mgmt"
--net:"Network 1=nsx-tunnel"
--net:"Network 2=vlan-uplink"
--net:"Network 3=vlan-uplink"
--acceptAllEulas
--noSSLVerify
--diskMode=thin
--powerOn
--prop:nsx_ip_0=192.168.110.37
--prop:nsx_netmask_0=255.255.255.0
--prop:nsx_gateway_0=192.168.110.1
--prop:nsx_dns1_0=192.168.110.10
--prop:nsx_domain_0=corp.local
--prop:nsx_ntp_0=192.168.110.10
--prop:nsx_isSSHEnabled=True
--prop:nsx_allowSSHRootLogin=True
--prop:nsx_passwd_0=<password>
--prop:nsx_cli_passwd_0=<password>
--prop:nsx_hostname=nsx-edge
<path/url to nsx component ova>
vi://root:<password>@192.168.110.51
n For a host managed by vCenter Server, run the ovftool command with the appropriate parameters.
C:\Users\Administrator\Downloads>ovftool
--name=nsx-edge-1
--deploymentOption=medium
--X:injectOvfEnv
--X:logFile=ovftool.log
--allowExtraConfig
--datastore=ds1
--net:"Network 0=Mgmt"
--net:"Network 1=nsx-tunnel"
--net:"Network 2=vlan-uplink"
--net:"Network 3=vlan-uplink"
--acceptAllEulas
--noSSLVerify
--diskMode=thin
--powerOn
--prop:nsx_ip_0=192.168.110.37
--prop:nsx_netmask_0=255.255.255.0
--prop:nsx_gateway_0=192.168.110.1
--prop:nsx_dns1_0=192.168.110.10
--prop:nsx_domain_0=corp.local
--prop:nsx_ntp_0=192.168.110.10
--prop:nsx_isSSHEnabled=True
--prop:nsx_allowSSHRootLogin=True
--prop:nsx_passwd_0=<password>
--prop:nsx_cli_passwd_0=<password>
--prop:nsx_hostname=nsx-edge
<path/url to nsx component ova>
vi://[email protected]:<password>@192.168.110.24/?ip=192.168.210.53
n (Optional) For optimal performance, reserve memory for the NSX-T Data Center component.
A memory reservation is a guaranteed lower bound on the amount of physical memory that the host
reserves for a virtual machine, even when memory is overcommitted. Set the reservation to a level
that ensures the NSX-T Data Center component has sufficient memory to run efficiently. See System
Requirements.
n Open the console of the NSX Edge to track the boot process.
n After the NSX Edge starts, log in to the CLI with admin privileges. The user name is admin and the
password is default.
n Run the get interface eth0 command to verify that the IP address was applied as expected
Interface: eth0
Address: 192.168.110.37/24
MAC address: 00:50:56:86:62:4d
MTU: 1500
Default gateway: 192.168.110.1
Broadcast address: 192.168.110.255
...
If needed, run the set interface eth0 ip <CIDR> gateway <gateway-ip> plane mgmt
command to update the management interface. Optionally, you can start the SSH service with the
start service ssh command.
n Verify that the NSX Edge appliance has the required connectivity.
If you enabled SSH, make sure that you can SSH to your NSX Edge.
n NSX Edge can ping the hypervisor hosts that are in the same network as the NSX Edge.
n NSX Edge can ping its DNS server and its NTP server.
Note If connectivity is not established, make sure the VM network adapter is in the proper network
or VLAN.
By default, the NSX Edge datapath claims all virtual machine NICs except the management NIC (the
one that has an IP address and a default route). If DHCP assigns the wrong NIC as management,
complete the tasks to correct the problem.
c Place eth0 into the DHCP network and wait for an IP address to be assigned to eth0.
The datapath fp-ethX ports used for the VLAN uplink and the tunnel overlay are shown in the get
interfaces and get physical-port commands on the NSX Edge.
What to do next
Join the NSX Edge with the management plane. See Join NSX Edge with the Management Plane.
Note PXE boot installation is not supported for NSX Manager and NSX Controller, nor can you use PXE
to configure their networking settings, such as the IP address, gateway, network mask, NTP, and DNS.
DHCP dynamically distributes IP settings to NSX-T Data Center components, such as NSX Edge. In a
PXE environment, the DHCP server allows NSX Edge to request and receive an IP address
automatically.
TFTP is a file-transfer protocol. The TFTP server is always listening for PXE clients on the network. When
it detects any network PXE client asking for PXE services, it provides the NSX-T Data Center component
ISO file and the installation settings contained in a preseed file.
Prerequisites
n A PXE server must be available in your deployment environment. The PXE server can be set up on
any Linux distribution. The PXE server must have two interfaces, one for external communication and
another for providing DHCP IP and TFTP services.
If you have multiple management networks, you can add static routes to the other networks from the
NSX-T Data Center appliance.
n Verify that the preseeded configuration file has the parameters net.ifnames=0 and biosdevname=0
set after -- to persist after reboot.
Procedure
1 (Optional) Use a kickstart file to set up new TFTP or DHCP services on an Ubuntu server.
A kickstart file is a text file that contains CLI commands that you run on the appliance after the first
boot.
Name the kickstart file based on the PXE server it is pointing to. For example:
nsxcli.install
The file must be copied to your Web server, for example at /var/www/html/nsx-
edge/nsxcli.install.
In the kickstart file, you can add CLI commands. For example, to configure the IP address of the
management interface:
stop dataplane
set interface eth0 <ip-cidr-format> plane mgmt
start dataplane
If you specify a password in the preseed.cfg file, use the same password in the kickstart file.
Otherwise, use the default password, which is "default".
2 Create two interfaces, one for management and another for DHCP and TFTP services.
Make sure that the DHCP/TFTP interface is in the same subnet that the NSX Edge resides in.
For example, if the NSX Edge management interfaces are going to be in the 192.168.210.0/24
subnet, place eth1 in that same subnet.
4 Edit the /etc/default/isc-dhcp-server file, and add the interface that provides the DHCP
service.
INTERFACES="eth1"
5 (Optional) If you want this DHCP server to be the official DHCP server for the local network,
uncomment the authoritative; line in the /etc/dhcp/dhcpd.conf file.
...
authoritative;
...
6 In the /etc/dhcp/dhcpd.conf file, define the DHCP settings for the PXE network.
For example:
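The following is a minimal sketch only. The address range, router, DNS server, and boot file name are placeholders typical of a PXELINUX setup; the next-server address matches the TFTP/web server IP used later in this procedure:
subnet 192.168.210.0 netmask 255.255.255.0 {
    range 192.168.210.90 192.168.210.100;
    option routers 192.168.210.1;
    option domain-name-servers 192.168.210.10;
    next-server 192.168.210.82;
    filename "pxelinux.0";
}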
9 Install Apache, TFTP, and other components that are required for PXE booting.
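One possible package set on Ubuntu, shown as a sketch; confirm the package names for your distribution:
apt-get install apache2 tftpd-hpa inetutils-inetd
The RUN_DAEMON and OPTIONS settings that follow typically belong in /etc/default/tftpd-hpa.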
RUN_DAEMON="yes"
OPTIONS="-l -s /var/lib/tftpboot"
14 Copy or download the NSX Edge installer ISO file to a temporary folder.
15 Mount the ISO file and copy the install components to the TFTP server and the Apache server.
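A sketch of these two steps; the ISO file name, the installer path inside the ISO, and the copy targets are assumptions to adapt to your layout:
mount -o loop /tmp/nsx-edge-<version>.iso /mnt
mkdir -p /var/www/html/nsx-edge
cp -fr /mnt/* /var/www/html/nsx-edge/
cp -fr /mnt/<installer-directory>/ubuntu-installer /var/lib/tftpboot/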
You can use a Linux tool such as mkpasswd to create a password hash.
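For example, assuming the mkpasswd utility from the whois package is installed, the following command generates a SHA-512 hash:
mkpasswd --method=sha-512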
Password:
$6$SUFGqs[...]FcoHLijOuFD
a To modify the root password, edit /var/www/html/nsx-edge/preseed.cfg and search for the
following line:
You do not need to escape any special character such as $, ', ", or \.
c Add the usermod command to preseed.cfg to set the password for root, admin, or both.
For example, search for the echo 'VMware NSX Edge' line and add the following command.
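A sketch of such an addition; the hash is a placeholder and the escaping is only illustrative:
usermod --password '\$6\$<hash-string>' root
usermod --password '\$6\$<hash-string>' admin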
The hash string is an example. You must escape all special characters. The root password in the
first usermod command replaces the password that is set in d-i passwd/root-password-
crypted password $6$tgm....
If you use the usermod command to set the password, the user is not prompted to change the
password at the first login. Otherwise, the user must change the password at the first login.
label nsxedge
kernel ubuntu-installer/amd64/linux
ipappend 2
append netcfg/dhcp_timeout=60 auto=true priority=critical vga=normal partman-
lvm/device_remove_lvm=true netcfg/choose_interface=auto debian-
installer/allow_unauthenticated=true preseed/url=http://192.168.210.82/nsx-edge/preseed.cfg
mirror/country=manual mirror/http/hostname=192.168.210.82 nsx-
kickstart/url=http://192.168.210.82/nsx-edge/nsxcli.install mirror/http/directory=/nsx-edge
initrd=ubuntu-installer/amd64/initrd.gz mirror/suite=xenial --
allow booting;
allow bootp;
Note If an error is returned, for example: "stop: Unknown instance: start: Job failed to start", run
sudo /etc/init.d/isc-dhcp-server stop and then sudo /etc/init.d/isc-dhcp-server
start. The sudo /etc/init.d/isc-dhcp-server start command returns information about the
source of the error.
What to do next
Install NSX Edge on bare metal or using the ISO file. See Install NSX Edge on Bare Metal or Install NSX
Edge via ISO File as a Virtual Appliance.
Prerequisites
Procedure
1 Create a bootable disk with the NSX Edge ISO file on it.
During power-on, the installer requests a network configuration via DHCP. If DHCP is not available in
your environment, the installer prompts you for IP settings.
By default, the root login password is vmware, and the admin login password is default.
4 Open the console of the NSX Edge to track the boot process.
If the console window does not open, make sure that pop-ups are allowed.
5 After the NSX Edge starts, log in to the CLI with admin privileges. The user name is admin and the
password is default.
Note After NSX Edge starts, if you do not log in with admin credentials for the first time, the data
plane service does not automatically start on NSX Edge.
6 After the reboot, you can log in with either admin or root credentials. The default root password is
vmware.
7 Run the get interface eth0 command to verify that the IP address was applied as expected
Interface: eth0
Address: 192.168.110.37/24
MAC address: 00:50:56:86:62:4d
MTU: 1500
Default gateway: 192.168.110.1
Broadcast address: 192.168.110.255
...
If needed, run the set interface eth0 ip <CIDR> gateway <gateway-ip> plane mgmt
command to update the management interface. Optionally, you can start the SSH service with the
start service ssh command.
8 Verify that the NSX Edge appliance has the required connectivity.
If you enabled SSH, make sure that you can SSH to your NSX Edge.
n NSX Edge can ping the hypervisor hosts that are in the same network as the NSX Edge.
n NSX Edge can ping its DNS server and its NTP server.
Note If connectivity is not established, make sure the VM network adapter is in the proper network
or VLAN.
By default, the NSX Edge datapath claims all virtual machine NICs except the management NIC (the
one that has an IP address and a default route). If DHCP assigns the wrong NIC as management,
complete the tasks to correct the problem.
c Place eth0 into the DHCP network and wait for an IP address to be assigned to eth0.
The datapath fp-ethX ports used for the VLAN uplink and the tunnel overlay are shown in the get
interfaces and get physical-port commands on the NSX Edge.
What to do next
Join the NSX Edge with the management plane. See Join NSX Edge with the Management Plane.
Important The NSX-T Data Center component virtual machine installations include VMware Tools.
Removal or upgrade of VMware Tools is not supported for NSX-T Data Center appliances.
Prerequisites
Procedure
1 On a standalone host or in the vCenter Web client, create a VM and allocate the following resources:
n 3 VMXNET3 NICs. NSX Edge does not support the e1000 NIC driver.
n The appropriate system resources required for your NSX-T Data Center deployment.
Make sure the CD/DVD drive device status is set to Connect at power on.
3 During ISO boot, open the VM console and choose Automated installation.
During power-on, the VM requests a network configuration via DHCP. If DHCP is not available in your
environment, the installer prompts you for IP settings.
By default, the root login password is vmware, and the admin login password is default.
When you log in for the first time, you are prompted to change the password. This password change
method has strict complexity rules, including the following:
n No dictionary words
n No palindromes
Important The core services on the appliance do not start until a password with sufficient
complexity has been set.
4 (Optional) For optimal performance, reserve memory for the NSX-T Data Center component.
A memory reservation is a guaranteed lower bound on the amount of physical memory that the host
reserves for a virtual machine, even when memory is overcommitted. Set the reservation to a level
that ensures the NSX-T Data Center component has sufficient memory to run efficiently. See System
Requirements.
5 Open the console of the NSX Edge to track the boot process.
If the console window does not open, make sure that pop-ups are allowed.
6 After the NSX Edge starts, log in to the CLI with admin privileges. The user name is admin and the
password is default.
Note After NSX Edge starts, if you do not log in with admin credentials for the first time, the data
plane service does not automatically start on NSX Edge.
7 After the reboot, you can log in with either admin or root credentials. The default root password is
vmware.
8 Run the get interface eth0 command to verify that the IP address was applied as expected
Interface: eth0
Address: 192.168.110.37/24
MAC address: 00:50:56:86:62:4d
MTU: 1500
Default gateway: 192.168.110.1
Broadcast address: 192.168.110.255
...
If needed, run the set interface eth0 ip <CIDR> gateway <gateway-ip> plane mgmt
command to update the management interface. Optionally, you can start the SSH service with the
start service ssh command.
9 Verify that the NSX Edge appliance has the required connectivity.
If you enabled SSH, make sure that you can SSH to your NSX Edge.
n NSX Edge can ping the hypervisor hosts that are in the same network as the NSX Edge.
n NSX Edge can ping its DNS server and its NTP server.
Note If connectivity is not established, make sure the VM network adapter is in the proper network
or VLAN.
By default, the NSX Edge datapath claims all virtual machine NICs except the management NIC (the
one that has an IP address and a default route). If DHCP assigns the wrong NIC as management,
complete the tasks to correct the problem.
c Place eth0 into the DHCP network and wait for an IP address to be assigned to eth0.
The datapath fp-ethX ports used for the VLAN uplink and the tunnel overlay are shown in the get
interfaces and get physical-port commands on the NSX Edge.
What to do next
Join the NSX Edge with the management plane. See Join NSX Edge with the Management Plane.
Prerequisites
n Verify that your PXE server is configured for installation. See Prepare the PXE Server for NSX Edge
Installation.
n Verify that NSX Edge is installed on bare metal or using the ISO file. See Install NSX Edge on Bare
Metal or Install NSX Edge via ISO File as a Virtual Appliance.
Procedure
1 Power on the NSX-T Data Center VM or the NSX-T Data Center Bare Metal Host.
The network is configured, partitions are created, and the NSX Edge components are installed.
When the NSX Edge login prompt appears, you can log in as admin or root.
By default, the root login password is vmware, and the admin login password is default.
3 (Optional) For optimal performance, reserve memory for the NSX-T Data Center component.
A memory reservation is a guaranteed lower bound on the amount of physical memory that the host
reserves for a virtual machine, even when memory is overcommitted. Set the reservation to a level
that ensures the NSX-T Data Center component has sufficient memory to run efficiently. See System
Requirements.
4 Open the console of the NSX Edge to track the boot process.
If the console window does not open, make sure that pop-ups are allowed.
5 After the NSX Edge starts, log in to the CLI with admin privileges. The user name is admin and the
password is default.
Note After NSX Edge starts, if you do not log in with admin credentials for the first time, the data
plane service does not automatically start on NSX Edge.
6 After the reboot, you can log in with either admin or root credentials. The default root password is
vmware.
7 Run the get interface eth0 command to verify that the IP address was applied as expected
Interface: eth0
Address: 192.168.110.37/24
MAC address: 00:50:56:86:62:4d
MTU: 1500
Default gateway: 192.168.110.1
Broadcast address: 192.168.110.255
...
If needed, run the set interface eth0 ip <CIDR> gateway <gateway-ip> plane mgmt
command to update the management interface. Optionally, you can start the SSH service with the
start service ssh command.
8 Verify that the NSX Edge appliance has the required connectivity.
If you enabled SSH, make sure that you can SSH to your NSX Edge.
n NSX Edge can ping the hypervisor hosts that are in the same network as the NSX Edge.
n NSX Edge can ping its DNS server and its NTP server.
Note If connectivity is not established, make sure the VM network adapter is in the proper network
or VLAN.
By default, the NSX Edge datapath claims all virtual machine NICs except the management NIC (the
one that has an IP address and a default route). If DHCP assigns the wrong NIC as management,
complete the tasks to correct the problem.
c Place eth0 into the DHCP network and wait for an IP address to be assigned to eth0.
The datapath fp-ethX ports used for the VLAN uplink and the tunnel overlay are shown in the get
interfaces and get physical-port commands on the NSX Edge.
What to do next
Join the NSX Edge with the management plane. See Join NSX Edge with the Management Plane.
Prerequisites
Verify that you have admin privileges to log in to the NSX Edges and NSX Manager appliance.
Procedure
3 On the NSX Manager appliance, run the get certificate api thumbprint command.
The command output is an alphanumeric string that is unique to this NSX Manager.
For example:
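A minimal sketch of this step and the join that follows, with placeholder values; run the join command on each NSX Edge:
NSX-Manager1> get certificate api thumbprint
<api-thumbprint>
nsx-edge-1> join management-plane <nsx-manager-ip> username admin thumbprint <api-thumbprint>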
Verify the result by running the get managers command on your NSX Edges.
In the NSX Manager UI, the NSX Edge appears on the Fabric > Nodes > Edges page. The
NSX Manager connectivity should be Up. If the NSX Manager connectivity is not Up, try refreshing the
browser window.
What to do next
Add the NSX Edge as a transport node. See Create an NSX Edge Transport Node.
Host Preparation 7
When hypervisor hosts are prepared to operate with NSX-T Data Center, they are known as fabric nodes.
Hosts that are fabric nodes have NSX-T Data Center modules installed and are registered with the
NSX-T Data Center management plane.
n Add a Hypervisor Host or Bare Metal Server to the NSX-T Data Center Fabric
Prerequisites
n (Red Hat and CentOS) Before you install the third-party packages, install the virtualization packages.
On the host, run the following commands:
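A typical set of commands, assuming the standard RHEL/CentOS virtualization package groups are available in your configured repositories:
yum groupinstall "Virtualization Hypervisor"
yum groupinstall "Virtualization Client"
yum groupinstall "Virtualization Platform"
yum groupinstall "Virtualization Tools"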
If you are not able to install the packages, you can manually install them with the command yum
install glibc.i686 nspr on a new installation.
n (Ubuntu) Before you install the third-party packages, install the virtualization packages. On the
Ubuntu host, run the following commands:
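A typical command, assuming the standard Ubuntu 16.04 KVM virtualization packages:
apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils virtinst virt-manager virt-viewer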
n (Bare metal server) There are no virtualization prerequisites for installing third-party packages.
Procedure
n On Ubuntu 16.04.2 LTS, make sure that the following third-party packages are installed on the host.
libunwind8
libgflags2v5
libgoogle-perftools4
traceroute
python-mako
python-simplejson
python-unittest2
python-yaml
python-netaddr
libprotobuf9v5
libboost-chrono1.58.0
libgoogle-glog0v5
dkms
libboost-date-time1.58.0
libleveldb1v5
libsnappy1v5
python-gevent
python-protobuf
ieee-data
libyaml-0-2
python-linecache2
python-traceback2
libtcmalloc-minimal4
python-greenlet
python-markupsafe
libboost-program-options1.58.0
If the dependency packages are not installed on Ubuntu 16.04.2 LTS, run apt-get install
<package> to manually install the packages.
n Verify that the Red Hat and CentOS hosts are registered and the respective repositories are
accessible.
Note If you prepare the host using the NSX-T Data Center UI, you must install the following
dependencies on the host.
yum-utils
wget
redhat-lsb-core
tcpdump
boost-filesystem
PyYAML
boost-iostreams
boost-chrono
python-mako
python-netaddr
python-six
gperftools-libs
libunwind
snappy
boost-date-time
c-ares
libev
python-gevent
python-greenlet
PyYAML
c-ares
libev
libunwind
libyaml
python-beaker
python-gevent
python-greenlet
python-mako
python-markupsafe
python-netaddr
python-paste
python-tempita
n If you manually prepare a host that is already registered to RHEL or CentOS, you do not need to
install dependencies on the host. If the host is not registered, manually install the listed dependencies
using the yum install <package> command.
a Depending on your environment, install the listed Ubuntu, RHEL, or CentOS third-party packages
in this topic.
Procedure
1 Verify the current version of the Open vSwitch installed on the host.
ovs-vswitchd --version
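The output resembles the following; the version string shown is only an example, so compare it against the supported version for your NSX-T Data Center release:
ovs-vswitchd (Open vSwitch) 2.9.1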
If the installed Open vSwitch version is newer or older than the supported version, you must replace it
with the supported version.
n kmod-openvswitch
n openvswitch
n openvswitch-selinux-policy
b Install NSX-T Data Center either from the NSX Manager or follow the manual installation
procedure.
2 Alternatively, upgrade the Open vSwitch packages required by NSX-T Data Center.
b Download and copy the nsx-lcp file into the /tmp directory.
cd nsx-lcp-rhel74_x86_64/
What to do next
Add a hypervisor host to the NSX-T Data Center fabric. See Add a Hypervisor Host or Bare Metal Server
to the NSX-T Data Center Fabric.
You can skip this procedure if you installed the modules on the hosts manually and joined the hosts to the
management plane using the CLI.
Note For a KVM host on RHEL, you can use sudo credentials to perform host preparation activities.
Prerequisites
n For each host that you plan to add to the NSX-T Data Center fabric, first gather the following host
information:
n Hostname
n Management IP address
n Username
n Password
n For Ubuntu, verify that the required third-party packages are installed. See Install Third-Party
Packages on a KVM Host or Bare Metal Server.
Procedure
1 (Optional) Retrieve the hypervisor thumbprint so that you can provide it when adding the host to the
fabric.
b To retrieve the SHA-256 thumbprint from a KVM hypervisor, run the command on the KVM host.
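One commonly used pipeline, assuming the host SSH RSA key is in the default location, converts the public key into the base64-encoded SHA-256 thumbprint format that NSX Manager expects:
# Convert the host SSH public key into a base64-encoded SHA-256 thumbprint
awk '{print $2}' /etc/ssh/ssh_host_rsa_key.pub | base64 -d | sha256sum -b | sed 's/ .*$//' | xxd -r -p | base64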
2 In the NSX Manager CLI, verify that the install-upgrade service is running.
5 Enter the hostname, IP address, username, password, and the optional thumbprint.
For example:
For bare metal server, you can select the RHEL Server, Ubuntu Server, or CentOS Server from the
Operating System drop-down menu.
If you do not enter the host thumbprint, the NSX-T Data Center UI prompts you to use the default
thumbprint in the plain text format retrieved from the host.
For example:
When a host is successfully added to the NSX-T Data Center fabric, the NSX Manager Hosts page
displays Deployment Status: Installation Successful and MPA Connectivity: Up.
LCP Connectivity remains unavailable until after you have made the fabric node into a transport
node.
6 Verify that the NSX-T Data Center modules are installed on your host or bare metal server.
As a result of adding a host or bare metal server to the NSX-T Data Center fabric, a collection of
NSX-T Data Center modules are installed on the host or bare metal server.
On vSphere ESXi, the modules are packaged as VIBs. For KVM or bare metal server on RHEL, they
are packaged as RPMs. For KVM or bare metal server on Ubuntu, they are packaged as DEBs.
n On ESXi, type the command esxcli software vib list | grep nsx.
8 (Optional) Monitor the status in the API with the GET https://<nsx-mgr>/api/v1/fabric/nodes/<node-id>/status API call.
9 (Optional) Change the polling intervals of certain processes, if you have 500 hypervisors or more.
The NSX Manager might experience high CPU usage and performance problems if there are more
than 500 hypervisors.
a Use the NSX-T Data Center CLI command copy file or the API POST /api/v1/node/file-store/<file-name>?action=copy_to_remote_file to copy the aggsvc_change_intervals.py script to a host.
b Run the script, which is located in the NSX-T Data Center file store.
What to do next
Note You cannot manually install NSX-T Data Center kernel modules on a bare metal server.
You can download the NSX-T Data Center VIBs manually and make them part of the host image. The
download paths can change for each release of NSX-T Data Center. Always check the
NSX-T Data Center downloads page to get the appropriate VIBs.
Procedure
[root@host:~]: cd /tmp
3 Download and copy the nsx-lcp file into the /tmp directory.
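For example, assuming the bundle was saved as /tmp/nsx-lcp-<version>.zip (the actual file name varies by release), install the VIBs with:
esxcli software vib install -d /tmp/nsx-lcp-<version>.zip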
Depending on what was already installed on the host, some VIBs might be installed, some might be
removed, and some might be skipped. A reboot is not required unless the command output says
Reboot Required: true.
As a result of adding an ESXi host to the NSX-T Data Center fabric, the following VIBs get installed on
the host.
n nsx-da—Collects discovery agent (DA) data about the hypervisor OS version, virtual machines, and
network interfaces. Provides the data to the management plane, to be used in troubleshooting tools.
n nsx-exporter—Provides host agents that report runtime state to the aggregation service running in the
management plane.
n nsx-host— Provides metadata for the VIB bundle that is installed on the host.
n nsx-lldp—Provides support for the Link Layer Discovery Protocol (LLDP), which is a link layer
protocol used by network devices for advertising their identity, capabilities, and neighbors on a LAN.
n nsx-netcpa—Provides communication between the central control plane and hypervisors. Receives
logical networking state from the central control plane and programs this state in the data plane.
n nsx-sfhc—Service fabric host component (SFHC). Provides a host agent for managing the lifecycle of
the hypervisor as a fabric host in the management plane's inventory. This provides a channel for
operations such as NSX-T Data Center upgrade and uninstall and monitoring of NSX-T Data Center
modules on hypervisors.
To verify, you can run the esxcli software vib list | grep nsx or esxcli software vib list | grep <yyyy-mm-dd>
command on the ESXi host, where the date is the day that you performed the installation.
What to do next
Add the host to the NSX-T Data Center management plane. See Join the Hypervisor Hosts with the
Management Plane.
You can download the NSX-T Data Center DEBs manually and make them part of the host image. Be
aware that download paths can change for each release of NSX-T Data Center. Always check the
NSX-T Data Center downloads page to get the appropriate DEBs.
Prerequisites
n Verify that the required third-party packages are installed. See Install Third-Party Packages on a KVM
Host or Bare Metal Server.
Procedure
cd /tmp
3 Download and copy the nsx-lcp file into the /tmp directory.
cd nsx-lcp-trusty_amd64/
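A minimal sketch of the package installation step, assuming all of the extracted DEB packages in the current directory are to be installed:
# Install all NSX-T Data Center DEB packages extracted from the nsx-lcp bundle
dpkg -i *.deb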
/etc/init.d/openvswitch-switch force-reload-kmod
If the hypervisor uses DHCP on OVS interfaces, restart the network interface on which DHCP is
configured. You can manually stop the old dhclient process on the network interface and restart a new
dhclient process on that interface.
Any errors are most likely caused by incomplete dependencies. The apt-get install -f command
can attempt to resolve dependencies and re-run the NSX-T Data Center installation.
What to do next
Add the host to the NSX-T Data Center management plane. See Join the Hypervisor Hosts with the
Management Plane.
This allows you to build the NSX-T Data Center control-plane and management-plane fabric.
NSX-T Data Center kernel modules packaged in RPM files run within the hypervisor kernel and provide
services such as distributed routing, distributed firewall, and bridging capabilities.
You can download the NSX-T Data Center RPMs manually and make them part of the host image. Be
aware that download paths can change for each release of NSX-T Data Center. Always check the
NSX-T Data Center downloads page to get the appropriate RPMs.
Prerequisites
Procedure
2 Download and copy the nsx-lcp file into the /tmp directory.
cd nsx-lcp-rhel74_x86_64/
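A minimal sketch of the installation command referenced in the next paragraph, assuming all of the extracted RPM packages in the current directory are to be installed:
# Install all NSX-T Data Center RPM packages extracted from the nsx-lcp bundle
yum install *.rpm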
When you run the yum install command, any NSX-T Data Center dependencies are resolved,
assuming the RHEL or CentOS hosts can reach their respective repositories.
/etc/init.d/openvswitch force-reload-kmod
If the hypervisor uses DHCP on OVS interfaces, restart the network interface on which DHCP is
configured. You can manually stop the old dhclient process on the network interface and restart a new
dhclient process on that interface.
7 To verify, you can run the rpm -qa | egrep 'nsx|openvswitch|nicira' command.
The installed packages in the output must match the packages in the nsx-rhel74 or nsx-centos74
directory.
What to do next
Add the host to the NSX-T Data Center management plane. See Join the Hypervisor Hosts with the
Management Plane.
Prerequisites
Procedure
4 On the NSX Manager appliance, run the get certificate api thumbprint CLI command.
The command output is a string of alphanumeric characters that is unique to this NSX Manager.
For example:
5 On the hypervisor host, run the nsxcli command to enter the NSX-T Data Center CLI.
[user@host:~] nsxcli
host>
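The join step itself uses the NSX-T Data Center CLI join management-plane command with the NSX Manager IP address, admin credentials, and the thumbprint retrieved earlier. A representative invocation:
host> join management-plane <nsx-manager-ip> username admin password <admin-password> thumbprint <nsx-manager-api-thumbprint>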
Verify the result by running the get managers command on your hosts.
In the NSX Manager UI in Fabric > Nodes > Hosts, verify that the host's MPA connectivity is Up.
You can also view the fabric host's state with the GET https://<nsx-mgr>/api/v1/fabric/nodes/<fabric-node-id>/state API call:
{
"details": [],
"state": "success"
}
The management plane sends the host certificates to the control plane, and the control plane pushes
control plane information to the hosts.
The host connection to NSX Controllers is initiated and sits in "CLOSE_WAIT" status until the host is
promoted to a transport node. You can see this with the esxcli network ip connection list | grep 1234
command.
What to do next
When you create a transport zone, you must specify an N-VDS mode, which can be either Standard or
Enhanced Datapath. When you add a transport node to a transport zone, the N-VDS associated with the
transport zone is installed on the transport node. Each transport zone supports a single N-VDS. An
enhanced datapath N-VDS has the performance capabilities to support NFV (Network Functions
Virtualization) workloads, supports both VLAN and overlay networks, and requires an ESXi host that
supports enhanced datapath N-VDS.
n Multiple overlay transport zones with enhanced datapath N-VDS if the transport node is running on an
ESXi host.
If two transport nodes are in the same transport zone, VMs hosted on those transport nodes can be
attached to NSX-T Data Center logical switches that are also in that transport zone. This attachment
makes it possible for the VMs to communicate with each other, assuming that the VMs have Layer
2/Layer 3 reachability. If VMs are attached to switches that are in different transport zones, the VMs
cannot communicate with each other. Transport zones do not replace Layer 2/Layer 3 underlay
reachability requirements, but they place a limit on reachability. Put another way, belonging to the same
transport zone is a prerequisite for connectivity. After that prerequisite is met, reachability is possible but
not automatic. To achieve actual reachability, Layer 2 and (for different subnets) Layer 3 underlay
networking must be operational.
Suppose a single transport node contains both regular VMs and high-security VMs. In your network
design, the regular VMs should be able to reach each other but should not be able to reach the high-
security VMs. To accomplish this goal, you can place the secure VMs on hosts that belong to one
transport zone named secure-tz. The regular and secure VMs cannot be on the same transport node. The
regular VMs would then be on a different transport zone called general-tz. The regular VMs attach to an
NSX-T Data Center logical switch that is also in general-tz. The high-security VMs attach to an
NSX-T Data Center logical switch that is in the secure-tz. The VMs in different transport zones, even if
they are in the same subnet, cannot communicate with each other. The VM-to-logical switch connection is
what ultimately controls VM reachability. Thus, because two logical switches are in separate transport
zones, "web VM" and "secure VM" cannot reach each other.
For example, the following figure shows an NSX Edge that belongs to three transport zones: two VLAN
transport zones and overlay transport zone 2. Overlay transport zone 1 contains a host, an
NSX-T Data Center logical switch, and a secure VM. Because the NSX Edge does not belong to overlay
transport zone 1, the secure VM has no access to or from the physical architecture. In contrast, the Web
VM in overlay transport zone 2 can communicate with the physical architecture because the NSX Edge
belongs to overlay transport zone 2.
Figure: An NSX Edge with a tier-0 router connects to logical switch 2, where a host with the web VM is attached. Logical switch 1, where a host with the secure VM is attached, is not connected to the NSX Edge.
The N-VDS switch can be configured in the enhanced data path mode only on an ESXi host. The enhanced data path N-VDS supports the following traffic types:
n Overlay traffic
n VLAN traffic
See the VMware Compatibility Guide to find NIC cards that support enhanced data path.
On the VMware Compatibility Guide page, under the IO devices category, select ESXi 6.7, IO device
Type as Network, and feature as N-VDS Enhanced Datapath.
2 Download and install the NIC drivers from the My VMware page.
4 Create a transport zone with N-VDS in the enhanced data path mode.
5 Create a host transport node. Configure the enhanced data path N-VDS with logical cores and NUMA
nodes.
If the NUMA node location of either the VM or the physical NIC is not available, then the Load Balanced
Source teaming policy does not consider NUMA awareness to align VMs and NICs.
The teaming policy functions without NUMA awareness in the following conditions:
n The LAG uplink is configured with physical links from multiple NUMA nodes.
n The ESXi host failed to define NUMA information for either VM or physical links.
If you are using both ESXi and KVM hosts, one design option would be to use two different subnets for
the ESXi tunnel endpoint IP pool (sub_a) and the KVM tunnel endpoint IP Pool (sub_b). In this case, on
the KVM hosts a static route to sub_a needs to be added with a dedicated default gateway.
This is an example of the resulting routing table on an Ubuntu host where sub_a = 192.168.140.0 and
sub_b = 192.168.150.0. (The management subnet, for example, could be 192.168.130.0.)
The route can be added in at least two different ways. Of these two methods, the route persists after a host
reboot only if you add the route by editing the interface configuration. A route added with the route add command
does not persist after a host reboot.
In /etc/network/interfaces before "up ifconfig nsx-vtep0.0 up" add this static route:
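For example, with the subnets above and an assumed sub_b gateway of 192.168.150.1 (substitute the gateway for your environment), the added line is:
# Static route to the ESXi tunnel endpoint subnet (sub_a) through the sub_b gateway
up route add -net 192.168.140.0 netmask 255.255.255.0 gw 192.168.150.1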
Procedure
3 Enter the name of the IP pool, an optional description, and the network settings.
n Range of IP addresses
n Gateway
For example:
You can also view the IP pools with the GET https://<nsx-mgr>/api/v1/pools/ip-pools API call:
{
"cursor": "0036e2d8c2e8-f6d7-498e-821b-b7e44d2650a9ip-pool-1",
"sort_by": "displayName",
"sort_ascending": true,
"result_count": 1,
"results": [
{
"id": "e2d8c2e8-f6d7-498e-821b-b7e44d2650a9",
"display_name": "comp-tep",
"resource_type": "IpPool",
"subnets": [
{
"dns_nameservers": [
"192.168.110.10"
],
"allocation_ranges": [
{
"start": "192.168.250.100",
"end": "192.168.250.200"
}
],
"gateway_ip": "192.168.250.1",
"cidr": "192.168.250.0/24",
"dns_suffix": "corp.local"
}
],
"_last_modified_user": "admin",
"_last_modified_time": 1443649891178,
"_create_time": 1443649891178,
"_system_owned": false,
"_create_user": "admin",
"_revision": 0
}
]
}
What to do next
The settings defined by uplink profiles might include teaming policies, active/standby links, the transport
VLAN ID, and the MTU setting.
Uplink profiles allow you to consistently configure identical capabilities for network adapters across
multiple hosts or nodes. Uplink profiles are containers for the properties or capabilities that you want your
network adapters to have. Instead of configuring individual properties or capabilities for each network
adapter, you can specify the capabilities in uplink profiles, which you can then apply when you create
NSX-T Data Center transport nodes.
Standby uplinks are not supported with VM/appliance-based NSX Edge. When you install NSX Edge as a
virtual appliance, use the default uplink profile. For each uplink profile created for a VM-based NSX Edge,
the profile must specify only one active uplink and no standby uplink.
Note NSX Edge VMs do allow for multiple uplinks if you create a separate N-VDS for each uplink, using
a different VLAN for each. Each uplink needs a separate VLAN transport zone. This is to support a single
NSX Edge node that connects to multiple TOR switches.
Prerequisites
n Familiarize yourself with NSX Edge networking. See NSX Edge Networking Setup.
n Each uplink in the uplink profile must correspond to an up and available physical link on your
hypervisor host or on the NSX Edge node.
For example, your hypervisor host has two physical links that are up: vmnic0 and vmnic1. Suppose
vmnic0 is used for management and storage networks, while vmnic1 is unused. This might mean that
vmnic1 can be used as an NSX-T Data Center uplink, but vmnic0 cannot. To do link teaming, you
must have two unused physical links available, such as vmnic1 and vmnic2.
For an NSX Edge, tunnel endpoint and VLAN uplinks can use the same physical link. For example,
vmnic0/eth0/em0 might be used for your management network and vmnic1/eth1/em1 might be used
for your fp-ethX links.
Procedure
2 Select Fabric > Profiles > Uplink Profiles and click Add.
Option Description
LAGs (Optional) Link aggregation groups (LAGs) using Link Aggregation Control
Protocol (LACP) for the transport network.
Teamings In the Teaming section, click Add and enter the details. The teaming policy
defines how the N-VDS uses its uplink for redundancy and traffic load balancing.
There are two teaming policy modes to configure teaming policy:
n Failover Order: An active uplink is specified along with an optional list of
standby uplinks. If the active uplink fails, the next uplink in the standby list
replaces the active uplink. No actual load balancing is performed with this
option.
n Load Balanced Source: A list of active uplinks is specified, and each
interface on the transport node is pinned to one active uplink. This
configuration allows use of several active uplinks at the same time.
Note On KVM hosts, only failover order teaming policy is supported. Load
balance source teaming policy is not supported.
(Only ESXi hosts) You can define the following policies for a transport zone:
n A Named teaming policy for every logical switch configured on the switch.
n A Default teaming policy for the entire switch.
Named teaming policy: A named teaming policy means that for every logical
switch you can define a specific teaming policy mode and uplinks. This policy type
gives you the flexibility to select uplinks depending on the bandwidth requirement.
n If you define a named teaming policy, N-VDS uses that named teaming policy
if it is specified by the attached transport zone and logical switch in the host.
n If you do not define any named teaming policies, N-VDS uses the default
teaming policy.
In addition to the UI, you can also view the uplink profiles with the GET /api/v1/host-switch-profiles API call:
{
"result_count": 2,
"results": [
{
"resource_type": "UplinkHostSwitchProfile",
"id": "16146a24-122b-4274-b5dd-98b635e4d52d",
"display_name": "comp-uplink",
"transport_vlan": 250,
"teaming": {
"active_list": [
{
"uplink_type": "PNIC",
"uplink_name": "uplink-1"
}
],
"standby_list" : [ {
"uplink_name" : "uplink-2",
"uplink_type" : "PNIC"
} ],
"policy": "FAILOVER_ORDER"
},
"mtu": 1600,
"_last_modified_time": 1457984399526,
"_create_time": 1457984399526,
"_last_modified_user": "admin",
"_system_owned": false,
"_create_user": "admin",
"_revision": 0
},
{
"resource_type": "UplinkHostSwitchProfile",
"id": "c9e35cec-e9d9-4c51-b52e-17a5c1bd9a38",
"display_name": "vlan-uplink",
"transport_vlan": 100,
"teaming": {
"active_list": [
{
"uplink_type": "PNIC",
"uplink_name": "uplink-1"
}
],
"standby_list": [],
"policy": "FAILOVER_ORDER"
},
"named_teamings": [
{
"active_list": [
{
"uplink_type": "PNIC",
"uplink_name": "uplink-2"
}
],
"standby_list": [
{
"uplink_type": "PNIC",
"uplink_name": "uplink-1"
}
],
"policy": "FAILOVER_ORDER",
"name": "named teaming policy"
}
],
"mtu": 1600,
"_last_modified_time": 1457984399574,
"_create_time": 1457984399574,
"_last_modified_user": "admin",
"_system_owned": false,
"_create_user": "admin",
"_revision": 0
}
]
}
What to do next
An NSX-T Data Center environment can contain one or more transport zones based on your
requirements. A host can belong to multiple transport zones. A logical switch can belong to only one
transport zone.
NSX-T Data Center does not allow connection of VMs that are in different transport zones in the Layer 2
network. The span of a logical switch is limited to a transport zone, so virtual machines in different
transport zones cannot be on the same Layer 2 network.
The overlay transport zone is used by both host transport nodes and NSX Edges. When a host or
NSX Edge transport node is added to an overlay transport zone, an N-VDS is installed on the host or
NSX Edge.
The VLAN transport zone is used by the NSX Edge for its VLAN uplinks. When an NSX Edge is added to
a VLAN transport zone, a VLAN N-VDS is installed on the NSX Edge.
The N-VDS allows for virtual-to-physical packet flow by binding logical router uplinks and downlinks to
physical NICs.
When you create a transport zone, you must provide a name for the N-VDS that will be installed on the
transport nodes when they are later added to this transport zone. The N-VDS name can be whatever you
want it to be.
Procedure
Note In the enhanced datapath mode, only specific NIC configurations are supported. Ensure that
you configure the supported NICs.
8 Enter one or more uplink teaming policy names. These named teaming policies can be used by
logical switches attached to the transport zone. If the logical switches do not find a matching named
teaming policy, then the default uplink teaming policy is used.
10 (Optional) You can also view the new transport zone with the GET https://<nsx-mgr>/api/v1/transport-zones API call.
{
"cursor": "00369b661aed-1eaa-4567-9408-ccbcfe50b416tz-vlan",
"result_count": 2,
"results": [
{
"resource_type": "TransportZone",
"description": "comp overlay transport zone",
"id": "efd7f38f-c5da-437d-af03-ac598f82a9ec",
"display_name": "tz-overlay",
"host_switch_name": "overlay-hostswitch",
"transport_type": "OVERLAY",
"transport_zone_profile_ids": [
{
"profile_id": "52035bb3-ab02-4a08-9884-18631312e50a",
"resource_type": "BfdHealthMonitoringProfile"
}
],
"_create_time": 1459547126454,
"_last_modified_user": "admin",
"_system_owned": false,
"_last_modified_time": 1459547126454,
"_create_user": "admin",
"_revision": 0,
"_schema": "/v1/schema/TransportZone"
},
{
"resource_type": "TransportZone",
"description": "comp vlan transport zone",
"id": "9b661aed-1eaa-4567-9408-ccbcfe50b416",
"display_name": "tz-vlan",
"host_switch_name": "vlan-uplink-hostwitch",
"transport_type": "VLAN",
"transport_zone_profile_ids": [
{
"profile_id": "52035bb3-ab02-4a08-9884-18631312e50a",
"resource_type": "BfdHealthMonitoringProfile"
}
],
"_create_time": 1459547126505,
"_last_modified_user": "admin",
"_system_owned": false,
"_last_modified_time": 1459547126505,
"_create_user": "admin",
"_revision": 0,
"_schema": "/v1/schema/TransportZone"
}
]
}
What to do next
Optionally, create a custom transport-zone profile and bind it to the transport zone. You can create
custom transport-zone profiles using the POST /api/v1/transportzone-profiles API. There is no UI
workflow for creating a transport-zone profile. After the transport-zone profile is created, you can bind it to
the transport zone with the PUT /api/v1/transport-zones/<transport-zone-id> API.
For a KVM host, you can preconfigure the N-VDS, or you can have NSX Manager perform the
configuration. For an ESXi host, NSX Manager always configures the N-VDS.
Note If you plan to create transport nodes from a template VM, make sure that there are no certificates
on the host in /etc/vmware/nsx/. The netcpa agent does not create a certificate if a certificate already
exists.
Bare metal server supports an overlay and VLAN transport zone. You can use the management interface
to manage the bare metal server. The application interface allows you to access the applications on the
bare metal server.
Single physical NICs provide an IP address for both the management and application IP interfaces.
Dual physical NICs provide a physical NIC and a unique IP address for the management interface. Dual
physical NICs also provide a physical NIC and a unique IP address for the application interface.
Multiple physical NICs in a bonded configuration provide dual physical NICs and a unique IP address for
the management interface. Multiple physical NICs in a bonded configuration also provide dual physical
NICs and a unique IP address for the application interface.
Prerequisites
n The host must be joined with the management plane, and MPA connectivity must be Up on the
Fabric > Nodes > Hosts page.
n An uplink profile must be configured, or you can use the default uplink profile.
n At least one unused physical NIC must be available on the host node.
Procedure
5 Select the transport zones that this transport node belongs to.
For a non-KVM node, the N-VDS type is always Standard or Enhanced Datapath.
Option Description
N-VDS Name Must be the same as the N-VDS name of the transport zone that this node
belongs to.
NIOC Profile Select the NIOC profile from the drop-down menu.
Uplink Profile Select the uplink profile from the drop-down menu.
IP Pool If you selected Use IP Pool for IP assignment, specify the IP pool name.
Physical NICs Make sure that the physical NIC is not already in use (for example, by a standard
vSwitch or a vSphere distributed switch). Otherwise, the transport node state
remains in partial success, and the fabric node LCP connectivity fails to
establish.
For bare metal server, select the physical NIC that can be configured as the
uplink-1 port. The uplink-1 port is defined in the uplink profile.
If you only have one network adapter in your bare metal server, select that
physical NIC so that the uplink-1 port is assigned to both the management and
application interface.
Option Description
N-VDS Name Must be the same as the N-VDS name of the transport zone that this node
belongs to.
IP Pool If you selected Use IP Pool for an IP assignment, specify the IP pool name.
Physical NICs Select a physical NIC that is enhanced datapath capable. Make sure that the
physical NIC is not already in use (for example, by a standard vSwitch or a
vSphere distributed switch). Otherwise, the transport node state remains in
partial success, and the fabric node LCP connectivity fails to establish.
Option Description
CPU Config In the NUMA Node Index drop-down menu, select the NUMA node that you want
to assign to an N-VDS switch. The first NUMA node present on the node is
represented with the value 0.
You can find out the number of NUMA nodes on your host by running the esxcli
hardware memory get command.
Note If you want to change the number of NUMA nodes that have affinity with an
N-VDS switch, you can update the NUMA Node Index value.
In the Lcore per NUMA node drop-down menu, select the number of logical cores
that must be used by enhanced datapath.
You can find out the maximum number of logical cores that can be created on the
NUMA node by running the esxcli network ens maxLcores get command.
Note If you exhaust the available NUMA nodes and logical cores, any new
switch added to the transport node cannot be enabled for ENS traffic.
Option Description
N-VDS External ID Must be the same as the N-VDS name of the transport zone that this node
belongs to.
After adding the host as a transport node, the host connection to NSX Controllers changes to the Up
status.
u For ESXi, type the esxcli network ip connection list | grep 1234 command.
u For KVM, type the command netstat -anp --tcp | grep 1234.
{
"resource_type": "TransportNode",
"description": "",
"id": "95c8ce77-f895-43de-adc4-03a3ae2565e2",
"display_name": "node-comp-01b",
"tags": [],
"transport_zone_endpoints": [
{
"transport_zone_id": "efd7f38f-c5da-437d-af03-ac598f82a9ec",
"transport_zone_profile_ids": [
{
"profile_id": "52035bb3-ab02-4a08-9884-18631312e50a",
"resource_type": "BfdHealthMonitoringProfile"
}
]
}
],
"host_switches": [
{
"host_switch_profile_ids": [
{
"value": "8abdb6c0-db83-4e69-8b99-6cd85bfcc61d",
"key": "UplinkHostSwitchProfile"
},
{
"value": "9e0b4d2d-d155-4b4b-8947-fbfe5b79f7cb",
"key": "LldpHostSwitchProfile"
}
],
"host_switch_name": "overlay-hostswitch",
"pnics": [
{
"device_name": "vmnic1",
"uplink_name": "uplink-1"
}
],
"static_ip_pool_id": "c78ac522-2a50-43fe-816a-c459a210127e"
}
],
"node_id": "c551290a-f682-11e5-ae84-9f8726e1de65",
"_create_time": 1460051753373,
"_last_modified_user": "admin",
"_system_owned": false,
"_last_modified_time": 1460051753373,
"_create_user": "admin",
"_revision": 0
}
Note For a standard N-VDS, after the transport node is created, if you want to change the configuration,
such as IP assignment to the tunnel endpoint, you must do it through the NSX Manager GUI and not
through the CLI on the host.
What to do next
Migrate network interfaces from a vSphere Standard Switch to an NSX-T Virtual Distributed Switch. See
VMkernel Migration to an N-VDS Switch.
Note Automated NSX-T Data Center transport node creation is supported only on vCenter Server 6.5
Update 1, 6.5 Update 2, and 6.7.
If the transport node is already configured, then automated transport node creation is not applicable for
that node.
Prerequisites
n An uplink profile must be configured, or you can use the default uplink profile.
n At least one unused physical NIC must be available on the host node.
Procedure
Option Description
Automatically Install NSX Toggle the button to enable the installation of NSX-T Data Center on all the hosts
in the vCenter Server cluster.
Automatically Create Transport Node Toggle the button to enable the transport node creation on all the hosts in the
vCenter Server cluster. It is a required setting.
Transport Zone Select an existing transport zone from the drop-down menu.
Uplink Profile Select an existing uplink profile from the drop-down menu or create a custom
uplink profile.
Note The hosts in a cluster must have the same uplink profile.
IP Assignment Select either Use DHCP or Use IP Pool from the drop-down menu.
If you select Use IP Pool, you must allocate an existing IP pool in the network
from the drop-down menu.
Physical NICs Make sure that the physical NIC is not already in use (for example, by a standard
vSwitch or a vSphere distributed switch). Otherwise, the transport node state
remains in partial success, and the fabric node LCP connectivity fails to establish.
You can use the default uplink or assign an existing uplink from the drop-down
menu.
Click Add PNIC to increase the number of NICs in the configuration.
NSX-T Data Center installation and transport node creation on each host in the cluster starts in
parallel. The entire process depends on the number of hosts in the cluster.
When a new host is added to the vCenter Server cluster, NSX-T Data Center installation and
transport node creation happens automatically.
7 (Optional) Remove an NSX-T Data Center installation and transport node from a host in the cluster.
Prerequisites
n Familiarize yourself with the steps to create an uplink profile. See Create an Uplink Profile.
n Familiarize yourself with the steps to create a host transport node. See Create a Host Transport
Node.
Procedure
2 Select Fabric > Profiles > Uplink Profiles and click Add.
6 In the Active Uplinks field, enter the name of the LAG that you added in step 4. In this example,
the name is lag1.
12 In the N-VDS tab, select the uplink profile uplink-profile1 that was created in step 3.
13 In the Physical NICs field, you see a drop-down list of physical NICs and a drop-down list of the
uplinks that you specified when you created the uplink profile. Specifically, you see the uplinks
lag1-0 and lag1-1, corresponding to the LAG lag1 that was created in step 4. Select a physical
NIC for lag1-0 and a physical NIC for lag1-1.
The physical NICs and their VMkernel interfaces are initially attached to a VSS or VDS on a
vSphere ESXi host. These kernel interfaces are defined on these hosts to provide connectivity to the
management interface, storage, and other interfaces. After migration, the VMkernel interfaces and their
associated physical NICs connect to the N-VDS and handle traffic on the VLAN and overlay transport
zones.
In the following figure, if a host only has two physical NICs, you might want to assign both those NICs to
the N-VDS for redundancy.
Figure: The VMkernel interfaces vmk0, vmk1, and vmk2 before migration (attached to vSwitch0 on the ESXi host) and after migration (attached to the N-VDS on the ESXi host through the Management-VLAN, Storage-VLAN, and vMotion-VLAN logical switches).
Before migration, the vSphere ESXi host has two uplinks derived from the two physical ports - vmnic0
and vmnic1. Here, vmnic0 is configured to be in an active state, attached to a VSS or VDS, whereas
vmnic1 is unused. In addition, there are three VMkernel interfaces: vmk0, vmk1, and vmk2.
You migrate VMkernel interfaces by using the NSX-T Data Center Manager UI or NSX-T Data Center
APIs. See NSX-T Data Center API Guide.
Post migration, the vmnic0, vmnic1, and their VMkernel interfaces are migrated to the N-VDS switch. Both
vmnic0 and vmnic1 are connected over VLAN and the overlay transport zones.
In this example, consider a vSphere ESXi host with two physical adapters, vmnic0 and vmnic1. The
default VSS or VDS switch on the host is configured with a single uplink mapped to vmnic0. The
VMkernel interface, vmk0 is also configured on VSS or VDS to run the management traffic on the node.
The aim is to migrate vmnic0 and vmk0 to the N-VDS switch.
As part of host preparation, VLAN and overlay transport zones are created to run management and VM
traffic respectively. An N-VDS switch is also created and configured with an uplink mapped to vmnic1.
After migration, NSX-T Data Center migrates both vmnic0 and vmk0 from the VSS or VDS switch to the
N-VDS switch on the node.
Prerequisites
n Verify that the physical network infrastructure provides the same LAN connectivity to vmnic1 and
vmnic0.
n Verify that the unused physical NIC, vmnic1, has Layer 2 connectivity with vmnic0.
n Ensure that all VMkernel interfaces involved in this migration belong to the same network. If you
migrate VMkernel interfaces to an uplink connected to a different network, the host might become
unreachable or non-functional.
Procedure
1 In the NSX Manager UI, go to Fabric > Profiles > Uplink Profiles.
2 Create an uplink profile using vmnic0 as the active uplink and vmnic1 as the standby uplink.
4 Create overlay and VLAN transport zones to handle VM traffic and management traffic,
respectively.
Note The N-VDS name used in VLAN transport zone and OVERLAY transport zone must be the
same.
7 In the N-VDS tab, add an N-VDS by defining the uplinks and physical adapters to be used by the N-VDS.
The transport node is connected to the transport zones through a single uplink.
8 To ensure vmk0 and vmnic0 get connectivity to the VLAN transport zone after migration, create a
logical switch for the appropriate VLAN transport zone.
9 Select the transport node and click Actions > Migrate ESX VMkernel and Physical Adapters.
13 Add the physical adapter corresponding to the VMkernel interface. Ensure that at least one physical
adapter remains on the VSS or VDS switch.
14 Click Save.
17 Alternatively, in the vCenter Server, verify the VMkernel adapter is associated with the
NSX-T Data Center switch.
VMkernel interfaces and their corresponding physical adapters are migrated to N-VDS.
What to do next
NSX-T Data Center needs a port group to migrate VMkernel interfaces from the N-VDS switch to the VSS
or VDS switch. The port group accepts the network request to migrate these interfaces to the VSS or
VDS switch. The port member that participates in this migration is decided based on its bandwidth and
policy configuration.
Before you begin VMkernel migration back to the VSS or VDS switch, ensure that the VMkernel
interfaces are functional and connectivity is up on the N-VDS switch.
Prerequisites
Procedure
1 In the NSX Manager UI, go to Fabric > Nodes > Transport Nodes.
2 Select the transport node and click Actions > Migrate ESX VMkernel and Physical Adapters.
6 Add the physical adapter corresponding to the VMkernel interface. Ensure that at least one physical
adapter stays connected to the VSS or VDS switch.
7 Click Save.
10 Alternatively, in the vCenter Server, verify the VMkernel adapter is associated with the VSS or VDS
switch.
VMkernel interfaces and their corresponding physical adapters are migrated to N-VDS.
What to do next
You might want to migrate VMkernel interfaces using APIs. See Migrate Kernel Interfaces to an N-VDS
Using APIs.
Consider the host with two uplinks connected to respective physical NICs. In this procedure, you can
begin with migration of the storage kernel interface, vmk1, to N-VDS. After this kernel interface is
successfully migrated to N-VDS, you can migrate the management kernel interface.
Prerequisites
n Verify that the physical network infrastructure provides the same LAN connectivity to vmnic1 and
vmnic0.
n Verify that the unused physical NIC, vmnic1, has Layer 2 connectivity with vmnic0.
n Ensure that all VMkernel interfaces involved in this migration belong to the same network. If you
migrate VMkernel interfaces to an uplink connected to a different network, the host might become
unreachable or non-functional.
Procedure
1 Create a VLAN transport zone with the host_switch_name of the N-VDS used by the OVERLAY
transport zone.
2 Create a VLAN-backed logical switch in the VLAN transport zone with a VLAN ID that matches the
VLAN ID used by vmk1 on the VSS or VDS.
3 Add the vSphere ESXi transport node to the VLAN transport zone.
GET /api/v1/transport-nodes/<transportnode-id>
PUT https://<NSXmgr>/api/v1/transport-nodes/<transportnode-id>?if_id=<vmk>&esx_mgmt_if_migration_dest=<network>
Where the <transportnode-id> is the UUID of the transport node. The <vmk> is the name of the
VMkernel interface, vmk1. The <network> is the UUID of the target logical switch.
GET /api/v1/transport-nodes/<transportnode-id>/state
Wait until the migration state appears as SUCCESS. You can also verify the migration status of the
VMkernel interface in vCenter Server.
What to do next
You can migrate the remaining VMkernel interfaces and the management kernel interface of the VSS or
VDS to the N-VDS.
Then you can migrate the physical uplink vmnic0 and vmk0 to the N-VDS together in one step. Modify the
transport node configuration so that the vmnic0 is now configured as one of its uplinks.
Note If you want to migrate the uplink vmnic0 and kernel interface vmk0 separately, first migrate vmk0
and then migrate vmnic0. If you first migrate vmnic0, then vmk0 remains on the VSS or VDS without any
backing uplinks and you lose connectivity to the host.
Prerequisites
n Verify connectivity to the already migrated vmknics. See Migrate Kernel Interfaces to an N-VDS Using
APIs.
n If vmk0 and vmk1 use different VLANs, trunk VLAN must be configured on the physical switch
connected to PNICs vmnic0 and vmnic1 to support both VLANs.
n Verify that an external device can reach interfaces vmk1 on storage VLAN-backed logical switch and
vmk2 on the vMotion VLAN-backed logical switch.
Procedure
1 (Optional) Create a second management kernel interface on VSS or VDS and migrate this newly
created interface to N-VDS.
2 (Optional) From an external device, verify connectivity to the test management interface.
3 If vmk0 (management interface) uses a different VLAN than vmk1 (storage interface), create a VLAN-
backed logical switch in the VLAN transport zone with a VLAN ID that matches the VLAN ID used by
vmk0 on the VSS or VDS.
GET /api/v1/transport-nodes/<transportnode-id>
5 In the host_switch_spec:host_switches element of the configuration, add the vmnic0 to the pnics
table and assign it to a dedicated uplink, uplink-2.
Note While migrating the VMkernel interfaces, vmnic1 was assigned to uplink-1. You must assign
vmnic0, the management interface, to a dedicated uplink for the migration to succeed and for the
host to remain reachable after migration.
"pnics": [ {
"device_name": "vmnic0",
"uplink_name": "uplink-2"
},
{
"device_name": "vmnic1",
"uplink_name": "uplink-1"
}
],
6 Migrate the management kernel interface, vmk0 to N-VDS using the updated configuration.
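A sketch of the call, reusing the migration API format shown earlier in this chapter, with the updated transport node configuration from step 5 as the request body:
PUT https://<NSXmgr>/api/v1/transport-nodes/<transportnode-id>?if_id=<vmk>&esx_mgmt_if_migration_dest=<network>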
Where, <transportnode-id> is the UUID of the transport node. The <vmk> is the name of the
VMkernel management interface vmk0. The <network> is the UUID of the target logical switch.
GET /api/v1/transport-nodes/<transportnode-id>/state
Wait until the migration state appears as SUCCESS. In vCenter Server, you can verify whether the
kernel adapters are configured to display the new logical switch name.
What to do next
You can choose to revert the migration of the kernel interfaces and management interface from N-VDS to
a VSS or VDS switch.
Procedure
GET /api/v1/transport-nodes/<transportnode-id>/state
2 Retrieve the vSphere ESXi transport node configuration to find the physical NICs defined inside the
"host_switch_spec":"host_switches" element.
GET /api/v1/transport-nodes/<transportnode-id>
"pnics": [
{ "device_name": "vmnic0",
"uplink_name": "uplink-2"
},
{ "device_name": "vmnic1",
"uplink_name": "uplink-1"
}
],
"pnics": [
{ "device_name": "vmnic1",
"uplink_name": "uplink-1"
}
],
4 Migrate the management interface, vmnic0 and vmk0, from N-VDS to VSS or VDS, using the
modified configuration.
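A sketch of the call, again using the migration API format shown earlier, with the VSS or VDS port group as the destination and the modified transport node configuration from the previous step as the request body:
PUT https://<NSXmgr>/api/v1/transport-nodes/<transportnode-id>?if_id=vmk0&esx_mgmt_if_migration_dest=<vmk0_port_group>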
Where, <vmk0_port_group> is the port group name that was assigned to vmk0 before migrating to
the logical switch.
GET /api/v1/transport-nodes/<transportnode-id>/state
GET /api/v1/transport-nodes/<transportnode-id>
7 Migrate vmk1 from N-VDS to VSS or VDS, using the preceding transport node configuration.
Where, <vmk1_port_group> is the port group name that was assigned to vmk1 before migrating to
the logical switch.
Note vmk0 or vmk1 must be migrated back to the VSS or VDS together with at least one physical NIC because the
VSS or VDS no longer has any physical NIC associated with it.
GET /api/v1/transport-nodes/<transportnode-id>/state
a The management kernel interface, vmk0, must not be migrated before an uplink interface is
attached to the VSS or VDS.
b Ensure that vmk0 receives its IP address from vmnic0; otherwise, the IP address might change and other
components, such as vCenter Server, might lose connectivity to the host through the old IP address.
After creating a host transport node, the N-VDS gets installed on the host.
Procedure
3 Alternatively, view the N-VDS on ESXi with the esxcli network ip interface list command.
On ESXi, the command output should include a vmk interface (for example, vmk10) with a VDS
name that matches the name you used when you configured the transport zone and the transport
node.
vmk10
Name: vmk10
MAC Address: 00:50:56:64:63:4c
Enabled: true
Portset: DvsPortset-1
Portgroup: N/A
Netstack Instance: vxlan
VDS Name: overlay-hostswitch
VDS UUID: 18 ae 54 04 2c 6f 46 21-b8 ae ef ff 01 0c aa c2
VDS Port: 10
VDS Connection: 10
...
If you are using the vSphere Client, you can view the installed N-VDS in the UI by selecting host
Configuration > Network Adapters.
The KVM command to verify the N-VDS installation is ovs-vsctl show. Note that on KVM, the N-VDS
name is nsx-switch.0. It does not match the name in the transport node configuration. This is by
design.
# ovs-vsctl show
...
Bridge "nsx-switch.0"
Port "nsx-uplink.0"
Interface "em2"
Port "nsx-vtep0.0"
tag: 0
Interface "nsx-vtep0.0"
type: internal
Port "nsx-switch.0"
Interface "nsx-switch.0"
type: internal
ovs_version: "2.4.1.3340774"
The vmk10 interface receives an IP address from the NSX-T Data Center IP pool or DHCP, as shown
here:
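On ESXi, one way to check is the esxcli network ip interface ipv4 get command. Representative output for vmk10 (columns abbreviated, with the address values taken from the IP pool example earlier in this chapter) looks similar to this:
esxcli network ip interface ipv4 get | grep vmk10
vmk10  192.168.250.104  255.255.255.0  192.168.250.255  STATIC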
In KVM, you can verify the tunnel endpoint and IP allocation with the ifconfig command.
# ifconfig
...
nsx-vtep0.0 Link encap:Ethernet HWaddr ba:30:ae:aa:26:53
inet addr:192.168.250.4 Bcast:192.168.250.255 Mask:255.255.255.0
...
{
"state": "success",
"host_switch_states": [
{
"endpoints": [
{
"default_gateway": "192.168.250.1",
"device_name": "vmk10",
"ip": "192.168.250.104",
"subnet_mask": "255.255.255.0",
"label": 69633
}
],
"transport_zone_ids": [
"efd7f38f-c5da-437d-af03-ac598f82a9ec"
],
"host_switch_name": "overlay-hostswitch",
"host_switch_id": "18 ae 54 04 2c 6f 46 21-b8 ae ef ff 01 0c aa c2"
}
],
"transport_node_id": "2d030569-5769-4a13-8918-0c309c63fdb9"
}
Note NSX-T Data Center does not support the same vCenter Server to be registered with more than
one NSX Manager.
Procedure
3 Click Add.
Option Description
Name and Description Type the name to identify the vCenter Server.
You can optionally describe any special details such as, the number of clusters in
the vCenter Server.
If you left the thumbprint value blank, you are prompted to accept the server provided thumbprint.
After you accept the thumbprint, it takes a few seconds for NSX-T Data Center to discover and
register the vCenter Server resources.
5 If the progress icon changes from In progress to Not registered, perform the following steps to
resolve the error.
a Select the error message and click Resolve. One possible error message is the following:
The Compute Managers panel shows a list of compute managers. You can click the manager's name to
see or edit details about the manager, or to manage tags that apply to the manager.
Procedure
See TCP and UDP Ports Used by vSphere ESXi, KVM Hosts, and Bare Metal Server.
See Add a Hypervisor Host or Bare Metal Server to the NSX-T Data Center Fabric.
See https://github.com/vmware/bare-metal-server-integration-with-nsxt.
NIOC profile introduces a mechanism to reserve bandwidth for system traffic based on the capacity of the
physical adapters on a host. Version 3 of the Network I/O Control feature offers improved network
resource reservation and allocation across the entire switch.
Network I/O Control version 3 for NSX-T Data Center supports resource management of system traffic
related to virtual machines and to infrastructure services, such as vSphere Fault Tolerance, and so on.
System traffic is strictly associated with a vSphere ESXi host.
n NFS Traffic: Traffic related to file transfers in the network file system.
vCenter Server propagates the allocation from the distributed switch to each physical adapter on
the hosts that are connected to the switch.
n Shares: Shares, from 1 to 100, reflect the relative priority of a system traffic type against the other
system traffic types that are active on the same physical adapter. The relative shares assigned to a
system traffic type and the amount of data transmitted by other system features determine the
available bandwidth for that system traffic type.
n Reservation: The minimum bandwidth, in Mbps, that must be guaranteed on a single physical
adapter. The total bandwidth reserved among all system traffic types cannot exceed 75 percent of the
bandwidth that the physical network adapter with the lowest capacity can provide. Reserved
bandwidth that is unused becomes available to other types of system traffic. However, Network I/O
Control does not redistribute the capacity that system traffic does not use to virtual machine
placement.
n Limit: The maximum bandwidth, in Mbps or Gbps, that a system traffic type can consume on a single
physical adapter.
Note You can reserve no more than 75 percent of the bandwidth of a physical network adapter. For
example, if the network adapters connected to an ESXi host are 10 GbE, you can only allocate 7.5 Gbps
bandwidth to the various traffic types. You might leave more capacity unreserved. The host can allocate
the unreserved bandwidth dynamically according to shares, limits, and use. The host reserves only the
bandwidth that is enough for the operation of a system feature.
Procedure
4 Click + ADD.
c In the Host Infra Traffic Resource section, select a Traffic Type and enter values for Limit, Shares,
Reservation.
6 Click Add.
Procedure
1 Query the host to display both system-defined and user-defined host switch profiles.
2 GET https://<nsx-mgr>/api/v1/host-switch-profiles?include_system_owned=true.
The sample response below displays the NIOC profile that is applied to the host.
{
"description": "This profile is created for Network I/O Control (NIOC).",
"extends": {
"$ref": "BaseHostSwitchProfile"+
},
"id": "NiocProfile",
"module_id": "NiocProfile",
"polymorphic-type-descriptor": {
"type-identifier": "NiocProfile"
},
"properties": {
"_create_time": {
"$ref": "EpochMsTimestamp"+,
"can_sort": true,
"description": "Timestamp of resource creation",
"readonly": true
},
"_create_user": {
"description": "ID of the user who created this resource",
"readonly": true,
"type": "string"
},
"_last_modified_time": {
"$ref": "EpochMsTimestamp"+,
"can_sort": true,
"description": "Timestamp of last modification",
"readonly": true
},
"_last_modified_user": {
"description": "ID of the user who last modified this resource",
"readonly": true,
"type": "string"
},
"_links": {
"description": "The server will populate this field when returning the resource. Ignored on PUT
and POST.",
"items": {
"$ref": "ResourceLink"+
},
"readonly": true,
"title": "References related to this resource",
"type": "array"
},
"_protection": {
"description": "Protection status is one of the following:
PROTECTED - the client who retrieved the entity is not allowed to modify it.
NOT_PROTECTED - the client who retrieved the entity is allowed to modify it
REQUIRE_OVERRIDE - the client who retrieved the entity is a super user and can modify it,
but only when providing the request header X-Allow-Overwrite=true.
UNKNOWN - the _protection field could not be determined for this entity.",
"readonly": true,
"title": "Indicates protection status of this resource",
"type": "string"
},
"_revision": {
"description": "The _revision property describes the current revision of the resource.
To prevent clients from overwriting each other's changes, PUT operations must include the
current _revision of the resource,
which clients should obtain by issuing a GET operation.
If the _revision provided in a PUT request is missing or stale, the operation will
be rejected.",
"readonly": true,
"title": "Generation of this resource config",
"type": "int"
},
"_schema": {
"readonly": true,
"title": "Schema for this resource",
"type": "string"
},
"_self": {
"$ref": "SelfResourceLink"+,
"readonly": true,
"_system_owned": {
"description": "Indicates system owned resource",
"readonly": true,
"type": "boolean"
},
"description": {
"can_sort": true,
"maxLength": 1024,
"title": "Description of this resource",
"type": "string"
},
"display_name": {
"can_sort": true,
"description": "Defaults to ID if not set",
"maxLength": 255,
"title": "Identifier to use when displaying entity in logs or GUI",
"type": "string"
},
"enabled": {
"default": true,
"description": "The enabled property specifies the status of NIOC feature.
When enabled is set to true, NIOC feature is turned on and the bandwidth allocations
specified for the traffic resources are enforced.
When enabled is set to false, NIOC feature is turned off and no bandwidth allocation is
guaranteed.",
"nsx_feature": "Nioc",
"required": false,
"title": "Enabled status of NIOC feature",
"type": "boolean"
},
"host_infra_traffic_res": {
"description": "host_infra_traffic_res specifies bandwidth allocation for various traffic
resources.",
"items": {
"$ref": "ResourceAllocation"+
},
"nsx_feature": "Nioc",
"required": false,
"title": "Resource allocation associated with NiocProfile",
"type": "array"
},
"id": {
"can_sort": true,
"readonly": true,
"title": "Unique identifier of this resource",
"type": "string"
},
"required_capabilities": {
"help_summary":
"List of capabilities required on the fabric node if this profile is
used.
The required capabilities is determined by whether specific features are enabled in the
profile.",
"items": {
"type": "string"
},
"readonly": true,
"required": false,
"type": "array"
},
"resource_type": {
"$ref": "HostSwitchProfileType"+,
"required": true
},
"tags": {
"items": {
"$ref": "Tag"+
},
"maxItems": 30,
"title": "Opaque identifiers meaningful to the API user",
"type": "array"
}
},
"title": "Profile for Nioc",
"type": "object"
}
POST https://<nsx-mgr>/api/v1/host-switch-profiles
{
"description": "Specify limit, shares and reservation for all kinds of traffic.
Values for limit and reservation are expressed in percentage. And for shares,
the value is expressed as a number between 1-100.\nThe overall reservation among all traffic
types should not exceed 75%.
Otherwise, the API request will be rejected.",
"id": "ResourceAllocation",
"module_id": "NiocProfile",
"nsx_feature": "Nioc",
"properties": {
"limit": {
"default": -1.0,
"description": "The limit property specifies the maximum bandwidth allocation for a given
traffic type and is expressed in percentage. The default value for this
field is set to -1 which means the traffic is unbounded for the traffic
type. All other negative values for this property is not supported\nand will be rejected by
the API.",
"maximum": 100,
"minimum": -1,
"required": true,
"title": "Maximum bandwidth percentage",
"type": "number"
},
"reservation": {
"default": 0.0,
"maximum": 75,
"minimum": 0,
"required": true,
"title": "Minimum guaranteed bandwidth percentage",
"type": "number"
},
"shares": {
"default": 50,
"maximum": 100,
"minimum": 1,
"required": true,
"title": "Shares",
"type": "int"
},
"traffic_type": {
"$ref": "HostInfraTrafficType"+,
"required": true,
"title": "Resource allocation traffic type"
}
},
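An illustrative request body for this POST call, built from the field names in the schemas above. The display name, traffic type identifiers, and percentage values are placeholders; the full set of valid traffic types is defined by HostInfraTrafficType in the NSX-T Data Center API Guide:
{
  "resource_type": "NiocProfile",
  "display_name": "nioc-profile-example",
  "enabled": true,
  "host_infra_traffic_res": [
    {
      "traffic_type": "VMOTION",
      "limit": -1,
      "reservation": 10,
      "shares": 50
    },
    {
      "traffic_type": "MANAGEMENT",
      "limit": -1,
      "reservation": 5,
      "shares": 50
    }
  ]
}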
4 Update the transport node configuration with the NIOC profile ID of the newly created NIOC profile.
PUT https://<nsx-mgr>/api/v1/transport-nodes/<TN-id>
{
"resource_type": "TransportNode",
"description": "Updated NSX configured Test Transport Node",
"id": "77816de2-39c3-436c-b891-54d31f580961",
"display_name": "NSX Configured TN",
"host_switch_spec": {
"resource_type": "StandardHostSwitchSpec",
"host_switches": [
{
"host_switch_profile_ids": [
{
"value": "e331116d-f59e-4004-8cfd-c577aefe563a",
"key": "UplinkHostSwitchProfile"
},
{
"value": "9e0b4d2d-d155-4b4b-8947-fbfe5b79f7cb",
"key": "LldpHostSwitchProfile"
},
{
"value": "b0185099-8003-4678-b86f-edd47ca2c9ad",
"key": "NiocProfile"
}
],
"host_switch_name": "nsxvswitch",
"pnics": [
{
"device_name": "vmnic1",
"uplink_name": "uplink1"
}
],
"ip_assignment_spec": {
"resource_type": "StaticIpPoolSpec",
"ip_pool_id": "ecddcdde-4dc5-4026-ad4f-8857995d4c92"
}
}
]
},
"transport_zone_endpoints": [
{
"transport_zone_id": "e14c6b8a-9edd-489f-b624-f9ef12afbd8f",
"transport_zone_profile_ids": [
{
"profile_id": "52035bb3-ab02-4a08-9884-18631312e50a",
"resource_type": "BfdHealthMonitoringProfile"
}
]
}
],
"host_switches": [
{
"host_switch_profile_ids": [
{
"value": "e331116d-f59e-4004-8cfd-c577aefe563a",
"key": "UplinkHostSwitchProfile"
},
{
"value": "9e0b4d2d-d155-4b4b-8947-fbfe5b79f7cb",
"key": "LldpHostSwitchProfile"
}
],
"host_switch_name": "nsxvswitch",
"pnics": [
{
"device_name": "vmnic1",
"uplink_name": "uplink1"
}
],
"static_ip_pool_id": "ecddcdde-4dc5-4026-ad4f-8857995d4c92"
}
],
"node_id": "41a4eebd-d6b9-11e6-b722-875041b9955d",
"_revision": 0
}
5 Verify that the NIOC profile parameters are updated in the com.vmware.common.respools.cfg
section.
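For example, on an ESXi host you can dump the host switch configuration and locate the com.vmware.common.respools.cfg values with the following command (a sketch; the exact output layout depends on the host version):
net-dvs -l
Look for output similar to the following: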
NIOC Version:3
Active Uplink Bit Map:15
Parent Respool ID:netsched.pools.persist.vm
The NIOC profile is configured with the pre-defined bandwidth allocations for applications running on NSX-T Data Center hosts.
An NSX Edge can belong to one overlay transport zone and multiple VLAN transport zones. If a VM
requires access to the outside world, the NSX Edge must belong to the same transport zone that the
VM's logical switch belongs to. Generally, the NSX Edge belongs to at least one VLAN transport zone to
provide the uplink access.
Note If you plan to create transport nodes from a template VM, make sure that there are no certificates
on the host in /etc/vmware/nsx/. The netcpa agent does not create a new certificate if a certificate
already exists.
Prerequisites
n The NSX Edge must be joined with the management plane, and MPA connectivity must be Up on the
Fabric > Edges page. See Join NSX Edge with the Management Plane.
n An uplink profile must be configured or you can use the default uplink profile for bare-metal
NSX Edge nodes.
n At least one unused physical NIC must be available on the host or NSX Edge node.
Procedure
5 Select the transport zones that this transport node belongs to.
An NSX Edge transport node belongs to at least two transport zones, an overlay for
NSX-T Data Center connectivity and a VLAN for uplink connectivity.
Option Description
N-VDS Name Must match the names that you configured when you created the transport zones.
Uplink Profile Select the uplink profile from the drop-down menu.
The available uplinks depend on the configuration in the selected uplink profile.
IP Assignment Select Use IP Pool or Use Static IP List for the overlay N-VDS.
If you select Use Static IP List, you must specify a list of comma-separated IP
addresses, a gateway, and a subnet mask.
IP Pool If you selected Use IP Pool for IP assignment, specify the IP pool name.
Physical NICs Unlike a host transport node, which uses vmnicX as the physical NIC, an
NSX Edge transport node uses fp-ethX.
GET https://<nsx-mgr>/api/v1/transport-nodes/78a03020-a3db-44c4-a8fa-f68ad4be6a0c
{
"resource_type": "TransportNode",
"id": "78a03020-a3db-44c4-a8fa-f68ad4be6a0c",
"display_name": "node-comp-01b",
"transport_zone_endpoints": [
{
"transport_zone_id": "efd7f38f-c5da-437d-af03-ac598f82a9ec",
"transport_zone_profile_ids": [
{
"profile_id": "52035bb3-ab02-4a08-9884-18631312e50a",
"resource_type": "BfdHealthMonitoringProfile"
}
]
}
],
"host_switches": [
{
"host_switch_profile_ids": [
{
"value": "8abdb6c0-db83-4e69-8b99-6cd85bfcc61d",
"key": "UplinkHostSwitchProfile"
},
{
"value": "9e0b4d2d-d155-4b4b-8947-fbfe5b79f7cb",
"key": "LldpHostSwitchProfile"
}
],
"host_switch_name": "overlay-hostswitch",
"pnics": [
{
"device_name": "vmnic1",
"uplink_name": "uplink-1"
}
],
"static_ip_pool_id": "c78ac522-2a50-43fe-816a-c459a210127e"
}
],
"node_id": "c551290a-f682-11e5-ae84-9f8726e1de65",
"_create_time": 1459547122893,
"_last_modified_user": "admin",
"_last_modified_time": 1459547126740,
"_create_user": "admin",
"_revision": 1
}
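The runtime state of a transport node, such as the example that follows, can typically be retrieved through the status API, for example GET https://<nsx-mgr>/api/v1/transport-nodes/<transport-node-id>/status (the exact URL shown here is an assumption based on the transport-nodes endpoint used above).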
{
"control_connection_status": {
"degraded_count": 0,
"down_count": 0,
"up_count": 1,
"status": "UP"
},
"tunnel_status": {
"down_count": 0,
"up_count": 0,
"status": "UNKNOWN",
"bfd_status": {
"bfd_admin_down_count": 0,
"bfd_up_count": 0,
"bfd_init_count": 0,
"bfd_down_count": 0
},
"bfd_diagnostic": {
"echo_function_failed_count": 0,
"no_diagnostic_count": 0,
"path_down_count": 0,
"administratively_down_count": 0,
"control_detection_time_expired_count": 0,
"forwarding_plane_reset_count": 0,
"reverse_concatenated_path_down_count": 0,
"neighbor_signaled_session_down_count": 0,
"concatenated_path_down_count": 0
}
},
"pnic_status": {
"degraded_count": 0,
"down_count": 0,
"up_count": 4,
"status": "UP"
},
"mgmt_connection_status": "UP",
"node_uuid": "cd4a8501-0ffc-44cf-99cd-55980d3d8aa6",
"status": "UNKNOWN"
}
What to do next
Add the NSX Edge node to an NSX Edge cluster. See Create an NSX Edge Cluster.
An NSX Edge transport node can be added to only one NSX Edge cluster.
After creating the NSX Edge cluster, you can later edit it to add additional NSX Edges.
Prerequisites
n Optionally, create an NSX Edge cluster profile for high availability (HA) at Fabric > Profiles > Edge
Cluster Profiles. You can also use the default NSX Edge cluster profile.
Procedure
Physical Machine refers to NSX Edges that are installed on bare metal. Virtual Machine refers to
NSX Edges that are installed as virtual machines/appliances.
6 For Virtual Machine, select either NSX Edge Node or Public Cloud Gateway Node from the Member
Type drop-down menu.
If the virtual machine is deployed in a public cloud environment, select Public Cloud Gateway; otherwise, select NSX Edge Node.
7 From the Available column, select NSX Edges and click the right-arrow to move them to the
Selected column.
What to do next
You can now build logical network topologies and configure services. See the NSX-T Data Center
Administration Guide.
NSX Cloud is agnostic of provider-specific networking and does not require hypervisor access in a public cloud.
n You can develop and test applications using the same network and security profiles used in the
production environment.
n Developers can manage their applications until they are ready for deployment.
n With disaster recovery, you can recover from an unplanned outage or a security threat to your public
cloud.
n If you migrate your workloads between public clouds, NSX Cloud ensures that similar security policies
are applied to workload VMs regardless of their new location.
n Deploy PCG
n Undeploy PCG
Figure: NSX Cloud topology. The PCG's management and downlink subnets in the public cloud connect over the Internet, optionally through a firewall and VPN gateways, to the on-premises hypervisors.
n NSX Manager for the management plane with role-based access control (RBAC) defined.
n Cloud Service Manager for integration with NSX Manager to provide public cloud-specific information
to the management plane.
n NSX Public Cloud Gateway for connectivity to the NSX management and control planes, NSX Edge
gateway services, and for API-based communications with the public cloud entities.
n NSX Agent functionality that provides NSX-managed datapath for workload VMs.
Workflow for Microsoft Azure: Start; use the Setup Wizard to connect CSM with NSX Manager and add a proxy server, if any; connect with on-prem NSX using a suitable option, for example, site-to-site peering; then deploy the NSX Cloud PCG on your VNet configured for NSX.
Workflow for AWS: Start; use the Setup Wizard to connect CSM with NSX Manager and add a proxy server, if any; connect with on-prem NSX using a suitable option, for example, Direct Connect; then deploy the NSX Cloud PCG on your VPC configured for NSX.
Install CSM
The Cloud Service Manager (CSM) is an essential component of NSX Cloud.
Install CSM after installing the core NSX-T Data Center components.
See Install NSX Manager and Available Appliances for detailed instructions.
In addition, you must also enable publishing the NSX Manager's FQDN using the NSX-T API.
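A sketch of this call, assuming the NSX Manager management configuration endpoint, is shown below; the example request body follows.
PUT https://<nsx-mgr>/api/v1/configs/management
Example request: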
{
"publish_fqdns": true,
"_revision": 0
}
Example response:
{
"publish_fqdns": true,
"_revision": 1
}
Prerequisites
n NSX Manager must be installed and you must have admin privileges to log in to NSX Manager
n CSM must be installed and you must have the Enterprise Administrator role assigned in CSM.
Procedure
4 Click System > Settings. Then click Configure on the panel titled Associated NSX Node.
Note You can also provide these details when using the CSM Setup Wizard that is available when
you first install CSM.
Option Description
NSX Manager Host Name Enter the fully qualified domain name (FQDN) of the NSX Manager, if available.
You may also enter the IP address of NSX Manager.
Admin Credentials Enter a username and password with the Enterprise Administrator role.
Manager Thumbprint Enter the NSX Manager's thumbprint value that you obtained in step 2.
6 Click Connect.
All public cloud communication from PCG and CSM is routed through the selected proxy server.
Proxy settings for PCG are independent of proxy settings for CSM; you can use no proxy server or a different proxy server for PCG.
n Credentials-based authentication.
n No authentication.
Procedure
1 Click System > Settings. Then click Configure on the panel titled Proxy Servers.
Note You can also provide these details when using the CSM Setup Wizard that is available when
you first install CSM.
Option Description
Default Use this radio button to indicate the default proxy server.
Authentication Optional. If you want to set up additional authentication, select this check box and
provide valid username and password.
Option Description
No Proxy Select this option if you do not want to use any of the proxy servers configured.
Open up the following network ports and protocols to allow connectivity with your on-prem NSX Manager
deployment:
Table 9‑1. (columns: From, To, Protocol/Port, Description)
Important All NSX-T Data Center infrastructure communication leverages SSL-based encryption.
Ensure your firewall allows SSL traffic over non-standard ports.
Note You must have already installed and connected NSX Manager with CSM in your on-prem
deployment.
Overview
n Connect your Microsoft Azure subscription with on-prem NSX-T Data Center.
n Configure your VNets with the necessary CIDR blocks and subnets required by NSX Cloud.
n Synchronize time on the CSM appliance with the Microsoft Azure Storage server or NTP.
Connect your Microsoft Azure subscription with on-prem NSX-T Data Center
Every public cloud provides options to connect with an on-premises deployment. You can choose any of
the available connectivity options that suit your requirements. See Microsoft Azure reference
documentation for details.
Note You must review and implement the applicable security considerations and best practices by
Microsoft Azure, for example, all privileged user accounts accessing the Microsoft Azure portal or API
should have Multi Factor Authentication (MFA) enabled. MFA ensures only a legitimate user can access
the portal and reduces the likelihood of access even if credentials are stolen or leaked. For more
information and recommendations, refer to the Azure Security Center Documentation.
n One downlink subnet with a recommended range of /24, for the workload VMs.
n One, or two for HA, uplink subnets with a recommended range of /24, for routing of north-south traffic
leaving from or entering the VNet.
Note You must have already installed and connected NSX Manager with CSM in your on-prem
deployment.
Overview
n Connect your AWS account with on-prem NSX Manager appliances using any of the available options
that best suit your requirements.
n Configure your VPC with subnets and other requirements for NSX Cloud.
Connect your AWS account with your on-prem NSX-T Data Center
deployment
Every public cloud provides options to connect with an on-premises deployment. You can choose any of
the available connectivity options that suit your requirements. See AWS reference documentation for
details.
Note You must review and implement the applicable security considerations and best practices by AWS;
see AWS Security Best Practices.
1 Assuming your VPC uses a /16 network, for each gateway that needs to be deployed, set up three
subnets.
Important If using High Availability, set up three additional subnets in a different Availability Zone.
n Management subnet: This subnet is used for management traffic between on-prem
NSX-T Data Center and PCG. The recommended range is /28.
n Uplink subnet: This subnet is used for north-south internet traffic. The recommended range
is /24.
n Downlink subnet: This subnet encompasses the workload VM's IP address range, and should
be sized accordingly. Bear in mind that you may need to incorporate additional interfaces on the
workload VMs for debugging purposes.
2 Ensure you have an Internet gateway (IGW) that is attached to this VPC.
3 Ensure the routing table for the VPC has the Destination set to 0.0.0.0/0 and the Target is the
IGW attached to the VPC.
4 Ensure you have DNS resolution and DNS hostnames enabled for this VPC.
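If you prefer to verify or apply steps 2 through 4 from the command line, a hedged AWS CLI sketch (the VPC, route table, and internet gateway IDs are placeholders) might look like this:
aws ec2 describe-internet-gateways --filters Name=attachment.vpc-id,Values=<vpc-id>
aws ec2 create-route --route-table-id <rtb-id> --destination-cidr-block 0.0.0.0/0 --gateway-id <igw-id>
aws ec2 modify-vpc-attribute --vpc-id <vpc-id> --enable-dns-support "{\"Value\":true}"
aws ec2 modify-vpc-attribute --vpc-id <vpc-id> --enable-dns-hostnames "{\"Value\":true}"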
Note If you already added an AWS account to CSM, update the MTU in NSX Manager > Fabric >
Profiles > Uplink Profiles > PCG-Uplink-HostSwitch-Profile to 1500 before adding the Microsoft Azure
account. This can also be done using the NSX Manager REST APIs.
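If you update the MTU through the API instead, one possible sketch is to retrieve the uplink profile and resend it with only the MTU changed (the profile ID is a placeholder; every other field must be carried over unchanged from the GET response):
GET https://<nsx-mgr>/api/v1/host-switch-profiles/<uplink-profile-id>
PUT https://<nsx-mgr>/api/v1/host-switch-profiles/<uplink-profile-id>
{
... fields returned by the GET response, unchanged ...
"mtu": 1500,
"_revision": <revision from the GET response>
}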
For NSX Cloud to operate in your Microsoft Azure subscription, you need to create a Service Principal that grants NSX-T Data Center the required access, as well as MSI roles for CSM and PCG.
This is achieved by running the NSX Cloud PowerShell script, which also requires two JSON files as input parameters. When you run the PowerShell script with the required parameters, the following constructs are created:
n an Azure Resource Manager Service Principal for the NSX Cloud application.
Note The response time from Microsoft Azure can cause the script to fail when you run it the first time. If
the script fails, try running it again.
Prerequisites
n You must be the owner of the Microsoft Azure subscription for which you want to run the script to
generate the NSX Cloud Service Principal.
Procedure
2 Extract the following contents of the ZIP file in your Windows system:
Filename Description
CreateNSXRoles.ps1 This is the PowerShell script to generate the NSX Cloud Service Principal and
MSI roles for CSM and PCG
nsx_csm_role.json This file contains the CSM role name and permissions for this role in Microsoft
Azure. This is an input to the PowerShell script and must be in the same folder as
the script.
nsx_pcg_role.json This file contains the PCG role name and permissions for this role in Microsoft
Azure. This is an input to the PowerShell script and must be in the same folder as
the script. The default PCG (Gateway) Role Name is nsx-pcg-role.
Note If you are creating roles for multiple subscriptions in your Microsoft Azure Active Directory, you
must change the CSM and PCG role names for each subscription in the respective JSON files and
rerun the script.
3 Run the script with your Microsoft Azure Subscription ID as a parameter. The parameter name is
subscriptionId.
For example,
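assuming the script and both JSON files are in the current working folder, the invocation might look like this sketch:
.\CreateNSXRoles.ps1 -subscriptionId <your_subscription_ID>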
This creates a Service Principal for NSX Cloud, a role with appropriate privileges for CSM and PCG,
and attaches the CSM and PCG roles to the NSX Cloud Service Principal.
4 Look for a file in the same directory where you ran the PowerShell script. It is named NSXCloud_ServicePrincipal_<your_subscription_ID>_<NSX_Cloud_Service_Principal_name>. This file contains the information you need to add your Microsoft Azure subscription in CSM:
n Client ID
n Client Key
n Tenant ID
n Subscription ID
Note Refer to the JSON files that are used to create the CSM and PCG roles for a list of
permissions available to them after the roles are created.
What to do next
Prerequisites
n You must have the Enterprise Administrator role in NSX-T Data Center.
n You must have the output of the PowerShell script with details of the NSX Cloud Service Principal.
n You must have the value of the PCG role you provided when running the PowerShell script to create
the roles and the Service Principal.
Procedure
Option Description
Name Provide a suitable name to identify this account in CSM. You may have multiple Microsoft Azure subscriptions associated with the same Microsoft Azure tenant ID; name each account appropriately in CSM, for example, Azure-DevOps-Account or Azure-Finance-Account.
Client ID Copy and paste this value from the output of the PowerShell script.
Key Copy and paste this value from the output of the PowerShell script.
Subscription ID Copy and paste this value from the output of the PowerShell script.
Tenant ID Copy and paste this value from the output of the PowerShell script.
Gateway Role Name The default value is nsx-pcg-role. This value is available from the
nsx_pcg_role.json file if you changed the default.
Cloud Tags By default this option is enabled and allows your Microsoft Azure tags to be
visible in NSX Manager
4 Click Save.
CSM adds the account and you can see it in the Accounts section within a few minutes.
What to do next
1 Use the NSX Cloud script, which requires the AWS CLI, to do the following:
For NSX Cloud to operate in your AWS account, you need to generate an IAM profile and a role for PCG.
This is achieved by running the NSX Cloud shell script using the AWS CLI, which creates the following constructs:
Prerequisites
n You must have the AWS CLI installed and configured using your AWS account's Access Key and
Secret Key.
n You must have a unique IAM profile name picked out to supply to the script. The Gateway Role Name is attached to this IAM profile.
Procedure
2 Run the script and enter a name for the IAM profile when prompted. For example,
bash AWS_create_NSXCloud_credentials.sh
3 When the script runs successfully, the IAM profile and a role for PCG are created in your AWS account.
The values are saved in the output file in the same directory where you ran the script. The filename is
aws_details.txt.
Note The PCG (Gateway) role name is nsx_pcg_service by default. You can change it in the script
if you want a different value for the Gateway Role Name. This value is required for adding the AWS
account in CSM, therefore you must make a note of it if changing the default value.
What to do next
Procedure
3 Click +Add and enter the following details using the output file aws_details.txt generated from the
NSX Cloud script:
Option Description
Cloud Tags By default this option is enabled and allows your AWS tags to be visible in
NSX Manager
Gateway Role Name The default value is nsx_pcg_service. You can find this value in the output of
the script in the file aws_details.txt.
In the VPCs tab of CSM, you can view all the VPCs in your AWS account.
In the Instances tab of CSM, you can view the EC2 Instances in this VPC.
What to do next
Deploy PCG
The NSX Public Cloud Gateway (PCG) provides north-south connectivity between the public cloud and
the NSX-T Data Center on-prem management components.
Prerequisites
n The VPC or VNet on which you are deploying PCG must have the required subnets appropriately
adjusted for High Availability: uplink, downlink, and management.
n PCG deployment must align with your network addressing plan, with FQDNs for the NSX-T Data Center components and a DNS server that can resolve these FQDNs.
Note It is not recommended to use IP addresses for connecting the public cloud with
NSX-T Data Center using PCG, but if you choose that option, do not change your IP addresses.
Procedure
Option Description
SSH Public Key Provide an SSH public key that can be validated while deploying PCG. This is
required for each PCG deployment.
Quarantine Policy on the Associated VNet Leave this in the default disabled mode when you first deploy PCG. You can change this value after onboarding VMs. See Manage Quarantine Policy in the NSX-T Data Center Administration Guide for details.
Local Storage Account When you add a Microsoft Azure subscription to CSM, a list of your Microsoft
Azure Storage Accounts is available to CSM. Select the Storage Account from the
drop-down menu. When proceeding with deploying PCG, CSM copies the publicly
available VHD of the PCG into this Storage Account of the selected region.
Note If the VHD image has been copied to this storage account in the region
already for a previous PCG deployment, then the image is used from this location
for subsequent deployments to reduce the overall deployment time.
VHD URL If you want to use a different PCG image that is not available from the public
VMware repository, you can enter the URL of the PCG’s VHD here. The VHD
must be present in the same account and region where this VNet is created.
Proxy Server Select a proxy server to use for internet-bound traffic from this PCG. The proxy servers are configured in CSM. You can select the same proxy server that CSM uses, if one is configured, select a different proxy server, or select No Proxy Server. See (Optional) Configure Proxy Servers for details on how to configure proxy servers in CSM.
Advanced The advanced DNS settings provide flexibility in selecting DNS servers for
resolving NSX-T Data Center management components.
Obtain via Public Cloud Provider's DHCP Select this option if you want to use Microsoft Azure DNS settings. This is the default DNS setting if you do not pick either of the options to override it.
Override Public Cloud Provider's DNS Server Select this option if you want to manually provide the IP address of one or more DNS servers to resolve NSX-T Data Center appliances as well as the workload VMs in this VNet.
Use Public Cloud Provider's DNS server only for NSX-T Data Center Appliances Select this option if you want to use the Microsoft Azure DNS server for resolving the NSX-T Data Center management components. With this setting, you can use two DNS servers: one for PCG that resolves NSX-T Data Center appliances; the other for the VNet that resolves your workload VMs in this VNet.
6 Click Next.
Option Description
Enable HA for NSX Cloud Gateway Select this option to enable High Availability.
Public IP on Mgmt NIC Select Allocate New IP address to provide a public IP address to the
management NIC. You can manually provide the public IP address if you want to
reuse a free public IP address.
Public IP on Uplink NIC Select Allocate New IP address to provide a public IP address to the uplink NIC.
You can manually provide the public IP address if you want to reuse a free public
IP address.
What to do next
Onboard your workload VMs. See Onboarding and Managing Workload VMs in the NSX-T Data Center
Administration Guide for the Day-N workflow.
Procedure
2 Click Clouds > AWS > <AWS_account_name> and go to the VPCs tab.
3 In the VPCs tab, select an AWS region name, for example, us-west. The AWS region must be the same region in which you created the compute VPC.
Option Description
PEM File Select one of your PEM files from the drop-down menu. This file must be in the
same region where NSX Cloud was deployed and where you created your
compute VPC.
This uniquely identifies your AWS account.
Quarantine Policy on the Associated VPC The default selection is Enabled. This is recommended for greenfield deployments. If you already have VMs launched in your VPC, disable the Quarantine policy. See Manage Quarantine Policy in the NSX-T Data Center Administration Guide for details.
Proxy Server Select a proxy server to use for internet-bound traffic from this PCG. The proxy servers are configured in CSM. You can select the same proxy server that CSM uses, if one is configured, select a different proxy server, or select No Proxy Server. See (Optional) Configure Proxy Servers for details on how to configure proxy servers in CSM.
Option Description
Override AMI ID Use this advanced feature to provide a different AMI ID for the PCG from the one
that is available in your AWS account.
Obtain via Public Cloud Provider's DHCP Select this option if you want to use AWS settings. This is the default DNS setting if you do not pick either of the options to override it.
Override Public Cloud Provider's DNS Server Select this option if you want to manually provide the IP address of one or more DNS servers to resolve NSX-T Data Center appliances as well as the workload VMs in this VPC.
Use Public Cloud Provider's DNS server only for NSX-T Data Center Appliances Select this option if you want to use the AWS DNS server for resolving the NSX-T Data Center management components. With this setting, you can use two DNS servers: one for PCG that resolves NSX-T Data Center appliances; the other for the VPC that resolves your workload VMs in this VPC.
7 Click Next.
Option Description
Enable HA for Public Cloud Gateway The recommended setting is Enable, which sets up a High Availability Active/Standby pair to avoid unscheduled downtime.
Primary gateway settings Select an Availability Zone such as us-west-1a, from the drop-down menu as
the primary gateway for HA.
Assign the uplink, downlink, and management subnets from the drop-down menu.
Secondary gateway settings Select another Availability Zone such as us-west-1b, from the drop-down menu
as the secondary gateway for HA.
The secondary gateway is used when the primary gateway fails.
Assign the uplink, downlink, and management subnets from the drop-down menu.
Public IP on Mgmt NIC Select Allocate New IP address to provide a public IP address to the
management NIC. You can manually provide the public IP address if you want to
reuse a free public IP address.
Public IP on Uplink NIC Select Allocate New IP address to provide a public IP address to the uplink NIC.
You can manually provide the public IP address if you want to reuse a free public
IP address.
8 Click Deploy.
9 Monitor the status of the primary (and secondary, if you selected it) PCG deployment. This process
can take 10-12 minutes.
What to do next
Onboard your workload VMs. See Onboarding and Managing Workload VMs in the NSX-T Data Center
Administration Guide for the Day-N workflow.
n The PCG is added to an Edge Cluster. In a High Availability deployment, there are two PCGs.
n The PCG (or PCGs) is registered as a Transport Node with two Transport Zones created.
n A default NSGroup with the name PublicCloudSecurityGroup is created that has the following
members:
n Logical ports, one each for the PCG uplink ports, if you have HA enabled.
n IP address
n LogicalSwitchToLogicalSwitch
n LogicalSwitchToAnywhere
n AnywhereToLogicalSwitch
Note These DFW rules block all traffic and need to be adjusted according to your specific
requirements.
2 Browse to Fabric > Nodes > Edge. Public Cloud Gateway should be listed as an Edge Node.
3 Verify that Deployment Status, Manager Connection and Controller Connection are connected (status
shows Up with a green dot).
4 Browse to Fabric > Nodes > Edge Clusters to verify that the Edge Cluster and PCG were added as
part of this cluster.
5 Browse to Fabric > Nodes > Transport Nodes to verify that PCG is registered as a Transport Node
and is connected to two Transport Zones that were auto-created while deploying PCG:
6 Verify whether the logical switches and the tier-0 logical router have been created and the logical
router added to the Edge Cluster.
n In the AWS VPC, a new Type A Record Set is added with the name nsx-gw.vmware.local. The IP
address mapped to this record matches the Management IP address of PCG. This is assigned by
AWS using DHCP and will differ for each VPC.
n A secondary IP for the uplink interface for PCG is created. An AWS Elastic IP is associated with this
secondary IP address. This configuration is for SNAT.
Table 9‑2. Public Cloud Security Groups created by NSX Cloud for PCG interfaces (columns: Security Group name, Available in Microsoft Azure?, Available in AWS?, Full Name)
Table 9‑3. Public Cloud Security Groups created by NSX Cloud for Workload VMs (columns: Security Group name, Available in Microsoft Azure?, Available in AWS?, Description)
Undeploy PCG
Refer to this flowchart for the steps involved in undeploying PCG.
To undeploy PCG, the following conditions must be satisfied:
n No workload VMs in the VPC or VNet are NSX-managed.
n All user-created logical entities associated with the PCG must be deleted.
Go to the VPC or VNet in your public cloud and remove the nsx.network tag from the managed VMs.
With Quarantine Policy enabled, your VMs are assigned security groups defined by NSX Cloud. When
you undeploy PCG, you need to disable Quarantine Policy and specify a fallback security group that the
VMs can be assigned to when they are removed from the NSX Cloud security groups.
Note The fallback security group must be an existing user-defined security group in your public cloud.
You cannot use any of the NSX Cloud security groups as a fallback security group. See Constructs
Created after Deploying PCG for a list of NSX Cloud security groups.
Disable Quarantine Policy for the VPC or VNet from which you are undeploying PCG:
n From Actions > Edit Configurations, turn off the setting for Default Quarantine.
n Enter a value for a fallback security group to which the VMs will be assigned.
n All VMs that are unmanaged or quarantined in this VPC or VNet will get the fallback security group
assigned to them.
n If all VMs are unmanaged, they get assigned to the fallback security group.
n If there are managed VMs while disabling Quarantine Policy, they retain their NSX Cloud-assigned
security groups. The first time you remove the nsx.network tag from such VMs to take them out
from NSX management, they are also assigned the fallback security group.
Note See Managing Quarantine Policy in the NSX-T Data Center Administration Guide for instructions
and more information on the effects of enabling and disabling the Quarantine Policy.
Note Do not delete the logical entities created automatically when PCG is deployed. See Constructs Created after Deploying PCG. Examples of such auto-created entities include:
n Fabric-Nodes: Edges
n Fabric-Profiles: PCG-Uplink-HostSwitch-Profile
n Switching: PublicCloud-Global-SpoofGuardProfile
n If using AWS, go to Clouds > AWS > VPCs. Click on the VPC on which one or a pair of PCGs is
deployed and running.
n If using Microsoft Azure, go to Clouds > Azure > VNets. Click on the VNet on which one or a pair
of PCGs is deployed and running.
The default entities created by NSX Cloud are removed automatically when a PCG is undeployed.
n Remove a Host From NSX-T Data Center or Uninstall NSX-T Data Center Completely
Procedure
2 In your VM management tool, detach all VMs from any logical switches and connect the VMs to non-NSX-T Data Center networks.
3 For KVM hosts, SSH to the hosts and power off the VMs.
shutdown -h now
5 In the NSX Manager UI or API, delete all logical switch ports and then all logical switches.
6 In the NSX Manager UI or API, delete all NSX Edges and then all NSX Edge clusters.
The following procedure describes how to perform a clean uninstall of NSX-T Data Center.
Prerequisites
If the VM management tool is vCenter Server, put the vSphere host in maintenance mode.
Procedure
1 In the NSX Manager, select Fabric > Nodes > Transport Nodes and delete the host transport
nodes.
Deleting the transport node causes the N-VDS to be removed from the host. You can confirm this by
running the following command.
ovs-vsctl show
2 In the NSX Manager CLI, verify that the NSX-T Data Center install-upgrade service is running.
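A sketch of this check from the NSX Manager CLI (the prompt is illustrative); the service should be reported as running:
nsx-manager> get service install-upgrade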
3 Uninstall the host from the management plane and remove the NSX-T Data Center modules.
It might take up to 5 minutes for all NSX-T Data Center modules to be removed.
There are several methods you can use to remove the NSX-T Data Center modules:
n In the NSX Manager, select Fabric > Nodes > Hosts > Delete.
Make sure Uninstall NSX Components is checked. This causes the NSX-T Data Center
modules to be uninstalled on the host.
Remove the RHEL 7.4 dependency packages - json_spirit, python-greenlet, libev, protobuf,
leveldb, python-gevent, python-simplejson, glog.
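A sketch of removing these packages on a RHEL 7.4 host; verify the exact package names installed on your host before running it:
yum remove json_spirit python-greenlet libev protobuf leveldb python-gevent python-simplejson glog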
Note that using Fabric > Nodes > Hosts > Delete with the Uninstall NSX Components option
unchecked is not meant to be used to unregister a host. It is only meant as a workaround for
hosts that are in a bad state.
n (Hosts managed by a compute manager) In the NSX Manager, select Fabric > Nodes > Hosts > Transport Nodes > Delete Host.
In the NSX Manager, select Fabric > Nodes > Hosts > Compute Manager > Configure Cluster Manager and uncheck Automatically Install NSX. Select the node and click Uninstall NSX.
Make sure Uninstall NSX Components is checked. This causes the NSX-T Data Center
modules to be uninstalled on the host.
Note This API does not remove the dependency packages from the nsx-lcp bundle.
Remove the RHEL 7.4 dependency packages - json_spirit, python-greenlet, libev, protobuf,
leveldb, python-gevent, python-simplejson, glog.
b On the host's NSX-T Data Center CLI, run the following command to detach the host from the
management plane.
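The detach typically takes the manager address, administrator credentials, and the manager thumbprint; a hedged sketch of the NSX-T host CLI syntax (all values are placeholders):
detach management-plane <nsx-manager-IP> username admin password <admin-password> thumbprint <manager-thumbprint>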
e Power off the VMs on the host or migrate them to another host.
f On the host, run the following command to manually uninstall the NSX-T Data Center
configuration and modules. This command is supported on all host types.
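On hosts that provide the NSX-T CLI (nsxcli), the removal command is typically the following; treat it as a sketch and confirm it on your host before running it, because it removes the NSX-T Data Center configuration and modules:
del nsx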
What to do next
After making this change, the host is removed from the management plane and can no longer take part in
the NSX-T Data Center overlay.
If you are removing NSX-T Data Center completely, in your VM management tool, shut down
NSX Manager, NSX Controllers, and NSX Edges and delete them from the disk.