Design Guide
07 NOV 2023
VMware Cloud Foundation 5.1
VMware Cloud Foundation Design Guide
You can find the most up-to-date technical documentation on the VMware by Broadcom website at:
https://docs.vmware.com/
VMware by Broadcom
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com
© Copyright 2021-2023 Broadcom. All Rights Reserved. The term “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. For more information, go to https://www.broadcom.com. All trademarks, trade names, service marks, and logos referenced herein belong to their respective companies.
Contents
10 VMware Aria Suite Lifecycle Design for VMware Cloud Foundation 159
Logical Design for VMware Aria Suite Lifecycle for VMware Cloud Foundation 160
Network Design for VMware Aria Suite Lifecycle 162
Data Center and Environment Design for VMware Aria Suite Lifecycle 163
Locker Design for VMware Aria Suite Lifecycle 165
VMware Aria Suite Lifecycle Design Requirements and Recommendations for VMware Cloud Foundation 166
About VMware Cloud Foundation Design Guide
The VMware Cloud Foundation Design Guide contains a design model for VMware Cloud
Foundation (also called VCF) that is based on industry best practices for SDDC implementation.
The VMware Cloud Foundation Design Guide provides the supported design options for VMware
Cloud Foundation, and a set of decision points, justifications, implications, and considerations for
building each component.
Intended Audience
This VMware Cloud Foundation Design Guide is intended for cloud architects who are familiar
with and want to use VMware Cloud Foundation to deploy and manage an SDDC that meets the
requirements for capacity, scalability, backup and restore, and extensibility for disaster recovery
support.
To apply this VMware Cloud Foundation Design Guide, you must be acquainted with the Getting
Started with VMware Cloud Foundation documentation and with the VMware Cloud Foundation
Release Notes. See VMware Cloud Foundation documentation.
For performance best practices for vSphere, see Performance Best Practices for VMware
vSphere 8.0 Update 1.
Design Elements
This VMware Cloud Foundation Design Guide contains requirements and recommendations for
the design of each component of the SDDC. In situations where a configuration choice exists,
requirements and recommendations are available for each choice. Implement only those that are
relevant to your target configuration.
- Single VMware Cloud Foundation instance with multiple availability zones (also known as stretched deployment). The default vSphere cluster of the workload domain is stretched between two availability zones by using Chapter 6 vSAN Design for VMware Cloud Foundation and by configuring vSphere Cluster Design Requirements and Recommendations for VMware Cloud Foundation and BGP Routing Design for VMware Cloud Foundation accordingly.
- Multiple VMware Cloud Foundation instances. You deploy several instances of VMware Cloud Foundation to address requirements for scale and co-location of users and resources.
- Multiple VMware Cloud Foundation instances with multiple availability zones. You apply the configuration for stretched clusters for a single VMware Cloud Foundation instance to one or more additional VMware Cloud Foundation instances in your environment.
Chapter 1 VMware Cloud Foundation Concepts
To design a VMware Cloud Foundation deployment, you need to understand certain VMware
Cloud Foundation concepts.
Read the following topics next:
Architecture Models
Decide on a model according to your organization's requirements and your environment's
resource capabilities. Implement a standard architecture for workload provisioning and mobility
across VMware Cloud Foundation instances according to production best practices. If you plan to
deploy a small-scale environment, or if you are working on an SDDC proof-of-concept, implement
a consolidated architecture.
(Decision flow: Do you need to minimize hardware requirements? If yes, use the consolidated architecture; if no, use the standard architecture.)
Management domain
- Characteristics:
  - First domain deployed.
  - Contains the following management appliances for all workload domains: vCenter Server, NSX Manager, SDDC Manager, optionally VMware Aria Suite components, and optionally management domain NSX Edge nodes.
  - Has dedicated ESXi hosts.
  - First domain to upgrade.
- Benefits: Guaranteed sufficient resources for management components.
- Considerations:
  - You must carefully size the domain to accommodate planned deployment of VI workload domains and additional management components.
  - Hardware might not be fully utilized until full-scale deployment has been reached.
VI workload domain
- Characteristics:
  - Represents an additional workload domain for running customer workloads.
  - Shares a vCenter Single Sign-On domain with the management domain.
  - Shares identity provider configuration with the management domain.
  - Has dedicated ESXi hosts.
- Benefits:
  - Can share an NSX Manager instance with other VI workload domains.
  - All workload domains can be managed through a single pane of glass.
  - Minimizes password management overhead.
  - Allows for independent life cycle management.
- Considerations: This workload domain type cannot provide distinct vCenter Single Sign-On domains for customer workloads.
Figure 1-2. Choosing a VMware Cloud Foundation Workload Domain Type for Customer Workloads
(Decision flow: Is a VI workload domain with a dedicated vCenter Single Sign-On domain required?)
You create multiple availability zones for the purpose of creating vSAN stretched clusters. Using multiple availability zones can improve the availability of management components and workloads running within the SDDC, minimize downtime of services, and improve SLAs. Availability zones are typically located either within the same data center, in different racks, chassis, or rooms, or in different data centers with low-latency, high-speed links connecting them. One availability zone can contain several fault domains.
Note Only stretched clusters created by using the Stretch Cluster API, which are therefore based on vSAN storage, are recognized and treated as stretched clusters by VMware Cloud Foundation.
- Single Instance - Single Availability Zone: Workload domains are deployed in a single availability zone.
- Single Instance - Multiple Availability Zones: Workload domains might be stretched between two availability zones.
- Multiple Instances - Single Availability Zone per VMware Cloud Foundation instance: Workload domains in each instance are deployed in a single availability zone.
- Multiple Instances - Multiple Availability Zones per VMware Cloud Foundation instance: Workload domains in each instance might be stretched between two availability zones.
(Decision flow for choosing a VMware Cloud Foundation topology: Do you need disaster recovery or more than 24 VI workload domains? According to the answers, use the Single Instance - Single Availability Zone, Single Instance - Multiple Availability Zones, Multiple Instances - Single Availability Zone, or Multiple Instances - Multiple Availability Zones topology.)
The Single Instance - Single Availability Zone topology relies on vSphere HA to protect against
host failures.
Figure 1-4. Single VMware Cloud Foundation Instance with a Single Availability Zone
(The VCF instance contains a management domain and a VI workload domain.)
Workload domain cluster rack mappings:
- Workload domain cluster in a single rack
- Workload domain cluster spanning multiple racks
Incorporating multiple availability zones in your design can help reduce the blast radius of a
failure and can increase application availability. You usually deploy multiple availability zones
across two independent data centers.
(Figure: A single VCF instance containing a management domain and multiple VI workload domains across availability zones.)
Workload domain cluster rack mappings:
- Workload domain cluster in a single rack
- Workload domain cluster spanning multiple racks
- Workload domain cluster with multiple availability zones, each zone in a single rack
- Workload domain cluster with multiple availability zones, each zone spanning multiple racks
Incorporating multiple VMware Cloud Foundation instances in your design can help reduce the blast radius of a failure and can increase application availability across larger geographical distances than can be achieved by using multiple availability zones. You usually deploy this topology in the same data center for scale or across independent data centers for resilience.
Figure 1-6. Multiple Instance - Single Availability Zone Topology for VMware Cloud Foundation
(Each VCF instance contains a management domain and a VI workload domain.)
Workload domain cluster rack mapping:
- Workload domain cluster in a single rack
- Workload domain cluster spanning multiple racks
Incorporating multiple VMware Cloud Foundation instances into your design can help reduce the blast radius of a failure and can increase application availability across larger geographical distances than can be achieved by using multiple availability zones.
Figure 1-7. Multiple Instance - Multiple Availability Zones Topology for VMware Cloud Foundation
(Each VCF instance contains a management domain and VI workload domains stretched across availability zones.)
Workload domain cluster rack mapping:
- Workload domain cluster in a single rack
- Workload domain cluster spanning multiple racks
- Workload domain cluster with multiple availability zones, each zone in a single rack
- Workload domain cluster with multiple availability zones, each zone spanning multiple racks
Multiple Instances - Single Availability Zone per Instance
Multiple Instances - Multiple Availability Zones per Instance
- Chapter 11 Workspace ONE Access Design for VMware Cloud Foundation: Standard Workspace ONE Access
- External Services Design Elements for VMware Cloud Foundation: External Services Design Requirements
- Physical Network Design Elements for VMware Cloud Foundation: Leaf-Spine Physical Network Design Requirements; Leaf-Spine Physical Network Design Requirements for NSX Federation
- vSAN Design Elements for VMware Cloud Foundation: vSAN Design Requirements
- ESXi Design Elements for VMware Cloud Foundation: ESXi Server Design Requirements
- vCenter Single Sign-On Design Elements: vCenter Single Sign-On Design Requirements for Multiple vCenter - Single vCenter Single Sign-On Domain Topology
- vSphere Cluster Design Elements for VMware Cloud Foundation: vSphere Cluster Design Requirements; vSphere Cluster Design Requirements for Stretched Clusters
- vSphere Networking Design Elements for VMware Cloud Foundation: vSphere Networking Design Recommendations
- NSX Global Manager Design Elements: NSX Global Manager Design Requirements for NSX Federation
- BGP Routing Design Elements for VMware Cloud Foundation: BGP Routing Design Requirements; BGP Routing Design Requirements for Stretched Clusters
- Overlay Design Elements for VMware Cloud Foundation: Overlay Design Requirements
- Application Virtual Network Design Elements for VMware Cloud Foundation: Application Virtual Network Design Requirements; Application Virtual Network Design Requirements for NSX Federation
- Load Balancing Design Elements for VMware Cloud Foundation: Load Balancing Design Requirements; Load Balancing Design Requirements for NSX Federation
- SDDC Manager Design Elements for VMware Cloud Foundation: SDDC Manager Design Requirements; SDDC Manager Design Recommendations
- vSAN Design Elements for VMware Cloud Foundation: vSAN Design Requirements
- ESXi Design Elements for VMware Cloud Foundation: ESXi Server Design Requirements
- vCenter Single Sign-On Design Elements: vCenter Single Sign-On Design Requirements for Multiple vCenter - Single SSO Domain Topology
- vSphere Cluster Design Elements for VMware Cloud Foundation: vSphere Cluster Design Requirements for VMware Cloud Foundation
- vSphere Networking Design Elements for VMware Cloud Foundation: vSphere Networking Design Recommendations
- NSX Global Manager Design Elements: NSX Global Manager Design Requirements for NSX Federation
- BGP Routing Design Elements for VMware Cloud Foundation: BGP Routing Design Requirements
- Overlay Design Elements for VMware Cloud Foundation: Overlay Design Requirements
Table 1-14. VMware Aria Suite Lifecycle and Workspace ONE Access Design Elements
- VMware Aria Suite Lifecycle Design Elements for VMware Cloud Foundation: VMware Aria Suite Lifecycle Design Requirements; VMware Aria Suite Lifecycle Design Requirements for Stretched Clusters
- Workspace ONE Access Design Elements for VMware Cloud Foundation: Workspace ONE Access Design Requirements; Workspace ONE Access Design Requirements for Stretched Clusters
- Life Cycle Management Design Elements for VMware Cloud Foundation: Life Cycle Management Design Requirements
- Information Security Design Elements for VMware Cloud Foundation: Account and Password Management Design Recommendations
- Information Security Design Elements for VMware Cloud Foundation: Certificate Management Design Recommendations
Multiple Instances - Single Availability Zone per Instance
Single Instance - Multiple Availability Zones per Instance
- Chapter 11 Workspace ONE Access Design for VMware Cloud Foundation: Standard Workspace ONE Access
- External Services Design Elements for VMware Cloud Foundation: External Services Design Requirements
- Physical Network Design Elements for VMware Cloud Foundation: Leaf-Spine Physical Network Design Requirements; Leaf-Spine Physical Network Design Recommendations
- vSAN Design Elements for VMware Cloud Foundation: vSAN Design Requirements
- ESXi Design Elements for VMware Cloud Foundation: ESXi Server Design Requirements
- vCenter Single Sign-On Design Elements: vCenter Single Sign-On Design Requirements for Multiple vCenter - Single vCenter Single Sign-On Domain Topology
- vSphere Cluster Design Elements for VMware Cloud Foundation: vSphere Cluster Design Requirements; vSphere Cluster Design Requirements for Stretched Clusters
- vSphere Networking Design Elements for VMware Cloud Foundation: vSphere Networking Design Recommendations
- BGP Routing Design Elements for VMware Cloud Foundation: BGP Routing Design Requirements; BGP Routing Design Requirements for Stretched Clusters
- Overlay Design Elements for VMware Cloud Foundation: Overlay Design Requirements
- Application Virtual Network Design Elements for VMware Cloud Foundation: Application Virtual Network Design Requirements
- Load Balancing Design Elements for VMware Cloud Foundation: Load Balancing Design Requirements
- SDDC Manager Design Elements for VMware Cloud Foundation: SDDC Manager Design Requirements; SDDC Manager Design Recommendations
- vSAN Design Elements for VMware Cloud Foundation: vSAN Design Requirements
- ESXi Design Elements for VMware Cloud Foundation: ESXi Server Design Requirements
- vCenter Single Sign-On Design Elements: vCenter Single Sign-On Design Requirements for Multiple vCenter - Single SSO Domain Topology
- vSphere Cluster Design Elements for VMware Cloud Foundation: vSphere Cluster Design Requirements for VMware Cloud Foundation
- vSphere Networking Design Elements for VMware Cloud Foundation: vSphere Networking Design Recommendations
- BGP Routing Design Elements for VMware Cloud Foundation: BGP Routing Design Requirements; BGP Routing Design Requirements for Stretched Clusters
- Overlay Design Elements for VMware Cloud Foundation: Overlay Design Requirements
Table 1-23. VMware Aria Suite Lifecycle and Workspace ONE Access Design Elements
- VMware Aria Suite Lifecycle Design Elements for VMware Cloud Foundation: VMware Aria Suite Lifecycle Design Requirements; VMware Aria Suite Lifecycle Design Requirements for Stretched Clusters
- Workspace ONE Access Design Elements for VMware Cloud Foundation: Workspace ONE Access Design Requirements; Workspace ONE Access Design Requirements for Stretched Clusters
- Life Cycle Management Design Elements for VMware Cloud Foundation: Life Cycle Management Design Requirements
- Information Security Design Elements for VMware Cloud Foundation: Account and Password Management Design Recommendations
- Information Security Design Elements for VMware Cloud Foundation: Certificate Management Design Recommendations
- Chapter 11 Workspace ONE Access Design for VMware Cloud Foundation: Standard Workspace ONE Access
- External Services Design Elements for VMware Cloud Foundation: External Services Design Requirements
- Physical Network Design Elements for VMware Cloud Foundation: Leaf-Spine Physical Network Design Requirements; Leaf-Spine Physical Network Design Recommendations
- vSAN Design Elements for VMware Cloud Foundation: vSAN Design Requirements
- ESXi Design Elements for VMware Cloud Foundation: ESXi Server Design Requirements
- vCenter Single Sign-On Design Elements: vCenter Single Sign-On Design Requirements for Multiple vCenter - Single vCenter Single Sign-On Domain Topology
- vSphere Cluster Design Elements for VMware Cloud Foundation: vSphere Cluster Design Requirements; vSphere Cluster Design Recommendations
- vSphere Networking Design Elements for VMware Cloud Foundation: vSphere Networking Design Recommendations
- BGP Routing Design Elements for VMware Cloud Foundation: BGP Routing Design Requirements; BGP Routing Design Recommendations
- Overlay Design Elements for VMware Cloud Foundation: Overlay Design Requirements
- Application Virtual Network Design Elements for VMware Cloud Foundation: Application Virtual Network Design Requirements
- Load Balancing Design Elements for VMware Cloud Foundation: Load Balancing Design Requirements
- SDDC Manager Design Elements for VMware Cloud Foundation: SDDC Manager Design Requirements; SDDC Manager Design Recommendations
Table 1-31. VMware Aria Suite Lifecycle and Workspace ONE Access Design Elements
- VMware Aria Suite Lifecycle Design Elements for VMware Cloud Foundation: VMware Aria Suite Lifecycle Design Requirements; VMware Aria Suite Lifecycle Design Recommendations
- Workspace ONE Access Design Elements for VMware Cloud Foundation: Workspace ONE Access Design Requirements; Workspace ONE Access Design Recommendations
- Life Cycle Management Design Elements for VMware Cloud Foundation: Life Cycle Management Design Requirements
- Information Security Design Elements for VMware Cloud Foundation: Account and Password Management Design Recommendations
- Information Security Design Elements for VMware Cloud Foundation: Certificate Management Design Recommendations
Chapter 2 Workload Domain Cluster to Rack Mapping in VMware Cloud Foundation
VMware Cloud Foundation distributes the functionality of the SDDC across multiple workload
domains and vSphere clusters. A workload domain, whether it is the management workload
domain or a VI workload domain, is a logical abstraction of compute, storage, and network
capacity, and consists of one or more clusters. Each cluster can exist vertically in a single rack or
be spanned horizontally across multiple racks.
The relationship between workload domain clusters and data center racks in VMware Cloud
Foundation is not one-to-one. While a workload domain cluster is an atomic unit of repeatable
building blocks, a rack is a unit of size. Because workload domain clusters can have different
sizes, you map workload domain clusters to data center racks according to your requirements
and physical infrastructure constraints. You determine the total number of racks for each cluster
type according to your scalability needs.
Workload domain cluster in a single rack
- The workload domain cluster occupies a single rack.
Workload domain cluster spanning multiple racks
- The management domain can span multiple racks if the data center fabric can provide Layer 2 adjacency, such as BGP EVPN, between racks. If the Layer 3 fabric does not support this requirement, then the management cluster should be mapped to a single rack.
- A VI workload domain can span multiple racks. If you are using a Layer 3 network fabric, NSX Edge clusters cannot be hosted on clusters that span racks.
Workload domain cluster with multiple availability zones, each zone in a single rack
- To span multiple availability zones, the network fabric must support stretched Layer 2 networks and Layer 3 routed networks between the availability zones.
Workload domain cluster with multiple availability zones, each zone spanning multiple racks
- A VI workload domain cluster with customer workloads and no NSX Edge clusters can span racks by using a Layer 3 network fabric without Layer 2 adjacency between racks. If you are using a Layer 3 network fabric, NSX Edge clusters cannot be hosted on clusters that span racks.
- To span multiple racks, the network fabric must support stretched Layer 2 networks between these racks if NSX Edge clusters are deployed on the vSphere cluster.
- To span multiple availability zones, the network fabric must support stretched Layer 2 networks and Layer 3 routed networks between availability zones.
(Figure: Management domain cluster and VI workload domain cluster in a single rack, connected to the data center fabric through a pair of ToR switches.)
(Figure: Management domain cluster and VI workload domain cluster spanning two racks, connected to the data center fabric.)
Figure 2-3. Workload Domains with Multiple Availability Zones, Each Zone in One Rack
(The management domain stretched cluster and the VI workload domain stretched cluster span two racks, one rack per availability zone, connected to the data center fabric.)
Chapter 3 Supported Storage Types for VMware Cloud Foundation
Storage design for VMware Cloud Foundation includes the design for principal and supplemental storage.
Principal storage is used during the creation of a workload domain and is capable of running workloads. Supplemental storage can be added after the creation of a workload domain and can be used to run workloads or to store data at rest, such as virtual machine templates, backup data, and ISO images.
Special considerations apply if you plan to add clusters to the management domain, for example,
to separate additional management components that require specific hardware resources or
might impact the performance of the main management components in the default cluster, or,
in the case of the consolidated architecture of VMware Cloud Foundation, to separate customer
workloads from the management components.
VMware Cloud Foundation supports the following principal and supplemental storage
combinations:
Note For a consolidated VMware Cloud Foundation architecture model, the storage types that
are supported for the management domain apply.
Chapter 4 External Services Design for VMware Cloud Foundation
The IP addressing scheme, name resolution, and time synchronization must support the requirements for VMware Cloud Foundation deployments.
Table 4-1. External Services Design Requirements for VMware Cloud Foundation

VCF-EXT-REQD-NET-001
- Design requirement: Allocate statically assigned IP addresses and host names for all workload domain components.
- Justification: Ensures stability across the VMware Cloud Foundation instance, and makes it simpler to maintain, track, and implement a DNS configuration.
- Implication: You must provide precise IP address management.

VCF-EXT-REQD-NET-002
- Design requirement: Configure forward and reverse DNS records for all workload domain components (a validation sketch follows this table).
- Justification: Ensures that all components are accessible by using a fully qualified domain name instead of by using IP addresses only. It is easier to remember and connect to components across the VMware Cloud Foundation instance.
- Implication: You must provide DNS records for each component.
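You can verify the forward and reverse records for each component before bringup. The following Python sketch is one possible check, not a VMware Cloud Foundation tool; the host names and domain are placeholders for your own workload domain components.

    import socket

    # Hypothetical FQDNs of workload domain components; replace with your own.
    COMPONENTS = [
        "sddc-manager.rainpole.example",
        "mgmt-vcenter.rainpole.example",
        "mgmt-nsx-manager.rainpole.example",
    ]

    def check_dns(fqdn: str) -> bool:
        """Verify forward (A) and reverse (PTR) resolution for an FQDN."""
        try:
            ip = socket.gethostbyname(fqdn)                # forward lookup
            reverse_name, _, _ = socket.gethostbyaddr(ip)  # reverse lookup
        except socket.gaierror as err:
            print(f"{fqdn}: forward lookup failed ({err})")
            return False
        except socket.herror as err:
            print(f"{fqdn}: no PTR record ({err})")
            return False
        ok = reverse_name.rstrip(".").lower() == fqdn.lower()
        print(f"{fqdn} -> {ip} -> {reverse_name} ({'OK' if ok else 'MISMATCH'})")
        return ok

    if __name__ == "__main__":
        results = [check_dns(fqdn) for fqdn in COMPONENTS]
        print("All records consistent" if all(results) else "DNS configuration incomplete")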
Chapter 5 Physical Network Infrastructure Design for VMware Cloud Foundation
Design of the physical data center network includes defining the network topology for
connecting physical switches and ESXi hosts, determining switch port settings for VLANs and
link aggregation, and designing routing.
A software-defined network (SDN) both integrates with and uses components of the physical
data center. SDN integrates with your physical network to support east-west transit in the data
center and north-south transit to and from the SDDC networks.
- Core-Aggregation-Access
- Leaf-Spine
- Hardware SDN

Note Leaf-Spine is the default data center network deployment topology used for VMware Cloud Foundation.

- Leaf-Spine Physical Network Design Requirements and Recommendations for VMware Cloud Foundation
When designing the VLAN and subnet configuration for your VMware Cloud Foundation
deployment, consider the following guidelines:
Table 5-1. VLAN and Subnet Guidelines for VMware Cloud Foundation

All deployment topologies:
- Ensure your subnets are scaled appropriately to allow for expansion, as expanding at a later time can be disruptive.
- Use the IP address of the floating interface for Virtual Router Redundancy Protocol (VRRP) or Hot Standby Routing Protocol (HSRP) as the gateway.
- Use the RFC 1918 IPv4 address space for these subnets and allocate one octet by VMware Cloud Foundation instance and another octet by function (see the sketch after this table).

Multiple availability zones:
- For network segments which are stretched between availability zones, the VLAN ID must meet the following requirements:
  - Be the same in both availability zones with the same Layer 3 network segments.
  - Have a Layer 3 gateway at the first hop that is highly available such that it tolerates the failure of an entire availability zone.
- For network segments of the same type which are not stretched between availability zones, the VLAN ID can be the same or different between the zones.

NSX Federation between multiple VMware Cloud Foundation instances:
- An RTEP network segment should have a VLAN ID and Layer 3 range that are specific to the VMware Cloud Foundation instance.
- In a VMware Cloud Foundation instance with multiple availability zones, the RTEP network segment must be stretched between the zones and assigned the same VLAN ID and IP range.
- All Edge RTEP networks must reach each other.
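As a sketch of the RFC 1918 guideline above, the following Python snippet derives per-function subnets by using one octet for the VMware Cloud Foundation instance and another for the function. The instance numbering and function list are illustrative assumptions, not a prescribed scheme.

    import ipaddress

    # Illustrative only: functions and instance numbering are assumptions for this sketch.
    FUNCTIONS = ["management", "vmotion", "vsan", "host_overlay", "edge_overlay"]

    def instance_subnets(base: str, instance_id: int, prefix: int = 24) -> dict:
        """Carve one /16 per VCF instance out of an RFC 1918 /8, then one /24 per function."""
        base_net = ipaddress.ip_network(base)  # for example, 10.0.0.0/8
        instance_net = list(base_net.subnets(new_prefix=16))[instance_id]
        function_nets = list(instance_net.subnets(new_prefix=prefix))
        return {fn: function_nets[i] for i, fn in enumerate(FUNCTIONS)}

    if __name__ == "__main__":
        for instance_id in (1, 2):
            print(f"VCF instance {instance_id}:")
            for function, subnet in instance_subnets("10.0.0.0/8", instance_id).items():
                print(f"  {function:13s} {subnet}")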
The VLANs and subnets that you deploy for VMware Cloud Foundation must conform to the following requirements according to the VMware Cloud Foundation topology:
Figure 5-1. Choosing a VLAN Model for Host and Management VM Traffic
(Decision flow: Is separate security access required for ESXi host management and VM management?)
Table 5-2. VLANs and Subnets for VMware Cloud Foundation (continued)

Edge RTEP
- VMware Cloud Foundation instances with a single availability zone: Required for NSX Federation only; highly available gateway within the instance.
- VMware Cloud Foundation instances with multiple availability zones: Required for NSX Federation only; must be stretched within the instance; highly available gateway across availability zones within the instance.
(Figure: Leaf-spine physical network topology. ESXi hosts connect to ToR switches, which connect to the data center fabric spine.)
Table 5-3. Leaf-Spine Physical Network Design Requirements for VMware Cloud Foundation

VCF-NET-REQD-CFG-004
- Design requirement: Set the MTU size to at least 1,700 bytes (recommended 9,000 bytes for jumbo frames) on the physical switch ports, vSphere Distributed Switches, vSphere Distributed Switch port groups, and N-VDS switches that support the following traffic types: overlay (Geneve), vSAN, and vSphere vMotion.
- Justification: Improves traffic throughput. Supports Geneve by increasing the MTU size to a minimum of 1,600 bytes. Geneve is an extensible protocol; the MTU size might increase with future capabilities. While 1,600 bytes is sufficient, an MTU size of 1,700 bytes provides more room for increasing the Geneve MTU size without the need to change the MTU size of the physical infrastructure.
- Implication: When adjusting the MTU packet size, you must also configure the entire network path (VMkernel network adapters, virtual switches, physical switches, and routers) to support the same MTU packet size (see the sketch after this table). In an environment with multiple availability zones, the MTU must be configured on the entire network path between the zones.
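The 1,600-byte minimum reflects the Geneve encapsulation overhead added to a standard 1,500-byte guest frame. The following sketch shows that arithmetic with an assumed overhead allowance (Geneve headers are variable length, so this is not an exact per-packet figure) and a simple path check.

    # Assumed values for this sketch: standard guest MTU and a Geneve overhead allowance.
    GUEST_MTU = 1500
    GENEVE_OVERHEAD_ALLOWANCE = 100   # conservative allowance, not an exact header size
    REQUIRED_DESIGN_MTU = 1700        # minimum required by this design
    RECOMMENDED_MTU = 9000            # jumbo frames

    def validate_path_mtu(path_mtus: dict) -> bool:
        """Check that every hop on the transport path meets the design minimum."""
        ok = True
        for hop, mtu in path_mtus.items():
            if mtu < GUEST_MTU + GENEVE_OVERHEAD_ALLOWANCE:
                print(f"{hop}: MTU {mtu} cannot carry encapsulated guest frames")
                ok = False
            elif mtu < REQUIRED_DESIGN_MTU:
                print(f"{hop}: MTU {mtu} is below the 1,700-byte design minimum")
                ok = False
            else:
                print(f"{hop}: MTU {mtu} OK")
        return ok

    if __name__ == "__main__":
        # Hypothetical inventory of hops between two transport nodes.
        validate_path_mtu({
            "vmkernel-adapter": 9000,
            "distributed-switch": 9000,
            "tor-switch": 1600,
            "spine-switch": 9000,
        })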
Table 5-4. Leaf-Spine Physical Network Design Requirements for NSX Federation in VMware Cloud Foundation

VCF-NET-REQD-CFG-005
- Design requirement: Set the MTU size to at least 1,500 bytes (1,700 bytes preferred; 9,000 bytes recommended for jumbo frames) on the components of the physical network between the VMware Cloud Foundation instances for the following traffic types: NSX Edge RTEP.
- Justification: Jumbo frames are not required between VMware Cloud Foundation instances. However, increased MTU improves traffic throughput. Increasing the RTEP MTU to 1,700 bytes minimizes fragmentation for standard-size workload packets between VMware Cloud Foundation instances.
- Implication: When adjusting the MTU packet size, you must also configure the entire network path, that is, virtual interfaces, virtual switches, physical switches, and routers, to support the same MTU packet size.

VCF-NET-REQD-CFG-006
- Design requirement: Ensure that the latency between VMware Cloud Foundation instances that are connected in an NSX Federation is less than 500 ms.
- Justification: A latency lower than 500 ms is required for NSX Federation.
- Implication: None.
Table 5-5. Leaf-Spine Physical Network Design Recommendations for VMware Cloud Foundation

VCF-NET-RCMD-CFG-001
- Design recommendation: Use two ToR switches for each rack.
- Justification: Supports the use of two 10-GbE (25-GbE or greater recommended) links to each server, provides redundancy, and reduces the overall design complexity.
- Implication: Requires two ToR switches per rack, which might increase costs.

VCF-NET-RCMD-CFG-004
- Design recommendation: Assign persistent IP configurations for NSX tunnel endpoints (TEPs) that use static IP pools instead of dynamic IP pool addressing.
- Justification: Ensures that endpoints have a persistent TEP IP address. In VMware Cloud Foundation, TEP IP assignment by using static IP pools is recommended for all topologies. This configuration removes any requirement for external DHCP services.
- Implication: If you add more hosts to the cluster, expanding the static IP pools might be required.
VCF-NET-RCMD-CFG-005
- Design recommendation: Configure the trunk ports connected to ESXi NICs as trunk PortFast.
- Justification: Reduces the time to transition ports over to the forwarding state.
- Implication: Although this design does not use the STP, switches usually have STP configured by default.

VCF-NET-RCMD-CFG-006
- Design recommendation: Configure VRRP, HSRP, or another Layer 3 gateway availability method for these networks: management, edge overlay.
- Justification: Ensures that the VLANs that are stretched between availability zones are connected to a highly available gateway. Otherwise, a failure in the Layer 3 gateway will cause disruption in the traffic in the SDN setup.
- Implication: Requires configuration of a high availability technology for the Layer 3 gateways in the data center.
Table 5-6. Leaf-Spine Physical Network Design Recommendations for NSX Federation in VMware Cloud Foundation
Chapter 6 vSAN Design for VMware Cloud Foundation
VMware Cloud Foundation uses VMware vSAN as the principal storage type for the management domain, and vSAN is recommended as principal storage for VI workload domains. You must determine the size of the compute and storage resources for the vSAN storage, and the configuration of the network carrying vSAN traffic. For multiple availability zones, you extend the resource size and determine the configuration of the vSAN witness host.
(Figure: Logical vSAN design. Virtual machines run on the ESXi hosts of a vSphere cluster, and vSAN software-defined storage aggregates the hosts' physical disks.)
Management domain (default cluster)
- Single availability zone: Four nodes minimum.
- Multiple availability zones: Must be stretched first; 8 node minimum, equally distributed across availability zones; vSAN witness appliance in a third fault domain.

Management domain (additional clusters)
- Single availability zone: Three nodes minimum; four nodes minimum is recommended for higher availability.
- Multiple availability zones: Six nodes minimum, equally distributed across availability zones; eight nodes minimum is recommended for higher availability; vSAN witness appliance in a third fault domain.

VI workload domain (all clusters)
- Single availability zone: Three nodes minimum; four nodes minimum is recommended for higher availability.
- Multiple availability zones: Six nodes minimum, equally distributed across availability zones; eight nodes minimum is recommended for higher availability; vSAN witness appliance in a third fault domain.
(Decision flow for choosing between vSAN OSA and vSAN ESA: Is 25-GbE networking a constraint? Is a hybrid vSAN configuration required? If the answer to either question is yes, use vSAN OSA; otherwise, use vSAN ESA.)
- A vSAN hybrid storage configuration requires both magnetic devices and flash caching devices. The cache tier must be at least 10% of the size of the capacity tier (see the sizing sketch after this list).
- An all-flash vSAN configuration requires flash devices for both the caching and capacity tiers.
- Use VMware vSAN ReadyNodes or hardware from the VMware Compatibility Guide to build your own.
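As a quick illustration of the 10% cache-tier guideline for hybrid vSAN OSA configurations, the following sketch computes the minimum cache size for an assumed per-host layout. The device counts and sizes are placeholders, not sizing guidance.

    # Placeholder per-host layout for this sketch: two disk groups,
    # each with four 4 TB capacity devices.
    DISK_GROUPS = 2
    CAPACITY_DEVICES_PER_GROUP = 4
    CAPACITY_DEVICE_TB = 4.0

    capacity_tier_tb = DISK_GROUPS * CAPACITY_DEVICES_PER_GROUP * CAPACITY_DEVICE_TB
    min_cache_tier_tb = 0.10 * capacity_tier_tb   # cache tier >= 10% of capacity tier

    print(f"Capacity tier per host: {capacity_tier_tb:.1f} TB")
    print(f"Minimum cache tier per host: {min_cache_tier_tb:.1f} TB "
          f"({min_cache_tier_tb / DISK_GROUPS:.1f} TB per disk group)")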
- All storage devices claimed by vSAN contribute to capacity and performance. Each host's storage devices claimed by vSAN form a storage pool. The storage pool represents the amount of caching and capacity provided by the host to the vSAN datastore.
- ESXi hosts must be on the vSAN ESA Ready Node HCL with a minimum of 512 GB RAM per host.
Note vSAN ESA stretched clusters are not supported by VMware Cloud Foundation.
For best practices, capacity considerations, and general recommendations about designing and
sizing a vSAN cluster, see the VMware vSAN Design and Sizing Guide.
Consider the overall traffic bandwidth and decide how to isolate storage traffic.
- Consider how much vSAN data traffic is running between ESXi hosts.
- The amount of storage traffic depends on the number of VMs that are running in the cluster, and on how write-intensive the I/O process is for the applications running in the VMs (see the estimate sketch below).
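A rough way to reason about that traffic is to estimate write bandwidth from the VM count, per-VM write IOPS, I/O size, and the number of data replicas. The figures in the following sketch are illustrative assumptions, not sizing guidance.

    # Illustrative assumptions only; replace with measured values for your workloads.
    VM_COUNT = 200
    WRITE_IOPS_PER_VM = 100
    IO_SIZE_KB = 32
    REPLICAS = 2          # for example, RAID-1 mirroring with FTT=1

    write_mbps = VM_COUNT * WRITE_IOPS_PER_VM * IO_SIZE_KB / 1024   # MB/s of guest writes
    backend_mbps = write_mbps * REPLICAS                            # replicated vSAN traffic
    backend_gbit = backend_mbps * 8 / 1000

    print(f"Guest write throughput: {write_mbps:.0f} MB/s")
    print(f"Estimated vSAN back-end write traffic: {backend_mbps:.0f} MB/s "
          f"(~{backend_gbit:.1f} Gbit/s across the cluster)")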
For information on the physical network setup for vSAN traffic, and other system traffic, see
Chapter 5 Physical Network Infrastructure Design for VMware Cloud Foundation.
For information on the virtual network setup for vSAN traffic, and other system traffic, see
Logical vSphere Networking Design for VMware Cloud Foundation.
Physical NIC speed
- For best and predictable performance (IOPS) of the environment, this design uses a minimum of a 10-GbE connection, with 25-GbE recommended, for use with vSAN OSA all-flash configurations.
- For vSAN ESA, a 25-GbE connection is recommended.

VMkernel network adapters for vSAN
- The vSAN VMkernel network adapter on each ESXi host is created when you enable vSAN on the cluster. Connect the vSAN VMkernel network adapters on all ESXi hosts in a cluster to a dedicated distributed port group, including ESXi hosts that are not contributing storage resources to the cluster.
Tiny witness appliance: supports up to 10 VMs and 750 witness components.
VMware Cloud Foundation uses vSAN witness traffic separation where you can use a VMkernel
adapter for vSAN witness traffic that is different from the adapter for vSAN data traffic. In this
design, you configure vSAN witness traffic in the following way:
- On each ESXi host in both availability zones, place the vSAN witness traffic on the management VMkernel adapter.
- On the vSAN witness appliance, use the same VMkernel adapter for both management and witness traffic.

For information about vSAN witness traffic separation, see vSAN Stretched Cluster Guide on VMware Cloud Platform Tech Zone.

Management network
Routed to the management networks in both availability zones. Connect the first VMkernel adapter of the vSAN witness appliance to this network. The second VMkernel adapter on the vSAN witness appliance is not used.
- Management traffic
(Figure: vSAN witness appliance connectivity. The witness management network is routed through a physical upstream router to the management networks of the management domain and the VI workload domain in availability zones 1 and 2; each availability zone also has a VI workload domain vSAN network.)
For related vSphere cluster requirements and recommendations, see vSphere Cluster Design
Requirements and Recommendations for VMware Cloud Foundation.
Table 6-5. vSAN ESA Design Requirements for VMware Cloud Foundation

VCF-VSAN-REQD-CFG-003
- Design requirement: Verify the hardware components used in your vSAN deployment are on the vSAN Hardware Compatibility List.
- Justification: Prevents hardware-related failures during workload deployment.
- Implication: Limits the number of compatible hardware configurations that can be used.
Table 6-6. vSAN Design Requirements for Stretched Clusters with VMware Cloud Foundation

VCF-VSAN-REQD-CFG-004
- Design requirement: Add the following setting to the default vSAN storage policy: Site disaster tolerance = Site mirroring - stretched cluster.
- Justification: Provides the necessary protection for virtual machines in each availability zone, with the ability to recover from an availability zone outage.
- Implication: You might need additional policies if third-party virtual machines are to be hosted in these clusters because their performance or availability requirements might differ from what the default VMware vSAN policy supports.

VCF-VSAN-REQD-CFG-005
- Design requirement: Configure two fault domains, one for each availability zone. Assign each host to their respective availability zone fault domain.
- Justification: Fault domains are mapped to availability zones to provide logical host separation and ensure a copy of vSAN data is always available even when an availability zone goes offline.
- Implication: You must provide additional raw storage when the site mirroring - stretched cluster option is selected, and fault domains are enabled.

VCF-VSAN-REQD-CFG-007
- Design requirement: Configure an individual vSAN storage policy for each stretched cluster.
- Justification: The vSAN storage policy of a stretched cluster cannot be shared with other clusters.
- Implication: You must configure additional vSAN storage policies.

VCF-VSAN-WTN-REQD-CFG-001
- Design requirement: Deploy a vSAN witness appliance in a location that is not local to the ESXi hosts in any of the availability zones.
- Justification: Ensures availability of vSAN witness components in the event of a failure of one of the availability zones.
- Implication: You must provide a third physically separate location that runs a vSphere environment. You might use a VMware Cloud Foundation instance in a separate physical location.

VCF-VSAN-WTN-REQD-CFG-002
- Design requirement: Deploy a witness appliance that corresponds to the required cluster storage capacity.
- Justification: Ensures the witness appliance is sized to support the projected workload storage consumption.
- Implication: The vSphere environment at the witness location must satisfy the resource requirements of the witness appliance.

VCF-VSAN-WTN-REQD-CFG-003
- Design requirement: Connect the first VMkernel adapter of the vSAN witness appliance to the management network in the witness site.
- Justification: Enables connecting the witness appliance to the workload domain vCenter Server.
- Implication: The management networks in both availability zones must be routed to the management network in the witness site.

VCF-VSAN-WTN-REQD-CFG-004
- Design requirement: Allocate a statically assigned IP address and host name to the management adapter of the vSAN witness appliance.
- Justification: Simplifies maintenance and tracking, and implements a DNS configuration.
- Implication: Requires precise IP address management.
VCF-VSAN-WTN-REQD-CFG-005
- Design requirement: Configure forward and reverse DNS records for the vSAN witness appliance for the VMware Cloud Foundation instance.
- Justification: Enables connecting the vSAN witness appliance to the workload domain vCenter Server by FQDN instead of IP address.
- Implication: You must provide DNS records for the vSAN witness appliance.

VCF-VSAN-WTN-REQD-CFG-006
- Design requirement: Configure time synchronization by using an internal NTP time source for the vSAN witness appliance.
- Justification: Prevents any failures in the stretched cluster configuration that are caused by time mismatch between the vSAN witness appliance and the ESXi hosts in both availability zones and the workload domain vCenter Server.
- Implication: An operational NTP service must be available in the environment. All firewalls between the vSAN witness appliance and the NTP servers must allow NTP traffic on the required network ports.
Table 6-7. vSAN Design Recommendations for VMware Cloud Foundation

VCF-VSAN-RCMD-CFG-001
- Design recommendation: Provide sufficient raw capacity to meet the planned needs of the workload domain cluster.
- Justification: Ensures that sufficient resources are present in the workload domain cluster, preventing the need to expand the vSAN datastore in the future.
- Implication: None.

VCF-VSAN-RCMD-CFG-002
- Design recommendation: Ensure that at least 30% of free space is always available on the vSAN datastore.
- Justification: This reserved capacity is set aside for host maintenance mode data evacuation, component rebuilds, rebalancing operations, and VM snapshots.
- Implication: Increases the amount of available storage needed.
VCF-VSAN-RCMD-CFG-003
- Design recommendation: Use the default VMware vSAN storage policy.
- Justification: Provides the level of redundancy that is needed in the workload domain cluster. Provides the level of performance that is enough for the individual workloads.
- Implication: You might need additional policies for third-party virtual machines hosted in these clusters because their performance or availability requirements might differ from what the default VMware vSAN policy supports.

VCF-VSAN-RCMD-CFG-004
- Design recommendation: Leave the default virtual machine swap file as a sparse object on vSAN.
- Justification: Sparse virtual swap files consume capacity on vSAN only as they are accessed. As a result, you can reduce the consumption on the vSAN datastore if virtual machines do not experience memory over-commitment, which would require the use of the virtual swap file.
- Implication: None.

VCF-VSAN-RCMD-CFG-005
- Design recommendation: Use the existing vSphere Distributed Switch instance for the workload domain cluster.
- Justification: Reduces the complexity of the network design. Reduces the number of physical NICs required.
- Implication: All traffic types can be shared over common uplinks.

VCF-VSAN-RCMD-CFG-006
- Design recommendation: Configure jumbo frames on the VLAN for vSAN traffic.
- Justification: Simplifies configuration because jumbo frames are also used to improve the performance of vSphere vMotion and NFS storage traffic. Reduces the CPU overhead, resulting in high network usage.
- Implication: Every device in the network must support jumbo frames.

VCF-VSAN-RCMD-CFG-007
- Design recommendation: Configure vSAN in an all-flash configuration in the default workload domain cluster.
- Justification: Meets the performance needs of the default workload domain cluster.
- Implication: All vSAN disks must be flash disks, which might cost more than magnetic disks.
Table 6-8. vSAN OSA Design Recommendations for VMware Cloud Foundation

VCF-VSAN-RCMD-CFG-008
- Design recommendation: Ensure that the storage I/O controller has a minimum queue depth of 256 set.
- Justification: Storage controllers with lower queue depths can cause performance and stability problems when running vSAN. vSAN ReadyNode servers are configured with the correct queue depths for vSAN.
- Implication: Limits the number of compatible I/O controllers that can be used for storage.

VCF-VSAN-RCMD-CFG-009
- Design recommendation: Do not use the storage I/O controllers that are running vSAN disk groups for another purpose.
- Justification: Running non-vSAN disks, for example, VMFS, on a storage I/O controller that is running a vSAN disk group can impact vSAN performance.
- Implication: If non-vSAN disks are required in ESXi hosts, you must have an additional storage I/O controller in the host.
VCF-VSAN-RCMD-CFG-010
- Design recommendation: Configure vSAN with a minimum of two disk groups per ESXi host.
- Justification: Reduces the size of the fault domain and spreads the I/O load over more disks for better performance.
- Implication: Using multiple disk groups requires more disks in each ESXi host.

VCF-VSAN-RCMD-CFG-011
- Design recommendation: For the cache tier in each disk group, use a flash-based drive that is at least 600 GB large.
- Justification: Provides enough cache for both hybrid or all-flash vSAN configurations to buffer I/O and ensure disk group performance. Additional space in the cache tier does not increase performance.
- Implication: Using larger flash disks can increase the initial host cost.
Table 6-9. vSAN ESA Design Recommendations for VMware Cloud Foundation

VCF-VSAN-RCMD-CFG-012
- Design recommendation: Activate auto-policy management.
- Justification: Configures optimized storage policies based on the cluster type and the number of hosts in the cluster inventory. Changes to the number of hosts in the cluster or Host Rebuild Reserve will prompt you to make a suggested adjustment to the optimized storage policy.
- Implication: You must activate auto-policy management manually.

VCF-VSAN-RCMD-CFG-013
- Design recommendation: Activate vSAN ESA compression.
- Justification: Activated by default, it also improves performance.
- Implication: PostgreSQL databases and other applications might use their own compression capabilities. In these cases, using a storage policy with the compression capability turned off will save CPU cycles. You can disable vSAN ESA compression for such workloads through the use of the Storage Policy Based Management (SPBM) framework.

VCF-VSAN-RCMD-CFG-014
- Design recommendation: Use NICs with a minimum 25-GbE capacity.
- Justification: 10-GbE NICs will limit the scale and performance of a vSAN ESA cluster because usually performance requirements increase over the lifespan of the cluster.
- Implication: Requires 25-GbE or faster network fabric.
Table 6-10. vSAN Design Recommendations for Stretched Clusters with VMware Cloud Foundation

VCF-VSAN-WTN-RCMD-CFG-001
- Design recommendation: Configure the vSAN witness appliance to use the first VMkernel adapter, that is, the management interface, for vSAN witness traffic.
- Justification: Removes the requirement to have static routes on the witness appliance as witness traffic is routed over the management network.
- Implication: The management networks in both availability zones must be routed to the management network in the witness site.

VCF-VSAN-WTN-RCMD-CFG-002
- Design recommendation: Place witness traffic on the management VMkernel adapter of all the ESXi hosts in the workload domain.
- Justification: Separates the witness traffic from the vSAN data traffic. Witness traffic separation provides the following benefits: removes the requirement to have static routes from the vSAN networks in both availability zones to the witness site; removes the requirement to have jumbo frames enabled on the path between each availability zone and the witness site because witness traffic can use a regular MTU size of 1500 bytes.
- Implication: The management networks in both availability zones must be routed to the management network in the witness site.
Chapter 7 vSphere Design for VMware Cloud Foundation
The vSphere design includes determining the configuration of the vCenter Server instances, ESXi hosts, vSphere clusters, and vSphere networking for a VMware Cloud Foundation environment.
To provide the resources required to run the management and workload components of the
VMware Cloud Foundation instance, each ESXi host consists of the following elements:
- Storage devices
- Network interfaces

(Figure: ESXi host elements. Compute (CPU and memory), storage, and network.)
For detailed sizing based on the overall profile of the VMware Cloud Foundation instance you
plan to deploy, see VMware Cloud Foundation Planning and Preparation Workbook.
The configuration and assembly process for each system should be standardized, with all
components installed in the same manner on each ESXi host. Because standardization of
the physical configuration of the ESXi hosts removes variability, the infrastructure is easily
managed and supported. ESXi hosts are deployed with identical configuration across all cluster
members, including storage and networking configurations. For example, consistent PCIe card
slot placement, especially for network interface controllers, is essential for accurate mapping
of physical network interface controllers to virtual network resources. By using identical
configurations, you have an even balance of virtual machine storage components across storage
and compute resources.
VCF-ESX-REQD-CFG-002
- Design requirement: Ensure each ESXi host matches the required CPU, memory, and storage specification.
- Justification: Ensures workloads will run without contention even during failure and maintenance conditions.
- Implication: Assemble the server specification and number according to the sizing in VMware Cloud Foundation Planning and Preparation Workbook, which is based on projected deployment size.
VCF-ESX-RCMD-CFG-001
- Design recommendation: Use vSAN ReadyNodes with vSAN storage for each ESXi host in the management domain.
- Justification: Your management domain is fully compatible with vSAN at deployment. For information about the models of physical servers that are vSAN-ready, see vSAN Compatibility Guide for vSAN ReadyNodes.
- Implication: Hardware choices might be limited. If you plan to use a server configuration that is not a vSAN ReadyNode, your CPU, disks, and I/O modules must be listed on the VMware Compatibility Guide under CPU Series and vSAN Compatibility List aligned to the ESXi version specified in VMware Cloud Foundation 5.1 Release Notes.

VCF-ESX-RCMD-CFG-002
- Design recommendation: Allocate hosts with uniform configuration across the default management vSphere cluster.
- Justification: A balanced cluster has these advantages: predictable performance even during hardware failures; minimal impact of resynchronization or rebuild operations on performance.
- Implication: You must apply vendor sourcing, budgeting, and procurement considerations for uniform server nodes on a per-cluster basis.

VCF-ESX-RCMD-CFG-003
- Design recommendation: When sizing CPU, do not consider multithreading technology and associated performance gains.
- Justification: Although multithreading technologies increase CPU performance, the performance gain depends on running workloads and differs from one case to another.
- Implication: Because you must provide more physical CPU cores, costs increase and hardware choices become limited.

VCF-ESX-RCMD-CFG-004
- Design recommendation: Install and configure all ESXi hosts in the default management cluster to boot using a 128-GB device or larger.
- Justification: Provides hosts that have large memory, that is, greater than 512 GB, with enough space for the scratch partition when using vSAN.
- Implication: None.

VCF-ESX-RCMD-CFG-006
- Design recommendation: For workloads running in the default management cluster, save the virtual machine swap file at the default location.
- Justification: Simplifies the configuration process.
- Implication: Increases the amount of replication traffic for management workloads that are recovered as part of the disaster recovery process.

VCF-ESX-RCMD-NET-001
- Design recommendation: Place the ESXi hosts in each management domain cluster on a host management network that is separate from the VM management network.
- Justification: Enables the separation of the physical VLAN between ESXi hosts and the other management components for security reasons.
- Implication: Increases the number of VLANs required.

VCF-ESX-RCMD-NET-002
- Design recommendation: Place the ESXi hosts in each VI workload domain on a separate host management VLAN-backed network.
- Justification: Enables the separation of the physical VLAN between the ESXi hosts in different VI workload domains for security reasons.
- Implication: Increases the number of VLANs required. For each VI workload domain, you must allocate a separate management subnet.

VCF-ESX-RCMD-SEC-001
- Design recommendation: Deactivate SSH access on all ESXi hosts in the management domain by having the SSH service stopped and using the default SSH service policy Start and stop manually.
- Justification: Ensures compliance with the vSphere Security Configuration Guide and with security best practices. Disabling SSH access reduces the risk of security attacks on the ESXi hosts through the SSH interface.
- Implication: You must activate SSH access manually for troubleshooting or support activities as VMware Cloud Foundation deactivates SSH on ESXi hosts after workload domain deployment.

VCF-ESX-RCMD-SEC-002
- Design recommendation: Set the advanced setting UserVars.SuppressShellWarning to 0 across all ESXi hosts in the management domain (see the compliance-check sketch after this table).
- Justification: Ensures compliance with the vSphere Security Configuration Guide and with security best practices. Enables the warning message that appears in the vSphere Client every time SSH access is activated on an ESXi host.
- Implication: You must turn off SSH enablement warning messages manually when performing troubleshooting or support activities.
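For recommendation VCF-ESX-RCMD-SEC-002, the current value of the advanced setting can be checked across hosts programmatically. The following pyVmomi sketch is one possible approach and is not part of the guide's prescribed procedure; the vCenter Server address and credentials are placeholders.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholders; replace with your management domain vCenter Server and credentials.
    VCENTER = "mgmt-vcenter.rainpole.example"
    USER = "[email protected]"
    PASSWORD = "********"

    context = ssl._create_unverified_context()  # lab use only; use a trusted context in production
    si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=context)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            # Read the advanced option; a value of 0 keeps the SSH warning visible.
            options = host.configManager.advancedOption.QueryOptions("UserVars.SuppressShellWarning")
            for option in options:
                status = "compliant" if int(option.value) == 0 else "not compliant"
                print(f"{host.name}: {option.key} = {option.value} ({status})")
        view.Destroy()
    finally:
        Disconnect(si)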
- High Availability Design for vCenter Server for VMware Cloud Foundation
  Protecting vCenter Server is important because it is the central point of management and monitoring for each workload domain.
- vCenter Server Design Requirements and Recommendations for VMware Cloud Foundation
  Each workload domain in VMware Cloud Foundation is managed by a single vCenter Server instance. You determine the size of this vCenter Server instance and its storage requirements according to the number of ESXi hosts per cluster and the number of virtual machines you plan to run on these clusters (see the sizing sketch below).
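Appliance sizing follows the host and VM counts that the vCenter Server instance must manage. As an illustration, the following sketch picks the smallest standard vCenter Server appliance deployment size whose commonly documented host and VM maximums cover the inventory; confirm the thresholds and the resulting size against the VMware Cloud Foundation Planning and Preparation Workbook for your release.

    # Commonly documented vCenter Server appliance deployment sizes
    # (managed hosts, managed VMs); confirm against current product documentation.
    APPLIANCE_SIZES = [
        ("Tiny", 10, 100),
        ("Small", 100, 1000),
        ("Medium", 400, 4000),
        ("Large", 1000, 10000),
        ("X-Large", 2000, 35000),
    ]

    def pick_appliance_size(hosts: int, vms: int) -> str:
        """Return the smallest deployment size that covers the host and VM counts."""
        for name, max_hosts, max_vms in APPLIANCE_SIZES:
            if hosts <= max_hosts and vms <= max_vms:
                return name
        raise ValueError("Inventory exceeds the largest appliance size")

    if __name__ == "__main__":
        # Example inventory for a VI workload domain (illustrative numbers).
        print(pick_appliance_size(hosts=64, vms=1500))   # -> Medium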
(Figure: Logical design of vCenter Server in a VCF instance. Users access the management domain vCenter Server and the VI workload domain vCenter Server instances through the user interface and API; the vCenter Server instances rely on supporting infrastructure such as shared storage, DNS, and NTP, and manage the ESXi virtual infrastructure.)
Single availability zone:
- One vCenter Server instance for the management domain that manages the management components of the SDDC, such as the vCenter Server instances for the VI workload domains, NSX Manager cluster nodes, SDDC Manager, and other solutions.
- Optionally, additional vCenter Server instances for the VI workload domains to support customer workloads.
- vSphere HA protecting all vCenter Server appliances.

Multiple availability zones:
- One vCenter Server instance for the management domain that manages the management components of the SDDC, such as vCenter Server instances for the VI workload domains, NSX Manager cluster nodes, SDDC Manager, and other solutions.
- Optionally, additional vCenter Server instances for the VI workload domains to support customer workloads.
- vSphere HA protecting all vCenter Server appliances.
- A should-run-on-host-in-group VM-Host affinity rule in vSphere DRS specifying that the vCenter Server appliances should run in the primary availability zone unless an outage in this zone occurs.
When you deploy a workload domain, you select a vCenter Server appliance size that is suitable
for the scale of your environment. The option that you select determines the number of CPUs
and the amount of memory of the appliance. For detailed sizing according to a collective profile
of the VMware Cloud Foundation instance you plan to deploy, refer to the VMware Cloud
Foundation Planning and Preparation Workbook .
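As an illustration of this sizing decision, the following sketch (Python) maps appliance sizes to example resource figures and picks the smallest size that covers a planned inventory. The vCPU, memory, host, and VM figures are placeholder assumptions for illustration only; use the VMware Cloud Foundation Planning and Preparation Workbook and the vCenter Server documentation for the authoritative values for your release.

# Illustrative sketch only: the size names come from the vSphere deployment
# options; the numeric limits below are assumed placeholder values, not
# authoritative figures for any specific release.
SIZES = [
    # name, vCPUs, memory_gb, max_hosts, max_vms  (assumed example values)
    ("tiny",     2, 14,   10,   100),
    ("small",    4, 21,  100,  1000),
    ("medium",   8, 30,  400,  4000),
    ("large",   16, 39, 1000, 10000),
    ("x-large", 24, 58, 2500, 45000),
]

def pick_vcenter_size(hosts: int, vms: int) -> str:
    """Return the smallest appliance size that covers the planned inventory."""
    for name, _cpu, _mem, max_hosts, max_vms in SIZES:
        if hosts <= max_hosts and vms <= max_vms:
            return name
    raise ValueError("Inventory exceeds the largest appliance size")

print(pick_vcenter_size(hosts=64, vms=1500))  # -> "medium" with these assumptions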
VMware Cloud Foundation supports only vSphere HA as a high availability method for vCenter
Server.
Table 7-7. vCenter Server Design Requirements for VMware Cloud Foundation
Table 7-8. vCenter Server Design Recommendations for VMware Cloud Foundation
VCF-VCS-RCMD-CFG-002
Recommendation: Deploy a vCenter Server appliance with the appropriate storage size.
Justification: Ensures resource availability and usage efficiency per workload domain.
Implication: The default size for a management domain is Small and for VI workload domains is Medium. To override these values, you must use the API.

VCF-VCS-RCMD-CFG-003
Recommendation: Protect workload domain vCenter Server appliances by using vSphere HA.
Justification: vSphere HA is the only supported method to protect vCenter Server availability in VMware Cloud Foundation.
Implication: vCenter Server becomes unavailable during a vSphere HA failover.

VCF-VCS-RCMD-CFG-004
Recommendation: In vSphere HA, set the restart priority policy for the vCenter Server appliance to high.
Justification: vCenter Server is the management and control plane for physical and virtual infrastructure. In a vSphere HA event, to ensure the rest of the SDDC management stack comes up faultlessly, the workload domain vCenter Server must be available first, before the other management components come online.
Implication: If the restart priority for another virtual machine is set to highest, the connectivity delay for the management components will be longer.
Table 7-9. vCenter Server Design Recommendations for vSAN Stretched Clusters with VMware
Cloud Foundation
You select the vCenter Single Sign-On topology according to the needs and design objectives of
your deployment.
Table 7-10. vCenter Single Sign-On Topologies for VMware Cloud Foundation
Multiple vCenter Server Instances - Single vCenter Single Sign-On Domain
vCenter Single Sign-On Domain Topology: One vCenter Single Sign-On domain with the management domain and all VI workload domain vCenter Server instances in enhanced linked mode (ELM) using a ring topology.
Benefits: Enables sharing of vCenter Server roles, tags, and licenses between all workload domain instances.
Drawbacks: Limited to 15 workload domains per VMware Cloud Foundation instance, including the management domain.

Multiple vCenter Server Instances - Multiple vCenter Single Sign-On Domains
vCenter Single Sign-On Domain Topology:
n One vCenter Single Sign-On domain with at least the management domain vCenter Server instance.
n Additional VI workload domains, each with their own isolated vCenter Single Sign-On domain.
Benefits:
n Enables isolation at the vCenter Single Sign-On domain layer for increased security separation.
n Supports up to 25 workload domains per VMware Cloud Foundation instance.
Drawbacks: Additional password management overhead per vCenter Single Sign-On domain.
Figure 7-3. Single vCenter Server Instance - Single vCenter Single Sign-On Domain
Because the Single vCenter Server Instance - Single vCenter Single Sign-On Domain topology
contains a single vCenter Server instance by definition, no relevant design requirements or
recommendations for vCenter Single Sign-On are needed.
Figure 7-4. Multiple vCenter Server Instances - Single vCenter Single Sign-On Domain
Table 7-11. Design Requirements for the Multiple vCenter Server Instance - Single vCenter Single
Sign-on Domain Topology for VMware Cloud Foundation
VCF-VCS-REQD-SSO-STD-001
Requirement: Join all vCenter Server instances within a VMware Cloud Foundation instance to a single vCenter Single Sign-On domain.
Justification: When all vCenter Server instances are in the same vCenter Single Sign-On domain, they can share authentication and license data across all components.
Implication:
n Only one vCenter Single Sign-On domain exists.
n The number of linked vCenter Server instances in the same vCenter Single Sign-On domain is limited to 15 instances. Because each workload domain uses a dedicated vCenter Server instance, you can deploy up to 15 domains within each VMware Cloud Foundation instance.
Figure 7-5. Multiple vCenter Server Instances - Multiple vCenter Single Sign-On Domain
Table 7-12. Design Requirements for Multiple vCenter Server Instance - Multiple vCenter Single
Sign-On Domain Topology for VMware Cloud Foundation
VCF-VCS-REQD-SSO-ISO-001
Requirement: Create all vCenter Server instances within a VMware Cloud Foundation instance in their own unique vCenter Single Sign-On domains.
Justification:
n Enables isolation at the vCenter Single Sign-On domain layer for increased security separation.
n Supports up to 25 workload domains.
Implication:
n Each vCenter Server instance is managed through its own pane of glass using a different set of administrative credentials.
n You must manage password rotation for each vCenter Single Sign-On domain separately.
n vSphere Cluster Life Cycle Method Design for VMware Cloud Foundation
vSphere Lifecycle Manager is used to manage the vSphere clusters in each VI workload
domain.
n vSphere Cluster Design Requirements and Recommendations for VMware Cloud Foundation
The design of a vSphere cluster is subject to a minimum number of hosts, design
requirements, and design recommendations.
When you design the cluster layout in vSphere, consider the following guidelines:
n Compare the capital costs of purchasing fewer, larger ESXi hosts with the costs of purchasing
more, smaller ESXi hosts. Costs vary between vendors and models. Evaluate the risk of losing
one larger host in a scaled-up cluster against the impact on the business of the higher chance of losing one or more smaller hosts in a scale-out cluster (see the sketch after this list).
n Evaluate the operational costs of managing a few ESXi hosts with the costs of managing
more ESXi hosts.
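The following sketch (Python, with purely illustrative numbers) quantifies the trade-off from the first guideline: for the same total capacity, a single host failure removes a larger share of a scaled-up cluster than of a scaled-out cluster.

# Rough sketch of the scale-up versus scale-out trade-off (values illustrative).
def capacity_lost_per_host_failure(hosts_in_cluster: int) -> float:
    """Fraction of cluster capacity lost when a single host fails."""
    return 1.0 / hosts_in_cluster

for hosts in (4, 8, 16):
    print(f"{hosts:>2} hosts: a single host failure removes "
          f"{capacity_lost_per_host_failure(hosts):.1%} of the cluster capacity")
# Fewer, larger hosts concentrate more risk in each individual host failure.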
Figure 7-6. Logical vSphere Cluster Layout with a Single Availability Zone for VMware Cloud
Foundation
Figure 7-7. Logical vSphere Cluster Layout for Multiple Availability Zones for VMware Cloud
Foundation
Cluster types per VI workload domain A VI workload domain can include either local clusters or
a remote cluster.
Latency between the central site and the remote site n Maximum: 100 ms
Bandwidth between the central site and the remote site n Minimum: 10 Mbps
When you deploy a workload domain, you choose a vSphere cluster life cycle method according
to your requirements.
vSphere Lifecycle Manager images
Description: vSphere Lifecycle Manager images contain base images, vendor add-ons, firmware, and drivers.
Benefits:
n Supports vSAN stretched clusters.
n Supports VI workload domains with vSphere with Tanzu.
n Supports NVIDIA GPU-enabled clusters.
n Supports 2-node NFS, FC, or vVols clusters.
Drawbacks: An initial cluster image is required during workload domain or cluster deployment.

vSphere Lifecycle Manager baselines
Description: An upgrade baseline contains the ESXi image and a patch baseline contains the respective patches for the ESXi hosts.
Benefits:
n Supports vSAN stretched clusters.
n Supports VI workload domains with vSphere with Tanzu.
Drawbacks:
n Not supported for NVIDIA GPU-enabled clusters.
n Not supported for 2-node NFS, FC, or vVols clusters.
For vSAN design requirements and recommendations, see vSAN Design Requirements and
Recommendations for VMware Cloud Foundation.
The requirements for the ESXi hosts in a workload domain in VMware Cloud Foundation
are related to the system requirements of the workloads hosted in the domain. The ESXi
requirements include number, server configuration, amount of hardware resources, networking,
and certificate management. Following these best practices helps you design for optimal environment operation.
vSAN minimum hosts per cluster (two availability zones): 8 for the management domain, 6 for a VI workload domain.
Reserved capacity for handling ESXi host failures per cluster (single availability zone): 25% CPU and memory for the management domain and 33% CPU and memory for a VI workload domain; in both cases the cluster tolerates one host failure (see the sketch below).
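A minimal sketch of how the reserved-capacity percentages above follow from an N+1 host-failure policy; the cluster sizes are examples.

# Reserving the capacity of one host in an N-host cluster corresponds to
# reserving 1/N of CPU and memory for vSphere HA admission control.
def host_failure_reservation(hosts: int, failures_to_tolerate: int = 1) -> float:
    """Fraction of cluster CPU/memory to reserve for admission control."""
    return failures_to_tolerate / hosts

print(f"{host_failure_reservation(4):.0%}")  # 4-host cluster -> 25%
print(f"{host_failure_reservation(3):.0%}")  # 3-host cluster -> 33%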
Table 7-16. vSphere Cluster Design Requirements for VMware Cloud Foundation
Table 7-17. vSphere Cluster Design Requirements for vSAN Stretched Clusters with VMware
Cloud Foundation
VCF-CLS-REQD-CFG-007
Requirement: Enable the Override default gateway for this adapter setting on the vSAN VMkernel adapters on all ESXi hosts.
Justification: Enables routing the vSAN data traffic through the vSAN network gateway rather than through the management gateway.
Implication: vSAN networks across availability zones must have a route to each other.

VCF-CLS-REQD-CFG-008
Requirement: Create a host group for each availability zone and add the ESXi hosts in the zone to the respective group.
Justification: Makes it easier to manage which virtual machines run in which availability zone.
Implication: You must create and maintain VM-Host DRS group rules.
Table 7-18. vSphere Cluster Design Recommendations for VMware Cloud Foundation
VCF-CLS-RCMD-CFG-001
Recommendation: Use vSphere HA to protect all virtual machines against failures.
Justification: vSphere HA supports a robust level of protection for both ESXi host and virtual machine availability.
Implication: You must provide sufficient resources on the remaining hosts so that virtual machines can be restarted on those hosts in the event of a host outage.

VCF-CLS-RCMD-CFG-002
Recommendation: Set host isolation response to Power Off and restart VMs in vSphere HA.
Justification: vSAN requires that the host isolation response be set to Power Off and to restart virtual machines on available ESXi hosts.
Implication: If a false positive event occurs, virtual machines are powered off and an ESXi host is declared isolated incorrectly.

VCF-CLS-RCMD-CFG-005
Recommendation: Set the advanced cluster setting das.iostatsinterval to 0 to deactivate monitoring the storage and network I/O activities of the management appliances.
Justification: Enables triggering a restart of a management appliance when an OS failure occurs and heartbeats are not received from VMware Tools, instead of waiting additionally for the I/O check to complete.
Implication: If you want to specifically enable I/O monitoring, you must configure the das.iostatsinterval advanced setting.
Table 7-18. vSphere Cluster Design Recommendations for VMware Cloud Foundation (continued)
VCF-CLS-RCMD-CFG-006
Recommendation: Enable vSphere DRS on all clusters, using the default fully automated mode with medium threshold.
Justification: Provides the best trade-off between load balancing and unnecessary migrations with vSphere vMotion.
Implication: If a vCenter Server outage occurs, the mapping from virtual machines to ESXi hosts might be difficult to determine.

VCF-CLS-RCMD-CFG-007
Recommendation: Enable Enhanced vMotion Compatibility (EVC) on all clusters in the management domain.
Justification: Supports cluster upgrades without virtual machine downtime.
Implication: You must enable EVC only if the clusters contain hosts with CPUs from the same vendor. You must enable EVC on the default management domain cluster during bringup.

VCF-CLS-RCMD-CFG-008
Recommendation: Set the cluster EVC mode to the highest available baseline that is supported for the lowest CPU architecture on the hosts in the cluster.
Justification: Supports cluster upgrades without virtual machine downtime.
Implication: None.

VCF-CLS-RCMD-LCM-001
Recommendation: Use images as the life cycle management method for VI workload domains.
Justification: vSphere Lifecycle Manager images simplify the management of firmware and vendor add-ons.
Implication: An initial cluster image is required during workload domain or cluster deployment.
Table 7-19. vSphere Cluster Design Recommendations for vSAN Stretched Clusters with VMware
Cloud Foundation
VCF-CLS-RCMD-CFG-009
Recommendation: Increase the admission control percentage to half of the ESXi hosts in the cluster.
Justification: Allocating only half of a stretched cluster ensures that all VMs have enough resources if an availability zone outage occurs (see the sketch after this table).
Implication: In a cluster of 8 ESXi hosts, the resources of only 4 ESXi hosts are available for use. If you add more ESXi hosts to the default management cluster, add them in pairs, one per availability zone.

VCF-CLS-RCMD-CFG-010
Recommendation: Create a virtual machine group for each availability zone and add the VMs in the zone to the respective group.
Justification: Ensures that virtual machines are located only in the assigned availability zone to avoid unnecessary vSphere vMotion migrations.
Implication: You must add virtual machines to the allocated group manually.

VCF-CLS-RCMD-CFG-011
Recommendation: Create a should-run-on-hosts-in-group VM-Host affinity rule to run each group of virtual machines on the respective group of hosts in the same availability zone.
Justification: Ensures that virtual machines are located only in the assigned availability zone to avoid unnecessary vSphere vMotion migrations.
Implication: You must manually create the rules.
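A minimal sketch of recommendation VCF-CLS-RCMD-CFG-009: with hosts split evenly across two availability zones, admission control reserves half of the cluster so that a single zone can run the full load. Host counts are examples.

# Admission control for a vSAN stretched cluster with two availability zones.
def stretched_admission_control_percentage(total_hosts: int) -> int:
    """CPU/memory percentage to reserve when hosts are split evenly across two zones."""
    if total_hosts % 2:
        raise ValueError("Add hosts in pairs, one per availability zone")
    return 50  # half of the cluster is reserved regardless of size

hosts_per_zone = 4
total = hosts_per_zone * 2
print(f"{total} hosts -> reserve {stretched_admission_control_percentage(total)}%: "
      f"only {total // 2} hosts' worth of resources are usable")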
VMware Cloud Foundation supports NSX Overlay traffic over a single vSphere Distributed Switch
per cluster. Additional distributed switches are supported for other traffic types.
When using vSAN ReadyNodes, you must define the number of vSphere Distributed Switches at
workload domain deployment time. You cannot add additional vSphere Distributed Switches post
deployment.
Table 7-20. Configuration Options for vSphere Distributed Switch for VMware Cloud Foundation
Single vSphere Distributed Switch for hosts with two physical NICs
Management Domain Options: One vSphere Distributed Switch for each cluster with all traffic using two uplinks.
VI Workload Domain Options: One vSphere Distributed Switch for each cluster with all traffic using two uplinks.
Benefits: Requires the least number of physical NICs and switch ports.
Drawbacks: All traffic shares the same two uplinks.

Single vSphere Distributed Switch for hosts with four or six physical NICs
Management Domain Options:
n One vSphere Distributed Switch for each cluster with four uplinks by using the predefined profiles in the Deployment Parameters Workbook in VMware Cloud Builder to deploy the default management cluster.
n One vSphere Distributed Switch for each cluster with four or six uplinks by using the VMware Cloud Builder API to deploy the default management cluster.
VI Workload Domain Options: One vSphere Distributed Switch for each cluster with four or six uplinks.
Benefits: Provides support for traffic separation across different uplinks.
Drawbacks: You must provide additional physical NICs and switch ports.
Table 7-20. Configuration Options for vSphere Distributed Switch for VMware Cloud Foundation
(continued)
n Maximum 16 vSphere Distributed Switches per cluster. You use the VMware Cloud Builder API to deploy the default management cluster using combinations of vSphere Distributed Switches and physical NIC configurations that are not available as predefined profiles in the Deployment Parameters Workbook.
n You can use only one of the vSphere Distributed Switches for NSX overlay traffic.
Table 7-21. Distributed Port Group Configuration for VMware Cloud Foundation
Occurs only on
saturation of the active
uplink.
n Notify Switches: Yes
Table 7-22. Default VMkernel Adapters for a Workload Domain per Availability Zone
VMkernel Adapter Service | Connected Port Group | Activated Services | Recommended MTU Size (Bytes)
Table 7-23. vSphere Networking Design Recommendations for VMware Cloud Foundation
VCF-VDS-RCMD-CFG-001
Recommendation: Use a single vSphere Distributed Switch per cluster.
Justification:
n Reduces the complexity of the network design.
n Reduces the size of the fault domain.
Implication: Increases the number of vSphere Distributed Switches that must be managed.

VCF-VDS-RCMD-CFG-002
Recommendation: Configure the MTU size of the vSphere Distributed Switch to 9000 for jumbo frames.
Justification:
n Supports the MTU size required by system traffic types.
n Improves traffic throughput.
Implication: When adjusting the MTU packet size, you must also configure the entire network path (VMkernel ports, virtual switches, physical switches, and routers) to support the same MTU packet size (see the sketch after this table).

VCF-VDS-RCMD-DPG-001
Recommendation: Use ephemeral port binding for the Management VM port group.
Justification: Using ephemeral port binding provides the option for recovery of the vCenter Server instance that is managing the distributed switch.
Implication: Port-level permissions and controls are lost across power cycles, and no historical context is saved.

VCF-VDS-RCMD-DPG-002
Recommendation: Use static port binding for all non-management port groups.
Justification: Static binding ensures a virtual machine connects to the same port on the vSphere Distributed Switch. This allows for historical data and port-level monitoring.
Implication: None.
Table 7-23. vSphere Networking Design Recommendations for VMware Cloud Foundation
(continued)
VCF-VDS-RCMD-NIO-001
Recommendation: Enable Network I/O Control on the vSphere Distributed Switch of the management domain cluster.
Justification: Increases resiliency and performance of the network.
Implication: Network I/O Control might impact network performance for critical traffic types if misconfigured.

VCF-VDS-RCMD-NIO-002
Recommendation: Set the share value for management traffic to Normal.
Justification: By keeping the default setting of Normal, management traffic is prioritized higher than vSphere vMotion but lower than vSAN traffic. Management traffic is important because it ensures that the hosts can still be managed during times of network contention.
Implication: None.

VCF-VDS-RCMD-NIO-003
Recommendation: Set the share value for vSphere vMotion traffic to Low.
Justification: During times of network contention, vSphere vMotion traffic is not as important as virtual machine or storage traffic.
Implication: During times of network contention, vMotion takes longer than usual to complete.

VCF-VDS-RCMD-NIO-004
Recommendation: Set the share value for virtual machines to High.
Justification: Virtual machines are the most important asset in the SDDC. Leaving the default setting of High ensures that they always have access to the network resources they need.
Implication: None.

VCF-VDS-RCMD-NIO-006
Recommendation: Set the share value for other traffic types to Low.
Justification: By default, VMware Cloud Foundation does not use other traffic types, like vSphere FT traffic. Hence, these traffic types can be set to the lowest priority.
Implication: None.
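Related to VCF-VDS-RCMD-CFG-002, the following sketch shows the payload size to use when validating an end-to-end MTU of 9000 with a do-not-fragment ping. The vmkping invocation is a common ESXi troubleshooting pattern; the target address is a placeholder for a VMkernel peer in your environment.

# MTU path validation: the ICMP payload must leave room for the IP and ICMP headers.
MTU = 9000
IP_HEADER = 20
ICMP_HEADER = 8
payload = MTU - IP_HEADER - ICMP_HEADER  # 8972 bytes

print(f"Test payload size for MTU {MTU}: {payload} bytes")
print(f"Example check from an ESXi host: vmkping -d -s {payload} <vmkernel-peer-ip>")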
Chapter 8 NSX Design for VMware Cloud Foundation
In VMware Cloud Foundation, you use NSX for connecting management and customer virtual
machines by using virtual network segments and routing. You also create constructs for solutions
that are deployed for a single VMware Cloud Foundation instance or are available across multiple
VMware Cloud Foundation instances. These constructs provide routing to the data center and
load balancing.
Component Description
NSX Manager n Provides the user interface and the REST API for creating,
configuring, and monitoring NSX components, such as segments,
and Tier-0 and Tier-1 gateways.
n In a deployment with NSX Federation, NSX Manager is called NSX
Local Manager.
NSX Edge nodes n Is a special type of transport node which contains service router
components.
n Provides north-south traffic connectivity between the physical
data center networks and the NSX SDN networks. Each NSX Edge
node has multiple interfaces where traffic flows.
n Can provide east-west traffic flow between virtualized workloads.
They provide stateful services such as load balancers and
DHCP. In a deployment with multiple VMware Cloud Foundation
instances, east-west traffic between the VMware Cloud
Foundation instances flows through the NSX Edge nodes too.
NSX Federation (optional design extension) n Propagates configurations that span multiple NSX instances in
a single VMware Cloud Foundation instance or across multiple
VMware Cloud Foundation instances. You can stretch overlay
segments, activate failover of segment ingress and egress traffic
between VMware Cloud Foundation instances, and implement a
unified firewall configuration.
n In a deployment with multiple VMware Cloud Foundation
instances, you use NSX to provide cross-instance services
to SDDC management components that do not have native
support for availability at several locations, such as VMware Aria
Automation and VMware Aria Operations.
n Connect only workload domains of matching types (management
domain to management domain or VI workload domain to VI
workload domain).
Component Description
NSX Global Manager (Federation only) n Is part of deployments with multiple VMware Cloud Foundation
instances where NSX Federation is required. NSX Global Manager
can connect multiple NSX Local Manager instances under a single
global management plane.
n Provides the user interface and the REST API for creating,
configuring, and monitoring NSX global objects, such as global
virtual network segments, and global Tier-0 and Tier-1 gateways.
n Connected NSX Local Manager instances create the global objects
on the underlying software-defined network that you define
from NSX Global Manager. An NSX Local Manager instance
directly communicates with other NSX Local Manager instances to
synchronize configuration and state needed to implement a global
policy.
n NSX Global Manager is a deployment-time role that you assign to
an NSX Manager appliance.
NSX Manager instance shared between VI n An NSX Manager instance can be shared between up to 14 VI
workload domains workload domains that are part of the same vCenter Single Sign-
On domain.
n VI workload domains sharing an NSX Manager instance must use
the same vSphere cluster life cycle method.
n Using a shared NSX Manager instance reduces resource
requirements for the management domain.
n A single transport zone is shared across all clusters in all VI
workload domains that share the NSX Manager instance.
n The management domain NSX instance cannot be shared.
n Isolated workload domain NSX instances cannot be shared.
NSX Manager Cluster
Single availability zone:
n Three appropriately sized nodes with a virtual IP (VIP) address and an anti-affinity rule to keep them on different hosts.
n vSphere HA protects the cluster nodes applying high restart priority.
Multiple availability zones:
n Three appropriately sized nodes with a VIP address and an anti-affinity rule to keep them on different hosts.
n vSphere HA protects the cluster nodes applying high restart priority.
n A vSphere DRS should-run-on-hosts-in-group rule keeps the NSX Manager VMs in the first availability zone.

NSX Global Manager Cluster (Conditional)
Single availability zone:
n Manually deployed three appropriately sized nodes with a VIP address and an anti-affinity rule to run them on different hosts.
n One active and one standby cluster.
n vSphere HA protects the cluster nodes applying high restart priority.
Multiple availability zones:
n Manually deployed three appropriately sized nodes with a VIP address and an anti-affinity rule to run them on different hosts.
n One active and one standby cluster.
n vSphere HA protects the cluster nodes applying high restart priority.
n A vSphere DRS should-run-on-hosts-in-group rule keeps the NSX Global Manager VMs in the first availability zone.

NSX Edge Cluster
Single availability zone:
n Two appropriately sized NSX Edge nodes with an anti-affinity rule to separate them on different hosts.
n vSphere HA protects the cluster nodes applying high restart priority.
Multiple availability zones:
n Two appropriately sized NSX Edge nodes in the first availability zone with an anti-affinity rule to separate them on different hosts.
n vSphere HA protects the cluster nodes applying high restart priority.
n A vSphere DRS should-run-on-hosts-in-group rule keeps the NSX Edge VMs in the first availability zone.

Transport Nodes
Single availability zone:
n Each ESXi host acts as a host transport node.
n Two edge transport nodes.
Multiple availability zones:
n Each ESXi host acts as a host transport node.
n Two edge transport nodes in the first availability zone.

Transport zones
Single availability zone:
n One VLAN transport zone for north-south traffic.
n Maximum one overlay transport zone for overlay segments per NSX instance.
n One VLAN transport zone for VLAN-backed segments.
Multiple availability zones:
n One VLAN transport zone for north-south traffic.
n Maximum one overlay transport zone for overlay segments per NSX instance.
n One or more VLAN transport zones for VLAN-backed segments.

VLANs and IP subnets allocated to NSX
Both topologies: See VLANs and Subnets for VMware Cloud Foundation. For information about the networks for virtual infrastructure management, see Distributed Port Group Design.

Routing configuration
Single availability zone:
n BGP for a single VMware Cloud Foundation instance.
n In a VMware Cloud Foundation deployment with NSX Federation, BGP with ingress and egress traffic to the first VMware Cloud Foundation instance during normal operating conditions.
Multiple availability zones:
n BGP with path prepend to control ingress traffic and local preference to control egress traffic through the first availability zone during normal operating conditions.
n In a VMware Cloud Foundation deployment with NSX Federation, BGP with ingress and egress traffic to the first instance during normal operating conditions.
For a description of the NSX logical component in this design, see Table 8-1. NSX Logical
Concepts and Components.
Figure 8-1. NSX Logical Design for a Single Instance - Single Availability Zone Topology
n Unified appliances that have both the NSX Local Manager and NSX Controller roles. They
provide management and control plane capabilities.
n NSX Edge nodes in the workload domain that provide advanced services such as load
balancing, and north-south connectivity.
n ESXi hosts in the workload domain that are registered as NSX transport nodes to provide
distributed routing and firewall services to workloads.
Figure 8-2. NSX Logical Design for a Single Instance - Multiple Availability Zone Topology
n Unified appliances that have both the NSX Local Manager and NSX Controller roles. They
provide management and control plane capabilities.
n NSX Edge nodes that provide advanced services such as load balancing, and north-south
connectivity.
n ESXi hosts that are distributed evenly across availability zones in the workload domain and
are registered as NSX transport nodes to provide distributed routing and firewall services to
workloads.
Figure 8-3. NSX Logical Design for a Multiple Instance - Single Availability Zone Topology
n Unified appliances that have both the NSX Local Manager and NSX Controller roles. They
provide management and control plane capabilities.
n NSX Edge nodes that provide advanced services such as load balancing, and north-south
connectivity.
n ESXi hosts in the workload domain that are registered as NSX transport nodes to provide
distributed routing and firewall services to workloads.
n NSX Global Manager cluster in each of the first two VMware Cloud Foundation instances.
You deploy the NSX Global Manager cluster in each VMware Cloud Foundation instance so
that you can use NSX Federation for global management of networking and security services.
Figure 8-4. NSX Logical Design for Multiple Instance - Multiple Availability Zone Topology
n Unified appliances that have both the NSX Local Manager and NSX Controller roles. They
provide management and control plane capabilities.
n NSX Edge nodes that provide advanced services such as load balancing, and north-south
connectivity.
n ESXi hosts that are distributed evenly across availability zones in the workload domain in a
VMware Cloud Foundation instance, and are registered as NSX transport nodes to provide
distributed routing and firewall services to workloads.
n NSX Global Manager cluster in each of the first two VMware Cloud Foundation instances.
You deploy the NSX Global Manager cluster in each VMware Cloud Foundation instance so
that you can use NSX Federation for global management of networking and security services.
When you deploy NSX Manager appliances, either with a local or global scope, you select to
deploy the appliance with a size that is suitable for the scale of your environment. The option
that you select determines the number of CPUs and the amount of memory of the appliance. For
detailed sizing according to the overall profile of the VMware Cloud Foundation instance you plan
to deploy, see VMware Cloud Foundation Planning and Preparation Workbook.
Note To deploy an NSX Manager appliance in the VI workload domain with a size different from
the default one, you must use the API.
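A hypothetical sketch of the API-based size override mentioned in the note above. The endpoint shape and field names below are placeholders to illustrate the idea, not the documented SDDC Manager schema; consult the VMware Cloud Foundation API reference for the actual workload domain specification.

# Hypothetical payload fragment for overriding the NSX Manager appliance size
# when creating a VI workload domain through the API. Field names are assumed.
import json

domain_spec_fragment = {
    "nsxTSpec": {                    # placeholder field name
        "formFactor": "large",       # override of the default appliance size
        "nsxManagerSpecs": ["..."],  # node definitions elided
    }
}
print(json.dumps(domain_spec_fragment, indent=2))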
Table 8-4. NSX Manager Design Requirements for VMware Cloud Foundation
VCF-NSX-LM-REQD-CFG-002
Requirement: Deploy three NSX Manager nodes in the default vSphere cluster in the management domain for configuring and managing the network services for the workload domain.
Justification: Supports high availability of the NSX Manager cluster.
Implication: You must have sufficient resources in the default cluster of the management domain to run three NSX Manager nodes.
Table 8-5. NSX Manager Design Recommendations for VMware Cloud Foundation
VCF-NSX-LM-RCMD-CFG-001
Recommendation: Deploy appropriately sized nodes in the NSX Manager cluster for the workload domain.
Justification: Ensures resource availability and usage efficiency per workload domain.
Implication: The default size for a management domain is Medium, and for VI workload domains is Large.

VCF-NSX-LM-RCMD-CFG-002
Recommendation: Create a virtual IP (VIP) address for the NSX Manager cluster for the workload domain.
Justification: Provides high availability of the user interface and API of NSX Manager.
Implication:
n The VIP address feature provides high availability only. It does not load-balance requests across the cluster.
n When using the VIP address feature, all NSX Manager nodes must be deployed on the same Layer 2 network.
Table 8-5. NSX Manager Design Recommendations for VMware Cloud Foundation (continued)
VCF-NSX-LM-RCMD-CFG-003
Recommendation: Apply VM-VM anti-affinity rules in vSphere Distributed Resource Scheduler (vSphere DRS) to the NSX Manager appliances.
Justification: Keeps the NSX Manager appliances running on different ESXi hosts for high availability.
Implication: You must allocate at least four physical hosts so that the three NSX Manager appliances continue running if an ESXi host failure occurs.

VCF-NSX-LM-RCMD-CFG-004
Recommendation: In vSphere HA, set the restart priority policy for each NSX Manager appliance to high.
Justification:
n NSX Manager implements the control plane for virtual network segments. vSphere HA restarts the NSX Manager appliances first so that other virtual machines that are being powered on or migrated by using vSphere vMotion while the control plane is offline lose connectivity only until the control plane quorum is re-established.
n Setting the restart priority to high reserves the highest priority for flexibility for adding services that must be started before NSX Manager.
Implication: If the restart priority for another management appliance is set to highest, the connectivity delay for management appliances will be longer.
Table 8-6. NSX Manager Design Recommendations for Stretched Clusters in VMware Cloud
Foundation
You configure the NSX Global Manager cluster in an optimal way, such as the number and size of the nodes and high availability, on a standard or stretched management cluster.
Table 8-7. NSX Global Manager Design Requirements for VMware Cloud Foundation
Table 8-8. NSX Global Manager Design Recommendations for VMware Cloud Foundation
VCF-NSX-GM-RCMD-CFG-001
Recommendation: Deploy three NSX Global Manager nodes for the workload domain to support NSX Federation across VMware Cloud Foundation instances.
Justification: Provides high availability for the NSX Global Manager cluster.
Implication: You must have sufficient resources in the default cluster of the management domain to run three NSX Global Manager nodes.

VCF-NSX-GM-RCMD-CFG-003
Recommendation: Create a virtual IP (VIP) address for the NSX Global Manager cluster for the workload domain.
Justification: Provides high availability of the user interface and API of NSX Global Manager.
Implication:
n The VIP address feature provides high availability only. It does not load-balance requests across the cluster.
n When using the VIP address feature, all NSX Global Manager nodes must be deployed on the same Layer 2 network.
Table 8-8. NSX Global Manager Design Recommendations for VMware Cloud Foundation
(continued)
VCF-NSX-GM-RCMD-CFG-004
Recommendation: Apply VM-VM anti-affinity rules in vSphere DRS to the NSX Global Manager appliances.
Justification: Keeps the NSX Global Manager appliances running on different ESXi hosts for high availability.
Implication: You must allocate at least four physical hosts so that the three NSX Manager appliances continue running if an ESXi host failure occurs.

VCF-NSX-GM-RCMD-CFG-005
Recommendation: In vSphere HA, set the restart priority policy for each NSX Global Manager appliance to medium.
Justification:
n NSX Global Manager implements the management plane for global segments and firewalls. NSX Global Manager is not required for control plane and data plane connectivity.
n Setting the restart priority to medium reserves the high priority for services that impact the NSX control or data planes.
Implication:
n Management of NSX global components will be unavailable until the NSX Global Manager virtual machines restart.
n The NSX Global Manager cluster is deployed in the management domain, where the total number of virtual machines is limited and where it competes with other management components for restart priority.
Table 8-8. NSX Global Manager Design Recommendations for VMware Cloud Foundation
(continued)
VCF-NSX-GM-RCMD-CFG-007
Recommendation: Set the NSX Global Manager cluster in the second VMware Cloud Foundation instance as standby for the workload domain.
Justification: Enables recoverability of NSX Global Manager in the second VMware Cloud Foundation instance if a failure in the first instance occurs.
Implication: Must be done manually.

Table 8-9. NSX Global Manager Design Recommendations for Stretched Clusters in VMware Cloud Foundation

VCF-NSX-GM-RCMD-CFG-008
Recommendation: Add the NSX Global Manager appliances to the virtual machine group for the first availability zone.
Justification: Ensures that, by default, the NSX Global Manager appliances are powered on a host in the primary availability zone.
Implication: Done automatically by VMware Cloud Foundation when stretching a cluster.
Deployment Model for the NSX Edge Nodes for VMware Cloud
Foundation
For NSX Edge nodes, you determine the form factor, number of nodes and placement according
to the requirements for network services in a VMware Cloud Foundation workload domain.
An NSX Edge node is an appliance that provides centralized networking services which cannot
be distributed to hypervisors, such as load balancing, NAT, VPN, and physical network uplinks.
Some services, such as Tier-0 gateways, are limited to a single instance per NSX Edge node.
However, most services can coexist in these nodes.
NSX Edge nodes are grouped in one or more edge clusters, representing a pool of capacity for
NSX services.
An NSX Edge node can be deployed as a virtual appliance, or installed on bare-metal hardware.
The edge node on bare-metal hardware can have better performance capabilities at the expense
of more difficult deployment and limited deployment topology use cases. For details on the
trade-offs of using virtual or bare-metal NSX Edges, see the NSX documentation.
NSX Edge virtual appliance deployed by using SDDC Manager
Benefits:
n Deployment and life cycle management by using SDDC Manager workflows that call NSX Manager.
n Automated password management by using SDDC Manager.
n Benefits from vSphere HA recovery.
n Can be used across availability zones.
n Easy to scale up by modifying the specification of the virtual appliance.
Drawbacks:
n Might not provide best performance in individual customer scenarios.

NSX Edge virtual appliance deployed by using NSX Manager
Benefits:
n Benefits from vSphere HA recovery.
n Can be used across availability zones.
n Easy to scale up by modifying the specification of the virtual appliance.
Drawbacks:
n Might not provide best performance in individual customer scenarios.
n Manually deployed by using NSX Manager.
n Manual password management by using NSX Manager.
n Cannot be used to support Application Virtual Networks (AVNs) in the management domain.

Bare-metal NSX Edge appliance
Benefits:
n Might provide better performance in individual customer scenarios.
Drawbacks:
n Has hardware compatibility requirements.
n Requires individual hardware life cycle management and monitoring of failures, firmware, and drivers.
n Manual password management.
n Must be manually deployed and connected to the environment.
n Requires manual recovery after hardware failure.
n Requires deploying a bare-metal NSX Edge appliance in each availability zone for network failover.
n Deploying a bare-metal edge in each availability zone requires considering asymmetric routing.
n Requires edge fault domains if more than one edge is deployed in each availability zone for Active/Standby Tier-0 or Tier-1 gateways.
For detailed sizing according to the overall profile of the VMware Cloud Foundation instance you
plan to deploy, see VMware Cloud Foundation Planning and Preparation Workbook.
Network Design for the NSX Edge Nodes for VMware Cloud
Foundation
In each VMware Cloud Foundation instance, you implement an NSX Edge configuration with a
single N-VDS. You connect the uplink network interfaces of the edge appliance to VLAN trunk
port groups that are connected to particular physical NICs on the host.
If you plan to deploy multiple VMware Cloud Foundation instances, apply the same network
design to the NSX Edge cluster in the second and other additional VMware Cloud Foundation
instances.
[Figure: NSX Edge network design. The edge N-VDS on the ESXi host connects through VLAN trunk port groups to vmnic0 and vmnic1, which uplink to the ToR switches.]
Uplink Policy Design for the NSX Edge Nodes for VMware Cloud Foundation
A transport node can participate in an overlay and VLAN network. Uplink profiles define policies
for the links from the NSX Edge transport nodes to top of rack switches. Uplink profiles are
containers for the properties or capabilities for the network adapters. Uplink profiles are applied
to the N-VDS of the edge node.
Uplink profiles can use either load balance source or failover order teaming. If using load balance
source, multiple uplinks can be active. If using failover order, only a single uplink can be active.
Teaming can be configured by using the default teaming policy or a user-defined named teaming
policy. You can use named teaming policies to pin traffic segments to designated edge uplinks.
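The following sketch models the uplink policy described above as plain data: a default load balance source teaming policy plus named failover order policies that pin each uplink VLAN segment to a single edge uplink. Profile names, VLAN IDs, and policy labels are illustrative assumptions, not required values.

# Illustrative model of an NSX Edge uplink profile with named teaming policies.
edge_uplink_profile = {
    "name": "edge-uplink-profile",                      # example profile name
    "transport_vlan": 2005,                             # example edge overlay VLAN ID
    "teamings": {
        "default": {"policy": "load_balance_source",    # multiple active uplinks
                    "active_uplinks": ["uplink-1", "uplink-2"]},
        "uplink1-only": {"policy": "failover_order",    # pin uplink VLAN 1 traffic
                         "active_uplinks": ["uplink-1"]},
        "uplink2-only": {"policy": "failover_order",    # pin uplink VLAN 2 traffic
                         "active_uplinks": ["uplink-2"]},
    },
}

# The named policies are then referenced by the VLAN segments used for the
# Tier-0 uplinks so that each peering VLAN consistently uses one physical path.
for name, teaming in edge_uplink_profile["teamings"].items():
    print(name, "->", teaming["active_uplinks"])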
Table 8-12. NSX Edge Design Requirements for VMware Cloud Foundation
VCF-NSX-EDGE-REQD-CFG-003
Requirement: Use a dedicated VLAN for edge overlay that is different from the host overlay VLAN.
Justification: A dedicated edge overlay network provides support for edge mobility in support of advanced deployments such as multiple availability zones or multi-rack clusters.
Implication:
n You must have routing between the VLANs for edge overlay and host overlay.
n You must allocate another VLAN in the data center infrastructure for edge overlay.

Table 8-13. NSX Edge Design Requirements for NSX Federation in VMware Cloud Foundation

VCF-NSX-EDGE-REQD-CFG-005
Requirement: Allocate a separate VLAN for edge RTEP overlay that is different from the edge overlay VLAN.
Justification: The RTEP network must be on a VLAN that is different from the edge overlay VLAN. This is an NSX requirement that provides support for configuring different MTU size per network.
Implication: You must allocate another VLAN in the data center infrastructure.
Table 8-14. NSX Edge Design Recommendations for VMware Cloud Foundation
VCF-NSX-EDGE-RCMD-CFG-001
Recommendation: Use appropriately sized NSX Edge virtual appliances.
Justification: Ensures resource availability and usage efficiency per workload domain.
Implication: You must provide sufficient compute resources to support the chosen appliance size.

VCF-NSX-EDGE-RCMD-CFG-002
Recommendation: Deploy the NSX Edge virtual appliances to the default vSphere cluster of the workload domain, sharing the cluster between the workloads and the edge appliances.
Justification: Simplifies the configuration and minimizes the number of ESXi hosts required for initial deployment.
Implication: Workloads and NSX Edges share the same compute resources.

VCF-NSX-EDGE-RCMD-CFG-003
Recommendation: Deploy two NSX Edge appliances in an edge cluster in the default vSphere cluster of the workload domain.
Justification: Creates the minimum size NSX Edge cluster while satisfying the requirements for availability.
Implication: For a VI workload domain, additional edge appliances might be required to satisfy increased bandwidth requirements.

VCF-NSX-EDGE-RCMD-CFG-004
Recommendation: Apply VM-VM anti-affinity rules for vSphere DRS to the virtual machines of the NSX Edge cluster.
Justification: Keeps the NSX Edge nodes running on different ESXi hosts for high availability.
Implication: None.

Table 8-14. NSX Edge Design Recommendations for VMware Cloud Foundation (continued)

VCF-NSX-EDGE-RCMD-CFG-005
Recommendation: In vSphere HA, set the restart priority policy for each NSX Edge appliance to high.
Justification:
n The NSX Edge nodes are part of the north-south data path for overlay segments. vSphere HA restarts the NSX Edge appliances first to minimize the time an edge VM is offline.
n Setting the restart priority to high reserves highest for future needs.
Implication: If the restart priority for another VM in the cluster is set to highest, the connectivity delays for edge appliances will be longer.
Table 8-15. NSX Edge Design Recommendations for Stretched Clusters in VMware Cloud
Foundation
BGP routing is the routing option recommended for VMware Cloud Foundation.
North-South Routing
The routing design considers different levels of routing in the environment, such as number and
type of gateways in NSX, dynamic routing protocol, and others.
Table 8-17. Considerations for the Operating Model for North-South Service Routers
North-South Service Router Operating Model | Description | Benefits | Drawbacks
Figure 8-6. BGP North-South Routing for VMware Cloud Foundation Instances with a Single
Availability Zone
Figure 8-7. BGP North-South Routing for VMware Cloud Foundation Instances with Multiple
Availability Zones
Local egress allows traffic to leave any location that the network spans. The use of local-egress would require controlling local-ingress to prevent asymmetrical routing. This design does not use local-egress. Instead, this design uses preferred and failover VMware Cloud Foundation instances for all networks.
Figure 8-8. BGP North-South Routing for VMware Cloud Foundation Instances with NSX
Federation
Each VMware Cloud Foundation instance that is in the scope of a Tier-0 gateway can be
configured as primary or secondary. A primary instance passes traffic for any other SDN service
such as Tier-0 logical segments or Tier-1 gateways. A secondary instance routes traffic locally but
does not egress traffic outside the SDN or advertise networks in the data center.
When deploying an additional VMware Cloud Foundation instance, the Tier-0 gateway in the first
instance is extended to the new instance.
In this design, the Tier-0 gateway in each VMware Cloud Foundation instance is configured as
primary. Although the Tier-0 gateway technically supports local-egress, the design does not
recommend the use of local-egress. Ingress and egress traffic is controlled at the Tier-1 gateway
level.
Each VMware Cloud Foundation instance has its own NSX Edge cluster with associated uplink
VLANs for north-south traffic flow for that instance. The Tier-0 gateway in each instance peers
with the top of rack switches over eBGP.
Figure 8-9. BGP Peering to Top of Rack Switches for VMware Cloud Foundation Instances with
NSX Federation
[Figure: In each VMware Cloud Foundation instance, the NSX Edge cluster peers with the data center network over eBGP from the edge nodes to the top of rack switches. The Tier-0 gateway is Active/Active and primary in both locations, with iBGP inter-SR routing between the edge nodes. No segments are attached to the Tier-0 gateway; global and local segments are attached to Tier-1 gateways that are primary at their respective locations.]
Any logical segments connected to the Tier-1 gateway follow the span of the Tier-1 gateway. If
the Tier-1 gateway spans several VMware Cloud Foundation instances, any segments connected
to that gateway become available in both instances.
Using a Tier-1 gateway enables more granular control over logical segments in the first and second VMware Cloud Foundation instances. You use three Tier-1 gateways - one in each VMware Cloud Foundation instance for segments that are local to the instance, and one for segments which span the two instances.
Table 8-18. Location Configuration of the Tier-1 Gateways for Multiple VMware Cloud Foundation
Instances
First VMware Cloud Second VMware Cloud
Tier-1 Gateway Foundation Instance Foundation Instance Ingress and Egress Traffic
The Tier-1 gateway advertises its networks to the connected local-instance unit of the Tier-0
gateway. In the case of primary-secondary location configuration, the Tier-1 gateway advertises
its networks only to the Tier-0 gateway unit in the location where the Tier-1 gateway is primary.
The Tier-0 gateway unit then re-advertises those networks to the data center in the sites
where that Tier-1 gateway is primary. During failover of the components in the first VMware
Cloud Foundation instance, an administrator must manually set the Tier-1 gateway in the second
VMware Cloud Foundation instance as primary. Then, networks become advertised through the
Tier-1 gateway unit in the second instance.
In a Multiple Instance-Multiple Availability Zone topology, the same Tier-0 and Tier-1 gateway
architecture applies. The ESXi transport nodes from the second availability zone are also
attached to the Tier-1 gateway as per the Figure 8-7. BGP North-South Routing for VMware
Cloud Foundation Instances with Multiple Availability Zones design.
BGP Routing
The BGP routing design has the following characteristics:
n Is a proven protocol that is designed for peering between networks under independent
administrative control - data center networks and the NSX SDN.
Note These design recommendations do not include BFD. However, if faster convergence than
BGP timers is required, you must enable BFD on the physical network and also on the NSX Tier-0
gateway.
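A conceptual sketch of the eBGP peering used in this design: the Tier-0 gateway peers with both top of rack switches over the two uplink VLANs and uses ECMP across the resulting paths, with BFD left to the physical fabric design. All ASNs, VLAN IDs, and addresses are examples.

# Illustrative model of the Tier-0 gateway BGP configuration in this design.
tier0_bgp = {
    "local_asn": 65003,            # example SDDC ASN
    "ecmp": True,                  # equal-cost routing across both uplink VLANs
    "bfd_enabled": False,          # enable only if the physical fabric also runs BFD
    "neighbors": [
        {"peer_ip": "172.16.11.1", "remote_asn": 65001, "uplink_vlan": 2011},
        {"peer_ip": "172.16.12.1", "remote_asn": 65001, "uplink_vlan": 2012},
    ],
}

for neighbor in tier0_bgp["neighbors"]:
    print(f"eBGP peer {neighbor['peer_ip']} (AS {neighbor['remote_asn']}) "
          f"on uplink VLAN {neighbor['uplink_vlan']}")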
Table 8-19. BGP Routing Design Requirements for VMware Cloud Foundation
VCF-NSX-BGP-REQD-CFG-001
Requirement: To enable ECMP between the Tier-0 gateway and the Layer 3 devices (ToR switches or upstream devices), create two VLANs. The ToR switches or upstream Layer 3 devices have an SVI on one of the two VLANs, and each edge node in the cluster has an interface on each VLAN.
Justification: Supports multiple equal-cost routes on the Tier-0 gateway and provides more resiliency and better bandwidth use in the network.
Implication: Additional VLANs are required.
Table 8-19. BGP Routing Design Requirements for VMware Cloud Foundation (continued)
VCF-NSX-BGP-REQD-CFG-003
Requirement: Create a VLAN transport zone for edge uplink traffic.
Justification: Enables the configuration of VLAN segments on the N-VDS in the edge nodes.
Implication: Additional VLAN transport zones might be required if the edge nodes are not connected to the same top of rack switch pair.

VCF-NSX-BGP-REQD-CFG-004
Requirement: Deploy a Tier-1 gateway and connect it to the Tier-0 gateway.
Justification: Creates a two-tier routing architecture. Abstracts the NSX logical components which interact with the physical data center from the logical components which provide SDN services.
Implication: A Tier-1 gateway can only be connected to a single Tier-0 gateway. In cases where multiple Tier-0 gateways are required, you must create multiple Tier-1 gateways.
Table 8-20. BGP Routing Design Requirements for Stretched Clusters in VMware Cloud
Foundation
VCF-NSX-BGP-REQD-CFG-006
Requirement: Extend the uplink VLANs to the top of rack switches so that the VLANs are stretched between both availability zones.
Justification: Because the NSX Edge nodes will fail over between the availability zones, ensures uplink connectivity to the top of rack switches in both availability zones regardless of the zone the NSX Edge nodes are presently in.
Implication: You must configure a stretched Layer 2 network between the availability zones by using physical network infrastructure.

VCF-NSX-BGP-REQD-CFG-007
Requirement: Provide this SVI configuration on the top of rack switches:
n In the second availability zone, configure the top of rack switches or upstream Layer 3 devices with an SVI on each of the two uplink VLANs.
n Make the top of rack switch SVI in both availability zones part of a common stretched Layer 2 network between the availability zones.
Justification: Enables the communication of the NSX Edge nodes to the top of rack switches in both availability zones over the same uplink VLANs.
Implication: You must configure a stretched Layer 2 network between the availability zones by using the physical network infrastructure.

VCF-NSX-BGP-REQD-CFG-008
Requirement: Provide this VLAN configuration:
n Use two VLANs to enable ECMP between the Tier-0 gateway and the Layer 3 devices (top of rack switches or leaf switches).
n The ToR switches or upstream Layer 3 devices have an SVI to one of the two VLANs and each NSX Edge node has an interface to each VLAN.
Justification: Supports multiple equal-cost routes on the Tier-0 gateway, and provides more resiliency and better bandwidth use in the network.
Implication:
n Extra VLANs are required.
n Requires stretching uplink VLANs between availability zones.
Table 8-20. BGP Routing Design Requirements for Stretched Clusters in VMware Cloud
Foundation (continued)
VCF-NSX-BGP-REQD-CFG-009
Requirement: Create an IP prefix list that permits access to route advertisement by any network instead of using the default IP prefix list.
Justification: Used in a route map to prepend a path to one or more autonomous systems (AS-path prepend) for BGP neighbors in the second availability zone.
Implication: You must manually create an IP prefix list that is identical to the default one.

VCF-NSX-BGP-REQD-CFG-010
Requirement: Create a route map-out that contains the custom IP prefix list and an AS-path prepend value set to the Tier-0 local AS added twice.
Justification:
n Used for configuring neighbor relationships with the Layer 3 devices in the second availability zone.
n Ensures that all ingress traffic passes through the first availability zone.
Implication: You must manually create the route map. The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or in the availability zone occurs.

VCF-NSX-BGP-REQD-CFG-011
Requirement: Create an IP prefix list that permits access to route advertisement by network 0.0.0.0/0 instead of using the default IP prefix list.
Justification: Used in a route map to configure local-preference on the learned default route for BGP neighbors in the second availability zone.
Implication: You must manually create an IP prefix list that is identical to the default one.
Table 8-20. BGP Routing Design Requirements for Stretched Clusters in VMware Cloud
Foundation (continued)
VCF-NSX-BGP-REQD-CFG-012
Requirement: Apply a route map-in that contains the IP prefix list for the default route 0.0.0.0/0 and assign a lower local-preference, for example, 80, to the learned default route and a lower local-preference, for example, 90, to any other learned routes.
Justification:
n Used for configuring neighbor relationships with the Layer 3 devices in the second availability zone.
n Ensures that all egress traffic passes through the first availability zone.
Implication: You must manually create the route map. The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or in the availability zone occurs.

VCF-NSX-BGP-REQD-CFG-013
Requirement: Configure the neighbors of the second availability zone to use the route maps as In and Out filters respectively.
Justification: Makes the path in and out of the second availability zone less preferred because the AS path is longer and the local preference is lower. As a result, all traffic passes through the first zone (see the sketch after this table).
Implication: The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or in the availability zone occurs.
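A minimal sketch of why the route maps in this table keep traffic in the first availability zone: BGP prefers the highest local preference for egress, and the AS-path prepend toward the second availability zone makes that ingress path longer and therefore less preferred. The ASNs and preference values mirror the examples above but are otherwise illustrative.

# Simplified BGP best-path selection on local preference, then AS-path length.
def preferred_route(routes):
    """Pick the route BGP would prefer for these attributes."""
    return max(routes, key=lambda r: (r["local_pref"], -len(r["as_path"])))

# Egress: the default route learned via AZ2 gets local preference 80 (route map-in).
egress_candidates = [
    {"via": "AZ1 ToR", "local_pref": 100, "as_path": [65001]},
    {"via": "AZ2 ToR", "local_pref": 80,  "as_path": [65001]},
]
print(preferred_route(egress_candidates)["via"])   # -> AZ1 ToR

# Ingress: routes advertised toward AZ2 carry the Tier-0 local AS prepended twice.
ingress_candidates = [
    {"via": "AZ1 edge uplinks", "local_pref": 100, "as_path": [65003]},
    {"via": "AZ2 edge uplinks", "local_pref": 100, "as_path": [65003, 65003, 65003]},
]
print(preferred_route(ingress_candidates)["via"])  # -> AZ1 edge uplinks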
Table 8-21. BGP Routing Design Requirements for NSX Federation in VMware Cloud Foundation
VCF-NSX-BGP-REQD-CFG-014
Requirement: Extend the Tier-0 gateway to the second VMware Cloud Foundation instance.
Justification:
n Supports ECMP north-south routing on all nodes in the NSX Edge cluster.
n Enables support for cross-instance Tier-1 gateways and cross-instance network segments.
Implication: The Tier-0 gateway deployed in the second instance is removed.
Table 8-21. BGP Routing Design Requirements for NSX Federation in VMware Cloud Foundation
(continued)
VCF-NSX-BGP-REQD-CFG-018
Requirement: Assign the NSX Edge cluster in each VMware Cloud Foundation instance to the stretched Tier-1 gateway. Set the first VMware Cloud Foundation instance as primary and the second instance as secondary.
Justification:
n Enables cross-instance network span between the first and second VMware Cloud Foundation instances.
n Enables deterministic ingress and egress traffic for the cross-instance network.
n If a VMware Cloud Foundation instance failure occurs, enables deterministic failover of the Tier-1 traffic flow.
n During the recovery of the inaccessible VMware Cloud Foundation instance, enables deterministic failback of the Tier-1 traffic flow, preventing unintended asymmetrical routing.
n Eliminates the need to use BGP attributes in the first and second VMware Cloud Foundation instances to influence location preference and failover.
Implication: You must manually fail over and fail back the cross-instance network from the standby NSX Global Manager.

VCF-NSX-BGP-REQD-CFG-019
Requirement: Assign the NSX Edge cluster in each VMware Cloud Foundation instance to the local Tier-1 gateway for that VMware Cloud Foundation instance.
Justification:
n Enables instance-specific networks to be isolated to their specific instances.
n Enables deterministic flow of ingress and egress traffic for the instance-specific networks.
Implication: You can use the service router that is created for the Tier-1 gateway for networking services. However, such configuration is not required for network connectivity.

VCF-NSX-BGP-REQD-CFG-020
Requirement: Set each local Tier-1 gateway only as primary in that instance. Avoid setting the gateway as secondary in the other instances.
Justification: Prevents the need to use BGP attributes in primary and secondary instances to influence the instance ingress-egress preference.
Implication: None.
Table 8-22. BGP Routing Design Recommendations for VMware Cloud Foundation
VCF-NSX-BGP-RCMD-CFG-002
Recommendation: Configure the BGP Keep Alive Timer to 4 and Hold Down Timer to 12 or lower between the top of rack switches and the Tier-0 gateway.
Justification: Provides a balance between failure detection between the top of rack switches and the Tier-0 gateway, and overburdening the top of rack switches with keep-alive traffic.
Implication: By using longer timers to detect if a router is not responding, the data about such a router remains in the routing table longer. As a result, the active router continues to send traffic to a router that is down. These timers must be aligned with the data center fabric design of your organization.
Table 8-23. BGP Routing Design Recommendations for NSX Federation in VMware Cloud
Foundation
Virtual Segments
Geneve provides the overlay capability to create isolated, multi-tenant broadcast domains in NSX
across data center fabrics, and enables customers to create elastic, logical networks that span
physical network boundaries, and physical locations.
Transport Zones
A transport zone identifies the type of traffic, VLAN or overlay, and the vSphere Distributed
Switch name. You can configure one or more VLAN transport zones and a single overlay
transport zone per virtual switch. A transport zone does not represent a security boundary.
VMware Cloud Foundation supports a single overlay transport zone per NSX instance. All vSphere clusters, within and across workload domains, that share the same NSX instance subsequently share the same overlay transport zone.
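As a quick check of this constraint, the sketch below lists the transport zones of an NSX instance and verifies that only one overlay transport zone exists. The /api/v1/transport-zones endpoint and the transport_type field come from the NSX Manager API, but confirm them against your NSX version; the hostname and credentials are examples.

```python
# Hedged sketch: verify a single overlay transport zone per NSX instance.
import requests

NSX = "https://nsx-mgmt.example.com"
resp = requests.get(f"{NSX}/api/v1/transport-zones",
                    auth=("admin", "example-password"), verify=False)
resp.raise_for_status()

overlay = [tz for tz in resp.json().get("results", [])
           if tz.get("transport_type") == "OVERLAY"]
print("Overlay transport zones:", [tz.get("display_name") for tz in overlay])
assert len(overlay) == 1, "Expected a single overlay transport zone per NSX instance"
```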
Uplink profiles can use either load balance source or failover order teaming. If using load balance
source, multiple uplinks can be active. If using failover order, only a single uplink can be active.
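The following sketch shows how an uplink profile with a failover order teaming policy might be defined through the NSX Manager API. The endpoint, the UplinkHostSwitchProfile schema, the uplink names, and the transport VLAN are assumptions; validate the field names and values against your NSX version before use.

```python
# Hedged sketch: uplink profile with failover order teaming (single active uplink).
import requests

NSX = "https://nsx-mgmt.example.com"
profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "edge-uplink-profile-failover",
    "teaming": {
        # "FAILOVER_ORDER" keeps a single uplink active; "LOADBALANCE_SRCID" would allow
        # multiple active uplinks, matching the load balance source option described above.
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
        "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
    },
    "transport_vlan": 1647,  # assumed host/edge TEP VLAN ID
}
requests.post(f"{NSX}/api/v1/host-switch-profiles", json=profile,
              auth=("admin", "example-password"), verify=False)
```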
Hierarchical Two-Tier: The ESXi host transport nodes are grouped according to their TEP IP subnet. One ESXi host in each subnet is responsible for replication to an ESXi host in another subnet. The receiving ESXi host replicates the traffic to the ESXi hosts in its local subnet. The source ESXi host transport node knows about the groups based on information it has received from the control plane. The system can select an arbitrary ESXi host transport node as the mediator for the source subnet if the remote mediator ESXi host node is available.
Head-End: The ESXi host transport node at the origin of the frame to be flooded on a segment sends a copy to every other ESXi host transport node that is connected to this segment.
Table 8-25. Overlay Design Requirements for VMware Cloud Foundation (continued)
VCF-NSX-OVERLAY-REQD-CFG-004
Design Requirement: Create a single overlay transport zone in the NSX instance for all overlay traffic across the host and NSX Edge transport nodes of the workload domain.
Justification: Ensures that overlay segments are connected to an NSX Edge node for services and north-south routing. Ensures that all segments are available to all ESXi hosts and NSX Edge nodes configured as transport nodes.
Implication: All clusters in all workload domains that share the same NSX Manager share the same transport zone.
Table 8-27. Overlay Design Recommendations for Stretched Clusters in VMware Cloud Foundation
VCF-NSX-OVERLAY-RCMD-CFG-003
Design Recommendation: Configure an NSX sub-transport node profile.
Justification:
- You can use static IP pools for the host TEPs in each availability zone.
- The NSX transport node profile can remain attached when using two separate VLANs for host TEPs at each availability zone, as required for clusters that are based on vSphere Lifecycle Manager images.
- Using an external DHCP server for the host overlay VLANs in both availability zones is not required.
Implication: Changes to the host transport node configuration are done at the vSphere cluster level.
Overlay-backed NSX segments
Benefits:
- Supports IP mobility with dynamic routing.
- Limits the number of VLANs needed in the data center fabric.
- In an environment with multiple availability zones, limits the number of VLANs needed to expand from an architecture with one availability zone to an architecture with two availability zones.
Requirement: Requires routing between the data center fabric and the NSX Edge nodes.
By contrast, a VLAN-backed segment uses the data center fabric for the network segment and the next-hop gateway.
(Figure: application virtual networks. The cross-instance and local-instance Tier-1 gateways connect to the ToR switches through ECMP, and each serves its own IP subnet and NSX segment. VMware Aria Suite Lifecycle, the clustered Workspace ONE Access, and VMware Aria Suite components with cross-instance mobility attach to the cross-instance segment, while local-instance VMware Aria Suite components attach to the local-instance segment.)
For the design for specific VMware Aria Suite components, see this design and VMware
Validated Solutions. For identity and access management design for NSX, see Identity and
Access Management for VMware Cloud Foundation.
Important If you plan to use NSX Federation in the management domain, create the AVNs
before you enable the federation. Creating AVNs in an environment where NSX Federation is
already active is not supported.
With NSX Federation, an NSX segment can span multiple instances of NSX and VMware Cloud
Foundation. A single network segment can be available in different physical locations over
the NSX SDN. In an environment with multiple VMware Cloud Foundation instances, the cross-
instance NSX network in the management domain is extended between the first two instances.
This configuration provides IP mobility for management components which fail over from the first
to the second instance.
Table 8-29. Application Virtual Network Design Requirements for VMware Cloud Foundation
VCF-NSX-AVN-REQD-CFG-001
Design Requirement: Create one cross-instance NSX segment for the components of a VMware Aria Suite application or another solution that requires mobility between VMware Cloud Foundation instances.
Justification: Prepares the environment for the deployment of solutions on top of VMware Cloud Foundation, such as VMware Aria Suite, without a complex physical network configuration. The components of the VMware Aria Suite application must be easily portable between VMware Cloud Foundation instances without requiring reconfiguration.
Implication: Each NSX segment requires a unique IP address space.
VCF-NSX-AVN-REQD-CFG-002
Design Requirement: Create one or more local-instance NSX segments for the components of a VMware Aria Suite application or another solution that are assigned to a specific VMware Cloud Foundation instance.
Justification: Prepares the environment for the deployment of solutions on top of VMware Cloud Foundation, such as VMware Aria Suite, without a complex physical network configuration.
Implication: Each NSX segment requires a unique IP address space.
Table 8-30. Application Virtual Network Design Requirements for NSX Federation in VMware
Cloud Foundation
VCF-NSX-AVN-REQD-CFG-003
Design Requirement: Extend the cross-instance NSX segment to the second VMware Cloud Foundation instance.
Justification: Enables workload mobility without a complex physical network configuration. The components of a VMware Aria Suite application must be easily portable between VMware Cloud Foundation instances without requiring reconfiguration.
Implication: Each NSX segment requires a unique IP address space.
VCF-NSX-AVN-REQD-CFG-004
Design Requirement: In each VMware Cloud Foundation instance, create additional local-instance NSX segments.
Justification: Enables workload mobility within a VMware Cloud Foundation instance without complex physical network configuration. Each VMware Cloud Foundation instance should have network segments to support workloads which are isolated to that VMware Cloud Foundation instance.
Implication: Each NSX segment requires a unique IP address space.
Table 8-31. Application Virtual Network Design Recommendations for VMware Cloud Foundation
A standalone Tier-1 gateway is created to provide load balancing services with a service interface
on the cross-instance application virtual network.
Figure 8-12. NSX Logical Load Balancing Design for VMware Cloud Foundation
(Figure: the NSX load balancer service runs on a standalone NSX Tier-1 gateway hosted on the NSX Edge cluster in the management cluster, alongside the NSX Manager cluster, the NSX transport nodes, and the supporting infrastructure.)
Table 8-32. Load Balancing Design Requirements for VMware Cloud Foundation
VCF-NSX-LB-REQD-CFG-002
Design Requirement: When creating load balancing services for Application Virtual Networks, connect the standalone Tier-1 gateway to the cross-instance NSX segments.
Justification: Provides load balancing to applications connected to the cross-instance network.
Implication: You must connect the gateway to each network that requires load balancing.
Table 8-33. Load Balancing Design Requirements for NSX Federation in VMware Cloud
Foundation
VCF-NSX-LB-REQD-CFG-005
Design Requirement: Connect the standalone Tier-1 gateway in the second VMware Cloud Foundation instance to the cross-instance NSX segment.
Justification: Provides load balancing to applications connected to the cross-instance network in the second VMware Cloud Foundation instance.
Implication: You must connect the gateway to each network that requires load balancing.
n SDDC Manager Design Requirements and Recommendations for VMware Cloud Foundation
n Life cycle management of the virtual infrastructure components in all workload domains and
of VMware Aria Suite Lifecycle
n Certificate management
n Backup configuration
Single availability zone:
- A single SDDC Manager appliance is deployed on the management network.
- vSphere HA protects the SDDC Manager appliance.
Multiple availability zones:
- A single SDDC Manager appliance is deployed on the management network.
- vSphere HA protects the SDDC Manager appliance.
- A vSphere DRS rule specifies that the SDDC Manager appliance should run on an ESXi host in the first availability zone.
Table 9-2. SDDC Manager Design Requirements for VMware Cloud Foundation
Table 9-3. SDDC Manager Design Recommendations for VMware Cloud Foundation
VCF-SDDCMGR-RCMD-CFG-001
Design Recommendation: Connect SDDC Manager to the Internet for downloading software bundles.
Justification: SDDC Manager must be able to download install and upgrade software bundles for deployment of VI workload domains and solutions, and for upgrade from a repository.
Implication: The rules of your organization might not permit direct access to the Internet. In this case, you must download software bundles for SDDC Manager manually.
VCF-SDDCMGR-RCMD-CFG-002
Design Recommendation: Configure a network proxy to connect SDDC Manager to the Internet.
Justification: To protect SDDC Manager against external attacks from the Internet.
Implication: The proxy must not use authentication because SDDC Manager does not support proxy with authentication.
VCF-SDDCMGR-RCMD-CFG-003
Design Recommendation: Configure SDDC Manager with a VMware Customer Connect account with VMware Cloud Foundation entitlement to check for and download software bundles.
Justification: Software bundles for VMware Cloud Foundation are stored in a repository that is secured with access controls. Sites without an internet connection can use the local upload option instead.
Implication: Requires the use of a VMware Customer Connect user account with access to VMware Cloud Foundation licensing.
You deploy VMware Aria Suite Lifecycle by using SDDC Manager. SDDC Manager deploys
VMware Aria Suite Lifecycle in VMware Cloud Foundation mode. In this mode, VMware Aria Suite
Lifecycle is integrated with SDDC Manager, providing the following benefits:
n Integration with the SDDC Manager inventory to retrieve infrastructure details when creating
environments for Workspace ONE Access and VMware Aria Suite components, such as NSX
segments and vCenter Server details.
n Automation of the NSX load balancer configuration when deploying Workspace ONE Access,
VMware Aria Operations, and VMware Aria Automation.
n Deployment details for VMware Aria Suite Lifecycle environments are populated in the SDDC
Manager inventory and can be queried using the SDDC Manager API (see the sketch after this list).
n Day-two workflows in SDDC Manager to connect VMware Aria Operations for Logs and
VMware Aria Operations to workload domains.
n The ability to manage password life cycle for Workspace ONE Access and VMware Aria Suite
components.
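The following is a minimal sketch of querying the SDDC Manager API for the VMware Aria Suite Lifecycle record that VMware Cloud Foundation mode writes into the SDDC Manager inventory. The token endpoint follows the public VMware Cloud Foundation API; the /v1/vrslcms path and the response fields are assumptions, so check the API reference for your release. The hostname and credentials are examples.

```python
# Hedged sketch: read VMware Aria Suite Lifecycle deployment details from SDDC Manager.
import requests

SDDC = "https://sddc-manager.example.com"
token = requests.post(
    f"{SDDC}/v1/tokens",
    json={"username": "[email protected]", "password": "example-password"},
    verify=False,
).json()["accessToken"]

headers = {"Authorization": f"Bearer {token}"}
# Assumed endpoint exposing the VMware Aria Suite Lifecycle deployment details.
resp = requests.get(f"{SDDC}/v1/vrslcms", headers=headers, verify=False)
for vasl in resp.json().get("elements", []):
    print(vasl.get("fqdn"), vasl.get("status"))
```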
For information about deploying VMware Aria Suite components, see VMware Validated
Solutions.
n Logical Design for VMware Aria Suite Lifecycle for VMware Cloud Foundation
n Data Center and Environment Design for VMware Aria Suite Lifecycle
n VMware Aria Suite Lifecycle Design Requirements and Recommendations for VMware Cloud
Foundation
Logical Design
In a VMware Cloud Foundation environment, you use VMware Aria Suite Lifecycle in VMware
Cloud Foundation mode. In this mode, VMware Aria Suite Lifecycle is integrated with VMware
Cloud Foundation in the following way:
n SDDC Manager deploys the VMware Aria Suite Lifecycle appliance. Then, you deploy the
VMware Aria Suite products that are supported by VMware Cloud Foundation by using
VMware Aria Suite Lifecycle.
n Supported versions are controlled by the VMware Aria Suite Lifecycle appliance and Product
Support Packs. See the VMware Interoperability Matrix.
n To orchestrate the deployment, patching, and upgrade of Workspace ONE Access and the
VMware Aria Suite products, VMware Aria Suite Lifecycle communicates with SDDC Manager
and the management domain vCenter Server in the environment.
n SDDC Manager configures the load balancer for Workspace ONE Access, VMware Aria
Operations, and VMware Aria Automation.
(Figure: logical design of VMware Aria Suite Lifecycle in VMware Cloud Foundation mode in VCF Instance A. The appliance exposes a user interface and REST API, integrates with SDDC Manager and with Workspace ONE Access for identity management and access, and performs life cycle management of the VMware Aria Suite components and Workspace ONE Access against the management domain vCenter Server endpoint and shared storage. VCF Instance B shows the same access and integration relationships with its own shared storage.)
According to the VMware Cloud Foundation topology deployed, VMware Aria Suite Lifecycle is
deployed in one or more locations and is responsible for the life cycle of the VMware Aria Suite
components in one or more VMware Cloud Foundation instances.
VMware Cloud Foundation instances might be connected for the following reasons:
n Over-arching management of those instances from the same VMware Aria Suite
deployments.
Single VMware Cloud Foundation instance with a single availability zone:
- A single VMware Aria Suite Lifecycle appliance deployed on the cross-instance NSX segment.
- vSphere HA protects the VMware Aria Suite Lifecycle appliance.
- Life cycle management for Workspace ONE Access and VMware Aria Suite.
Single VMware Cloud Foundation instance with multiple availability zones:
- A single VMware Aria Suite Lifecycle appliance deployed on the cross-instance NSX segment.
- vSphere HA protects the VMware Aria Suite Lifecycle appliance.
- A should-run vSphere DRS rule specifies that the VMware Aria Suite Lifecycle appliance should run on an ESXi host in the first availability zone.
- Life cycle management for Workspace ONE Access and VMware Aria Suite.
Multiple VMware Cloud Foundation instances:
- The VMware Aria Suite Lifecycle instance in the first VMware Cloud Foundation instance provides life cycle management for Workspace ONE Access and VMware Aria Suite.
- VMware Aria Suite Lifecycle in each additional VMware Cloud Foundation instance provides life cycle management for VMware Aria Operations for Logs.
VMware Aria Suite Lifecycle must have routed access to the management VLAN through the
Tier-0 gateway in the NSX instance for the management domain.
Product Support
VMware Aria Suite Lifecycle provides several methods to obtain and store product binaries for
the install, patch, and upgrade of the VMware Aria Suite products.
Product Upload: You can upload and discover product binaries on the VMware Aria Suite Lifecycle appliance.
VMware Customer Connect: You can integrate VMware Aria Suite Lifecycle with VMware Customer Connect to access and download VMware Aria Suite product entitlements from an online depot over the Internet. This method simplifies, automates, and organizes the repository.
You create data centers and environments in VMware Aria Suite Lifecycle to manage the life
cycle operations on the VMware Aria Suite products and to support the growth of the SDDC.
Table 10-4. Logical Datacenter to vCenter Server Mappings in VMware Aria Suite Lifecycle
Cross-instance: Management domain vCenter Server for the local VMware Cloud Foundation instance, and management domain vCenter Server for an additional VMware Cloud Foundation instance. Supports the deployment of cross-instance components, such as Workspace ONE Access, VMware Aria Operations, and VMware Aria Automation, including any per-instance collector components.
Local-instance: Management domain vCenter Server for the local VMware Cloud Foundation instance. Supports the deployment of VMware Aria Operations for Logs.
VMware Cloud Foundation Mode:
- Infrastructure details for the deployed products, including vCenter Server, networking, DNS, and NTP information, are retrieved from the SDDC Manager inventory.
- Successful deployment details are synced back to the SDDC Manager inventory.
- Limited to one instance of each VMware Aria Suite product.
Note You can deploy new VMware Aria Suite products to the SDDC environment or import
existing product deployments.
Passwords
VMware Aria Suite Lifecycle stores passwords in the locker repository which are referenced
during life cycle operations on data centers, environments, products, and integrations.
Table 10-7. Life Cycle Operations Use of Locker Passwords in VMware Aria Suite Lifecycle
Products:
- Product administrator password, for example, the admin password for an individual product.
- Product appliance password, for example, the root password for an individual product.
Certificates
VMware Aria Suite Lifecycle stores certificates in the Locker repository which can be referenced
during product life cycle operations. Externally provided certificates, such as Certificate
Authority-signed certificates, can be imported or certificates can be generated by the VMware
Aria Suite Lifecycle appliance.
Licenses
VMware Aria Suite Lifecycle stores licenses in the Locker repository which can be referenced during product life cycle operations. Licenses can be validated and added to the repository directly or imported through an integration with VMware Customer Connect.
Table 10-8. VMware Aria Suite Lifecycle Design Requirements for VMware Cloud Foundation
VCF-VASL-REQD-CFG-001
Design Requirement: Deploy a VMware Aria Suite Lifecycle instance in the management domain of each VMware Cloud Foundation instance to provide life cycle management for VMware Aria Suite and Workspace ONE Access.
Justification: Provides life cycle management operations for the VMware Aria Suite applications and Workspace ONE Access.
Implication: You must ensure that required resources are available.
VCF-VASL-REQD-CFG-002
Design Requirement: Deploy VMware Aria Suite Lifecycle by using SDDC Manager.
Justification: Deploys VMware Aria Suite Lifecycle in VMware Cloud Foundation mode, which enables the integration with the SDDC Manager inventory for product deployment and life cycle management of VMware Aria Suite components. Automatically configures the standalone Tier-1 gateway required for load balancing the clustered Workspace ONE Access and VMware Aria Suite components.
Implication: None.
VCF-VASL-REQD-CFG-004
Design Requirement: Place the VMware Aria Suite Lifecycle appliance on an overlay-backed (recommended) or VLAN-backed NSX network segment.
Justification: Provides a consistent deployment model for management applications.
Implication: You must use an implementation in NSX to support this networking configuration.
VCF-VASL-REQD-CFG-005
Design Requirement: Import VMware Aria Suite product licenses to the Locker repository for product life cycle operations.
Justification: You can review the validity, details, and deployment usage for the license across the VMware Aria Suite products. You can reference and use licenses during product life cycle operations, such as deployment and license replacement.
Implication: When using the API, you must specify the Locker ID for the license to be used in the JSON payload.
VCF-VASL-REQD-ENV-001
Design Requirement: Configure datacenter objects in VMware Aria Suite Lifecycle for local and cross-instance VMware Aria Suite deployments and assign the management domain vCenter Server instance to each data center.
Justification: You can deploy and manage the integrated VMware Aria Suite components across the SDDC as a group.
Implication: You must manage a separate datacenter object for the products that are specific to each instance.
VCF-VASL-REQD-ENV-003
Design Requirement: If deploying VMware Aria Operations or VMware Aria Automation, create a cross-instance environment in VMware Aria Suite Lifecycle.
Justification: Supports deployment and management of the integrated VMware Aria Suite products across VMware Cloud Foundation instances as a group. Enables the deployment of instance-specific components, such as VMware Aria Operations remote collectors. In VMware Aria Suite Lifecycle, you can deploy and manage VMware Aria Operations remote collector objects only in an environment that contains the associated cross-instance components.
Implication: You can manage instance-specific components, such as remote collectors, only in an environment that is cross-instance.
VCF-VASL-REQD-SEC-001
Design Requirement: Use the custom vCenter Server role for VMware Aria Suite Lifecycle that has the minimum privileges required to support the deployment and upgrade of VMware Aria Suite products.
Justification: VMware Aria Suite Lifecycle accesses vSphere with the minimum set of permissions that are required to support the deployment and upgrade of VMware Aria Suite products. SDDC Manager automates the creation of the custom role.
Implication: You must maintain the permissions required by the custom role.
VCF-VASL-REQD-SEC-002
Design Requirement: Use the service account in vCenter Server for application-to-application communication from VMware Aria Suite Lifecycle to vSphere. Assign global permissions using the custom role.
Justification: Provides the following access control features: VMware Aria Suite Lifecycle accesses vSphere with the minimum set of required permissions, and you can introduce improved accountability in tracking request-response interactions between the components of the SDDC. SDDC Manager automates the creation of the service account.
Implication: You must maintain the life cycle and availability of the service account outside of SDDC Manager password rotation.
Table 10-9. VMware Aria Suite Lifecycle Design Requirements for Stretched Clusters in VMware
Cloud Foundation
VCF-VASL-REQD-CFG-006
Design Requirement: For multiple availability zones, add the VMware Aria Suite Lifecycle appliance to the VM group for the first availability zone.
Justification: Ensures that, by default, the VMware Aria Suite Lifecycle appliance is powered on a host in the first availability zone.
Implication: If VMware Aria Suite Lifecycle is deployed after the creation of the stretched management cluster, you must add the VMware Aria Suite Lifecycle appliance to the VM group manually.
Table 10-10. VMware Aria Suite Lifecycle Design Requirements for NSX Federation in VMware
Cloud Foundation
VCF-VASL-REQD-CFG-007
Design Requirement: Configure the DNS settings for the VMware Aria Suite Lifecycle appliance to use DNS servers in each instance.
Justification: Improves resiliency in the event of an outage of external services for a VMware Cloud Foundation instance.
Implication: As you scale from a deployment with a single VMware Cloud Foundation instance to one with multiple VMware Cloud Foundation instances, the DNS settings of the VMware Aria Suite Lifecycle appliance must be updated.
VCF-VASL-REQD-CFG-008
Design Requirement: Configure the NTP settings for the VMware Aria Suite Lifecycle appliance to use NTP servers in each VMware Cloud Foundation instance.
Justification: Improves resiliency if an outage of external services for a VMware Cloud Foundation instance occurs.
Implication: As you scale from a deployment with a single VMware Cloud Foundation instance to one with multiple VMware Cloud Foundation instances, the NTP settings on the VMware Aria Suite Lifecycle appliance must be updated.
Table 10-11. VMware Aria Suite Lifecycle Design Recommendations for VMware Cloud
Foundation
VCF-VASL-RCMD-LCM-001
Design Recommendation: Obtain product binaries for install, patch, and upgrade in VMware Aria Suite Lifecycle from VMware Customer Connect.
Justification: You can upgrade VMware Aria Suite products based on their general availability and endpoint interoperability rather than being listed as part of the VMware Cloud Foundation bill of materials (BOM). You can deploy and manage binaries in an environment that does not allow access to the Internet or are dark sites.
Implication: The site must have an Internet connection to use VMware Customer Connect. Sites without an Internet connection should use the local upload option instead.
VCF-VASL-RCMD-SEC-001
Design Recommendation: Enable integration between VMware Aria Suite Lifecycle and your corporate identity source by using the Workspace ONE Access instance.
Justification: Enables authentication to VMware Aria Suite Lifecycle by using your corporate identity source. Enables authorization through the assignment of organization and cloud services roles to enterprise users and groups defined in your corporate identity source.
Implication: You must deploy and configure Workspace ONE Access to establish the integration between VMware Aria Suite Lifecycle and your corporate identity sources.
VCF-VASL-RCMD-SEC-002
Design Recommendation: Create corresponding security groups in your corporate directory services for VMware Aria Suite Lifecycle roles: VCF, Content Release Manager, and Content Developer.
Justification: Streamlines the management of VMware Aria Suite Lifecycle roles for users.
Implication: You must create the security groups outside of the SDDC stack. You must set the desired directory synchronization interval in Workspace ONE Access to ensure that changes are available within a reasonable period.
n Directory integration to authenticate users against an identity provider (IdP), such as Active
Directory or LDAP.
n Access policies that consist of rules to specify criteria that users must meet to authenticate.
The Workspace ONE Access instance that is integrated with VMware Aria Suite Lifecycle
provides identity and access management services to VMware Aria Suite solutions that either run
in a VMware Cloud Foundation instance or must be available across VMware Cloud Foundation
instances.
For identity management design for a VMware Aria Suite product, see VMware Cloud Foundation
Validated Solutions .
For identity and access management for components other than VMware Aria Suite, such as
NSX, you can deploy a standalone Workspace ONE Access instance. See Identity and Access
Management for VMware Cloud Foundation.
n Sizing Considerations for Workspace ONE Access for VMware Cloud Foundation
n Integration Design for Workspace ONE Access with VMware Cloud Foundation
n Workspace ONE Access Design Requirements and Recommendations for VMware Cloud
Foundation
(Figure: standard Workspace ONE Access deployment in VCF Instance A. A single primary Workspace ONE Access node with a Postgres supporting component is accessed through the user interface and REST API, connects to an identity provider backed by directory services such as Active Directory or LDAP, relies on shared storage and DNS, NTP, and SMTP supporting infrastructure, and is consumed by VMware Aria Suite Lifecycle and the VMware Aria Suite components.)
Single availability zone:
- A single-node Workspace ONE Access instance deployed on an overlay-backed (recommended) or VLAN-backed NSX segment.
- SDDC solutions that are portable across VMware Cloud Foundation instances are integrated with the Workspace ONE Access instance in the first VMware Cloud Foundation instance.
Multiple availability zones:
- A single-node Workspace ONE Access instance deployed on an overlay-backed (recommended) or VLAN-backed NSX segment.
- SDDC solutions that are portable across VMware Cloud Foundation instances are integrated with the Workspace ONE Access instance in the first VMware Cloud Foundation instance.
- A should-run-on-hosts-in-group vSphere DRS rule ensures that, under normal operating conditions, the Workspace ONE Access node runs on a management ESXi host in the first availability zone.
(Figure: clustered Workspace ONE Access deployment in VCF Instance A. A three-node Workspace ONE Access cluster with a Postgres supporting component sits behind an NSX load balancer, is accessed through the user interface and REST API, connects to an identity provider backed by directory services such as Active Directory or LDAP, relies on shared storage and DNS, NTP, and SMTP supporting infrastructure, and serves cross-instance and local-instance solutions, including VMware Aria Suite Lifecycle and the VMware Aria Suite components.)
Single availability zone:
- A three-node Workspace ONE Access cluster behind an NSX load balancer and deployed on an overlay-backed (recommended) or VLAN-backed NSX segment is deployed in the first VMware Cloud Foundation instance.
- All Workspace ONE Access services and databases are configured for high availability using a native cluster configuration. SDDC solutions that are portable across VMware Cloud Foundation instances are integrated with this Workspace ONE Access cluster.
- Each node of the three-node cluster is configured as a connector to any relevant identity providers.
- vSphere HA protects the Workspace ONE Access nodes.
- vSphere DRS anti-affinity rules ensure that the Workspace ONE Access nodes run on different ESXi hosts.
- An additional single-node Workspace ONE Access instance is deployed on an overlay-backed (recommended) or VLAN-backed NSX segment in all other VMware Cloud Foundation instances.
Multiple availability zones:
- A three-node Workspace ONE Access cluster behind an NSX load balancer and deployed on an overlay-backed (recommended) or VLAN-backed NSX segment.
- All Workspace ONE Access services and databases are configured for high availability using a native cluster configuration. SDDC solutions that are portable across VMware Cloud Foundation instances are integrated with this Workspace ONE Access cluster.
- Each node of the three-node cluster is configured as a connector to any relevant identity providers.
- vSphere HA protects the Workspace ONE Access nodes.
- A vSphere DRS anti-affinity rule ensures that the Workspace ONE Access nodes run on different ESXi hosts.
- A should-run-on-hosts-in-group vSphere DRS rule ensures that, under normal operating conditions, the Workspace ONE Access nodes run on management ESXi hosts in the first availability zone.
- An additional single-node Workspace ONE Access instance is deployed on an overlay-backed (recommended) or VLAN-backed NSX segment in all other VMware Cloud Foundation instances.
For detailed sizing based on the overall profile of the VMware Cloud Foundation instance you
plan to deploy, see VMware Cloud Foundation Planning and Preparation Workbook.
Network Segment
This network design has the following features:
n All Workspace ONE Access components have routed access to the management VLAN
through the Tier-0 gateway in the NSX instance for the management domain.
n Routing to the management network and other external networks is dynamic and is based on
the Border Gateway Protocol (BGP).
Load Balancing
A Workspace ONE Access cluster deployment requires a load balancer to manage connections
to the Workspace ONE Access services.
Load-balancing services are provided by NSX. During the deployment of the Workspace ONE
Access cluster or scale-out of a standard deployment, VMware Aria Suite Lifecycle and SDDC
Manager coordinate to automate the configuration of the NSX load balancer. The load balancer is
configured with the following settings:
Table 11-4. Clustered Workspace ONE Access Load Balancer Configuration
After the integration, information security and access control configurations for the integrated
SDDC products can be configured.
See VMware Cloud Foundation Validated Solutions for the design for specific VMware Aria Suite
components including identity management.
Deployment Type
You consider the deployment type, standard or cluster, according to the design objectives for
the availability and number of users that the system and integrated SDDC solutions must support.
You deploy Workspace ONE Access on the default management vSphere cluster.
Standard (Recommended):
- Single node.
- NSX load balancer automatically deployed.
- Can be scaled out to a 3-node cluster behind an NSX load balancer.
- Can leverage vSphere HA for recovery after a failure occurs.
- Consumes less resources.
- Does not provide high availability for Identity Provider connectors.
Table 11-7. Workspace ONE Access Design Requirements for VMware Cloud Foundation
VCF-WSA-REQD-SEC-001
Design Requirement: Import certificate authority-signed certificates to the Locker repository for Workspace ONE Access product life cycle operations.
Justification: You can reference and use certificate authority-signed certificates during product life cycle operations, such as deployment and certificate replacement.
Implication: When using the API, you must specify the Locker ID for the certificate to be used in the JSON payload.
VCF-WSA-REQD-CFG-007
Design Requirement: If using clustered Workspace ONE Access, use the NSX load balancer that is configured by SDDC Manager on a dedicated Tier-1 gateway.
Justification: During the deployment of Workspace ONE Access by using VMware Aria Suite Lifecycle, SDDC Manager automates the configuration of an NSX load balancer for Workspace ONE Access to facilitate scale-out.
Implication: You must use the load balancer that is configured by SDDC Manager and the integration with VMware Aria Suite Lifecycle.
Table 11-8. Workspace ONE Access Design Requirements for Stretched Clusters in VMware
Cloud Foundation
VCF-WSA-REQD-CFG-008
Design Requirement: Add the Workspace ONE Access appliances to the VM group for the first availability zone.
Justification: Ensures that, by default, the Workspace ONE Access cluster nodes are powered on a host in the first availability zone.
Implication: If the Workspace ONE Access instance is deployed after the creation of the stretched management cluster, you must add the appliances to the VM group manually. Clustered Workspace ONE Access might require manual intervention after a failure of the active availability zone occurs.
Table 11-9. Workspace ONE Access Design Requirements for NSX Federation in VMware Cloud
Foundation
VCF-WSA-REQD-CFG-010
Design Requirement: Configure the NTP settings on Workspace ONE Access cluster nodes to use NTP servers in each VMware Cloud Foundation instance.
Justification: Improves resiliency if an outage of external services for a VMware Cloud Foundation instance occurs.
Implication: If you scale from a deployment with a single VMware Cloud Foundation instance to one with multiple VMware Cloud Foundation instances, the NTP settings on Workspace ONE Access must be updated.
Table 11-10. Workspace ONE Access Design Recommendations for VMware Cloud Foundation
VCF-WSA-RCMD-CFG-001
Design Recommendation: Protect all Workspace ONE Access nodes using vSphere HA.
Justification: Supports high availability for Workspace ONE Access.
Implication: None for standard deployments. Clustered Workspace ONE Access deployments might require intervention if an ESXi host failure occurs.
VCF-WSA-RCMD-CFG-003
Design Recommendation: When using Active Directory as an Identity Provider, use an Active Directory user account with a minimum of read-only access to Base DNs for users and groups as the service account for the Active Directory bind.
Justification: Provides the following access control features: Workspace ONE Access connects to the Active Directory with the minimum set of required permissions to bind and query the directory, and you can introduce improved accountability in tracking request-response interactions between the Workspace ONE Access and Active Directory.
Implication: You must manage the password life cycle of this account. If authentication to more than one Active Directory domain is required, additional accounts are required for the Workspace ONE Access connector to bind to each Active Directory domain over LDAP.
VCF-WSA-RCMD-CFG-004
Design Recommendation: Configure the directory synchronization to synchronize only groups required for the integrated SDDC solutions.
Justification: Limits the number of replicated groups required for each product. Reduces the replication interval for group information.
Implication: You must manage the groups from your enterprise directory selected for synchronization to Workspace ONE Access.
VCF-WSA-RCMD-CFG-007
Design Recommendation: Add a filter to the Workspace ONE Access directory settings to exclude users from the directory replication.
Justification: Limits the number of replicated users for Workspace ONE Access within the maximum scale.
Implication: To ensure that replicated user accounts are managed within the maximums, you must define a filtering schema that works for your organization based on your directory attributes.
VCF-WSA-RCMD-CFG-008
Design Recommendation: Configure the mapped attributes included when a user is added to the Workspace ONE Access directory.
Justification: You can configure the minimum required and extended user attributes to synchronize directory user accounts for the Workspace ONE Access to be used as an authentication source for cross-instance VMware Aria Suite solutions.
Implication: User accounts in your organization's enterprise directory must have the following required attributes mapped: firstname, for example, givenname for Active Directory; lastName, for example, sn for Active Directory.
VCF-WSA-RCMD-CFG-009
Design Recommendation: Configure the Workspace ONE Access directory synchronization frequency to a recurring schedule, for example, 15 minutes.
Justification: Ensures that any changes to group memberships in the corporate directory are available for integrated solutions in a timely manner.
Implication: Schedule the synchronization interval to be longer than the time to synchronize from the enterprise directory. If users and groups are being synchronized to Workspace ONE Access when the next synchronization is scheduled, the new synchronization starts immediately after the end of the previous iteration. With this schedule, the process is continuous.
VCF-WSA-RCMD-SEC-002
Design Recommendation: Configure a password policy for Workspace ONE Access local directory users, admin and configadmin.
Justification: You can set a policy for Workspace ONE Access local directory users that addresses your corporate policies and regulatory standards. The password policy is applicable only to the local directory users and does not impact your organization directory.
Implication: You must set the policy in accordance with your organization policies and regulatory standards, as applicable. You must apply the password policy on the Workspace ONE Access cluster nodes.
Life cycle management of a VMware Cloud Foundation instance is the process of performing
patch updates or upgrades to the underlying management components.
SDDC Manager: SDDC Manager performs its own life cycle management.
NSX Local Manager: SDDC Manager uses the NSX upgrade coordinator service in the NSX Local Manager.
NSX Edges: SDDC Manager uses the NSX upgrade coordinator service in NSX Manager.
NSX Global Manager: You manually use the NSX upgrade coordinator service in the NSX Global Manager.
vCenter Server: You use SDDC Manager for life cycle management of all vCenter Server instances.
VMware Aria Suite Lifecycle: VMware Aria Suite Lifecycle performs its own life cycle management.
Table 12-2. Life Cycle Management Design Requirements for VMware Cloud Foundation
VCF-LCM-REQD-001
Design Requirement: Use SDDC Manager to perform the life cycle management of the following components: SDDC Manager, NSX Manager, NSX Edges, vCenter Server, and ESXi.
Justification: Because the deployment scope of SDDC Manager covers the full VMware Cloud Foundation stack, SDDC Manager performs patching, update, or upgrade of these components across all workload domains.
Implication: The operations team must understand and be aware of the impact of a patch, update, or upgrade operation by using SDDC Manager.
VCF-LCM-REQD-002
Design Requirement: Use VMware Aria Suite Lifecycle to manage the life cycle of the following components: VMware Aria Suite Lifecycle and Workspace ONE Access.
Justification: VMware Aria Suite Lifecycle automates the life cycle of VMware Aria Suite Lifecycle and Workspace ONE Access.
Implication: You must deploy VMware Aria Suite Lifecycle by using SDDC Manager. You must manually apply Workspace ONE Access patches, updates, and hotfixes. Patches, updates, and hotfixes for Workspace ONE Access are not generally managed by VMware Aria Suite Lifecycle.
VCF-LCM-RCMD-001
Design Recommendation: Use vSphere Lifecycle Manager images to manage the life cycle of vSphere clusters.
Justification:
- With vSphere Lifecycle Manager images, firmware updates are carried out through firmware and driver add-ons, which you add to the image you use to manage a cluster.
- You can check the hardware compatibility of the hosts in a cluster against the VMware Compatibility Guide.
- You can validate a vSphere Lifecycle Manager image to check if it applies to all hosts in the cluster. You can also perform a remediation pre-check.
Implication: Updating the firmware with images requires an OEM-provided hardware support manager plug-in, which integrates with vSphere Lifecycle Manager. An updated vSAN Hardware Compatibility List (vSAN HCL) is required during bring-up.
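Because SDDC Manager drives these life cycle operations from downloaded software bundles, it can help to list what is currently available before planning an update. The sketch below is a hedged example: GET /v1/bundles is part of the public VMware Cloud Foundation API, but the exact response fields may differ between releases, and the hostname and credentials are examples.

```python
# Hedged sketch: list the software bundles available to SDDC Manager.
import requests

SDDC = "https://sddc-manager.example.com"
token = requests.post(
    f"{SDDC}/v1/tokens",
    json={"username": "[email protected]", "password": "example-password"},
    verify=False,
).json()["accessToken"]

resp = requests.get(f"{SDDC}/v1/bundles",
                    headers={"Authorization": f"Bearer {token}"}, verify=False)
for bundle in resp.json().get("elements", []):
    # Assumed field names; inspect the raw JSON from your environment for the exact schema.
    print(bundle.get("id"), bundle.get("type"), bundle.get("version"))
```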
Table 12-3. Life Cycle Management Design Requirements for NSX Federation in VMware Cloud
Foundation
VCF-LCM-REQD-003
Design Requirement: Use the upgrade coordinator in NSX to perform life cycle management on the NSX Global Manager appliances.
Justification: The version of SDDC Manager in this design is not currently capable of life cycle operations (patching, update, or upgrade) for NSX Global Manager.
Implication: You must explicitly plan upgrades of the NSX Global Manager nodes. An upgrade of the NSX Global Manager nodes might require a cascading upgrade of the NSX Local Manager nodes and underlying SDDC Manager infrastructure before upgrading the NSX Global Manager nodes. You must always align the version of the NSX Global Manager nodes with the rest of the SDDC stack in VMware Cloud Foundation.
VCF-LCM-REQD-004
Design Requirement: Establish an operations practice to ensure that prior to the upgrade of any workload domain, the impact of any version upgrades is evaluated in relation to the need to upgrade NSX Global Manager.
Justification: The versions of NSX Global Manager and NSX Local Manager nodes must be compatible with each other. Because SDDC Manager does not provide life cycle operations (patching, update, or upgrade) for the NSX Global Manager nodes, upgrade to an unsupported version cannot be prevented.
Implication: The administrator must establish and follow an operations practice by using a runbook or automated process to ensure a fully supported and compliant bill of materials prior to any upgrade operation.
VCF-LCM-REQD-005
Design Requirement: Establish an operations practice to ensure that prior to the upgrade of the NSX Global Manager, the impact of any version change is evaluated against the existing NSX Local Manager nodes and workload domains.
Justification: The versions of NSX Global Manager and NSX Local Manager nodes must be compatible with each other. Because SDDC Manager does not provide life cycle operations (patching, update, or upgrade) for the NSX Global Manager nodes, upgrade to an unsupported version cannot be prevented.
Implication: The administrator must establish and follow an operations practice by using a runbook or automated process to ensure a fully supported and compliant bill of materials prior to any upgrade operation.
After you deploy VMware Aria Operations for Logs by using VMware Aria Suite Lifecycle
in VMware Cloud Foundation mode, SDDC Manager configures VMware Aria Suite Lifecycle
logging to VMware Aria Operations for Logs over the log ingestion API. For information about
on-premises VMware Aria Operations for Logs in VMware Cloud Foundation, see Intelligent
Logging and Analytics for VMware Cloud Foundation.
n SSH
n SSH
ESXi: Direct Console User Interface (DCUI), ESXi Shell, SSH, and VMware Host Client. SSH and the ESXi Shell are deactivated by default.
For more information on password complexity, account lockout or integration with additional
Identity Providers, refer to the Identity and Access Management for VMware Cloud Foundation.
Table 14-2. Account and Password Management in VMware Cloud Foundation (continued)
NSX Global Manager admin: rotated manually by using the NSX Global Manager UI or API. Local appliance account with OS level, API, and application access. Default expiry: 90 days.
Table 14-3. Design Requirements for Account and Password Management for VMware Cloud
Foundation
VCF-ACTMGT-REQD-SEC-001
Design Requirement: Enable scheduled password rotation in SDDC Manager for all accounts supporting scheduled rotation.
Justification: Increases the security posture of your SDDC. Simplifies password management across your SDDC management components.
Implication: You must retrieve new passwords by using the API if you must use accounts interactively.
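The implication above means that, after a scheduled rotation, operators read the current credentials from SDDC Manager rather than tracking them manually. The following is a hedged sketch of that lookup: the /v1/credentials endpoint exists in the public VMware Cloud Foundation API, but treat the response fields shown here as assumptions and confirm them against the API reference for your release; the hostname and credentials are examples.

```python
# Hedged sketch: list managed accounts known to SDDC Manager after password rotation.
import requests

SDDC = "https://sddc-manager.example.com"
token = requests.post(
    f"{SDDC}/v1/tokens",
    json={"username": "[email protected]", "password": "example-password"},
    verify=False,
).json()["accessToken"]

resp = requests.get(f"{SDDC}/v1/credentials",
                    headers={"Authorization": f"Bearer {token}"}, verify=False)
for cred in resp.json().get("elements", []):
    # Assumed field names; the rotated secret itself should be handled carefully, not printed.
    resource = cred.get("resource", {})
    print(resource.get("resourceName"), cred.get("username"), cred.get("credentialType"))
```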
Access to all management component interfaces must be over a Secure Socket Layer (SSL)
connection. During deployment, each component is assigned a certificate from a default signing
CA. To provide secure access to each component, replace the default certificate with a trusted
enterprise CA-signed certificate.
VMware Aria Suite Lifecycle: default certificate signed by the management domain VMCA; replaced by using SDDC Manager.
Note * To use enterprise CA-Signed certificates with ESXi, the initial deployment of VMware
Cloud Foundation must be done using the API providing the Trusted Root certificate.
Table 14-5. Certificate Management Design Recommendations for VMware Cloud Foundation
VCF-SDDC-RCMD-SEC-001
Design Recommendation: Replace the default VMCA or signed certificates on all management virtual appliances with a certificate that is signed by an internal certificate authority.
Justification: Ensures that the communication to all management components is secure.
Implication: Replacing the default certificates with trusted CA-signed certificates from a certificate authority might increase the deployment preparation time because you must generate and submit certificate requests.
VCF-SDDC-RCMD-SEC-002
Design Recommendation: Use a SHA-2 algorithm or higher for signed certificates.
Justification: The SHA-1 algorithm is considered less secure and has been deprecated.
Implication: Not all certificate authorities support SHA-2 or higher.
VCF-SDDC-RCMD-SEC-003
Design Recommendation: Perform SSL certificate life cycle management for all management appliances by using SDDC Manager.
Justification: SDDC Manager supports automated SSL certificate lifecycle management rather than requiring a series of manual steps.
Implication: Certificate management for NSX Global Manager instances must be done manually.
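As an illustration of the SDDC Manager-driven certificate workflow that VCF-SDDC-RCMD-SEC-003 refers to, the sketch below generates CSRs for a domain. The endpoint shape and payload are assumptions modeled on the public VMware Cloud Foundation API; verify the exact schema for your release before use. The hostname, domain name, and organization details are examples.

```python
# Hedged sketch: start the certificate workflow by generating CSRs through SDDC Manager.
import requests

SDDC = "https://sddc-manager.example.com"
token = requests.post(
    f"{SDDC}/v1/tokens",
    json={"username": "[email protected]", "password": "example-password"},
    verify=False,
).json()["accessToken"]
headers = {"Authorization": f"Bearer {token}"}

csr_spec = {
    # Assumed payload: CSR generation details for resources in the chosen domain.
    "csrGenerationSpec": {
        "country": "US",
        "organization": "Example Org",
        "organizationUnit": "IT",
        "locality": "Palo Alto",
        "state": "CA",
        "keySize": "3072",
        "keyAlgorithm": "RSA",
    },
    "resources": [{"fqdn": "vcenter-mgmt.example.com", "type": "VCENTER"}],
}
resp = requests.put(f"{SDDC}/v1/domains/management-domain/csrs",
                    json=csr_spec, headers=headers, verify=False)
print(resp.status_code)
```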
n VMware Aria Suite Lifecycle Design Elements for VMware Cloud Foundation
For full design details, see Architecture Models and Workload Domain Types in VMware Cloud
Foundation.
For full design details, see Chapter 4 External Services Design for VMware Cloud Foundation.
Table 15-3. External Services Design Requirements for VMware Cloud Foundation
VCF-EXT-REQD-NET-001
Design Requirement: Allocate statically assigned IP addresses and host names for all workload domain components.
Justification: Ensures stability across the VMware Cloud Foundation instance, and makes it simpler to maintain, track, and implement a DNS configuration.
Implication: You must provide precise IP address management.
VCF-EXT-REQD-NET-002
Design Requirement: Configure forward and reverse DNS records for all workload domain components.
Justification: Ensures that all components are accessible by using a fully qualified domain name instead of by using IP addresses only. It is easier to remember and connect to components across the VMware Cloud Foundation instance.
Implication: You must provide DNS records for each component.
For full design details, see Chapter 5 Physical Network Infrastructure Design for VMware Cloud
Foundation.
Table 15-4. Leaf-Spine Physical Network Design Requirements for VMware Cloud Foundation
VCF-NET-REQD-CFG-004
Design Requirement: Set the MTU size to at least 1,700 bytes (recommended 9,000 bytes for jumbo frames) on the physical switch ports, vSphere Distributed Switches, vSphere Distributed Switch port groups, and N-VDS switches that support the following traffic types: Overlay (Geneve), vSAN, and vSphere vMotion.
Justification:
- Improves traffic throughput.
- Supports Geneve by increasing the MTU size to a minimum of 1,600 bytes.
- Geneve is an extensible protocol. The MTU size might increase with future capabilities. While 1,600 bytes is sufficient, an MTU size of 1,700 bytes provides more room for increasing the Geneve MTU size without the need to change the MTU size of the physical infrastructure.
Implication: When adjusting the MTU packet size, you must also configure the entire network path (VMkernel network adapters, virtual switches, physical switches, and routers) to support the same MTU packet size. In an environment with multiple availability zones, the MTU must be configured on the entire network path between the zones.
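On the vSphere side, the distributed switch portion of this requirement can be applied programmatically. The following is a minimal sketch using pyVmomi; the vCenter Server address, credentials, and switch name are examples, and disabling SSL verification is for lab use only.

```python
# Hedged sketch: raise the MTU on an existing vSphere Distributed Switch to match the fabric.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter-mgmt.example.com", user="[email protected]",
                  pwd="example-password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "sfo-m01-vds01")  # assumed switch name
view.Destroy()

spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
spec.configVersion = dvs.config.configVersion   # required for the reconfigure call
spec.maxMtu = 9000                              # jumbo frames, per the design requirement
task = dvs.ReconfigureDvs_Task(spec)
print("Reconfigure task:", task.info.key)
Disconnect(si)
```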
Table 15-5. Leaf-Spine Physical Network Design Requirements for NSX Federation in VMware
Cloud Foundation
VCF-NET-REQD-CFG-005
Design Requirement: Set the MTU size to at least 1,500 bytes (1,700 bytes preferred; 9,000 bytes recommended for jumbo frames) on the components of the physical network between the VMware Cloud Foundation instances for the following traffic types: NSX Edge RTEP.
Justification: Jumbo frames are not required between VMware Cloud Foundation instances. However, increased MTU improves traffic throughput. Increasing the RTEP MTU to 1,700 bytes minimizes fragmentation for standard-size workload packets between VMware Cloud Foundation instances.
Implication: When adjusting the MTU packet size, you must also configure the entire network path, that is, virtual interfaces, virtual switches, physical switches, and routers, to support the same MTU packet size.
VCF-NET-REQD-CFG-006
Design Requirement: Ensure that the latency between VMware Cloud Foundation instances that are connected in an NSX Federation is less than 500 ms.
Justification: A latency lower than 500 ms is required for NSX Federation.
Implication: None.
Table 15-6. Leaf-Spine Physical Network Design Recommendations for VMware Cloud
Foundation
VCF-NET-RCMD-CFG-001
Design Recommendation: Use two ToR switches for each rack.
Justification: Supports the use of two 10-GbE (25-GbE or greater recommended) links to each server, provides redundancy, and reduces the overall design complexity.
Implication: Requires two ToR switches per rack which might increase costs.
VCF-NET-RCMD-CFG-004
Design Recommendation: Assign persistent IP configurations for NSX tunnel endpoints (TEPs) that use static IP pools instead of dynamic IP pool addressing.
Justification: Ensures that endpoints have a persistent TEP IP address. In VMware Cloud Foundation, TEP IP assignment by using static IP pools is recommended for all topologies. This configuration removes any requirement for external DHCP services.
Implication: If you add more hosts to the cluster, expanding the static IP pools might be required.
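The static IP pool that backs the TEP assignment can be created in NSX ahead of time. The sketch below uses the NSX Policy API; the /infra/ip-pools path and the IpAddressPoolStaticSubnet type are drawn from that API, but validate the payload against your NSX version, and note that the addresses, names, and credentials are examples.

```python
# Hedged sketch: create a static IP pool and subnet for host TEPs through the NSX Policy API.
import requests

NSX = "https://nsx-mgmt.example.com"
AUTH = ("admin", "example-password")

# Create (or update) the pool object.
requests.patch(f"{NSX}/policy/api/v1/infra/ip-pools/host-tep-pool-az1",
               json={"display_name": "host-tep-pool-az1"}, auth=AUTH, verify=False)

# Add a static subnet with an allocation range for the host TEPs in availability zone 1.
subnet = {
    "resource_type": "IpAddressPoolStaticSubnet",
    "cidr": "172.16.47.0/24",
    "gateway_ip": "172.16.47.1",
    "allocation_ranges": [{"start": "172.16.47.10", "end": "172.16.47.200"}],
}
requests.patch(
    f"{NSX}/policy/api/v1/infra/ip-pools/host-tep-pool-az1/ip-subnets/az1-range",
    json=subnet, auth=AUTH, verify=False,
)
```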
VCF-NET-RCMD-CFG-005
Design Recommendation: Configure the trunk ports connected to ESXi NICs as trunk PortFast.
Justification: Reduces the time to transition ports over to the forwarding state.
Implication: Although this design does not use the STP, switches usually have STP configured by default.
VCF-NET-RCMD-CFG-006
Design Recommendation: Configure VRRP, HSRP, or another Layer 3 gateway availability method for these networks: Management and Edge overlay.
Justification: Ensures that the VLANs that are stretched between availability zones are connected to a highly-available gateway. Otherwise, a failure in the Layer 3 gateway will cause disruption in the traffic in the SDN setup.
Implication: Requires configuration of a high availability technology for the Layer 3 gateways in the data center.
Table 15-7. Leaf-Spine Physical Network Design Recommendations for NSX Federation in
VMware Cloud Foundation
After you set up the physical storage infrastructure, the configuration tasks for most design
decisions are automated in VMware Cloud Foundation. You must perform the configuration
manually only for a limited number of design elements as noted in the design implication.
For full design details, see Chapter 6 vSAN Design for VMware Cloud Foundation.
Table 15-9. vSAN ESA Design Requirements for VMware Cloud Foundation
VCF-VSAN-REQD-CFG-003
Design Requirement: Verify the hardware components used in your vSAN deployment are on the vSAN Hardware Compatibility List.
Justification: Prevents hardware-related failures during workload deployment.
Implication: Limits the number of compatible hardware configurations that can be used.
Table 15-10. vSAN Design Requirements for Stretched Clusters with VMware Cloud Foundation
VCF-VSAN-REQD-CFG-004
Design Requirement: Add the following setting to the default vSAN storage policy: Site disaster tolerance = Site mirroring - stretched cluster.
Justification: Provides the necessary protection for virtual machines in each availability zone, with the ability to recover from an availability zone outage.
Implication: You might need additional policies if third-party virtual machines are to be hosted in these clusters because their performance or availability requirements might differ from what the default VMware vSAN policy supports.
VCF-VSAN-REQD-CFG-005
Design Requirement: Configure two fault domains, one for each availability zone. Assign each host to their respective availability zone fault domain.
Justification: Fault domains are mapped to availability zones to provide logical host separation and ensure a copy of vSAN data is always available even when an availability zone goes offline.
Implication: You must provide additional raw storage when the site mirroring - stretched cluster option is selected, and fault domains are enabled.
VCF-VSAN-REQD-CFG-007
Design Requirement: Configure an individual vSAN storage policy for each stretched cluster.
Justification: The vSAN storage policy of a stretched cluster cannot be shared with other clusters.
Implication: You must configure additional vSAN storage policies.
VCF-VSAN-WTN-REQD-CFG-001
Design Requirement: Deploy a vSAN witness appliance in a location that is not local to the ESXi hosts in any of the availability zones.
Justification: Ensures availability of vSAN witness components in the event of a failure of one of the availability zones.
Implication: You must provide a third physically separate location that runs a vSphere environment. You might use a VMware Cloud Foundation instance in a separate physical location.
VCF-VSAN-WTN-REQD-CFG-002
Design Requirement: Deploy a witness appliance that corresponds to the required cluster capacity.
Justification: Ensures the witness appliance is sized to support the projected workload storage consumption.
Implication: The vSphere environment at the witness location must satisfy the resource requirements of the witness appliance.
VCF-VSAN-WTN-REQD-CFG-003
Design Requirement: Connect the first VMkernel adapter of the vSAN witness appliance to the management network in the witness site.
Justification: Enables connecting the witness appliance to the workload domain vCenter Server.
Implication: The management networks in both availability zones must be routed to the management network in the witness site.
VCF-VSAN-WTN-REQD-CFG-004
Design Requirement: Allocate a statically assigned IP address and host name to the management adapter of the vSAN witness appliance.
Justification: Simplifies maintenance and tracking, and implements a DNS configuration.
Implication: Requires precise IP address management.
VCF-VSAN-WTN-REQD-CFG-005
Design Requirement: Configure forward and reverse DNS records for the vSAN witness appliance for the VMware Cloud Foundation instance.
Justification: Enables connecting the vSAN witness appliance to the workload domain vCenter Server by FQDN instead of IP address.
Implication: You must provide DNS records for the vSAN witness appliance.
VCF-VSAN-WTN-REQD-CFG-006
Design Requirement: Configure time synchronization by using an internal NTP time source for the vSAN witness appliance.
Justification: Prevents any failures in the stretched cluster configuration that are caused by time mismatch between the vSAN witness appliance and the ESXi hosts in both availability zones and the workload domain vCenter Server.
Implication: An operational NTP service must be available in the environment. All firewalls between the vSAN witness appliance and the NTP servers must allow NTP traffic on the required network ports.
Table 15-11. vSAN Design Recommendations for VMware Cloud Foundation
VCF-VSAN-RCMD-CFG-001
Design Recommendation: Provide sufficient raw capacity to meet the planned needs of the workload domain cluster.
Justification: Ensures that sufficient resources are present in the workload domain cluster, preventing the need to expand the vSAN datastore in the future.
Implication: None.
VCF-VSAN-RCMD-CFG-002
Design Recommendation: Ensure that at least 30% of free space is always available on the vSAN datastore.
Justification: This reserved capacity is set aside for host maintenance mode data evacuation, component rebuilds, rebalancing operations, and VM snapshots.
Implication: Increases the amount of available storage needed.
VCF-VSAN-RCMD-CFG-003
Design Recommendation: Use the default VMware vSAN storage policy.
Justification: Provides the level of redundancy that is needed in the workload domain cluster. Provides the level of performance that is enough for the individual workloads.
Implication: You might need additional policies for third-party virtual machines hosted in these clusters because their performance or availability requirements might differ from what the default VMware vSAN policy supports.
VCF-VSAN-RCMD-CFG-004
Design Recommendation: Leave the default virtual machine swap file as a sparse object on vSAN.
Justification: Sparse virtual swap files consume capacity on vSAN only as they are accessed. As a result, you can reduce the consumption on the vSAN datastore if virtual machines do not experience memory over-commitment, which would require the use of the virtual swap file.
Implication: None.
VCF-VSAN-RCMD-CFG-005
Design Recommendation: Use the existing vSphere Distributed Switch instance for the workload domain cluster.
Justification: Reduces the complexity of the network design. Reduces the number of physical NICs required.
Implication: All traffic types can be shared over common uplinks.
Table 15-11. vSAN Design Recommendations for VMware Cloud Foundation (continued)

VCF-VSAN-RCMD-CFG-006
Design Recommendation: Configure jumbo frames on the VLAN for vSAN traffic.
Justification: Simplifies configuration because jumbo frames are also used to improve the performance of vSphere vMotion and NFS storage traffic. Reduces CPU overhead, which supports high network usage.
Implication: Every device in the network must support jumbo frames.

VCF-VSAN-RCMD-CFG-007
Design Recommendation: Configure vSAN in an all-flash configuration in the default workload domain cluster.
Justification: Meets the performance needs of the default workload domain cluster.
Implication: All vSAN disks must be flash disks, which might cost more than magnetic disks.
Table 15-12. vSAN OSA Design Recommendations for VMware Cloud Foundation

VCF-VSAN-RCMD-CFG-008
Design Recommendation: Ensure that the storage I/O controller has a minimum queue depth of 256 set.
Justification: Storage controllers with lower queue depths can cause performance and stability problems when running vSAN. vSAN ReadyNode servers are configured with the correct queue depths for vSAN.
Implication: Limits the number of compatible I/O controllers that can be used for storage.

VCF-VSAN-RCMD-CFG-009
Design Recommendation: Do not use the storage I/O controllers that are running vSAN disk groups for another purpose.
Justification: Running non-vSAN disks, for example, VMFS, on a storage I/O controller that is running a vSAN disk group can impact vSAN performance.
Implication: If non-vSAN disks are required in ESXi hosts, you must have an additional storage I/O controller in the host.
Table 15-12. vSAN OSA Design Recommendations for VMware Cloud Foundation (continued)

VCF-VSAN-RCMD-CFG-010
Design Recommendation: Configure vSAN with a minimum of two disk groups per ESXi host.
Justification: Reduces the size of the fault domain and spreads the I/O load over more disks for better performance.
Implication: Using multiple disk groups requires more disks in each ESXi host.

VCF-VSAN-RCMD-CFG-011
Design Recommendation: For the cache tier in each disk group, use a flash-based drive that is at least 600 GB large.
Justification: Provides enough cache for both hybrid or all-flash vSAN configurations to buffer I/O and ensure disk group performance. Additional space in the cache tier does not increase performance.
Implication: Using larger flash disks can increase the initial host cost.
Table 15-13. vSAN ESA Design Recommendations for VMware Cloud Foundation

VCF-VSAN-RCMD-CFG-012
Design Recommendation: Activate auto-policy management.
Justification: Configures optimized storage policies based on the cluster type and the number of hosts in the cluster inventory. Changes to the number of hosts in the cluster or to the Host Rebuild Reserve will prompt you to make a suggested adjustment to the optimized storage policy.
Implication: You must activate auto-policy management manually.

VCF-VSAN-RCMD-CFG-013
Design Recommendation: Activate vSAN ESA compression.
Justification: Activated by default, it also improves performance.
Implication: PostgreSQL databases and other applications might use their own compression capabilities. In these cases, using a storage policy with the compression capability turned off will save CPU cycles. You can disable vSAN ESA compression for such workloads through the use of the Storage Policy Based Management (SPBM) framework.

VCF-VSAN-RCMD-CFG-014
Design Recommendation: Use NICs with a minimum 25-GbE capacity.
Justification: 10-GbE NICs will limit the scale and performance of a vSAN ESA cluster because performance requirements usually increase over the lifespan of the cluster.
Implication: Requires a 25-GbE or faster network fabric.
Table 15-14. vSAN Design Recommendations for Stretched Clusters with VMware Cloud Foundation

VCF-VSAN-WTN-RCMD-CFG-001
Design Recommendation: Configure the vSAN witness appliance to use the first VMkernel adapter, that is, the management interface, for vSAN witness traffic.
Justification: Removes the requirement to have static routes on the witness appliance as witness traffic is routed over the management network.
Implication: The management networks in both availability zones must be routed to the management network in the witness site.

VCF-VSAN-WTN-RCMD-CFG-002
Design Recommendation: Place witness traffic on the management VMkernel adapter of all the ESXi hosts in the workload domain.
Justification: Separates the witness traffic from the vSAN data traffic. Witness traffic separation provides the following benefits:
- Removes the requirement to have static routes from the vSAN networks in both availability zones to the witness site.
- Removes the requirement to have jumbo frames enabled on the path between each availability zone and the witness site because witness traffic can use a regular MTU size of 1500 bytes.
Implication: The management networks in both availability zones must be routed to the management network in the witness site.
The configuration tasks for most design requirements and recommendations are automated
in VMware Cloud Foundation. You must perform the configuration manually only for a limited
number of decisions as noted in the design implications.
For full design details, see ESXi Design for VMware Cloud Foundation.
VCF-ESX-REQD-CFG-002
Design Requirement: Ensure each ESXi host matches the required CPU, memory and storage specification.
Justification: Ensures workloads will run without contention even during failure and maintenance conditions.
Implication: Assemble the server specification and number according to the sizing in the VMware Cloud Foundation Planning and Preparation Workbook, which is based on projected deployment size.
VCF-ESX-RCMD-CFG-001
Design Recommendation: Use vSAN ReadyNodes with vSAN storage for each ESXi host in the management domain.
Justification: Your management domain is fully compatible with vSAN at deployment. For information about the models of physical servers that are vSAN-ready, see vSAN Compatibility Guide for vSAN ReadyNodes.
Implication: Hardware choices might be limited. If you plan to use a server configuration that is not a vSAN ReadyNode, your CPU, disks and I/O modules must be listed on the VMware Compatibility Guide under CPU Series and vSAN Compatibility List aligned to the ESXi version specified in the VMware Cloud Foundation 5.1 Release Notes.
VCF-ESX-RCMD-CFG-002
Design Recommendation: Allocate hosts with uniform configuration across the default management vSphere cluster.
Justification: A balanced cluster has these advantages:
- Predictable performance even during hardware failures
- Minimal impact of resynchronization or rebuild operations on performance
Implication: You must apply vendor sourcing, budgeting, and procurement considerations for uniform server nodes on a per cluster basis.

VCF-ESX-RCMD-CFG-003
Design Recommendation: When sizing CPU, do not consider multithreading technology and associated performance gains.
Justification: Although multithreading technologies increase CPU performance, the performance gain depends on running workloads and differs from one case to another.
Implication: Because you must provide more physical CPU cores, costs increase and hardware choices become limited.

VCF-ESX-RCMD-CFG-004
Design Recommendation: Install and configure all ESXi hosts in the default management cluster to boot using a 128-GB device or larger.
Justification: Provides hosts that have large memory, that is, greater than 512 GB, with enough space for the scratch partition when using vSAN.
Implication: None.
VCF-ESX-RCMD-CFG-006
Design Recommendation: For workloads running in the default management cluster, save the virtual machine swap file at the default location.
Justification: Simplifies the configuration process.
Implication: Increases the amount of replication traffic for management workloads that are recovered as part of the disaster recovery process.

VCF-ESX-RCMD-NET-001
Design Recommendation: Place the ESXi hosts in each management domain cluster on a host management network that is separate from the VM management network.
Justification: Enables the separation of the physical VLAN between ESXi hosts and the other management components for security reasons.
Implication: Increases the number of VLANs required.

VCF-ESX-RCMD-NET-002
Design Recommendation: Place the ESXi hosts in each VI workload domain on a separate host management VLAN-backed network.
Justification: Enables the separation of the physical VLAN between the ESXi hosts in different VI workload domains for security reasons.
Implication: Increases the number of VLANs required. For each VI workload domain, you must allocate a separate management subnet.
VCF-ESX-RCMD-SEC-001
Design Recommendation: Deactivate SSH access on all ESXi hosts in the management domain by having the SSH service stopped and using the default SSH service policy "Start and stop manually".
Justification: Ensures compliance with the vSphere Security Configuration Guide and with security best practices. Disabling SSH access reduces the risk of security attacks on the ESXi hosts through the SSH interface.
Implication: You must activate SSH access manually for troubleshooting or support activities as VMware Cloud Foundation deactivates SSH on ESXi hosts after workload domain deployment.

VCF-ESX-RCMD-SEC-002
Design Recommendation: Set the advanced setting UserVars.SuppressShellWarning to 0 across all ESXi hosts in the management domain.
Justification: Ensures compliance with the vSphere Security Configuration Guide and with security best practices. Enables the warning message that appears in the vSphere Client every time SSH access is activated on an ESXi host.
Implication: You must turn off SSH enablement warning messages manually when performing troubleshooting or support activities.
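The SSH hardening in VCF-ESX-RCMD-SEC-001 can be re-applied or verified programmatically after support activities. The following is an illustrative pyVmomi sketch, not part of the automated VMware Cloud Foundation workflow; the vCenter Server name and credentials are placeholders and error handling is omitted for brevity.

```python
# Illustrative pyVmomi sketch: stop the SSH (TSM-SSH) service on every ESXi host
# and set its startup policy to "off" (Start and stop manually).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def disable_ssh_on_all_hosts(vc_host: str, user: str, password: str) -> None:
    ctx = ssl._create_unverified_context()          # lab use only: skip cert validation
    si = SmartConnect(host=vc_host, user=user, pwd=password, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            svc_system = host.configManager.serviceSystem
            # "off" corresponds to the "Start and stop manually" policy.
            svc_system.UpdateServicePolicy(id="TSM-SSH", policy="off")
            ssh = next(s for s in svc_system.serviceInfo.service if s.key == "TSM-SSH")
            if ssh.running:
                svc_system.StopService(id="TSM-SSH")
            print(f"{host.name}: SSH stopped, policy set to manual start")
        view.Destroy()
    finally:
        Disconnect(si)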
The configuration tasks for most design requirements and recommendations are automated
in VMware Cloud Foundation. You must perform the configuration manually only for a limited
number of decisions as noted in the design implications.
For full design details, see vCenter Server Design for VMware Cloud Foundation.
Table 15-17. vCenter Server Design Requirements for VMware Cloud Foundation (continued)
Table 15-18. vCenter Server Design Recommendations for VMware Cloud Foundation
VCF-VCS-RCMD-CFG-002
Design Recommendation: Deploy a vCenter Server appliance with the appropriate storage size.
Justification: Ensures resource availability and usage efficiency per workload domain.
Implication: The default size for a management domain is Small and for VI workload domains is Medium. To override these values, you must use the API.

VCF-VCS-RCMD-CFG-003
Design Recommendation: Protect workload domain vCenter Server appliances by using vSphere HA.
Justification: vSphere HA is the only supported method to protect vCenter Server availability in VMware Cloud Foundation.
Implication: vCenter Server becomes unavailable during a vSphere HA failover.

VCF-VCS-RCMD-CFG-004
Design Recommendation: In vSphere HA, set the restart priority policy for the vCenter Server appliance to high.
Justification: vCenter Server is the management and control plane for physical and virtual infrastructure. In a vSphere HA event, to ensure the rest of the SDDC management stack comes up faultlessly, the workload domain vCenter Server must be available first, before the other management components come online.
Implication: If the restart priority for another virtual machine is set to highest, the connectivity delay for the management components will be longer.
Table 15-19. vCenter Server Design Recommendations for vSAN Stretched Clusters with VMware
Cloud Foundation
VCF-VCS-REQD-SSO-STD-001
Design Requirement: Join all vCenter Server instances within a VMware Cloud Foundation instance to a single vCenter Single Sign-On domain.
Justification: When all vCenter Server instances are in the same vCenter Single Sign-On domain, they can share authentication and license data across all components.
Implication: Only one vCenter Single Sign-On domain exists. The number of linked vCenter Server instances in the same vCenter Single Sign-On domain is limited to 15 instances. Because each workload domain uses a dedicated vCenter Server instance, you can deploy up to 15 domains within each VMware Cloud Foundation instance.
Table 15-21. Design Requirements for Multiple vCenter Server Instance - Multiple vCenter Single
Sign-On Domain Topology for VMware Cloud Foundation
VCF-VCS-REQD-SSO-ISO-001
Design Requirement: Create all vCenter Server instances within a VMware Cloud Foundation instance in their own unique vCenter Single Sign-On domains.
Justification: Enables isolation at the vCenter Single Sign-On domain layer for increased security separation. Supports up to 25 workload domains.
Implication: Each vCenter Server instance is managed through its own pane of glass using a different set of administrative credentials. You must manage password rotation for each vCenter Single Sign-On domain separately.
For full design details, see Logical vSphere Cluster Design for VMware Cloud Foundation.
Table 15-22. vSphere Cluster Design Requirements for VMware Cloud Foundation
Table 15-22. vSphere Cluster Design Requirements for VMware Cloud Foundation (continued)
Table 15-23. vSphere Cluster Design Requirements for vSAN Stretched Clusters with VMware
Cloud Foundation
VCF-CLS-REQD-CFG-007
Design Requirement: Enable the Override default gateway for this adapter setting on the vSAN VMkernel adapters on all ESXi hosts.
Justification: Enables routing the vSAN data traffic through the vSAN network gateway rather than through the management gateway.
Implication: vSAN networks across availability zones must have a route to each other.

VCF-CLS-REQD-CFG-008
Design Requirement: Create a host group for each availability zone and add the ESXi hosts in the zone to the respective group.
Justification: Makes it easier to manage which virtual machines run in which availability zone.
Implication: You must create and maintain VM-Host DRS group rules.
Table 15-24. vSphere Cluster Design Recommendations for VMware Cloud Foundation
VCF-CLS-RCMD-CFG-001
Design Recommendation: Use vSphere HA to protect all virtual machines against failures.
Justification: vSphere HA supports a robust level of protection for both ESXi host and virtual machine availability.
Implication: You must provide sufficient resources on the remaining hosts so that virtual machines can be restarted on those hosts in the event of a host outage.

VCF-CLS-RCMD-CFG-002
Design Recommendation: Set host isolation response to Power Off and restart VMs in vSphere HA.
Justification: vSAN requires that the host isolation response be set to Power Off and to restart virtual machines on available ESXi hosts.
Implication: If a false positive event occurs, virtual machines are powered off and an ESXi host is declared isolated incorrectly.
VCF-CLS-RCMD-CFG-005
Design Recommendation: Set the advanced cluster setting das.iostatsinterval to 0 to deactivate monitoring the storage and network I/O activities of the management appliances.
Justification: Enables triggering a restart of a management appliance when an OS failure occurs and heartbeats are not received from VMware Tools, instead of waiting additionally for the I/O check to complete.
Implication: If you want to specifically enable I/O monitoring, you must configure the das.iostatsinterval advanced setting.
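Both vSphere HA settings above, the Power Off isolation response from VCF-CLS-RCMD-CFG-002 and das.iostatsinterval from VCF-CLS-RCMD-CFG-005, live in the cluster HA configuration. The following is a hedged pyVmomi sketch; it assumes the cluster object has already been retrieved with a container view as in the earlier ESXi example and that vSphere HA is already enabled on the cluster.

```python
# Illustrative pyVmomi sketch: set the HA host isolation response and the
# das.iostatsinterval advanced option on an existing cluster.
from pyVmomi import vim

def tune_ha_settings(cluster: vim.ClusterComputeResource) -> vim.Task:
    das_config = vim.cluster.DasConfigInfo()
    das_config.defaultVmSettings = vim.cluster.DasVmSettings(
        isolationResponse="powerOff")                 # power off and restart VMs
    das_config.option = [
        vim.OptionValue(key="das.iostatsinterval", value="0")]  # disable I/O heartbeat check
    spec = vim.cluster.ConfigSpecEx(dasConfig=das_config)
    # modify=True merges these settings into the existing cluster configuration.
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)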
VCF-CLS-RCMD-CFG-006
Design Recommendation: Enable vSphere DRS on all clusters, using the default fully automated mode with medium threshold.
Justification: Provides the best trade-off between load balancing and unnecessary migrations with vSphere vMotion.
Implication: If a vCenter Server outage occurs, the mapping from virtual machines to ESXi hosts might be difficult to determine.
Table 15-24. vSphere Cluster Design Recommendations for VMware Cloud Foundation
(continued)
VCF-CLS-RCMD-CFG-007
Design Recommendation: Enable Enhanced vMotion Compatibility (EVC) on all clusters in the management domain.
Justification: Supports cluster upgrades without virtual machine downtime.
Implication: You must enable EVC only if the clusters contain hosts with CPUs from the same vendor. You must enable EVC on the default management domain cluster during bringup.

VCF-CLS-RCMD-CFG-008
Design Recommendation: Set the cluster EVC mode to the highest available baseline that is supported for the lowest CPU architecture on the hosts in the cluster.
Justification: Supports cluster upgrades without virtual machine downtime.
Implication: None.
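If you manage EVC outside the vSphere Client, the cluster EVC manager exposes the same operation. An illustrative pyVmomi sketch follows; the mode key shown is an example only, and the highest supported baseline for your hosts should be taken from the supported mode list.

```python
# Illustrative pyVmomi sketch: enable an EVC baseline on a cluster.
from pyVmomi import vim

def enable_evc(cluster: vim.ClusterComputeResource,
               evc_mode_key: str = "intel-broadwell") -> vim.Task:   # example key only
    evc_manager = cluster.EvcManager()
    # Keys supported by the current hosts are listed in evcState.supportedEVCMode.
    supported = [m.key for m in evc_manager.evcState.supportedEVCMode]
    if evc_mode_key not in supported:
        raise ValueError(f"{evc_mode_key} not supported; choose one of {supported}")
    return evc_manager.ConfigureEvcMode_Task(evc_mode_key)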
VCF-CLS-RCMD-LCM-001
Design Recommendation: Use images as the life cycle management method for VI workload domains.
Justification: vSphere Lifecycle Manager images simplify the management of firmware and vendor add-ons.
Implication: An initial cluster image is required during workload domain or cluster deployment.
Table 15-25. vSphere Cluster Design Recommendations for vSAN Stretched Clusters with
VMware Cloud Foundation
VCF-CLS-RCMD-CFG-009
Design Recommendation: Increase the admission control percentage to half of the ESXi hosts in the cluster.
Justification: Allocating only half of a stretched cluster ensures that all VMs have enough resources if an availability zone outage occurs.
Implication: In a cluster of 8 ESXi hosts, the resources of only 4 ESXi hosts are available for use. If you add more ESXi hosts to the default management cluster, add them in pairs, one per availability zone.
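For a stretched cluster, this recommendation equates to reserving 50% of CPU and memory failover capacity in admission control. An illustrative pyVmomi sketch under the same assumptions as the earlier cluster examples:

```python
# Illustrative pyVmomi sketch: reserve half of the cluster CPU and memory for failover.
from pyVmomi import vim

def set_admission_control_half(cluster: vim.ClusterComputeResource) -> vim.Task:
    policy = vim.cluster.FailoverResourcesAdmissionControlPolicy(
        cpuFailoverResourcesPercent=50,
        memoryFailoverResourcesPercent=50)
    das_config = vim.cluster.DasConfigInfo(
        admissionControlEnabled=True,
        admissionControlPolicy=policy)
    spec = vim.cluster.ConfigSpecEx(dasConfig=das_config)
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)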
VCF-CLS-RCMD-CFG-010
Design Recommendation: Create a virtual machine group for each availability zone and add the VMs in the zone to the respective group.
Justification: Ensures that virtual machines are located only in the assigned availability zone to avoid unnecessary vSphere vMotion migrations.
Implication: You must add virtual machines to the allocated group manually.

VCF-CLS-RCMD-CFG-011
Design Recommendation: Create a should-run-on-hosts-in-group VM-Host affinity rule to run each group of virtual machines on the respective group of hosts in the same availability zone.
Justification: Ensures that virtual machines are located only in the assigned availability zone to avoid unnecessary vSphere vMotion migrations.
Implication: You must manually create the rules.
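The host group, VM group, and should-run rule from VCF-CLS-REQD-CFG-008, VCF-CLS-RCMD-CFG-010, and VCF-CLS-RCMD-CFG-011 can be created in a single cluster reconfiguration. An illustrative pyVmomi sketch; the group and rule names are placeholders, and az1_hosts and az1_vms are lists of vim.HostSystem and vim.VirtualMachine objects for the first availability zone.

```python
# Illustrative pyVmomi sketch: AZ1 host group, VM group, and should-run rule.
from pyVmomi import vim

def create_az1_affinity(cluster, az1_hosts, az1_vms) -> vim.Task:
    host_group = vim.cluster.GroupSpec(
        operation="add",
        info=vim.cluster.HostGroup(name="az1-host-group", host=az1_hosts))
    vm_group = vim.cluster.GroupSpec(
        operation="add",
        info=vim.cluster.VmGroup(name="az1-vm-group", vm=az1_vms))
    rule = vim.cluster.RuleSpec(
        operation="add",
        info=vim.cluster.VmHostRuleInfo(
            name="az1-vms-should-run-in-az1",
            enabled=True,
            mandatory=False,                       # "should", not "must"
            vmGroupName="az1-vm-group",
            affineHostGroupName="az1-host-group"))
    spec = vim.cluster.ConfigSpecEx(groupSpec=[host_group, vm_group], rulesSpec=[rule])
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)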
For full design details, see vSphere Networking Design for VMware Cloud Foundation.
Table 15-26. vSphere Networking Design Recommendations for VMware Cloud Foundation
VCF-VDS-RCMD-CFG-001
Design Recommendation: Use a single vSphere Distributed Switch per cluster.
Justification: Reduces the complexity of the network design. Reduces the size of the fault domain.
Implication: Increases the number of vSphere Distributed Switches that must be managed.
VCF-VDS-RCMD-CFG-002
Design Recommendation: Configure the MTU size of the vSphere Distributed Switch to 9000 for jumbo frames.
Justification: Supports the MTU size required by system traffic types. Improves traffic throughput.
Implication: When adjusting the MTU packet size, you must also configure the entire network path (VMkernel ports, virtual switches, physical switches, and routers) to support the same MTU packet size.
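Setting the switch MTU is a single property change on the distributed switch. An illustrative pyVmomi sketch follows; remember that the physical network path must also be configured for the same MTU, as noted in the implication above.

```python
# Illustrative pyVmomi sketch: set the MTU of an existing vSphere Distributed
# Switch to 9000. `dvs` is a vim.dvs.VmwareDistributedVirtualSwitch object
# retrieved with a container view as in the earlier examples.
from pyVmomi import vim

def set_vds_jumbo_frames(dvs: vim.dvs.VmwareDistributedVirtualSwitch) -> vim.Task:
    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    spec.configVersion = dvs.config.configVersion   # required for optimistic locking
    spec.maxMtu = 9000
    return dvs.ReconfigureDvs_Task(spec)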
VCF-VDS-RCMD-DPG-001
Design Recommendation: Use ephemeral port binding for the Management VM port group.
Justification: Using ephemeral port binding provides the option for recovery of the vCenter Server instance that is managing the distributed switch.
Implication: Port-level permissions and controls are lost across power cycles, and no historical context is saved.

VCF-VDS-RCMD-DPG-002
Design Recommendation: Use static port binding for all non-management port groups.
Justification: Static binding ensures a virtual machine connects to the same port on the vSphere Distributed Switch. This allows for historical data and port level monitoring.
Implication: None.
Table 15-26. vSphere Networking Design Recommendations for VMware Cloud Foundation
(continued)
VCF-VDS-RCMD-NIO-001
Design Recommendation: Enable Network I/O Control on the vSphere Distributed Switch of the management domain cluster.
Justification: Increases resiliency and performance of the network.
Implication: Network I/O Control might impact network performance for critical traffic types if misconfigured.

VCF-VDS-RCMD-NIO-002
Design Recommendation: Set the share value for management traffic to Normal.
Justification: By keeping the default setting of Normal, management traffic is prioritized higher than vSphere vMotion but lower than vSAN traffic. Management traffic is important because it ensures that the hosts can still be managed during times of network contention.
Implication: None.

VCF-VDS-RCMD-NIO-003
Design Recommendation: Set the share value for vSphere vMotion traffic to Low.
Justification: During times of network contention, vSphere vMotion traffic is not as important as virtual machine or storage traffic.
Implication: During times of network contention, vMotion takes longer than usual to complete.

VCF-VDS-RCMD-NIO-004
Design Recommendation: Set the share value for virtual machines to High.
Justification: Virtual machines are the most important asset in the SDDC. Leaving the default setting of High ensures that they always have access to the network resources they need.
Implication: None.
Table 15-26. vSphere Networking Design Recommendations for VMware Cloud Foundation
(continued)
VCF-VDS-RCMD-NIO-006
Design Recommendation: Set the share value for other traffic types to Low.
Justification: By default, VMware Cloud Foundation does not use other traffic types, like vSphere FT traffic. Hence, these traffic types can be set to the lowest priority.
Implication: None.
For full design details, see Chapter 8 NSX Design for VMware Cloud Foundation.
VCF-NSX-LM-REQD-CFG-002
Design Requirement: Deploy three NSX Manager nodes in the default vSphere cluster in the management domain for configuring and managing the network services for the workload domain.
Justification: Supports high availability of the NSX Manager cluster.
Implication: You must have sufficient resources in the default cluster of the management domain to run three NSX Manager nodes.
Table 15-28. NSX Manager Design Recommendations for VMware Cloud Foundation
VCF-NSX-LM-RCMD-CFG-001
Design Recommendation: Deploy appropriately sized nodes in the NSX Manager cluster for the workload domain.
Justification: Ensures resource availability and usage efficiency per workload domain.
Implication: The default size for a management domain is Medium, and for VI workload domains is Large.
VCF-NSX-LM-RCMD-CFG-002
Design Recommendation: Create a virtual IP (VIP) address for the NSX Manager cluster for the workload domain.
Justification: Provides high availability of the user interface and API of NSX Manager.
Implication: The VIP address feature provides high availability only. It does not load-balance requests across the cluster. When using the VIP address feature, all NSX Manager nodes must be deployed on the same Layer 2 network.
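In VMware Cloud Foundation the cluster VIP is normally configured for you, so the most common task is verifying it. The following is a hedged sketch against the NSX Manager API; the manager FQDN and credentials are placeholders, and certificate verification is disabled only for illustration.

```python
# Illustrative check: read the NSX Manager cluster virtual IP through the NSX API.
import requests

def get_nsx_cluster_vip(manager: str, user: str, password: str) -> str:
    url = f"https://{manager}/api/v1/cluster/api-virtual-ip"
    resp = requests.get(url, auth=(user, password), verify=False)
    resp.raise_for_status()
    return resp.json().get("ip_address", "0.0.0.0")

if __name__ == "__main__":
    # Placeholder FQDN and credentials for your environment.
    print(get_nsx_cluster_vip("sfo-m01-nsx01.rainpole.io", "admin", "<password>"))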
VCF-NSX-LM-RCMD-CFG-003
Design Recommendation: Apply VM-VM anti-affinity rules in vSphere Distributed Resource Scheduler (vSphere DRS) to the NSX Manager appliances.
Justification: Keeps the NSX Manager appliances running on different ESXi hosts for high availability.
Implication: You must allocate at least four physical hosts so that the three NSX Manager appliances continue running if an ESXi host failure occurs.

VCF-NSX-LM-RCMD-CFG-004
Design Recommendation: In vSphere HA, set the restart priority policy for each NSX Manager appliance to high.
Justification: NSX Manager implements the control plane for virtual network segments. vSphere HA restarts the NSX Manager appliances first so that other virtual machines that are being powered on or migrated by using vSphere vMotion while the control plane is offline lose connectivity only until the control plane quorum is re-established. Setting the restart priority to high reserves the highest priority for flexibility for adding services that must be started before NSX Manager.
Implication: If the restart priority for another management appliance is set to highest, the connectivity delay for management appliances will be longer.
Table 15-29. NSX Manager Design Recommendations for Stretched Clusters in VMware Cloud
Foundation
Table 15-30. NSX Global Manager Design Requirements for VMware Cloud Foundation
Table 15-31. NSX Global Manager Design Recommendations for VMware Cloud Foundation
VCF-NSX-GM-RCMD-CFG-001
Design Recommendation: Deploy three NSX Global Manager nodes for the workload domain to support NSX Federation across VMware Cloud Foundation instances.
Justification: Provides high availability for the NSX Global Manager cluster.
Implication: You must have sufficient resources in the default cluster of the management domain to run three NSX Global Manager nodes.

VCF-NSX-GM-RCMD-CFG-003
Design Recommendation: Create a virtual IP (VIP) address for the NSX Global Manager cluster for the workload domain.
Justification: Provides high availability of the user interface and API of NSX Global Manager.
Implication: The VIP address feature provides high availability only. It does not load-balance requests across the cluster. When using the VIP address feature, all NSX Global Manager nodes must be deployed on the same Layer 2 network.
Table 15-31. NSX Global Manager Design Recommendations for VMware Cloud Foundation
(continued)
VCF-NSX-GM-RCMD-CFG-004
Design Recommendation: Apply VM-VM anti-affinity rules in vSphere DRS to the NSX Global Manager appliances.
Justification: Keeps the NSX Global Manager appliances running on different ESXi hosts for high availability.
Implication: You must allocate at least four physical hosts so that the three NSX Global Manager appliances continue running if an ESXi host failure occurs.

VCF-NSX-GM-RCMD-CFG-005
Design Recommendation: In vSphere HA, set the restart priority policy for each NSX Global Manager appliance to medium.
Justification: NSX Global Manager implements the management plane for global segments and firewalls. NSX Global Manager is not required for control plane and data plane connectivity. Setting the restart priority to medium reserves the high priority for services that impact the NSX control or data planes.
Implication: Management of NSX global components will be unavailable until the NSX Global Manager virtual machines restart. The NSX Global Manager cluster is deployed in the management domain, where the total number of virtual machines is limited and where it competes with other management components for restart priority.
Table 15-31. NSX Global Manager Design Recommendations for VMware Cloud Foundation
(continued)
VCF-NSX-GM-RCMD-CFG-007
Design Recommendation: Set the NSX Global Manager cluster in the second VMware Cloud Foundation instance as standby for the workload domain.
Justification: Enables recoverability of NSX Global Manager in the second VMware Cloud Foundation instance if a failure in the first instance occurs.
Implication: Must be done manually.
Table 15-32. NSX Global Manager Design Recommendations for Stretched Clusters in VMware
Cloud Foundation
VCF-NSX-GM-RCMD-CFG-008
Design Recommendation: Add the NSX Global Manager appliances to the virtual machine group for the first availability zone.
Justification: Ensures that, by default, the NSX Global Manager appliances are powered on a host in the primary availability zone.
Implication: Done automatically by VMware Cloud Foundation when stretching a cluster.
Table 15-33. NSX Edge Design Requirements for VMware Cloud Foundation (continued)
VCF-NSX-EDGE-REQD-CFG-003
Design Requirement: Use a dedicated VLAN for edge overlay that is different from the host overlay VLAN.
Justification: A dedicated edge overlay network provides support for edge mobility in support of advanced deployments such as multiple availability zones or multi-rack clusters.
Implication: You must have routing between the VLANs for edge overlay and host overlay. You must allocate another VLAN in the data center infrastructure for edge overlay.
Table 15-34. NSX Edge Design Requirements for NSX Federation in VMware Cloud Foundation
VCF-NSX-EDGE-REQD-CFG-005
Design Requirement: Allocate a separate VLAN for edge RTEP overlay that is different from the edge overlay VLAN.
Justification: The RTEP network must be on a VLAN that is different from the edge overlay VLAN. This is an NSX requirement that provides support for configuring different MTU sizes per network.
Implication: You must allocate another VLAN in the data center infrastructure.
Table 15-35. NSX Edge Design Recommendations for VMware Cloud Foundation
VCF-NSX-EDGE-RCMD-CFG-001
Design Recommendation: Use appropriately sized NSX Edge virtual appliances.
Justification: Ensures resource availability and usage efficiency per workload domain.
Implication: You must provide sufficient compute resources to support the chosen appliance size.

VCF-NSX-EDGE-RCMD-CFG-002
Design Recommendation: Deploy the NSX Edge virtual appliances to the default vSphere cluster of the workload domain, sharing the cluster between the workloads and the edge appliances.
Justification: Simplifies the configuration and minimizes the number of ESXi hosts required for initial deployment.
Implication: Workloads and NSX Edges share the same compute resources.

VCF-NSX-EDGE-RCMD-CFG-003
Design Recommendation: Deploy two NSX Edge appliances in an edge cluster in the default vSphere cluster of the workload domain.
Justification: Creates the minimum size NSX Edge cluster while satisfying the requirements for availability.
Implication: For a VI workload domain, additional edge appliances might be required to satisfy increased bandwidth requirements.

VCF-NSX-EDGE-RCMD-CFG-004
Design Recommendation: Apply VM-VM anti-affinity rules for vSphere DRS to the virtual machines of the NSX Edge cluster.
Justification: Keeps the NSX Edge nodes running on different ESXi hosts for high availability.
Implication: None.

VCF-NSX-EDGE-RCMD-CFG-005
Design Recommendation: In vSphere HA, set the restart priority policy for each NSX Edge appliance to high.
Justification: The NSX Edge nodes are part of the north-south data path for overlay segments. vSphere HA restarts the NSX Edge appliances first to minimize the time an edge VM is offline. Setting the restart priority to high reserves the highest priority for future needs.
Implication: If the restart priority for another VM in the cluster is set to highest, the connectivity delays for edge appliances will be longer.
Table 15-35. NSX Edge Design Recommendations for VMware Cloud Foundation (continued)
Table 15-36. NSX Edge Design Recommendations for Stretched Clusters in VMware Cloud
Foundation
VCF-NSX-BGP-REQD-CFG-001
Design Requirement: To enable ECMP between the Tier-0 gateway and the Layer 3 devices (ToR switches or upstream devices), create two VLANs. The ToR switches or upstream Layer 3 devices have an SVI on one of the two VLANs, and each NSX Edge node in the cluster has an interface on each VLAN.
Justification: Supports multiple equal-cost routes on the Tier-0 gateway and provides more resiliency and better bandwidth use in the network.
Implication: Additional VLANs are required.

VCF-NSX-BGP-REQD-CFG-003
Design Requirement: Create a VLAN transport zone for edge uplink traffic.
Justification: Enables the configuration of VLAN segments on the N-VDS in the edge nodes.
Implication: Additional VLAN transport zones might be required if the edge nodes are not connected to the same top of rack switch pair.

VCF-NSX-BGP-REQD-CFG-004
Design Requirement: Deploy a Tier-1 gateway and connect it to the Tier-0 gateway.
Justification: Creates a two-tier routing architecture. Abstracts the NSX logical components which interact with the physical data center from the logical components which provide SDN services.
Implication: A Tier-1 gateway can only be connected to a single Tier-0 gateway. In cases where multiple Tier-0 gateways are required, you must create multiple Tier-1 gateways.
Table 15-38. BGP Routing Design Requirements for Stretched Clusters in VMware Cloud
Foundation
VCF-NSX-BGP-REQD-CFG-006
Design Requirement: Extend the uplink VLANs to the top of rack switches so that the VLANs are stretched between both availability zones.
Justification: Because the NSX Edge nodes will fail over between the availability zones, ensures uplink connectivity to the top of rack switches in both availability zones regardless of the zone the NSX Edge nodes are presently in.
Implication: You must configure a stretched Layer 2 network between the availability zones by using physical network infrastructure.

VCF-NSX-BGP-REQD-CFG-007
Design Requirement: Provide this SVI configuration on the top of rack switches:
- In the second availability zone, configure the top of rack switches or upstream Layer 3 devices with an SVI on each of the two uplink VLANs.
- Make the top of rack switch SVIs in both availability zones part of a common stretched Layer 2 network between the availability zones.
Justification: Enables the communication of the NSX Edge nodes to the top of rack switches in both availability zones over the same uplink VLANs.
Implication: You must configure a stretched Layer 2 network between the availability zones by using the physical network infrastructure.

VCF-NSX-BGP-REQD-CFG-008
Design Requirement: Provide this VLAN configuration:
- Use two VLANs to enable ECMP between the Tier-0 gateway and the Layer 3 devices (top of rack switches or leaf switches).
- The ToR switches or upstream Layer 3 devices have an SVI to one of the two VLANs and each NSX Edge node has an interface to each VLAN.
Justification: Supports multiple equal-cost routes on the Tier-0 gateway, and provides more resiliency and better bandwidth use in the network.
Implication: Extra VLANs are required. Requires stretching uplink VLANs between availability zones.
Table 15-38. BGP Routing Design Requirements for Stretched Clusters in VMware Cloud
Foundation (continued)
VCF-NSX-BGP-REQD-CFG-009
Design Requirement: Create an IP prefix list that permits access to route advertisement by any network instead of using the default IP prefix list.
Justification: Used in a route map to prepend a path to one or more autonomous systems (AS-path prepend) for BGP neighbors in the second availability zone.
Implication: You must manually create an IP prefix list that is identical to the default one.

VCF-NSX-BGP-REQD-CFG-010
Design Requirement: Create a route map-out that contains the custom IP prefix list and an AS-path prepend value set to the Tier-0 local AS added twice.
Justification: Used for configuring neighbor relationships with the Layer 3 devices in the second availability zone. Ensures that all ingress traffic passes through the first availability zone.
Implication: You must manually create the route map. The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or in the availability zone occurs.

VCF-NSX-BGP-REQD-CFG-011
Design Requirement: Create an IP prefix list that permits access to route advertisement by network 0.0.0.0/0 instead of using the default IP prefix list.
Justification: Used in a route map to configure the local preference on the learned default route for BGP neighbors in the second availability zone.
Implication: You must manually create an IP prefix list that is identical to the default one.
Table 15-38. BGP Routing Design Requirements for Stretched Clusters in VMware Cloud
Foundation (continued)
VCF-NSX-BGP-REQD-CFG-012
Design Requirement: Apply a route map-in that contains the IP prefix list for the default route 0.0.0.0/0, and assign a lower local-preference, for example, 80, to the learned default route and a lower local-preference, for example, 90, to any other learned routes.
Justification: Used for configuring neighbor relationships with the Layer 3 devices in the second availability zone. Ensures that all egress traffic passes through the first availability zone.
Implication: You must manually create the route map. The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or in the availability zone occurs.

VCF-NSX-BGP-REQD-CFG-013
Design Requirement: Configure the neighbors of the second availability zone to use the route maps as In and Out filters respectively.
Justification: Makes the path in and out of the second availability zone less preferred because the AS path is longer and the local preference is lower. As a result, all traffic passes through the first zone.
Implication: The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or in the availability zone occurs.
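To make the intent of VCF-NSX-BGP-REQD-CFG-009 and CFG-010 concrete, the following is a hedged sketch against the NSX Policy API that creates a permit-any prefix list and an outbound route map with a double AS-path prepend. All names, IDs, the AS number, and credentials are placeholders, and the payload fields and paths should be validated against your NSX version before use.

```python
# Hedged sketch: prefix list and AS-path-prepend route map for the second AZ.
import requests

NSX = "https://sfo-m01-nsx01.rainpole.io"      # placeholder NSX Manager VIP
AUTH = ("admin", "<password>")                 # placeholder credentials
T0 = "sfo-m01-ec01-t0-gw01"                    # placeholder Tier-0 gateway ID
LOCAL_AS = "65100"                             # placeholder Tier-0 local AS

def patch(path: str, body: dict) -> None:
    resp = requests.patch(f"{NSX}/policy/api/v1{path}", json=body, auth=AUTH, verify=False)
    resp.raise_for_status()

# IP prefix list permitting any network (equivalent to the default list).
patch(f"/infra/tier-0s/{T0}/prefix-lists/az2-any-prefix-list",
      {"display_name": "az2-any-prefix-list",
       "prefixes": [{"network": "ANY", "action": "PERMIT"}]})

# Outbound route map: prepend the local AS twice for routes advertised to AZ2 neighbors.
patch(f"/infra/tier-0s/{T0}/route-maps/az2-as-prepend-out",
      {"display_name": "az2-as-prepend-out",
       "entries": [{
           "action": "PERMIT",
           "prefix_list_matches": [f"/infra/tier-0s/{T0}/prefix-lists/az2-any-prefix-list"],
           "set": {"as_path_prepend": f"{LOCAL_AS} {LOCAL_AS}"}}]})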
Table 15-39. BGP Routing Design Requirements for NSX Federation in VMware Cloud
Foundation
VCF-NSX-BGP-REQD-CFG-014
Design Requirement: Extend the Tier-0 gateway to the second VMware Cloud Foundation instance.
Justification: Supports ECMP north-south routing on all nodes in the NSX Edge cluster. Enables support for cross-instance Tier-1 gateways and cross-instance network segments.
Implication: The Tier-0 gateway deployed in the second instance is removed.
Table 15-39. BGP Routing Design Requirements for NSX Federation in VMware Cloud
Foundation (continued)
VCF-NSX-BGP-REQD-CFG-018
Design Requirement: Assign the NSX Edge cluster in each VMware Cloud Foundation instance to the stretched Tier-1 gateway. Set the first VMware Cloud Foundation instance as primary and the second instance as secondary.
Justification:
- Enables cross-instance network span between the first and second VMware Cloud Foundation instances.
- Enables deterministic ingress and egress traffic for the cross-instance network.
- If a VMware Cloud Foundation instance failure occurs, enables deterministic failover of the Tier-1 traffic flow.
- During the recovery of the inaccessible VMware Cloud Foundation instance, enables deterministic failback of the Tier-1 traffic flow, preventing unintended asymmetrical routing.
- Eliminates the need to use BGP attributes in the first and second VMware Cloud Foundation instances to influence location preference and failover.
Implication: You must manually fail over and fail back the cross-instance network from the standby NSX Global Manager.
VCF-NSX-BGP-REQD-CFG-019
Design Requirement: Assign the NSX Edge cluster in each VMware Cloud Foundation instance to the local Tier-1 gateway for that VMware Cloud Foundation instance.
Justification: Enables instance-specific networks to be isolated to their specific instances. Enables deterministic flow of ingress and egress traffic for the instance-specific networks.
Implication: You can use the service router that is created for the Tier-1 gateway for networking services. However, such configuration is not required for network connectivity.

VCF-NSX-BGP-REQD-CFG-020
Design Requirement: Set each local Tier-1 gateway only as primary in that instance. Avoid setting the gateway as secondary in the other instances.
Justification: Prevents the need to use BGP attributes in primary and secondary instances to influence the instance ingress-egress preference.
Implication: None.
Table 15-40. BGP Routing Design Recommendations for VMware Cloud Foundation
VCF-NSX-BGP-RCMD-CFG-002
Design Recommendation: Configure the BGP Keep Alive Timer to 4 and the Hold Down Timer to 12 or lower between the top of rack switches and the Tier-0 gateway.
Justification: Provides a balance between failure detection between the top of rack switches and the Tier-0 gateway, and overburdening the top of rack switches with keep-alive traffic.
Implication: By using longer timers to detect if a router is not responding, the data about such a router remains in the routing table longer. As a result, the active router continues to send traffic to a router that is down. These timers must be aligned with the data center fabric design of your organization.
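The timer values map directly onto the Tier-0 BGP neighbor configuration. The following is a hedged sketch that patches an existing neighbor through the NSX Policy API; the gateway, locale-services, and neighbor IDs and the credentials are placeholders, and the path and fields should be validated against your NSX version.

```python
# Hedged sketch: set BGP keep-alive and hold-down timers on an existing Tier-0 neighbor.
import requests

NSX = "https://sfo-m01-nsx01.rainpole.io"      # placeholder NSX Manager VIP
AUTH = ("admin", "<password>")                 # placeholder credentials
T0, LS, NEIGHBOR = "sfo-m01-ec01-t0-gw01", "default", "tor-a"   # placeholders

url = (f"{NSX}/policy/api/v1/infra/tier-0s/{T0}"
       f"/locale-services/{LS}/bgp/neighbors/{NEIGHBOR}")
payload = {"keep_alive_time": 4, "hold_down_time": 12}
resp = requests.patch(url, json=payload, auth=AUTH, verify=False)
resp.raise_for_status()
print("BGP timers updated:", resp.status_code)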
Table 15-40. BGP Routing Design Recommendations for VMware Cloud Foundation (continued)
Table 15-41. BGP Routing Design Recommendations for NSX Federation in VMware Cloud
Foundation
VCF-NSX-OVERLAY-REQD-CFG-004
Design Requirement: Create a single overlay transport zone in the NSX instance for all overlay traffic across the host and NSX Edge transport nodes of the workload domain.
Justification: Ensures that overlay segments are connected to an NSX Edge node for services and north-south routing. Ensures that all segments are available to all ESXi hosts and NSX Edge nodes configured as transport nodes.
Implication: All clusters in all workload domains that share the same NSX Manager share the same transport zone.
Table 15-44. Overlay Design Recommendations for Stretched Clusters in VMware Cloud Foundation
VCF-NSX-OVERLAY-RCMD-CFG-003
Design Recommendation: Configure an NSX sub-transport node profile.
Justification:
- You can use static IP pools for the host TEPs in each availability zone.
- The NSX transport node profile can remain attached when using two separate VLANs for host TEPs at each availability zone, as required for clusters that are based on vSphere Lifecycle Manager images.
- Using an external DHCP server for the host overlay VLANs in both availability zones is not required.
Implication: Changes to the host transport node configuration are done at the vSphere cluster level.
VCF-NSX-AVN-REQD-CFG-001
Design Requirement: Create one cross-instance NSX segment for the components of a VMware Aria Suite application or another solution that requires mobility between VMware Cloud Foundation instances.
Justification: Prepares the environment for the deployment of solutions on top of VMware Cloud Foundation, such as VMware Aria Suite, without a complex physical network configuration. The components of the VMware Aria Suite application must be easily portable between VMware Cloud Foundation instances without requiring reconfiguration.
Implication: Each NSX segment requires a unique IP address space.

VCF-NSX-AVN-REQD-CFG-002
Design Requirement: Create one or more local-instance NSX segments for the components of a VMware Aria Suite application or another solution that are assigned to a specific VMware Cloud Foundation instance.
Justification: Prepares the environment for the deployment of solutions on top of VMware Cloud Foundation, such as VMware Aria Suite, without a complex physical network configuration.
Implication: Each NSX segment requires a unique IP address space.
Table 15-46. Application Virtual Network Design Requirements for NSX Federation in VMware
Cloud Foundation
VCF-NSX-AVN-REQD-CFG-003
Design Requirement: Extend the cross-instance NSX segment to the second VMware Cloud Foundation instance.
Justification: Enables workload mobility without a complex physical network configuration. The components of a VMware Aria Suite application must be easily portable between VMware Cloud Foundation instances without requiring reconfiguration.
Implication: Each NSX segment requires a unique IP address space.

VCF-NSX-AVN-REQD-CFG-004
Design Requirement: In each VMware Cloud Foundation instance, create additional local-instance NSX segments.
Justification: Enables workload mobility within a VMware Cloud Foundation instance without complex physical network configuration. Each VMware Cloud Foundation instance should have network segments to support workloads which are isolated to that VMware Cloud Foundation instance.
Implication: Each NSX segment requires a unique IP address space.
Table 15-47. Application Virtual Network Design Recommendations for VMware Cloud
Foundation
VCF-NSX-LB-REQD-CFG-002
Design Requirement: When creating load balancing services for Application Virtual Networks, connect the standalone Tier-1 gateway to the cross-instance NSX segments.
Justification: Provides load balancing to applications connected to the cross-instance network.
Implication: You must connect the gateway to each network that requires load balancing.
Table 15-49. Load Balancing Design Requirements for NSX Federation in VMware Cloud
Foundation
VCF-NSX-LB-REQD-CFG-005
Design Requirement: Connect the standalone Tier-1 gateway in the second VMware Cloud Foundation instance to the cross-instance NSX segment.
Justification: Provides load balancing to applications connected to the cross-instance network in the second VMware Cloud Foundation instance.
Implication: You must connect the gateway to each network that requires load balancing.
Table 15-49. Load Balancing Design Requirements for NSX Federation in VMware Cloud
Foundation (continued)
For full design details, see Chapter 9 SDDC Manager Design for VMware Cloud Foundation.
Table 15-50. SDDC Manager Design Requirements for VMware Cloud Foundation
Table 15-51. SDDC Manager Design Recommendations for VMware Cloud Foundation
VCF-SDDCMGR-RCMD-CFG-001
Design Recommendation: Connect SDDC Manager to the Internet for downloading software bundles.
Justification: SDDC Manager must be able to download install and upgrade software bundles from a repository for the deployment of VI workload domains and solutions, and for upgrades.
Implication: The rules of your organization might not permit direct access to the Internet. In this case, you must download software bundles for SDDC Manager manually.

VCF-SDDCMGR-RCMD-CFG-002
Design Recommendation: Configure a network proxy to connect SDDC Manager to the Internet.
Justification: Protects SDDC Manager against external attacks from the Internet.
Implication: The proxy must not use authentication because SDDC Manager does not support a proxy with authentication.

VCF-SDDCMGR-RCMD-CFG-003
Design Recommendation: Configure SDDC Manager with a VMware Customer Connect account with VMware Cloud Foundation entitlement to check for and download software bundles.
Justification: Software bundles for VMware Cloud Foundation are stored in a repository that is secured with access controls.
Implication: Requires the use of a VMware Customer Connect user account with access to VMware Cloud Foundation licensing. Sites without an internet connection can use the local upload option instead.
For full design details, see Chapter 10 VMware Aria Suite Lifecycle Design for VMware Cloud
Foundation.
Table 15-52. VMware Aria Suite Lifecycle Design Requirements for VMware Cloud Foundation
VCF-VASL-REQD-CFG-001
Design Requirement: Deploy a VMware Aria Suite Lifecycle instance in the management domain of each VMware Cloud Foundation instance to provide life cycle management for VMware Aria Suite and Workspace ONE Access.
Justification: Provides life cycle management operations for VMware Aria Suite applications and Workspace ONE Access.
Implication: You must ensure that the required resources are available.

VCF-VASL-REQD-CFG-002
Design Requirement: Deploy VMware Aria Suite Lifecycle by using SDDC Manager.
Justification: Deploys VMware Aria Suite Lifecycle in VMware Cloud Foundation mode, which enables the integration with the SDDC Manager inventory for product deployment and life cycle management of VMware Aria Suite components. Automatically configures the standalone Tier-1 gateway required for load balancing the clustered Workspace ONE Access and VMware Aria Suite components.
Implication: None.

VCF-VASL-REQD-CFG-004
Design Requirement: Place the VMware Aria Suite Lifecycle appliance on an overlay-backed (recommended) or VLAN-backed NSX network segment.
Justification: Provides a consistent deployment model for management applications.
Implication: You must use an implementation in NSX to support this networking configuration.

VCF-VASL-REQD-CFG-005
Design Requirement: Import VMware Aria Suite product licenses to the Locker repository for product life cycle operations.
Justification: You can review the validity, details, and deployment usage for the license across the VMware Aria Suite products. You can reference and use licenses during product life cycle operations, such as deployment and license replacement.
Implication: When using the API, you must specify the Locker ID for the license to be used in the JSON payload.
Table 15-52. VMware Aria Suite Lifecycle Design Requirements for VMware Cloud Foundation
(continued)
VCF-VASL-REQD-ENV-001
Design Requirement: Configure datacenter objects in VMware Aria Suite Lifecycle for local and cross-instance VMware Aria Suite deployments and assign the management domain vCenter Server instance to each data center.
Justification: You can deploy and manage the integrated VMware Aria Suite components across the SDDC as a group.
Implication: You must manage a separate datacenter object for the products that are specific to each instance.

VCF-VASL-REQD-ENV-003
Design Requirement: If deploying VMware Aria Operations or VMware Aria Automation, create a cross-instance environment in VMware Aria Suite Lifecycle.
Justification: Supports deployment and management of the integrated VMware Aria Suite products across VMware Cloud Foundation instances as a group. Enables the deployment of instance-specific components, such as VMware Aria Operations remote collectors. In VMware Aria Suite Lifecycle, you can deploy and manage VMware Aria Operations remote collector objects only in an environment that contains the associated cross-instance components.
Implication: You can manage instance-specific components, such as remote collectors, only in an environment that is cross-instance.
Table 15-52. VMware Aria Suite Lifecycle Design Requirements for VMware Cloud Foundation
(continued)
VCF-VASL-REQD-SEC-001
Design Requirement: Use the custom vCenter Server role for VMware Aria Suite Lifecycle that has the minimum privileges required to support the deployment and upgrade of VMware Aria Suite products.
Justification: VMware Aria Suite Lifecycle accesses vSphere with the minimum set of permissions that are required to support the deployment and upgrade of VMware Aria Suite products. SDDC Manager automates the creation of the custom role.
Implication: You must maintain the permissions required by the custom role.

VCF-VASL-REQD-SEC-002
Design Requirement: Use the service account in vCenter Server for application-to-application communication from VMware Aria Suite Lifecycle to vSphere. Assign global permissions using the custom role.
Justification: Provides the following access control features:
- VMware Aria Suite Lifecycle accesses vSphere with the minimum set of required permissions.
- You can introduce improved accountability in tracking request-response interactions between the components of the SDDC.
SDDC Manager automates the creation of the service account.
Implication: You must maintain the life cycle and availability of the service account outside of SDDC Manager password rotation.
Table 15-53. VMware Aria Suite Lifecycle Design Requirements for Stretched Clusters in VMware
Cloud Foundation
VCF-VASL-REQD-CFG-006
Design Requirement: For multiple availability zones, add the VMware Aria Suite Lifecycle appliance to the VM group for the first availability zone.
Justification: Ensures that, by default, the VMware Aria Suite Lifecycle appliance is powered on a host in the first availability zone.
Implication: If VMware Aria Suite Lifecycle is deployed after the creation of the stretched management cluster, you must add the VMware Aria Suite Lifecycle appliance to the VM group manually.
Table 15-54. VMware Aria Suite Lifecycle Design Requirements for NSX Federation in VMware
Cloud Foundation
VCF-VASL-REQD-CFG-007
Design Requirement: Configure the DNS settings for the VMware Aria Suite Lifecycle appliance to use DNS servers in each instance.
Justification: Improves resiliency in the event of an outage of external services for a VMware Cloud Foundation instance.
Implication: As you scale from a deployment with a single VMware Cloud Foundation instance to one with multiple VMware Cloud Foundation instances, the DNS settings of the VMware Aria Suite Lifecycle appliance must be updated.

VCF-VASL-REQD-CFG-008
Design Requirement: Configure the NTP settings for the VMware Aria Suite Lifecycle appliance to use NTP servers in each VMware Cloud Foundation instance.
Justification: Improves resiliency if an outage of external services for a VMware Cloud Foundation instance occurs.
Implication: As you scale from a deployment with a single VMware Cloud Foundation instance to one with multiple VMware Cloud Foundation instances, the NTP settings on the VMware Aria Suite Lifecycle appliance must be updated.
Table 15-55. VMware Aria Suite Lifecycle Design Recommendations for VMware Cloud
Foundation
VCF-VASL-RCMD-LCM-001
Design Recommendation: Obtain product binaries for install, patch, and upgrade in VMware Aria Suite Lifecycle from VMware Customer Connect.
Justification: You can upgrade VMware Aria Suite products based on their general availability and endpoint interoperability rather than being listed as part of the VMware Cloud Foundation bill of materials (BOM). You can deploy and manage binaries in environments that do not allow access to the Internet, such as dark sites.
Implication: The site must have an Internet connection to use VMware Customer Connect. Sites without an Internet connection should use the local upload option instead.
Table 15-55. VMware Aria Suite Lifecycle Design Recommendations for VMware Cloud
Foundation (continued)
VCF-VASL-RCMD-SEC-001
Design Recommendation: Enable integration between VMware Aria Suite Lifecycle and your corporate identity source by using the Workspace ONE Access instance.
Justification: Enables authentication to VMware Aria Suite Lifecycle by using your corporate identity source. Enables authorization through the assignment of organization and cloud services roles to enterprise users and groups defined in your corporate identity source.
Implication: You must deploy and configure Workspace ONE Access to establish the integration between VMware Aria Suite Lifecycle and your corporate identity sources.
VCF-VASL-RCMD-SEC-002
Design Recommendation: Create corresponding security groups in your corporate directory services for the VMware Aria Suite Lifecycle roles: VCF, Content Release Manager, and Content Developer.
Justification: Streamlines the management of VMware Aria Suite Lifecycle roles for users.
Implication: You must create the security groups outside of the SDDC stack. You must set the desired directory synchronization interval in Workspace ONE Access to ensure that changes are available within a reasonable period.
For full design details, see Chapter 11 Workspace ONE Access Design for VMware Cloud
Foundation.
Table 15-56. Workspace ONE Access Design Requirements for VMware Cloud Foundation
VCF-WSA-REQD-SEC-001
Design Requirement: Import certificate authority-signed certificates to the Locker repository for Workspace ONE Access product life cycle operations.
Justification: You can reference and use certificate authority-signed certificates during product life cycle operations, such as deployment and certificate replacement.
Implication: When using the API, you must specify the Locker ID for the certificate to be used in the JSON payload.
Table 15-56. Workspace ONE Access Design Requirements for VMware Cloud Foundation
(continued)
VCF-WSA-REQD-CFG-007
Design Requirement: If using clustered Workspace ONE Access, use the NSX load balancer that is configured by SDDC Manager on a dedicated Tier-1 gateway.
Justification: During the deployment of Workspace ONE Access by using VMware Aria Suite Lifecycle, SDDC Manager automates the configuration of an NSX load balancer for Workspace ONE Access to facilitate scale-out.
Implication: You must use the load balancer that is configured by SDDC Manager and the integration with VMware Aria Suite Lifecycle.
Table 15-57. Workspace ONE Access Design Requirements for Stretched Clusters in VMware
Cloud Foundation
VCF-WSA-REQD-CFG-008
Design Requirement: Add the Workspace ONE Access appliances to the VM group for the first availability zone.
Justification: Ensures that, by default, the Workspace ONE Access cluster nodes are powered on a host in the first availability zone.
Implication: If the Workspace ONE Access instance is deployed after the creation of the stretched management cluster, you must add the appliances to the VM group manually. Clustered Workspace ONE Access might require manual intervention after a failure of the active availability zone occurs.
Table 15-58. Workspace ONE Access Design Requirements for NSX Federation in VMware Cloud
Foundation
VCF-WSA-REQD-CFG-010
Design Requirement: Configure the NTP settings on the Workspace ONE Access cluster nodes to use NTP servers in each VMware Cloud Foundation instance.
Justification: Improves resiliency if an outage of external services for a VMware Cloud Foundation instance occurs.
Implication: If you scale from a deployment with a single VMware Cloud Foundation instance to one with multiple VMware Cloud Foundation instances, the NTP settings on Workspace ONE Access must be updated.
Table 15-59. Workspace ONE Access Design Recommendations for VMware Cloud Foundation
VCF-WSA-RCMD-CFG-001
Design Recommendation: Protect all Workspace ONE Access nodes by using vSphere HA.
Justification: Supports high availability for Workspace ONE Access.
Implication: None for standard deployments. Clustered Workspace ONE Access deployments might require intervention if an ESXi host failure occurs.

VCF-WSA-RCMD-CFG-003
Design Recommendation: When using Active Directory as an Identity Provider, use an Active Directory user account with a minimum of read-only access to Base DNs for users and groups as the service account for the Active Directory bind.
Justification: Provides the following access control features:
- Workspace ONE Access connects to the Active Directory with the minimum set of required permissions to bind and query the directory.
- You can introduce improved accountability in tracking request-response interactions between Workspace ONE Access and Active Directory.
Implication: You must manage the password life cycle of this account. If authentication to more than one Active Directory domain is required, additional accounts are required for the Workspace ONE Access connector to bind to each Active Directory domain over LDAP.
Table 15-59. Workspace ONE Access Design Recommendations for VMware Cloud Foundation
(continued)
VCF-WSA-RCMD-CFG-004
Design Recommendation: Configure the directory synchronization to synchronize only groups required for the integrated SDDC solutions.
Justification: Limits the number of replicated groups required for each product. Reduces the replication interval for group information.
Implication: You must manage the groups from your enterprise directory selected for synchronization to Workspace ONE Access.

VCF-WSA-RCMD-CFG-007
Design Recommendation: Add a filter to the Workspace ONE Access directory settings to exclude users from the directory replication.
Justification: Limits the number of replicated users for Workspace ONE Access within the maximum scale.
Implication: To ensure that replicated user accounts are managed within the maximums, you must define a filtering schema that works for your organization based on your directory attributes.
Table 15-59. Workspace ONE Access Design Recommendations for VMware Cloud Foundation
(continued)
VCF-WSA-RCMD-CFG-008
Design Recommendation: Configure the mapped attributes included when a user is added to the Workspace ONE Access directory.
Justification: You can configure the minimum required and extended user attributes to synchronize directory user accounts for Workspace ONE Access to be used as an authentication source for cross-instance VMware Aria Suite solutions.
Implication: User accounts in your organization's enterprise directory must have the following required attributes mapped: firstName, for example, givenName for Active Directory; lastName, for example, sn for Active Directory.

VCF-WSA-RCMD-CFG-009
Design Recommendation: Configure the Workspace ONE Access directory synchronization frequency to a reoccurring schedule, for example, 15 minutes.
Justification: Ensures that any changes to group memberships in the corporate directory are available for integrated solutions in a timely manner.
Implication: Schedule the synchronization interval to be longer than the time to synchronize from the enterprise directory. If users and groups are being synchronized to Workspace ONE Access when the next synchronization is scheduled, the new synchronization starts immediately after the end of the previous iteration. With this schedule, the process is continuous.
Table 15-59. Workspace ONE Access Design Recommendations for VMware Cloud Foundation
(continued)
VCF-WSA-RCMD-SEC-002
Design Recommendation: Configure a password policy for the Workspace ONE Access local directory users, admin and configadmin.
Justification: You can set a policy for Workspace ONE Access local directory users that addresses your corporate policies and regulatory standards. The password policy is applicable only to the local directory users and does not impact your organization directory.
Implication: You must set the policy in accordance with your organization policies and regulatory standards, as applicable. You must apply the password policy on the Workspace ONE Access cluster nodes.
For full design details, see Chapter 12 Life Cycle Management Design for VMware Cloud
Foundation.
Table 15-60. Life Cycle Management Design Requirements for VMware Cloud Foundation
VCF-LCM-REQD-001
Requirement: Use SDDC Manager to perform the life cycle management of the following components:
- SDDC Manager
- NSX Manager
- NSX Edges
- vCenter Server
- ESXi
Justification: Because the deployment scope of SDDC Manager covers the full VMware Cloud Foundation stack, SDDC Manager performs patching, update, or upgrade of these components across all workload domains.
Implication: The operations team must understand and be aware of the impact of a patch, update, or upgrade operation performed by using SDDC Manager.
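As a hedged illustration of such a pre-check, the following sketch lists the bundles that SDDC Manager has available before a life cycle operation. It assumes the /v1/tokens and /v1/bundles endpoints of the SDDC Manager public API; the host name and credentials are placeholders.

# Sketch: list the upgrade and patch bundles known to SDDC Manager.
# Assumes the SDDC Manager public API endpoints /v1/tokens and /v1/bundles.
import requests

SDDC_MANAGER = "https://sddc-manager.example.local"   # placeholder host name

# Request an access token (credentials are hypothetical).
token = requests.post(
    f"{SDDC_MANAGER}/v1/tokens",
    json={"username": "administrator@vsphere.local", "password": "********"},
    verify=False,   # lab illustration only; use trusted certificates in production
).json()["accessToken"]

bundles = requests.get(
    f"{SDDC_MANAGER}/v1/bundles",
    headers={"Authorization": f"Bearer {token}"},
    verify=False,
).json()

for bundle in bundles.get("elements", []):
    print(bundle.get("description"), bundle.get("version"))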
VCF-LCM-REQD-002
Requirement: Use VMware Aria Suite Lifecycle to manage the life cycle of the following components:
- VMware Aria Suite Lifecycle
- Workspace ONE Access
Justification: VMware Aria Suite Lifecycle automates the life cycle of VMware Aria Suite Lifecycle and Workspace ONE Access.
Implication:
- You must deploy VMware Aria Suite Lifecycle by using SDDC Manager.
- You must manually apply Workspace ONE Access patches, updates, and hotfixes. Patches, updates, and hotfixes for Workspace ONE Access are not generally managed by VMware Aria Suite Lifecycle.
VCF-LCM-RCMD-001
Recommendation: Use vSphere Lifecycle Manager images to manage the life cycle of vSphere clusters.
Justification:
- With vSphere Lifecycle Manager images, firmware updates are carried out through firmware and driver add-ons, which you add to the image you use to manage a cluster.
- You can check the hardware compatibility of the hosts in a cluster against the VMware Compatibility Guide.
- You can validate a vSphere Lifecycle Manager image to check if it applies to all hosts in the cluster. You can also perform a remediation pre-check.
Implication:
- Updating the firmware with images requires an OEM-provided hardware support manager plug-in, which integrates with vSphere Lifecycle Manager.
- An updated vSAN Hardware Compatibility List (vSAN HCL) is required during bring-up.
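The following sketch reads the most recent image compliance result for a cluster through the vSphere Automation REST API. The endpoint paths shown are assumptions based on that API, and the vCenter Server host name, cluster identifier, and credentials are placeholders; treat it as an outline rather than a definitive implementation.

# Sketch: read the last vSphere Lifecycle Manager image compliance result for a
# cluster. Assumes the /api/session and
# /api/esx/settings/clusters/{cluster}/software/compliance endpoints.
import requests

VCENTER = "https://vcenter.example.local"   # placeholder host name

# Create an API session; the response body is the session token string.
session = requests.post(
    f"{VCENTER}/api/session",
    auth=("administrator@vsphere.local", "********"),
    verify=False,   # lab illustration only
).json()

compliance = requests.get(
    f"{VCENTER}/api/esx/settings/clusters/domain-c10/software/compliance",
    headers={"vmware-api-session-id": session},
    verify=False,
).json()

print(compliance.get("status"))   # for example, COMPLIANT or NON_COMPLIANT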
Table 15-61. Life Cycle Management Design Requirements for NSX Federation in VMware Cloud
Foundation
VCF-LCM-REQD-003
Requirement: Use the upgrade coordinator in NSX to perform life cycle management on the NSX Global Manager appliances.
Justification: The version of SDDC Manager in this design is not currently capable of life cycle operations (patching, update, or upgrade) for NSX Global Manager.
Implication:
- You must explicitly plan upgrades of the NSX Global Manager nodes. An upgrade of the NSX Global Manager nodes might require a cascading upgrade of the NSX Local Manager nodes and underlying SDDC Manager infrastructure before upgrading the NSX Global Manager nodes.
- You must always align the version of the NSX Global Manager nodes with the rest of the SDDC stack in VMware Cloud Foundation.
VCF-LCM-REQD-004
Requirement: Establish an operations practice to ensure that, prior to the upgrade of any workload domain, the impact of any version upgrade is evaluated in relation to the need to upgrade NSX Global Manager.
Justification: The versions of the NSX Global Manager and NSX Local Manager nodes must be compatible with each other. Because SDDC Manager does not provide life cycle operations (patching, update, or upgrade) for the NSX Global Manager nodes, an upgrade to an unsupported version cannot be prevented.
Implication: The administrator must establish and follow an operations practice, by using a runbook or automated process, to ensure a fully supported and compliant bill of materials prior to any upgrade operation.
VCF-LCM-REQD-005
Requirement: Establish an operations practice to ensure that, prior to the upgrade of the NSX Global Manager, the impact of any version change is evaluated against the existing NSX Local Manager nodes and workload domains.
Justification: The versions of the NSX Global Manager and NSX Local Manager nodes must be compatible with each other. Because SDDC Manager does not provide life cycle operations (patching, update, or upgrade) for the NSX Global Manager nodes, an upgrade to an unsupported version cannot be prevented.
Implication: The administrator must establish and follow an operations practice, by using a runbook or automated process, to ensure a fully supported and compliant bill of materials prior to any upgrade operation.
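As part of such an operations practice, an administrator might record the running versions of the NSX Global Manager and NSX Local Manager nodes before planning an upgrade. The following sketch does this by using the NSX GET /api/v1/node/version endpoint; the manager host names and credentials are placeholders.

# Sketch: compare NSX Global Manager and NSX Local Manager versions before an
# upgrade. Host names and credentials are hypothetical.
import requests

MANAGERS = {
    "global-manager": "https://nsx-gm01.example.local",
    "local-manager": "https://nsx-lm01.example.local",
}

for name, url in MANAGERS.items():
    version = requests.get(
        f"{url}/api/v1/node/version",
        auth=("admin", "********"),
        verify=False,   # lab illustration only
    ).json()
    print(name, version.get("product_version"))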
For full design details, see Chapter 14 Information Security Design for VMware Cloud Foundation.
Table 15-62. Design Requirements for Account and Password Management for VMware Cloud
Foundation
VCF-ACTMGT-REQD-SEC-001
Requirement: Enable scheduled password rotation in SDDC Manager for all accounts supporting scheduled rotation.
Justification:
- Increases the security posture of your SDDC.
- Simplifies password management across your SDDC management components.
Implication: You must retrieve new passwords by using the API if you must use accounts interactively.
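For example, the following sketch retrieves managed account credentials from SDDC Manager after a rotation by using the SDDC Manager public API. It assumes the /v1/tokens and /v1/credentials endpoints and a resourceName query filter; the host name, credentials, and resource name are placeholders.

# Sketch: retrieve rotated credentials from SDDC Manager for interactive use.
# Assumes the /v1/tokens and /v1/credentials endpoints of the public API.
import requests

SDDC_MANAGER = "https://sddc-manager.example.local"   # placeholder host name

token = requests.post(
    f"{SDDC_MANAGER}/v1/tokens",
    json={"username": "administrator@vsphere.local", "password": "********"},
    verify=False,   # lab illustration only
).json()["accessToken"]

credentials = requests.get(
    f"{SDDC_MANAGER}/v1/credentials",
    headers={"Authorization": f"Bearer {token}"},
    params={"resourceName": "vcenter01.example.local"},   # hypothetical filter
    verify=False,
).json()

for cred in credentials.get("elements", []):
    print(cred.get("username"), cred.get("credentialType"))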
Table 15-63. Certificate Management Design Recommendations for VMware Cloud Foundation
VCF-SDDC-RCMD-SEC-001
Recommendation: Replace the default VMCA-signed or self-signed certificates on all management virtual appliances with certificates that are signed by an internal certificate authority.
Justification: Ensures that the communication to all management components is secure.
Implication: Replacing the default certificates with trusted CA-signed certificates might increase the deployment preparation time because you must generate and submit certificate requests.
VCF-SDDC-RCMD-SEC-002
Recommendation: Use a SHA-2 algorithm or higher for signed certificates.
Justification: The SHA-1 algorithm is considered less secure and has been deprecated.
Implication: Not all certificate authorities support SHA-2 or higher.
VCF-SDDC-RCMD-SEC-003
Recommendation: Perform SSL certificate life cycle management for all management appliances by using SDDC Manager.
Justification: SDDC Manager supports automated SSL certificate life cycle management rather than requiring a series of manual steps.
Implication: Certificate management for NSX Global Manager instances must be done manually.