
VMware Cloud Foundation

Design Guide
07 NOV 2023
VMware Cloud Foundation 5.1

You can find the most up-to-date technical documentation on the VMware by Broadcom website at:

https://docs.vmware.com/

VMware by Broadcom
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com

© Copyright 2021-2023 Broadcom. All Rights Reserved. The term “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. For more information, go to https://www.broadcom.com. All trademarks, trade names, service marks, and logos referenced herein belong to their respective companies.

Contents

About VMware Cloud Foundation Design Guide

1 VMware Cloud Foundation Concepts
    Architecture Models and Workload Domain Types in VMware Cloud Foundation
    VMware Cloud Foundation Topologies
        Single Instance - Single Availability Zone
        Single Instance - Multiple Availability Zones
        Multiple Instances - Single Availability Zone per Instance
        Multiple Instances - Multiple Availability Zones per Instance
    VMware Cloud Foundation Design Blueprints
        Design Blueprint One: Multiple Instance - Multiple Availability Zone
        Design Blueprint Two: Single Instance - Multiple Availability Zones
        Design Blueprint Three: Single Instance - Consolidated

2 Workload Domain Cluster to Rack Mapping in VMware Cloud Foundation

3 Supported Storage Types for VMware Cloud Foundation

4 External Services Design for VMware Cloud Foundation

5 Physical Network Infrastructure Design for VMware Cloud Foundation
    VLANs and Subnets for VMware Cloud Foundation
    Leaf-Spine Physical Network Design Requirements and Recommendations for VMware Cloud Foundation

6 vSAN Design for VMware Cloud Foundation
    Logical Design for vSAN for VMware Cloud Foundation
    Hardware Configuration for vSAN for VMware Cloud Foundation
    Network Design for vSAN for VMware Cloud Foundation
    vSAN Witness Design for VMware Cloud Foundation
    vSAN Design Requirements and Recommendations for VMware Cloud Foundation

7 vSphere Design for VMware Cloud Foundation
    ESXi Design for VMware Cloud Foundation
        Logical Design for ESXi for VMware Cloud Foundation
        Sizing Considerations for ESXi for VMware Cloud Foundation
        ESXi Design Requirements and Recommendations for VMware Cloud Foundation
    vCenter Server Design for VMware Cloud Foundation
        Logical Design for vCenter Server for VMware Cloud Foundation
        Sizing Considerations for vCenter Server for VMware Cloud Foundation
        High Availability Design for vCenter Server for VMware Cloud Foundation
        vCenter Server Design Requirements and Recommendations for VMware Cloud Foundation
        vCenter Single Sign-On Design Requirements for VMware Cloud Foundation
    vSphere Cluster Design for VMware Cloud Foundation
        Logical vSphere Cluster Design for VMware Cloud Foundation
        vSphere Cluster Life Cycle Method Design for VMware Cloud Foundation
        vSphere Cluster Design Requirements and Recommendations for VMware Cloud Foundation
    vSphere Networking Design for VMware Cloud Foundation
        Logical vSphere Networking Design for VMware Cloud Foundation
        vSphere Networking Design Recommendations for VMware Cloud Foundation

8 NSX Design for VMware Cloud Foundation
    Logical Design for NSX for VMware Cloud Foundation
    NSX Manager Design for VMware Cloud Foundation
        Sizing Considerations for NSX Manager for VMware Cloud Foundation
        NSX Manager Design Requirements and Recommendations for VMware Cloud Foundation
        NSX Global Manager Design Requirements and Recommendations for VMware Cloud Foundation
    NSX Edge Node Design for VMware Cloud Foundation
        Deployment Model for the NSX Edge Nodes for VMware Cloud Foundation
        Sizing Considerations for NSX Edges for VMware Cloud Foundation
        Network Design for the NSX Edge Nodes for VMware Cloud Foundation
        NSX Edge Node Requirements and Recommendations for VMware Cloud Foundation
    Routing Design for VMware Cloud Foundation
        BGP Routing Design for VMware Cloud Foundation
        BGP Routing Design Requirements and Recommendations for VMware Cloud Foundation
    Overlay Design for VMware Cloud Foundation
        Logical Overlay Design for VMware Cloud Foundation
        Overlay Design Requirements and Recommendations for VMware Cloud Foundation
    Application Virtual Network Design for VMware Cloud Foundation
        Logical Application Virtual Network Design for VMware Cloud Foundation
        Application Virtual Network Design Requirements and Recommendations for VMware Cloud Foundation
    Load Balancing Design for VMware Cloud Foundation
        Logical Load Balancing Design for VMware Cloud Foundation
        Load Balancing Design Requirements for VMware Cloud Foundation

9 SDDC Manager Design for VMware Cloud Foundation
    Logical Design for SDDC Manager
    SDDC Manager Design Requirements and Recommendations for VMware Cloud Foundation

10 VMware Aria Suite Lifecycle Design for VMware Cloud Foundation
    Logical Design for VMware Aria Suite Lifecycle for VMware Cloud Foundation
    Network Design for VMware Aria Suite Lifecycle
    Data Center and Environment Design for VMware Aria Suite Lifecycle
    Locker Design for VMware Aria Suite Lifecycle
    VMware Aria Suite Lifecycle Design Requirements and Recommendations for VMware Cloud Foundation

11 Workspace ONE Access Design for VMware Cloud Foundation
    Logical Design for Workspace ONE Access
    Sizing Considerations for Workspace ONE Access for VMware Cloud Foundation
    Network Design for Workspace ONE Access
    Integration Design for Workspace ONE Access with VMware Cloud Foundation
    Deployment Model for Workspace ONE Access
    Workspace ONE Access Design Requirements and Recommendations for VMware Cloud Foundation

12 Life Cycle Management Design for VMware Cloud Foundation

13 Logging and Monitoring Design for VMware Cloud Foundation

14 Information Security Design for VMware Cloud Foundation
    Access Management for VMware Cloud Foundation
    Account Management Design for VMware Cloud Foundation
    Certificate Management for VMware Cloud Foundation

15 Appendix: Design Elements for VMware Cloud Foundation
    Architecture Design Elements for VMware Cloud Foundation
    Workload Domain Design Elements for VMware Cloud Foundation
    External Services Design Elements for VMware Cloud Foundation
    Physical Network Design Elements for VMware Cloud Foundation
    vSAN Design Elements for VMware Cloud Foundation
    ESXi Design Elements for VMware Cloud Foundation
    vCenter Server Design Elements for VMware Cloud Foundation
    vSphere Cluster Design Elements for VMware Cloud Foundation
    vSphere Networking Design Elements for VMware Cloud Foundation
    NSX Design Elements for VMware Cloud Foundation
    SDDC Manager Design Elements for VMware Cloud Foundation
    VMware Aria Suite Lifecycle Design Elements for VMware Cloud Foundation
    Workspace ONE Access Design Elements for VMware Cloud Foundation
    Life Cycle Management Design Elements for VMware Cloud Foundation
    Information Security Design Elements for VMware Cloud Foundation
About VMware Cloud Foundation Design Guide

The VMware Cloud Foundation Design Guide contains a design model for VMware Cloud
Foundation (also called VCF) that is based on industry best practices for SDDC implementation.

The VMware Cloud Foundation Design Guide provides the supported design options for VMware
Cloud Foundation, and a set of decision points, justifications, implications, and considerations for
building each component.

Intended Audience
This VMware Cloud Foundation Design Guide is intended for cloud architects who are familiar
with and want to use VMware Cloud Foundation to deploy and manage an SDDC that meets the
requirements for capacity, scalability, backup and restore, and extensibility for disaster recovery
support.

Before You Apply This Guidance


The sequence of the VMware Cloud Foundation documentation follows the stages for
implementing and maintaining an SDDC.

To apply this VMware Cloud Foundation Design Guide, you must be acquainted with the Getting
Started with VMware Cloud Foundation documentation and with the VMware Cloud Foundation
Release Notes. See VMware Cloud Foundation documentation.
For performance best practices for vSphere, see Performance Best Practices for VMware
vSphere 8.0 Update 1.

Design Elements
This VMware Cloud Foundation Design Guide contains requirements and recommendations for
the design of each component of the SDDC. In situations where a configuration choice exists,
requirements and recommendations are available for each choice. Implement only those that are
relevant to your target configuration.


Requirement: Required for the operation of VMware Cloud Foundation. Deviations are not permitted.

Recommendation: Recommended as a best practice. Deviations are permitted.

VMware Cloud Foundation Deployment Options in This Design

This design guidance is for all architecture models of VMware Cloud Foundation. By following
the guidance, you can examine the design for these deployment options:

- Single VMware Cloud Foundation instance.

- Single VMware Cloud Foundation instance with multiple availability zones (also known as a stretched deployment). The default vSphere cluster of the workload domain is stretched between two availability zones by using vSAN stretched clusters (see Chapter 6 vSAN Design for VMware Cloud Foundation), with the vSphere Cluster Design Requirements and Recommendations for VMware Cloud Foundation and the BGP Routing Design for VMware Cloud Foundation applied accordingly.

- Multiple VMware Cloud Foundation instances. You deploy several instances of VMware Cloud Foundation to address requirements for scale and co-location of users and resources. For disaster recovery, workload mobility, or propagation of common configuration to multiple VMware Cloud Foundation instances, you can deploy NSX Federation (see Chapter 8 NSX Design for VMware Cloud Foundation) for the SDDC management and workload components.

- Multiple VMware Cloud Foundation instances with multiple availability zones. You apply the configuration for stretched clusters for a single VMware Cloud Foundation instance to one or more additional VMware Cloud Foundation instances in your environment.

vCenter Single Sign-On Options in This Design

This design guidance covers the topology with a single vCenter Single Sign-On domain in a VMware Cloud Foundation instance and the topology with several isolated vCenter Single Sign-On domains in a single instance. See vCenter Single Sign-On Design Requirements for VMware Cloud Foundation.

VMware Cloud Foundation Design Blueprints


You can follow design blueprints for selected architecture models and topologies that list the
applicable design elements. See VMware Cloud Foundation Design Blueprints.


VMware Cloud Foundation Glossary


See the VMware Cloud Foundation Glossary for constructs, operations, and other terms specific
to VMware Cloud Foundation. It is important to understand these constructs before continuing
with this design guidance.

1 VMware Cloud Foundation Concepts
To design a VMware Cloud Foundation deployment, you need to understand certain VMware
Cloud Foundation concepts.
Read the following topics next:

n Architecture Models and Workload Domain Types in VMware Cloud Foundation

n VMware Cloud Foundation Topologies

n VMware Cloud Foundation Design Blueprints

Architecture Models and Workload Domain Types in VMware Cloud Foundation

When you design a VMware Cloud Foundation deployment, you decide what architecture model, that is, standard or consolidated, and what workload domain types, for example, consolidated, isolated, or standard, to implement according to the requirements for hardware, expected number of workloads and workload domains, co-location of management and customer workloads, identity isolation, and other factors.

Architecture Models
Decide on a model according to your organization's requirements and your environment's
resource capabilities. Implement a standard architecture for workload provisioning and mobility
across VMware Cloud Foundation instances according to production best practices. If you plan to
deploy a small-scale environment, or if you are working on an SDDC proof-of-concept, implement
a consolidated architecture.


Figure 1-1. Choosing a VMware Cloud Foundation Architecture Model

The decision flow in the figure: if you do not need to minimize hardware requirements, use the standard architecture model. If you do need to minimize hardware requirements and management and customer workloads can be co-located, use the consolidated architecture model; otherwise, use the standard architecture model.
Table 1-1. Architecture Model Recommendations for VMware Cloud Foundation

Recommendation ID: VCF-ARCH-RCMD-CFG-001
Design Recommendation: Use the standard architecture model of VMware Cloud Foundation.
Justification:
- Aligns with the VMware best practice of separating management workloads from customer workloads.
- Provides better long-term flexibility and expansion options.
Implication: Requires additional hardware.

Workload Domain Types


A workload domain represents a logical unit of application-ready infrastructure that groups ESXi
hosts managed by a vCenter Server instance with specific characteristics according to VMware
recommended practices. A workload domain can consist of one or more vSphere clusters,
provisioned by SDDC Manager.


Table 1-2. Workload Domain Types

Management domain
- Description: First domain deployed. Contains the following management appliances for all workload domains: vCenter Server, NSX Manager, SDDC Manager, optional VMware Aria Suite components, and optional management domain NSX Edge nodes. Has dedicated ESXi hosts. First domain to upgrade.
- Benefits: Guaranteed sufficient resources for management components.
- Drawbacks: You must carefully size the domain to accommodate planned deployment of VI workload domains and additional management components. Hardware might not be fully utilized until full-scale deployment has been reached.

Consolidated domain
- Description: Represents a management domain that also runs customer workloads. Uses resource pools to ensure sufficient resources for management components.
- Benefits: Considers the minimum possible initial hardware and management component footprint. Can be scaled to a standard architecture model.
- Drawbacks: Management components and customer workloads are not isolated. You must constantly monitor it to ensure sufficient resources for management components. Migrating customer workloads to dedicated VI workload domains is more complex.

VI workload domain
- Description: Represents an additional workload domain for running customer workloads. Shares a vCenter Single Sign-On domain with the management domain. Shares the identity provider configuration with the management domain. Has dedicated ESXi hosts.
- Benefits: Can share an NSX Manager instance with other VI workload domains. All workload domains can be managed through a single pane of glass. Minimizes password management overhead. Allows for independent life cycle management.
- Drawbacks: This workload domain type cannot provide distinct vCenter Single Sign-On domains for customer workloads.

Isolated VI workload domain
- Description: Represents an additional workload domain for running customer workloads. Has a distinct vCenter Single Sign-On domain. Has a distinct identity provider configuration. Has dedicated ESXi hosts.
- Benefits: Can provide distinct vCenter Single Sign-On domains for customer workloads. Supports a scale beyond 14 VI workload domains; you can scale up to 24 VI workload domains per VMware Cloud Foundation instance. Allows for independent life cycle management.
- Drawbacks: Workload domains of this type cannot share an NSX Manager instance with other VI workload domains. Workload domain vCenter Server instances are managed through different panes of glass. Additional password management overhead exists for administrators of VMware Cloud Foundation.


Figure 1-2. Choosing a VMware Cloud Foundation Workload Domain Type for Customer Workloads

The decision flow in the figure: if you have chosen the consolidated architecture model, use the consolidated workload domain. Otherwise, if a shared NSX instance between the VI workload domains is required, use VI workload domains. If more than 14 VI workload domains per VMware Cloud Foundation instance are required, or if a VI workload domain with a dedicated vCenter Single Sign-On domain is required, use isolated VI workload domains; otherwise, use VI workload domains.
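The decision flow reduces to a few conditionals. Below is a minimal Python sketch of the flow in Figure 1-2; the function and parameter names are illustrative, not part of VMware Cloud Foundation or its APIs.

```python
def choose_workload_domain_type(consolidated_architecture: bool,
                                shared_nsx_required: bool,
                                vi_domains_needed: int,
                                dedicated_sso_required: bool) -> str:
    """Mirror the decision flow in Figure 1-2 for customer workloads."""
    if consolidated_architecture:
        return "consolidated workload domain"
    if shared_nsx_required:
        # A shared NSX Manager instance rules out isolated VI workload domains.
        return "VI workload domains"
    if vi_domains_needed > 14 or dedicated_sso_required:
        return "isolated VI workload domains"
    return "VI workload domains"
```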


Table 1-3. Workload Domain Recommendations for VMware Cloud Foundation

Recommendation ID: VCF-WLD-RCMD-CFG-001
Design Recommendation: Use VI workload domains or isolated VI workload domains for customer workloads.
Justification:
- Aligns with the VMware best practice of separating management workloads from customer workloads.
- Provides better long-term flexibility and expansion options.
Implication: Requires additional hardware.

VMware Cloud Foundation Topologies


VMware Cloud Foundation supports multiple topologies that provide different levels of
availability and scale.

Availability Zones and VMware Cloud Foundation Instances


Availability zone

An availability zone is a fault domain at the SDDC level.

You create multiple availability zones for the purpose of creating vSAN stretched clusters. Using multiple availability zones can improve availability of management components and workloads running within the SDDC, minimize downtime of services, and improve SLAs.

Availability zones are typically located either within the same data center, in different racks, chassis, or rooms, or in different data centers with low-latency, high-speed links connecting them. One availability zone can contain several fault domains.

Note: Only stretched clusters that are created by using the Stretch Cluster API, and are therefore vSAN storage based, are considered and treated as stretched clusters by VMware Cloud Foundation.

VMware Cloud Foundation Instance

Each VMware Cloud Foundation instance is a separate VMware Cloud Foundation deployment and might contain one or two availability zones. VMware Cloud Foundation instances may be geographically separate.

VMware Cloud Foundation Topologies


Several topologies of VMware Cloud Foundation exist according to the number of availability
zones and VMware Cloud Foundation instances.


Table 1-4. VMware Cloud Foundation Topologies

Single Instance - Single Availability Zone: Workload domains are deployed in a single availability zone.
Single Instance - Multiple Availability Zones: Workload domains might be stretched between two availability zones.
Multiple Instances - Single Availability Zone per VMware Cloud Foundation instance: Workload domains in each instance are deployed in a single availability zone.
Multiple Instances - Multiple Availability Zones per VMware Cloud Foundation instance: Workload domains in each instance might be stretched between two availability zones.

Figure 1-3. Choosing a VMware Cloud Foundation Topology

The decision flow in the figure: if you do not need disaster recovery or more than 24 VI workload domains, use the Single Instance - Single Availability Zone topology, or the Single Instance - Multiple Availability Zones topology if you need vSAN stretched clusters. If you do need disaster recovery or more than 24 VI workload domains, use the Multiple Instances - Single Availability Zone topology, or the Multiple Instances - Multiple Availability Zones topology if you need vSAN stretched clusters.

What to read next

n Single Instance - Single Availability Zone


Single Instance - Single Availability Zone is the simplest VMware Cloud Foundation topology
where workload domains are deployed in a single availability zone.

n Single Instance - Multiple Availability Zones


You protect your VMware Cloud Foundation environment against a failure of a single
hardware fault domain by implementing multiple availability zones.

n Multiple Instances - Single Availability Zone per Instance


You protect against a failure of a single VMware Cloud Foundation instance by implementing
multiple VMware Cloud Foundation instances.


n Multiple Instances - Multiple Availability Zones per Instance


You protect against a failure of a single VMware Cloud Foundation instance by implementing
multiple VMware Cloud Foundation instances. Implementing multiple availability zones in an
instance protects against a failure of a single hardware fault domain.

Single Instance - Single Availability Zone


Single Instance - Single Availability Zone is the simplest VMware Cloud Foundation topology
where workload domains are deployed in a single availability zone.

The Single Instance - Single Availability Zone topology relies on vSphere HA to protect against
host failures.

Figure 1-4. Single VMware Cloud Foundation Instance with a Single Availability Zone (a single VCF instance containing the management domain and a VI workload domain)

Table 1-5. Single Instance - Single Availability Zone Attributes

Data centers: Single data center.
Workload domain cluster rack mappings:
- Workload domain cluster in a single rack
- Workload domain cluster spanning multiple racks
Scale:
- Up to 25 workload domains
- Up to 15 workload domains in a single vCenter Single Sign-On domain
Resilience: vSphere HA provides protection against host failures.
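These scale attributes are straightforward to check programmatically before deployment. A minimal sketch, assuming a plan expressed as a total domain count and per-Single Sign-On-domain counts; the constants restate the limits above, and all names are illustrative.

```python
MAX_DOMAINS_PER_INSTANCE = 25  # workload domains per VCF instance, per Table 1-5
MAX_DOMAINS_PER_SSO = 15       # workload domains per vCenter Single Sign-On domain

def scale_issues(total_domains: int, domains_per_sso: dict) -> list:
    """Return descriptions of any violated scale attributes from Table 1-5."""
    issues = []
    if total_domains > MAX_DOMAINS_PER_INSTANCE:
        issues.append(f"{total_domains} workload domains exceeds the "
                      f"{MAX_DOMAINS_PER_INSTANCE}-domain instance limit")
    for sso_domain, count in domains_per_sso.items():
        if count > MAX_DOMAINS_PER_SSO:
            issues.append(f"{sso_domain} has {count} workload domains, "
                          f"above the {MAX_DOMAINS_PER_SSO}-domain limit")
    return issues
```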


Single Instance - Multiple Availability Zones


You protect your VMware Cloud Foundation environment against a failure of a single hardware
fault domain by implementing multiple availability zones.

Incorporating multiple availability zones in your design can help reduce the blast radius of a
failure and can increase application availability. You usually deploy multiple availability zones
across two independent data centers.

Figure 1-5. Multiple Availability Zones in VMware Cloud Foundation (a VCF instance with two availability zones; the management domain and VI workload domains run on vSAN stretched clusters that span both zones)


Table 1-6. Single Instance - Multiple Availability Zone Attributes

Workload domain cluster rack mappings:
- Workload domain cluster in a single rack
- Workload domain cluster spanning multiple racks
- Workload domain cluster with multiple availability zones, each zone in a single rack
- Workload domain cluster with multiple availability zones, each zone spanning multiple racks
Stretched cluster:
- Because availability zones use VMware vSAN™ stretched clusters, the bandwidth between the zones must be at least 10 Gbps and the round-trip latency must be less than 5 ms.
- Having the management domain on a vSAN stretched cluster is a prerequisite to configure and implement vSAN stretched clusters in your VI workload domains.
- You can have up to two availability zones.
Scale:
- Up to 25 workload domains
- Up to 15 workload domains in a single vCenter Single Sign-On domain
Resilience:
- vSphere HA provides protection against host failures.
- Multiple availability zones protect against data center failures.
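The stretched cluster attributes include two hard numbers that a link assessment can verify directly. A minimal sketch of that check; the function and constant names are illustrative.

```python
MIN_INTER_AZ_BANDWIDTH_GBPS = 10  # minimum bandwidth between availability zones
MAX_INTER_AZ_RTT_MS = 5           # maximum round-trip latency between zones

def inter_az_link_ok(bandwidth_gbps: float, rtt_ms: float) -> bool:
    """True if a link between availability zones meets the vSAN stretched
    cluster attributes in Table 1-6: at least 10 Gbps and under 5 ms RTT."""
    return (bandwidth_gbps >= MIN_INTER_AZ_BANDWIDTH_GBPS
            and rtt_ms < MAX_INTER_AZ_RTT_MS)
```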

Multiple Instances - Single Availability Zone per Instance

You protect against a failure of a single VMware Cloud Foundation instance by implementing multiple VMware Cloud Foundation instances.

Incorporating multiple VMware Cloud Foundation instances in your design can help reduce the blast radius of a failure and can increase application availability across larger geographical distances than can be achieved by using multiple availability zones. You usually deploy this topology in the same data center for scale or across independent data centers for resilience.


Figure 1-6. Multiple Instance - Single Availability Zone Topology for VMware Cloud Foundation (two VCF instances, each containing a management domain and a VI workload domain)


Table 1-7. Multiple Instance - Single Availability Zone Attributes

Workload domain cluster rack mapping:
- Workload domain cluster in a single rack
- Workload domain cluster spanning multiple racks
Multiple instances: Using multiple VMware Cloud Foundation instances can facilitate the following use cases:
- Disaster recovery across different VMware Cloud Foundation instances at longer distances
- Scale beyond the maximums of a single VMware Cloud Foundation instance
- Co-location of end users and resources
If you plan to use NSX Federation between VMware Cloud Foundation instances, the following considerations exist:
- Maximum of four locations when using medium-size NSX Global Managers
- Up to 16 locations when using large-size NSX Global Managers
- Maximum of four locations per cross-instance Tier-0 gateway
- Life cycle management must be planned carefully
Scale:
- Up to 25 workload domains per VMware Cloud Foundation instance
- Up to 15 workload domains in a single vCenter Single Sign-On domain per instance
Resilience:
- vSphere HA provides protection against host failures.
- Deploying multiple instances can protect against natural disasters by providing recovery locations at greater geographical distances.
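The NSX Federation considerations above are also checkable numbers. A minimal sketch with illustrative names; it restates the documented limits and is not an NSX API.

```python
# NSX Federation sizing considerations from Table 1-7.
MAX_LOCATIONS_BY_GM_SIZE = {"medium": 4, "large": 16}  # NSX Global Manager size
MAX_LOCATIONS_PER_TIER0 = 4  # locations per cross-instance Tier-0 gateway

def federation_issues(locations: int, gm_size: str, tier0_locations: int) -> list:
    """Return descriptions of any violated NSX Federation considerations."""
    issues = []
    if locations > MAX_LOCATIONS_BY_GM_SIZE[gm_size]:
        issues.append(f"{locations} locations exceeds the limit for "
                      f"{gm_size}-size NSX Global Managers")
    if tier0_locations > MAX_LOCATIONS_PER_TIER0:
        issues.append(f"{tier0_locations} locations on one cross-instance "
                      f"Tier-0 gateway exceeds {MAX_LOCATIONS_PER_TIER0}")
    return issues
```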

Multiple Instances - Multiple Availability Zones per Instance


You protect against a failure of a single VMware Cloud Foundation instance by implementing
multiple VMware Cloud Foundation instances. Implementing multiple availability zones in an
instance protects against a failure of a single hardware fault domain.

Incorporating multiple VMware Cloud Foundation instances into your design can help reduce
the blast radius of a failure and can increase application availability across larger geographical
distances that cannot be achieved using multiple availability zones.


Figure 1-7. Multiple Instance - Multiple Availability Zones Topology for VMware Cloud Foundation (two VCF instances, each with two availability zones; in each instance, the management domain and VI workload domains run on vSAN stretched clusters that span both zones)

Table 1-8. Multiple Instance - Multiple Availability Zone Attributes

Workload domain cluster rack mapping:
- Workload domain cluster in a single rack
- Workload domain cluster spanning multiple racks
- Workload domain cluster with multiple availability zones, each zone in a single rack
- Workload domain cluster with multiple availability zones, each zone spanning multiple racks
Multiple instances: Using multiple VMware Cloud Foundation instances can facilitate the following:
- Disaster recovery across different VMware Cloud Foundation instances at longer distances
- Scale beyond the maximums of a single VMware Cloud Foundation instance
- Co-location of end users and resources
If you plan to use NSX Federation between VMware Cloud Foundation instances, consider the following:
- Maximum of four locations when using medium-size NSX Global Managers
- Up to 16 locations when using large-size NSX Global Managers
- Maximum of four locations per stretched Tier-0 gateway
- Life cycle management must be planned carefully
Stretched cluster:
- Because availability zones use VMware vSAN™ stretched clusters, the bandwidth between the zones must be at least 10 Gbps and the round-trip latency must be less than 5 ms.
- You can have up to two availability zones.
- Having the management domain on a vSAN stretched cluster is a prerequisite to configure and implement vSAN stretched clusters in your VI workload domains.
Scale:
- Up to 25 workload domains per VMware Cloud Foundation instance
- Up to 15 workload domains in a single vCenter Single Sign-On domain per instance
Resilience:
- vSphere HA provides protection against host failures.
- Multiple availability zones protect against data center failures.
- Multiple instances can protect against natural disasters by providing recovery locations at greater geographical distances.


VMware Cloud Foundation Design Blueprints


A VMware Cloud Foundation design blueprint is a collection of design requirements and
recommendations based on a chosen architecture model, workload domain type, and topology.
It can be used as a full end-to-end design for a VMware Cloud Foundation deployment.

Design Blueprint One: Multiple Instance - Multiple Availability Zone


This design blueprint lists the design choices and resulting requirements and recommendations
to set up a topology that includes multiple VMware Cloud Foundation instances, each instance
containing multiple availability zones, for an organization called Rainpole.

Design Choices for Design Blueprint One


Rainpole has made the following choices for its VMware Cloud Foundation deployment:

Table 1-9. Design Choices for Design Blueprint One

Architecture Models and Workload Domain Types in VMware Cloud Foundation: Standard
Workload Domain Types: Management domain and VI workload domains
VMware Cloud Foundation Topologies: Multiple Instances - Multiple Availability Zones per Instance
Leaf-Spine Physical Network Design Requirements and Recommendations for VMware Cloud Foundation: Leaf-Spine
Routing Design for VMware Cloud Foundation: BGP
Chapter 6 vSAN Design for VMware Cloud Foundation: vSAN
Chapter 10 VMware Aria Suite Lifecycle Design for VMware Cloud Foundation: Included
Chapter 11 Workspace ONE Access Design for VMware Cloud Foundation: Standard Workspace ONE Access

Design Elements for Design Blueprint One


Table 1-10. External Services Design Elements

External Services Design Elements for VMware Cloud Foundation:
- External Services Design Requirements


Table 1-11. Physical Network Design Elements

Physical Network Design Elements for VMware Cloud Foundation:
- Leaf-Spine Physical Network Design Requirements
- Leaf-Spine Physical Network Design Requirements for NSX Federation
- Leaf-Spine Physical Network Design Recommendations
- Leaf-Spine Physical Network Design Recommendations for Stretched Clusters
- Leaf-Spine Physical Network Design Recommendations for NSX Federation

Table 1-12. Management Domain Design Elements

vSAN Design Elements for VMware Cloud Foundation:
- vSAN Design Requirements
- vSAN Design Requirements for Stretched Clusters
- vSAN Design Recommendations
- vSAN Design Recommendations for Stretched Clusters

ESXi Design Elements for VMware Cloud Foundation:
- ESXi Server Design Requirements
- ESXi Server Design Recommendations

vCenter Server Design Elements:
- vCenter Server Design Requirements
- vCenter Server Design Recommendations
- vCenter Server Design Recommendations for Stretched Clusters

vCenter Single Sign-On Design Elements:
- vCenter Single Sign-On Design Requirements for the Multiple vCenter - Single vCenter Single Sign-On Domain Topology

vSphere Cluster Design Elements for VMware Cloud Foundation:
- vSphere Cluster Design Requirements
- vSphere Cluster Design Requirements for Stretched Clusters
- vSphere Cluster Design Recommendations
- vSphere Cluster Design Recommendations for Stretched Clusters

vSphere Networking Design Elements for VMware Cloud Foundation:
- vSphere Networking Design Recommendations

NSX Manager Design Elements:
- NSX Manager Design Requirements
- NSX Manager Design Recommendations
- NSX Manager Design Recommendations for Stretched Clusters

NSX Global Manager Design Elements:
- NSX Global Manager Design Requirements for NSX Federation
- NSX Global Manager Design Recommendations for NSX Federation
- NSX Global Manager Design Recommendations for Stretched Clusters

NSX Edge Design Elements:
- NSX Edge Design Requirements
- NSX Edge Design Requirements for NSX Federation
- NSX Edge Design Recommendations
- NSX Edge Design Recommendations for Stretched Clusters

BGP Routing Design Elements for VMware Cloud Foundation:
- BGP Routing Design Requirements
- BGP Routing Design Requirements for Stretched Clusters
- BGP Routing Design Requirements for NSX Federation
- BGP Routing Design Recommendations
- BGP Routing Design Recommendations for NSX Federation

Overlay Design Elements for VMware Cloud Foundation:
- Overlay Design Requirements
- Overlay Design Recommendations

Application Virtual Network Design Elements for VMware Cloud Foundation:
- Application Virtual Network Design Requirements
- Application Virtual Network Design Requirements for NSX Federation

Load Balancing Design Elements for VMware Cloud Foundation:
- Load Balancing Design Requirements
- Load Balancing Design Requirements for NSX Federation

SDDC Manager Design Elements for VMware Cloud Foundation:
- SDDC Manager Design Requirements
- SDDC Manager Design Recommendations

Table 1-13. VI Workload Domain Design Elements

vSAN Design Elements for VMware Cloud Foundation:
- vSAN Design Requirements
- vSAN Design Requirements for Stretched Clusters
- vSAN Design Recommendations
- vSAN Design Recommendations for Stretched Clusters

ESXi Design Elements for VMware Cloud Foundation:
- ESXi Server Design Requirements
- ESXi Server Design Recommendations

vCenter Server Design Elements:
- vCenter Server Design Requirements
- vCenter Server Design Recommendations
- vCenter Server Design Recommendations for Stretched Clusters

vCenter Single Sign-On Design Elements:
- vCenter Single Sign-On Design Requirements for the Multiple vCenter - Single vCenter Single Sign-On Domain Topology

vSphere Cluster Design Elements for VMware Cloud Foundation:
- vSphere Cluster Design Requirements
- vSphere Cluster Design Requirements for Stretched Clusters
- vSphere Cluster Design Recommendations
- vSphere Cluster Design Recommendations for Stretched Clusters

vSphere Networking Design Elements for VMware Cloud Foundation:
- vSphere Networking Design Recommendations

NSX Manager Design Elements:
- NSX Manager Design Requirements
- NSX Manager Design Recommendations
- NSX Manager Design Recommendations for Stretched Clusters

NSX Global Manager Design Elements:
- NSX Global Manager Design Requirements for NSX Federation
- NSX Global Manager Design Recommendations for NSX Federation
- NSX Global Manager Design Recommendations for Stretched Clusters

NSX Edge Design Elements:
- NSX Edge Design Requirements
- NSX Edge Design Requirements for NSX Federation
- NSX Edge Design Recommendations
- NSX Edge Design Recommendations for Stretched Clusters

BGP Routing Design Elements for VMware Cloud Foundation:
- BGP Routing Design Requirements
- BGP Routing Design Requirements for Stretched Clusters
- BGP Routing Design Requirements for NSX Federation
- BGP Routing Design Recommendations
- BGP Routing Design Recommendations for NSX Federation

Overlay Design Elements for VMware Cloud Foundation:
- Overlay Design Requirements
- Overlay Design Recommendations

Table 1-14. VMware Aria Suite Lifecycle and Workspace ONE Access Design Elements

VMware Aria Suite Lifecycle Design Elements for VMware Cloud Foundation:
- VMware Aria Suite Lifecycle Design Requirements
- VMware Aria Suite Lifecycle Design Requirements for Stretched Clusters
- VMware Aria Suite Lifecycle Design Requirements for NSX Federation
- VMware Aria Suite Lifecycle Design Recommendations

Workspace ONE Access Design Elements for VMware Cloud Foundation:
- Workspace ONE Access Design Requirements
- Workspace ONE Access Design Requirements for Stretched Clusters
- Workspace ONE Access Design Requirements for NSX Federation
- Workspace ONE Access Design Recommendations

Table 1-15. Life Cycle Management Design Elements

Life Cycle Management Design Elements for VMware Cloud Foundation:
- Life Cycle Management Design Requirements

Table 1-16. Account and Password Management Design Elements

Information Security Design Elements for VMware Cloud Foundation:
- Account and Password Management Design Recommendations


Table 1-17. Certificate Management Design Elements

Information Security Design Elements for VMware Cloud Foundation:
- Certificate Management Design Recommendations

Design Blueprint Two: Single Instance - Multiple Availability Zones


This design blueprint lists the design choices and resulting requirements and recommendations to
set up a topology that includes one VMware Cloud Foundation instance with multiple availability
zones for an organization called Rainpole.

Design Choices for Design Blueprint Two


Rainpole has made the following choices for its VMware Cloud Foundation deployment:

Table 1-18. Design Choices for Design Blueprint Two

Architecture Models and Workload Domain Types in VMware Cloud Foundation: Standard
Workload Domain Types: Management domain and VI workload domains
VMware Cloud Foundation Topologies: Single Instance - Multiple Availability Zones
Leaf-Spine Physical Network Design Requirements and Recommendations for VMware Cloud Foundation: Leaf-Spine
Routing Design for VMware Cloud Foundation: BGP
Chapter 6 vSAN Design for VMware Cloud Foundation: vSAN
Chapter 10 VMware Aria Suite Lifecycle Design for VMware Cloud Foundation: Included
Chapter 11 Workspace ONE Access Design for VMware Cloud Foundation: Standard Workspace ONE Access

Design Elements for Design Blueprint Two


Table 1-19. External Services Design Elements

External Services Design Elements for VMware Cloud Foundation:
- External Services Design Requirements


Table 1-20. Physical Network Design Elements

Physical Network Design Elements for VMware Cloud Foundation:
- Leaf-Spine Physical Network Design Requirements
- Leaf-Spine Physical Network Design Recommendations
- Leaf-Spine Physical Network Design Recommendations for Stretched Clusters

Table 1-21. Management Domain Design Elements

vSAN Design Elements for VMware Cloud Foundation:
- vSAN Design Requirements
- vSAN Design Requirements for Stretched Clusters
- vSAN Design Recommendations
- vSAN Design Recommendations for Stretched Clusters

ESXi Design Elements for VMware Cloud Foundation:
- ESXi Server Design Requirements
- ESXi Server Design Recommendations

vCenter Server Design Elements:
- vCenter Server Design Requirements
- vCenter Server Design Recommendations
- vCenter Server Design Recommendations for Stretched Clusters

vCenter Single Sign-On Design Elements:
- vCenter Single Sign-On Design Requirements for the Multiple vCenter - Single vCenter Single Sign-On Domain Topology

vSphere Cluster Design Elements for VMware Cloud Foundation:
- vSphere Cluster Design Requirements
- vSphere Cluster Design Requirements for Stretched Clusters
- vSphere Cluster Design Recommendations
- vSphere Cluster Design Recommendations for Stretched Clusters

vSphere Networking Design Elements for VMware Cloud Foundation:
- vSphere Networking Design Recommendations

NSX Manager Design Elements:
- NSX Manager Design Requirements
- NSX Manager Design Recommendations
- NSX Manager Design Recommendations for Stretched Clusters

NSX Edge Design Elements:
- NSX Edge Design Requirements
- NSX Edge Design Recommendations
- NSX Edge Design Recommendations for Stretched Clusters

BGP Routing Design Elements for VMware Cloud Foundation:
- BGP Routing Design Requirements
- BGP Routing Design Requirements for Stretched Clusters
- BGP Routing Design Recommendations

Overlay Design Elements for VMware Cloud Foundation:
- Overlay Design Requirements
- Overlay Design Recommendations

Application Virtual Network Design Elements for VMware Cloud Foundation:
- Application Virtual Network Design Requirements

Load Balancing Design Elements for VMware Cloud Foundation:
- Load Balancing Design Requirements

SDDC Manager Design Elements for VMware Cloud Foundation:
- SDDC Manager Design Requirements
- SDDC Manager Design Recommendations

Table 1-22. VI Workload Domain Design Elements

vSAN Design Elements for VMware Cloud Foundation:
- vSAN Design Requirements
- vSAN Design Requirements for Stretched Clusters
- vSAN Design Recommendations
- vSAN Design Recommendations for Stretched Clusters

ESXi Design Elements for VMware Cloud Foundation:
- ESXi Server Design Requirements
- ESXi Server Design Recommendations

vCenter Server Design Elements:
- vCenter Server Design Requirements
- vCenter Server Design Recommendations
- vCenter Server Design Recommendations for Stretched Clusters

vCenter Single Sign-On Design Elements:
- vCenter Single Sign-On Design Requirements for the Multiple vCenter - Single vCenter Single Sign-On Domain Topology

vSphere Cluster Design Elements for VMware Cloud Foundation:
- vSphere Cluster Design Requirements
- vSphere Cluster Design Requirements for Stretched Clusters
- vSphere Cluster Design Recommendations
- vSphere Cluster Design Recommendations for Stretched Clusters

vSphere Networking Design Elements for VMware Cloud Foundation:
- vSphere Networking Design Recommendations

NSX Manager Design Elements:
- NSX Manager Design Requirements
- NSX Manager Design Recommendations
- NSX Manager Design Recommendations for Stretched Clusters

NSX Edge Design Elements:
- NSX Edge Design Requirements
- NSX Edge Design Recommendations
- NSX Edge Design Recommendations for Stretched Clusters

BGP Routing Design Elements for VMware Cloud Foundation:
- BGP Routing Design Requirements
- BGP Routing Design Requirements for Stretched Clusters
- BGP Routing Design Recommendations

Overlay Design Elements for VMware Cloud Foundation:
- Overlay Design Requirements
- Overlay Design Recommendations

Table 1-23. VMware Aria Suite Lifecycle and Workspace ONE Access Design Elements

VMware Aria Suite Lifecycle Design Elements for VMware Cloud Foundation:
- VMware Aria Suite Lifecycle Design Requirements
- VMware Aria Suite Lifecycle Design Requirements for Stretched Clusters
- VMware Aria Suite Lifecycle Design Recommendations

Workspace ONE Access Design Elements for VMware Cloud Foundation:
- Workspace ONE Access Design Requirements
- Workspace ONE Access Design Requirements for Stretched Clusters
- Workspace ONE Access Design Recommendations

Table 1-24. Life Cycle Management Design Elements

Life Cycle Management Design Elements for VMware Cloud Foundation:
- Life Cycle Management Design Requirements


Table 1-25. Account and Password Management Design Elements

Information Security Design Elements for VMware Cloud Foundation:
- Account and Password Management Design Recommendations

Table 1-26. Certificate Management Design Elements

Information Security Design Elements for VMware Cloud Foundation:
- Certificate Management Design Recommendations

Design Blueprint Three: Single Instance - Consolidated


This design blueprint lists the design choices and resulting requirements and recommendations
to set up a topology for an organization called Rainpole which includes one VMware Cloud
Foundation instance where the management domain runs both management and customer
workloads in a single availability zone.

Design Choices for Design Blueprint Three


Rainpole has made the following choices for its VMware Cloud Foundation deployment:

Table 1-27. Design Choices for Design Blueprint Three

Architecture Models and Workload Domain Types in VMware Cloud Foundation: Consolidated
Workload Domain Types: Consolidated
VMware Cloud Foundation Topologies: Consolidated topology (single instance, single availability zone)
Leaf-Spine Physical Network Design Requirements and Recommendations for VMware Cloud Foundation: Leaf-Spine
Routing Design for VMware Cloud Foundation: BGP
Chapter 6 vSAN Design for VMware Cloud Foundation: vSAN
Chapter 10 VMware Aria Suite Lifecycle Design for VMware Cloud Foundation: Included
Chapter 11 Workspace ONE Access Design for VMware Cloud Foundation: Standard Workspace ONE Access


Design Elements for Design Blueprint Three


Table 1-28. External Services Design Elements

External Services Design Elements for VMware Cloud Foundation:
- External Services Design Requirements

Table 1-29. Physical Network Design Elements

Physical Network Design Elements for VMware Cloud Foundation:
- Leaf-Spine Physical Network Design Requirements
- Leaf-Spine Physical Network Design Recommendations

Table 1-30. Management Domain Design Elements

vSAN Design Elements for VMware Cloud Foundation:
- vSAN Design Requirements
- vSAN Design Recommendations

ESXi Design Elements for VMware Cloud Foundation:
- ESXi Server Design Requirements
- ESXi Server Design Recommendations

vCenter Server Design Elements:
- vCenter Server Design Requirements
- vCenter Server Design Recommendations

vCenter Single Sign-On Design Elements:
- vCenter Single Sign-On Design Requirements for the Multiple vCenter - Single vCenter Single Sign-On Domain Topology

vSphere Cluster Design Elements for VMware Cloud Foundation:
- vSphere Cluster Design Requirements
- vSphere Cluster Design Recommendations

vSphere Networking Design Elements for VMware Cloud Foundation:
- vSphere Networking Design Recommendations

NSX Manager Design Elements:
- NSX Manager Design Requirements
- NSX Manager Design Recommendations

NSX Edge Design Elements:
- NSX Edge Design Requirements
- NSX Edge Design Recommendations

BGP Routing Design Elements for VMware Cloud Foundation:
- BGP Routing Design Requirements
- BGP Routing Design Recommendations

Overlay Design Elements for VMware Cloud Foundation:
- Overlay Design Requirements
- Overlay Design Recommendations

Application Virtual Network Design Elements for VMware Cloud Foundation:
- Application Virtual Network Design Requirements

Load Balancing Design Elements for VMware Cloud Foundation:
- Load Balancing Design Requirements

SDDC Manager Design Elements for VMware Cloud Foundation:
- SDDC Manager Design Requirements
- SDDC Manager Design Recommendations

Table 1-31. VMware Aria Suite Lifecycle and Workspace ONE Access Design Elements

VMware Aria Suite Lifecycle Design Elements for VMware Cloud Foundation:
- VMware Aria Suite Lifecycle Design Requirements
- VMware Aria Suite Lifecycle Design Recommendations

Workspace ONE Access Design Elements for VMware Cloud Foundation:
- Workspace ONE Access Design Requirements
- Workspace ONE Access Design Recommendations

Table 1-32. Life Cycle Management Design Elements

Life Cycle Management Design Elements for VMware Cloud Foundation:
- Life Cycle Management Design Requirements

Table 1-33. Account and Password Management Design Elements

Information Security Design Elements for VMware Cloud Foundation:
- Account and Password Management Design Recommendations

Table 1-34. Certificate Management Design Elements

Information Security Design Elements for VMware Cloud Foundation:
- Certificate Management Design Recommendations

2 Workload Domain Cluster to Rack Mapping in VMware Cloud Foundation
VMware Cloud Foundation distributes the functionality of the SDDC across multiple workload
domains and vSphere clusters. A workload domain, whether it is the management workload
domain or a VI workload domain, is a logical abstraction of compute, storage, and network
capacity, and consists of one or more clusters. Each cluster can exist vertically in a single rack or
be spanned horizontally across multiple racks.

The relationship between workload domain clusters and data center racks in VMware Cloud
Foundation is not one-to-one. While a workload domain cluster is an atomic unit of repeatable
building blocks, a rack is a unit of size. Because workload domain clusters can have different
sizes, you map workload domain clusters to data center racks according to your requirements
and physical infrastructure constraints. You determine the total number of racks for each cluster
type according to your scalability needs.

Table 2-1. Workload Domain Cluster to Rack Configuration Options

Workload domain cluster in a single rack:
- The workload domain cluster occupies a single rack.

Workload domain cluster spanning multiple racks:
- The management domain can span multiple racks if the data center fabric can provide Layer 2 adjacency, such as BGP EVPN, between racks. If the Layer 3 fabric does not support this requirement, then the management cluster should be mapped to a single rack.
- A VI workload domain can span multiple racks. If you are using a Layer 3 network fabric, NSX Edge clusters cannot be hosted on clusters that span racks.

Workload domain cluster with multiple availability zones, each zone in a single rack:
- To span multiple availability zones, the network fabric must support stretched Layer 2 networks and Layer 3 routed networks between the availability zones.

Workload domain cluster with multiple availability zones, each zone spanning multiple racks:
- A VI workload domain cluster with customer workloads and no NSX Edge clusters can span racks by using a Layer 3 network fabric without Layer 2 adjacency between racks. If you are using a Layer 3 network fabric, NSX Edge clusters cannot be hosted on clusters that span racks.
- To span multiple racks, the network fabric must support stretched Layer 2 networks between these racks if NSX Edge clusters are deployed on the vSphere cluster.
- To span multiple availability zones, the network fabric must support stretched Layer 2 networks and Layer 3 routed networks between availability zones.
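The constraints in Table 2-1 can be expressed as a small pre-deployment check. The following Python sketch paraphrases the table; the function and parameter names are illustrative, and this is not an SDDC Manager validation.

```python
def rack_mapping_issues(spans_racks: bool, fabric_l2_adjacency: bool,
                        hosts_nsx_edges: bool, is_management: bool) -> list:
    """Flag cluster-to-rack combinations that Table 2-1 rules out."""
    issues = []
    if spans_racks and not fabric_l2_adjacency:
        if is_management:
            # The management cluster needs Layer 2 adjacency (for example,
            # BGP EVPN) to span racks; otherwise map it to a single rack.
            issues.append("management cluster spanning racks requires "
                          "Layer 2 adjacency between racks")
        if hosts_nsx_edges:
            # NSX Edge clusters cannot run on clusters that span racks
            # over a Layer 3 fabric without Layer 2 adjacency.
            issues.append("NSX Edge clusters cannot be hosted on clusters "
                          "that span racks over a Layer 3 fabric")
    return issues
```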


Figure 2-1. Workload Domains in a Single Rack (the management domain cluster and a VI workload domain cluster connect to a pair of ToR switches in one rack, which uplinks to the data center fabric)

Figure 2-2. Workload Domain Spanning Multiple Racks (the management domain cluster and a VI workload domain cluster span two racks, each rack with its own pair of ToR switches connected to the data center fabric)


Figure 2-3. Workload Domains with Multiple Availability Zones, Each Zone in One Rack (the management domain and VI workload domain stretched clusters span two availability zones, each zone in its own rack with a pair of ToR switches connected to the data center fabric)

3 Supported Storage Types for VMware Cloud Foundation
Storage design for VMware Cloud Foundation includes the design for principal and supplemental
storage.

Principal storage is used during the creation of a workload domain and is capable of running
workloads. Supplemental storage can be added after the creation of a workload domain and can
be capable of running workloads or be used for data at rest storage such as virtual machine
templates, backup data, and ISO images.

Special considerations apply if you plan to add clusters to the management domain, for example,
to separate additional management components that require specific hardware resources or
might impact the performance of the main management components in the default cluster, or,
in the case of the consolidated architecture of VMware Cloud Foundation, to separate customer
workloads from the management components.

VMware Cloud Foundation supports the following principal and supplemental storage
combinations:

Table 3-1. Supported Storage Types in VMware Cloud Foundation

Storage Type | Management Domain | VI Workload Domain
vSAN Original Storage Architecture (OSA) | Principal | Principal
vSAN Express Storage Architecture (ESA) | Principal | Principal
VMware vSAN Max™ | Not supported | Not supported
Cross-cluster capacity sharing (HCI Mesh) | Supplemental | Principal (additional clusters only) or Supplemental
VMware vSphere® Virtual Volumes™ (FC, iSCSI, or NFS) | Supplemental | Principal or Supplemental
VMFS on FC | Supplemental | Principal or Supplemental
NFS | Supplemental (NFS 3 and NFS 4.1) | Principal (NFS 3) or Supplemental (NFS 3 and NFS 4.1)
iSCSI | Supplemental | Supplemental
NVMe/TCP | Supplemental | Supplemental
NVMe/FC | Supplemental | Supplemental

Note For a consolidated VMware Cloud Foundation architecture model, the storage types that
are supported for the management domain apply.
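The matrix lends itself to a simple lookup. Below is a minimal sketch of the principal storage rules in Table 3-1; the dictionary layout and names are illustrative, not a VMware data structure.

```python
# Principal storage support paraphrased from Table 3-1.
PRINCIPAL_STORAGE = {
    "management domain": {"vSAN OSA", "vSAN ESA"},
    "vi workload domain": {"vSAN OSA", "vSAN ESA",
                           "HCI Mesh (additional clusters only)",
                           "vSphere Virtual Volumes", "VMFS on FC", "NFS 3"},
}

def principal_supported(domain_type: str, storage_type: str) -> bool:
    """True if the storage type can serve as principal storage.

    For the consolidated architecture model, check the management
    domain entry, as the note above states.
    """
    return storage_type in PRINCIPAL_STORAGE[domain_type]
```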

4 External Services Design for VMware Cloud Foundation
IP addressing scheme, name resolution, and time synchronization must support the requirements
for VMware Cloud Foundation deployments.

Table 4-1. External Services Design Requirements for VMware Cloud Foundation

VCF-EXT-REQD-NET-001
Design Requirement: Allocate statically assigned IP addresses and host names for all workload domain components.
Justification: Ensures stability across the VMware Cloud Foundation instance, and makes it simpler to maintain, track, and implement a DNS configuration.
Implication: You must provide precise IP address management.

VCF-EXT-REQD-NET-002
Design Requirement: Configure forward and reverse DNS records for all workload domain components.
Justification: Ensures that all components are accessible by using a fully qualified domain name instead of by using IP addresses only. It is easier to remember and connect to components across the VMware Cloud Foundation instance.
Implication: You must provide DNS records for each component.

VCF-EXT-REQD-NET-003
Design Requirement: Configure time synchronization by using an internal NTP time source for all workload domain components.
Justification: Ensures that all components are synchronized with a valid time source.
Implication: An operational NTP service must be available in the environment.

VCF-EXT-REQD-NET-004
Design Requirement: Set the NTP service for all workload domain components to start automatically.
Justification: Ensures that the NTP service remains synchronized after you restart a component.
Implication: None.
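Before deployment, you can sanity-check the forward and reverse records required by VCF-EXT-REQD-NET-002 in both directions. The following sketch uses only the Python standard library; the FQDNs and IP addresses are hypothetical placeholders for your own allocations.

import socket

# Hypothetical FQDN-to-IP allocations for workload domain components;
# substitute the records you plan to register in DNS.
expected = {
    "vcenter01.example.com": "10.0.1.10",
    "nsx01.example.com": "10.0.1.20",
}

for fqdn, ip in expected.items():
    # Forward lookup: the FQDN must resolve to the statically assigned IP.
    resolved = socket.gethostbyname(fqdn)
    assert resolved == ip, f"forward record mismatch for {fqdn}: {resolved}"
    # Reverse lookup: the IP must resolve back to the same FQDN.
    rname, _, _ = socket.gethostbyaddr(ip)
    assert rname == fqdn, f"reverse record mismatch for {ip}: {rname}"
    print(f"OK: {fqdn} <-> {ip}")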

Physical Network Infrastructure Design for VMware Cloud Foundation
Design of the physical data center network includes defining the network topology for
connecting physical switches and ESXi hosts, determining switch port settings for VLANs and
link aggregation, and designing routing.

A software-defined network (SDN) both integrates with and uses components of the physical
data center. SDN integrates with your physical network to support east-west transit in the data
center and north-south transit to and from the SDDC networks.

Several typical data center network deployment topologies exist:

- Core-Aggregation-Access
- Leaf-Spine
- Hardware SDN

Note Leaf-Spine is the default data center network deployment topology used for VMware Cloud Foundation.

Read the following topics next:

- VLANs and Subnets for VMware Cloud Foundation
- Leaf-Spine Physical Network Design Requirements and Recommendations for VMware Cloud Foundation

VLANs and Subnets for VMware Cloud Foundation


Configure your VLANs and subnets according to the guidelines and requirements for VMware
Cloud Foundation.

When designing the VLAN and subnet configuration for your VMware Cloud Foundation
deployment, consider the following guidelines:


Table 5-1. VLAN and Subnet Guidelines for VMware Cloud Foundation

All deployment topologies:
- Ensure your subnets are scaled appropriately to allow for expansion, as expanding at a later time can be disruptive.
- Use the IP address of the floating interface for Virtual Router Redundancy Protocol (VRRP) or Hot Standby Routing Protocol (HSRP) as the gateway.
- Use the RFC 1918 IPv4 address space for these subnets and allocate one octet by VMware Cloud Foundation instance and another octet by function.

Multiple availability zones:
- For network segments which are stretched between availability zones, the VLAN ID must meet the following requirements: be the same in both availability zones with the same Layer 3 network segments, and have a Layer 3 gateway at the first hop that is highly available such that it tolerates the failure of an entire availability zone.
- For network segments of the same type which are not stretched between availability zones, the VLAN ID can be the same or different between the zones.

NSX Federation between multiple VMware Cloud Foundation instances:
- An RTEP network segment should have a VLAN ID and Layer 3 range that are specific to the VMware Cloud Foundation instance.
- In a VMware Cloud Foundation instance with multiple availability zones, the RTEP network segment must be stretched between the zones and assigned the same VLAN ID and IP range.
- All Edge RTEP networks must reach each other.
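As an illustration of the last guideline, one octet per instance and another per function, the following sketch carves per-function subnets out of an RFC 1918 block with Python's standard ipaddress module. The base network, prefix lengths, and function names are assumptions, not mandated values.

import ipaddress

# Hypothetical scheme: one /16 per VMware Cloud Foundation instance,
# one /24 per network function within that instance.
base = ipaddress.ip_network("172.16.0.0/16")  # instance 1
functions = ["vm_management", "host_management", "vmotion", "vsan",
             "host_overlay", "uplink01", "uplink02", "edge_overlay"]

subnets = dict(zip(functions, base.subnets(new_prefix=24)))
for name, net in subnets.items():
    # Reserve the first usable address for the VRRP/HSRP floating gateway.
    print(f"{name:<16} {net}  gateway {net.network_address + 1}")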

The VLANs and subnets that you deploy for VMware Cloud Foundation must conform to the following requirements, according to the VMware Cloud Foundation topology:


Figure 5-1. Choosing a VLAN Model for Host and Management VM Traffic

[Decision flow: Start choosing a management VLAN model. If separate security access is required for ESXi host and VM management, use separate VLANs for managing VMs and ESXi hosts. Otherwise, if isolating host traffic from VM management traffic is required, also use separate VLANs. If neither applies, use the same VLAN for managing VMs and ESXi hosts.]

Table 5-2. VLANs and Subnets for VMware Cloud Foundation

For each function, the first entry applies to VMware Cloud Foundation instances with a single availability zone and the second to instances with multiple availability zones.

VM management
- Single availability zone: Required. Highly available gateway within the instance.
- Multiple availability zones: Required. Must be stretched within the instance. Highly available gateway across availability zones within the instance.

Host management - first availability zone
- Single availability zone: Required. Highly available gateway within the instance.
- Multiple availability zones: Required. Highly available gateway across availability zones within the instance.

vSphere vMotion - first availability zone
- Single availability zone: Required. Highly available gateway within the instance.
- Multiple availability zones: Required. Highly available gateway in first availability zone within the instance.

vSAN - first availability zone
- Single availability zone: Required. Highly available gateway within the instance.
- Multiple availability zones: Required. Highly available gateway in first availability zone within the instance.

Host overlay - first availability zone
- Single availability zone: Required. Highly available gateway within the instance.
- Multiple availability zones: Required. Highly available gateway in first availability zone within the instance.

Uplink01
- Single availability zone: Required. Gateway optional.
- Multiple availability zones: Required. Gateway optional. Must be stretched within the instance.

Uplink02
- Single availability zone: Required. Gateway optional.
- Multiple availability zones: Required. Gateway optional. Must be stretched within the instance.

Edge overlay
- Single availability zone: Required. Highly available gateway within the instance.
- Multiple availability zones: Required. Must be stretched within the instance. Highly available gateway across availability zones within the instance.

Host management - second availability zone
- Single availability zone: Not required.
- Multiple availability zones: Required. Highly available gateway in second availability zone within the instance.

vSphere vMotion - second availability zone
- Single availability zone: Not required.
- Multiple availability zones: Required. Highly available gateway in second availability zone within the instance.

vSAN - second availability zone
- Single availability zone: Not required.
- Multiple availability zones: Required. Highly available gateway in second availability zone within the instance.

Host overlay - second availability zone
- Single availability zone: Not required.
- Multiple availability zones: Required. Highly available gateway in second availability zone within the instance.

Edge RTEP
- Single availability zone: Required for NSX Federation only. Highly available gateway within the instance.
- Multiple availability zones: Required for NSX Federation only. Must be stretched within the instance. Highly available gateway across availability zones within the instance.

Management and Witness - witness appliance at a third location
- Single availability zone: Not required.
- Multiple availability zones: Required. Highly available gateway at the witness location.

Leaf-Spine Physical Network Design Requirements and Recommendations for VMware Cloud Foundation
Leaf-Spine is the default data center network deployment topology used for VMware Cloud
Foundation. Consider network bandwidth, trunk port configuration, jumbo frames and routing
configuration for NSX in a deployment with a single or multiple VMware Cloud Foundation
instances.

Leaf-Spine Physical Network Logical Design


Each ESXi host is connected redundantly to the top-of-rack (ToR) switches of the SDDC network
fabric by two 25-GbE ports. The ToR switches are configured to provide all necessary VLANs
using an 802.1Q trunk. These redundant connections use features in vSphere Distributed Switch
and NSX to guarantee that no physical interface is overrun and available redundant paths are
used.

Figure 5-2. Leaf-Spine Physical Network Logical Design

[Figure: ESXi hosts connect redundantly to pairs of leaf (ToR) switches, which connect to the spine layer of the data center fabric.]

Leaf-Spine Physical Network Design Requirements and Recommendations
The requirements and recommendations for the leaf-spine network configuration determine the
physical layout and use of VLANs. They also include requirements and recommendations on
jumbo frames, and on network-related requirements such as DNS and NTP.


Table 5-3. Leaf-Spine Physical Network Design Requirements for VMware Cloud Foundation

VCF-NET-REQD-CFG-001
Design Requirement: Do not use EtherChannel (LAG, LACP, or vPC) configuration for ESXi host uplinks.
Justification: Simplifies configuration of top of rack switches. Teaming options available with vSphere Distributed Switch provide load balancing and failover. EtherChannel implementations might have vendor-specific limitations.
Implication: None.

VCF-NET-REQD-CFG-002
Design Requirement: Use VLANs to separate physical network functions.
Justification: Supports physical network connectivity without requiring many NICs. Isolates the different network functions in the SDDC so that you can have differentiated services and prioritized traffic as needed.
Implication: Requires uniform configuration and presentation on all the trunks that are made available to the ESXi hosts.

VCF-NET-REQD-CFG-003
Design Requirement: Configure the VLANs as members of a 802.1Q trunk.
Justification: All VLANs become available on the same physical network adapters on the ESXi hosts.
Implication: Optionally, the management VLAN can act as the native VLAN.

VCF-NET-REQD-CFG-004
Design Requirement: Set the MTU size to at least 1,700 bytes (recommended 9,000 bytes for jumbo frames) on the physical switch ports, vSphere Distributed Switches, vSphere Distributed Switch port groups, and N-VDS switches that support the following traffic types: Overlay (Geneve), vSAN, and vSphere vMotion.
Justification: Improves traffic throughput. Supports Geneve by increasing the MTU size to a minimum of 1,600 bytes. Geneve is an extensible protocol, so the MTU size might increase with future capabilities. While 1,600 bytes is sufficient, an MTU size of 1,700 bytes provides more room for increasing the Geneve MTU size without the need to change the MTU size of the physical infrastructure.
Implication: When adjusting the MTU packet size, you must also configure the entire network path (VMkernel network adapters, virtual switches, physical switches, and routers) to support the same MTU packet size. In an environment with multiple availability zones, the MTU must be configured on the entire network path between the zones.
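As a rough illustration of the Geneve headroom reasoning in VCF-NET-REQD-CFG-004, this sketch checks whether a planned MTU leaves room for encapsulating standard 1,500-byte guest frames. The overhead constant is an assumption for illustration, not an exact protocol figure.

# Assumed values in bytes; actual Geneve overhead varies with options.
GUEST_MTU = 1500       # standard workload frame size
GENEVE_HEADROOM = 100  # assumed allowance for encapsulation headers

def validate_mtu(physical_mtu: int) -> None:
    required = GUEST_MTU + GENEVE_HEADROOM  # matches the 1,600-byte Geneve minimum
    if physical_mtu < required:
        raise ValueError(f"MTU {physical_mtu} is below the Geneve minimum of {required}")
    print(f"MTU {physical_mtu}: OK, {physical_mtu - required} bytes of spare headroom")

validate_mtu(1700)  # design requirement: at least 1,700 bytes
validate_mtu(9000)  # recommended jumbo frame size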


Table 5-4. Leaf-Spine Physical Network Design Requirements for NSX Federation in VMware Cloud Foundation

VCF-NET-REQD-CFG-005
Design Requirement: Set the MTU size to at least 1,500 bytes (1,700 bytes preferred; 9,000 bytes recommended for jumbo frames) on the components of the physical network between the VMware Cloud Foundation instances for the NSX Edge RTEP traffic.
Justification: Jumbo frames are not required between VMware Cloud Foundation instances. However, increased MTU improves traffic throughput. Increasing the RTEP MTU to 1,700 bytes minimizes fragmentation for standard-size workload packets between VMware Cloud Foundation instances.
Implication: When adjusting the MTU packet size, you must also configure the entire network path, that is, virtual interfaces, virtual switches, physical switches, and routers, to support the same MTU packet size.

VCF-NET-REQD-CFG-006
Design Requirement: Ensure that the latency between VMware Cloud Foundation instances that are connected in an NSX Federation is less than 500 ms.
Justification: A latency lower than 500 ms is required for NSX Federation.
Implication: None.

VCF-NET-REQD-CFG-007
Design Requirement: Provide a routed connection between the NSX Manager clusters in VMware Cloud Foundation instances that are connected in an NSX Federation.
Justification: Configuring NSX Federation requires connectivity between the NSX Global Manager instances, NSX Local Manager instances, and NSX Edge clusters.
Implication: You must assign unique routable IP addresses for each fault domain.

Table 5-5. Leaf-Spine Physical Network Design Recommendations for VMware Cloud Foundation

VCF-NET-RCMD-CFG-001
Design Recommendation: Use two ToR switches for each rack.
Justification: Supports the use of two 10-GbE (25-GbE or greater recommended) links to each server, provides redundancy, and reduces the overall design complexity.
Implication: Requires two ToR switches per rack, which might increase costs.

VCF-NET-RCMD-CFG-002
Design Recommendation: Implement the following physical network architecture: one 25-GbE (10-GbE minimum) port on each ToR switch for ESXi host uplinks (Host to ToR), and a Layer 3 device that supports BGP.
Justification: Provides availability during a switch failure. Provides support for the BGP dynamic routing protocol.
Implication: Might limit the hardware choices. Requires dynamic routing protocol configuration in the physical network.

VCF-NET-RCMD-CFG-003
Design Recommendation: Use a physical network that is configured for BGP routing adjacency.
Justification: Supports design flexibility for routing multi-site and multi-tenancy workloads. BGP is the only dynamic routing protocol that is supported for NSX Federation. Supports failover between ECMP Edge uplinks.
Implication: Requires BGP configuration in the physical network.

VCF-NET-RCMD-CFG-004
Design Recommendation: Assign persistent IP configurations for NSX tunnel endpoints (TEPs) that use static IP pools instead of dynamic IP pool addressing.
Justification: Ensures that endpoints have a persistent TEP IP address. In VMware Cloud Foundation, TEP IP assignment by using static IP pools is recommended for all topologies. This configuration removes any requirement for external DHCP services.
Implication: If you add more hosts to the cluster, expanding the static IP pools might be required.

VCF-NET-RCMD-CFG-005
Design Recommendation: Configure the trunk ports connected to ESXi NICs as trunk PortFast.
Justification: Reduces the time to transition ports over to the forwarding state.
Implication: Although this design does not use the STP, switches usually have STP configured by default.

VCF-NET-RCMD-CFG-006
Design Recommendation: Configure VRRP, HSRP, or another Layer 3 gateway availability method for the Management and Edge overlay networks.
Justification: Ensures that the VLANs that are stretched between availability zones are connected to a highly available gateway. Otherwise, a failure in the Layer 3 gateway will cause disruption in the traffic in the SDN setup.
Implication: Requires configuration of a high availability technology for the Layer 3 gateways in the data center.
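When following VCF-NET-RCMD-CFG-004, it helps to verify up front that the static pool covers planned growth. The following sketch makes illustrative assumptions: two TEPs per host (one per overlay uplink) and a hypothetical /24 pool.

import ipaddress

hosts = 8
teps_per_host = 2    # assumption: one TEP per host overlay uplink
growth_headroom = 8  # spare hosts you expect to add later

needed = (hosts + growth_headroom) * teps_per_host
pool = ipaddress.ip_network("172.16.14.0/24")  # hypothetical host overlay subnet
usable = pool.num_addresses - 3  # exclude network, broadcast, and gateway addresses

print(f"TEP addresses needed: {needed}, usable in {pool}: {usable}")
assert usable >= needed, "expand the static IP pool before adding hosts"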

Table 5-6. Leaf-Spine Physical Network Design Recommendations for NSX Federation in VMware Cloud Foundation

VCF-NET-RCMD-CFG-007
Design Recommendation: Provide BGP routing between all VMware Cloud Foundation instances that are connected in an NSX Federation setup.
Justification: BGP is the supported routing protocol for NSX Federation.
Implication: None.

VCF-NET-RCMD-CFG-008
Design Recommendation: Ensure that the latency between VMware Cloud Foundation instances that are connected in an NSX Federation is less than 150 ms for workload mobility.
Justification: A latency lower than 150 ms is required for cross vCenter Server vMotion.
Implication: None.
vSAN Design for VMware Cloud Foundation
VMware Cloud Foundation uses VMware vSAN as the principal storage type for the management domain, and vSAN is recommended as principal storage for VI workload domains. You must determine the size of the compute and storage resources for the vSAN storage, and the configuration of the network carrying vSAN traffic. For multiple availability zones, you extend the resource size and determine the configuration of the vSAN witness host.

Read the following topics next:

- Logical Design for vSAN for VMware Cloud Foundation
- Hardware Configuration for vSAN for VMware Cloud Foundation
- Network Design for vSAN for VMware Cloud Foundation
- vSAN Witness Design for VMware Cloud Foundation
- vSAN Design Requirements and Recommendations for VMware Cloud Foundation

Logical Design for vSAN for VMware Cloud Foundation


vSAN is a cost-efficient storage technology that provides a simple storage management user
experience, and permits a fully automated initial deployment of VMware Cloud Foundation. It also
provides support for future storage expansion and implementation of vSAN stretched clusters in
a workload domain.


Figure 6-1. vSAN Logical Design for VMware Cloud Foundation

[Figure: a vSphere cluster of ESXi hosts presents a principal vSAN datastore, with optional supplemental datastores, to virtual machines. vSAN provides software-defined storage through policy-based storage management, virtualized data services, and hypervisor storage abstraction over the physical disks (flash, NVMe, SATA).]

Table 6-1. vSAN Logical Design

Management domain (default cluster)
- Instances with a single availability zone: Four nodes minimum.
- Instances with multiple availability zones: Must be stretched first. Eight nodes minimum, equally distributed across availability zones. vSAN witness appliance in a third fault domain.

Management domain (additional clusters)
- Instances with a single availability zone: Three nodes minimum; four nodes minimum is recommended for higher availability.
- Instances with multiple availability zones: Six nodes minimum, equally distributed across availability zones; eight nodes minimum is recommended for higher availability. vSAN witness appliance in a third fault domain.

VI workload domain (all clusters)
- Instances with a single availability zone: Three nodes minimum; four nodes minimum is recommended for higher availability.
- Instances with multiple availability zones: Six nodes minimum, equally distributed across availability zones; eight nodes minimum is recommended for higher availability. vSAN witness appliance in a third fault domain.
Hardware Configuration for vSAN for VMware Cloud Foundation
Determine the vSAN architecture and the storage controllers for performance and stability
according to the requirements of the management components of VMware Cloud Foundation.


Figure 6-2. Choosing a vSAN Architecture Model

[Decision flow: Start choosing an architecture model for vSAN. If you need vSAN stretched clusters, use vSAN OSA. Otherwise, if 25-GbE networking is a constraint, use vSAN OSA. Otherwise, if a hybrid vSAN configuration is required, use vSAN OSA; if not, use vSAN ESA.]
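The same decision flow can be expressed as a small predicate. This is a sketch of the Figure 6-2 logic only; the three boolean inputs correspond to the questions in the flowchart.

def vsan_architecture(needs_stretched_cluster: bool,
                      limited_to_10gbe: bool,
                      hybrid_required: bool) -> str:
    # Mirrors Figure 6-2: any "yes" answer rules out vSAN ESA.
    if needs_stretched_cluster or limited_to_10gbe or hybrid_required:
        return "vSAN OSA"
    return "vSAN ESA"

print(vsan_architecture(False, False, False))  # -> vSAN ESA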

vSAN Physical Requirements and Dependencies


vSAN has the following requirements and options:

- vSAN Original Storage Architecture (OSA) as hybrid storage or all-flash storage.
  - A vSAN hybrid storage configuration requires both magnetic devices and flash caching devices. The cache tier must be at least 10% of the size of the capacity tier.
  - An all-flash vSAN configuration requires flash devices for both the caching and capacity tiers.
  - Use VMware vSAN ReadyNodes, or build your own from hardware in the VMware Compatibility Guide.
- vSAN Express Storage Architecture (ESA)
  - All storage devices claimed by vSAN contribute to capacity and performance. Each host's storage devices claimed by vSAN form a storage pool. The storage pool represents the amount of caching and capacity provided by the host to the vSAN datastore.
  - ESXi hosts must be on the vSAN ESA Ready Node HCL with a minimum of 512 GB RAM per host.

Note vSAN ESA stretched clusters are not supported by VMware Cloud Foundation.

For best practices, capacity considerations, and general recommendations about designing and sizing a vSAN cluster, see the VMware vSAN Design and Sizing Guide.
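As a worked example of the hybrid cache rule above (cache tier at least 10% of the capacity tier), consider the following sketch; the drive counts and sizes are assumptions for illustration.

# The vSAN OSA hybrid rule stated above: flash cache tier >= 10% of capacity tier.
def min_cache_gb(capacity_tier_gb: float) -> float:
    return capacity_tier_gb * 0.10

capacity_per_host_gb = 8 * 1200  # assumption: eight 1.2-TB magnetic capacity disks
print(f"Capacity tier {capacity_per_host_gb} GB -> cache tier >= "
      f"{min_cache_gb(capacity_per_host_gb):.0f} GB per host")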

Network Design for vSAN for VMware Cloud Foundation


In the network design for vSAN in VMware Cloud Foundation, you determine the network
configuration for vSAN traffic.

Consider the overall traffic bandwidth and decide how to isolate storage traffic.

- Consider how much vSAN data traffic is running between ESXi hosts.
- The amount of storage traffic depends on the number of VMs that are running in the cluster, and on how write-intensive the I/O process is for the applications running in the VMs.

For information on the physical network setup for vSAN traffic, and other system traffic, see
Chapter 5 Physical Network Infrastructure Design for VMware Cloud Foundation.

For information on the virtual network setup for vSAN traffic, and other system traffic, see
Logical vSphere Networking Design for VMware Cloud Foundation.

The vSAN network design includes these components.

Table 6-2. Components of vSAN Network Design

Physical NIC speed
For best and predictable performance (IOPS) of the environment, this design uses a minimum of a 10-GbE connection, with 25-GbE recommended, for use with vSAN OSA all-flash configurations. For vSAN ESA, a 25-GbE connection is recommended.

VMkernel network adapters for vSAN
The vSAN VMkernel network adapter on each ESXi host is created when you enable vSAN on the cluster. Connect the vSAN VMkernel network adapters on all ESXi hosts in a cluster to a dedicated distributed port group, including ESXi hosts that are not contributing storage resources to the cluster.

VLAN
All storage traffic should be isolated on its own VLAN. When a design uses multiple vSAN clusters, each cluster should use a dedicated VLAN or segment for its traffic. This approach increases security, prevents interference between clusters, and helps with troubleshooting cluster configuration. If a cluster spans a rack, the vSAN VLAN can be allocated per rack to enable Layer 3 multi-rack deployments.

Jumbo frames
vSAN traffic can be handled by using jumbo frames. Use jumbo frames for vSAN traffic only if the physical environment is already configured to support them, they are part of the existing design, or if the underlying configuration does not create a significant amount of added complexity to the design.

vSAN Witness Design for VMware Cloud Foundation


The vSAN witness appliance is a specialized ESXi installation that provides quorum and
tiebreaker services for stretched clusters in VMware Cloud Foundation.

vSAN Witness Deployment Specification


You must deploy a witness ESXi host when using vSAN in a stretched cluster configuration. This
appliance must be deployed in a third location that is not local to the ESXi hosts on either side of
the stretched cluster.

Table 6-3. vSAN Witness Appliance Sizing Considerations

Appliance Size | Maximum Number of Supported Virtual Machines | Maximum Number of Supported Witness Components
Tiny | 10 | 750
Medium | 500 | 21,000
Large | More than 500 | 45,000
Extra Large | More than 500 | 64,000
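A rough helper can map a projected stretched-cluster scale to an appliance size; the thresholds below are transcribed from Table 6-3, and the selection logic itself is an illustrative assumption.

def witness_size(vms: int, components: int) -> str:
    # Thresholds from Table 6-3; pick the smallest size that fits both limits.
    if vms <= 10 and components <= 750:
        return "Tiny"
    if vms <= 500 and components <= 21_000:
        return "Medium"
    if components <= 45_000:
        return "Large"
    if components <= 64_000:
        return "Extra Large"
    raise ValueError("exceeds the supported witness component maximum")

print(witness_size(vms=200, components=15_000))  # -> Medium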

vSAN Witness Network Design


When using two availability zones, connect the vSAN witness appliance to the workload domain
vCenter Server so that you can perform the initial setup of the stretched cluster and have
workloads failover between the zones.


VMware Cloud Foundation uses vSAN witness traffic separation where you can use a VMkernel
adapter for vSAN witness traffic that is different from the adapter for vSAN data traffic. In this
design, you configure vSAN witness traffic in the following way:

n On each ESXi host in both availability zones, place the vSAN witness traffic on the
management VMkernel adapter.

n On the vSAN witness appliance, use the same VMkernel adapter for both management and
witness traffic.

For information about vSAN witness traffic separation, see vSAN Stretched Cluster Guide on
VMware Cloud Platform Tech Zone.

Management network

Routed to the management networks in both availability zones. Connect the first VMkernel
adapter of the vSAN witness appliance to this network. The second VMkernel adapter on the
vSAN witness appliance is not used.

Place the following traffic on this network:

- Management traffic
- vSAN witness traffic

Figure 6-3. vSAN Witness Network Design

[Figure: the vSAN witness appliance at a third witness site connects through its management VMkernel adapter to the witness management network. That network is routed, through physical upstream routers, to the management networks of the ESXi hosts in Availability Zone 1 and Availability Zone 2 and to the management domain vCenter Server; each zone also carries its own VI workload domain vSAN network.]

vSAN Design Requirements and Recommendations for VMware Cloud Foundation
Consider the requirements for using vSAN storage for standard and stretched clusters in VMware Cloud Foundation, such as required capacity, number of hosts, and storage policies, as well as the related best practices for having vSAN operate in an optimal way.

For related vSphere cluster requirements and recommendations, see vSphere Cluster Design
Requirements and Recommendations for VMware Cloud Foundation.

vSAN Design Requirements


You must meet the following design requirements for standard and stretched clusters in your
vSAN design for VMware Cloud Foundation.

Table 6-4. vSAN Design Requirements for VMware Cloud Foundation

VCF-VSAN-REQD-CFG-001
Design Requirement: Provide sufficient raw capacity to meet the initial needs of the workload domain cluster.
Justification: Ensures that sufficient resources are present to create the workload domain cluster.
Implication: None.

VCF-VSAN-REQD-CFG-002
Design Requirement: Provide at least the required minimum number of hosts according to the cluster type.
Justification: Satisfies the requirements for storage availability.
Implication: None.

Table 6-5. vSAN ESA Design Requirements for VMware Cloud Foundation

VCF-VSAN-REQD-CFG-003
Design Requirement: Verify the hardware components used in your vSAN deployment are on the vSAN Hardware Compatibility List.
Justification: Prevents hardware-related failures during workload deployment.
Implication: Limits the number of compatible hardware configurations that can be used.


Table 6-6. vSAN Design Requirements for Stretched Clusters with VMware Cloud Foundation

VCF-VSAN-REQD-CFG-004
Design Requirement: Add the following setting to the default vSAN storage policy: Site disaster tolerance = Site mirroring - stretched cluster.
Justification: Provides the necessary protection for virtual machines in each availability zone, with the ability to recover from an availability zone outage.
Implication: You might need additional policies if third-party virtual machines are to be hosted in these clusters because their performance or availability requirements might differ from what the default VMware vSAN policy supports.

VCF-VSAN-REQD-CFG-005
Design Requirement: Configure two fault domains, one for each availability zone. Assign each host to their respective availability zone fault domain.
Justification: Fault domains are mapped to availability zones to provide logical host separation and ensure a copy of vSAN data is always available even when an availability zone goes offline.
Implication: You must provide additional raw storage when the site mirroring - stretched cluster option is selected, and fault domains are enabled.

VCF-VSAN-REQD-CFG-006
Design Requirement: Use vSAN OSA to create a stretched cluster.
Justification: Stretched clusters on top of vSAN ESA are not supported by VMware Cloud Foundation.
Implication: None.

VCF-VSAN-REQD-CFG-007
Design Requirement: Configure an individual vSAN storage policy for each stretched cluster.
Justification: The vSAN storage policy of a stretched cluster cannot be shared with other clusters.
Implication: You must configure additional vSAN storage policies.

VCF-VSAN-WTN-REQD-CFG-001
Design Requirement: Deploy a vSAN witness appliance in a location that is not local to the ESXi hosts in any of the availability zones.
Justification: Ensures availability of vSAN witness components in the event of a failure of one of the availability zones.
Implication: You must provide a third physically separate location that runs a vSphere environment. You might use a VMware Cloud Foundation instance in a separate physical location.

VCF-VSAN-WTN-REQD-CFG-002
Design Requirement: Deploy a witness appliance that corresponds to the required cluster capacity.
Justification: Ensures the witness appliance is sized to support the projected workload storage consumption.
Implication: The vSphere environment at the witness location must satisfy the resource requirements of the witness appliance.

VCF-VSAN-WTN-REQD-CFG-003
Design Requirement: Connect the first VMkernel adapter of the vSAN witness appliance to the management network in the witness site.
Justification: Enables connecting the witness appliance to the workload domain vCenter Server.
Implication: The management networks in both availability zones must be routed to the management network in the witness site.

VCF-VSAN-WTN-REQD-CFG-004
Design Requirement: Allocate a statically assigned IP address and host name to the management adapter of the vSAN witness appliance.
Justification: Simplifies maintenance and tracking, and implements a DNS configuration.
Implication: Requires precise IP address management.

VCF-VSAN-WTN-REQD-CFG-005
Design Requirement: Configure forward and reverse DNS records for the vSAN witness appliance for the VMware Cloud Foundation instance.
Justification: Enables connecting the vSAN witness appliance to the workload domain vCenter Server by FQDN instead of IP address.
Implication: You must provide DNS records for the vSAN witness appliance.

VCF-VSAN-WTN-REQD-CFG-006
Design Requirement: Configure time synchronization by using an internal NTP time source for the vSAN witness appliance.
Justification: Prevents any failures in the stretched cluster configuration that are caused by time mismatch between the vSAN witness appliance and the ESXi hosts in both availability zones and the workload domain vCenter Server.
Implication: An operational NTP service must be available in the environment. All firewalls between the vSAN witness appliance and the NTP servers must allow NTP traffic on the required network ports.
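The site mirroring policy required above roughly doubles raw capacity needs before any per-zone protection is counted. The following sketch is an illustration only, assuming RAID-1 with FTT=1 (a 2x overhead) as the secondary level of protection inside each availability zone.

def stretched_raw_capacity_tb(usable_tb: float,
                              intra_zone_overhead: float = 2.0) -> float:
    # One full copy of the data per availability zone fault domain, each
    # copy protected again inside its zone per the secondary FTT policy.
    site_copies = 2
    return usable_tb * site_copies * intra_zone_overhead

print(f"{stretched_raw_capacity_tb(20):.0f} TB raw for 20 TB of usable data")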

vSAN Design Recommendations


In your vSAN design for VMware Cloud Foundation, you can apply certain best practices for
standard and stretched clusters.

Table 6-7. vSAN Design Recommendations for VMware Cloud Foundation

VCF-VSAN-RCMD-CFG-001
Design Recommendation: Provide sufficient raw capacity to meet the planned needs of the workload domain cluster.
Justification: Ensures that sufficient resources are present in the workload domain cluster, preventing the need to expand the vSAN datastore in the future.
Implication: None.

VCF-VSAN-RCMD-CFG-002
Design Recommendation: Ensure that at least 30% of free space is always available on the vSAN datastore.
Justification: This reserved capacity is set aside for host maintenance mode data evacuation, component rebuilds, rebalancing operations, and VM snapshots.
Implication: Increases the amount of available storage needed.

VCF-VSAN-RCMD-CFG-003
Design Recommendation: Use the default VMware vSAN storage policy.
Justification: Provides the level of redundancy that is needed in the workload domain cluster. Provides the level of performance that is enough for the individual workloads.
Implication: You might need additional policies for third-party virtual machines hosted in these clusters because their performance or availability requirements might differ from what the default VMware vSAN policy supports.

VCF-VSAN-RCMD-CFG-004
Design Recommendation: Leave the default virtual machine swap file as a sparse object on vSAN.
Justification: Sparse virtual swap files consume capacity on vSAN only as they are accessed. As a result, you can reduce the consumption on the vSAN datastore if virtual machines do not experience memory over-commitment, which would require the use of the virtual swap file.
Implication: None.

VCF-VSAN-RCMD-CFG-005
Design Recommendation: Use the existing vSphere Distributed Switch instance for the workload domain cluster.
Justification: Reduces the complexity of the network design. Reduces the number of physical NICs required.
Implication: All traffic types can be shared over common uplinks.

VCF-VSAN-RCMD-CFG-006
Design Recommendation: Configure jumbo frames on the VLAN for vSAN traffic.
Justification: Simplifies configuration because jumbo frames are also used to improve the performance of vSphere vMotion and NFS storage traffic. Reduces the CPU overhead, resulting in high network usage.
Implication: Every device in the network must support jumbo frames.

VCF-VSAN-RCMD-CFG-007
Design Recommendation: Configure vSAN in an all-flash configuration in the default workload domain cluster.
Justification: Meets the performance needs of the default workload domain cluster.
Implication: All vSAN disks must be flash disks, which might cost more than magnetic disks.
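The 30% free-space recommendation translates directly into a sizing rule: divide the projected usage by 0.7 to get the raw datastore size. A minimal sketch, with a hypothetical 35-TB workload footprint:

SLACK = 0.30  # fraction reserved for rebuilds, rebalancing, evacuations, snapshots

def required_datastore_tb(projected_usage_tb: float) -> float:
    return projected_usage_tb / (1 - SLACK)

print(f"{required_datastore_tb(35):.1f} TB raw for 35 TB of projected usage")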

Table 6-8. vSAN OSA Design Recommendations for VMware Cloud Foundation

VCF-VSAN-RCMD-CFG-008
Design Recommendation: Ensure that the storage I/O controller has a minimum queue depth of 256 set.
Justification: Storage controllers with lower queue depths can cause performance and stability problems when running vSAN. vSAN ReadyNode servers are configured with the correct queue depths for vSAN.
Implication: Limits the number of compatible I/O controllers that can be used for storage.

VCF-VSAN-RCMD-CFG-009
Design Recommendation: Do not use the storage I/O controllers that are running vSAN disk groups for another purpose.
Justification: Running non-vSAN disks, for example, VMFS, on a storage I/O controller that is running a vSAN disk group can impact vSAN performance.
Implication: If non-vSAN disks are required in ESXi hosts, you must have an additional storage I/O controller in the host.

VCF-VSAN-RCMD-CFG-010
Design Recommendation: Configure vSAN with a minimum of two disk groups per ESXi host.
Justification: Reduces the size of the fault domain and spreads the I/O load over more disks for better performance.
Implication: Using multiple disk groups requires more disks in each ESXi host.

VCF-VSAN-RCMD-CFG-011
Design Recommendation: For the cache tier in each disk group, use a flash-based drive that is at least 600 GB large.
Justification: Provides enough cache for both hybrid and all-flash vSAN configurations to buffer I/O and ensure disk group performance. Additional space in the cache tier does not increase performance.
Implication: Using larger flash disks can increase the initial host cost.


Table 6-9. vSAN ESA Design Recommendations for VMware Cloud Foundation

VCF-VSAN-RCMD-CFG-012
Design Recommendation: Activate auto-policy management.
Justification: Configures optimized storage policies based on the cluster type and the number of hosts in the cluster inventory. Changes to the number of hosts in the cluster or Host Rebuild Reserve will prompt you to make a suggested adjustment to the optimized storage policy.
Implication: You must activate auto-policy management manually.

VCF-VSAN-RCMD-CFG-013
Design Recommendation: Activate vSAN ESA compression.
Justification: Activated by default, it also improves performance.
Implication: PostgreSQL databases and other applications might use their own compression capabilities. In these cases, using a storage policy with the compression capability turned off will save CPU cycles. You can disable vSAN ESA compression for such workloads through the use of the Storage Policy Based Management (SPBM) framework.

VCF-VSAN-RCMD-CFG-014
Design Recommendation: Use NICs with a minimum 25-GbE capacity.
Justification: 10-GbE NICs will limit the scale and performance of a vSAN ESA cluster because performance requirements usually increase over the lifespan of the cluster.
Implication: Requires a 25-GbE or faster network fabric.


Table 6-10. vSAN Design Recommendations for Stretched Clusters with VMware Cloud Foundation

VCF-VSAN-WTN-RCMD-CFG-001
Design Recommendation: Configure the vSAN witness appliance to use the first VMkernel adapter, that is the management interface, for vSAN witness traffic.
Justification: Removes the requirement to have static routes on the witness appliance as witness traffic is routed over the management network.
Implication: The management networks in both availability zones must be routed to the management network in the witness site.

VCF-VSAN-WTN-RCMD-CFG-002
Design Recommendation: Place witness traffic on the management VMkernel adapter of all the ESXi hosts in the workload domain.
Justification: Separates the witness traffic from the vSAN data traffic. Witness traffic separation provides the following benefits: removes the requirement to have static routes from the vSAN networks in both availability zones to the witness site, and removes the requirement to have jumbo frames enabled on the path between each availability zone and the witness site because witness traffic can use a regular MTU size of 1500 bytes.
Implication: The management networks in both availability zones must be routed to the management network in the witness site.

vSphere Design for VMware Cloud Foundation
The vSphere design includes determining the configuration of the vCenter Server instances, ESXi hosts, vSphere clusters, and vSphere networking for a VMware Cloud Foundation environment.

Read the following topics next:

- ESXi Design for VMware Cloud Foundation
- vCenter Server Design for VMware Cloud Foundation
- vSphere Cluster Design for VMware Cloud Foundation
- vSphere Networking Design for VMware Cloud Foundation

ESXi Design for VMware Cloud Foundation


In the design of the ESXi host configuration for your VMware Cloud Foundation environment,
consider the resources, networking, and security policies that are required to support the virtual
machines in each workload domain cluster.

- Logical Design for ESXi for VMware Cloud Foundation
  In the logical design for ESXi, you determine the high-level integration of the ESXi hosts with the other components of the VMware Cloud Foundation instance for providing virtual infrastructure to management and workload components.

- Sizing Considerations for ESXi for VMware Cloud Foundation
  You decide on the number of ESXi hosts per cluster and the number of physical disks per ESXi host.

- ESXi Design Requirements and Recommendations for VMware Cloud Foundation
  The requirements for the ESXi hosts in a workload domain in VMware Cloud Foundation are related to the system requirements of the workloads hosted in the domain. The ESXi requirements include number, server configuration, amount of hardware resources, networking, and certificate management. Similar best practices help you design optimal environment operation.


Logical Design for ESXi for VMware Cloud Foundation


In the logical design for ESXi, you determine the high-level integration of the ESXi hosts with the
other components of the VMware Cloud Foundation instance for providing virtual infrastructure
to management and workload components.

To provide the resources required to run the management and workload components of the VMware Cloud Foundation instance, each ESXi host consists of the following elements:

- CPU and memory
- Storage devices
- Out of band management interface
- Network interfaces

Figure 7-1. ESXi Logical Design for VMware Cloud Foundation

[Figure: each ESXi host combines compute (CPU, memory), storage (local vSAN storage or non-local SAN/NAS storage), and network connectivity (NIC 1 and NIC 2 uplinks plus an out-of-band management uplink).]

Sizing Considerations for ESXi for VMware Cloud Foundation


You decide on the number of ESXi hosts per cluster and the number of physical disks per ESXi
host.

For detailed sizing based on the overall profile of the VMware Cloud Foundation instance you
plan to deploy, see VMware Cloud Foundation Planning and Preparation Workbook.


The configuration and assembly process for each system should be standardized, with all
components installed in the same manner on each ESXi host. Because standardization of
the physical configuration of the ESXi hosts removes variability, the infrastructure is easily
managed and supported. ESXi hosts are deployed with identical configuration across all cluster
members, including storage and networking configurations. For example, consistent PCIe card
slot placement, especially for network interface controllers, is essential for accurate mapping
of physical network interface controllers to virtual network resources. By using identical
configurations, you have an even balance of virtual machine storage components across storage
and compute resources.


Table 7-1. ESXi Server Sizing Considerations by Hardware Element

CPU
- Total CPU requirements for the workloads that are running in the cluster.
- Host failure and maintenance scenarios.
- Keep the overcommitment ratio of vCPU to pCPU less than or equal to 2:1 for the management domain and less than or equal to 8:1 for VI workload domains.
- Additional third-party management components.
- Number of physical cores, not logical cores. Simultaneous multithreading (SMT) technologies in CPUs, such as hyper-threading in Intel CPUs, improve CPU performance by allowing multiple threads to run in parallel on the same CPU core. Although a single CPU core can be viewed as two logical cores, the performance enhancement will not be equivalent to 100% more CPU power, and it will differ from one environment to another.

Memory
- Total memory requirements for the workloads that are running in the cluster.
- When sizing memory for the ESXi hosts in a cluster, set the admission control setting to N+1, which reserves the resources of one host for failover or maintenance.
- vSAN OSA: Consider the number of vSAN disk groups and disks on an ESXi host. To support the maximum number of disk groups, you must provide 32 GB of RAM. For more information about disk groups, including design and sizing guidance, see Administering VMware vSAN in the vSphere documentation.
- vSAN ESA: You must provide 512 GB of RAM.

Storage
- Use a high-endurance device such as a hard drive or SSD for the boot device.
- Use a 128-GB boot device to maximize the space available for ESX-OSData.
- vSAN OSA: Provide at least one 600-GB cache disk and use a minimum of two capacity disks.
- vSAN ESA: Use a minimum of two NVMe devices.
- Use hosts with a homogeneous configuration.
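The CPU guidance above can be turned into simple arithmetic. The sketch below uses hypothetical workload numbers, counts physical cores only (ignoring SMT gains, per the guidance), and reserves one host for failover in line with N+1 admission control.

def physical_cores_per_host(total_vcpus: int, overcommit: float, hosts: int) -> float:
    usable_hosts = hosts - 1  # N+1: one host reserved for failover/maintenance
    return total_vcpus / overcommit / usable_hosts

# Hypothetical management domain: 96 vCPUs, 4 hosts, 2:1 ratio.
print(f"{physical_cores_per_host(96, 2.0, 4):.0f} cores per host minimum")
# Hypothetical VI workload domain: 640 vCPUs, 5 hosts, 8:1 ratio.
print(f"{physical_cores_per_host(640, 8.0, 5):.0f} cores per host minimum")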

ESXi Design Requirements and Recommendations for VMware Cloud Foundation

The requirements for the ESXi hosts in a workload domain in VMware Cloud Foundation are related to the system requirements of the workloads hosted in the domain. The ESXi requirements include number, server configuration, amount of hardware resources, networking, and certificate management. Similar best practices help you design optimal environment operation.

ESXi Server Design Requirements


You must meet the following design requirements for the ESXi hosts in a workload domain in a
VMware Cloud Foundation deployment.

Table 7-2. Design Requirements for ESXi Server Hardware

VCF-ESX-REQD-CFG-001
Design Requirement: Install no less than the minimum number of ESXi hosts required for the cluster type being deployed.
Justification: Ensures availability requirements are met. If one of the hosts is not available because of a failure or maintenance event, the CPU overcommitment ratio becomes 2:1.
Implication: None.

VCF-ESX-REQD-CFG-002
Design Requirement: Ensure each ESXi host matches the required CPU, memory, and storage specification.
Justification: Ensures workloads will run without contention even during failure and maintenance conditions.
Implication: Assemble the server specification and number according to the sizing in VMware Cloud Foundation Planning and Preparation Workbook, which is based on projected deployment size.

VCF-ESX-REQD-SEC-001
Design Requirement: Regenerate the certificate of each ESXi host after assigning the host an FQDN.
Justification: Establishes a secure connection with VMware Cloud Builder during the deployment of a workload domain and prevents man-in-the-middle (MiTM) attacks.
Implication: You must manually regenerate the certificates of the ESXi hosts before the deployment of a workload domain.

ESXi Server Design Recommendations


In your ESXi host design for VMware Cloud Foundation, you can apply certain best practices.


Table 7-3. Design Recommendations for ESXi Server Hardware

VCF-ESX-RCMD-CFG-001
Recommendation: Use vSAN ReadyNodes with vSAN storage for each ESXi host in the management domain.
Justification: Your management domain is fully compatible with vSAN at deployment. For information about the models of physical servers that are vSAN-ready, see vSAN Compatibility Guide for vSAN ReadyNodes.
Implication: Hardware choices might be limited. If you plan to use a server configuration that is not a vSAN ReadyNode, your CPU, disks, and I/O modules must be listed on the VMware Compatibility Guide under CPU Series and vSAN Compatibility List aligned to the ESXi version specified in VMware Cloud Foundation 5.1 Release Notes.

VCF-ESX-RCMD-CFG-002
Recommendation: Allocate hosts with uniform configuration across the default management vSphere cluster.
Justification: A balanced cluster has these advantages: predictable performance even during hardware failures, and minimal impact of resynchronization or rebuild operations on performance.
Implication: You must apply vendor sourcing, budgeting, and procurement considerations for uniform server nodes on a per cluster basis.

VCF-ESX-RCMD-CFG-003
Recommendation: When sizing CPU, do not consider multithreading technology and associated performance gains.
Justification: Although multithreading technologies increase CPU performance, the performance gain depends on running workloads and differs from one case to another.
Implication: Because you must provide more physical CPU cores, costs increase and hardware choices become limited.

VCF-ESX-RCMD-CFG-004
Recommendation: Install and configure all ESXi hosts in the default management cluster to boot using a 128-GB device or larger.
Justification: Provides hosts that have large memory, that is, greater than 512 GB, with enough space for the scratch partition when using vSAN.
Implication: None.

VCF-ESX-RCMD-CFG-005
Recommendation: Use the default configuration for the scratch partition on all ESXi hosts in the default management cluster.
Justification: If a failure in the vSAN cluster occurs, the ESXi hosts remain responsive and log information is still accessible. It is not possible to use the vSAN datastore for the scratch partition.
Implication: None.

VCF-ESX-RCMD-CFG-006
Recommendation: For workloads running in the default management cluster, save the virtual machine swap file at the default location.
Justification: Simplifies the configuration process.
Implication: Increases the amount of replication traffic for management workloads that are recovered as part of the disaster recovery process.

VCF-ESX-RCMD-NET-001
Recommendation: Place the ESXi hosts in each management domain cluster on a host management network that is separate from the VM management network.
Justification: Enables the separation of the physical VLAN between ESXi hosts and the other management components for security reasons.
Implication: Increases the number of VLANs required.

VCF-ESX-RCMD-NET-002
Recommendation: Place the ESXi hosts in each VI workload domain on a separate host management VLAN-backed network.
Justification: Enables the separation of the physical VLAN between the ESXi hosts in different VI workload domains for security reasons.
Implication: Increases the number of VLANs required. For each VI workload domain, you must allocate a separate management subnet.

VCF-ESX-RCMD-SEC-001
Recommendation: Deactivate SSH access on all ESXi hosts in the management domain by having the SSH service stopped and using the default SSH service policy "Start and stop manually".
Justification: Ensures compliance with the vSphere Security Configuration Guide and with security best practices. Disabling SSH access reduces the risk of security attacks on the ESXi hosts through the SSH interface.
Implication: You must activate SSH access manually for troubleshooting or support activities as VMware Cloud Foundation deactivates SSH on ESXi hosts after workload domain deployment.

VCF-ESX-RCMD-SEC-002
Recommendation: Set the advanced setting UserVars.SuppressShellWarning to 0 across all ESXi hosts in the management domain.
Justification: Ensures compliance with the vSphere Security Configuration Guide and with security best practices. Enables the warning message that appears in the vSphere Client every time SSH access is activated on an ESXi host.
Implication: You must turn off SSH enablement warning messages manually when performing troubleshooting or support activities.


vCenter Server Design for VMware Cloud Foundation


vCenter Server design considers the location, size, high availability, and identity domain isolation
of the vCenter Server instances for the workload domains in a VMware Cloud Foundation
environment.

- Logical Design for vCenter Server for VMware Cloud Foundation
  Each workload domain has a dedicated vCenter Server that manages the ESXi hosts running NSX Edge nodes and customer workloads. All vCenter Server instances run in the management domain.

- Sizing Considerations for vCenter Server for VMware Cloud Foundation
  You select an appropriate vCenter Server appliance size according to the scale of your environment.

- High Availability Design for vCenter Server for VMware Cloud Foundation
  Protecting vCenter Server is important because it is the central point of management and monitoring for each workload domain.

- vCenter Server Design Requirements and Recommendations for VMware Cloud Foundation
  Each workload domain in VMware Cloud Foundation is managed by a single vCenter Server instance. You determine the size of this vCenter Server instance and its storage requirements according to the number of ESXi hosts per cluster and the number of virtual machines you plan to run on these clusters.

- vCenter Single Sign-On Design Requirements for VMware Cloud Foundation
  vCenter Server instances for the VI workload domains in a VMware Cloud Foundation deployment can be either joined to the vCenter Single Sign-On domain of the vCenter Server instance for the management domain or deployed in isolated vCenter Single Sign-On domains.

Logical Design for vCenter Server for VMware Cloud Foundation


Each workload domain has a dedicated vCenter Server that manages the ESXi hosts running
NSX Edge nodes and customer workloads. All vCenter Server instances run in the management
domain.


Figure 7-2. Design of vCenter Server for VMware Cloud Foundation

[Figure: within a VCF instance, users access the management domain vCenter Server and the VI workload domain vCenter Server instances through the user interface and API. All vCenter Server appliances run in the management domain on the ESXi virtual infrastructure, together with supporting infrastructure such as shared storage, DNS, and NTP.]

Table 7-4. vCenter Server Layout

VMware Cloud Foundation instances with a single availability zone:
- One vCenter Server instance for the management domain that manages the management components of the SDDC, such as the vCenter Server instances for the VI workload domains, NSX Manager cluster nodes, SDDC Manager, and other solutions.
- Optionally, additional vCenter Server instances for the VI workload domains to support customer workloads.
- vSphere HA protecting all vCenter Server appliances.

VMware Cloud Foundation instances with multiple availability zones:
- One vCenter Server instance for the management domain that manages the management components of the SDDC, such as vCenter Server instances for the VI workload domains, NSX Manager cluster nodes, SDDC Manager, and other solutions.
- Optionally, additional vCenter Server instances for the VI workload domains to support customer workloads.
- vSphere HA protecting all vCenter Server appliances.
- A should-run-on-host-in-group VM-Host affinity rule in vSphere DRS specifying that the vCenter Server appliances should run in the primary availability zone unless an outage in this zone occurs.

Sizing Considerations for vCenter Server for VMware Cloud Foundation
You select an appropriate vCenter Server appliance size according to the scale of your
environment.

When you deploy a workload domain, you select a vCenter Server appliance size that is suitable for the scale of your environment. The option that you select determines the number of CPUs and the amount of memory of the appliance. For detailed sizing according to a collective profile of the VMware Cloud Foundation instance you plan to deploy, refer to the VMware Cloud Foundation Planning and Preparation Workbook.

Table 7-5. Sizing Considerations for vCenter Server

vCenter Server Appliance Size | Management Capacity
Tiny | Up to 10 hosts or 100 virtual machines
Small * | Up to 100 hosts or 1,000 virtual machines
Medium ** | Up to 400 hosts or 4,000 virtual machines
Large | Up to 1,000 hosts or 10,000 virtual machines
X-Large | Up to 2,000 hosts or 35,000 virtual machines

* Default for the management domain vCenter Server
** Default for VI workload domain vCenter Server instances
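Selecting the appliance size from Table 7-5 can be expressed as picking the smallest option whose capacity covers both the planned host count and VM count. A sketch using the table's values:

SIZES = [  # (name, max_hosts, max_vms), transcribed from Table 7-5
    ("Tiny", 10, 100),
    ("Small", 100, 1_000),
    ("Medium", 400, 4_000),
    ("Large", 1_000, 10_000),
    ("X-Large", 2_000, 35_000),
]

def appliance_size(hosts: int, vms: int) -> str:
    for name, max_hosts, max_vms in SIZES:
        if hosts <= max_hosts and vms <= max_vms:
            return name
    raise ValueError("exceeds X-Large capacity; consider additional vCenter instances")

print(appliance_size(hosts=64, vms=1_500))  # -> Medium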

High Availability Design for vCenter Server for VMware Cloud Foundation
Protecting vCenter Server is important because it is the central point of management and
monitoring for each workload domain.


VMware Cloud Foundation supports only vSphere HA as a high availability method for vCenter
Server.

Table 7-6. Methods for Protecting the vCenter Server Appliance

vSphere High Availability
Supported in VMware Cloud Foundation: Yes.
Considerations: None.

vCenter High Availability (vCenter HA)
Supported in VMware Cloud Foundation: No.
Considerations: vCenter Server services must fail over to the passive node, so there is no continuous availability. Recovery time can be up to 30 mins. You must meet additional networking requirements for the private network. vCenter HA requires additional resources for the Passive and Witness nodes. Life cycle management is complicated because you must manually delete and recreate the standby virtual machines during a life cycle management operation.

vSphere Fault Tolerance (vSphere FT)
Supported in VMware Cloud Foundation: No.
Considerations: The vCPU limit of vSphere FT would limit the vCenter Server appliance size to medium. You must provide a dedicated network.
vCenter Server Design Requirements and Recommendations for VMware Cloud Foundation
Each workload domain in VMware Cloud Foundation is managed by a single vCenter Server
instance. You determine the size of this vCenter Server instance and its storage requirements
according to the number of ESXi hosts per cluster and the number of virtual machines you plan
to run on these clusters.

vCenter Server Design Requirements for VMware Cloud Foundation


You allocate vCenter Server appliances according to the requirements for workload isolation,
scalability, and resilience to failures.


Table 7-7. vCenter Server Design Requirements for VMware Cloud Foundation

VCF-VCS-REQD-CFG-001
Design Requirement: Deploy a dedicated vCenter Server appliance for the management domain of the VMware Cloud Foundation instance.
Justification:
- Isolates vCenter Server failures to management or customer workloads.
- Isolates vCenter Server operations between management and customers.
- Supports a scalable cluster design where you can reuse the management components as more customer workloads are added to the SDDC.
- Simplifies capacity planning for customer workloads because you do not consider management workloads for the VI workload domain vCenter Server.
- Improves the ability to upgrade the vSphere environment and related components by enabling explicit separation of maintenance windows: management workloads remain available while you are upgrading the tenant workloads, and customer workloads remain available while you are upgrading the management nodes.
- Supports clear separation of roles and responsibilities to ensure that only administrators with granted authorization can control the management workloads.
- Facilitates quicker troubleshooting and problem resolution.
- Simplifies disaster recovery operations by supporting a clear separation between recovery of the management components and tenant workloads.
- Provides isolation of potential network issues by introducing network separation of the clusters in the SDDC.
Implication: Requires a separate license for the vCenter Server instance in the management domain.

VCF-VCS-REQD-NET-001
Design Requirement: Place all workload domain vCenter Server appliances on the VM management network in the management domain.
Justification: Simplifies IP addressing for management VMs by using the same VLAN and subnet. Provides simplified secure access to management VMs in the same VLAN network.
Implication: None.

vCenter Server Design Recommendations


In your vCenter Server design for VMware Cloud Foundation, you can apply certain best
practices for sizing and high availability.

Table 7-8. vCenter Server Design Recommendations for VMware Cloud Foundation

Recommendation ID: VCF-VCS-RCMD-CFG-001
Design Recommendation: Deploy an appropriately sized vCenter Server appliance for each workload domain.
Justification: Ensures resource availability and usage efficiency per workload domain.
Implication: The default size for a management domain is Small and for VI workload domains is Medium. To override these values, you must use the Cloud Builder API and the SDDC Manager API.

Recommendation ID: VCF-VCS-RCMD-CFG-002
Design Recommendation: Deploy a vCenter Server appliance with the appropriate storage size.
Justification: Ensures resource availability and usage efficiency per workload domain.
Implication: The default size for a management domain is Small and for VI workload domains is Medium. To override these values, you must use the API.

Recommendation ID: VCF-VCS-RCMD-CFG-003
Design Recommendation: Protect workload domain vCenter Server appliances by using vSphere HA.
Justification: vSphere HA is the only supported method to protect vCenter Server availability in VMware Cloud Foundation.
Implication: vCenter Server becomes unavailable during a vSphere HA failover.

Recommendation ID: VCF-VCS-RCMD-CFG-004
Design Recommendation: In vSphere HA, set the restart priority policy for the vCenter Server appliance to high.
Justification: vCenter Server is the management and control plane for physical and virtual infrastructure. In a vSphere HA event, to ensure the rest of the SDDC management stack comes up faultlessly, the workload domain vCenter Server must be available first, before the other management components come online.
Implication: If the restart priority for another virtual machine is set to highest, the connectivity delay for the management components will be longer.
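To make VCF-VCS-RCMD-CFG-004 concrete, the following minimal pyVmomi sketch raises the vSphere HA restart priority of a vCenter Server appliance VM to high. It is an illustration only, not a VMware Cloud Foundation workflow; the vCenter address, credentials, cluster name, and VM name are assumptions.

```python
# Minimal sketch, assuming illustrative names and credentials.
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",           # assumption
                  user="administrator@vsphere.local", pwd="***")
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Look up a managed object of the given type by display name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

cluster = find_by_name(vim.ClusterComputeResource, "mgmt-cluster-01")  # assumption
vcenter_vm = find_by_name(vim.VirtualMachine, "mgmt-vcenter-01")       # assumption

# Per-VM override of the cluster-wide vSphere HA restart priority.
vm_override = vim.cluster.DasVmConfigSpec(
    operation=vim.option.ArrayUpdateSpec.Operation.add,
    info=vim.cluster.DasVmConfigInfo(
        key=vcenter_vm,
        dasSettings=vim.cluster.DasVmSettings(restartPriority="high"),
    ),
)
spec = vim.cluster.ConfigSpecEx(dasVmConfigSpec=[vm_override])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```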


vCenter Server Design Recommendations for Stretched Clusters with VMware Cloud Foundation
The following additional design recommendations apply when using stretched clusters.

Table 7-9. vCenter Server Design Recommendations for vSAN Stretched Clusters with VMware Cloud Foundation

Recommendation ID: VCF-VCS-RCMD-CFG-005
Design Recommendation: Add the vCenter Server appliance to the virtual machine group for the first availability zone.
Justification: Ensures that, by default, the vCenter Server appliance is powered on a host in the first availability zone.
Implication: None.

vCenter Single Sign-On Design Requirements for VMware Cloud Foundation
vCenter Server instances for the VI workload domains in a VMware Cloud Foundation
deployment can be either joined to the vCenter Single Sign-On domain of the vCenter Server
instance for the management domain or deployed in isolated vCenter Single Sign-On domains.

You select the vCenter Single Sign-On topology according to the needs and design objectives of
your deployment.


Table 7-10. vCenter Single Sign-On Topologies for VMware Cloud Foundation

Topology: Single vCenter Server Instance - Single vCenter Single Sign-On Domain
vCenter Single Sign-On Domain Topology: One vCenter Single Sign-On domain with the management domain vCenter Server instance only.
Benefits: Enables a small environment where customer workloads run in the same cluster as the management domain components.
Drawbacks: None.

Topology: Multiple vCenter Server Instances - Single vCenter Single Sign-On Domain
vCenter Single Sign-On Domain Topology: One vCenter Single Sign-On domain with the management domain and all VI workload domain vCenter Server instances in enhanced linked mode (ELM) using a ring topology.
Benefits: Enables sharing of vCenter Server roles, tags, and licenses between all workload domain instances.
Drawbacks: Limited to 15 workload domains per VMware Cloud Foundation instance, including the management domain.

Topology: Multiple vCenter Server Instances - Multiple vCenter Single Sign-On Domains
vCenter Single Sign-On Domain Topology: One vCenter Single Sign-On domain with at least the management domain vCenter Server instance, and additional VI workload domains, each with their own isolated vCenter Single Sign-On domain.
Benefits: Enables isolation at the vCenter Single Sign-On domain layer for increased security separation. Supports up to 25 workload domains per VMware Cloud Foundation instance.
Drawbacks: Additional password management overhead per vCenter Single Sign-On domain.

Figure 7-3. Single vCenter Server Instance - Single vCenter Single Sign-On Domain
[Diagram: a VMware Cloud Foundation instance with one vCenter Single Sign-On domain that contains only the management domain vCenter Server.]

Because the Single vCenter Server Instance - Single vCenter Single Sign-On Domain topology
contains a single vCenter Server instance by definition, no relevant design requirements or
recommendations for vCenter Single Sign-On are needed.


Figure 7-4. Multiple vCenter Server Instances - Single vCenter Single Sign-On Domain
[Diagram: a VMware Cloud Foundation instance with one vCenter Single Sign-On domain in which the management domain vCenter Server and VI workload domain vCenter Servers 1, 2, and 3 are replication partners in a ring topology formed during domain creation.]


Table 7-11. Design Requirements for the Multiple vCenter Server Instance - Single vCenter Single Sign-On Domain Topology for VMware Cloud Foundation

Requirement ID: VCF-VCS-REQD-SSO-STD-001
Design Requirement: Join all vCenter Server instances within a VMware Cloud Foundation instance to a single vCenter Single Sign-On domain.
Justification: When all vCenter Server instances are in the same vCenter Single Sign-On domain, they can share authentication and license data across all components.
Implication:
- Only one vCenter Single Sign-On domain exists.
- The number of linked vCenter Server instances in the same vCenter Single Sign-On domain is limited to 15. Because each workload domain uses a dedicated vCenter Server instance, you can deploy up to 15 domains within each VMware Cloud Foundation instance.

Requirement ID: VCF-VCS-REQD-SSO-STD-002
Design Requirement: Create a ring topology between the vCenter Server instances within the VMware Cloud Foundation instance.
Justification: By default, one vCenter Server instance replicates only with another vCenter Server instance. This setup creates a single point of failure for replication. A ring topology ensures that each vCenter Server instance has two replication partners and removes any single point of failure.
Implication: None.


Figure 7-5. Multiple vCenter Server Instances - Multiple vCenter Single Sign-On Domain
[Diagram: a VMware Cloud Foundation instance in which the management domain vCenter Server and VI workload domain vCenter Servers 1, 2, and 3 share one vCenter Single Sign-On domain in a ring topology, while VI workload domain vCenter Servers 4 and 5 each reside in their own isolated vCenter Single Sign-On domain.]


Table 7-12. Design Requirements for the Multiple vCenter Server Instance - Multiple vCenter Single Sign-On Domain Topology for VMware Cloud Foundation

Requirement ID: VCF-VCS-REQD-SSO-ISO-001
Design Requirement: Create all vCenter Server instances within a VMware Cloud Foundation instance in their own unique vCenter Single Sign-On domains.
Justification:
- Enables isolation at the vCenter Single Sign-On domain layer for increased security separation.
- Supports up to 25 workload domains.
Implication:
- Each vCenter Server instance is managed through its own pane of glass using a different set of administrative credentials.
- You must manage password rotation for each vCenter Single Sign-On domain separately.

vSphere Cluster Design for VMware Cloud Foundation


vSphere cluster design must consider the requirements for standard, stretched and remote
clusters and for the life cycle management of the ESXi hosts in the clusters according to the
characteristics of the workloads.

- Logical vSphere Cluster Design for VMware Cloud Foundation: The cluster design must consider the characteristics of the workloads that are deployed in the cluster.

- vSphere Cluster Life Cycle Method Design for VMware Cloud Foundation: vSphere Lifecycle Manager is used to manage the vSphere clusters in each VI workload domain.

- vSphere Cluster Design Requirements and Recommendations for VMware Cloud Foundation: The design of a vSphere cluster is subject to a minimum number of hosts, design requirements, and design recommendations.

Logical vSphere Cluster Design for VMware Cloud Foundation


The cluster design must consider the characteristics of the workloads that are deployed in the
cluster.

When you design the cluster layout in vSphere, consider the following guidelines:

- Compare the capital costs of purchasing fewer, larger ESXi hosts with the costs of purchasing more, smaller ESXi hosts. Costs vary between vendors and models. Evaluate the risk of losing one larger host in a scaled-up cluster and the impact on the business with the higher chance of losing one or more smaller hosts in a scale-out cluster.

- Evaluate the operational costs of managing a few ESXi hosts with the costs of managing more ESXi hosts.

- Consider the purpose of the cluster.

- Consider the total number of ESXi hosts and cluster limits.

Figure 7-6. Logical vSphere Cluster Layout with a Single Availability Zone for VMware Cloud Foundation
[Diagram: a VCF instance in which the workload domain vCenter Server manages a vSphere cluster of four ESXi hosts.]


Figure 7-7. Logical vSphere Cluster Layout for Multiple Availability Zones for VMware Cloud Foundation
[Diagram: a VCF instance in which the workload domain vCenter Server manages a vSphere cluster whose ESXi hosts are distributed evenly across Availability Zone 1 and Availability Zone 2.]

Remote Cluster Design Considerations


Remote clusters are managed by the management infrastructure at the central site.

Table 7-13. Remote Cluster Design Considerations

- Number of hosts per remote cluster: minimum 3, maximum 16
- Number of remote clusters per VMware Cloud Foundation instance: maximum 8
- Number of remote clusters per VI workload domain: 1
- Cluster types per VI workload domain: a VI workload domain can include either local clusters or a remote cluster.
- Latency between the central site and the remote site: maximum 100 ms
- Bandwidth between the central site and the remote site: minimum 10 Mbps


vSphere Cluster Life Cycle Method Design for VMware Cloud Foundation
vSphere Lifecycle Manager is used to manage the vSphere clusters in each VI workload domain.

When you deploy a workload domain, you choose a vSphere cluster life cycle method according
to your requirements.

Table 7-14. vSphere Lifecycle Manager choices

Cluster Life Cycle Method: vSphere Lifecycle Manager images
Description: vSphere Lifecycle Manager images contain base images, vendor add-ons, firmware, and drivers.
Benefits:
- Supports vSAN stretched clusters.
- Supports VI workload domains with vSphere with Tanzu.
- Supports NVIDIA GPU-enabled clusters.
- Supports 2-node NFS, FC, or vVols clusters.
Drawbacks: An initial cluster image is required during workload domain or cluster deployment.

Cluster Life Cycle Method: vSphere Lifecycle Manager baselines
Description: An upgrade baseline contains the ESXi image, and a patch baseline contains the respective patches for ESXi hosts.
Benefits:
- Supports vSAN stretched clusters.
- Supports VI workload domains with vSphere with Tanzu.
Drawbacks:
- Not supported for NVIDIA GPU-enabled clusters.
- Not supported for 2-node NFS, FC, or vVols clusters.
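If you need to confirm after deployment which life cycle method a cluster uses, the vSphere Automation REST API exposes the vSphere Lifecycle Manager enablement state. The sketch below is illustrative only and is not a VMware Cloud Foundation workflow; the vCenter FQDN, credentials, and cluster identifier are assumptions.

```python
# Minimal sketch: check whether a cluster is managed with vSphere Lifecycle
# Manager images (vSphere 7 and later). All names are assumptions.
import requests

VC = "https://vcenter.example.com"   # assumption
s = requests.Session()
s.verify = False                     # lab only; validate certificates in production

# Create an API session; the response body is the session token string.
token = s.post(f"{VC}/api/session",
               auth=("administrator@vsphere.local", "***")).json()
s.headers["vmware-api-session-id"] = token

# "domain-c8" is an illustrative cluster managed object ID.
enablement = s.get(f"{VC}/api/esx/settings/clusters/domain-c8/enablement/software").json()
print("vSphere Lifecycle Manager images enabled:", enablement["enabled"])
```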

vSphere Cluster Design Requirements and Recommendations for VMware Cloud Foundation

The design of a vSphere cluster is subject to a minimum number of hosts, design requirements, and design recommendations.

For vSAN design requirements and recommendations, see vSAN Design Requirements and
Recommendations for VMware Cloud Foundation.

The requirements for the ESXi hosts in a workload domain in VMware Cloud Foundation are related to the system requirements of the workloads hosted in the domain. The ESXi requirements include the number of hosts, server configuration, amount of hardware resources, networking, and certificate management. Following similar best practices helps you design for optimal environment operation.

vSphere Cluster Design Considerations

You consider a different number of hosts per cluster according to the storage type and the specific resource requirements for standard and stretched vSAN clusters.


Table 7-15. Host-Related Design Considerations per Cluster

Minimum number of ESXi hosts
- vSAN (single availability zone): 4 for the management domain default cluster; 3 for additional management domain clusters or VI workload domain clusters.
- vSAN (two availability zones): 8 for the management domain default cluster; 6 for additional management domain clusters or VI workload domain clusters.
- NFS, FC, or vVols: not supported for the management domain default cluster; 2 for VI workload domain clusters only (requires vSphere Lifecycle Manager images); 3 for additional management domain clusters.

Reserved capacity for handling ESXi host failures per cluster
- Single availability zone: 25% CPU and memory for the management domain default cluster, and 33% CPU and memory for additional management domain clusters or VI workload domain clusters; in both cases the cluster tolerates one host failure.
- Two availability zones: 50% CPU and memory; the cluster tolerates one availability zone failure.

vSphere Cluster Design Requirements for VMware Cloud Foundation

You must meet the following design requirements for standard and stretched clusters in your vSphere cluster design for VMware Cloud Foundation. The cluster design considers the storage type for the cluster, the architecture model of the environment, and the life cycle management method.

Table 7-16. vSphere Cluster Design Requirements for VMware Cloud Foundation

Requirement ID: VCF-CLS-REQD-CFG-001
Design Requirement: Create a cluster in each workload domain for the initial set of ESXi hosts.
Justification:
- Simplifies configuration by isolating management from customer workloads.
- Ensures that customer workloads have no impact on the management stack.
Implication: Management of multiple clusters and vCenter Server instances increases operational overhead.

Requirement ID: VCF-CLS-REQD-CFG-002
Design Requirement: Allocate a minimum number of ESXi hosts according to the cluster type being deployed.
Justification: Ensures the correct level of redundancy to protect against host failure in the cluster.
Implication: To support redundancy, you must allocate additional ESXi host resources.

Requirement ID: VCF-CLS-REQD-CFG-003
Design Requirement: If using a consolidated workload domain, configure the following vSphere resource pools to control resource usage by management and customer workloads:
- cluster-name-rp-sddc-mgmt
- cluster-name-rp-sddc-edge
- cluster-name-rp-user-edge
- cluster-name-rp-user-vm
Justification: Ensures sufficient resources for the management components.
Implication: You must manage the vSphere resource pool settings over time.

Requirement ID: VCF-CLS-REQD-CFG-004
Design Requirement: Configure the vSAN network gateway IP address as the isolation address for the cluster.
Justification: Allows vSphere HA to validate if a host is isolated from the vSAN network.
Implication: None.

Requirement ID: VCF-CLS-REQD-CFG-005
Design Requirement: Set the advanced cluster setting das.usedefaultisolationaddress to false.
Justification: Ensures that vSphere HA uses the manual isolation addresses instead of the default management network gateway address.
Implication: None.
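The two isolation-address requirements above map directly to vSphere HA advanced options. The following minimal pyVmomi sketch shows one way to apply them; it is an illustration rather than a VMware Cloud Foundation workflow, and the vCenter address, credentials, cluster name, and vSAN gateway IP are assumptions.

```python
# Minimal sketch, assuming illustrative names; reuses the lookup pattern from
# the earlier restart-priority sketch.
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="***")
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "mgmt-cluster-01")  # assumption
view.Destroy()

spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(
        option=[
            # vSphere HA pings this address to decide whether a host is
            # isolated; here it is the vSAN network gateway (assumption).
            vim.option.OptionValue(key="das.isolationaddress0", value="172.16.13.1"),
            # Do not fall back to the management network default gateway.
            vim.option.OptionValue(key="das.usedefaultisolationaddress", value="false"),
        ]
    )
)
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```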

Table 7-17. vSphere Cluster Design Requirements for vSAN Stretched Clusters with VMware Cloud Foundation

Requirement ID: VCF-CLS-REQD-CFG-006
Design Requirement: Configure the vSAN network gateway IP addresses for the second availability zone as additional isolation addresses for the cluster.
Justification: Allows vSphere HA to validate if a host is isolated from the vSAN network for hosts in both availability zones.
Implication: None.

Requirement ID: VCF-CLS-REQD-CFG-007
Design Requirement: Enable the Override default gateway for this adapter setting on the vSAN VMkernel adapters on all ESXi hosts.
Justification: Enables routing the vSAN data traffic through the vSAN network gateway rather than through the management gateway.
Implication: vSAN networks across availability zones must have a route to each other.

Requirement ID: VCF-CLS-REQD-CFG-008
Design Requirement: Create a host group for each availability zone and add the ESXi hosts in the zone to the respective group.
Justification: Makes it easier to manage which virtual machines run in which availability zone.
Implication: You must create and maintain VM-Host DRS group rules.


vSphere Cluster Design Recommendations for VMware Cloud Foundation

In your vSphere cluster design, you can apply certain best practices for standard and stretched clusters.

Table 7-18. vSphere Cluster Design Recommendations for VMware Cloud Foundation

Recommendation ID: VCF-CLS-RCMD-CFG-001
Design Recommendation: Use vSphere HA to protect all virtual machines against failures.
Justification: vSphere HA supports a robust level of protection for both ESXi host and virtual machine availability.
Implication: You must provide sufficient resources on the remaining hosts so that virtual machines can be restarted on those hosts in the event of a host outage.

Recommendation ID: VCF-CLS-RCMD-CFG-002
Design Recommendation: Set host isolation response to Power Off and restart VMs in vSphere HA.
Justification: vSAN requires that the host isolation response be set to Power Off and to restart virtual machines on available ESXi hosts.
Implication: If a false positive event occurs, virtual machines are powered off and an ESXi host is declared isolated incorrectly.

Recommendation ID: VCF-CLS-RCMD-CFG-003
Design Recommendation: Configure admission control for 1 ESXi host failure and percentage-based failover capacity.
Justification: Using the percentage-based reservation works well in situations where virtual machines have varying and sometimes significant CPU or memory reservations. vSphere automatically calculates the reserved percentage according to the number of ESXi host failures to tolerate and the number of ESXi hosts in the cluster.
Implication: In a cluster of 4 ESXi hosts, the resources of only 3 ESXi hosts are available for use.

Recommendation ID: VCF-CLS-RCMD-CFG-004
Design Recommendation: Enable VM Monitoring for each cluster.
Justification: VM Monitoring provides in-guest protection for most VM workloads. The application or service running on the virtual machine must be capable of restarting successfully after a reboot, or the virtual machine restart is not sufficient.
Implication: None.

Recommendation ID: VCF-CLS-RCMD-CFG-005
Design Recommendation: Set the advanced cluster setting das.iostatsinterval to 0 to deactivate monitoring the storage and network I/O activities of the management appliances.
Justification: Enables triggering a restart of a management appliance when an OS failure occurs and heartbeats are not received from VMware Tools, instead of waiting additionally for the I/O check to complete.
Implication: If you want to specifically enable I/O monitoring, you must configure the das.iostatsinterval advanced setting.

Recommendation ID: VCF-CLS-RCMD-CFG-006
Design Recommendation: Enable vSphere DRS on all clusters, using the default fully automated mode with medium threshold.
Justification: Provides the best trade-off between load balancing and unnecessary migrations with vSphere vMotion.
Implication: If a vCenter Server outage occurs, the mapping from virtual machines to ESXi hosts might be difficult to determine.

Recommendation ID: VCF-CLS-RCMD-CFG-007
Design Recommendation: Enable Enhanced vMotion Compatibility (EVC) on all clusters in the management domain.
Justification: Supports cluster upgrades without virtual machine downtime.
Implication: You can enable EVC only if the clusters contain hosts with CPUs from the same vendor. You must enable EVC on the default management domain cluster during bringup.

Recommendation ID: VCF-CLS-RCMD-CFG-008
Design Recommendation: Set the cluster EVC mode to the highest available baseline that is supported for the lowest CPU architecture on the hosts in the cluster.
Justification: Supports cluster upgrades without virtual machine downtime.
Implication: None.

Recommendation ID: VCF-CLS-RCMD-LCM-001
Design Recommendation: Use images as the life cycle management method for VI workload domains.
Justification: vSphere Lifecycle Manager images simplify the management of firmware and vendor add-ons.
Implication: An initial cluster image is required during workload domain or cluster deployment.
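As a worked example of VCF-CLS-RCMD-CFG-003, the sketch below builds a percentage-based admission control policy sized for one host failure in a four-host cluster (25% of CPU and memory). It is a minimal pyVmomi illustration under assumed values; for a stretched cluster, VCF-CLS-RCMD-CFG-009 raises the reservation to half of the cluster instead.

```python
# Minimal sketch, assuming a four-host cluster; apply with
# cluster.ReconfigureComputeResource_Task(spec=spec, modify=True), with the
# cluster looked up as in the earlier sketches.
from pyVmomi import vim

policy = vim.cluster.FailoverResourcesAdmissionControlPolicy(
    cpuFailoverResourcesPercent=25,      # one host out of four
    memoryFailoverResourcesPercent=25,
)
spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(
        enabled=True,
        admissionControlEnabled=True,
        admissionControlPolicy=policy,
    )
)
```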


Table 7-19. vSphere Cluster Design Recommendations for vSAN Stretched Clusters with VMware Cloud Foundation

Recommendation ID: VCF-CLS-RCMD-CFG-009
Design Recommendation: Increase the admission control percentage to half of the ESXi hosts in the cluster.
Justification: Allocating only half of a stretched cluster ensures that all VMs have enough resources if an availability zone outage occurs.
Implication: In a cluster of 8 ESXi hosts, the resources of only 4 ESXi hosts are available for use. If you add more ESXi hosts to the default management cluster, add them in pairs, one per availability zone.

Recommendation ID: VCF-CLS-RCMD-CFG-010
Design Recommendation: Create a virtual machine group for each availability zone and add the VMs in the zone to the respective group.
Justification: Ensures that virtual machines are located only in the assigned availability zone to avoid unnecessary vSphere vMotion migrations.
Implication: You must add virtual machines to the allocated group manually.

Recommendation ID: VCF-CLS-RCMD-CFG-011
Design Recommendation: Create a should-run-on-hosts-in-group VM-Host affinity rule to run each group of virtual machines on the respective group of hosts in the same availability zone.
Justification: Ensures that virtual machines are located only in the assigned availability zone to avoid unnecessary vSphere vMotion migrations.
Implication: You must manually create the rules.
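The host group, VM group, and affinity rule items above (VCF-CLS-REQD-CFG-008, VCF-CLS-RCMD-CFG-010, and VCF-CLS-RCMD-CFG-011) translate into a single cluster reconfiguration. The following pyVmomi sketch shows its shape; the group names and the host and VM selection logic are assumptions, and the cluster object comes from a lookup as in the earlier sketches.

```python
# Minimal sketch, assuming illustrative group names and naming conventions.
from pyVmomi import vim

az1_hosts = [h for h in cluster.host if "az1" in h.name]                # assumption
az1_vms = [v for v in cluster.resourcePool.vm if v.name == "mgmt-vcenter-01"]

spec = vim.cluster.ConfigSpecEx(
    groupSpec=[
        vim.cluster.GroupSpec(
            operation=vim.option.ArrayUpdateSpec.Operation.add,
            info=vim.cluster.HostGroup(name="az1-hosts", host=az1_hosts),
        ),
        vim.cluster.GroupSpec(
            operation=vim.option.ArrayUpdateSpec.Operation.add,
            info=vim.cluster.VmGroup(name="az1-vms", vm=az1_vms),
        ),
    ],
    rulesSpec=[
        vim.cluster.RuleSpec(
            operation=vim.option.ArrayUpdateSpec.Operation.add,
            info=vim.cluster.VmHostRuleInfo(
                name="az1-vms-should-run-in-az1",
                enabled=True,
                mandatory=False,                 # "should" rule, not "must"
                vmGroupName="az1-vms",
                affineHostGroupName="az1-hosts",
            ),
        ),
    ],
)
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```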

vSphere Networking Design for VMware Cloud Foundation


VMware Cloud Foundation uses vSphere Distributed Switch for virtual networking.

Logical vSphere Networking Design for VMware Cloud Foundation


When you design vSphere networking, consider the configuration of the vSphere Distributed
Switches, distributed port groups, and VMkernel adapters in the VMware Cloud Foundation
environment.

vSphere Distributed Switch Design


The default cluster in a workload domain uses a single vSphere Distributed Switch with a
configuration for system traffic types, NIC teaming, and MTU size.

VMware Cloud Foundation supports NSX Overlay traffic over a single vSphere Distributed Switch
per cluster. Additional distributed switches are supported for other traffic types.

When using vSAN ReadyNodes, you must define the number of vSphere Distributed Switches at
workload domain deployment time. You cannot add additional vSphere Distributed Switches post
deployment.


Table 7-20. Configuration Options for vSphere Distributed Switch for VMware Cloud Foundation

Configuration: Single vSphere Distributed Switch for hosts with two physical NICs
- Management Domain Options: One vSphere Distributed Switch for each cluster with all traffic using two uplinks.
- VI Workload Domain Options: One vSphere Distributed Switch for each cluster with all traffic using two uplinks.
- Benefits: Requires the least number of physical NICs and switch ports.
- Drawbacks: All traffic shares the same two uplinks.

Configuration: Single vSphere Distributed Switch for hosts with four or six physical NICs
- Management Domain Options: One vSphere Distributed Switch for each cluster with four uplinks by using the predefined profiles in the Deployment Parameters Workbook in VMware Cloud Builder to deploy the default management cluster, or one vSphere Distributed Switch for each cluster with four or six uplinks by using the VMware Cloud Builder API to deploy the default management cluster.
- VI Workload Domain Options: One vSphere Distributed Switch for each cluster with four or six uplinks.
- Benefits: Provides support for traffic separation across different uplinks.
- Drawbacks: You must provide additional physical NICs and switch ports.

Configuration: Multiple vSphere Distributed Switches
- Management Domain Options: Maximum two vSphere Distributed Switches by using the predefined profiles in the Deployment Parameters Workbook in VMware Cloud Builder to deploy the default management cluster, or maximum 16 vSphere Distributed Switches per cluster by using the VMware Cloud Builder API to deploy the default management cluster with combinations of vSphere Distributed Switches and physical NIC configurations that are not available as predefined profiles in the Deployment Parameters Workbook. You can use only one of the vSphere Distributed Switches for NSX overlay traffic.
- VI Workload Domain Options: Maximum 16 vSphere Distributed Switches per cluster. You can use only one of the vSphere Distributed Switches for NSX overlay traffic.
- Benefits: Provides support for traffic separation across different uplinks or vSphere Distributed Switches, and onto different physical network fabrics.
- Drawbacks: You must provide additional physical NICs and switch ports. More complex, with additional configuration and management overhead.

Distributed Port Group Design

VMware Cloud Foundation requires several port groups on the vSphere Distributed Switch for a workload domain. The VMkernel adapters for the host TEPs are connected to the host overlay network but do not require a dedicated port group on the distributed switch. The VMkernel network adapter for a host TEP is automatically created when VMware Cloud Foundation configures the ESXi host as a transport node.


Table 7-21. Distributed Port Group Configuration for VMware Cloud Foundation

Function: Management, vSphere vMotion
- Teaming Policy: Route based on physical NIC load, with failover detection set to link status only, failback enabled (occurs only on saturation of the active uplink), and Notify Switches enabled.
- Management Domain: Required.
- VI Workload Domain: Recommended.

Function: vSAN
- Teaming Policy: Route based on physical NIC load, with the same failover settings.
- Management Domain: Recommended.
- VI Workload Domain: Recommended.

Function: Host Overlay
- Teaming Policy: Not applicable.
- Management Domain: Not applicable.
- VI Workload Domain: Not applicable.

Function: Edge Uplinks and Overlay
- Teaming Policy: Use explicit failover order.
- Management Domain: Required.
- VI Workload Domain: Required.

Function: Edge RTEP (NSX Federation Only)
- Teaming Policy: Not applicable.
- Management Domain: Not applicable.
- VI Workload Domain: Not applicable.

VMkernel Network Adapter Design


The VMkernel networking layer provides connectivity to hosts and handles the system traffic for
management, vSphere vMotion, vSphere HA, vSAN, and others.

Table 7-22. Default VMkernel Adapters for a Workload Domain per Availability Zone

- Management: connected to the Management Port Group, with Management Traffic activated; recommended MTU 1500 bytes (default).
- vMotion: connected to the vMotion Port Group, with vMotion Traffic activated; recommended MTU 9000 bytes.
- vSAN: connected to the vSAN Port Group, with vSAN activated; recommended MTU 9000 bytes.
- Host TEPs: no port group or activated service applies; recommended MTU 9000 bytes.
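As an illustration of applying the non-default MTU from this table to an existing adapter, the following pyVmomi sketch raises the MTU of a vMotion VMkernel adapter to 9000. The host selection and the vmk device name are assumptions, and the cluster object comes from a lookup as in the earlier sketches.

```python
# Minimal sketch, assuming vmk1 is the vMotion VMkernel adapter on this host.
host = cluster.host[0]
ns = host.configManager.networkSystem
vnic = next(v for v in ns.networkInfo.vnic if v.device == "vmk1")   # assumption

nic_spec = vnic.spec
nic_spec.mtu = 9000       # must match the physical network path end to end
ns.UpdateVirtualNic("vmk1", nic_spec)
```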

vSphere Networking Design Recommendations for VMware Cloud Foundation
Consider the recommendations for vSphere networking in VMware Cloud Foundation, such as
MTU size, port binding, teaming policy and traffic-specific network shares.


Table 7-23. vSphere Networking Design Recommendations for VMware Cloud Foundation

Recommendation ID: VCF-VDS-RCMD-CFG-001
Design Recommendation: Use a single vSphere Distributed Switch per cluster.
Justification: Reduces the complexity of the network design. Reduces the size of the fault domain.
Implication: Increases the number of vSphere Distributed Switches that must be managed.

Recommendation ID: VCF-VDS-RCMD-CFG-002
Design Recommendation: Configure the MTU size of the vSphere Distributed Switch to 9000 for jumbo frames.
Justification: Supports the MTU size required by system traffic types. Improves traffic throughput.
Implication: When adjusting the MTU packet size, you must also configure the entire network path (VMkernel ports, virtual switches, physical switches, and routers) to support the same MTU packet size.

Recommendation ID: VCF-VDS-RCMD-DPG-001
Design Recommendation: Use ephemeral port binding for the Management VM port group.
Justification: Using ephemeral port binding provides the option for recovery of the vCenter Server instance that is managing the distributed switch.
Implication: Port-level permissions and controls are lost across power cycles, and no historical context is saved.

Recommendation ID: VCF-VDS-RCMD-DPG-002
Design Recommendation: Use static port binding for all non-management port groups.
Justification: Static binding ensures a virtual machine connects to the same port on the vSphere Distributed Switch. This allows for historical data and port-level monitoring.
Implication: None.

Recommendation ID: VCF-VDS-RCMD-DPG-003
Design Recommendation: Use the Route based on physical NIC load teaming algorithm for the VM management port group.
Justification: Reduces the complexity of the network design, increases resiliency, and can adjust to fluctuating workloads.
Implication: None.

Recommendation ID: VCF-VDS-RCMD-DPG-004
Design Recommendation: Use the Route based on physical NIC load teaming algorithm for the ESXi management port group.
Justification: Reduces the complexity of the network design, increases resiliency, and can adjust to fluctuating workloads.
Implication: None.

Recommendation ID: VCF-VDS-RCMD-DPG-005
Design Recommendation: Use the Route based on physical NIC load teaming algorithm for the vSphere vMotion port group.
Justification: Reduces the complexity of the network design, increases resiliency, and can adjust to fluctuating workloads.
Implication: None.

Recommendation ID: VCF-VDS-RCMD-DPG-006
Design Recommendation: Use the Route based on physical NIC load teaming algorithm for the vSAN port group.
Justification: Reduces the complexity of the network design, increases resiliency, and can adjust to fluctuating workloads.
Implication: None.

Recommendation ID: VCF-VDS-RCMD-NIO-001
Design Recommendation: Enable Network I/O Control on the vSphere Distributed Switch of the management domain cluster.
Justification: Increases resiliency and performance of the network.
Implication: Network I/O Control might impact network performance for critical traffic types if misconfigured.

Recommendation ID: VCF-VDS-RCMD-NIO-002
Design Recommendation: Set the share value for management traffic to Normal.
Justification: By keeping the default setting of Normal, management traffic is prioritized higher than vSphere vMotion but lower than vSAN traffic. Management traffic is important because it ensures that the hosts can still be managed during times of network contention.
Implication: None.

Recommendation ID: VCF-VDS-RCMD-NIO-003
Design Recommendation: Set the share value for vSphere vMotion traffic to Low.
Justification: During times of network contention, vSphere vMotion traffic is not as important as virtual machine or storage traffic.
Implication: During times of network contention, vMotion takes longer than usual to complete.

Recommendation ID: VCF-VDS-RCMD-NIO-004
Design Recommendation: Set the share value for virtual machines to High.
Justification: Virtual machines are the most important asset in the SDDC. Leaving the default setting of High ensures that they always have access to the network resources they need.
Implication: None.

Recommendation ID: VCF-VDS-RCMD-NIO-005
Design Recommendation: Set the share value for vSAN traffic to High.
Justification: During times of network contention, vSAN traffic needs guaranteed bandwidth to support virtual machine performance.
Implication: None.

Recommendation ID: VCF-VDS-RCMD-NIO-006
Design Recommendation: Set the share value for other traffic types to Low.
Justification: By default, VMware Cloud Foundation does not use other traffic types, like vSphere FT traffic. Hence, these traffic types can be set to the lowest priority.
Implication: None.
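As an illustration of VCF-VDS-RCMD-CFG-002, the following pyVmomi sketch enables jumbo frames on a distributed switch. The switch name and connection details are assumptions, and the physical network path must be configured for the same MTU end to end.

```python
# Minimal sketch, assuming an illustrative switch name and an existing
# connection (si/content) as in the earlier sketches.
from pyVmomi import vim

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
vds = next(s for s in view.view if s.name == "sfo-m01-cl01-vds01")  # assumption
view.Destroy()

spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
    configVersion=vds.config.configVersion,  # required optimistic-locking token
    maxMtu=9000,
)
vds.ReconfigureDvs_Task(spec=spec)
```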

NSX Design for VMware Cloud Foundation
In VMware Cloud Foundation, you use NSX for connecting management and customer virtual
machines by using virtual network segments and routing. You also create constructs for solutions
that are deployed for a single VMware Cloud Foundation instance or are available across multiple
VMware Cloud Foundation instances. These constructs provide routing to the data center and
load balancing.

Table 8-1. NSX Logical Concepts and Components

NSX Manager
- Provides the user interface and the REST API for creating, configuring, and monitoring NSX components, such as segments, and Tier-0 and Tier-1 gateways.
- In a deployment with NSX Federation, NSX Manager is called NSX Local Manager.

NSX Edge nodes
- A special type of transport node which contains service router components.
- Provide north-south traffic connectivity between the physical data center networks and the NSX SDN networks. Each NSX Edge node has multiple interfaces where traffic flows.
- Can provide east-west traffic flow between virtualized workloads, and provide stateful services such as load balancers and DHCP. In a deployment with multiple VMware Cloud Foundation instances, east-west traffic between the VMware Cloud Foundation instances flows through the NSX Edge nodes too.

NSX Federation (optional design extension)
- Propagates configurations that span multiple NSX instances in a single VMware Cloud Foundation instance or across multiple VMware Cloud Foundation instances. You can stretch overlay segments, activate failover of segment ingress and egress traffic between VMware Cloud Foundation instances, and implement a unified firewall configuration.
- In a deployment with multiple VMware Cloud Foundation instances, you use NSX Federation to provide cross-instance services to SDDC management components that do not have native support for availability at several locations, such as VMware Aria Automation and VMware Aria Operations.
- Connects only workload domains of matching types (management domain to management domain, or VI workload domain to VI workload domain).

NSX Global Manager (Federation only)
- Part of deployments with multiple VMware Cloud Foundation instances where NSX Federation is required. NSX Global Manager can connect multiple NSX Local Manager instances under a single global management plane.
- Provides the user interface and the REST API for creating, configuring, and monitoring NSX global objects, such as global virtual network segments, and global Tier-0 and Tier-1 gateways.
- Connected NSX Local Manager instances create the global objects on the underlying software-defined network that you define from NSX Global Manager. An NSX Local Manager instance directly communicates with other NSX Local Manager instances to synchronize configuration and state needed to implement a global policy.
- NSX Global Manager is a deployment-time role that you assign to an NSX Manager appliance.

NSX Manager instance shared between VI workload domains
- An NSX Manager instance can be shared between up to 14 VI workload domains that are part of the same vCenter Single Sign-On domain.
- VI workload domains sharing an NSX Manager instance must use the same vSphere cluster life cycle method.
- Using a shared NSX Manager instance reduces resource requirements for the management domain.
- A single transport zone is shared across all clusters in all VI workload domains that share the NSX Manager instance.
- The management domain NSX instance cannot be shared.
- Isolated workload domain NSX instances cannot be shared.
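Because NSX Manager is driven through a REST API, a basic health check of the management cluster can be scripted. The sketch below uses the cluster status call from the NSX API; the manager FQDN and credentials are assumptions, and in a VMware Cloud Foundation environment SDDC Manager handles this kind of monitoring for you.

```python
# Minimal sketch, assuming an illustrative NSX Manager FQDN and credentials.
import requests

resp = requests.get(
    "https://sfo-m01-nsx01.example.com/api/v1/cluster/status",  # assumption
    auth=("admin", "***"),
    verify=False,  # lab only; use CA-signed certificates in production
)
resp.raise_for_status()
print(resp.json()["mgmt_cluster_status"]["status"])  # e.g. "STABLE"
```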

Read the following topics next:

- Logical Design for NSX for VMware Cloud Foundation
- NSX Manager Design for VMware Cloud Foundation
- NSX Edge Node Design for VMware Cloud Foundation
- Routing Design for VMware Cloud Foundation
- Overlay Design for VMware Cloud Foundation
- Application Virtual Network Design for VMware Cloud Foundation
- Load Balancing Design for VMware Cloud Foundation

Logical Design for NSX for VMware Cloud Foundation


NSX provides networking services to workloads in VMware Cloud Foundation such as load
balancing, routing and virtual networking.


Table 8-2. NSX Logical Design

NSX Manager Cluster
- Single availability zone: Three appropriately sized nodes with a virtual IP (VIP) address and an anti-affinity rule to keep them on different hosts. vSphere HA protects the cluster nodes by applying high restart priority.
- Multiple availability zones: The same configuration, plus a vSphere DRS should-run-on-hosts-in-group rule that keeps the NSX Manager VMs in the first availability zone.

NSX Global Manager Cluster (Conditional)
- Single availability zone: Three manually deployed, appropriately sized nodes with a VIP address and an anti-affinity rule to run them on different hosts. One active and one standby cluster. vSphere HA protects the cluster nodes by applying high restart priority.
- Multiple availability zones: The same configuration, plus a vSphere DRS should-run-on-hosts-in-group rule that keeps the NSX Global Manager VMs in the first availability zone.

NSX Edge Cluster
- Single availability zone: Two appropriately sized NSX Edge nodes with an anti-affinity rule to separate them on different hosts. vSphere HA protects the cluster nodes by applying high restart priority.
- Multiple availability zones: Two appropriately sized NSX Edge nodes in the first availability zone with an anti-affinity rule to separate them on different hosts. vSphere HA protects the cluster nodes by applying high restart priority, and a vSphere DRS should-run-on-hosts-in-group rule keeps the NSX Edge VMs in the first availability zone.

Transport Nodes
- Single availability zone: Each ESXi host acts as a host transport node. Two edge transport nodes.
- Multiple availability zones: Each ESXi host acts as a host transport node. Two edge transport nodes in the first availability zone.

Transport Zones
- Single availability zone: One VLAN transport zone for north-south traffic. Maximum one overlay transport zone for overlay segments per NSX instance. One VLAN transport zone for VLAN-backed segments.
- Multiple availability zones: One VLAN transport zone for north-south traffic. Maximum one overlay transport zone for overlay segments per NSX instance. One or more VLAN transport zones for VLAN-backed segments.

VLANs and IP Subnets Allocated to NSX
- See VLANs and Subnets for VMware Cloud Foundation. For information about the networks for virtual infrastructure management, see Distributed Port Group Design.

Routing Configuration
- Single availability zone: BGP for a single VMware Cloud Foundation instance. In a VMware Cloud Foundation deployment with NSX Federation, BGP with ingress and egress traffic to the first VMware Cloud Foundation instance during normal operating conditions.
- Multiple availability zones: BGP with path prepend to control ingress traffic and local preference to control egress traffic through the first availability zone during normal operating conditions. In a VMware Cloud Foundation deployment with NSX Federation, BGP with ingress and egress traffic to the first instance during normal operating conditions.

For a description of the NSX logical components in this design, see Table 8-1. NSX Logical Concepts and Components.

Single Instance - Single Availability Zone


The NSX design for the Single Instance - Single Availability Zone topology consists of the
following components:


Figure 8-1. NSX Logical Design for a Single Instance - Single Availability Zone Topology
[Diagram: a three-node NSX Manager cluster behind an internal VIP load balancer, accessed through the user interface and API and supported by DNS and NTP; a two-node NSX Edge cluster; the workload domain vCenter Server for virtual infrastructure management; and four ESXi hosts acting as NSX transport nodes in the workload domain cluster.]

- Unified appliances that have both the NSX Local Manager and NSX Controller roles. They provide management and control plane capabilities.

- NSX Edge nodes in the workload domain that provide advanced services such as load balancing, and north-south connectivity.

- ESXi hosts in the workload domain that are registered as NSX transport nodes to provide distributed routing and firewall services to workloads.

Single Instance - Multiple Availability Zones


The NSX design for a Single Instance - Multiple Availability Zone topology consists of the
following components:


Figure 8-2. NSX Logical Design for a Single Instance - Multiple Availability Zone Topology
[Diagram: a three-node NSX Manager cluster behind an internal VIP load balancer, a two-node NSX Edge cluster, and the workload domain vCenter Server, with the ESXi transport nodes of the workload domain cluster distributed across Availability Zone 1 and Availability Zone 2.]

- Unified appliances that have both the NSX Local Manager and NSX Controller roles. They provide management and control plane capabilities.

- NSX Edge nodes that provide advanced services such as load balancing, and north-south connectivity.

- ESXi hosts that are distributed evenly across availability zones in the workload domain and are registered as NSX transport nodes to provide distributed routing and firewall services to workloads.

Multiple Instances - Single Availability Zone


The NSX design for a Multiple Instance - Single Availability Zone topology consists of the
following components:


Figure 8-3. NSX Logical Design for a Multiple Instance - Single Availability Zone Topology
[Diagram: VCF instances A and B, each with a three-node NSX Local Manager cluster behind an internal VIP load balancer, a workload domain vCenter Server, a two-node NSX Edge cluster, and four ESXi transport nodes in the workload domain cluster; instance A runs the active three-node NSX Global Manager cluster and instance B runs the standby NSX Global Manager cluster.]

- Unified appliances that have both the NSX Local Manager and NSX Controller roles. They provide management and control plane capabilities.

- NSX Edge nodes that provide advanced services such as load balancing, and north-south connectivity.

- ESXi hosts in the workload domain that are registered as NSX transport nodes to provide distributed routing and firewall services to workloads.

- NSX Global Manager cluster in each of the first two VMware Cloud Foundation instances. You deploy the NSX Global Manager cluster in each VMware Cloud Foundation instance so that you can use NSX Federation for global management of networking and security services.

- An additional infrastructure VLAN in each VMware Cloud Foundation instance to carry instance-to-instance traffic (RTEP).

Multiple Instances - Multiple Availability Zones


The NSX design for a Multiple Instance - Multiple Availability Zone topology consists of the
following components:


Figure 8-4. NSX Logical Design for Multiple Instance - Multiple Availability Zone Topology
[Diagram: VCF instances A and B, each with a three-node NSX Local Manager cluster behind an internal VIP load balancer, a workload domain vCenter Server, and a two-node NSX Edge cluster; instance A runs the active NSX Global Manager cluster and instance B the standby cluster, and in each instance the ESXi transport nodes of the workload domain cluster are distributed across Availability Zone 1 and Availability Zone 2.]

- Unified appliances that have both the NSX Local Manager and NSX Controller roles. They provide management and control plane capabilities.

- NSX Edge nodes that provide advanced services such as load balancing, and north-south connectivity.

- ESXi hosts that are distributed evenly across availability zones in the workload domain in a VMware Cloud Foundation instance, and are registered as NSX transport nodes to provide distributed routing and firewall services to workloads.

- NSX Global Manager cluster in each of the first two VMware Cloud Foundation instances. You deploy the NSX Global Manager cluster in each VMware Cloud Foundation instance so that you can use NSX Federation for global management of networking and security services.

- An additional infrastructure VLAN in each VMware Cloud Foundation instance to carry instance-to-instance traffic (RTEP).

NSX Manager Design for VMware Cloud Foundation


Following the principles of this design and of each product, you determine the size of, deploy, and configure NSX Manager as part of your VMware Cloud Foundation deployment.


Sizing Considerations for NSX Manager for VMware Cloud Foundation
You select an appropriate NSX Manager appliance size that is suitable for the scale of your
environment.

When you deploy NSX Manager appliances, either with a local or global scope, you select to
deploy the appliance with a size that is suitable for the scale of your environment. The option
that you select determines the number of CPUs and the amount of memory of the appliance. For
detailed sizing according to the overall profile of the VMware Cloud Foundation instance you plan
to deploy, see VMware Cloud Foundation Planning and Preparation Workbook.

Table 8-3. Sizing Considerations for NSX Manager

- Extra-Small: Cloud Service Manager only
- Small: proof of concept
- Medium: up to 128 ESXi hosts (default for the management domain)
- Large: up to 1,024 ESXi hosts (default for VI workload domains)

Note To deploy an NSX Manager appliance in the VI workload domain with a size different from
the default one, you must use the API.

NSX Manager Design Requirements and Recommendations for VMware Cloud Foundation
Consider the placement requirements for using NSX Manager in VMware Cloud Foundation, and
the best practices for having an NSX Manager cluster operate in an optimal way, such as number
and size of the nodes, and high availability, on a standard or stretched management cluster.

NSX Manager Design Requirements for VMware Cloud Foundation

You must meet the following design requirements in your NSX Manager design for VMware Cloud Foundation.


Table 8-4. NSX Manager Design Requirements for VMware Cloud Foundation

Requirement ID: VCF-NSX-LM-REQD-CFG-001
Design Requirement: Place the appliances of the NSX Manager cluster on the VM management network in the management domain.
Justification:
- Simplifies IP addressing for management VMs by using the same VLAN and subnet.
- Provides simplified secure access to management VMs in the same VLAN network.
Implication: None.

Requirement ID: VCF-NSX-LM-REQD-CFG-002
Design Requirement: Deploy three NSX Manager nodes in the default vSphere cluster in the management domain for configuring and managing the network services for the workload domain.
Justification: Supports high availability of the NSX Manager cluster.
Implication: You must have sufficient resources in the default cluster of the management domain to run three NSX Manager nodes.

NSX Manager Design Recommendations for VMware Cloud Foundation


In your NSX Manager design for VMware Cloud Foundation, you can apply certain best practices
for standard and stretched clusters.

Table 8-5. NSX Manager Design Recommendations for VMware Cloud Foundation

Recommendation ID: VCF-NSX-LM-RCMD-CFG-001
Design Recommendation: Deploy appropriately sized nodes in the NSX Manager cluster for the workload domain.
Justification: Ensures resource availability and usage efficiency per workload domain.
Implication: The default size for a management domain is Medium, and for VI workload domains is Large.

Recommendation ID: VCF-NSX-LM-RCMD-CFG-002
Design Recommendation: Create a virtual IP (VIP) address for the NSX Manager cluster for the workload domain.
Justification: Provides high availability of the user interface and API of NSX Manager.
Implication:
- The VIP address feature provides high availability only. It does not load-balance requests across the cluster.
- When using the VIP address feature, all NSX Manager nodes must be deployed on the same Layer 2 network.

Recommendation ID: VCF-NSX-LM-RCMD-CFG-003
Design Recommendation: Apply VM-VM anti-affinity rules in vSphere Distributed Resource Scheduler (vSphere DRS) to the NSX Manager appliances.
Justification: Keeps the NSX Manager appliances running on different ESXi hosts for high availability.
Implication: You must allocate at least four physical hosts so that the three NSX Manager appliances continue running if an ESXi host failure occurs.

Recommendation ID: VCF-NSX-LM-RCMD-CFG-004
Design Recommendation: In vSphere HA, set the restart priority policy for each NSX Manager appliance to high.
Justification:
- NSX Manager implements the control plane for virtual network segments. vSphere HA restarts the NSX Manager appliances first so that other virtual machines that are being powered on or migrated by using vSphere vMotion while the control plane is offline lose connectivity only until the control plane quorum is re-established.
- Setting the restart priority to high reserves the highest priority for flexibility for adding services that must be started before NSX Manager.
Implication: If the restart priority for another management appliance is set to highest, the connectivity delay for management appliances will be longer.
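For reference, the cluster VIP from VCF-NSX-LM-RCMD-CFG-002 is settable through the NSX API, although SDDC Manager normally configures it during workload domain deployment. A minimal sketch, assuming an illustrative manager FQDN, credentials, and VIP address:

```python
# Minimal sketch only; the FQDN, credentials, and address are assumptions.
import requests

resp = requests.post(
    "https://sfo-m01-nsx01a.example.com/api/v1/cluster/api-virtual-ip"
    "?action=set_virtual_ip&ip_address=172.16.11.65",
    auth=("admin", "***"),
    verify=False,  # lab only
)
resp.raise_for_status()
print(resp.json())
```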

Table 8-6. NSX Manager Design Recommendations for Stretched Clusters in VMware Cloud Foundation

Recommendation ID: VCF-NSX-LM-RCMD-CFG-006
Design Recommendation: Add the NSX Manager appliances to the virtual machine group for the first availability zone.
Justification: Ensures that, by default, the NSX Manager appliances are powered on a host in the primary availability zone.
Implication: None.

NSX Global Manager Design Requirements and Recommendations for VMware Cloud Foundation

For a deployment with multiple VMware Cloud Foundation instances, you use NSX Federation, which requires the manual deployment of NSX Global Manager nodes in the first two instances. Consider the placement requirements for using NSX Global Manager in VMware Cloud Foundation, and the best practices for having an NSX Global Manager cluster operate in an optimal way, such as the number and size of the nodes and high availability, on a standard or stretched management cluster.

NSX Global Manager Design Requirements


You must meet the following design requirements in your NSX Global Manager design for
VMware Cloud Foundation.

Table 8-7. NSX Global Manager Design Requirements for VMware Cloud Foundation

Requirement ID: VCF-NSX-GM-REQD-CFG-001
Design Requirement: Place the appliances of the NSX Global Manager cluster on the Management VM network in each VMware Cloud Foundation instance.
Justification:
- Simplifies IP addressing for management VMs.
- Provides simplified secure access to all management VMs in the same VLAN network.
Implication: None.

NSX Global Manager Design Recommendations


In your NSX Global Manager design for VMware Cloud Foundation, you can apply certain best
practices for standard and stretched clusters.

Table 8-8. NSX Global Manager Design Recommendations for VMware Cloud Foundation

VCF-NSX-GM-RCMD-CFG-001
  Design Recommendation: Deploy three NSX Global Manager nodes for the workload domain to support NSX Federation across VMware Cloud Foundation instances.
  Justification: Provides high availability for the NSX Global Manager cluster.
  Implication: You must have sufficient resources in the default cluster of the management domain to run three NSX Global Manager nodes.

VCF-NSX-GM-RCMD-CFG-002
  Design Recommendation: Deploy appropriately sized nodes in the NSX Global Manager cluster for the workload domain.
  Justification: Ensures resource availability and usage efficiency per workload domain.
  Implication: The recommended size for a management domain is Medium and for VI workload domains is Large.

VCF-NSX-GM-RCMD-CFG-003
  Design Recommendation: Create a virtual IP (VIP) address for the NSX Global Manager cluster for the workload domain.
  Justification: Provides high availability of the user interface and API of NSX Global Manager.
  Implication:
  - The VIP address feature provides high availability only. It does not load-balance requests across the cluster.
  - When using the VIP address feature, all NSX Global Manager nodes must be deployed on the same Layer 2 network.

VCF-NSX-GM-RCMD-CFG-004
  Design Recommendation: Apply VM-VM anti-affinity rules in vSphere DRS to the NSX Global Manager appliances (see the sketch after this table).
  Justification: Keeps the NSX Global Manager appliances running on different ESXi hosts for high availability.
  Implication: You must allocate at least four physical hosts so that the three NSX Manager appliances continue running if an ESXi host failure occurs.

VCF-NSX-GM-RCMD-CFG-005
  Design Recommendation: In vSphere HA, set the restart priority policy for each NSX Global Manager appliance to medium.
  Justification:
  - NSX Global Manager implements the global management plane for global segments and firewalls. NSX Global Manager is not required for control plane and data plane connectivity.
  - Setting the restart priority to medium reserves the high priority for services that impact the NSX control or data planes.
  Implication:
  - Management of NSX global components will be unavailable until the NSX Global Manager virtual machines restart.
  - The NSX Global Manager cluster is deployed in the management domain, where the total number of virtual machines is limited and where it competes with other management components for restart priority.

VCF-NSX-GM-RCMD-CFG-006
  Design Recommendation: Deploy an additional NSX Global Manager cluster in the second VMware Cloud Foundation instance.
  Justification: Enables recoverability of NSX Global Manager in the second VMware Cloud Foundation instance if a failure in the first VMware Cloud Foundation instance occurs.
  Implication: Requires additional NSX Global Manager nodes in the second VMware Cloud Foundation instance.

VCF-NSX-GM-RCMD-CFG-007
  Design Recommendation: Set the NSX Global Manager cluster in the second VMware Cloud Foundation instance as standby for the workload domain.
  Justification: Enables recoverability of NSX Global Manager in the second VMware Cloud Foundation instance if a failure in the first instance occurs.
  Implication: Must be done manually.

VCF-NSX-GM-RCMD-SEC-001
  Design Recommendation: Establish an operational practice to capture and update the thumbprint of the NSX Local Manager certificate on NSX Global Manager every time the certificate is updated by using SDDC Manager.
  Justification: Ensures secured connectivity between the NSX Manager instances. Each certificate has its own unique thumbprint. NSX Global Manager stores the unique thumbprint of the NSX Local Manager instances for enhanced security. If an authentication failure between NSX Global Manager and NSX Local Manager occurs, objects that are created from NSX Global Manager will not be propagated on to the SDN.
  Implication: The administrator must establish and follow an operational practice by using a runbook or automated process to ensure that the thumbprint is up-to-date.
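
The VM-VM anti-affinity rule from VCF-NSX-GM-RCMD-CFG-004 can be created in the vSphere Client or scripted. The following is a minimal Python sketch using pyVmomi; the vCenter Server address, credentials, cluster name, and appliance VM names are placeholder assumptions for your environment, not values defined by this design.

    # Sketch: create a vSphere DRS VM-VM anti-affinity rule for the three
    # NSX Global Manager appliances (VCF-NSX-GM-RCMD-CFG-004).
    # Host name, cluster name, and VM names below are placeholders.
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def find_obj(content, vimtype, name):
        """Return the first managed object of the given type with the given name."""
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vimtype], True)
        try:
            return next(o for o in view.view if o.name == name)
        finally:
            view.Destroy()

    # disableSslCertValidation requires a recent pyVmomi release; lab use only.
    si = SmartConnect(host="vcenter.example.com", user="[email protected]",
                      pwd="password", disableSslCertValidation=True)
    try:
        content = si.RetrieveContent()
        cluster = find_obj(content, vim.ClusterComputeResource, "mgmt-cluster01")
        vms = [find_obj(content, vim.VirtualMachine, n)
               for n in ("nsx-gm01a", "nsx-gm01b", "nsx-gm01c")]

        # An anti-affinity rule keeps the listed VMs on different ESXi hosts.
        rule = vim.cluster.AntiAffinityRuleSpec(
            name="anti-affinity-rule-nsx-global-manager", enabled=True, vm=vms)
        spec = vim.cluster.ConfigSpecEx(
            rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
        # modify=True merges this change into the existing cluster configuration.
        cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
    finally:
        Disconnect(si)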

Table 8-9. NSX Global Manager Design Recommendations for Stretched Clusters in VMware Cloud Foundation

VCF-NSX-GM-RCMD-CFG-008
  Design Recommendation: Add the NSX Global Manager appliances to the virtual machine group for the first availability zone.
  Justification: Ensures that, by default, the NSX Global Manager appliances are powered on hosts in the primary availability zone.
  Implication: Done automatically by VMware Cloud Foundation when stretching a cluster.

NSX Edge Node Design for VMware Cloud Foundation


Following the principles of this design and of each product, you deploy, configure, and connect
the NSX Edge nodes to support networks in the NSX instances in your VMware Cloud Foundation
deployment.

Deployment Model for the NSX Edge Nodes for VMware Cloud Foundation

For NSX Edge nodes, you determine the form factor, number of nodes, and placement according to the requirements for network services in a VMware Cloud Foundation workload domain.


An NSX Edge node is an appliance that provides centralized networking services which cannot
be distributed to hypervisors, such as load balancing, NAT, VPN, and physical network uplinks.
Some services, such as Tier-0 gateways, are limited to a single instance per NSX Edge node.
However, most services can coexist in these nodes.

NSX Edge nodes are grouped in one or more edge clusters, representing a pool of capacity for
NSX services.

An NSX Edge node can be deployed as a virtual appliance, or installed on bare-metal hardware.
The edge node on bare-metal hardware can have better performance capabilities at the expense
of more difficult deployment and limited deployment topology use cases. For details on the
trade-offs of using virtual or bare-metal NSX Edges, see the NSX documentation.


Table 8-10. NSX Edge Deployment Model Considerations

NSX Edge virtual appliance deployed by using SDDC Manager
  Benefits:
  - Deployment and life cycle management by using SDDC Manager workflows that call NSX Manager.
  - Automated password management by using SDDC Manager.
  - Benefits from vSphere HA recovery.
  - Can be used across availability zones.
  - Easy to scale up by modifying the specification of the virtual appliance.
  Drawbacks:
  - Might not provide best performance in individual customer scenarios.

NSX Edge virtual appliance deployed by using NSX Manager
  Benefits:
  - Benefits from vSphere HA recovery.
  - Can be used across availability zones.
  - Easy to scale up by modifying the specification of the virtual appliance.
  Drawbacks:
  - Might not provide best performance in individual customer scenarios.
  - Manually deployed by using NSX Manager.
  - Manual password management by using NSX Manager.
  - Cannot be used to support Application Virtual Networks (AVNs) in the management domain.

Bare-metal NSX Edge appliance
  Benefits:
  - Might provide better performance in individual customer scenarios.
  Drawbacks:
  - Has hardware compatibility requirements.
  - Requires individual hardware life cycle management and monitoring of failures, firmware, and drivers.
  - Manual password management.
  - Must be manually deployed and connected to the environment.
  - Requires manual recovery after hardware failure.
  - Requires deploying a bare-metal NSX Edge appliance in each availability zone for network failover.
  - Deploying a bare-metal edge in each availability zone requires considering asymmetric routing.
  - Requires edge fault domains if more than one edge is deployed in each availability zone for Active/Standby Tier-0 or Tier-1 gateways.
  - Requires redeployment to a new host to achieve scale-up.
  - Cannot be used to support AVNs in the management domain.

Sizing Considerations for NSX Edges for VMware Cloud Foundation


When you deploy NSX Edge appliances, you select a size according to the scale of your
environment. The option that you select determines the number of CPUs and the amount of
memory of the appliance.

For detailed sizing according to the overall profile of the VMware Cloud Foundation instance you
plan to deploy, see VMware Cloud Foundation Planning and Preparation Workbook.

Table 8-11. Sizing Considerations for NSX Edges

Small: Proof of concept.

Medium: Suitable when only Layer 2 through Layer 4 features such as NAT, routing, Layer 4 firewall, and Layer 4 load balancer are required, and the total throughput requirement is less than 2 Gbps.

Large: Suitable when only Layer 2 through Layer 4 features such as NAT, routing, Layer 4 firewall, and Layer 4 load balancer are required, and the total throughput is 2-10 Gbps. It is also suitable when a Layer 7 load balancer, for example, SSL offload, is required.

Extra Large: Suitable when the total throughput required is multiple Gbps for Layer 7 load balancer and VPN.

Network Design for the NSX Edge Nodes for VMware Cloud Foundation

In each VMware Cloud Foundation instance, you implement an NSX Edge configuration with a single N-VDS. You connect the uplink network interfaces of the edge appliance to VLAN trunk port groups that are connected to particular physical NICs on the host.

NSX Edge Network Configuration


The NSX Edge node contains a virtual switch, called an N-VDS, that is managed by NSX. This
internal N-VDS is used to define traffic flow through the interfaces of the edge node. An N-VDS
can be connected to one or more interfaces. Interfaces cannot be shared between N-VDS
instances.


If you plan to deploy multiple VMware Cloud Foundation instances, apply the same network
design to the NSX Edge cluster in the second and other additional VMware Cloud Foundation
instances.

Figure 8-5. NSX Edge Network Configuration
(Diagram: the edge node Eth0 management interface connects to the management network port group on the vSphere Distributed Switch with NSX; fp-eth0 and fp-eth1 connect to the Edge Uplink 01 and Edge Uplink 02 VLAN trunk port groups, which map to vmnic0 and vmnic1 on the ESXi host and from there to the ToR switches; fp-eth2 is unused. Inside the edge node, a single N-VDS hosts the two uplink VLAN interfaces, two overlay TEPs, the RTEP, and the overlay segments.)

Uplink Policy Design for the NSX Edge Nodes for VMware Cloud Foundation

A transport node can participate in an overlay and VLAN network. Uplink profiles define policies for the links from the NSX Edge transport nodes to top of rack switches. Uplink profiles are containers for the properties or capabilities for the network adapters. Uplink profiles are applied to the N-VDS of the edge node.


Uplink profiles can use either load balance source or failover order teaming. If using load balance
source, multiple uplinks can be active. If using failover order, only a single uplink can be active.

Teaming can be configured by using the default teaming policy or a user-defined named teaming
policy. You can use named teaming policies to pin traffic segments to designated edge uplinks.

NSX Edge Node Requirements and Recommendations for VMware Cloud Foundation

Consider the network, N-VDS configuration, and uplink policy requirements for using NSX Edge nodes in VMware Cloud Foundation, and the best practices for having NSX Edge nodes operate in an optimal way, such as the number and size of the nodes, high availability, and N-VDS architecture, on a standard or stretched cluster.

NSX Edge Design Requirements


You must meet the following design requirements for standard and stretched clusters in your
NSX Edge design for VMware Cloud Foundation.

Table 8-12. NSX Edge Design Requirements for VMware Cloud Foundation

VCF-NSX-EDGE-REQD-CFG-001
  Design Requirement: Connect the management interface of each NSX Edge node to the VM management network.
  Justification: Provides connection from the NSX Manager cluster to the NSX Edge.
  Implication: None.

VCF-NSX-EDGE-REQD-CFG-002
  Design Requirement:
  - Connect the fp-eth0 interface of each NSX Edge appliance to a VLAN trunk port group pinned to physical NIC 0 of the host, with the ability to fail over to physical NIC 1.
  - Connect the fp-eth1 interface of each NSX Edge appliance to a VLAN trunk port group pinned to physical NIC 1 of the host, with the ability to fail over to physical NIC 0.
  - Leave the fp-eth2 interface of each NSX Edge appliance unused.
  Justification:
  - Because VLAN trunk port groups pass traffic for all VLANs, VLAN tagging can occur in the NSX Edge node itself for easy post-deployment configuration.
  - By using two separate VLAN trunk port groups, you can direct traffic from the edge node to a particular host network interface and top of rack switch as needed.
  - In the event of a failure of a top of rack switch, the VLAN trunk port group fails over to the other physical NIC, ensuring that both fp-eth0 and fp-eth1 remain available.
  Implication: None.

VCF-NSX-EDGE-REQD-CFG-003
  Design Requirement: Use a dedicated VLAN for edge overlay that is different from the host overlay VLAN.
  Justification: A dedicated edge overlay network provides support for edge mobility in support of advanced deployments such as multiple availability zones or multi-rack clusters.
  Implication:
  - You must have routing between the VLANs for edge overlay and host overlay.
  - You must allocate another VLAN in the data center infrastructure for edge overlay.

VCF-NSX-EDGE-REQD-CFG-004
  Design Requirement: Create one uplink profile for the edge nodes with three teaming policies (see the sketch after this table):
  - Default teaming policy of load balance source with both active uplinks uplink1 and uplink2.
  - Named teaming policy of failover order with a single active uplink uplink1 without standby uplinks.
  - Named teaming policy of failover order with a single active uplink uplink2 without standby uplinks.
  Justification:
  - An NSX Edge node that uses a single N-VDS can have only one uplink profile.
  - For increased resiliency and performance, supports the concurrent use of both edge uplinks through both physical NICs on the ESXi hosts.
  - The default teaming policy increases overlay performance and availability by using multiple TEPs and balancing of overlay traffic.
  - By using named teaming policies, you can connect an edge uplink to a specific host uplink and from there to a specific top of rack switch in the data center.
  - Enables ECMP because the NSX Edge nodes can uplink to the physical network over two different VLANs.
  Implication: None.
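
The uplink profile in VCF-NSX-EDGE-REQD-CFG-004 can be expressed against the NSX Policy API. The following is a minimal Python sketch, assuming an NSX Manager address, credentials, profile name, and edge overlay VLAN ID that are placeholders for illustration; verify the payload against the API reference for your NSX version.

    # Sketch: edge uplink profile with a load balance source default teaming
    # policy and two named failover-order teaming policies
    # (VCF-NSX-EDGE-REQD-CFG-004). All names and the VLAN ID are placeholders.
    import requests

    NSX = "https://nsx01.example.com"
    AUTH = ("admin", "password")

    profile = {
        "resource_type": "PolicyUplinkHostSwitchProfile",
        "display_name": "edge-uplink-profile",
        "transport_vlan": 1252,  # edge overlay VLAN, distinct from host overlay
        "teaming": {  # default policy: both uplinks active
            "policy": "LOADBALANCE_SRCID",
            "active_list": [
                {"uplink_name": "uplink1", "uplink_type": "PNIC"},
                {"uplink_name": "uplink2", "uplink_type": "PNIC"},
            ],
        },
        "named_teamings": [  # pin each uplink VLAN segment to one edge uplink
            {"name": "uplink1-only", "policy": "FAILOVER_ORDER",
             "active_list": [{"uplink_name": "uplink1", "uplink_type": "PNIC"}]},
            {"name": "uplink2-only", "policy": "FAILOVER_ORDER",
             "active_list": [{"uplink_name": "uplink2", "uplink_type": "PNIC"}]},
        ],
    }

    r = requests.patch(
        f"{NSX}/policy/api/v1/infra/host-switch-profiles/edge-uplink-profile",
        json=profile, auth=AUTH, verify=False)  # lab use only
    r.raise_for_status()

The two named teaming policies are then referenced by the uplink VLAN segments so that each edge uplink maps deterministically to one top of rack switch.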


Table 8-13. NSX Edge Design Requirements for NSX Federation in VMware Cloud Foundation

VCF-NSX-EDGE-REQD-CFG-005
  Design Requirement: Allocate a separate VLAN for edge RTEP overlay that is different from the edge overlay VLAN.
  Justification: The RTEP network must be on a VLAN that is different from the edge overlay VLAN. This is an NSX requirement that provides support for configuring a different MTU size per network.
  Implication: You must allocate another VLAN in the data center infrastructure.

NSX Edge Design Recommendations


In your NSX Edge design for VMware Cloud Foundation, you can apply certain best practices for
standard and stretched clusters.

Table 8-14. NSX Edge Design Recommendations for VMware Cloud Foundation

VCF-NSX-EDGE-RCMD-CFG-001
  Design Recommendation: Use appropriately sized NSX Edge virtual appliances.
  Justification: Ensures resource availability and usage efficiency per workload domain.
  Implication: You must provide sufficient compute resources to support the chosen appliance size.

VCF-NSX-EDGE-RCMD-CFG-002
  Design Recommendation: Deploy the NSX Edge virtual appliances to the default vSphere cluster of the workload domain, sharing the cluster between the workloads and the edge appliances.
  Justification: Simplifies the configuration and minimizes the number of ESXi hosts required for initial deployment.
  Implication: Workloads and NSX Edges share the same compute resources.

VCF-NSX-EDGE-RCMD-CFG-003
  Design Recommendation: Deploy two NSX Edge appliances in an edge cluster in the default vSphere cluster of the workload domain.
  Justification: Creates the minimum size NSX Edge cluster while satisfying the requirements for availability.
  Implication: For a VI workload domain, additional edge appliances might be required to satisfy increased bandwidth requirements.

VCF-NSX-EDGE-RCMD-CFG-004
  Design Recommendation: Apply VM-VM anti-affinity rules for vSphere DRS to the virtual machines of the NSX Edge cluster.
  Justification: Keeps the NSX Edge nodes running on different ESXi hosts for high availability.
  Implication: None.

VCF-NSX-EDGE-RCMD-CFG-005
  Design Recommendation: In vSphere HA, set the restart priority policy for each NSX Edge appliance to high.
  Justification:
  - The NSX Edge nodes are part of the north-south data path for overlay segments. vSphere HA restarts the NSX Edge appliances first to minimize the time an edge VM is offline.
  - Setting the restart priority to high reserves highest for future needs.
  Implication: If the restart priority for another VM in the cluster is set to highest, the connectivity delays for edge appliances will be longer.

VCF-NSX-EDGE-RCMD-CFG-006
  Design Recommendation: Create an NSX Edge cluster with the default Bidirectional Forwarding Detection (BFD) configuration between the NSX Edge nodes in the cluster.
  Justification:
  - Satisfies the availability requirements by default.
  - Edge nodes must remain available to create services such as NAT, routing to physical networks, and load balancing.
  Implication: None.

VCF-NSX-EDGE-RCMD-CFG-007
  Design Recommendation: Use a single N-VDS in the NSX Edge nodes.
  Justification:
  - Simplifies deployment of the edge nodes.
  - The same N-VDS switch design can be used regardless of edge form factor.
  - Supports multiple TEP interfaces in the edge node.
  - vSphere Distributed Switch is not supported in the edge node.
  Implication: None.

Table 8-15. NSX Edge Design Recommendations for Stretched Clusters in VMware Cloud Foundation

VCF-NSX-EDGE-RCMD-CFG-008
  Design Recommendation: Add the NSX Edge appliances to the virtual machine group for the first availability zone.
  Justification: Ensures that, by default, the NSX Edge appliances are powered on hosts in the primary availability zone.
  Implication: None.


Routing Design for VMware Cloud Foundation


NSX Edge clusters in VMware Cloud Foundation provide pools of capacity for service router
functions in NSX.

Routing Options for VMware Cloud Foundation


VMware Cloud Foundation supports the following routing options:


Routing Type: Static routing
  Description: The administrator manages the routing information, adding routing information to the routing table. If any change occurs in the network, the administrator has to update the related information in the routing table.
  Benefits:
  - No dynamic routing protocol required on the ToR switches.
  - In some cases, no additional license needed on the ToR switches to implement dynamic routing.
  Drawbacks:
  - You must manually create static routes in NSX Manager on the Tier-0 gateway.
  - If required, you must manually create an HA VIP in NSX Manager on the Tier-0 gateway to provide redundancy across ToR switches.
  - Not supported with vSAN stretched clusters.

Routing Type: OSPF
  Description: The routing protocol automatically adds and manages the routing information in the routing table. If any change occurs in the network, the routing protocol automatically updates the related information in the routing table. If any new segments or subnets are added in NSX, they are automatically added to the routing table.
  Benefits:
  - If the physical fabric is running the OSPF routing protocol, using OSPF at the virtual layer might be a simpler approach for the network administrator.
  Drawbacks:
  - Needs additional manual configuration. See VMware Knowledge Base article 85916.
  - Not supported with vSAN stretched clusters.
  - Not supported with NSX Federation.
  - Combined use of BGP and OSPF on a single Tier-0 gateway is not supported.

Routing Type: BGP
  Description: BGP is known as an exterior gateway protocol. It is designed to share routing information between disparate networks, known as autonomous systems (ASes). When multiple BGP-derived paths exist, the protocol chooses a path to send traffic based on certain criteria. The routing protocol automatically adds and manages the routing information in the routing table. If any new segments or subnets are added in NSX, they are automatically added to the routing table.
  Benefits:
  - Fully supported by the automated edge workflows in VMware Cloud Foundation.
  - Fully supported for all VMware Cloud Foundation topologies.
  Drawbacks:
  - None.

BGP routing is the routing option recommended for VMware Cloud Foundation.

BGP Routing Design for VMware Cloud Foundation


Determine the number, networking, and high-availability configuration of the Tier-0 and
Tier-1 gateways in NSX for VMware Cloud Foundation workload domains. Identify the BGP
configuration for a single availability zone and two availability zones in the environment.

Table 8-16. Routing Direction Definitions

North-south: Traffic leaving or entering the NSX domain, for example, a virtual machine on an overlay network communicating with an end-user device on the corporate network.

East-west: Traffic that remains in the NSX domain, for example, two virtual machines on the same or different segments communicating with each other.

North-South Routing
The routing design considers different levels of routing in the environment, such as number and
type of gateways in NSX, dynamic routing protocol, and others.

The following models for north-south traffic exist:


Table 8-17. Considerations for the Operating Model for North-South Service Routers

Active-Active
  Description:
  - Bandwidth independent of the Tier-0 gateway failover model.
  - Configured in active-active equal-cost multi-path (ECMP) mode.
  - Failover takes approximately 2 seconds for virtual edges and is sub-second for bare-metal edges.
  Benefits:
  - The active-active mode can support up to 8 NSX Edge nodes per northbound service router (SR).
  - Availability can be as high as N+7, with up to 8 active-active NSX Edge nodes.
  - Supports ECMP north-south routing on all nodes in the NSX Edge cluster.
  Drawbacks:
  - Cannot provide some stateful services, such as SNAT or DNAT.

Active-Standby
  Description:
  - Bandwidth independent of the Tier-0 gateway failover model.
  - Failover takes approximately 2 seconds for virtual edges and is sub-second for bare-metal edges.
  Benefits:
  - Can provide stateful services such as NAT.
  Drawbacks:
  - The active-standby mode is limited to a single node.
  - Availability is limited to N+1.

BGP North-South Routing for a Single or Multiple Availability Zones


For multiple availability zones, plan for failover of the NSX Edge nodes by configuring BGP so
that traffic from the top of rack switches is directed to the first availability zone unless a failure in
this zone occurs.


Figure 8-6. BGP North-South Routing for VMware Cloud Foundation Instances with a Single Availability Zone
(Diagram: in VCF instance A, the Tier-0 gateway service routers on NSX Edge nodes 1 and 2 peer over uplink VLANs 1 and 2 with the ToR switches of data center A by using eBGP with ECMP and optional BFD, receiving the default route; the Tier-0 and Tier-1 distributed routers span the two edge nodes and ESXi hosts 1 through 4 within the SDDC BGP ASN.)


Figure 8-7. BGP North-South Routing for VMware Cloud Foundation Instances with Multiple Availability Zones
(Diagram: the same design extended with a second availability zone. The NSX Edge nodes peer over eBGP with ECMP and optional BFD with the ToR switch pairs in both availability zones across the stretched uplink VLANs 1 and 2. The peering with availability zone 2 uses a lower local preference and AS-path prepending, so the default route learned from availability zone 2 is less preferred and traffic flows through availability zone 1 unless that zone fails.)

BGP North-South Routing Design for NSX Federation


In a routing design for an environment with VMware Cloud Foundation instances that use NSX
Federation, you identify the instances that an SDN network must span and at which physical
location ingress and egress traffic should occur.

Local egress allows traffic to leave any location that the network spans. The use of local egress would require controlling local ingress to prevent asymmetrical routing. This design does not use local egress. Instead, this design uses preferred and failover VMware Cloud Foundation instances for all networks.


Figure 8-8. BGP North-South Routing for VMware Cloud Foundation Instances with NSX Federation
(Diagram: in each instance, an NSX Global Manager and NSX Local Manager pair form the management and control plane. In the data plane, a single Tier-0 gateway, active/active and primary/primary, spans both instances and serves three active/standby Tier-1 gateways: one primary in VCF instance A with segments local to that instance, one primary/secondary with multi-instance segments that egress through a single location, and one primary in VCF instance B with segments local to that instance.)

Tier-0 Gateways with NSX Federation


In NSX Federation, a Tier-0 gateway can span multiple VMware Cloud Foundation instances.

Each VMware Cloud Foundation instance that is in the scope of a Tier-0 gateway can be
configured as primary or secondary. A primary instance passes traffic for any other SDN service
such as Tier-0 logical segments or Tier-1 gateways. A secondary instance routes traffic locally but
does not egress traffic outside the SDN or advertise networks in the data center.


When deploying an additional VMware Cloud Foundation instance, the Tier-0 gateway in the first
instance is extended to the new instance.

In this design, the Tier-0 gateway in each VMware Cloud Foundation instance is configured as
primary. Although the Tier-0 gateway technically supports local-egress, the design does not
recommend the use of local-egress. Ingress and egress traffic is controlled at the Tier-1 gateway
level.

Each VMware Cloud Foundation instance has its own NSX Edge cluster with associated uplink
VLANs for north-south traffic flow for that instance. The Tier-0 gateway in each instance peers
with the top of rack switches over eBGP.

Figure 8-9. BGP Peering to Top of Rack Switches for VMware Cloud Foundation Instances with NSX Federation
(Diagram: the NSX Edge cluster in each instance peers over eBGP with the ToR switches of its local data center, BGP ASN A for data center A and BGP ASN B for data center B. Inbound, each instance learns the default gateway and its local networks; outbound, no segments are attached to the Tier-0 gateway, and the Tier-1 global and local segments that are primary at that location are advertised. An RTEP tunnel over the data center network connects the two instances, and the active/active Tier-0 SRs in the SDDC BGP ASN exchange routes over inter-SR iBGP.)


Tier-1 Gateways with NSX Federation


A Tier-1 gateway can span several VMware Cloud Foundation instances. As with a Tier-0
gateway, you can configure an instance's location as primary or secondary for the Tier-1
gateway. The gateway then passes ingress and egress traffic for the logical segments connected
to it.

Any logical segments connected to the Tier-1 gateway follow the span of the Tier-1 gateway. If
the Tier-1 gateway spans several VMware Cloud Foundation instances, any segments connected
to that gateway become available in both instances.

Using a Tier-1 gateway enables more granular control over logical segments in the first and second VMware Cloud Foundation instances. You use three Tier-1 gateways: one in each VMware Cloud Foundation instance for segments that are local to the instance, and one for segments that span the two instances.

Table 8-18. Location Configuration of the Tier-1 Gateways for Multiple VMware Cloud Foundation Instances

Tier-1 gateway connected to both VMware Cloud Foundation instances
  First VMware Cloud Foundation instance: Primary
  Second VMware Cloud Foundation instance: Secondary
  Ingress and egress traffic: First VMware Cloud Foundation instance (second VMware Cloud Foundation instance after a manual failover)

Tier-1 gateway local to the first VMware Cloud Foundation instance
  First VMware Cloud Foundation instance: Primary
  Second VMware Cloud Foundation instance: -
  Ingress and egress traffic: First VMware Cloud Foundation instance only

Tier-1 gateway local to the second VMware Cloud Foundation instance
  First VMware Cloud Foundation instance: -
  Second VMware Cloud Foundation instance: Primary
  Ingress and egress traffic: Second VMware Cloud Foundation instance only

The Tier-1 gateway advertises its networks to the connected local-instance unit of the Tier-0
gateway. In the case of primary-secondary location configuration, the Tier-1 gateway advertises
its networks only to the Tier-0 gateway unit in the location where the Tier-1 gateway is primary.
The Tier-0 gateway unit then re-advertises those networks to the data center in the sites
where that Tier-1 gateway is primary. During failover of the components in the first VMware
Cloud Foundation instance, an administrator must manually set the Tier-1 gateway in the second
VMware Cloud Foundation instance as primary. Then, networks become advertised through the
Tier-1 gateway unit in the second instance.

In a Multiple Instance-Multiple Availability Zone topology, the same Tier-0 and Tier-1 gateway architecture applies. The ESXi transport nodes from the second availability zone are also attached to the Tier-1 gateway, as shown in Figure 8-7. BGP North-South Routing for VMware Cloud Foundation Instances with Multiple Availability Zones.


BGP Routing Design Requirements and Recommendations for VMware Cloud Foundation

Consider the requirements for the configuration of Tier-0 and Tier-1 gateways for implementing BGP routing in VMware Cloud Foundation, and the best practices for having optimal traffic routing on a standard or stretched cluster in an environment with a single or multiple VMware Cloud Foundation instances.

BGP Routing
The BGP routing design has the following characteristics:

- Enables dynamic routing by using NSX.

- Offers increased scale and flexibility.

- Is a proven protocol that is designed for peering between networks under independent administrative control - data center networks and the NSX SDN.

Note These design recommendations do not include BFD. However, if faster convergence than BGP timers is required, you must enable BFD on the physical network and also on the NSX Tier-0 gateway.

BGP Routing Design Requirements


You must meet the following design requirements for standard and stretched clusters in your
routing design for a single VMware Cloud Foundation instance. For NSX Federation, additional
requirements exist.

Table 8-19. BGP Routing Design Requirements for VMware Cloud Foundation

VCF-NSX-BGP-REQD-CFG-001
  Design Requirement: To enable ECMP between the Tier-0 gateway and the Layer 3 devices (ToR switches or upstream devices), create two VLANs. The ToR switches or upstream Layer 3 devices have an SVI on one of the two VLANs, and each edge node in the cluster has an interface on each VLAN.
  Justification: Supports multiple equal-cost routes on the Tier-0 gateway and provides more resiliency and better bandwidth use in the network.
  Implication: Additional VLANs are required.

VCF-NSX-BGP-REQD-CFG-002
  Design Requirement: Assign a named teaming policy to the VLAN segments to the Layer 3 device pair.
  Justification: Pins the VLAN traffic on each segment to its target edge node interface. From there, the traffic is directed to the host physical NIC that is connected to the target top of rack switch.
  Implication: None.

VCF-NSX-BGP-REQD-CFG-003
  Design Requirement: Create a VLAN transport zone for edge uplink traffic.
  Justification: Enables the configuration of VLAN segments on the N-VDS in the edge nodes.
  Implication: Additional VLAN transport zones might be required if the edge nodes are not connected to the same top of rack switch pair.

VCF-NSX-BGP-REQD-CFG-004
  Design Requirement: Deploy a Tier-1 gateway and connect it to the Tier-0 gateway (see the sketch after this table).
  Justification: Creates a two-tier routing architecture. Abstracts the NSX logical components which interact with the physical data center from the logical components which provide SDN services.
  Implication: A Tier-1 gateway can only be connected to a single Tier-0 gateway. In cases where multiple Tier-0 gateways are required, you must create multiple Tier-1 gateways.

VCF-NSX-BGP-REQD-CFG-005
  Design Requirement: Deploy a Tier-1 gateway to the NSX Edge cluster.
  Justification: Enables stateful services, such as load balancers and NAT, for SDDC management components. Because a Tier-1 gateway always works in active-standby mode, the gateway supports stateful services.
  Implication: None.
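
As a sketch of the two-tier architecture from VCF-NSX-BGP-REQD-CFG-004 and VCF-NSX-BGP-REQD-CFG-005, the following Python example creates a Tier-1 gateway attached to an existing Tier-0 gateway through the NSX Policy API and places its service router on an edge cluster. The NSX Manager address, credentials, gateway IDs, and the edge cluster path are placeholder assumptions.

    # Sketch: Tier-1 gateway connected to an existing Tier-0 gateway and bound
    # to an NSX Edge cluster so that it can host stateful services
    # (VCF-NSX-BGP-REQD-CFG-004 and -005). All IDs and paths are placeholders.
    import requests

    NSX = "https://nsx01.example.com"
    AUTH = ("admin", "password")

    tier1 = {
        "display_name": "t1-sddc",
        "tier0_path": "/infra/tier-0s/t0-sddc",        # existing Tier-0 gateway
        "failover_mode": "NON_PREEMPTIVE",             # per VCF-NSX-BGP-RCMD-CFG-006
        "route_advertisement_types": ["TIER1_CONNECTED"],  # advertise connected segments
    }
    requests.patch(f"{NSX}/policy/api/v1/infra/tier-1s/t1-sddc",
                   json=tier1, auth=AUTH, verify=False).raise_for_status()

    # Adding a locale service with an edge cluster path instantiates the
    # active-standby Tier-1 service router on that edge cluster.
    locale = {"edge_cluster_path":
              "/infra/sites/default/enforcement-points/default"
              "/edge-clusters/<edge-cluster-uuid>"}
    requests.patch(f"{NSX}/policy/api/v1/infra/tier-1s/t1-sddc/locale-services/default",
                   json=locale, auth=AUTH, verify=False).raise_for_status()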


Table 8-20. BGP Routing Design Requirements for Stretched Clusters in VMware Cloud Foundation

VCF-NSX-BGP-REQD-CFG-006
  Design Requirement: Extend the uplink VLANs to the top of rack switches so that the VLANs are stretched between both availability zones.
  Justification: Because the NSX Edge nodes fail over between the availability zones, ensures uplink connectivity to the top of rack switches in both availability zones regardless of the zone the NSX Edge nodes are presently in.
  Implication: You must configure a stretched Layer 2 network between the availability zones by using physical network infrastructure.

VCF-NSX-BGP-REQD-CFG-007
  Design Requirement: Provide this SVI configuration on the top of rack switches:
  - In the second availability zone, configure the top of rack switches or upstream Layer 3 devices with an SVI on each of the two uplink VLANs.
  - Make the top of rack switch SVIs in both availability zones part of a common stretched Layer 2 network between the availability zones.
  Justification: Enables the communication of the NSX Edge nodes to the top of rack switches in both availability zones over the same uplink VLANs.
  Implication: You must configure a stretched Layer 2 network between the availability zones by using the physical network infrastructure.

VCF-NSX-BGP-REQD-CFG-008
  Design Requirement: Provide this VLAN configuration:
  - Use two VLANs to enable ECMP between the Tier-0 gateway and the Layer 3 devices (top of rack or leaf switches).
  - The ToR switches or upstream Layer 3 devices have an SVI on one of the two VLANs, and each NSX Edge node has an interface on each VLAN.
  Justification: Supports multiple equal-cost routes on the Tier-0 gateway, and provides more resiliency and better bandwidth use in the network.
  Implication:
  - Extra VLANs are required.
  - Requires stretching uplink VLANs between availability zones.

VCF-NSX-BGP-REQD-CFG-009
  Design Requirement: Create an IP prefix list that permits access to route advertisement by any network instead of using the default IP prefix list.
  Justification: Used in a route map to prepend a path to one or more autonomous systems (AS-path prepend) for BGP neighbors in the second availability zone.
  Implication: You must manually create an IP prefix list that is identical to the default one.

VCF-NSX-BGP-REQD-CFG-010
  Design Requirement: Create a route map-out that contains the custom IP prefix list and an AS-path prepend value set to the Tier-0 local AS added twice (see the sketch after this table).
  Justification:
  - Used for configuring neighbor relationships with the Layer 3 devices in the second availability zone.
  - Ensures that all ingress traffic passes through the first availability zone.
  Implication: You must manually create the route map. The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or in the availability zone occurs.

VCF-NSX-BGP-REQD-CFG-011
  Design Requirement: Create an IP prefix list that permits access to route advertisement by network 0.0.0.0/0 instead of using the default IP prefix list.
  Justification: Used in a route map to configure the local preference on the learned default route for BGP neighbors in the second availability zone.
  Implication: You must manually create an IP prefix list that is identical to the default one.

VCF-NSX-BGP-REQD-CFG-012
  Design Requirement: Apply a route map-in that contains the IP prefix list for the default route 0.0.0.0/0 and assigns a lower local-preference, for example, 80, to the learned default route and a lower local-preference, for example, 90, to any other learned routes.
  Justification:
  - Used for configuring neighbor relationships with the Layer 3 devices in the second availability zone.
  - Ensures that all egress traffic passes through the first availability zone.
  Implication: You must manually create the route map. The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or in the availability zone occurs.

VCF-NSX-BGP-REQD-CFG-013
  Design Requirement: Configure the neighbors of the second availability zone to use the route maps as In and Out filters respectively.
  Justification: Makes the path in and out of the second availability zone less preferred because the AS path is longer and the local preference is lower. As a result, all traffic passes through the first zone.
  Implication: The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or in the availability zone occurs.
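
The prefix lists and route maps from VCF-NSX-BGP-REQD-CFG-009 through -012 can be created on the Tier-0 gateway through the NSX Policy API. The following Python sketch illustrates the intent; the Tier-0 gateway ID, local AS number, and object names are placeholder assumptions, and the route maps must still be attached to the second availability zone BGP neighbors as In and Out filters (VCF-NSX-BGP-REQD-CFG-013).

    # Sketch: prefix lists and route maps that make the second availability
    # zone the less-preferred BGP path (VCF-NSX-BGP-REQD-CFG-009 to -012).
    # Tier-0 gateway ID, local ASN, and object names are placeholders.
    import requests

    NSX = "https://nsx01.example.com"
    AUTH = ("admin", "password")
    T0 = f"{NSX}/policy/api/v1/infra/tier-0s/t0-sddc"
    LOCAL_AS = "65100"

    def patch(url, body):
        r = requests.patch(url, json=body, auth=AUTH, verify=False)  # lab use only
        r.raise_for_status()

    # Prefix list permitting any network (replaces use of the default list).
    patch(f"{T0}/prefix-lists/any-prefixes",
          {"prefixes": [{"network": "ANY", "action": "PERMIT"}]})

    # Prefix list matching only the learned default route.
    patch(f"{T0}/prefix-lists/default-route",
          {"prefixes": [{"network": "0.0.0.0/0", "action": "PERMIT"}]})

    # Route map out: prepend the local AS twice so ingress prefers AZ1.
    patch(f"{T0}/route-maps/rm-out-az2",
          {"entries": [{"action": "PERMIT",
                        "prefix_list_matches":
                            ["/infra/tier-0s/t0-sddc/prefix-lists/any-prefixes"],
                        "set": {"as_path_prepend": f"{LOCAL_AS} {LOCAL_AS}"}}]})

    # Route map in: lower local preference on routes learned in AZ2 so egress
    # prefers AZ1 (80 for the default route, 90 for all other routes).
    patch(f"{T0}/route-maps/rm-in-az2",
          {"entries": [{"action": "PERMIT",
                        "prefix_list_matches":
                            ["/infra/tier-0s/t0-sddc/prefix-lists/default-route"],
                        "set": {"local_preference": 80}},
                       {"action": "PERMIT",
                        "prefix_list_matches":
                            ["/infra/tier-0s/t0-sddc/prefix-lists/any-prefixes"],
                        "set": {"local_preference": 90}}]})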


Table 8-21. BGP Routing Design Requirements for NSX Federation in VMware Cloud Foundation

VCF-NSX-BGP-REQD-CFG-014
  Design Requirement: Extend the Tier-0 gateway to the second VMware Cloud Foundation instance.
  Justification:
  - Supports ECMP north-south routing on all nodes in the NSX Edge cluster.
  - Enables support for cross-instance Tier-1 gateways and cross-instance network segments.
  Implication: The Tier-0 gateway deployed in the second instance is removed.

VCF-NSX-BGP-REQD-CFG-015
  Design Requirement: Set the Tier-0 gateway as primary for all VMware Cloud Foundation instances.
  Justification:
  - In NSX Federation, a Tier-0 gateway lets egress traffic from connected Tier-1 gateways pass only in its primary locations.
  - Local ingress and egress traffic is controlled independently at the Tier-1 level. No segments are provisioned directly to the Tier-0 gateway.
  - A mixture of network spans (local to a VMware Cloud Foundation instance or spanning multiple instances) is enabled without requiring additional Tier-0 gateways and hence edge nodes.
  - If a failure in a VMware Cloud Foundation instance occurs, the local-instance networking in the other instance remains available without manual intervention.
  Implication: None.

VCF-NSX-BGP-REQD-CFG-016
  Design Requirement: From the global Tier-0 gateway, establish BGP neighbor peering to the ToR switches connected to the second VMware Cloud Foundation instance.
  Justification:
  - Enables the learning and advertising of routes in the second VMware Cloud Foundation instance.
  - Facilitates a potential automated failover of networks from the first to the second VMware Cloud Foundation instance.
  Implication: None.

VCF-NSX-BGP-REQD-CFG-017
  Design Requirement: Use a stretched Tier-1 gateway and connect it to the Tier-0 gateway for cross-instance networking.
  Justification:
  - Enables network span between the VMware Cloud Foundation instances because NSX network segments follow the span of the gateway they are attached to.
  - Creates a two-tier routing architecture.
  Implication: None.

VCF-NSX-BGP-REQD-CFG-018
  Design Requirement: Assign the NSX Edge cluster in each VMware Cloud Foundation instance to the stretched Tier-1 gateway. Set the first VMware Cloud Foundation instance as primary and the second instance as secondary.
  Justification:
  - Enables cross-instance network span between the first and second VMware Cloud Foundation instances.
  - Enables deterministic ingress and egress traffic for the cross-instance network.
  - If a VMware Cloud Foundation instance failure occurs, enables deterministic failover of the Tier-1 traffic flow.
  - During the recovery of the inaccessible VMware Cloud Foundation instance, enables deterministic failback of the Tier-1 traffic flow, preventing unintended asymmetrical routing.
  - Eliminates the need to use BGP attributes in the first and second VMware Cloud Foundation instances to influence location preference and failover.
  Implication: You must manually fail over and fail back the cross-instance network from the standby NSX Global Manager.

VCF-NSX-BGP-REQD-CFG-019
  Design Requirement: Assign the NSX Edge cluster in each VMware Cloud Foundation instance to the local Tier-1 gateway for that VMware Cloud Foundation instance.
  Justification:
  - Enables instance-specific networks to be isolated to their specific instances.
  - Enables deterministic flow of ingress and egress traffic for the instance-specific networks.
  Implication: You can use the service router that is created for the Tier-1 gateway for networking services. However, such configuration is not required for network connectivity.

VCF-NSX-BGP-REQD-CFG-020
  Design Requirement: Set each local Tier-1 gateway only as primary in that instance. Avoid setting the gateway as secondary in the other instances.
  Justification: Prevents the need to use BGP attributes in primary and secondary instances to influence the instance ingress-egress preference.
  Implication: None.


BGP Routing Design Recommendations


In your routing design for a single VMware Cloud Foundation instance, you can apply certain best
practices for standard and stretched clusters. For NSX Federation, additional recommendations
are available.

Table 8-22. BGP Routing Design Recommendations for VMware Cloud Foundation

VCF-NSX-BGP-RCMD-CFG-001
  Design Recommendation: Deploy an active-active Tier-0 gateway.
  Justification: Supports ECMP north-south routing on all edge nodes in the NSX Edge cluster.
  Implication: Active-active Tier-0 gateways cannot provide stateful services such as NAT.

VCF-NSX-BGP-RCMD-CFG-002
  Design Recommendation: Configure the BGP Keep Alive Timer to 4 and Hold Down Timer to 12 or lower between the top of rack switches and the Tier-0 gateway (see the sketch after this table).
  Justification: Provides a balance between failure detection between the top of rack switches and the Tier-0 gateway, and overburdening the top of rack switches with keep-alive traffic.
  Implication: By using longer timers to detect if a router is not responding, the data about such a router remains in the routing table longer. As a result, the active router continues to send traffic to a router that is down. These timers must be aligned with the data center fabric design of your organization.

VCF-NSX-BGP-RCMD-CFG-003
  Design Recommendation: Do not enable Graceful Restart between BGP neighbors.
  Justification: Avoids loss of traffic. On the Tier-0 gateway, BGP peers from all the gateways are always active. On a failover, the Graceful Restart capability increases the time a remote neighbor takes to select an alternate Tier-0 gateway. As a result, BFD-based convergence is delayed.
  Implication: None.

VCF-NSX-BGP-RCMD-CFG-004
  Design Recommendation: Enable helper mode for Graceful Restart mode between BGP neighbors.
  Justification: Avoids loss of traffic. During a router restart, helper mode works with the graceful restart capability of upstream routers to maintain the forwarding table, which in turn continues to forward packets to a restarting neighbor even after the BGP timers have expired, avoiding loss of traffic.
  Implication: None.

VCF-NSX-BGP-RCMD-CFG-005
  Design Recommendation: Enable Inter-SR iBGP routing.
  Justification: In the event that an edge node has all of its northbound eBGP sessions down, north-south traffic continues to flow by routing traffic to a different edge node.
  Implication: None.

VCF-NSX-BGP-RCMD-CFG-006
  Design Recommendation: Deploy a Tier-1 gateway in non-preemptive failover mode.
  Justification: Ensures that after a failed NSX Edge transport node is back online, it does not take over the gateway services, thus preventing a short service outage.
  Implication: None.

VCF-NSX-BGP-RCMD-CFG-007
  Design Recommendation: Enable standby relocation of the Tier-1 gateway.
  Justification: Ensures that if an edge failure occurs, a standby Tier-1 gateway is created on another edge node.
  Implication: None.
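
Rows VCF-NSX-BGP-RCMD-CFG-002 through -004 map to per-neighbor settings on the Tier-0 gateway. A minimal Python sketch of the corresponding NSX Policy API call follows; the Tier-0 and locale-services IDs, neighbor ID and address, and AS number are placeholder assumptions for your environment.

    # Sketch: BGP neighbor with 4/12 second keep-alive and hold-down timers and
    # graceful restart in helper-only mode (VCF-NSX-BGP-RCMD-CFG-002 to -004).
    # Tier-0 ID, locale-services ID, neighbor address, and ASN are placeholders.
    import requests

    NSX = "https://nsx01.example.com"
    AUTH = ("admin", "password")

    neighbor = {
        "neighbor_address": "172.27.11.1",   # ToR switch SVI on uplink VLAN 1
        "remote_as_num": "65011",            # data center BGP ASN
        "keep_alive_time": 4,
        "hold_down_time": 12,
        "graceful_restart_mode": "HELPER_ONLY",
    }
    r = requests.patch(
        f"{NSX}/policy/api/v1/infra/tier-0s/t0-sddc/locale-services/default"
        "/bgp/neighbors/tor1-vlan1",
        json=neighbor, auth=AUTH, verify=False)  # lab use only
    r.raise_for_status()

The same timers must be set on the ToR switch side of each peering so that both ends of the session agree on failure detection behavior.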

Table 8-23. BGP Routing Design Recommendations for NSX Federation in VMware Cloud Foundation

VCF-NSX-BGP-RCMD-CFG-008
  Design Recommendation: Use Tier-1 gateways to control the span of networks and ingress and egress traffic in the VMware Cloud Foundation instances.
  Justification: Enables a mixture of network spans (isolated to a VMware Cloud Foundation instance or spanning multiple instances) without requiring additional Tier-0 gateways and hence edge nodes.
  Implication: To control location span, a Tier-1 gateway must be assigned to an edge cluster and hence has the Tier-1 SR component. East-west traffic between Tier-1 gateways with SRs needs to physically traverse an edge node.

VCF-NSX-BGP-RCMD-CFG-009
  Design Recommendation: Allocate a Tier-1 gateway in each instance for instance-specific networks and connect it to the stretched Tier-0 gateway.
  Justification:
  - Creates a two-tier routing architecture.
  - Enables local-instance networks that do not span between the VMware Cloud Foundation instances.
  - Guarantees that local-instance networks remain available if a failure occurs in another VMware Cloud Foundation instance.
  Implication: None.


Overlay Design for VMware Cloud Foundation


As part of the overlay design, you determine the NSX configuration for handling traffic between
workloads, management or customer, in VMware Cloud Foundation. You determine the NSX
segments and the transport zones.

Logical Overlay Design for VMware Cloud Foundation


This conceptual design provides the network virtualization design of the logical components
that handle the data to and from the workloads in the environment. For an environment with
multiple VMware Cloud Foundation instances, you replicate the design of the first VMware Cloud
Foundation instance to the additional VMware Cloud Foundation instances.

ESXi Host Transport Nodes


A transport node in NSX is a node that is capable of participating in an NSX data plane. The
workload domains contain multiple ESXi hosts in a vSphere cluster to support management or
customer workloads. You register these ESXi hosts as transport nodes so that networks and
workloads on that host can use the capabilities of NSX. During the preparation process, the
native vSphere Distributed Switch for the workload domain is extended with NSX capabilities.

Virtual Segments
Geneve provides the overlay capability to create isolated, multi-tenant broadcast domains in NSX
across data center fabrics, and enables customers to create elastic, logical networks that span
physical network boundaries, and physical locations.

Transport Zones
A transport zone identifies the type of traffic, VLAN or overlay, and the vSphere Distributed Switch name. You can configure one or more VLAN transport zones and a single overlay transport zone per virtual switch. A transport zone does not represent a security boundary. VMware Cloud Foundation supports a single overlay transport zone per NSX instance. All vSphere clusters, within and across workload domains, that share the same NSX instance subsequently share the same overlay transport zone.


Figure 8-10. Transport Zone Design
(Diagram: the NSX Edge nodes, with their N-VDS, and the ESXi hosts, with the vSphere Distributed Switch with NSX, all attach to the single overlay transport zone. The edge nodes also attach to a VLAN transport zone for the edge uplinks to the ToR switches, and the ESXi hosts can attach to an optional VLAN transport zone for workload VLANs.)

Uplink Policy for ESXi Host Transport Nodes


Uplink profiles define policies for the links from ESXi hosts to NSX segments or from NSX
Edge appliances to top of rack switches. By using uplink profiles, you can apply consistent
configuration of capabilities for network adapters across multiple ESXi hosts or NSX Edge nodes.

Uplink profiles can use either load balance source or failover order teaming. If using load balance
source, multiple uplinks can be active. If using failover order, only a single uplink can be active.

Replication Mode of Segments


The control plane decouples NSX from the physical network. The control plane handles the
broadcast, unknown unicast, and multicast (BUM) traffic in the virtual segments.

The following options are available for BUM replication on segments.


Table 8-24. BUM Replication Modes of NSX Segments

BUM Replication Mode Description

Hierarchical Two-Tier The ESXi host transport nodes are grouped according
to their TEP IP subnet. One ESXi host in each subnet
is responsible for replication to an ESXi host in another
subnet. The receiving ESXi host replicates the traffic to
the ESXi hosts in its local subnet.
The source ESXi host transport node knows about the
groups based on information it has received from the
control plane. The system can select an arbitrary ESXi
host transport node as the mediator for the source subnet
if the remote mediator ESXi host node is available.

Head-End The ESXi host transport node at the origin of the frame
to be flooded on a segment sends a copy to every
other ESXi host transport node that is connected to this
segment.

Overlay Design Requirements and Recommendations for VMware Cloud Foundation

Consider the requirements for the configuration of the ESXi hosts in a workload domain as NSX transport nodes, the transport zone layout, and uplink teaming policies, and the best practices for IP allocation and BUM replication mode in a VMware Cloud Foundation deployment.

Overlay Design Requirements


You must meet the following design requirements in your overlay design.

Table 8-25. Overlay Design Requirements for VMware Cloud Foundation

VCF-NSX-OVERLAY-REQD-CFG-001
  Design Requirement: Configure all ESXi hosts in the workload domain as transport nodes in NSX.
  Justification: Enables distributed routing, logical segments, and distributed firewall.
  Implication: None.

VCF-NSX-OVERLAY-REQD-CFG-002
  Design Requirement: Configure each ESXi host as a transport node by using transport node profiles.
  Justification:
  - Enables the participation of ESXi hosts and the virtual machines running on them in NSX overlay and VLAN networks.
  - Transport node profiles can only be applied at the cluster level.
  Implication: None.

VCF-NSX-OVERLAY-REQD-CFG-003
  Design Requirement: To provide virtualized network capabilities to workloads, use overlay networks with NSX Edge nodes and distributed routing.
  Justification:
  - Creates isolated, multi-tenant broadcast domains across data center fabrics to deploy elastic, logical networks that span physical network boundaries.
  - Enables advanced deployment topologies by introducing Layer 2 abstraction from the data center networks.
  Implication: Requires configuring transport networks with an MTU size of at least 1,600 bytes.

VCF-NSX-OVERLAY-REQD-CFG-004
  Design Requirement: Create a single overlay transport zone in the NSX instance for all overlay traffic across the host and NSX Edge transport nodes of the workload domain.
  Justification:
  - Ensures that overlay segments are connected to an NSX Edge node for services and north-south routing.
  - Ensures that all segments are available to all ESXi hosts and NSX Edge nodes configured as transport nodes.
  Implication: All clusters in all workload domains that share the same NSX Manager share the same transport zone.

VCF-NSX-OVERLAY-REQD-CFG-005
  Design Requirement: Create an uplink profile with a load balance source teaming policy with two active uplinks for ESXi hosts.
  Justification: For increased resiliency and performance, supports the concurrent use of both physical NICs on the ESXi hosts that are configured as transport nodes.
  Implication: None.

Overlay Design Recommendations


In your overlay design for VMware Cloud Foundation, you can apply certain best practices.


Table 8-26. Overlay Design Recommendations for VMware Cloud Foundation

VCF-NSX-OVERLAY-RCMD-CFG-001
  Design Recommendation: Use static IP pools to assign IP addresses to the host TEP interfaces (see the sketch after this table).
  Justification:
  - Removes the need for an external DHCP server for the host overlay VLANs.
  - You can use NSX Manager to verify static IP pool configurations.
  Implication: None.

VCF-NSX-OVERLAY-RCMD-CFG-002
  Design Recommendation: Use hierarchical two-tier replication on all overlay segments.
  Justification: Hierarchical two-tier replication is more efficient because it reduces the number of ESXi hosts the source ESXi host must replicate traffic to if hosts have different TEP subnets. This is typically the case with more than one cluster and will improve performance in that scenario.
  Implication: None.
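
A host TEP static IP pool (VCF-NSX-OVERLAY-RCMD-CFG-001) can be defined through the NSX Policy API, as in the following minimal Python sketch. The pool name, CIDR, allocation range, and gateway are placeholder assumptions for the host overlay subnet of your environment.

    # Sketch: static IP pool for host TEP addresses on the host overlay VLAN
    # (VCF-NSX-OVERLAY-RCMD-CFG-001). Pool ID, subnet, range, and gateway are
    # placeholders for your host overlay network.
    import requests

    NSX = "https://nsx01.example.com"
    AUTH = ("admin", "password")

    # Create the pool container.
    requests.patch(f"{NSX}/policy/api/v1/infra/ip-pools/host-tep-pool",
                   json={"display_name": "host-tep-pool"},
                   auth=AUTH, verify=False).raise_for_status()  # lab use only

    # Add a static subnet with the allocation range for the host TEPs.
    subnet = {
        "resource_type": "IpAddressPoolStaticSubnet",
        "cidr": "172.27.13.0/24",
        "allocation_ranges": [{"start": "172.27.13.10", "end": "172.27.13.200"}],
        "gateway_ip": "172.27.13.1",
    }
    requests.patch(f"{NSX}/policy/api/v1/infra/ip-pools/host-tep-pool"
                   "/ip-subnets/host-tep-subnet",
                   json=subnet, auth=AUTH, verify=False).raise_for_status()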

Table 8-27. Overlay Design Recommendations for Stretched Clusters in VMware Cloud Foundation

VCF-NSX-OVERLAY-RCMD-CFG-003
  Design Recommendation: Configure an NSX sub-transport node profile.
  Justification:
  - You can use static IP pools for the host TEPs in each availability zone.
  - The NSX transport node profile can remain attached when using two separate VLANs for host TEPs at each availability zone, as required for clusters that are based on vSphere Lifecycle Manager images.
  - Using an external DHCP server for the host overlay VLANs in both availability zones is not required.
  Implication: Changes to the host transport node configuration are done at the vSphere cluster level.


Application Virtual Network Design for VMware Cloud Foundation

In VMware Cloud Foundation, you place VMware Aria Suite components on a pre-defined configuration of NSX segments (known as application virtual networks or AVNs) for dynamic routing and load balancing.

Logical Application Virtual Network Design for VMware Cloud Foundation

NSX segments provide flexibility for workload placement by removing the dependence on traditional physical data center networks. This approach also improves the security and mobility of the management applications, and reduces the integration effort with existing customer networks.

Table 8-28. Comparing Application Virtual Network Types

Benefits
  Overlay-backed NSX segments:
  - Supports IP mobility with dynamic routing.
  - Limits the number of VLANs needed in the data center fabric.
  - In an environment with multiple availability zones, limits the number of VLANs needed to expand from an architecture with one availability zone to an architecture with two availability zones.
  VLAN-backed NSX segments:
  - Uses the data center fabric for the network segment and the next-hop gateway.

Requirement
  Overlay-backed NSX segments:
  - Requires routing between the data center fabric and the NSX Edge nodes.


Figure 8-11. Application Virtual Networks in VMware Cloud Foundation
(Diagram: the ToR switches connect over ECMP to an active/active Tier-0 gateway on the NSX Edge cluster. A cross-instance Tier-1 gateway serves the cross-instance NSX segment and IP subnet, hosting VMware Aria Suite Lifecycle, the clustered Workspace ONE Access, and VMware Aria Suite components with cross-instance mobility. A local-instance Tier-1 gateway serves the local-instance NSX segment and IP subnet, hosting local-instance VMware Aria Suite components. vCenter Server and SDDC Manager of the VI workload domain remain on the management network.)

For the design for specific VMware Aria Suite components, see this design and VMware
Validated Solutions. For identity and access management design for NSX, see Identity and
Access Management for VMware Cloud Foundation.

Important If you plan to use NSX Federation in the management domain, create the AVNs
before you enable the federation. Creating AVNs in an environment where NSX Federation is
already active is not supported.

With NSX Federation, an NSX segment can span multiple instances of NSX and VMware Cloud
Foundation. A single network segment can be available in different physical locations over
the NSX SDN. In an environment with multiple VMware Cloud Foundation instances, the cross-
instance NSX network in the management domain is extended between the first two instances.
This configuration provides IP mobility for management components which fail over from the first
to the second instance.


Application Virtual Network Design Requirements and Recommendations for VMware Cloud Foundation

Consider the requirements and best practices for the configuration of the NSX segments for using the application virtual networks in VMware Cloud Foundation for a single VMware Cloud Foundation instance or multiple VMware Cloud Foundation instances.

Application Virtual Network Design Requirements


You must meet the following design requirements in your Application Virtual Network design
for a single VMware Cloud Foundation instance and for multiple VMware Cloud Foundation
instances.

Table 8-29. Application Virtual Network Design Requirements for VMware Cloud Foundation

VCF-NSX-AVN-REQD-CFG-001
  Design Requirement: Create one cross-instance NSX segment for the components of a VMware Aria Suite application or another solution that requires mobility between VMware Cloud Foundation instances.
  Justification: Prepares the environment for the deployment of solutions on top of VMware Cloud Foundation, such as VMware Aria Suite, without a complex physical network configuration. The components of the VMware Aria Suite application must be easily portable between VMware Cloud Foundation instances without requiring reconfiguration.
  Implication: Each NSX segment requires a unique IP address space.

VCF-NSX-AVN-REQD-CFG-002
  Design Requirement: Create one or more local-instance NSX segments for the components of a VMware Aria Suite application or another solution that are assigned to a specific VMware Cloud Foundation instance.
  Justification: Prepares the environment for the deployment of solutions on top of VMware Cloud Foundation, such as VMware Aria Suite, without a complex physical network configuration.
  Implication: Each NSX segment requires a unique IP address space.


Table 8-30. Application Virtual Network Design Requirements for NSX Federation in VMware Cloud Foundation

VCF-NSX-AVN-REQD-CFG-003
Design Requirement: Extend the cross-instance NSX segment to the second VMware Cloud Foundation instance.
Justification: Enables workload mobility without a complex physical network configuration. The components of a VMware Aria Suite application must be easily portable between VMware Cloud Foundation instances without requiring reconfiguration.
Implication: Each NSX segment requires a unique IP address space.

VCF-NSX-AVN-REQD-CFG-004
Design Requirement: In each VMware Cloud Foundation instance, create additional local-instance NSX segments.
Justification: Enables workload mobility within a VMware Cloud Foundation instance without complex physical network configuration. Each VMware Cloud Foundation instance should have network segments to support workloads which are isolated to that VMware Cloud Foundation instance.
Implication: Each NSX segment requires a unique IP address space.

VCF-NSX-AVN-REQD-CFG-005
Design Requirement: In each VMware Cloud Foundation instance, connect or migrate the local-instance NSX segments to the corresponding local-instance Tier-1 gateway.
Justification: Configures local-instance NSX segments at required sites only.
Implication: Requires an individual Tier-1 gateway for local-instance segments.

Application Virtual Network Design Recommendations


In your Application Virtual Network design for VMware Cloud Foundation, you can apply certain best practices.

Table 8-31. Application Virtual Network Design Recommendations for VMware Cloud Foundation

VCF-NSX-AVN-RCMD-CFG-001
Design Recommendation: Use overlay-backed NSX segments.
Justification:
n Supports expansion to deployment topologies for multiple VMware Cloud Foundation instances.
n Limits the number of VLANs required for the data center fabric.
Implication: Using overlay-backed NSX segments requires routing, eBGP recommended, between the data center fabric and edge nodes.
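As an illustration of what such an overlay-backed segment looks like at the API level, the following minimal Python sketch creates one through the NSX Policy API. The NSX Manager FQDN, segment and gateway names, transport zone path, subnet, and credentials are hypothetical placeholders; in VMware Cloud Foundation, SDDC Manager automates this configuration when the Application Virtual Networks are created.

```python
# Minimal sketch: create an overlay-backed NSX segment through the NSX
# Policy API. All names, paths, and credentials are hypothetical; in
# VMware Cloud Foundation, SDDC Manager automates this step.
import requests

NSX_MANAGER = "nsx-mgmt.example.com"    # assumption: NSX Manager FQDN
AUTH = ("admin", "nsx_admin_password")  # assumption: local admin credentials

segment = {
    "display_name": "xreg-seg01",  # hypothetical cross-instance segment name
    # Attach the segment to the cross-instance Tier-1 gateway.
    "connectivity_path": "/infra/tier-1s/xreg-tier1",
    # Overlay transport zone shared by the edge and host transport nodes.
    "transport_zone_path": (
        "/infra/sites/default/enforcement-points/default"
        "/transport-zones/overlay-tz"
    ),
    # The gateway address defines the segment's unique IP subnet.
    "subnets": [{"gateway_address": "192.168.11.1/24"}],
}

resp = requests.patch(
    f"https://{NSX_MANAGER}/policy/api/v1/infra/segments/xreg-seg01",
    json=segment,
    auth=AUTH,
    verify=False,  # lab-only: skip certificate validation
)
resp.raise_for_status()
print("Segment request accepted:", resp.status_code)
```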


Load Balancing Design for VMware Cloud Foundation


Following the principles of this design and of each product, you deploy and configure NSX load
balancing services to support VMware Aria Suite and Workspace ONE Access components.

Logical Load Balancing Design for VMware Cloud Foundation


The logical load balancer capability in NSX offers a high-availability service for applications in
VMware Cloud Foundation and distributes the network traffic load among multiple servers.

A standalone Tier-1 gateway is created to provide load balancing services with a service interface
on the cross-instance application virtual network.


Figure 8-12. NSX Logical Load Balancing Design for VMware Cloud Foundation

[Figure shows a standalone NSX Tier-1 gateway running the NSX load balancer service on the NSX Edge cluster, attached to the cross-instance NSX segment alongside the Tier-0 and cross-instance Tier-1 gateways. The three-node NSX Manager cluster, fronted by an internal VIP load balancer, manages the NSX transport nodes (the management cluster ESXi hosts) and relies on supporting infrastructure such as DNS and NTP.]

Load Balancing Design Requirements for VMware Cloud Foundation


Consider the requirements for running a load balancing service including creating a standalone
Tier-1 gateway and connecting it to the client applications. Separate requirements exist for a
single VMware Cloud Foundation instance and for multiple VMware Cloud Foundation instances.

Table 8-32. Load Balancing Design Requirements for VMware Cloud Foundation

VCF-NSX-LB-REQD-CFG-001
Design Requirement: Deploy a standalone Tier-1 gateway to support advanced stateful services such as load balancing for other management components.
Justification: Provides independence between north-south Tier-1 gateways to support advanced deployment scenarios.
Implication: You must add a separate Tier-1 gateway.

VCF-NSX-LB-REQD-CFG-002
Design Requirement: When creating load balancing services for Application Virtual Networks, connect the standalone Tier-1 gateway to the cross-instance NSX segments.
Justification: Provides load balancing to applications connected to the cross-instance network.
Implication: You must connect the gateway to each network that requires load balancing.

VCF-NSX-LB-REQD-CFG-003
Design Requirement: Configure a default static route on the standalone Tier-1 gateway, with the Tier-1 gateway for the segment as the next hop, to provide connectivity to the load balancer.
Justification: Because the Tier-1 gateway is standalone, it does not auto-configure its routes.
Implication: None.
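To illustrate requirement VCF-NSX-LB-REQD-CFG-003, the following minimal Python sketch adds such a default static route through the NSX Policy API. The NSX Manager FQDN, gateway IDs, next-hop address, and credentials are hypothetical placeholders; in practice this route is created as part of the automated load balancer configuration.

```python
# Minimal sketch: add a default static route to a standalone Tier-1
# gateway through the NSX Policy API. IDs, addresses, and credentials
# are hypothetical placeholders.
import requests

NSX_MANAGER = "nsx-mgmt.example.com"
AUTH = ("admin", "nsx_admin_password")

route = {
    "display_name": "default-route",
    # Default route; the standalone Tier-1 learns no routes dynamically.
    "network": "0.0.0.0/0",
    # Next hop is the Tier-1 gateway serving the cross-instance segment.
    "next_hops": [{"ip_address": "192.168.11.1", "admin_distance": 1}],
}

resp = requests.patch(
    f"https://{NSX_MANAGER}/policy/api/v1/infra/tier-1s/lb-tier1"
    "/static-routes/default-route",
    json=route,
    auth=AUTH,
    verify=False,  # lab-only: skip certificate validation
)
resp.raise_for_status()
```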


Table 8-33. Load Balancing Design Requirements for NSX Federation in VMware Cloud Foundation

VCF-NSX-LB-REQD-CFG-004
Design Requirement: Deploy a standalone Tier-1 gateway in the second VMware Cloud Foundation instance.
Justification: Provides a cold-standby non-global service router instance for the second VMware Cloud Foundation instance to support services on the cross-instance network which require advanced services not currently supported as NSX global objects.
Implication:
n You must add a separate Tier-1 gateway.
n You must manually configure any services and synchronize them between the non-global service router instances in the first and second VMware Cloud Foundation instances.
n To avoid a network conflict between the two VMware Cloud Foundation instances, make sure that the primary and standby networking services are not both active at the same time.

VCF-NSX-LB-REQD-CFG-005
Design Requirement: Connect the standalone Tier-1 gateway in the second VMware Cloud Foundation instance to the cross-instance NSX segment.
Justification: Provides load balancing to applications connected to the cross-instance network in the second VMware Cloud Foundation instance.
Implication: You must connect the gateway to each network that requires load balancing.


VCF-NSX-LB-REQD-CFG-006
Design Requirement: Configure a default static route on the standalone Tier-1 gateway in the second VMware Cloud Foundation instance, with the Tier-1 gateway for the segment it connects with as the next hop, to provide connectivity to the load balancers.
Justification: Because the Tier-1 gateway is standalone, it does not auto-configure its routes.
Implication: None.

VCF-NSX-LB-REQD-CFG-007
Design Requirement: Establish a process to ensure any changes made to the load balancer instance in the first VMware Cloud Foundation instance are manually applied to the disconnected load balancer in the second instance.
Justification: Keeps the network service in the failover load balancer instance ready for activation if a failure in the first VMware Cloud Foundation instance occurs. Because network services are not supported as global objects, you must configure them manually in each VMware Cloud Foundation instance. The load balancer service in one instance must be connected and active, while the service in the other instance must be disconnected and inactive.
Implication:
n Because of incorrect configuration between the VMware Cloud Foundation instances, the load balancer service in the second instance might come online with an invalid or incomplete configuration.
n If both VMware Cloud Foundation instances are online and active at the same time, a conflict between services could occur resulting in a potential outage.
n The administrator must establish and follow an operational practice by using a runbook or automated process to ensure that configuration changes are reproduced in each VMware Cloud Foundation instance.

9 SDDC Manager Design for VMware Cloud Foundation
In VMware Cloud Foundation, operational day-to-day efficiencies are delivered through SDDC
Manager. These efficiencies include full life cycle management tasks such as deployment,
configuration, patching and upgrades.

Read the following topics next:

n Logical Design for SDDC Manager

n SDDC Manager Design Requirements and Recommendations for VMware Cloud Foundation

Logical Design for SDDC Manager


You deploy an SDDC Manager appliance in the management domain for creating VI workload
domains, provisioning additional virtual infrastructure, and life cycle management of the SDDC
management components.


Figure 9-1. Logical Design of SDDC Manager

[Figure shows one SDDC Manager per VCF instance (Instance A and Instance B). Each SDDC Manager provides user interface and API access, connects to external services (My VMware and the VMware Depot), performs infrastructure provisioning and configuration and life cycle management against vCenter Server, NSX, and ESXi, integrates with VMware Aria Suite Lifecycle, uses the vCenter Single Sign-On domain for solution and user authentication, and relies on supporting infrastructure such as shared storage, DNS, NTP, and a certificate authority.]

You use SDDC Manager to perform the following operations:

n Commissioning or decommissioning ESXi hosts

n Deployment of VI workload domains

n Deployment of VMware Aria Suite Lifecycle

n Deployment of NSX Edge clusters in workload domains

n Adding and extending clusters in workload domains

n Life cycle management of the virtual infrastructure components in all workload domains and
of VMware Aria Suite Lifecycle

n Storage management for vVol VASA providers

n Identity provider management

n Composable infrastructure management

n Creation of network pools for host configuration in workload domains

n Product license storage

n Certificate management


n Password management and rotation

n Backup configuration
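These operations are also exposed through the SDDC Manager REST API. As a minimal illustration, the following Python sketch requests an API token and lists the workload domains. The /v1/tokens and /v1/domains endpoints are part of the public VMware Cloud Foundation API; the host name and credentials are hypothetical placeholders.

```python
# Minimal sketch: authenticate to the SDDC Manager API and list workload
# domains. Host name and credentials are hypothetical placeholders.
import requests

SDDC_MANAGER = "sddc-manager.example.com"

# Exchange SSO credentials for an API access token.
token_resp = requests.post(
    f"https://{SDDC_MANAGER}/v1/tokens",
    json={"username": "administrator@vsphere.local", "password": "vcf_password"},
    verify=False,  # lab-only: skip certificate validation
)
token_resp.raise_for_status()
access_token = token_resp.json()["accessToken"]

# List the workload domains known to SDDC Manager.
domains_resp = requests.get(
    f"https://{SDDC_MANAGER}/v1/domains",
    headers={"Authorization": f"Bearer {access_token}"},
    verify=False,
)
domains_resp.raise_for_status()
for domain in domains_resp.json().get("elements", []):
    print(domain["name"], domain["type"], domain["status"])
```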

Table 9-1. SDDC Manager Logical Components

VMware Cloud Foundation Instances with a Single Availability Zone:
n A single SDDC Manager appliance is deployed on the management network.
n vSphere HA protects the SDDC Manager appliance.

VMware Cloud Foundation Instances with Multiple Availability Zones:
n A single SDDC Manager appliance is deployed on the management network.
n vSphere HA protects the SDDC Manager appliance.
n A vSphere DRS rule specifies that the SDDC Manager appliance should run on an ESXi host in the first availability zone.

SDDC Manager Design Requirements and Recommendations for VMware Cloud Foundation
Consider the placement and network design requirements for SDDC Manager, and the best
practices for configuring the access to install and upgrade software bundles.

SDDC Manager Design Requirements


You must meet the following design requirements in your SDDC Manager design.

Table 9-2. SDDC Manager Design Requirements for VMware Cloud Foundation

VCF-SDDCMGR-REQD-CFG-001
Design Requirement: Deploy an SDDC Manager system in the first availability zone of the management domain.
Justification: SDDC Manager is required to perform VMware Cloud Foundation capabilities, such as provisioning VI workload domains, deploying solutions, patching, upgrading, and others.
Implication: None.

VCF-SDDCMGR-REQD-CFG-002
Design Requirement: Deploy SDDC Manager with its default configuration.
Justification: The configuration of SDDC Manager is not customizable and should not be changed from its defaults.
Implication: None.

VCF-SDDCMGR-REQD-CFG-003
Design Requirement: Place the SDDC Manager appliance on the VM management network.
Justification:
n Simplifies IP addressing for management VMs by using the same VLAN and subnet.
n Provides simplified secure access to management VMs in the same VLAN network.
Implication: None.


SDDC Manager Design Recommendations


In your SDDC Manager design, you can apply certain best practices.

Table 9-3. SDDC Manager Design Recommendations for VMware Cloud Foundation

VCF-SDDCMGR-RCMD-CFG-001
Design Recommendation: Connect SDDC Manager to the Internet for downloading software bundles.
Justification: SDDC Manager must be able to download install and upgrade software bundles for deployment of VI workload domains and solutions, and for upgrade, from a repository.
Implication: The rules of your organization might not permit direct access to the Internet. In this case, you must download software bundles for SDDC Manager manually.

VCF-SDDCMGR-RCMD-CFG-002
Design Recommendation: Configure a network proxy to connect SDDC Manager to the Internet.
Justification: Protects SDDC Manager against external attacks from the Internet.
Implication: The proxy must not use authentication because SDDC Manager does not support proxy with authentication.

VCF-SDDCMGR-RCMD-CFG-003
Design Recommendation: Configure SDDC Manager with a VMware Customer Connect account with VMware Cloud Foundation entitlement to check for and download software bundles.
Justification: Software bundles for VMware Cloud Foundation are stored in a repository that is secured with access controls. Sites without an Internet connection can use the local upload option instead.
Implication: Requires the use of a VMware Customer Connect user account with access to VMware Cloud Foundation licensing.

VCF-SDDCMGR-RCMD-CFG-004
Design Recommendation: Configure SDDC Manager with an external certificate authority that is responsible for providing signed certificates.
Justification: Provides increased security by implementing signed certificate generation and replacement across the management components.
Implication: An external certificate authority, such as a Microsoft CA, must be locally available.

10 VMware Aria Suite Lifecycle Design for VMware Cloud Foundation
In VMware Cloud Foundation, VMware Aria Suite Lifecycle provides life cycle management
capabilities for VMware Aria Suite components and Workspace ONE Access, including
automated deployment, configuration, patching, and upgrade, and content management across
VMware Aria Suite products.

You deploy VMware Aria Suite Lifecycle by using SDDC Manager. SDDC Manager deploys
VMware Aria Suite Lifecycle in VMware Cloud Foundation mode. In this mode, VMware Aria Suite
Lifecycle is integrated with SDDC Manager, providing the following benefits:

n Integration with the SDDC Manager inventory to retrieve infrastructure details when creating
environments for Workspace ONE Access and VMware Aria Suite components, such as NSX
segments and vCenter Server details.

n Automation of the NSX load balancer configuration when deploying Workspace ONE Access,
VMware Aria Operations, and VMware Aria Automation.

n Deployment details for VMware Aria Suite Lifecycle environments are populated in the SDDC Manager inventory and can be queried using the SDDC Manager API (see the sketch after this list).

n Day-two workflows in SDDC Manager to connect VMware Aria Operations for Logs and
VMware Aria Operations to workload domains.

n The ability to manage password life cycle for Workspace ONE Access and VMware Aria Suite
components.
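The same environment details can also be read back from VMware Aria Suite Lifecycle's own REST API. The following Python sketch is hedged: the /lcm/lcops/api/v2/environments endpoint path reflects recent 8.x releases and should be verified against the API reference for your version, and the appliance FQDN and credentials are hypothetical placeholders.

```python
# Hedged sketch: list environments managed by VMware Aria Suite Lifecycle.
# The endpoint path and credentials are assumptions; verify against the
# API reference for your release.
import requests

LCM_HOST = "xint-vrslcm01.example.com"   # hypothetical appliance FQDN
AUTH = ("admin@local", "lcm_admin_password")

resp = requests.get(
    f"https://{LCM_HOST}/lcm/lcops/api/v2/environments",
    auth=AUTH,
    verify=False,  # lab-only: skip certificate validation
)
resp.raise_for_status()
for env in resp.json():
    print(env.get("environmentName"), env.get("environmentId"))
```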

For information about deploying VMware Aria Suite components, see VMware Validated
Solutions.

Read the following topics next:

n Logical Design for VMware Aria Suite Lifecycle for VMware Cloud Foundation

n Network Design for VMware Aria Suite Lifecycle

n Data Center and Environment Design for VMware Aria Suite Lifecycle

n Locker Design for VMware Aria Suite Lifecycle

n VMware Aria Suite Lifecycle Design Requirements and Recommendations for VMware Cloud
Foundation


Logical Design for VMware Aria Suite Lifecycle for VMware Cloud Foundation
You deploy VMware Aria Suite Lifecycle to provide life cycle management capabilities for
VMware Aria Suite components and a Workspace ONE Access cluster.

Logical Design
In a VMware Cloud Foundation environment, you use VMware Aria Suite Lifecycle in VMware
Cloud Foundation mode. In this mode, VMware Aria Suite Lifecycle is integrated with VMware
Cloud Foundation in the following way:

n SDDC Manager deploys the VMware Aria Suite Lifecycle appliance. Then, you deploy the
VMware Aria Suite products that are supported by VMware Cloud Foundation by using
VMware Aria Suite Lifecycle.

n Supported versions are controlled by the VMware Aria Suite Lifecycle appliance and Product
Support Packs. See the VMware Interoperability Matrix.

n To orchestrate the deployment, patching, and upgrade of Workspace ONE Access and the
VMware Aria Suite products, VMware Aria Suite Lifecycle communicates with SDDC Manager
and the management domain vCenter Server in the environment.

n SDDC Manager configures the load balancer for Workspace ONE Access, VMware Aria
Operations, and VMware Aria Automation.


Figure 10-1. Logical Design of VMware Aria Suite Lifecycle

[Figure shows VMware Aria Suite Lifecycle running in VMware Cloud Foundation mode in each VCF instance, with user interface and REST API access, integration with SDDC Manager, and the management domain vCenter Server as the life cycle management endpoint. In VCF Instance A, it also integrates with Workspace ONE Access for identity management and manages Workspace ONE Access and the VMware Aria Suite components; in VCF Instance B, it manages VMware Aria Operations for Logs. Both appliances rely on shared storage.]

According to the VMware Cloud Foundation topology deployed, VMware Aria Suite Lifecycle is
deployed in one or more locations and is responsible for the life cycle of the VMware Aria Suite
components in one or more VMware Cloud Foundation instances.

VMware Cloud Foundation instances might be connected for the following reasons:

n Disaster recovery of the VMware Aria Suite components.


n Over-arching management of those instances from the same VMware Aria Suite
deployments.

Table 10-1. VMware Aria Suite Lifecycle Component Layout

VMware Cloud Foundation Instances with a Single Availability Zone:
n A single VMware Aria Suite Lifecycle appliance deployed on the cross-instance NSX segment.
n vSphere HA protects the VMware Aria Suite Lifecycle appliance.
n Life cycle management for Workspace ONE Access and VMware Aria Suite.

VMware Cloud Foundation Instances with Multiple Availability Zones:
n A single VMware Aria Suite Lifecycle appliance deployed on the cross-instance NSX segment.
n vSphere HA protects the VMware Aria Suite Lifecycle appliance.
n A should-run vSphere DRS rule specifies that the VMware Aria Suite Lifecycle appliance should run on an ESXi host in the first availability zone.
n Life cycle management for Workspace ONE Access and VMware Aria Suite.

Connected VMware Cloud Foundation Instances:
n The VMware Aria Suite Lifecycle instance in the first VMware Cloud Foundation instance provides life cycle management for Workspace ONE Access and VMware Aria Suite.
n VMware Aria Suite Lifecycle in each additional VMware Cloud Foundation instance provides life cycle management for VMware Aria Operations for Logs.

Network Design for VMware Aria Suite Lifecycle


For secure access to the UI and API, you place the VMware Aria Suite Lifecycle appliance on an
overlay-backed (recommended) or VLAN-backed Application Virtual Network.

VMware Aria Suite Lifecycle must have routed access to the management VLAN through the
Tier-0 gateway in the NSX instance for the management domain.


Figure 10-2. Network Design for VMware Aria Suite Lifecycle

[Figure shows the VMware Aria Suite Lifecycle appliance in each VCF instance attached to the cross-instance NSX segment behind a cross-instance NSX Tier-1 gateway. The cross-instance NSX Tier-0 gateway connects the segment to the management VLANs, the management and VI workload domain vCenter Server instances, the physical upstream routers, the data center user directory, and the Internet/enterprise network.]

Data Center and Environment Design for VMware Aria Suite Lifecycle
To deploy VMware Aria Suite products by using VMware Aria Suite Lifecycle, you configure
product support, data centers, environment structures, and product specifications.

Product Support
VMware Aria Suite Lifecycle provides several methods to obtain and store product binaries for
the install, patch, and upgrade of the VMware Aria Suite products.

Table 10-2. Methods for Obtaining and Storing Product Binaries

Product Upload: You can upload and discover product binaries on the VMware Aria Suite Lifecycle appliance.

VMware Customer Connect: You can integrate VMware Aria Suite Lifecycle with VMware Customer Connect to access and download VMware Aria Suite product entitlements from an online depot over the Internet. This method simplifies, automates, and organizes the repository.


Data Centers and Environments


VMware Aria Suite Lifecycle supports the deployment and upgrade of VMware Aria Suite
products in a logical environment grouping.

You create data centers and environments in VMware Aria Suite Lifecycle to manage the life
cycle operations on the VMware Aria Suite products and to support the growth of the SDDC.

Table 10-3. VMware Aria Suite Lifecycle Logical Constructs

Datacenter: Represents a geographical or logical location for an organization. Management domain vCenter Server instances are added to specific data centers.

Environment: Is mapped to a data center object. Each environment can contain only one instance of a VMware Aria Suite product.

Table 10-4. Logical Datacenter to vCenter Server Mappings in VMware Aria Suite Lifecycle

Cross-instance
vCenter Server Type:
n Management domain vCenter Server for the local VMware Cloud Foundation instance.
n Management domain vCenter Server for an additional VMware Cloud Foundation instance.
Description: Supports the deployment of cross-instance components, such as Workspace ONE Access, VMware Aria Operations, and VMware Aria Automation, including any per-instance collector components.

Local-instance
vCenter Server Type: Management domain vCenter Server for the local VMware Cloud Foundation instance.
Description: Supports the deployment of VMware Aria Operations for Logs.


Table 10-5. VMware Aria Suite Lifecycle Environment Types

Global Environment: Contains the Workspace ONE Access instance that is required before you can deploy VMware Aria Automation.

VMware Cloud Foundation Mode:
n Infrastructure details for the deployed products, including vCenter Server, networking, DNS, and NTP information, are retrieved from the SDDC Manager inventory.
n Successful deployment details are synced back to the SDDC Manager inventory.
n Limited to one instance of each VMware Aria Suite product.

Standalone Mode:
n Infrastructure details for the deployed products are entered manually.
n Successful deployment details are not synced back to the SDDC Manager inventory.
n Supports deployment of more than one instance of a VMware Aria Suite product.

Note You can deploy new VMware Aria Suite products to the SDDC environment or import
existing product deployments.

Table 10-6. Environment Topologies

Global Environment
n VMware Cloud Foundation Mode: Enabled
n Logical Datacenter: Cross-instance
n Product Components: Workspace ONE Access

Cross-instance
n VMware Cloud Foundation Mode: Enabled
n Logical Datacenter: Cross-instance
n Product Components: VMware Aria Operations analytics nodes, VMware Aria Operations remote collectors, and the VMware Aria Automation cluster

Each instance
n VMware Cloud Foundation Mode: Enabled
n Logical Datacenter: Local-instance
n Product Components: VMware Aria Operations for Logs cluster nodes
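As a hedged illustration of these constructs, the following Python sketch creates a data center object through the VMware Aria Suite Lifecycle API. The /lcm/lcops/api/v2/datacenters endpoint path, field names, location string, appliance FQDN, and credentials are assumptions to verify against the API reference for your release.

```python
# Hedged sketch: create a data center object in VMware Aria Suite
# Lifecycle. Endpoint path and field names are assumptions based on
# 8.x releases; names and credentials are hypothetical placeholders.
import requests

LCM_HOST = "xint-vrslcm01.example.com"
AUTH = ("admin@local", "lcm_admin_password")

datacenter = {
    # Logical location for the cross-instance products.
    "dataCenterName": "cross-instance-dc",
    # Hypothetical location string; format varies by release.
    "primaryLocation": "San Francisco;California;US",
}

resp = requests.post(
    f"https://{LCM_HOST}/lcm/lcops/api/v2/datacenters",
    json=datacenter,
    auth=AUTH,
    verify=False,  # lab-only: skip certificate validation
)
resp.raise_for_status()
print("Data center created:", resp.json())
```

After creating a data center, you would associate the management domain vCenter Server instances with it, following the mappings in Table 10-4.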

Locker Design for VMware Aria Suite Lifecycle


The VMware Aria Suite Lifecycle Locker allows you to secure and manage passwords,
certificates, and licenses for VMware Aria Suite product solutions and integrations.

Passwords
VMware Aria Suite Lifecycle stores passwords in the locker repository which are referenced
during life cycle operations on data centers, environments, products, and integrations.


Table 10-7. Life Cycle Operations Use of Locker Passwords in VMware Aria Suite Lifecycle

Datacenters: vCenter Server credentials for a VMware Aria Suite Lifecycle-to-vSphere integration user.

Environments:
n Global environment default configuration administrator, configadmin.
n Environment password, for example, for product default admin or root password.

Products:
n Product administrator password, for example, the admin password for an individual product.
n Product appliance password, for example, the root password for an individual product.

Certificates
VMware Aria Suite Lifecycle stores certificates in the Locker repository which can be referenced
during product life cycle operations. Externally provided certificates, such as Certificate
Authority-signed certificates, can be imported or certificates can be generated by the VMware
Aria Suite Lifecycle appliance.

Licenses
VMware Aria Suite Lifecycle stores licenses in the Locker repository, which can be referenced during product life cycle operations. Licenses can be validated and added to the repository directly, or imported through an integration with VMware Customer Connect.

VMware Aria Suite Lifecycle Design Requirements and Recommendations for VMware Cloud Foundation
Consider the placement, networking, sizing and high availability requirements for using VMware
Aria Suite Lifecycle for deployment and life cycle management of VMware Aria Suite components
in VMware Cloud Foundation. Apply similar best practices for having VMware Aria Suite Lifecycle
operate in an optimal way.

VMware Aria Suite Lifecycle Design Requirements


You must meet the following design requirements for standard and stretched clusters in
your VMware Aria Suite Lifecycle design for VMware Cloud Foundation. For NSX Federation,
additional requirements exist.


Table 10-8. VMware Aria Suite Lifecycle Design Requirements for VMware Cloud Foundation

VCF-VASL-REQD-CFG-001
Design Requirement: Deploy a VMware Aria Suite Lifecycle instance in the management domain of each VMware Cloud Foundation instance to provide life cycle management for VMware Aria Suite and Workspace ONE Access.
Justification: Provides life cycle management operations for VMware Aria Suite applications and Workspace ONE Access.
Implication: You must ensure that the required resources are available.

VCF-VASL-REQD-CFG-002
Design Requirement: Deploy VMware Aria Suite Lifecycle by using SDDC Manager.
Justification:
n Deploys VMware Aria Suite Lifecycle in VMware Cloud Foundation mode, which enables the integration with the SDDC Manager inventory for product deployment and life cycle management of VMware Aria Suite components.
n Automatically configures the standalone Tier-1 gateway required for load balancing the clustered Workspace ONE Access and VMware Aria Suite components.
Implication: None.

VCF-VASL-REQD-CFG-003
Design Requirement: Allocate an extra 100 GB of storage to the VMware Aria Suite Lifecycle appliance for VMware Aria Suite product binaries.
Justification:
n Provides support for VMware Aria Suite product binaries (install, upgrade, and patch) and content management.
n SDDC Manager automates the creation of the storage.
Implication: None.

VCF-VASL-REQD-CFG-004
Design Requirement: Place the VMware Aria Suite Lifecycle appliance on an overlay-backed (recommended) or VLAN-backed NSX network segment.
Justification: Provides a consistent deployment model for management applications.
Implication: You must use an implementation in NSX to support this networking configuration.

VCF-VASL-REQD-CFG-005
Design Requirement: Import VMware Aria Suite product licenses to the Locker repository for product life cycle operations.
Justification:
n You can review the validity, details, and deployment usage for the license across the VMware Aria Suite products.
n You can reference and use licenses during product life cycle operations, such as deployment and license replacement.
Implication: When using the API, you must specify the Locker ID for the license to be used in the JSON payload.


VCF-VASL-REQD-ENV-001
Design Requirement: Configure datacenter objects in VMware Aria Suite Lifecycle for local and cross-instance VMware Aria Suite deployments and assign the management domain vCenter Server instance to each data center.
Justification: You can deploy and manage the integrated VMware Aria Suite components across the SDDC as a group.
Implication: You must manage a separate datacenter object for the products that are specific to each instance.

VCF-VASL-REQD-ENV-002
Design Requirement: If deploying VMware Aria Operations for Logs, create a local-instance environment in VMware Aria Suite Lifecycle.
Justification: Supports the deployment of an instance of VMware Aria Operations for Logs.
Implication: None.

VCF-VASL-REQD-ENV-003
Design Requirement: If deploying VMware Aria Operations or VMware Aria Automation, create a cross-instance environment in VMware Aria Suite Lifecycle.
Justification:
n Supports deployment and management of the integrated VMware Aria Suite products across VMware Cloud Foundation instances as a group.
n Enables the deployment of instance-specific components, such as VMware Aria Operations remote collectors. In VMware Aria Suite Lifecycle, you can deploy and manage VMware Aria Operations remote collector objects only in an environment that contains the associated cross-instance components.
Implication: You can manage instance-specific components, such as remote collectors, only in an environment that is cross-instance.


VCF-VASL-REQD-SEC-001
Design Requirement: Use the custom vCenter Server role for VMware Aria Suite Lifecycle that has the minimum privileges required to support the deployment and upgrade of VMware Aria Suite products.
Justification: VMware Aria Suite Lifecycle accesses vSphere with the minimum set of permissions that are required to support the deployment and upgrade of VMware Aria Suite products. SDDC Manager automates the creation of the custom role.
Implication: You must maintain the permissions required by the custom role.

VCF-VASL-REQD-SEC-002
Design Requirement: Use the service account in vCenter Server for application-to-application communication from VMware Aria Suite Lifecycle to vSphere. Assign global permissions using the custom role.
Justification:
n Provides the following access control features:
n VMware Aria Suite Lifecycle accesses vSphere with the minimum set of required permissions.
n You can introduce improved accountability in tracking request-response interactions between the components of the SDDC.
n SDDC Manager automates the creation of the service account.
Implication: You must maintain the life cycle and availability of the service account outside of SDDC Manager password rotation.
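Several of the requirements above note that API-driven operations reference Locker entries by ID rather than by value. The following fragment is a hedged illustration of how such references typically appear in a VMware Aria Suite Lifecycle product request payload; the UUIDs, aliases, product version, and field names are hypothetical placeholders following the locker:&lt;type&gt;:&lt;id&gt;:&lt;alias&gt; pattern.

```python
# Hedged illustration: Locker references in a VMware Aria Suite Lifecycle
# product request payload. The UUIDs, aliases, and field names are
# hypothetical placeholders following the locker:<type>:<id>:<alias> pattern.
product_fragment = {
    "id": "vrops",         # hypothetical product identifier
    "version": "8.14.0",   # hypothetical product version
    "properties": {
        # License, certificate, and password entries are referenced by
        # their Locker IDs rather than by value.
        "licenseRef": "locker:license:11111111-2222-3333-4444-555555555555:vrops-license",
        "certificate": "locker:certificate:66666666-7777-8888-9999-000000000000:vrops-cert",
        "productPassword": "locker:password:aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:vrops-admin",
    },
}
print(product_fragment["properties"]["licenseRef"])
```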

Table 10-9. VMware Aria Suite Lifecycle Design Requirements for Stretched Clusters in VMware Cloud Foundation

VCF-VASL-REQD-CFG-006
Design Requirement: For multiple availability zones, add the VMware Aria Suite Lifecycle appliance to the VM group for the first availability zone.
Justification: Ensures that, by default, the VMware Aria Suite Lifecycle appliance is powered on a host in the first availability zone.
Implication: If VMware Aria Suite Lifecycle is deployed after the creation of the stretched management cluster, you must add the VMware Aria Suite Lifecycle appliance to the VM group manually.


Table 10-10. VMware Aria Suite Lifecycle Design Requirements for NSX Federation in VMware Cloud Foundation

VCF-VASL-REQD-CFG-007
Design Requirement: Configure the DNS settings for the VMware Aria Suite Lifecycle appliance to use DNS servers in each instance.
Justification: Improves resiliency in the event of an outage of external services for a VMware Cloud Foundation instance.
Implication: As you scale from a deployment with a single VMware Cloud Foundation instance to one with multiple VMware Cloud Foundation instances, the DNS settings of the VMware Aria Suite Lifecycle appliance must be updated.

VCF-VASL-REQD-CFG-008
Design Requirement: Configure the NTP settings for the VMware Aria Suite Lifecycle appliance to use NTP servers in each VMware Cloud Foundation instance.
Justification: Improves resiliency if an outage of external services for a VMware Cloud Foundation instance occurs.
Implication: As you scale from a deployment with a single VMware Cloud Foundation instance to one with multiple VMware Cloud Foundation instances, the NTP settings on the VMware Aria Suite Lifecycle appliance must be updated.

VCF-VASL-REQD-ENV-004
Design Requirement: Assign the management domain vCenter Server instance in the additional VMware Cloud Foundation instance to the cross-instance data center.
Justification: Supports the deployment of VMware Aria Operations remote collectors in an additional VMware Cloud Foundation instance.
Implication: None.

VMware Aria Suite Lifecycle Design Recommendations


In your VMware Aria Suite Lifecycle design for VMware Cloud Foundation, you can apply certain best practices.


Table 10-11. VMware Aria Suite Lifecycle Design Recommendations for VMware Cloud Foundation

VCF-VASL-RCMD-CFG-001
Design Recommendation: Protect VMware Aria Suite Lifecycle by using vSphere HA.
Justification: Supports the availability objectives for VMware Aria Suite Lifecycle without requiring manual intervention during a failure event.
Implication: None.

VCF-VASL-RCMD-LCM-001
Design Recommendation: Obtain product binaries for install, patch, and upgrade in VMware Aria Suite Lifecycle from VMware Customer Connect.
Justification:
n You can upgrade VMware Aria Suite products based on their general availability and endpoint interoperability rather than being listed as part of the VMware Cloud Foundation bill of materials (BOM).
n You can deploy and manage binaries in environments that do not allow access to the Internet (dark sites).
Implication: The site must have an Internet connection to use VMware Customer Connect. Sites without an Internet connection should use the local upload option instead.

VCF-VASL-RCMD-LCM-002
Design Recommendation: Use support packs (PSPAKs) for VMware Aria Suite Lifecycle to enable upgrading to later versions of VMware Aria Suite products.
Justification: Enables the upgrade of an existing VMware Aria Suite Lifecycle to permit later versions of VMware Aria Suite products without an associated VMware Cloud Foundation upgrade. See VMware Knowledge Base article 88829.
Implication: None.

VCF-VASL-RCMD-SEC-001
Design Recommendation: Enable integration between VMware Aria Suite Lifecycle and your corporate identity source by using the Workspace ONE Access instance.
Justification:
n Enables authentication to VMware Aria Suite Lifecycle by using your corporate identity source.
n Enables authorization through the assignment of organization and cloud services roles to enterprise users and groups defined in your corporate identity source.
Implication: You must deploy and configure Workspace ONE Access to establish the integration between VMware Aria Suite Lifecycle and your corporate identity sources.

VCF-VASL-RCMD-SEC-002
Design Recommendation: Create corresponding security groups in your corporate directory services for the VMware Aria Suite Lifecycle roles: VCF, Content Release Manager, and Content Developer.
Justification: Streamlines the management of VMware Aria Suite Lifecycle roles for users.
Implication:
n You must create the security groups outside of the SDDC stack.
n You must set the desired directory synchronization interval in Workspace ONE Access to ensure that changes are available within a reasonable period.

11 Workspace ONE Access Design for VMware Cloud Foundation
Workspace ONE Access in VMware Cloud Foundation mode provides identity and access
management services to specific components in the SDDC, such as VMware Aria Suite.

Workspace ONE Access provides the following capabilities:

n Directory integration to authenticate users against an identity provider (IdP), such as Active
Directory or LDAP.

n Multiple authentication methods.

n Access policies that consist of rules to specify criteria that users must meet to authenticate.

The Workspace ONE Access instance that is integrated with VMware Aria Suite Lifecycle
provides identity and access management services to VMware Aria Suite solutions that either run
in a VMware Cloud Foundation instance or must be available across VMware Cloud Foundation
instances.

For identity management design for a VMware Aria Suite product, see VMware Cloud Foundation Validated Solutions.

For identity and access management for components other than VMware Aria Suite, such as
NSX, you can deploy a standalone Workspace ONE Access instance. See Identity and Access
Management for VMware Cloud Foundation.

Read the following topics next:

n Logical Design for Workspace ONE Access

n Sizing Considerations for Workspace ONE Access for VMware Cloud Foundation

n Network Design for Workspace ONE Access

n Integration Design for Workspace ONE Access with VMware Cloud Foundation

n Deployment Model for Workspace ONE Access

n Workspace ONE Access Design Requirements and Recommendations for VMware Cloud
Foundation


Logical Design for Workspace ONE Access


To provide identity and access management services to supported SDDC components, such
as VMware Aria Suite components, this design uses a Workspace ONE Access instance that is
deployed on an NSX network segment.


Figure 11-1. Logical Design for Standard Workspace ONE Access

[Figure shows a single-node (standard) Workspace ONE Access instance in VCF Instance A with user interface and REST API access, an identity provider connection to directory services such as Active Directory or LDAP, supporting components (Postgres), and supporting infrastructure (shared storage, DNS, NTP, SMTP). It serves cross- and local-instance solutions such as VMware Aria Suite Lifecycle and VMware Aria Suite components.]


Table 11-1. Logical Components of Standard Workspace ONE Access

VMware Cloud Foundation Instances with a Single Availability Zone:
n A single-node Workspace ONE Access instance deployed on an overlay-backed (recommended) or VLAN-backed NSX segment.
n SDDC solutions that are portable across VMware Cloud Foundation instances are integrated with the Workspace ONE Access instance in the first VMware Cloud Foundation instance.

VMware Cloud Foundation Instances with Multiple Availability Zones:
n A single-node Workspace ONE Access instance deployed on an overlay-backed (recommended) or VLAN-backed NSX segment.
n SDDC solutions that are portable across VMware Cloud Foundation instances are integrated with the Workspace ONE Access instance in the first VMware Cloud Foundation instance.
n A should-run-on-hosts-in-group vSphere DRS rule ensures that, under normal operating conditions, the Workspace ONE Access node runs on a management ESXi host in the first availability zone.


Figure 11-2. Logical Design for Clustered Workspace ONE Access

[Figure shows a three-node Workspace ONE Access cluster (one primary and two secondary nodes) in VCF Instance A behind an NSX load balancer, with user interface and REST API access, an identity provider connection to directory services such as Active Directory or LDAP, supporting components (Postgres), and supporting infrastructure (shared storage, DNS, NTP, SMTP). It serves cross- and local-instance solutions such as VMware Aria Suite Lifecycle and VMware Aria Suite components.]


Table 11-2. Logical Components of Clustered Workspace ONE Access

VMware Cloud Foundation Instances with a Single Availability Zone:
n A three-node Workspace ONE Access cluster behind an NSX load balancer, deployed on an overlay-backed (recommended) or VLAN-backed NSX segment in the first VMware Cloud Foundation instance.
n All Workspace ONE Access services and databases are configured for high availability using a native cluster configuration. SDDC solutions that are portable across VMware Cloud Foundation instances are integrated with this Workspace ONE Access cluster.
n Each node of the three-node cluster is configured as a connector to any relevant identity providers.
n vSphere HA protects the Workspace ONE Access nodes.
n vSphere DRS anti-affinity rules ensure that the Workspace ONE Access nodes run on different ESXi hosts.
n An additional single-node Workspace ONE Access instance is deployed on an overlay-backed (recommended) or VLAN-backed NSX segment in all other VMware Cloud Foundation instances.

VMware Cloud Foundation Instances with Multiple Availability Zones:
n A three-node Workspace ONE Access cluster behind an NSX load balancer, deployed on an overlay-backed (recommended) or VLAN-backed NSX segment.
n All Workspace ONE Access services and databases are configured for high availability using a native cluster configuration. SDDC solutions that are portable across VMware Cloud Foundation instances are integrated with this Workspace ONE Access cluster.
n Each node of the three-node cluster is configured as a connector to any relevant identity providers.
n vSphere HA protects the Workspace ONE Access nodes.
n A vSphere DRS anti-affinity rule ensures that the Workspace ONE Access nodes run on different ESXi hosts.
n A should-run-on-hosts-in-group vSphere DRS rule ensures that, under normal operating conditions, the Workspace ONE Access nodes run on management ESXi hosts in the first availability zone.
n An additional single-node Workspace ONE Access instance is deployed on an overlay-backed (recommended) or VLAN-backed NSX segment in all other VMware Cloud Foundation instances.

Sizing Considerations for Workspace ONE Access for VMware Cloud Foundation
When you deploy Workspace ONE Access, you select an appliance size that suits the scale of your environment. The option that you select determines the number of CPUs and the amount of memory of the appliance.

For detailed sizing based on the overall profile of the VMware Cloud Foundation instance you
plan to deploy, see VMware Cloud Foundation Planning and Preparation Workbook.

Table 11-3. Sizing Considerations for Workspace ONE Access

Extra Small: 3,000 users, 30 groups
Small: 5,000 users, 50 groups
Medium: 10,000 users, 100 groups (the minimum requirement for VMware Aria Automation)
Large: 25,000 users, 250 groups
Extra Large: 50,000 users, 500 groups
Extra Extra Large: 100,000 users, 1,000 groups

Network Design for Workspace ONE Access


For secure access to the UI and API of Workspace ONE Access, you deploy the nodes on an
overlay-backed or VLAN-backed NSX network segment.

Network Segment
This network design has the following features:

n All Workspace ONE Access components have routed access to the management VLAN
through the Tier-0 gateway in the NSX instance for the management domain.

n Routing to the management network and other external networks is dynamic and is based on
the Border Gateway Protocol (BGP).


Figure 11-3. Network Design for Standard Workspace ONE Access

[Figure shows a single Workspace ONE Access node (WSA Node 1) in VCF Instance A attached to the cross-instance NSX segment behind the cross-instance NSX Tier-1 gateway. The cross-instance NSX Tier-0 gateway connects the segment to the management VLANs, the management and VI workload domain vCenter Server instances, the physical upstream routers, the data center user directory, and the Internet/enterprise network.]


Figure 11-4. Network Design for Clustered Workspace ONE Access

[Figure shows the three-node Workspace ONE Access cluster (WSA Node 1, 2, and 3) in VCF Instance A attached to the cross-instance NSX segment behind an NSX load balancer on the cross-instance NSX Tier-1 gateway. The cross-instance NSX Tier-0 gateway connects the segment to the management VLANs, the management and VI workload domain vCenter Server instances, the physical upstream routers, the data center user directory, and the Internet/enterprise network.]

Load Balancing
A Workspace ONE Access cluster deployment requires a load balancer to manage connections
to the Workspace ONE Access services.

Load-balancing services are provided by NSX. During the deployment of the Workspace ONE
Access cluster or scale-out of a standard deployment, VMware Aria Suite Lifecycle and SDDC
Manager coordinate to automate the configuration of the NSX load balancer. The load balancer is
configured with the following settings:


Table 11-4. Clustered Workspace ONE Access Load Balancer Configuration

Service Monitor:
n Use the default intervals and timeouts: monitoring interval 3 seconds, idle timeout period 10 seconds, rise/fall count 3.
n HTTP request: GET method, HTTP request version 1.1, request URL /SAAS/API/1.0/REST/system/health/heartbeat.
n HTTP response: response code 200, response body OK.
n SSL configuration: server SSL enabled, client certificate set to the cross-instance Workspace ONE Access cluster certificate, SSL profile default-balanced-server-ssl-profile.

Server Pool:
n LEAST_CONNECTION algorithm.
n Set the SNAT translation mode to Auto Map for the pool.
n Static members: the host name as the member name, the node IP address, port 443, weight 1, state Enabled.
n Assign the service monitor above to the pool.

HTTP Application Profile:
n Timeout: 3,600 seconds (60 minutes).
n X-Forwarded-For: Insert.

Cookie Persistence Profile:
n Cookie name: JSESSIONID.
n Cookie mode: Rewrite.

Virtual Server:
n HTTP type: L7.
n Port: 443.
n IP: Workspace ONE Access cluster IP.
n Persistence: the cookie persistence profile above.
n Application profile: the HTTP application profile above.
n Server pool: the server pool above.
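When troubleshooting pool member state, you can issue the same health check that the service monitor uses against an individual node. The following minimal Python sketch does so; the node FQDN is a hypothetical placeholder, and certificate validation is skipped for brevity.

```python
# Minimal sketch: issue the same health check the NSX service monitor
# uses against a Workspace ONE Access node. The node FQDN is a
# hypothetical placeholder.
import requests

NODE = "xint-wsa01a.example.com"

resp = requests.get(
    f"https://{NODE}/SAAS/API/1.0/REST/system/health/heartbeat",
    timeout=10,
    verify=False,  # lab-only; the monitor presents a client certificate instead
)
# A healthy node returns HTTP 200 with the body "OK".
print(resp.status_code, resp.text)
```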

Integration Design for Workspace ONE Access with VMware Cloud Foundation
You integrate supported SDDC components with the Workspace ONE Access cluster to enable
authentication through the identity and access management services.

After the integration, information security and access control configurations for the integrated
SDDC products can be configured.

Table 11-5. Workspace ONE Access SDDC Integration

vCenter Server
Integration: Not Supported
Considerations: For directory services, you must connect vCenter Server directly to Active Directory. See Identity and Access Management for VMware Cloud Foundation.

SDDC Manager
Integration: Not Supported
Considerations: SDDC Manager uses vCenter Single Sign-On. For directory services, you must connect vCenter Server directly to Active Directory.

NSX
Integration: Supported
Considerations: If you intend to scale out to an environment with multiple VMware Cloud Foundation instances, for example, for disaster recovery, you must deploy an additional standard instance of Workspace ONE Access in each VMware Cloud Foundation instance. The Workspace ONE Access instance that is leveraged by components protected across VMware Cloud Foundation instances might fail over between physical locations, which will impact the authentication to NSX in the first VMware Cloud Foundation instance. See Identity and Access Management for VMware Cloud Foundation.

VMware Aria Suite Lifecycle
Integration: Supported
Considerations: None.

See VMware Cloud Foundation Validated Solutions for the design for specific VMware Aria Suite
components including identity management.

Deployment Model for Workspace ONE Access


Workspace ONE Access is distributed as a virtual appliance in OVA format that you can deploy
and manage from VMware Aria Suite Lifecycle together with other VMware Aria Suite products.
The Workspace ONE Access appliance includes identity and access management services.

Deployment Type
You consider the deployment type, standard or cluster, according to the design objectives for
the availability and number of users that the system and integrated SDDC solutions must support.
You deploy Workspace ONE Access on the default management vSphere cluster.


Table 11-6. Topology Attributes of Workspace ONE Access

Standard (Recommended)
Description:
n Single node.
n NSX load balancer automatically deployed.
Benefits:
n Can be scaled out to a three-node cluster behind an NSX load balancer.
n Can leverage vSphere HA for recovery after a failure occurs.
n Consumes fewer resources.
Drawbacks:
n Does not provide high availability for Identity Provider connectors.

Cluster
Description:
n Three-node clustered deployment using the internal PostgreSQL database.
n NSX load balancer automatically deployed.
Benefits:
n Provides high availability for Identity Provider connectors.
Drawbacks:
n May require manual intervention after a failure occurs.
n Consumes additional resources.

Workspace ONE Access Design Requirements and Recommendations for VMware Cloud Foundation
Consider the placement, networking, sizing and high availability requirements for using
Workspace ONE Access for identity and access management of SDDC solutions on a standard
or stretched management cluster in VMware Cloud Foundation. Apply similar best practices for
having Workspace ONE Access operate in an optimal way.

Workspace ONE Access Design Requirements


You must meet the following design requirements in your Workspace ONE Access design for
VMware Cloud Foundation, considering standard or stretched clusters. For NSX Federation,
additional requirements exist.

Table 11-7. Workspace ONE Access Design Requirements for VMware Cloud Foundation

VCF-WSA-REQD-ENV-001
Design Requirement: Create a global environment in VMware Aria Suite Lifecycle to support the deployment of Workspace ONE Access.
Justification: A global environment is required by VMware Aria Suite Lifecycle to deploy Workspace ONE Access.
Implication: None.

VCF-WSA-REQD-SEC-001
Design Requirement: Import certificate authority-signed certificates to the Locker repository for Workspace ONE Access product life cycle operations.
Justification: You can reference and use certificate authority-signed certificates during product life cycle operations, such as deployment and certificate replacement.
Implication: When using the API, you must specify the Locker ID for the certificate to be used in the JSON payload.


VCF-WSA-REQD-CFG-001
Design Requirement: Deploy an appropriately sized Workspace ONE Access instance according to the deployment model you have selected by using VMware Aria Suite Lifecycle in VMware Cloud Foundation mode.
Justification: The Workspace ONE Access instance is managed by VMware Aria Suite Lifecycle and imported into the SDDC Manager inventory.
Implication: None.

VCF-WSA-REQD-CFG-002
Design Requirement: Place the Workspace ONE Access appliances on an overlay-backed or VLAN-backed NSX network segment.
Justification: Provides a consistent deployment model for management applications in an environment with a single or multiple VMware Cloud Foundation instances.
Implication: You must use an implementation in NSX to support this network configuration.

VCF-WSA-REQD-CFG-003
Design Requirement: Use the embedded PostgreSQL database with Workspace ONE Access.
Justification: Removes the need for external database services.
Implication: None.

VCF-WSA-REQD-CFG-004
Design Requirement: Add a VM group for Workspace ONE Access and set VM rules to restart the Workspace ONE Access VM group before any of the VMs that depend on it for authentication.
Justification: You can define the startup order of virtual machines regarding the service dependency. The startup order ensures that vSphere HA powers on the Workspace ONE Access virtual machines in an order that respects product dependencies.
Implication: None.

VCF-WSA-REQD-CFG-005
Design Requirement: Connect the Workspace ONE Access instance to a supported upstream Identity Provider.
Justification: You can integrate your enterprise directory with Workspace ONE Access to synchronize users and groups to the Workspace ONE Access identity and access management services.
Implication: None.


VCF-WSA-REQD-CFG-006
Design Requirement: If using clustered Workspace ONE Access, configure second and third native connectors that correspond to the second and third Workspace ONE Access cluster nodes to support the high availability of directory services access.
Justification: Adding the additional native connectors provides redundancy and improves performance by load-balancing authentication requests.
Implication: Each of the Workspace ONE Access cluster nodes must be joined to the Active Directory domain to use Active Directory with Integrated Windows Authentication with the native connector.

VCF-WSA-REQD-CFG-007
Design Requirement: If using clustered Workspace ONE Access, use the NSX load balancer that is configured by SDDC Manager on a dedicated Tier-1 gateway.
Justification: During the deployment of Workspace ONE Access by using VMware Aria Suite Lifecycle, SDDC Manager automates the configuration of an NSX load balancer for Workspace ONE Access to facilitate scale-out.
Implication: You must use the load balancer that is configured by SDDC Manager and the integration with VMware Aria Suite Lifecycle.

Table 11-8. Workspace ONE Access Design Requirements for Stretched Clusters in VMware Cloud Foundation

VCF-WSA-REQD-CFG-008
Design Requirement: Add the Workspace ONE Access appliances to the VM group for the first availability zone.
Justification: Ensures that, by default, the Workspace ONE Access cluster nodes are powered on a host in the first availability zone.
Implication:
n If the Workspace ONE Access instance is deployed after the creation of the stretched management cluster, you must add the appliances to the VM group manually.
n Clustered Workspace ONE Access might require manual intervention after a failure of the active availability zone occurs.

Table 11-9. Workspace ONE Access Design Requirements for NSX Federation in VMware Cloud Foundation

VCF-WSA-REQD-CFG-009
Design Requirement: Configure the DNS settings for Workspace ONE Access to use DNS servers in each VMware Cloud Foundation instance.
Justification: Improves resiliency if an outage of external services for a VMware Cloud Foundation instance occurs.
Implication: None.

VCF-WSA-REQD-CFG-010
Design Requirement: Configure the NTP settings on Workspace ONE Access cluster nodes to use NTP servers in each VMware Cloud Foundation instance.
Justification: Improves resiliency if an outage of external services for a VMware Cloud Foundation instance occurs.
Implication: If you scale from a deployment with a single VMware Cloud Foundation instance to one with multiple VMware Cloud Foundation instances, the NTP settings on Workspace ONE Access must be updated.

Workspace ONE Access Design Recommendations


In your Workspace ONE Access design for VMware Cloud Foundation, you can apply certain
best practices.


Table 11-10. Workspace ONE Access Design Recommendations for VMware Cloud Foundation

VCF-WSA-RCMD-CFG-001
Design Recommendation: Protect all Workspace ONE Access nodes using vSphere HA.
Justification: Supports high availability for Workspace ONE Access.
Implication: None for standard deployments. Clustered Workspace ONE Access deployments might require intervention if an ESXi host failure occurs.

VCF-WSA-RCMD-CFG-002
Design Recommendation: When using Active Directory as an Identity Provider, use Active Directory over LDAP as the Directory Service connection option.
Justification: The native (embedded) Workspace ONE Access connector binds to Active Directory over LDAP using a standard bind authentication.
Implication:
n In a multi-domain forest, where the Workspace ONE Access instance connects to a child domain, Active Directory security groups must have global scope. Therefore, members added to the Active Directory global security group must reside within the same Active Directory domain.
n If authentication to more than one Active Directory domain is required, additional Workspace ONE Access directories are required.

VCF-WSA-RCMD-CFG-003
Design Recommendation: When using Active Directory as an Identity Provider, use an Active Directory user account with a minimum of read-only access to Base DNs for users and groups as the service account for the Active Directory bind.
Justification: Provides the following access control features:
n Workspace ONE Access connects to the Active Directory with the minimum set of required permissions to bind and query the directory.
n You can introduce improved accountability in tracking request-response interactions between Workspace ONE Access and Active Directory.
Implication:
n You must manage the password life cycle of this account.
n If authentication to more than one Active Directory domain is required, additional accounts are required for the Workspace ONE Access connector to bind to each Active Directory domain over LDAP.

VMware by Broadcom 188


VMware Cloud Foundation Design Guide

Table 11-10. Workspace ONE Access Design Recommendations for VMware Cloud Foundation
(continued)

Recommendation ID Design Recommendation Justification Implication

VCF-WSA-RCMD-CFG-004 Configure the directory n Limits the number You must manage
synchronization to of replicated groups the groups from
synchronize only groups required for each your enterprise
required for the integrated product. directory selected for
SDDC solutions. n Reduces the replication synchronization to
interval for group Workspace ONE Access.
information.

VCF-WSA-RCMD-CFG-005 Activate the When activated, members None.


synchronization of of the enterprise directory
enterprise directory group groups are synchronized
members when a group is to the Workspace ONE
added to the Workspace Access directory when
ONE Access directory. groups are added. When
deactivated, group names
are synchronized to the
directory, but members
of the group are not
synchronized until the
group is entitled to an
application or the group
name is added to an access
policy.

VCF-WSA-RCMD-CFG-006 Enable Workspace ONE Allows Workspace ONE Changes to group


Access to synchronize Access to update and membership are not
nested group members by cache the membership of reflected until the next
default. groups without querying synchronization event.
your enterprise directory.

VCF-WSA-RCMD-CFG-007 Add a filter to the Limits the number of To ensure that replicated
Workspace ONE Access replicated users for user accounts are managed
directory settings to Workspace ONE Access within the maximums, you
exclude users from the within the maximum scale. must define a filtering
directory replication. schema that works for your
organization based on your
directory attributes.

VMware by Broadcom 189


VMware Cloud Foundation Design Guide

Table 11-10. Workspace ONE Access Design Recommendations for VMware Cloud Foundation
(continued)

Recommendation ID Design Recommendation Justification Implication

VCF-WSA-RCMD-CFG-008 Configure the mapped You can configure the User accounts in your
attributes included when minimum required and organization's enterprise
a user is added to the extended user attributes directory must have
Workspace ONE Access to synchronize directory the following required
directory. user accounts for the attributes mapped:
Workspace ONE Access n firstname, for example,
to be used as an givenname for Active
authentication source for Directory
cross-instance VMware
n lastName, for example,
Aria Suite solutions.
sn for Active Directory

n email, for example,


mail for Active
Directory
n userName, for
example,sAMAccountNam
e for Active Directory

n If you require users


to sign in with
an alternate unique
identifier, for example,
userPrincipalName, you
must map the
attribute and update
the identity and
access management
preferences.

VCF-WSA-RCMD-CFG-009 Configure the Workspace Ensures that any changes Schedule the
ONE Access directory to group memberships in synchronization interval
synchronization frequency the corporate directory to be longer than the
to a reoccurring schedule, are available for integrated time to synchronize from
for example, 15 minutes. solutions in a timely the enterprise directory.
manner. If users and groups
are being synchronized
to Workspace ONE
Access when the
next synchronization is
scheduled, the new
synchronization starts
immediately after the end
of the previous iteration.
With this schedule, the
process is continuous.

VMware by Broadcom 190


VMware Cloud Foundation Design Guide

Table 11-10. Workspace ONE Access Design Recommendations for VMware Cloud Foundation
(continued)

Recommendation ID Design Recommendation Justification Implication

VCF-WSA-RCMD-SEC-001 Create corresponding Streamlines the n You must set the


security groups in management of Workspace appropriate directory
your corporate directory ONE Access roles to users. synchronization interval
services for these in Workspace ONE
Workspace ONE Access Access to ensure that
roles: changes are available
n Super Admin within a reasonable
period.
n Directory Admins
n You must create the
n ReadOnly Admin
security group outside
of the SDDC stack.

VCF-WSA-RCMD-SEC-002 Configure a password You can set a policy for You must set the policy
policy for Workspace Workspace ONE Access in accordance with your
ONE Access local local directory users that organization policies and
directory users, admin and addresses your corporate regulatory standards, as
configadmin. policies and regulatory applicable.
standards. You must apply the
The password policy is password policy on the
applicable only to the Workspace ONE Access
local directory users and cluster nodes.
does not impact your
organization directory.

VMware by Broadcom 191


Life Cycle Management Design for VMware Cloud Foundation
In a VMware Cloud Foundation instance, you use SDDC Manager for life cycle management of
the management components in the entire instance except for NSX Global Manager and VMware
Aria Suite Lifecycle. VMware Aria Suite Lifecycle manages the life cycle of the components that it
deploys.

Life cycle management of a VMware Cloud Foundation instance is the process of performing
patch updates or upgrades to the underlying management components.

Table 12-1. Life Cycle Management for VMware Cloud Foundation

SDDC Manager
Management Domain: SDDC Manager performs its own life cycle management.
VI Workload Domain: Not applicable.

NSX Local Manager
Management Domain and VI Workload Domain: SDDC Manager uses the NSX upgrade coordinator service in the NSX Local Manager.

NSX Edges
Management Domain and VI Workload Domain: SDDC Manager uses the NSX upgrade coordinator service in NSX Manager.

NSX Global Manager
Management Domain and VI Workload Domain: You manually use the NSX upgrade coordinator service in the NSX Global Manager.

vCenter Server
Management Domain and VI Workload Domain: You use SDDC Manager for life cycle management of all vCenter Server instances.

ESXi
Management Domain and VI Workload Domain: SDDC Manager uses either vSphere Lifecycle Manager baselines and baseline groups or vSphere Lifecycle Manager images to update and upgrade the ESXi hosts. Custom vendor ISOs are supported and might be required depending on the ESXi hardware in use.

VMware Aria Suite Lifecycle
Management Domain: VMware Aria Suite Lifecycle performs its own life cycle management.
VI Workload Domain: Not applicable.


VMware Cloud Foundation Life Cycle Management Requirements

Consider the design requirements for automated and centralized life cycle management in the context of the entire VMware Cloud Foundation environment.


Table 12-2. Life Cycle Management Design Requirements for VMware Cloud Foundation

VCF-LCM-REQD-001
Design Requirement: Use SDDC Manager to perform the life cycle management of the following components:
- SDDC Manager
- NSX Manager
- NSX Edges
- vCenter Server
- ESXi
Justification: Because the deployment scope of SDDC Manager covers the full VMware Cloud Foundation stack, SDDC Manager performs patching, update, or upgrade of these components across all workload domains.
Implication: The operations team must understand and be aware of the impact of a patch, update, or upgrade operation by using SDDC Manager.

VCF-LCM-REQD-002
Design Requirement: Use VMware Aria Suite Lifecycle to manage the life cycle of the following components:
- VMware Aria Suite Lifecycle
- Workspace ONE Access
Justification: VMware Aria Suite Lifecycle automates the life cycle of VMware Aria Suite Lifecycle and Workspace ONE Access.
Implication: You must deploy VMware Aria Suite Lifecycle by using SDDC Manager. You must manually apply Workspace ONE Access patches, updates, and hotfixes, because they are not generally managed by VMware Aria Suite Lifecycle.

VCF-LCM-RCMD-001
Design Recommendation: Use vSphere Lifecycle Manager images to manage the life cycle of vSphere clusters.
Justification:
- With vSphere Lifecycle Manager images, firmware updates are carried out through firmware and driver add-ons, which you add to the image you use to manage a cluster.
- You can check the hardware compatibility of the hosts in a cluster against the VMware Compatibility Guide.
- You can validate a vSphere Lifecycle Manager image to check if it applies to all hosts in the cluster. You can also perform a remediation pre-check.
Implication:
- Updating the firmware with images requires an OEM-provided hardware support manager plug-in, which integrates with vSphere Lifecycle Manager.
- An updated vSAN Hardware Compatibility List (vSAN HCL) is required during bring-up.


Table 12-3. Life Cycle Management Design Requirements for NSX Federation in VMware Cloud Foundation

VCF-LCM-REQD-003
Design Requirement: Use the upgrade coordinator in NSX to perform life cycle management on the NSX Global Manager appliances.
Justification: The version of SDDC Manager in this design is not currently capable of life cycle operations (patching, update, or upgrade) for NSX Global Manager.
Implication:
- You must explicitly plan upgrades of the NSX Global Manager nodes. An upgrade of the NSX Global Manager nodes might require a cascading upgrade of the NSX Local Manager nodes and underlying SDDC Manager infrastructure before upgrading the NSX Global Manager nodes.
- You must always align the version of the NSX Global Manager nodes with the rest of the SDDC stack in VMware Cloud Foundation.

VCF-LCM-REQD-004
Design Requirement: Establish an operations practice to ensure that, prior to the upgrade of any workload domain, the impact of any version upgrades is evaluated in relation to the need to upgrade NSX Global Manager.
Justification: The versions of NSX Global Manager and NSX Local Manager nodes must be compatible with each other. Because SDDC Manager does not provide life cycle operations (patching, update, or upgrade) for the NSX Global Manager nodes, an upgrade to an unsupported version cannot be prevented.
Implication: The administrator must establish and follow an operations practice, by using a runbook or automated process, to ensure a fully supported and compliant bill of materials prior to any upgrade operation.

VCF-LCM-REQD-005
Design Requirement: Establish an operations practice to ensure that, prior to the upgrade of the NSX Global Manager, the impact of any version change is evaluated against the existing NSX Local Manager nodes and workload domains.
Justification: The versions of NSX Global Manager and NSX Local Manager nodes must be compatible with each other. Because SDDC Manager does not provide life cycle operations (patching, update, or upgrade) for the NSX Global Manager nodes, an upgrade to an unsupported version cannot be prevented.
Implication: The administrator must establish and follow an operations practice, by using a runbook or automated process, to ensure a fully supported and compliant bill of materials prior to any upgrade operation.


Logging and Monitoring Design for VMware Cloud Foundation

By using VMware or third-party components, collect log data from all SDDC management components in your VMware Cloud Foundation environment in a central place. You can use VMware Aria Operations for Logs as the central platform because of its native integration with VMware Aria Suite Lifecycle.

After you deploy VMware Aria Operations for Logs by using VMware Aria Suite Lifecycle in VMware Cloud Foundation mode, SDDC Manager configures VMware Aria Suite Lifecycle logging to VMware Aria Operations for Logs over the log ingestion API. For information about on-premises VMware Aria Operations for Logs in VMware Cloud Foundation, see Intelligent Logging and Analytics for VMware Cloud Foundation.



Information Security Design for VMware Cloud Foundation

You design the management of access controls, certificates, and accounts for VMware Cloud Foundation according to the requirements of your organization.

Read the following topics next:

- Access Management for VMware Cloud Foundation
- Account Management Design for VMware Cloud Foundation
- Certificate Management for VMware Cloud Foundation

Access Management for VMware Cloud Foundation


You design access management for VMware Cloud Foundation according to industry standards
and the requirements of your organization.

Component: SDDC Manager
Access Method: UI, API, SSH
Additional Information: SSH is active by default. root user access is deactivated.

Component: NSX Local Manager
Access Method: UI, API, SSH
Additional Information: SSH is deactivated by default.

Component: NSX Edges
Access Method: API, SSH
Additional Information: SSH is deactivated by default.

Component: NSX Global Manager
Access Method: UI, API, SSH
Additional Information: The SSH setting is defined during deployment.

Component: vCenter Server
Access Method: UI, API, SSH, VAMI
Additional Information: SSH is active by default.

Component: ESXi
Access Method: Direct Console User Interface (DCUI), ESXi Shell, SSH, VMware Host Client
Additional Information: SSH and the ESXi Shell are deactivated by default.

Component: VMware Aria Suite Lifecycle
Access Method: UI, API, SSH
Additional Information: SSH is active by default.

Component: Workspace ONE Access
Access Method: UI, API, SSH
Additional Information: SSH is active by default.

Account Management Design for VMware Cloud Foundation

You design account management for VMware Cloud Foundation according to industry standards and the requirements of your organization.

Password Management Methods

SDDC Manager manages the life cycle of passwords for the components that are part of the VMware Cloud Foundation instance. Multiple methods for managing the password life cycle are supported.

Table 14-1. Password Management Methods in VMware Cloud Foundation

Rotate: Update one or more accounts with an auto-generated password.
Update: Update the password for a single account with a manually entered password.
Remediate: Reconcile a single account with a password that has been set manually at the component.
Schedule: Schedule auto-rotation for one or more selected accounts.
Manual: Update a password manually, directly in the component.

Account and Password Management

VMware Cloud Foundation comprises multiple types of interactive, local, and service accounts. Each account has different attributes and can be managed in the ways shown in the following table.

For more information on password complexity, account lockout, or integration with additional Identity Providers, refer to Identity and Access Management for VMware Cloud Foundation.

Table 14-2. Account and Password Management in VMware Cloud Foundation

SDDC Manager

admin@local
Password Management: Manual, by using the SDDC Manager API. Default Expiry: Never.
Additional Information: Local appliance account. API access (break-glass account).

vcf
Password Management: Manual, by using the OS. Default Expiry: 365 days.
Additional Information: Local appliance account. OS level access.

root
Password Management: Manual, by using the OS. Default Expiry: 90 days.
Additional Information: Local appliance account. OS level access.

backup
Password Management: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default Expiry: 365 days.
Additional Information: Local appliance account. OS level access.

[email protected]
Password Management: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default Expiry: 90 days.
Additional Information: vCenter Single Sign-On account. Application and API access. An additional VMware Cloud Foundation admin account is required to perform manual password rotation.

NSX Local Manager

admin
Password Management: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default Expiry: 90 days.
Additional Information: Local appliance account. OS level, API, and application access.

root
Password Management: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default Expiry: 90 days.
Additional Information: Local appliance account. OS level access.

audit
Password Management: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default Expiry: 90 days.
Additional Information: Local appliance account. OS level access. Read-only application level access.

NSX Edges

admin
Password Management: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default Expiry: 90 days.
Additional Information: Local appliance account. OS level, API, and application access.

root
Password Management: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default Expiry: 90 days.
Additional Information: Local appliance account. OS level access.

audit
Password Management: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default Expiry: 90 days.
Additional Information: Local appliance account. OS level access. Read-only application level access.

NSX Global Manager

admin
Password Management: Manual, by using the NSX Global Manager UI or API. Default Expiry: 90 days.
Additional Information: Local appliance account. OS level, API, and application access.

root
Password Management: Manual, by using each NSX Global Manager appliance. Default Expiry: 90 days.
Additional Information: Local appliance account. OS level access.

audit
Password Management: Manual, by using the NSX Global Manager UI or API. Default Expiry: 90 days.
Additional Information: Local appliance account. OS level access. Read-only application level access.

vCenter Server

root
Password Management: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default Expiry: 90 days.
Additional Information: Local appliance account. OS level access. VAMI access.

[email protected]
Password Management: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default Expiry: 90 days.
Additional Information: vCenter Single Sign-On account. Application and API access. Relevant to isolated workload domains.

svc-sddc-manager-hostname-vcenter-server-[email protected]
Password Management: System managed. Automatically rotated every 30 days by default. Default Expiry: None.
Additional Information: Service account between SDDC Manager and vCenter Server.

svc-nsx-manager-hostname-vcenter-server-[email protected]
Password Management: System managed. Automatically rotated every 30 days by default. Default Expiry: None.
Additional Information: Service account between NSX Manager and vCenter Server.

svc-vrslcm-hostname-vcenter-server-[email protected]
Password Management: System managed. Automatically rotated every 30 days by default. Default Expiry: None.
Additional Information: Service account between VMware Aria Suite Lifecycle and vCenter Server.

ESXi

root
Password Management: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default Expiry: 99999 days (never).
Additional Information: Manual.

svc-vcf-esxi-hostname
Password Management: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default Expiry: 99999 days (never).
Additional Information: Service account between SDDC Manager and the ESXi host.

VMware Aria Suite Lifecycle

vcfadmin@local
Password Management: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default Expiry: Never.
Additional Information: API and application access.

root
Password Management: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default Expiry: 365 days.
Additional Information: Local appliance account. OS level access.

Workspace ONE Access

root
Password Management: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default Expiry: 60 days.
Additional Information: Local appliance account. OS level access.

sshuser
Password Management: Managed by VMware Aria Suite Lifecycle. Default Expiry: 60 days.
Additional Information: Local appliance account. OS level access.

admin (port 8443)
Password Management: Managed by VMware Aria Suite Lifecycle.
Additional Information: System Admin.

admin (port 443)
Password Management: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default Expiry: Never.
Additional Information: Default application administrator.

configadmin
Password Management: You must use both Workspace ONE Access and VMware Aria Suite Lifecycle to manage the password rotation schedule of the configadmin user. Default Expiry: Never.
Additional Information: Application configuration administrator.
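To verify the current expiry attributes of a local appliance account before planning rotation, you can query the guest OS directly. The following is a minimal sketch, assuming console or SSH access to an appliance with a Linux-based guest OS and root privileges; the account name is an example only:

    # Show password aging for the root account (last change, expiry, max days)
    chage -l root

    # List last-change and max-days values for all local accounts
    awk -F: '{ print $1, "last-change:", $3, "max-days:", $5 }' /etc/shadow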

Account Management Design Recommendations


In your account management design, you can apply certain best practices.


Table 14-3. Design Requirements for Account and Password Management for VMware Cloud Foundation

VCF-ACTMGT-REQD-SEC-001
Design Requirement: Enable scheduled password rotation in SDDC Manager for all accounts that support scheduled rotation.
Justification: Increases the security posture of your SDDC. Simplifies password management across your SDDC management components.
Implication: You must retrieve new passwords by using the API if you must use accounts interactively.

VCF-ACTMGT-REQD-SEC-002
Design Requirement: Establish an operational practice to rotate passwords by using SDDC Manager on components that do not support scheduled rotation in SDDC Manager.
Justification: Rotates passwords and automatically remediates the SDDC Manager database for those user accounts.
Implication: None.

VCF-ACTMGT-REQD-SEC-003
Design Requirement: Establish an operational practice to manually rotate passwords on components that cannot be rotated by SDDC Manager.
Justification: Maintains password policies across components that are not handled by SDDC Manager password management.
Implication: None.
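Where rotated accounts must be used interactively, the current passwords can be looked up through the SDDC Manager API. The following is a hedged sketch using the public token and credentials endpoints; the SDDC Manager FQDN, user, and resource name are placeholders, jq is assumed for JSON parsing, and the exact response fields can vary between VMware Cloud Foundation versions:

    # Request an API access token from SDDC Manager (placeholder FQDN and user)
    TOKEN=$(curl -sk "https://sfo-vcf01.rainpole.io/v1/tokens" \
      -H "Content-Type: application/json" \
      -d '{"username": "[email protected]", "password": "example_password"}' \
      | jq -r '.accessToken')

    # Retrieve managed credentials, filtered to a single resource
    curl -sk "https://sfo-vcf01.rainpole.io/v1/credentials?resourceName=sfo-m01-vc01.sfo.rainpole.io" \
      -H "Authorization: Bearer $TOKEN" | jq '.elements[] | {username, credentialType}'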

Certificate Management for VMware Cloud Foundation

You design certificate management for VMware Cloud Foundation according to industry standards and the requirements of your organization.

Access to all management component interfaces must be over a Secure Sockets Layer (SSL) connection. During deployment, each component is assigned a certificate from a default signing CA. To provide secure access to each component, replace the default certificate with a trusted enterprise CA-signed certificate.

Table 14-4. Certificate Management in VMware Cloud Foundation

SDDC Manager
Default Signing CA: Management domain VMCA
Life Cycle for Enterprise CA-Signed Certificates: Using SDDC Manager

NSX Local Manager
Default Signing CA: Management domain VMCA
Life Cycle for Enterprise CA-Signed Certificates: Using SDDC Manager

NSX Edges
Default Signing CA: Not applicable
Life Cycle for Enterprise CA-Signed Certificates: Not applicable

NSX Global Manager
Default Signing CA: Self-signed
Life Cycle for Enterprise CA-Signed Certificates: Manual

vCenter Server
Default Signing CA: Local workload domain VMCA
Life Cycle for Enterprise CA-Signed Certificates: Using SDDC Manager

ESXi
Default Signing CA: Local workload domain VMCA
Life Cycle for Enterprise CA-Signed Certificates: Manual*

VMware Aria Suite Lifecycle
Default Signing CA: Management domain VMCA
Life Cycle for Enterprise CA-Signed Certificates: Using SDDC Manager

Note * To use enterprise CA-signed certificates with ESXi, the initial deployment of VMware Cloud Foundation must be done by using the API, providing the trusted root certificate.

Table 14-5. Certificate Management Design Recommendations for VMware Cloud Foundation

VCF-SDDC-RCMD-SEC-001
Design Recommendation: Replace the default VMCA-signed or self-signed certificates on all management virtual appliances with a certificate that is signed by an internal certificate authority.
Justification: Ensures that the communication to all management components is secure.
Implication: Replacing the default certificates with trusted CA-signed certificates from a certificate authority might increase the deployment preparation time because you must generate and submit certificate requests.

VCF-SDDC-RCMD-SEC-002
Design Recommendation: Use a SHA-2 algorithm or higher for signed certificates.
Justification: The SHA-1 algorithm is considered less secure and has been deprecated.
Implication: Not all certificate authorities support SHA-2 or higher.

VCF-SDDC-RCMD-SEC-003
Design Recommendation: Perform SSL certificate life cycle management for all management appliances by using SDDC Manager.
Justification: SDDC Manager supports automated SSL certificate life cycle management rather than requiring a series of manual steps.
Implication: Certificate management for NSX Global Manager instances must be done manually.
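One way to confirm that a replaced certificate meets the SHA-2 recommendation is to inspect the certificate presented by the component endpoint. A minimal sketch, assuming openssl is available on a management jump host and using a placeholder FQDN:

    # Retrieve the certificate presented by a management component and
    # print its signature algorithm (expect sha256WithRSAEncryption or stronger)
    echo | openssl s_client -connect sfo-vcf01.rainpole.io:443 \
      -servername sfo-vcf01.rainpole.io 2>/dev/null \
      | openssl x509 -noout -text | grep "Signature Algorithm"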


Appendix: Design Elements for VMware Cloud Foundation

The appendix aggregates all design requirements and recommendations in the design guidance for VMware Cloud Foundation. You can use this list for reference related to the end state of your platform and potentially to track your level of adherence to the design and any justification for deviation.

Read the following topics next:

- Architecture Design Elements for VMware Cloud Foundation
- Workload Domain Design Elements for VMware Cloud Foundation
- External Services Design Elements for VMware Cloud Foundation
- Physical Network Design Elements for VMware Cloud Foundation
- vSAN Design Elements for VMware Cloud Foundation
- ESXi Design Elements for VMware Cloud Foundation
- vCenter Server Design Elements for VMware Cloud Foundation
- vSphere Cluster Design Elements for VMware Cloud Foundation
- vSphere Networking Design Elements for VMware Cloud Foundation
- NSX Design Elements for VMware Cloud Foundation
- SDDC Manager Design Elements for VMware Cloud Foundation
- VMware Aria Suite Lifecycle Design Elements for VMware Cloud Foundation
- Workspace ONE Access Design Elements for VMware Cloud Foundation
- Life Cycle Management Design Elements for VMware Cloud Foundation
- Information Security Design Elements for VMware Cloud Foundation

Architecture Design Elements for VMware Cloud Foundation


Use this list of requirements for reference related to using the standard or consolidated
architecture of VMware Cloud Foundation.

For full design details, see Architecture Models and Workload Domain Types in VMware Cloud
Foundation.


Table 15-1. Architecture Model Recommendations for VMware Cloud Foundation

VCF-ARCH-RCMD-CFG-001
Design Recommendation: Use the standard architecture model of VMware Cloud Foundation.
Justification: Aligns with the VMware best practice of separating management workloads from customer workloads. Provides better long-term flexibility and expansion options.
Implication: Requires additional hardware.

Workload Domain Design Elements for VMware Cloud Foundation

Use this list of requirements and recommendations for reference related to the types of virtual infrastructure (VI) workload domains in a VMware Cloud Foundation environment.

For full design details, see Workload Domain Types.

Table 15-2. Workload Domain Recommendations for VMware Cloud Foundation

VCF-WLD-RCMD-CFG-001
Design Recommendation: Use VI workload domains or isolated VI workload domains for customer workloads.
Justification: Aligns with the VMware best practice of separating management workloads from customer workloads. Provides better long-term flexibility and expansion options.
Implication: Requires additional hardware.

External Services Design Elements for VMware Cloud Foundation

Use this list of requirements for reference related to the configuration of external infrastructure services in an environment with a single or multiple VMware Cloud Foundation instances. The requirements define IP address allocation, name resolution, and time synchronization.

For full design details, see Chapter 4 External Services Design for VMware Cloud Foundation.

Table 15-3. External Services Design Requirements for VMware Cloud Foundation

VCF-EXT-REQD-NET-001
Design Requirement: Allocate statically assigned IP addresses and host names for all workload domain components.
Justification: Ensures stability across the VMware Cloud Foundation instance, and makes it simpler to maintain, track, and implement a DNS configuration.
Implication: You must provide precise IP address management.

VCF-EXT-REQD-NET-002
Design Requirement: Configure forward and reverse DNS records for all workload domain components.
Justification: Ensures that all components are accessible by using a fully qualified domain name instead of by using IP addresses only. It is easier to remember and connect to components across the VMware Cloud Foundation instance.
Implication: You must provide DNS records for each component.

VCF-EXT-REQD-NET-003
Design Requirement: Configure time synchronization by using an internal NTP time source for all workload domain components.
Justification: Ensures that all components are synchronized with a valid time source.
Implication: An operational NTP service must be available in the environment.

VCF-EXT-REQD-NET-004
Design Requirement: Set the NTP service for all workload domain components to start automatically.
Justification: Ensures that the NTP service remains synchronized after you restart a component.
Implication: None.
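You can spot-check these requirements from a management jump host or an ESXi shell before bring-up. A minimal sketch with placeholder host names and addresses; dig on the jump host and an ESXi shell session are assumed to be available:

    # Forward and reverse DNS resolution must agree for each component
    dig +short sfo-m01-vc01.sfo.rainpole.io    # expect the allocated IP, e.g. 172.16.11.62
    dig +short -x 172.16.11.62                 # expect sfo-m01-vc01.sfo.rainpole.io.

    # On an ESXi host, confirm NTP is configured and starts with the host
    esxcli system ntp get
    chkconfig --list ntpd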

Physical Network Design Elements for VMware Cloud Foundation

Use this design decision list for reference related to the configuration of the physical network in an environment with a single or multiple VMware Cloud Foundation instances. The design also considers if an instance contains a single or multiple availability zones.

For full design details, see Chapter 5 Physical Network Infrastructure Design for VMware Cloud Foundation.

Table 15-4. Leaf-Spine Physical Network Design Requirements for VMware Cloud Foundation

VCF-NET-REQD-CFG-001
Design Requirement: Do not use EtherChannel (LAG, LACP, or vPC) configuration for ESXi host uplinks.
Justification:
- Simplifies configuration of top of rack switches.
- Teaming options available with vSphere Distributed Switch provide load balancing and failover.
- EtherChannel implementations might have vendor-specific limitations.
Implication: None.

VCF-NET-REQD-CFG-002
Design Requirement: Use VLANs to separate physical network functions.
Justification:
- Supports physical network connectivity without requiring many NICs.
- Isolates the different network functions in the SDDC so that you can have differentiated services and prioritized traffic as needed.
Implication: Requires uniform configuration and presentation on all the trunks that are made available to the ESXi hosts.

VCF-NET-REQD-CFG-003
Design Requirement: Configure the VLANs as members of a 802.1Q trunk.
Justification: All VLANs become available on the same physical network adapters on the ESXi hosts.
Implication: Optionally, the management VLAN can act as the native VLAN.

VCF-NET-REQD-CFG-004
Design Requirement: Set the MTU size to at least 1,700 bytes (9,000 bytes recommended for jumbo frames) on the physical switch ports, vSphere Distributed Switches, vSphere Distributed Switch port groups, and N-VDS switches that support the following traffic types:
- Overlay (Geneve)
- vSAN
- vSphere vMotion
Justification:
- Improves traffic throughput.
- Supports Geneve by increasing the MTU size to a minimum of 1,600 bytes.
- Geneve is an extensible protocol. The MTU size might increase with future capabilities. While 1,600 bytes is sufficient, an MTU size of 1,700 bytes provides more room for increasing the Geneve MTU size without the need to change the MTU size of the physical infrastructure.
Implication: When adjusting the MTU packet size, you must also configure the entire network path (VMkernel network adapters, virtual switches, physical switches, and routers) to support the same MTU packet size. In an environment with multiple availability zones, the MTU must be configured on the entire network path between the zones.
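After setting the MTU end to end, you can validate the path from an ESXi host with a don't-fragment ping sized for the configured MTU. A minimal sketch; the VMkernel interface names and target addresses are placeholders, and 28 bytes of IP and ICMP headers are subtracted from the payload size:

    # Validate a 9,000-byte MTU path from a vSAN or vMotion VMkernel adapter
    # (-d sets the don't-fragment bit; 9000 - 28 bytes of headers = 8972)
    vmkping -I vmk2 -d -s 8972 172.16.13.102

    # Validate the 1,700-byte minimum for Geneve overlay traffic (1700 - 28 = 1672);
    # TEP VMkernel adapters live on a separate TCP/IP netstack
    vmkping ++netstack=vxlan -I vmk10 -d -s 1672 172.16.14.102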

Table 15-5. Leaf-Spine Physical Network Design Requirements for NSX Federation in VMware Cloud Foundation

VCF-NET-REQD-CFG-005
Design Requirement: Set the MTU size to at least 1,500 bytes (1,700 bytes preferred; 9,000 bytes recommended for jumbo frames) on the components of the physical network between the VMware Cloud Foundation instances for the following traffic types:
- NSX Edge RTEP
Justification:
- Jumbo frames are not required between VMware Cloud Foundation instances. However, increased MTU improves traffic throughput.
- Increasing the RTEP MTU to 1,700 bytes minimizes fragmentation for standard-size workload packets between VMware Cloud Foundation instances.
Implication: When adjusting the MTU packet size, you must also configure the entire network path, that is, virtual interfaces, virtual switches, physical switches, and routers, to support the same MTU packet size.

VCF-NET-REQD-CFG-006
Design Requirement: Ensure that the latency between VMware Cloud Foundation instances that are connected in an NSX Federation is less than 500 ms.
Justification: A latency lower than 500 ms is required for NSX Federation.
Implication: None.

VCF-NET-REQD-CFG-007
Design Requirement: Provide a routed connection between the NSX Manager clusters in VMware Cloud Foundation instances that are connected in an NSX Federation.
Justification: Configuring NSX Federation requires connectivity between the NSX Global Manager instances, NSX Local Manager instances, and NSX Edge clusters.
Implication: You must assign unique routable IP addresses for each fault domain.
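A simple way to confirm the latency bound between instances before enabling NSX Federation is a sustained ping between endpoints in each instance. A minimal sketch from a Linux jump host with a placeholder address; -M do prohibits fragmentation, so the same test also exercises a 1,700-byte RTEP MTU path:

    # 100 probes sized for a 1,700-byte MTU (1700 - 28 = 1672 bytes payload);
    # read the avg value in the rtt summary line against the 500 ms bound
    ping -M do -s 1672 -c 100 10.20.30.40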


Table 15-6. Leaf-Spine Physical Network Design Recommendations for VMware Cloud Foundation

VCF-NET-RCMD-CFG-001
Design Recommendation: Use two ToR switches for each rack.
Justification: Supports the use of two 10-GbE (25-GbE or greater recommended) links to each server, provides redundancy, and reduces the overall design complexity.
Implication: Requires two ToR switches per rack, which might increase costs.

VCF-NET-RCMD-CFG-002
Design Recommendation: Implement the following physical network architecture:
- One 25-GbE (10-GbE minimum) port on each ToR switch for ESXi host uplinks (host to ToR).
- Layer 3 device that supports BGP.
Justification:
- Provides availability during a switch failure.
- Provides support for the BGP dynamic routing protocol.
Implication:
- Might limit the hardware choices.
- Requires dynamic routing protocol configuration in the physical network.

VCF-NET-RCMD-CFG-003
Design Recommendation: Use a physical network that is configured for BGP routing adjacency. An illustrative configuration sketch follows this table.
Justification:
- Supports design flexibility for routing multi-site and multi-tenancy workloads.
- BGP is the only dynamic routing protocol that is supported for NSX Federation.
- Supports failover between ECMP Edge uplinks.
Implication: Requires BGP configuration in the physical network.

VCF-NET-RCMD-CFG-004
Design Recommendation: Assign persistent IP configurations for NSX tunnel endpoints (TEPs) that use static IP pools instead of dynamic IP pool addressing.
Justification:
- Ensures that endpoints have a persistent TEP IP address.
- In VMware Cloud Foundation, TEP IP assignment by using static IP pools is recommended for all topologies.
- This configuration removes any requirement for external DHCP services.
Implication: If you add more hosts to the cluster, expanding the static IP pools might be required.

VCF-NET-RCMD-CFG-005
Design Recommendation: Configure the trunk ports connected to ESXi NICs as trunk PortFast.
Justification: Reduces the time to transition ports over to the forwarding state.
Implication: Although this design does not use the STP, switches usually have STP configured by default.

VCF-NET-RCMD-CFG-006
Design Recommendation: Configure VRRP, HSRP, or another Layer 3 gateway availability method for these networks:
- Management
- Edge overlay
Justification: Ensures that the VLANs that are stretched between availability zones are connected to a highly-available gateway. Otherwise, a failure in the Layer 3 gateway will cause disruption in the traffic in the SDN setup.
Implication: Requires configuration of a high availability technology for the Layer 3 gateways in the data center.
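The BGP adjacency between the ToR switches and the NSX Edge uplinks is configured on the physical network. The following is an illustrative, vendor-neutral sketch in FRRouting syntax with placeholder ASNs, addresses, and networks; the actual ToR configuration depends on your switch platform:

    router bgp 65001
     ! Placeholder ASN for the ToR switch; NSX Edge uplinks peer from ASN 65003
     neighbor 172.16.34.2 remote-as 65003
     neighbor 172.16.34.3 remote-as 65003
     !
     address-family ipv4 unicast
      ! Advertise the data center networks toward the NSX Edge nodes
      network 172.16.11.0/24
      ! Allow ECMP across both Edge uplinks
      maximum-paths 2
     exit-address-family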

Table 15-7. Leaf-Spine Physical Network Design Recommendations for NSX Federation in VMware Cloud Foundation

VCF-NET-RCMD-CFG-007
Design Recommendation: Provide BGP routing between all VMware Cloud Foundation instances that are connected in an NSX Federation setup.
Justification: BGP is the supported routing protocol for NSX Federation.
Implication: None.

VCF-NET-RCMD-CFG-008
Design Recommendation: Ensure that the latency between VMware Cloud Foundation instances that are connected in an NSX Federation is less than 150 ms for workload mobility.
Justification: A latency lower than 150 ms is required for the following features:
- Cross vCenter Server vMotion
Implication: None.

vSAN Design Elements for VMware Cloud Foundation


Use this list of requirements and recommendations for reference related to shared storage, vSAN
principal storage, and NFS supplemental storage in an environment with a single or multiple
VMware Cloud Foundation instances. The design also considers whether an instance contains a
single or multiple availability zones.

After you set up the physical storage infrastructure, the configuration tasks for most design
decisions are automated in VMware Cloud Foundation. You must perform the configuration
manually only for a limited number of design elements as noted in the design implication.

For full design details, see Chapter 6 vSAN Design for VMware Cloud Foundation.


Table 15-8. vSAN Design Requirements for VMware Cloud Foundation

VCF-VSAN-REQD-CFG-001
Design Requirement: Provide sufficient raw capacity to meet the initial needs of the workload domain cluster.
Justification: Ensures that sufficient resources are present to create the workload domain cluster.
Implication: None.

VCF-VSAN-REQD-CFG-002
Design Requirement: Provide at least the required minimum number of hosts according to the cluster type.
Justification: Satisfies the requirements for storage availability.
Implication: None.

Table 15-9. vSAN ESA Design Requirements for VMware Cloud Foundation

VCF-VSAN-REQD-CFG-003
Design Requirement: Verify that the hardware components used in your vSAN deployment are on the vSAN Hardware Compatibility List.
Justification: Prevents hardware-related failures during workload deployment.
Implication: Limits the number of compatible hardware configurations that can be used.

Table 15-10. vSAN Design Requirements for Stretched Clusters with VMware Cloud Foundation

VCF-VSAN-REQD-CFG-004
Design Requirement: Add the following setting to the default vSAN storage policy: Site disaster tolerance = Site mirroring - stretched cluster.
Justification: Provides the necessary protection for virtual machines in each availability zone, with the ability to recover from an availability zone outage.
Implication: You might need additional policies if third-party virtual machines are to be hosted in these clusters because their performance or availability requirements might differ from what the default VMware vSAN policy supports.

VCF-VSAN-REQD-CFG-005
Design Requirement: Configure two fault domains, one for each availability zone. Assign each host to their respective availability zone fault domain.
Justification: Fault domains are mapped to availability zones to provide logical host separation and ensure that a copy of vSAN data is always available even when an availability zone goes offline.
Implication: You must provide additional raw storage when the site mirroring - stretched cluster option is selected and fault domains are enabled.

VCF-VSAN-REQD-CFG-006
Design Requirement: Use vSAN OSA to create a stretched cluster.
Justification: Stretched clusters on top of vSAN ESA are not supported by VMware Cloud Foundation.
Implication: None.

VCF-VSAN-REQD-CFG-007
Design Requirement: Configure an individual vSAN storage policy for each stretched cluster.
Justification: The vSAN storage policy of a stretched cluster cannot be shared with other clusters.
Implication: You must configure additional vSAN storage policies.

VCF-VSAN-WTN-REQD-CFG-001
Design Requirement: Deploy a vSAN witness appliance in a location that is not local to the ESXi hosts in any of the availability zones.
Justification: Ensures availability of vSAN witness components in the event of a failure of one of the availability zones.
Implication: You must provide a third physically separate location that runs a vSphere environment. You might use a VMware Cloud Foundation instance in a separate physical location.

VCF-VSAN-WTN-REQD-CFG-002
Design Requirement: Deploy a witness appliance that corresponds to the required cluster capacity.
Justification: Ensures that the witness appliance is sized to support the projected workload storage consumption.
Implication: The vSphere environment at the witness location must satisfy the resource requirements of the witness appliance.

VCF-VSAN-WTN-REQD-CFG-003
Design Requirement: Connect the first VMkernel adapter of the vSAN witness appliance to the management network in the witness site.
Justification: Enables connecting the witness appliance to the workload domain vCenter Server.
Implication: The management networks in both availability zones must be routed to the management network in the witness site.

VCF-VSAN-WTN-REQD-CFG-004
Design Requirement: Allocate a statically assigned IP address and host name to the management adapter of the vSAN witness appliance.
Justification: Simplifies maintenance and tracking, and implements a DNS configuration.
Implication: Requires precise IP address management.

VCF-VSAN-WTN-REQD-CFG-005
Design Requirement: Configure forward and reverse DNS records for the vSAN witness appliance for the VMware Cloud Foundation instance.
Justification: Enables connecting the vSAN witness appliance to the workload domain vCenter Server by FQDN instead of IP address.
Implication: You must provide DNS records for the vSAN witness appliance.

VCF-VSAN-WTN-REQD-CFG-006
Design Requirement: Configure time synchronization by using an internal NTP time source for the vSAN witness appliance.
Justification: Prevents any failures in the stretched cluster configuration that are caused by time mismatch between the vSAN witness appliance and the ESXi hosts in both availability zones and the workload domain vCenter Server.
Implication: An operational NTP service must be available in the environment. All firewalls between the vSAN witness appliance and the NTP servers must allow NTP traffic on the required network ports.

Table 15-11. vSAN Design Recommendations for VMware Cloud Foundation

VCF-VSAN-RCMD-CFG-001
Design Recommendation: Provide sufficient raw capacity to meet the planned needs of the workload domain cluster.
Justification: Ensures that sufficient resources are present in the workload domain cluster, preventing the need to expand the vSAN datastore in the future.
Implication: None.

VCF-VSAN-RCMD-CFG-002
Design Recommendation: Ensure that at least 30% of free space is always available on the vSAN datastore.
Justification: This reserved capacity is set aside for host maintenance mode data evacuation, component rebuilds, rebalancing operations, and VM snapshots.
Implication: Increases the amount of available storage needed.

VCF-VSAN-RCMD-CFG-003
Design Recommendation: Use the default VMware vSAN storage policy.
Justification: Provides the level of redundancy that is needed in the workload domain cluster. Provides the level of performance that is enough for the individual workloads.
Implication: You might need additional policies for third-party virtual machines hosted in these clusters because their performance or availability requirements might differ from what the default VMware vSAN policy supports.

VCF-VSAN-RCMD-CFG-004
Design Recommendation: Leave the default virtual machine swap file as a sparse object on vSAN.
Justification: Sparse virtual swap files consume capacity on vSAN only as they are accessed. As a result, you can reduce the consumption on the vSAN datastore if virtual machines do not experience memory over-commitment, which would require the use of the virtual swap file.
Implication: None.

VCF-VSAN-RCMD-CFG-005
Design Recommendation: Use the existing vSphere Distributed Switch instance for the workload domain cluster.
Justification: Reduces the complexity of the network design. Reduces the number of physical NICs required.
Implication: All traffic types can be shared over common uplinks.

VCF-VSAN-RCMD-CFG-006
Design Recommendation: Configure jumbo frames on the VLAN for vSAN traffic.
Justification: Simplifies configuration because jumbo frames are also used to improve the performance of vSphere vMotion and NFS storage traffic. Reduces the CPU overhead that results from high network usage.
Implication: Every device in the network must support jumbo frames.

VCF-VSAN-RCMD-CFG-007
Design Recommendation: Configure vSAN in an all-flash configuration in the default workload domain cluster.
Justification: Meets the performance needs of the default workload domain cluster.
Implication: All vSAN disks must be flash disks, which might cost more than magnetic disks.

Table 15-12. vSAN OSA Design Recommendations for VMware Cloud Foundation

VCF-VSAN-RCMD-CFG-008
Design Recommendation: Ensure that the storage I/O controller has a minimum queue depth of 256 set.
Justification: Storage controllers with lower queue depths can cause performance and stability problems when running vSAN. vSAN ReadyNode servers are configured with the correct queue depths for vSAN.
Implication: Limits the number of compatible I/O controllers that can be used for storage.

VCF-VSAN-RCMD-CFG-009
Design Recommendation: Do not use the storage I/O controllers that are running vSAN disk groups for another purpose.
Justification: Running non-vSAN disks, for example, VMFS, on a storage I/O controller that is running a vSAN disk group can impact vSAN performance.
Implication: If non-vSAN disks are required in ESXi hosts, you must have an additional storage I/O controller in the host.

VCF-VSAN-RCMD-CFG-010
Design Recommendation: Configure vSAN with a minimum of two disk groups per ESXi host.
Justification: Reduces the size of the fault domain and spreads the I/O load over more disks for better performance.
Implication: Using multiple disk groups requires more disks in each ESXi host.

VCF-VSAN-RCMD-CFG-011
Design Recommendation: For the cache tier in each disk group, use a flash-based drive that is at least 600 GB large.
Justification: Provides enough cache for both hybrid or all-flash vSAN configurations to buffer I/O and ensure disk group performance. Additional space in the cache tier does not increase performance.
Implication: Using larger flash disks can increase the initial host cost.

Table 15-13. vSAN ESA Design Recommendations for VMware Cloud Foundation

VCF-VSAN-RCMD-CFG-012
Design Recommendation: Activate auto-policy management.
Justification: Configures optimized storage policies based on the cluster type and the number of hosts in the cluster inventory. Changes to the number of hosts in the cluster or Host Rebuild Reserve will prompt you to make a suggested adjustment to the optimized storage policy.
Implication: You must activate auto-policy management manually.

VCF-VSAN-RCMD-CFG-013
Design Recommendation: Activate vSAN ESA compression.
Justification: Activated by default, it also improves performance.
Implication: PostgreSQL databases and other applications might use their own compression capabilities. In these cases, using a storage policy with the compression capability turned off will save CPU cycles. You can disable vSAN ESA compression for such workloads through the use of the Storage Policy Based Management (SPBM) framework.

VCF-VSAN-RCMD-CFG-014
Design Recommendation: Use NICs with a minimum 25-GbE capacity.
Justification: 10-GbE NICs will limit the scale and performance of a vSAN ESA cluster because performance requirements usually increase over the lifespan of the cluster.
Implication: Requires a 25-GbE or faster network fabric.

Table 15-14. vSAN Design Recommendations for Stretched Clusters with VMware Cloud Foundation

VCF-VSAN-WTN-RCMD-CFG-001
Design Recommendation: Configure the vSAN witness appliance to use the first VMkernel adapter, that is, the management interface, for vSAN witness traffic.
Justification: Removes the requirement to have static routes on the witness appliance because witness traffic is routed over the management network.
Implication: The management networks in both availability zones must be routed to the management network in the witness site.

VCF-VSAN-WTN-RCMD-CFG-002
Design Recommendation: Place witness traffic on the management VMkernel adapter of all the ESXi hosts in the workload domain.
Justification: Separates the witness traffic from the vSAN data traffic. Witness traffic separation provides the following benefits:
- Removes the requirement to have static routes from the vSAN networks in both availability zones to the witness site.
- Removes the requirement to have jumbo frames enabled on the path between each availability zone and the witness site because witness traffic can use a regular MTU size of 1500 bytes.
Implication: The management networks in both availability zones must be routed to the management network in the witness site.

ESXi Design Elements for VMware Cloud Foundation


Use this list of requirements and recommendations for reference related to the ESXi host
configuration in an environment with a single or multiple VMware Cloud Foundation instances.
The design elements determine the ESXi hardware configuration, networking, life cycle
management and remote access.

The configuration tasks for most design requirements and recommendations are automated
in VMware Cloud Foundation. You must perform the configuration manually only for a limited
number of decisions as noted in the design implications.

For full design details, see ESXi Design for VMware Cloud Foundation.


Table 15-15. Design Requirements for ESXi Server Hardware

VCF-ESX-REQD-CFG-001
Design Requirement: Install no less than the minimum number of ESXi hosts required for the cluster type being deployed.
Justification: Ensures that availability requirements are met. If one of the hosts is not available because of a failure or maintenance event, the CPU overcommitment ratio becomes 2:1.
Implication: None.

VCF-ESX-REQD-CFG-002
Design Requirement: Ensure that each ESXi host matches the required CPU, memory, and storage specification.
Justification: Ensures that workloads will run without contention even during failure and maintenance conditions.
Implication: Assemble the server specification and number according to the sizing in the VMware Cloud Foundation Planning and Preparation Workbook, which is based on projected deployment size.

VCF-ESX-REQD-SEC-001
Design Requirement: Regenerate the certificate of each ESXi host after assigning the host an FQDN.
Justification: Establishes a secure connection with VMware Cloud Builder during the deployment of a workload domain and prevents man-in-the-middle (MiTM) attacks.
Implication: You must manually regenerate the certificates of the ESXi hosts before the deployment of a workload domain.


Table 15-16. Design Recommendations for ESXi Server Hardware

VCF-ESX-RCMD-CFG-001
Recommendation: Use vSAN ReadyNodes with vSAN storage for each ESXi host in the management domain.
Justification: Your management domain is fully compatible with vSAN at deployment. For information about the models of physical servers that are vSAN-ready, see the vSAN Compatibility Guide for vSAN ReadyNodes.
Implication: Hardware choices might be limited. If you plan to use a server configuration that is not a vSAN ReadyNode, your CPU, disks, and I/O modules must be listed on the VMware Compatibility Guide under CPU Series and vSAN Compatibility List, aligned to the ESXi version specified in the VMware Cloud Foundation 5.1 Release Notes.

VCF-ESX-RCMD-CFG-002
Recommendation: Allocate hosts with uniform configuration across the default management vSphere cluster.
Justification: A balanced cluster has these advantages:
- Predictable performance even during hardware failures
- Minimal impact of resynchronization or rebuild operations on performance
Implication: You must apply vendor sourcing, budgeting, and procurement considerations for uniform server nodes on a per cluster basis.

VCF-ESX-RCMD-CFG-003
Recommendation: When sizing CPU, do not consider multithreading technology and associated performance gains.
Justification: Although multithreading technologies increase CPU performance, the performance gain depends on running workloads and differs from one case to another.
Implication: Because you must provide more physical CPU cores, costs increase and hardware choices become limited.

VCF-ESX-RCMD-CFG-004
Recommendation: Install and configure all ESXi hosts in the default management cluster to boot using a 128-GB device or larger.
Justification: Provides hosts that have large memory, that is, greater than 512 GB, with enough space for the scratch partition when using vSAN.
Implication: None.

VCF-ESX-RCMD-CFG-005
Recommendation: Use the default configuration for the scratch partition on all ESXi hosts in the default management cluster.
Justification: If a failure in the vSAN cluster occurs, the ESXi hosts remain responsive and log information is still accessible. It is not possible to use the vSAN datastore for the scratch partition.
Implication: None.

VCF-ESX-RCMD-CFG-006
Recommendation: For workloads running in the default management cluster, save the virtual machine swap file at the default location.
Justification: Simplifies the configuration process.
Implication: Increases the amount of replication traffic for management workloads that are recovered as part of the disaster recovery process.

VCF-ESX-RCMD-NET-001
Recommendation: Place the ESXi hosts in each management domain cluster on a host management network that is separate from the VM management network.
Justification: Enables the separation of the physical VLAN between ESXi hosts and the other management components for security reasons.
Implication: Increases the number of VLANs required.

VCF-ESX-RCMD-NET-002
Recommendation: Place the ESXi hosts in each VI workload domain on a separate host management VLAN-backed network.
Justification: Enables the separation of the physical VLAN between the ESXi hosts in different VI workload domains for security reasons.
Implication: Increases the number of VLANs required. For each VI workload domain, you must allocate a separate management subnet.

VCF-ESX-RCMD-SEC-001
Recommendation: Deactivate SSH access on all ESXi hosts in the management domain by having the SSH service stopped and using the default SSH service policy "Start and stop manually".
Justification: Ensures compliance with the vSphere Security Configuration Guide and with security best practices. Disabling SSH access reduces the risk of security attacks on the ESXi hosts through the SSH interface.
Implication: You must activate SSH access manually for troubleshooting or support activities because VMware Cloud Foundation deactivates SSH on ESXi hosts after workload domain deployment.

VCF-ESX-RCMD-SEC-002
Recommendation: Set the advanced setting UserVars.SuppressShellWarning to 0 across all ESXi hosts in the management domain.
Justification: Ensures compliance with the vSphere Security Configuration Guide and with security best practices. Enables the warning message that appears in the vSphere Client every time SSH access is activated on an ESXi host.
Implication: You must turn off SSH enablement warning messages manually when performing troubleshooting or support activities.
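Both SSH recommendations can be verified or applied from the ESXi shell. A minimal sketch of equivalent commands; in normal operation, VMware Cloud Foundation applies this configuration itself after workload domain deployment:

    # Stop the SSH service (the service policy remains "Start and stop manually")
    vim-cmd hostsvc/stop_ssh

    # Keep the vSphere Client warning visible whenever SSH is enabled
    esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 0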

vCenter Server Design Elements for VMware Cloud


Foundation
Use this list of requirements and recommendations for reference related to the vCenter Server
configuration in an environment with a single or multiple VMware Cloud Foundation instances.
The design elements also consider if an instance contains a single or multiple availability zones.
The vCenter Server design also includes the configuration of the default management cluster.

VMware by Broadcom 221


VMware Cloud Foundation Design Guide

The configuration tasks for most design requirements and recommendations are automated
in VMware Cloud Foundation. You must perform the configuration manually only for a limited
number of decisions as noted in the design implications.

For full design details, see vCenter Server Design for VMware Cloud Foundation.


vCenter Server Design Elements


Table 15-17. vCenter Server Design Requirements for VMware Cloud Foundation

VCF-VCS-REQD-CFG-001
Design Requirement: Deploy a dedicated vCenter Server appliance for the management domain of the VMware Cloud Foundation instance.
Justification:
- Isolates vCenter Server failures to management or customer workloads.
- Isolates vCenter Server operations between management and customers.
- Supports a scalable cluster design where you can reuse the management components as more customer workloads are added to the SDDC.
- Simplifies capacity planning for customer workloads because you do not consider management workloads for the VI workload domain vCenter Server.
- Improves the ability to upgrade the vSphere environment and related components by enabling explicit separation of maintenance windows: management workloads remain available while you are upgrading the tenant workloads, and customer workloads remain available while you are upgrading the management nodes.
- Supports clear separation of roles and responsibilities to ensure that only administrators with granted authorization can control the management workloads.
- Facilitates quicker troubleshooting and problem resolution.
- Simplifies disaster recovery operations by supporting a clear separation between recovery of the management components and tenant workloads.
- Provides isolation of potential network issues by introducing network separation of the clusters in the SDDC.
Implication: Requires a separate license for the vCenter Server instance in the management domain.

VCF-VCS-REQD-NET-001
Design Requirement: Place all workload domain vCenter Server appliances on the VM management network in the management domain.
Justification:
- Simplifies IP addressing for management VMs by using the same VLAN and subnet.
- Provides simplified secure access to management VMs in the same VLAN network.
Implication: None.

Table 15-18. vCenter Server Design Recommendations for VMware Cloud Foundation

VCF-VCS-RCMD-CFG-001
Design Recommendation: Deploy an appropriately sized vCenter Server appliance for each workload domain.
Justification: Ensures resource availability and usage efficiency per workload domain.
Implication: The default size for a management domain is Small and for VI workload domains is Medium. To override these values, you must use the Cloud Builder API and the SDDC Manager API.

VCF-VCS-RCMD-CFG-002
Design Recommendation: Deploy a vCenter Server appliance with the appropriate storage size.
Justification: Ensures resource availability and usage efficiency per workload domain.
Implication: The default size for a management domain is Small and for VI workload domains is Medium. To override these values, you must use the API.

VCF-VCS-RCMD-CFG-003
Design Recommendation: Protect workload domain vCenter Server appliances by using vSphere HA.
Justification: vSphere HA is the only supported method to protect vCenter Server availability in VMware Cloud Foundation.
Implication: vCenter Server becomes unavailable during a vSphere HA failover.

VCF-VCS-RCMD-CFG-004
Design Recommendation: In vSphere HA, set the restart priority policy for the vCenter Server appliance to high.
Justification: vCenter Server is the management and control plane for physical and virtual infrastructure. In a vSphere HA event, to ensure the rest of the SDDC management stack comes up faultlessly, the workload domain vCenter Server must be available first, before the other management components come online.
Implication: If the restart priority for another virtual machine is set to highest, the connectivity delay for the management components will be longer.

A sketch of the restart priority setting follows this table.
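Outside of the VMware Cloud Foundation automation, the restart priority in VCF-VCS-RCMD-CFG-004 maps to a per-VM vSphere HA override. A hedged pyVmomi sketch, assuming the cluster and vCenter Server VM objects have already been looked up:

# Sketch: set the vSphere HA restart priority of a vCenter Server VM to "high".
from pyVmomi import vim

def set_restart_priority(cluster: vim.ClusterComputeResource, vm: vim.VirtualMachine):
    vm_settings = vim.cluster.DasVmSettings(restartPriority="high")
    vm_config = vim.cluster.DasVmConfigInfo(key=vm, dasSettings=vm_settings)
    spec = vim.cluster.ConfigSpecEx(
        dasVmConfigSpec=[vim.cluster.DasVmConfigSpec(operation="add", info=vm_config)]
    )
    # Returns a task object that can be monitored for completion.
    return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)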


Table 15-19. vCenter Server Design Recommendations for vSAN Stretched Clusters with VMware Cloud Foundation

VCF-VCS-RCMD-CFG-005
Design Recommendation: Add the vCenter Server appliance to the virtual machine group for the first availability zone.
Justification: Ensures that, by default, the vCenter Server appliance is powered on a host in the first availability zone.
Implication: None.

vCenter Single Sign-On Design Elements


Table 15-20. Design Requirements for the Multiple vCenter Server Instance - Single vCenter Single Sign-On Domain Topology for VMware Cloud Foundation

VCF-VCS-REQD-SSO-STD-001
Design Requirement: Join all vCenter Server instances within a VMware Cloud Foundation instance to a single vCenter Single Sign-On domain.
Justification: When all vCenter Server instances are in the same vCenter Single Sign-On domain, they can share authentication and license data across all components.
Implication:
- Only one vCenter Single Sign-On domain exists.
- The number of linked vCenter Server instances in the same vCenter Single Sign-On domain is limited to 15 instances. Because each workload domain uses a dedicated vCenter Server instance, you can deploy up to 15 domains within each VMware Cloud Foundation instance.

VCF-VCS-REQD-SSO-STD-002
Design Requirement: Create a ring topology between the vCenter Server instances within the VMware Cloud Foundation instance.
Justification: By default, one vCenter Server instance replicates only with another vCenter Server instance. This setup creates a single point of failure for replication. A ring topology ensures that each vCenter Server instance has two replication partners and removes any single point of failure.
Implication: None.


Table 15-21. Design Requirements for Multiple vCenter Server Instance - Multiple vCenter Single Sign-On Domain Topology for VMware Cloud Foundation

VCF-VCS-REQD-SSO-ISO-001
Design Requirement: Create all vCenter Server instances within a VMware Cloud Foundation instance in their own unique vCenter Single Sign-On domains.
Justification:
- Enables isolation at the vCenter Single Sign-On domain layer for increased security separation.
- Supports up to 25 workload domains.
Implication:
- Each vCenter Server instance is managed through its own pane of glass using a different set of administrative credentials.
- You must manage password rotation for each vCenter Single Sign-On domain separately.

vSphere Cluster Design Elements for VMware Cloud Foundation

Use this list of requirements and recommendations for reference related to the vSphere cluster
configuration in an environment with a single or multiple VMware Cloud Foundation instances.
The design elements also consider if an instance contains a single or multiple availability zones.

For full design details, see Logical vSphere Cluster Design for VMware Cloud Foundation.

Table 15-22. vSphere Cluster Design Requirements for VMware Cloud Foundation

VCF-CLS-REQD-CFG-001
Design Requirement: Create a cluster in each workload domain for the initial set of ESXi hosts.
Justification:
- Simplifies configuration by isolating management from customer workloads.
- Ensures that customer workloads have no impact on the management stack.
Implication: Management of multiple clusters and vCenter Server instances increases operational overhead.

VCF-CLS-REQD-CFG-002
Design Requirement: Allocate a minimum number of ESXi hosts according to the cluster type being deployed.
Justification: Ensures the correct level of redundancy to protect against host failure in the cluster.
Implication: To support redundancy, you must allocate additional ESXi host resources.

VCF-CLS-REQD-CFG-003
Design Requirement: If using a consolidated workload domain, configure the following vSphere resource pools to control resource usage by management and customer workloads:
- cluster-name-rp-sddc-mgmt
- cluster-name-rp-sddc-edge
- cluster-name-rp-user-edge
- cluster-name-rp-user-vm
Justification: Ensures sufficient resources for the management components.
Implication: You must manage the resource pool settings over time.

VCF-CLS-REQD-CFG-004
Design Requirement: Configure the vSAN network gateway IP address as the isolation address for the cluster.
Justification: Allows vSphere HA to validate if a host is isolated from the vSAN network.
Implication: None.

VCF-CLS-REQD-CFG-005
Design Requirement: Set the advanced cluster setting das.usedefaultisolationaddress to false.
Justification: Ensures that vSphere HA uses the manual isolation addresses instead of the default management network gateway address.
Implication: None.

A sketch of the isolation address configuration follows this table.
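VCF-CLS-REQD-CFG-004 and VCF-CLS-REQD-CFG-005 correspond to two vSphere HA advanced options. A minimal pyVmomi sketch, assuming the cluster object is already looked up and using a placeholder vSAN gateway address:

# Sketch: point vSphere HA at the vSAN gateway as the cluster isolation address.
from pyVmomi import vim

def configure_isolation_address(cluster: vim.ClusterComputeResource, vsan_gw: str):
    das = vim.cluster.DasConfigInfo(
        option=[
            # Do not use the default management gateway for isolation checks.
            vim.option.OptionValue(key="das.usedefaultisolationaddress", value="false"),
            # Use the vSAN network gateway instead, for example "172.16.13.1".
            vim.option.OptionValue(key="das.isolationaddress0", value=vsan_gw),
        ]
    )
    spec = vim.cluster.ConfigSpecEx(dasConfig=das)
    return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)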

Table 15-23. vSphere Cluster Design Requirements for vSAN Stretched Clusters with VMware Cloud Foundation

VCF-CLS-REQD-CFG-006
Design Requirement: Configure the vSAN network gateway IP addresses for the second availability zone as additional isolation addresses for the cluster.
Justification: Allows vSphere HA to validate if a host is isolated from the vSAN network for hosts in both availability zones.
Implication: None.

VCF-CLS-REQD-CFG-007
Design Requirement: Enable the Override default gateway for this adapter setting on the vSAN VMkernel adapters on all ESXi hosts.
Justification: Enables routing the vSAN data traffic through the vSAN network gateway rather than through the management gateway.
Implication: vSAN networks across availability zones must have a route to each other.

VCF-CLS-REQD-CFG-008
Design Requirement: Create a host group for each availability zone and add the ESXi hosts in the zone to the respective group.
Justification: Makes it easier to manage which virtual machines run in which availability zone.
Implication: You must create and maintain VM-Host DRS group rules.


Table 15-24. vSphere Cluster Design Recommendations for VMware Cloud Foundation

VCF-CLS-RCMD-CFG-001
Design Recommendation: Use vSphere HA to protect all virtual machines against failures.
Justification: vSphere HA supports a robust level of protection for both ESXi host and virtual machine availability.
Implication: You must provide sufficient resources on the remaining hosts so that virtual machines can be restarted on those hosts in the event of a host outage.

VCF-CLS-RCMD-CFG-002
Design Recommendation: Set the host isolation response to Power Off and restart VMs in vSphere HA.
Justification: vSAN requires that the host isolation response be set to Power Off and to restart virtual machines on available ESXi hosts.
Implication: If a false positive event occurs, virtual machines are powered off and an ESXi host is declared isolated incorrectly.

VCF-CLS-RCMD-CFG-003
Design Recommendation: Configure admission control for 1 ESXi host failure and percentage-based failover capacity.
Justification: Using the percentage-based reservation works well in situations where virtual machines have varying and sometimes significant CPU or memory reservations. vSphere automatically calculates the reserved percentage according to the number of ESXi host failures to tolerate and the number of ESXi hosts in the cluster.
Implication: In a cluster of 4 ESXi hosts, the resources of only 3 ESXi hosts are available for use.

VCF-CLS-RCMD-CFG-004
Design Recommendation: Enable VM Monitoring for each cluster.
Justification: VM Monitoring provides in-guest protection for most VM workloads. The application or service running on the virtual machine must be capable of restarting successfully after a reboot, or the virtual machine restart is not sufficient.
Implication: None.

VCF-CLS-RCMD-CFG-005
Design Recommendation: Set the advanced cluster setting das.iostatsinterval to 0 to deactivate monitoring the storage and network I/O activities of the management appliances.
Justification: Enables triggering a restart of a management appliance when an OS failure occurs and heartbeats are not received from VMware Tools, instead of waiting additionally for the I/O check to complete.
Implication: If you want to specifically enable I/O monitoring, you must configure the das.iostatsinterval advanced setting.

VCF-CLS-RCMD-CFG-006
Design Recommendation: Enable vSphere DRS on all clusters, using the default fully automated mode with medium threshold.
Justification: Provides the best trade-off between load balancing and unnecessary migrations with vSphere vMotion.
Implication: If a vCenter Server outage occurs, the mapping from virtual machines to ESXi hosts might be difficult to determine.

VCF-CLS-RCMD-CFG-007
Design Recommendation: Enable Enhanced vMotion Compatibility (EVC) on all clusters in the management domain.
Justification: Supports cluster upgrades without virtual machine downtime.
Implication: You must enable EVC only if the clusters contain hosts with CPUs from the same vendor. You must enable EVC on the default management domain cluster during bringup.

VCF-CLS-RCMD-CFG-008
Design Recommendation: Set the cluster EVC mode to the highest available baseline that is supported for the lowest CPU architecture on the hosts in the cluster.
Justification: Supports cluster upgrades without virtual machine downtime.
Implication: None.

VCF-CLS-RCMD-LCM-001
Design Recommendation: Use images as the life cycle management method for VI workload domains.
Justification: vSphere Lifecycle Manager images simplify the management of firmware and vendor add-ons.
Implication: An initial cluster image is required during workload domain or cluster deployment.

A sketch combining several of these cluster settings follows this table.
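Several of the recommendations above are fields of a single cluster reconfiguration. A hedged pyVmomi sketch covering admission control (VCF-CLS-RCMD-CFG-003), VM Monitoring (CFG-004), das.iostatsinterval (CFG-005), and DRS (CFG-006); this illustrates the settings, not the VMware Cloud Foundation automation itself:

# Sketch: apply HA and DRS cluster recommendations in one reconfiguration task.
from pyVmomi import vim

def apply_cluster_recommendations(cluster: vim.ClusterComputeResource):
    das = vim.cluster.DasConfigInfo(
        enabled=True,
        vmMonitoring="vmMonitoringOnly",  # VM Monitoring without application monitoring
        admissionControlPolicy=vim.cluster.FailoverResourcesAdmissionControlPolicy(
            failoverLevel=1,              # tolerate one host failure
            autoComputePercentages=True,  # vSphere derives the reserved percentage
        ),
        option=[vim.option.OptionValue(key="das.iostatsinterval", value="0")],
    )
    drs = vim.cluster.DrsConfigInfo(
        enabled=True, defaultVmBehavior="fullyAutomated", vmotionRate=3  # medium threshold
    )
    spec = vim.cluster.ConfigSpecEx(dasConfig=das, drsConfig=drs)
    return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)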

Table 15-25. vSphere Cluster Design Recommendations for vSAN Stretched Clusters with VMware Cloud Foundation

VCF-CLS-RCMD-CFG-009
Design Recommendation: Increase the admission control percentage to half of the ESXi hosts in the cluster.
Justification: Allocating only half of a stretched cluster ensures that all VMs have enough resources if an availability zone outage occurs.
Implication: In a cluster of 8 ESXi hosts, the resources of only 4 ESXi hosts are available for use. If you add more ESXi hosts to the default management cluster, add them in pairs, one per availability zone.

VCF-CLS-RCMD-CFG-010
Design Recommendation: Create a virtual machine group for each availability zone and add the VMs in the zone to the respective group.
Justification: Ensures that virtual machines are located only in the assigned availability zone to avoid unnecessary vSphere vMotion migrations.
Implication: You must add virtual machines to the allocated group manually.

VCF-CLS-RCMD-CFG-011
Design Recommendation: Create a should-run-on-hosts-in-group VM-Host affinity rule to run each group of virtual machines on the respective group of hosts in the same availability zone.
Justification: Ensures that virtual machines are located only in the assigned availability zone to avoid unnecessary vSphere vMotion migrations.
Implication: You must manually create the rules.

A sketch of the group and rule creation follows this table.
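The host group, VM group, and "should run on hosts in group" rule (VCF-CLS-REQD-CFG-008, VCF-CLS-RCMD-CFG-010, and VCF-CLS-RCMD-CFG-011) can be created in one cluster reconfiguration. A hedged pyVmomi sketch; group and rule names are illustrative only:

# Sketch: per-availability-zone DRS groups and a soft VM-Host affinity rule.
from pyVmomi import vim

def create_az_affinity(cluster, az_hosts, az_vms):
    spec = vim.cluster.ConfigSpecEx(
        groupSpec=[
            vim.cluster.GroupSpec(operation="add", info=vim.cluster.HostGroup(
                name="az1-hosts", host=az_hosts)),
            vim.cluster.GroupSpec(operation="add", info=vim.cluster.VmGroup(
                name="az1-vms", vm=az_vms)),
        ],
        rulesSpec=[
            vim.cluster.RuleSpec(operation="add", info=vim.cluster.VmHostRuleInfo(
                name="az1-vms-should-run-az1-hosts",
                enabled=True,
                mandatory=False,  # "should run", not "must run"
                vmGroupName="az1-vms",
                affineHostGroupName="az1-hosts")),
        ],
    )
    return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)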


vSphere Networking Design Elements for VMware Cloud Foundation

Use this list of recommendations for reference related to the configuration of the vSphere
Distributed Switch instances and VMkernel adapters in a VMware Cloud Foundation environment.

For full design details, see vSphere Networking Design for VMware Cloud Foundation.

Table 15-26. vSphere Networking Design Recommendations for VMware Cloud Foundation

VCF-VDS-RCMD-CFG-001
Design Recommendation: Use a single vSphere Distributed Switch per cluster.
Justification:
- Reduces the complexity of the network design.
- Reduces the size of the fault domain.
Implication: Increases the number of vSphere Distributed Switches that must be managed.

VCF-VDS-RCMD-CFG-002
Design Recommendation: Configure the MTU size of the vSphere Distributed Switch to 9000 for jumbo frames.
Justification:
- Supports the MTU size required by system traffic types.
- Improves traffic throughput.
Implication: When adjusting the MTU packet size, you must also configure the entire network path (VMkernel ports, virtual switches, physical switches, and routers) to support the same MTU packet size.

VCF-VDS-RCMD-DPG-001
Design Recommendation: Use ephemeral port binding for the Management VM port group.
Justification: Using ephemeral port binding provides the option for recovery of the vCenter Server instance that is managing the distributed switch.
Implication: Port-level permissions and controls are lost across power cycles, and no historical context is saved.

VCF-VDS-RCMD-DPG-002
Design Recommendation: Use static port binding for all non-management port groups.
Justification: Static binding ensures a virtual machine connects to the same port on the vSphere Distributed Switch. This allows for historical data and port-level monitoring.
Implication: None.

VCF-VDS-RCMD-DPG-003
Design Recommendation: Use the Route based on physical NIC load teaming algorithm for the VM management port group.
Justification: Reduces the complexity of the network design, increases resiliency, and can adjust to fluctuating workloads.
Implication: None.

VCF-VDS-RCMD-DPG-004
Design Recommendation: Use the Route based on physical NIC load teaming algorithm for the ESXi management port group.
Justification: Reduces the complexity of the network design, increases resiliency, and can adjust to fluctuating workloads.
Implication: None.

VCF-VDS-RCMD-DPG-005
Design Recommendation: Use the Route based on physical NIC load teaming algorithm for the vSphere vMotion port group.
Justification: Reduces the complexity of the network design, increases resiliency, and can adjust to fluctuating workloads.
Implication: None.

VCF-VDS-RCMD-DPG-006
Design Recommendation: Use the Route based on physical NIC load teaming algorithm for the vSAN port group.
Justification: Reduces the complexity of the network design, increases resiliency, and can adjust to fluctuating workloads.
Implication: None.

VCF-VDS-RCMD-NIO-001
Design Recommendation: Enable Network I/O Control on the vSphere Distributed Switch of the management domain cluster.
Justification: Increases resiliency and performance of the network.
Implication: Network I/O Control might impact network performance for critical traffic types if misconfigured.

VCF-VDS-RCMD-NIO-002
Design Recommendation: Set the share value for management traffic to Normal.
Justification: By keeping the default setting of Normal, management traffic is prioritized higher than vSphere vMotion but lower than vSAN traffic. Management traffic is important because it ensures that the hosts can still be managed during times of network contention.
Implication: None.

VCF-VDS-RCMD-NIO-003
Design Recommendation: Set the share value for vSphere vMotion traffic to Low.
Justification: During times of network contention, vSphere vMotion traffic is not as important as virtual machine or storage traffic.
Implication: During times of network contention, vMotion takes longer than usual to complete.

VCF-VDS-RCMD-NIO-004
Design Recommendation: Set the share value for virtual machines to High.
Justification: Virtual machines are the most important asset in the SDDC. Leaving the default setting of High ensures that they always have access to the network resources they need.
Implication: None.

VCF-VDS-RCMD-NIO-005
Design Recommendation: Set the share value for vSAN traffic to High.
Justification: During times of network contention, vSAN traffic needs guaranteed bandwidth to support virtual machine performance.
Implication: None.

VCF-VDS-RCMD-NIO-006
Design Recommendation: Set the share value for other traffic types to Low.
Justification: By default, VMware Cloud Foundation does not use other traffic types, like vSphere FT traffic. Hence, these traffic types can be set to the lowest priority.
Implication: None.

A sketch of the teaming policy configuration follows this table.
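The Route based on physical NIC load teaming policy in VCF-VDS-RCMD-DPG-003 through DPG-006 corresponds to the "loadbalance_loadbased" policy in the vSphere API. A hedged pyVmomi sketch that reconfigures an existing distributed port group, assuming the port group object is already looked up:

# Sketch: apply load-based teaming to a distributed port group.
from pyVmomi import vim

def set_load_based_teaming(dvpg: vim.dvs.DistributedVirtualPortgroup):
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
        policy=vim.StringPolicy(value="loadbalance_loadbased")
    )
    port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        uplinkTeamingPolicy=teaming
    )
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        configVersion=dvpg.config.configVersion,  # required for optimistic locking
        defaultPortConfig=port_cfg,
    )
    return dvpg.ReconfigureDVPortgroup_Task(spec=spec)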

NSX Design Elements for VMware Cloud Foundation


Use this list of requirements and recommendations for reference related to the configuration of
NSX in an environment with a single or multiple VMware Cloud Foundation instances. The design
also considers if an instance contains a single or multiple availability zones.

For full design details, see Chapter 8 NSX Design for VMware Cloud Foundation.

NSX Manager Design Elements


Table 15-27. NSX Manager Design Requirements for VMware Cloud Foundation

VCF-NSX-LM-REQD-CFG-001
Design Requirement: Place the appliances of the NSX Manager cluster on the VM management network in the management domain.
Justification:
- Simplifies IP addressing for management VMs by using the same VLAN and subnet.
- Provides simplified secure access to management VMs in the same VLAN network.
Implication: None.

VCF-NSX-LM-REQD-CFG-002
Design Requirement: Deploy three NSX Manager nodes in the default vSphere cluster in the management domain for configuring and managing the network services for the workload domain.
Justification: Supports high availability of the NSX Manager cluster.
Implication: You must have sufficient resources in the default cluster of the management domain to run three NSX Manager nodes.


Table 15-28. NSX Manager Design Recommendations for VMware Cloud Foundation

VCF-NSX-LM-RCMD-CFG-001
Design Recommendation: Deploy appropriately sized nodes in the NSX Manager cluster for the workload domain.
Justification: Ensures resource availability and usage efficiency per workload domain.
Implication: The default size for a management domain is Medium, and for VI workload domains is Large.

VCF-NSX-LM-RCMD-CFG-002
Design Recommendation: Create a virtual IP (VIP) address for the NSX Manager cluster for the workload domain.
Justification: Provides high availability of the user interface and API of NSX Manager.
Implication:
- The VIP address feature provides high availability only. It does not load-balance requests across the cluster.
- When using the VIP address feature, all NSX Manager nodes must be deployed on the same Layer 2 network.

VCF-NSX-LM-RCMD-CFG-003
Design Recommendation: Apply VM-VM anti-affinity rules in vSphere Distributed Resource Scheduler (vSphere DRS) to the NSX Manager appliances.
Justification: Keeps the NSX Manager appliances running on different ESXi hosts for high availability.
Implication: You must allocate at least four physical hosts so that the three NSX Manager appliances continue running if an ESXi host failure occurs.

VCF-NSX-LM-RCMD-CFG-004
Design Recommendation: In vSphere HA, set the restart priority policy for each NSX Manager appliance to high.
Justification:
- NSX Manager implements the control plane for virtual network segments. vSphere HA restarts the NSX Manager appliances first so that other virtual machines that are being powered on or migrated by using vSphere vMotion while the control plane is offline lose connectivity only until the control plane quorum is re-established.
- Setting the restart priority to high reserves the highest priority for flexibility for adding services that must be started before NSX Manager.
Implication: If the restart priority for another management appliance is set to highest, the connectivity delay for management appliances will be longer.

A sketch of the VIP configuration follows this table.
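The cluster VIP in VCF-NSX-LM-RCMD-CFG-002 can be set through the NSX API. A hedged sketch; the hostname, credentials, and VIP are placeholders, and you should verify the endpoint against the NSX API reference for your release:

# Sketch: set the NSX Manager cluster virtual IP address.
import requests

NSX = "https://nsx01.example.com"
resp = requests.post(
    f"{NSX}/api/v1/cluster/api-virtual-ip",
    params={"action": "set_virtual_ip", "ip_address": "172.16.11.71"},
    auth=("admin", "***"),
    verify=False,  # lab only; use CA-signed certificates in production
)
resp.raise_for_status()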


Table 15-29. NSX Manager Design Recommendations for Stretched Clusters in VMware Cloud Foundation

VCF-NSX-LM-RCMD-CFG-006
Design Recommendation: Add the NSX Manager appliances to the virtual machine group for the first availability zone.
Justification: Ensures that, by default, the NSX Manager appliances are powered on a host in the primary availability zone.
Implication: None.

NSX Global Manager Design Elements


You must perform the configuration tasks for the NSX Global Manager design requirements and recommendations manually.

Table 15-30. NSX Global Manager Design Requirements for VMware Cloud Foundation

VCF-NSX-GM-REQD-CFG-001
Design Requirement: Place the appliances of the NSX Global Manager cluster on the Management VM network in each VMware Cloud Foundation instance.
Justification:
- Simplifies IP addressing for management VMs.
- Provides simplified secure access to all management VMs in the same VLAN network.
Implication: None.

Table 15-31. NSX Global Manager Design Recommendations for VMware Cloud Foundation

VCF-NSX-GM-RCMD-CFG-001
Design Recommendation: Deploy three NSX Global Manager nodes for the workload domain to support NSX Federation across VMware Cloud Foundation instances.
Justification: Provides high availability for the NSX Global Manager cluster.
Implication: You must have sufficient resources in the default cluster of the management domain to run three NSX Global Manager nodes.

VCF-NSX-GM-RCMD-CFG-002
Design Recommendation: Deploy appropriately sized nodes in the NSX Global Manager cluster for the workload domain.
Justification: Ensures resource availability and usage efficiency per workload domain.
Implication: The recommended size for a management domain is Medium and for VI workload domains is Large.

VCF-NSX-GM-RCMD-CFG-003
Design Recommendation: Create a virtual IP (VIP) address for the NSX Global Manager cluster for the workload domain.
Justification: Provides high availability of the user interface and API of NSX Global Manager.
Implication:
- The VIP address feature provides high availability only. It does not load-balance requests across the cluster.
- When using the VIP address feature, all NSX Global Manager nodes must be deployed on the same Layer 2 network.

VCF-NSX-GM-RCMD-CFG-004
Design Recommendation: Apply VM-VM anti-affinity rules in vSphere DRS to the NSX Global Manager appliances.
Justification: Keeps the NSX Global Manager appliances running on different ESXi hosts for high availability.
Implication: You must allocate at least four physical hosts so that the three NSX Manager appliances continue running if an ESXi host failure occurs.

VCF-NSX-GM-RCMD-CFG-005
Design Recommendation: In vSphere HA, set the restart priority policy for each NSX Global Manager appliance to medium.
Justification:
- NSX Global Manager implements the management plane for global segments and firewalls. NSX Global Manager is not required for control plane and data plane connectivity.
- Setting the restart priority to medium reserves the high priority for services that impact the NSX control or data planes.
Implication:
- Management of NSX global components will be unavailable until the NSX Global Manager virtual machines restart.
- The NSX Global Manager cluster is deployed in the management domain, where the total number of virtual machines is limited and where it competes with other management components for restart priority.

VCF-NSX-GM-RCMD-CFG-006
Design Recommendation: Deploy an additional NSX Global Manager cluster in the second VMware Cloud Foundation instance.
Justification: Enables recoverability of NSX Global Manager in the second VMware Cloud Foundation instance if a failure in the first VMware Cloud Foundation instance occurs.
Implication: Requires additional NSX Global Manager nodes in the second VMware Cloud Foundation instance.

VCF-NSX-GM-RCMD-CFG-007
Design Recommendation: Set the NSX Global Manager cluster in the second VMware Cloud Foundation instance as standby for the workload domain.
Justification: Enables recoverability of NSX Global Manager in the second VMware Cloud Foundation instance if a failure in the first instance occurs.
Implication: Must be done manually.

VCF-NSX-GM-RCMD-SEC-001
Design Recommendation: Establish an operational practice to capture and update the thumbprint of the NSX Local Manager certificate on NSX Global Manager every time the certificate is updated by using SDDC Manager.
Justification: Ensures secured connectivity between the NSX Manager instances. Each certificate has its own unique thumbprint. NSX Global Manager stores the unique thumbprint of the NSX Local Manager instances for enhanced security. If an authentication failure between NSX Global Manager and NSX Local Manager occurs, objects that are created from NSX Global Manager will not be propagated on to the SDN.
Implication: The administrator must establish and follow an operational practice by using a runbook or automated process to ensure that the thumbprint is up-to-date. A helper sketch for capturing the thumbprint follows this table.
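Capturing the certificate thumbprint for VCF-NSX-GM-RCMD-SEC-001 can be scripted with the Python standard library. A minimal sketch, assuming a placeholder NSX Local Manager hostname; the output format (colon-separated SHA-256) should be checked against what your NSX Global Manager version expects:

# Sketch: compute the SHA-256 thumbprint of the NSX Local Manager certificate.
import hashlib
import ssl

pem = ssl.get_server_certificate(("nsx01.example.com", 443))
der = ssl.PEM_cert_to_DER_cert(pem)
digest = hashlib.sha256(der).hexdigest()
thumbprint = ":".join(digest[i:i + 2] for i in range(0, len(digest), 2)).upper()
print(thumbprint)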

Table 15-32. NSX Global Manager Design Recommendations for Stretched Clusters in VMware Cloud Foundation

VCF-NSX-GM-RCMD-CFG-008
Design Recommendation: Add the NSX Global Manager appliances to the virtual machine group for the first availability zone.
Justification: Ensures that, by default, the NSX Global Manager appliances are powered on a host in the primary availability zone.
Implication: Done automatically by VMware Cloud Foundation when stretching a cluster.


NSX Edge Design Elements


Table 15-33. NSX Edge Design Requirements for VMware Cloud Foundation

VCF-NSX-EDGE-REQD-CFG-001
Design Requirement: Connect the management interface of each NSX Edge node to the VM management network.
Justification: Provides connection from the NSX Manager cluster to the NSX Edge.
Implication: None.

VCF-NSX-EDGE-REQD-CFG-002
Design Requirement:
- Connect the fp-eth0 interface of each NSX Edge appliance to a VLAN trunk port group pinned to physical NIC 0 of the host, with the ability to fail over to physical NIC 1.
- Connect the fp-eth1 interface of each NSX Edge appliance to a VLAN trunk port group pinned to physical NIC 1 of the host, with the ability to fail over to physical NIC 0.
- Leave the fp-eth2 interface of each NSX Edge appliance unused.
Justification:
- Because VLAN trunk port groups pass traffic for all VLANs, VLAN tagging can occur in the NSX Edge node itself for easy post-deployment configuration.
- By using two separate VLAN trunk port groups, you can direct traffic from the edge node to a particular host network interface and top of rack switch as needed.
- In the event of failure of the top of rack switch, the VLAN trunk port group will fail over to the other physical NIC, ensuring that both fp-eth0 and fp-eth1 remain available.
Implication: None.

VCF-NSX-EDGE-REQD-CFG-003
Design Requirement: Use a dedicated VLAN for edge overlay that is different from the host overlay VLAN.
Justification: A dedicated edge overlay network provides support for edge mobility in support of advanced deployments such as multiple availability zones or multi-rack clusters.
Implication:
- You must have routing between the VLANs for edge overlay and host overlay.
- You must allocate another VLAN in the data center infrastructure for edge overlay.

VCF-NSX-EDGE-REQD-CFG-004
Design Requirement: Create one uplink profile for the edge nodes with three teaming policies:
- Default teaming policy of load balance source with both active uplinks uplink1 and uplink2.
- Named teaming policy of failover order with a single active uplink uplink1 without standby uplinks.
- Named teaming policy of failover order with a single active uplink uplink2 without standby uplinks.
Justification:
- An NSX Edge node that uses a single N-VDS can have only one uplink profile.
- For increased resiliency and performance, supports the concurrent use of both edge uplinks through both physical NICs on the ESXi hosts.
- The default teaming policy increases overlay performance and availability by using multiple TEPs and balancing of overlay traffic.
- By using named teaming policies, you can connect an edge uplink to a specific host uplink and from there to a specific top of rack switch in the data center.
- Enables ECMP because the NSX Edge nodes can uplink to the physical network over two different VLANs.
Implication: None.

A sketch of the uplink profile follows this table.
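The uplink profile in VCF-NSX-EDGE-REQD-CFG-004 can be expressed against the NSX Policy API. A hedged sketch; the profile ID, named-teaming names, transport VLAN, and credentials are placeholders, and the payload should be validated against the NSX API reference for your release:

# Sketch: edge uplink profile with one default and two named teaming policies.
import requests

profile = {
    "resource_type": "PolicyUplinkHostSwitchProfile",
    "teaming": {  # default policy: load balance source across both uplinks
        "policy": "LOADBALANCE_SRCID",
        "active_list": [
            {"uplink_name": "uplink1", "uplink_type": "PNIC"},
            {"uplink_name": "uplink2", "uplink_type": "PNIC"},
        ],
    },
    "named_teamings": [
        {"name": "uplink1-only", "policy": "FAILOVER_ORDER",
         "active_list": [{"uplink_name": "uplink1", "uplink_type": "PNIC"}]},
        {"name": "uplink2-only", "policy": "FAILOVER_ORDER",
         "active_list": [{"uplink_name": "uplink2", "uplink_type": "PNIC"}]},
    ],
    "transport_vlan": 2005,  # edge overlay VLAN (placeholder)
}
requests.put(
    "https://nsx01.example.com/policy/api/v1/infra/host-switch-profiles/edge-uplink-profile",
    json=profile, auth=("admin", "***"), verify=False,
).raise_for_status()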


Table 15-34. NSX Edge Design Requirements for NSX Federation in VMware Cloud Foundation

VCF-NSX-EDGE-REQD-CFG-005
Design Requirement: Allocate a separate VLAN for edge RTEP overlay that is different from the edge overlay VLAN.
Justification: The RTEP network must be on a VLAN that is different from the edge overlay VLAN. This is an NSX requirement that provides support for configuring different MTU sizes per network.
Implication: You must allocate another VLAN in the data center infrastructure.

Table 15-35. NSX Edge Design Recommendations for VMware Cloud Foundation

VCF-NSX-EDGE-RCMD-CFG-001
Design Recommendation: Use appropriately sized NSX Edge virtual appliances.
Justification: Ensures resource availability and usage efficiency per workload domain.
Implication: You must provide sufficient compute resources to support the chosen appliance size.

VCF-NSX-EDGE-RCMD-CFG-002
Design Recommendation: Deploy the NSX Edge virtual appliances to the default vSphere cluster of the workload domain, sharing the cluster between the workloads and the edge appliances.
Justification: Simplifies the configuration and minimizes the number of ESXi hosts required for initial deployment.
Implication: Workloads and NSX Edges share the same compute resources.

VCF-NSX-EDGE-RCMD-CFG-003
Design Recommendation: Deploy two NSX Edge appliances in an edge cluster in the default vSphere cluster of the workload domain.
Justification: Creates the minimum size NSX Edge cluster while satisfying the requirements for availability.
Implication: For a VI workload domain, additional edge appliances might be required to satisfy increased bandwidth requirements.

VCF-NSX-EDGE-RCMD-CFG-004
Design Recommendation: Apply VM-VM anti-affinity rules for vSphere DRS to the virtual machines of the NSX Edge cluster.
Justification: Keeps the NSX Edge nodes running on different ESXi hosts for high availability.
Implication: None.

VCF-NSX-EDGE-RCMD-CFG-005
Design Recommendation: In vSphere HA, set the restart priority policy for each NSX Edge appliance to high.
Justification:
- The NSX Edge nodes are part of the north-south data path for overlay segments. vSphere HA restarts the NSX Edge appliances first to minimize the time an edge VM is offline.
- Setting the restart priority to high reserves the highest priority for future needs.
Implication: If the restart priority for another VM in the cluster is set to highest, the connectivity delays for edge appliances will be longer.

VCF-NSX-EDGE-RCMD-CFG-006
Design Recommendation: Create an NSX Edge cluster with the default Bidirectional Forwarding Detection (BFD) configuration between the NSX Edge nodes in the cluster.
Justification:
- Satisfies the availability requirements by default.
- Edge nodes must remain available to create services such as NAT, routing to physical networks, and load balancing.
Implication: None.

VCF-NSX-EDGE-RCMD-CFG-007
Design Recommendation: Use a single N-VDS in the NSX Edge nodes.
Justification:
- Simplifies deployment of the edge nodes.
- The same N-VDS switch design can be used regardless of edge form factor.
- Supports multiple TEP interfaces in the edge node.
- vSphere Distributed Switch is not supported in the edge node.
Implication: None.

Table 15-36. NSX Edge Design Recommendations for Stretched Clusters in VMware Cloud Foundation

VCF-NSX-EDGE-RCMD-CFG-008
Design Recommendation: Add the NSX Edge appliances to the virtual machine group for the first availability zone.
Justification: Ensures that, by default, the NSX Edge appliances are powered on a host in the primary availability zone.
Implication: None.


BGP Routing Design Elements for VMware Cloud Foundation


Table 15-37. BGP Routing Design Requirements for VMware Cloud Foundation

VCF-NSX-BGP-REQD-CFG-001
Design Requirement: To enable ECMP between the Tier-0 gateway and the Layer 3 devices (ToR switches or upstream devices), create two VLANs. The ToR switches or upstream Layer 3 devices have an SVI on one of the two VLANs, and each edge node in the cluster has an interface on each VLAN.
Justification: Supports multiple equal-cost routes on the Tier-0 gateway and provides more resiliency and better bandwidth use in the network.
Implication: Additional VLANs are required.

VCF-NSX-BGP-REQD-CFG-002
Design Requirement: Assign a named teaming policy to the VLAN segments that connect to the Layer 3 device pair.
Justification: Pins the VLAN traffic on each segment to its target edge node interface. From there, the traffic is directed to the host physical NIC that is connected to the target top of rack switch.
Implication: None.

VCF-NSX-BGP-REQD-CFG-003
Design Requirement: Create a VLAN transport zone for edge uplink traffic.
Justification: Enables the configuration of VLAN segments on the N-VDS in the edge nodes.
Implication: Additional VLAN transport zones might be required if the edge nodes are not connected to the same top of rack switch pair.

VCF-NSX-BGP-REQD-CFG-004
Design Requirement: Deploy a Tier-1 gateway and connect it to the Tier-0 gateway.
Justification: Creates a two-tier routing architecture. Abstracts the NSX logical components which interact with the physical data center from the logical components which provide SDN services.
Implication: A Tier-1 gateway can only be connected to a single Tier-0 gateway. In cases where multiple Tier-0 gateways are required, you must create multiple Tier-1 gateways.

VCF-NSX-BGP-REQD-CFG-005
Design Requirement: Deploy a Tier-1 gateway to the NSX Edge cluster.
Justification: Enables stateful services, such as load balancers and NAT, for SDDC management components. Because a Tier-1 gateway always works in active-standby mode, the gateway supports stateful services.
Implication: None.


Table 15-38. BGP Routing Design Requirements for Stretched Clusters in VMware Cloud Foundation

VCF-NSX-BGP-REQD-CFG-006
Design Requirement: Extend the uplink VLANs to the top of rack switches so that the VLANs are stretched between both availability zones.
Justification: Because the NSX Edge nodes will fail over between the availability zones, ensures uplink connectivity to the top of rack switches in both availability zones regardless of the zone the NSX Edge nodes are presently in.
Implication: You must configure a stretched Layer 2 network between the availability zones by using physical network infrastructure.

VCF-NSX-BGP-REQD-CFG-007
Design Requirement: Provide this SVI configuration on the top of rack switches:
- In the second availability zone, configure the top of rack switches or upstream Layer 3 devices with an SVI on each of the two uplink VLANs.
- Make the top of rack switch SVIs in both availability zones part of a common stretched Layer 2 network between the availability zones.
Justification: Enables the communication of the NSX Edge nodes to the top of rack switches in both availability zones over the same uplink VLANs.
Implication: You must configure a stretched Layer 2 network between the availability zones by using the physical network infrastructure.

VCF-NSX-BGP-REQD-CFG-008
Design Requirement: Provide this VLAN configuration:
- Use two VLANs to enable ECMP between the Tier-0 gateway and the Layer 3 devices (top of rack switches or leaf switches).
- The ToR switches or upstream Layer 3 devices have an SVI on one of the two VLANs, and each NSX Edge node has an interface on each VLAN.
Justification: Supports multiple equal-cost routes on the Tier-0 gateway, and provides more resiliency and better bandwidth use in the network.
Implication:
- Extra VLANs are required.
- Requires stretching uplink VLANs between availability zones.

VCF-NSX-BGP-REQD-CFG-009
Design Requirement: Create an IP prefix list that permits access to route advertisement by any network instead of using the default IP prefix list.
Justification: Used in a route map to prepend a path to one or more autonomous systems (AS-path prepend) for BGP neighbors in the second availability zone.
Implication: You must manually create an IP prefix list that is identical to the default one.

VCF-NSX-BGP-REQD-CFG-010
Design Requirement: Create a route map-out that contains the custom IP prefix list and an AS-path prepend value set to the Tier-0 local AS added twice.
Justification:
- Used for configuring neighbor relationships with the Layer 3 devices in the second availability zone.
- Ensures that all ingress traffic passes through the first availability zone.
Implication: You must manually create the route map. The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or in the availability zone occurs.

VCF-NSX-BGP-REQD-CFG-011
Design Requirement: Create an IP prefix list that permits access to route advertisement by network 0.0.0.0/0 instead of using the default IP prefix list.
Justification: Used in a route map to configure the local preference on the learned default route for BGP neighbors in the second availability zone.
Implication: You must manually create an IP prefix list that is identical to the default one.

VCF-NSX-BGP-REQD-CFG-012
Design Requirement: Apply a route map-in that contains the IP prefix list for the default route 0.0.0.0/0 and assign a lower local preference, for example, 80, to the learned default route and a lower local preference, for example, 90, to any other learned routes.
Justification:
- Used for configuring neighbor relationships with the Layer 3 devices in the second availability zone.
- Ensures that all egress traffic passes through the first availability zone.
Implication: You must manually create the route map. The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or in the availability zone occurs.

VCF-NSX-BGP-REQD-CFG-013
Design Requirement: Configure the neighbors of the second availability zone to use the route maps as In and Out filters respectively.
Justification: Makes the path in and out of the second availability zone less preferred because the AS path is longer and the local preference is lower. As a result, all traffic passes through the first zone.
Implication: The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or in the availability zone occurs.

A sketch of the prefix list and route map configuration follows this table.
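The prefix lists and route maps in VCF-NSX-BGP-REQD-CFG-009 through CFG-012 can be modeled against the NSX Policy API. A hedged sketch; the gateway ID, local AS (65100), prefix-list IDs, and local-preference values are placeholders, and the payload shapes should be validated against the NSX API reference before use:

# Sketch: custom prefix list plus in/out route maps for the AZ2 BGP neighbors.
import requests

NSX = "https://nsx01.example.com/policy/api/v1"
AUTH = ("admin", "***")
T0 = f"{NSX}/infra/tier-0s/t0-gw01"

# Prefix list permitting any network (verify how your NSX version expresses "any").
requests.patch(f"{T0}/prefix-lists/any-network",
               json={"prefixes": [{"action": "PERMIT"}]},
               auth=AUTH, verify=False)

# Route map-out: prepend the Tier-0 local AS twice (CFG-010).
requests.patch(f"{T0}/route-maps/az2-out",
               json={"entries": [{
                   "action": "PERMIT",
                   "prefix_list_matches": ["/infra/tier-0s/t0-gw01/prefix-lists/any-network"],
                   "set": {"as_path_prepend": "65100 65100"}}]},
               auth=AUTH, verify=False)

# Route map-in: lower local preference on the learned default route (CFG-012).
requests.patch(f"{T0}/route-maps/az2-in",
               json={"entries": [{
                   "action": "PERMIT",
                   "prefix_list_matches": ["/infra/tier-0s/t0-gw01/prefix-lists/default-route"],
                   "set": {"local_preference": 80}}]},
               auth=AUTH, verify=False)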


Table 15-39. BGP Routing Design Requirements for NSX Federation in VMware Cloud Foundation

VCF-NSX-BGP-REQD-CFG-014
Design Requirement: Extend the Tier-0 gateway to the second VMware Cloud Foundation instance.
Justification:
- Supports ECMP north-south routing on all nodes in the NSX Edge cluster.
- Enables support for cross-instance Tier-1 gateways and cross-instance network segments.
Implication: The Tier-0 gateway deployed in the second instance is removed.

VCF-NSX-BGP-REQD-CFG-015
Design Requirement: Set the Tier-0 gateway as primary for all VMware Cloud Foundation instances.
Justification:
- In NSX Federation, a Tier-0 gateway allows egress traffic from connected Tier-1 gateways only in its primary locations.
- Local ingress and egress traffic is controlled independently at the Tier-1 level. No segments are provisioned directly to the Tier-0 gateway.
- A mixture of network spans (local to a VMware Cloud Foundation instance or spanning multiple instances) is enabled without requiring additional Tier-0 gateways and hence edge nodes.
- If a failure in a VMware Cloud Foundation instance occurs, the local-instance networking in the other instance remains available without manual intervention.
Implication: None.

VCF-NSX-BGP-REQD-CFG-016
Design Requirement: From the global Tier-0 gateway, establish BGP neighbor peering to the ToR switches connected to the second VMware Cloud Foundation instance.
Justification:
- Enables the learning and advertising of routes in the second VMware Cloud Foundation instance.
- Facilitates a potential automated failover of networks from the first to the second VMware Cloud Foundation instance.
Implication: None.

VCF-NSX-BGP-REQD-CFG-017
Design Requirement: Use a stretched Tier-1 gateway and connect it to the Tier-0 gateway for cross-instance networking.
Justification:
- Enables network span between the VMware Cloud Foundation instances because NSX network segments follow the span of the gateway they are attached to.
- Creates a two-tier routing architecture.
Implication: None.

VCF-NSX-BGP-REQD-CFG-018
Design Requirement: Assign the NSX Edge cluster in each VMware Cloud Foundation instance to the stretched Tier-1 gateway. Set the first VMware Cloud Foundation instance as primary and the second instance as secondary.
Justification:
- Enables cross-instance network span between the first and second VMware Cloud Foundation instances.
- Enables deterministic ingress and egress traffic for the cross-instance network.
- If a VMware Cloud Foundation instance failure occurs, enables deterministic failover of the Tier-1 traffic flow.
- During the recovery of the inaccessible VMware Cloud Foundation instance, enables deterministic failback of the Tier-1 traffic flow, preventing unintended asymmetrical routing.
- Eliminates the need to use BGP attributes in the first and second VMware Cloud Foundation instances to influence location preference and failover.
Implication: You must manually fail over and fail back the cross-instance network from the standby NSX Global Manager.

VCF-NSX-BGP-REQD-CFG-019
Design Requirement: Assign the NSX Edge cluster in each VMware Cloud Foundation instance to the local Tier-1 gateway for that VMware Cloud Foundation instance.
Justification:
- Enables instance-specific networks to be isolated to their specific instances.
- Enables deterministic flow of ingress and egress traffic for the instance-specific networks.
Implication: You can use the service router that is created for the Tier-1 gateway for networking services. However, such configuration is not required for network connectivity.

VCF-NSX-BGP-REQD-CFG-020
Design Requirement: Set each local Tier-1 gateway only as primary in that instance. Avoid setting the gateway as secondary in the other instances.
Justification: Prevents the need to use BGP attributes in primary and secondary instances to influence the instance ingress-egress preference.
Implication: None.


Table 15-40. BGP Routing Design Recommendations for VMware Cloud Foundation

VCF-NSX-BGP-RCMD-CFG-001
Design Recommendation: Deploy an active-active Tier-0 gateway.
Justification: Supports ECMP north-south routing on all edge nodes in the NSX Edge cluster.
Implication: Active-active Tier-0 gateways cannot provide stateful services such as NAT.

VCF-NSX-BGP-RCMD-CFG-002
Design Recommendation: Configure the BGP Keep Alive Timer to 4 and the Hold Down Timer to 12 or lower between the top of rack switches and the Tier-0 gateway.
Justification: Provides a balance between failure detection between the top of rack switches and the Tier-0 gateway, and overburdening the top of rack switches with keep-alive traffic.
Implication: By using longer timers to detect if a router is not responding, the data about such a router remains in the routing table longer. As a result, the active router continues to send traffic to a router that is down. These timers must be aligned with the data center fabric design of your organization.

VCF-NSX-BGP-RCMD-CFG-003
Design Recommendation: Do not enable Graceful Restart between BGP neighbors.
Justification: Avoids loss of traffic. On the Tier-0 gateway, BGP peers from all the gateways are always active. On a failover, the Graceful Restart capability increases the time a remote neighbor takes to select an alternate Tier-0 gateway. As a result, BFD-based convergence is delayed.
Implication: None.

VCF-NSX-BGP-RCMD-CFG-004
Design Recommendation: Enable helper mode for Graceful Restart mode between BGP neighbors.
Justification: Avoids loss of traffic. During a router restart, helper mode works with the graceful restart capability of upstream routers to maintain the forwarding table, which in turn will forward packets to a down neighbor even after the BGP timers have expired, avoiding loss of traffic.
Implication: None.

VCF-NSX-BGP-RCMD-CFG-005
Design Recommendation: Enable Inter-SR iBGP routing.
Justification: In the event that an edge node has all of its northbound eBGP sessions down, north-south traffic will continue to flow by routing traffic to a different edge node.
Implication: None.

VCF-NSX-BGP-RCMD-CFG-006
Design Recommendation: Deploy a Tier-1 gateway in non-preemptive failover mode.
Justification: Ensures that after a failed NSX Edge transport node is back online, it does not take over the gateway services, thus preventing a short service outage.
Implication: None.

VCF-NSX-BGP-RCMD-CFG-007
Design Recommendation: Enable standby relocation of the Tier-1 gateway.
Justification: Ensures that if an edge failure occurs, a standby Tier-1 gateway is created on another edge node.
Implication: None.

A sketch of the BGP timer and graceful restart configuration follows this table.
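The timers, graceful restart helper mode, and Inter-SR iBGP recommendations (VCF-NSX-BGP-RCMD-CFG-002, CFG-004, and CFG-005) map to fields of the Tier-0 BGP configuration in the NSX Policy API. A hedged sketch with placeholder gateway, locale-service, and neighbor IDs:

# Sketch: BGP timers, graceful-restart helper mode, and Inter-SR iBGP on a Tier-0.
import requests

NSX = "https://nsx01.example.com/policy/api/v1"
AUTH = ("admin", "***")
BGP = f"{NSX}/infra/tier-0s/t0-gw01/locale-services/default/bgp"

requests.patch(BGP, json={
    "inter_sr_ibgp": True,
    "graceful_restart_config": {"mode": "HELPER_ONLY"},
}, auth=AUTH, verify=False)

requests.patch(f"{BGP}/neighbors/tor-a", json={
    "neighbor_address": "172.16.18.10",  # placeholder ToR SVI address
    "remote_as_num": "65011",            # placeholder ToR ASN
    "keep_alive_time": 4,
    "hold_down_time": 12,
}, auth=AUTH, verify=False)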

Table 15-41. BGP Routing Design Recommendations for NSX Federation in VMware Cloud Foundation

VCF-NSX-BGP-RCMD-CFG-008
Design Recommendation: Use Tier-1 gateways to control the span of networks and ingress and egress traffic in the VMware Cloud Foundation instances.
Justification: Enables a mixture of network spans (isolated to a VMware Cloud Foundation instance or spanning multiple instances) without requiring additional Tier-0 gateways and hence edge nodes.
Implication: To control location span, a Tier-1 gateway must be assigned to an edge cluster and hence has the Tier-1 SR component. East-west traffic between Tier-1 gateways with SRs needs to physically traverse an edge node.

VCF-NSX-BGP-RCMD-CFG-009
Design Recommendation: Allocate a Tier-1 gateway in each instance for instance-specific networks and connect it to the stretched Tier-0 gateway.
Justification:
- Creates a two-tier routing architecture.
- Enables local-instance networks that do not span between the VMware Cloud Foundation instances.
- Guarantees that local-instance networks remain available if a failure occurs in another VMware Cloud Foundation instance.
Implication: None.


Overlay Design Elements for VMware Cloud Foundation


Table 15-42. Overlay Design Requirements for VMware Cloud Foundation

VCF-NSX-OVERLAY-REQD-CFG-001
Design Requirement: Configure all ESXi hosts in the workload domain as transport nodes in NSX.
Justification: Enables distributed routing, logical segments, and distributed firewall.
Implication: None.

VCF-NSX-OVERLAY-REQD-CFG-002
Design Requirement: Configure each ESXi host as a transport node using transport node profiles.
Justification:
- Enables the participation of ESXi hosts and the virtual machines running on them in NSX overlay and VLAN networks.
- Transport node profiles can only be applied at the cluster level.
Implication: None.

VCF-NSX-OVERLAY-REQD-CFG-003
Design Requirement: To provide virtualized network capabilities to workloads, use overlay networks with NSX Edge nodes and distributed routing.
Justification:
- Creates isolated, multi-tenant broadcast domains across data center fabrics to deploy elastic, logical networks that span physical network boundaries.
- Enables advanced deployment topologies by introducing Layer 2 abstraction from the data center networks.
Implication: Requires configuring transport networks with an MTU size of at least 1,600 bytes.

VCF-NSX-OVERLAY-REQD-CFG-004
Design Requirement: Create a single overlay transport zone in the NSX instance for all overlay traffic across the host and NSX Edge transport nodes of the workload domain.
Justification:
- Ensures that overlay segments are connected to an NSX Edge node for services and north-south routing.
- Ensures that all segments are available to all ESXi hosts and NSX Edge nodes configured as transport nodes.
Implication: All clusters in all workload domains that share the same NSX Manager share the same transport zone.

VCF-NSX-OVERLAY-REQD-CFG-005
Design Requirement: Create an uplink profile with a load balance source teaming policy with two active uplinks for ESXi hosts.
Justification: For increased resiliency and performance, supports the concurrent use of both physical NICs on the ESXi hosts that are configured as transport nodes.
Implication: None.


Table 15-43. Overlay Design Recommendations for VMware Cloud Foundation

VCF-NSX-OVERLAY-RCMD-CFG-001
Design Recommendation: Use static IP pools to assign IP addresses to the host TEP interfaces.
Justification:
- Removes the need for an external DHCP server for the host overlay VLANs.
- You can use NSX Manager to verify static IP pool configurations.
Implication: None.

VCF-NSX-OVERLAY-RCMD-CFG-002
Design Recommendation: Use hierarchical two-tier replication on all overlay segments.
Justification: Hierarchical two-tier replication is more efficient because it reduces the number of ESXi hosts the source ESXi host must replicate traffic to if hosts have different TEP subnets. This is typically the case with more than one cluster and will improve performance in that scenario.
Implication: None.

A sketch of the static IP pool configuration follows this table.
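A static TEP pool (VCF-NSX-OVERLAY-RCMD-CFG-001) can be created through the NSX Policy API. A hedged sketch; the pool name, CIDR, gateway, and allocation range are placeholders:

# Sketch: a static IP pool and subnet for host TEP addresses.
import requests

NSX = "https://nsx01.example.com/policy/api/v1"
AUTH = ("admin", "***")

requests.patch(f"{NSX}/infra/ip-pools/host-tep-pool",
               json={"display_name": "host-tep-pool"},
               auth=AUTH, verify=False)
requests.patch(f"{NSX}/infra/ip-pools/host-tep-pool/ip-subnets/az1",
               json={"resource_type": "IpAddressPoolStaticSubnet",
                     "cidr": "172.16.14.0/24",
                     "gateway_ip": "172.16.14.1",
                     "allocation_ranges": [{"start": "172.16.14.10",
                                            "end": "172.16.14.200"}]},
               auth=AUTH, verify=False)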

Table 15-44. Overlay Design Recommendations for Stretched Clusters in VMware Cloud Foundation

VCF-NSX-OVERLAY-RCMD-CFG-003
Design Recommendation: Configure an NSX sub-transport node profile.
Justification:
- You can use static IP pools for the host TEPs in each availability zone.
- The NSX transport node profile can remain attached when using two separate VLANs for host TEPs at each availability zone, as required for clusters that are based on vSphere Lifecycle Manager images.
- Using an external DHCP server for the host overlay VLANs in both availability zones is not required.
Implication: Changes to the host transport node configuration are done at the vSphere cluster level.


Application Virtual Network Design Elements for VMware Cloud Foundation

Table 15-45. Application Virtual Network Design Requirements for VMware Cloud Foundation

VCF-NSX-AVN-REQD-CFG-001
Design Requirement: Create one cross-instance NSX segment for the components of a VMware Aria Suite application or another solution that requires mobility between VMware Cloud Foundation instances.
Justification: Prepares the environment for the deployment of solutions on top of VMware Cloud Foundation, such as VMware Aria Suite, without a complex physical network configuration. The components of the VMware Aria Suite application must be easily portable between VMware Cloud Foundation instances without requiring reconfiguration.
Implication: Each NSX segment requires a unique IP address space.

VCF-NSX-AVN-REQD-CFG-002
Design Requirement: Create one or more local-instance NSX segments for the components of a VMware Aria Suite application or another solution that are assigned to a specific VMware Cloud Foundation instance.
Justification: Prepares the environment for the deployment of solutions on top of VMware Cloud Foundation, such as VMware Aria Suite, without a complex physical network configuration.
Implication: Each NSX segment requires a unique IP address space.


Table 15-46. Application Virtual Network Design Requirements for NSX Federation in VMware Cloud Foundation

VCF-NSX-AVN-REQD-CFG-003
Design Requirement: Extend the cross-instance NSX segment to the second VMware Cloud Foundation instance.
Justification: Enables workload mobility without a complex physical network configuration. The components of a VMware Aria Suite application must be easily portable between VMware Cloud Foundation instances without requiring reconfiguration.
Implication: Each NSX segment requires a unique IP address space.

VCF-NSX-AVN-REQD-CFG-004
Design Requirement: In each VMware Cloud Foundation instance, create additional local-instance NSX segments.
Justification: Enables workload mobility within a VMware Cloud Foundation instance without complex physical network configuration. Each VMware Cloud Foundation instance should have network segments to support workloads which are isolated to that VMware Cloud Foundation instance.
Implication: Each NSX segment requires a unique IP address space.

VCF-NSX-AVN-REQD-CFG-005
Design Requirement: In each VMware Cloud Foundation instance, connect or migrate the local-instance NSX segments to the corresponding local-instance Tier-1 gateway.
Justification: Configures local-instance NSX segments at required sites only.
Implication: Requires an individual Tier-1 gateway for local-instance segments.

Table 15-47. Application Virtual Network Design Recommendations for VMware Cloud Foundation

VCF-NSX-AVN-RCMD-CFG-001
Design Recommendation: Use overlay-backed NSX segments.
Justification:
- Supports expansion to deployment topologies for multiple VMware Cloud Foundation instances.
- Limits the number of VLANs required for the data center fabric.
Implication: Using overlay-backed NSX segments requires routing, eBGP recommended, between the data center fabric and edge nodes.

A sketch of an overlay-backed segment follows this table.
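An overlay-backed segment attached to a Tier-1 gateway (VCF-NSX-AVN-REQD-CFG-001 with VCF-NSX-AVN-RCMD-CFG-001) can be created through the NSX Policy API. A hedged sketch; the segment name, gateway path, transport zone path, and gateway CIDR are placeholders:

# Sketch: an overlay-backed NSX segment connected to a Tier-1 gateway.
import requests

NSX = "https://nsx01.example.com/policy/api/v1"
requests.patch(
    f"{NSX}/infra/segments/xinst-segment",
    json={"display_name": "xinst-segment",
          "connectivity_path": "/infra/tier-1s/t1-xinst",
          # Placeholder policy path of the overlay transport zone.
          "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/overlay-tz",
          "subnets": [{"gateway_address": "192.168.11.1/24"}]},
    auth=("admin", "***"), verify=False,
).raise_for_status()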


Load Balancing Design Elements for VMware Cloud Foundation


Table 15-48. Load Balancing Design Requirements for VMware Cloud Foundation

VCF-NSX-LB-REQD-CFG-001
Design Requirement: Deploy a standalone Tier-1 gateway to support advanced stateful services, such as load balancing, for other management components.
Justification: Provides independence between north-south Tier-1 gateways to support advanced deployment scenarios.
Implication: You must add a separate Tier-1 gateway.

VCF-NSX-LB-REQD-CFG-002
Design Requirement: When creating load balancing services for Application Virtual Networks, connect the standalone Tier-1 gateway to the cross-instance NSX segments.
Justification: Provides load balancing to applications connected to the cross-instance network.
Implication: You must connect the gateway to each network that requires load balancing.

VCF-NSX-LB-REQD-CFG-003
Design Requirement: Configure a default static route on the standalone Tier-1 gateway with the Tier-1 gateway for the segment as the next hop to provide connectivity to the load balancer.
Justification: Because the Tier-1 gateway is standalone, it does not auto-configure its routes.
Implication: None.
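
The following minimal sketch shows how a default static route in the style of VCF-NSX-LB-REQD-CFG-003 might be configured through the NSX Policy API, assuming the standalone Tier-1 gateway already exists. The gateway ID, route ID, and next-hop address are hypothetical placeholders.

import requests

NSX_MANAGER = "https://nsx01.rainpole.io"  # hypothetical NSX Manager FQDN
AUTH = ("admin", "nsx_admin_password")

# Default route pointing at the Tier-1 gateway IP of the connected segment,
# because a standalone Tier-1 gateway does not auto-configure its routes.
route = {
    "network": "0.0.0.0/0",
    "next_hops": [{"ip_address": "192.168.11.1", "admin_distance": 1}],
}

resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/tier-1s/lb-t1-gw01/static-routes/default-route",
    json=route,
    auth=AUTH,
    verify=False,  # lab convenience only; verify certificates in production
)
resp.raise_for_status()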


Table 15-49. Load Balancing Design Requirements for NSX Federation in VMware Cloud Foundation

VCF-NSX-LB-REQD-CFG-004
Design Requirement: Deploy a standalone Tier-1 gateway in the second VMware Cloud Foundation instance.
Justification: Provides a cold-standby non-global service router instance for the second VMware Cloud Foundation instance to support services on the cross-instance network which require advanced services not currently supported as NSX global objects.
Implication:
- You must add a separate Tier-1 gateway.
- You must manually configure any services and synchronize them between the non-global service router instances in the first and second VMware Cloud Foundation instances.
- To avoid a network conflict between the two VMware Cloud Foundation instances, make sure that the primary and standby networking services are not both active at the same time.

VCF-NSX-LB-REQD-CFG-005
Design Requirement: Connect the standalone Tier-1 gateway in the second VMware Cloud Foundation instance to the cross-instance NSX segment.
Justification: Provides load balancing to applications connected to the cross-instance network in the second VMware Cloud Foundation instance.
Implication: You must connect the gateway to each network that requires load balancing.

VCF-NSX-LB-REQD-CFG-006
Design Requirement: Configure a default static route on the standalone Tier-1 gateway in the second VMware Cloud Foundation instance with the Tier-1 gateway for the segment it connects with as the next hop to provide connectivity to the load balancers.
Justification: Because the Tier-1 gateway is standalone, it does not auto-configure its routes.
Implication: None.

VCF-NSX-LB-REQD-CFG-007
Design Requirement: Establish a process to ensure that any changes made to the load balancer instance in the first VMware Cloud Foundation instance are manually applied to the disconnected load balancer in the second instance.
Justification: Keeps the network service in the failover load balancer instance ready for activation if a failure in the first VMware Cloud Foundation instance occurs. Because network services are not supported as global objects, you must configure them manually in each VMware Cloud Foundation instance. The load balancer service in one instance must be connected and active, while the service in the other instance must be disconnected and inactive.
Implication:
- Because of incorrect configuration between the VMware Cloud Foundation instances, the load balancer service in the second instance might come online with an invalid or incomplete configuration.
- If both VMware Cloud Foundation instances are online and active at the same time, a conflict between services could occur, resulting in a potential outage.
- The administrator must establish and follow an operational practice by using a runbook or automated process to ensure that configuration changes are reproduced in each VMware Cloud Foundation instance.

SDDC Manager Design Elements for VMware Cloud Foundation

Use this list of requirements and recommendations for reference related to SDDC Manager in an
environment with a single or multiple VMware Cloud Foundation instances.

For full design details, see Chapter 9 SDDC Manager Design for VMware Cloud Foundation.


Table 15-50. SDDC Manager Design Requirements for VMware Cloud Foundation

VCF-SDDCMGR-REQD-CFG-001
Design Requirement: Deploy an SDDC Manager system in the first availability zone of the management domain.
Justification: SDDC Manager is required to perform VMware Cloud Foundation capabilities, such as provisioning VI workload domains, deploying solutions, patching, upgrading, and others.
Implication: None.

VCF-SDDCMGR-REQD-CFG-002
Design Requirement: Deploy SDDC Manager with its default configuration.
Justification: The configuration of SDDC Manager is not customizable and should not be changed from its defaults.
Implication: None.

VCF-SDDCMGR-REQD-CFG-003
Design Requirement: Place the SDDC Manager appliance on the VM management network.
Justification:
- Simplifies IP addressing for management VMs by using the same VLAN and subnet.
- Provides simplified secure access to management VMs in the same VLAN network.
Implication: None.

Table 15-51. SDDC Manager Design Recommendations for VMware Cloud Foundation

VCF-SDDCMGR-RCMD-CFG-001
Design Recommendation: Connect SDDC Manager to the Internet for downloading software bundles.
Justification: SDDC Manager must be able to download install and upgrade software bundles from a repository for the deployment and upgrade of VI workload domains and solutions.
Implication: The rules of your organization might not permit direct access to the Internet. In this case, you must download software bundles for SDDC Manager manually.

VCF-SDDCMGR-RCMD-CFG-002
Design Recommendation: Configure a network proxy to connect SDDC Manager to the Internet.
Justification: Protects SDDC Manager against external attacks from the Internet.
Implication: The proxy must not use authentication because SDDC Manager does not support a proxy with authentication.

VCF-SDDCMGR-RCMD-CFG-003
Design Recommendation: Configure SDDC Manager with a VMware Customer Connect account with VMware Cloud Foundation entitlement to check for and download software bundles.
Justification: Software bundles for VMware Cloud Foundation are stored in a repository that is secured with access controls. Sites without an Internet connection can use the local upload option instead.
Implication: Requires the use of a VMware Customer Connect user account with access to VMware Cloud Foundation licensing.

VCF-SDDCMGR-RCMD-CFG-004
Design Recommendation: Configure SDDC Manager with an external certificate authority that is responsible for providing signed certificates.
Justification: Provides increased security by implementing signed certificate generation and replacement across the management components.
Implication: An external certificate authority, such as Microsoft CA, must be locally available.
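
As a reference for scripted checks of the bundle repository, the following minimal sketch authenticates to the SDDC Manager API and lists the software bundles it knows about, for example to confirm that downloads through the network proxy are succeeding. The FQDN and credentials are placeholders, and the token and bundle endpoints shown should be validated against the SDDC Manager API reference for your version.

import requests

SDDC_MANAGER = "https://sddc-manager.rainpole.io"  # hypothetical FQDN

# Obtain an API access token with an SSO account.
token = requests.post(
    f"{SDDC_MANAGER}/v1/tokens",
    json={"username": "[email protected]", "password": "vault-me"},
    verify=False,  # lab convenience only; verify certificates in production
).json()["accessToken"]

# List software bundles known to SDDC Manager and print their download state.
bundles = requests.get(
    f"{SDDC_MANAGER}/v1/bundles",
    headers={"Authorization": f"Bearer {token}"},
    verify=False,
).json()

for bundle in bundles.get("elements", []):
    print(bundle.get("id"), bundle.get("type"), bundle.get("downloadStatus"))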

VMware Aria Suite Lifecycle Design Elements for VMware Cloud Foundation

Use this list of requirements and recommendations for reference related to VMware Aria Suite
Lifecycle in an environment with a single or multiple VMware Cloud Foundation instances.

For full design details, see Chapter 10 VMware Aria Suite Lifecycle Design for VMware Cloud
Foundation.


Table 15-52. VMware Aria Suite Lifecycle Design Requirements for VMware Cloud Foundation

VCF-VASL-REQD-CFG-001
Design Requirement: Deploy a VMware Aria Suite Lifecycle instance in the management domain of each VMware Cloud Foundation instance to provide life cycle management for VMware Aria Suite and Workspace ONE Access.
Justification: Provides life cycle management operations for VMware Aria Suite applications and Workspace ONE Access.
Implication: You must ensure that the required resources are available.

VCF-VASL-REQD-CFG-002
Design Requirement: Deploy VMware Aria Suite Lifecycle by using SDDC Manager.
Justification:
- Deploys VMware Aria Suite Lifecycle in VMware Cloud Foundation mode, which enables the integration with the SDDC Manager inventory for product deployment and life cycle management of VMware Aria Suite components.
- Automatically configures the standalone Tier-1 gateway required for load balancing the clustered Workspace ONE Access and VMware Aria Suite components.
Implication: None.

VCF-VASL-REQD-CFG-003
Design Requirement: Allocate an extra 100 GB of storage to the VMware Aria Suite Lifecycle appliance for VMware Aria Suite product binaries.
Justification:
- Provides support for VMware Aria Suite product binaries (install, upgrade, and patch) and content management.
- SDDC Manager automates the creation of the storage.
Implication: None.

VCF-VASL-REQD-CFG-004
Design Requirement: Place the VMware Aria Suite Lifecycle appliance on an overlay-backed (recommended) or VLAN-backed NSX network segment.
Justification: Provides a consistent deployment model for management applications.
Implication: You must use an implementation in NSX to support this networking configuration.

VCF-VASL-REQD-CFG-005
Design Requirement: Import VMware Aria Suite product licenses to the Locker repository for product life cycle operations.
Justification:
- You can review the validity, details, and deployment usage for the license across the VMware Aria Suite products.
- You can reference and use licenses during product life cycle operations, such as deployment and license replacement.
Implication: When using the API, you must specify the Locker ID for the license to be used in the JSON payload.

VCF-VASL-REQD-ENV-001
Design Requirement: Configure data center objects in VMware Aria Suite Lifecycle for local and cross-instance VMware Aria Suite deployments and assign the management domain vCenter Server instance to each data center.
Justification: You can deploy and manage the integrated VMware Aria Suite components across the SDDC as a group.
Implication: You must manage a separate data center object for the products that are specific to each instance.

VCF-VASL-REQD-ENV-002
Design Requirement: If deploying VMware Aria Operations for Logs, create a local-instance environment in VMware Aria Suite Lifecycle.
Justification: Supports the deployment of an instance of VMware Aria Operations for Logs.
Implication: None.

VCF-VASL-REQD-ENV-003
Design Requirement: If deploying VMware Aria Operations or VMware Aria Automation, create a cross-instance environment in VMware Aria Suite Lifecycle.
Justification:
- Supports deployment and management of the integrated VMware Aria Suite products across VMware Cloud Foundation instances as a group.
- Enables the deployment of instance-specific components, such as VMware Aria Operations remote collectors. In VMware Aria Suite Lifecycle, you can deploy and manage VMware Aria Operations remote collector objects only in an environment that contains the associated cross-instance components.
Implication: You can manage instance-specific components, such as remote collectors, only in an environment that is cross-instance.

VCF-VASL-REQD-SEC-001
Design Requirement: Use the custom vCenter Server role for VMware Aria Suite Lifecycle that has the minimum privileges required to support the deployment and upgrade of VMware Aria Suite products.
Justification: VMware Aria Suite Lifecycle accesses vSphere with the minimum set of permissions that are required to support the deployment and upgrade of VMware Aria Suite products. SDDC Manager automates the creation of the custom role.
Implication: You must maintain the permissions required by the custom role.

VCF-VASL-REQD-SEC-002
Design Requirement: Use the service account in vCenter Server for application-to-application communication from VMware Aria Suite Lifecycle to vSphere. Assign global permissions using the custom role.
Justification:
- Provides the following access control features:
  - VMware Aria Suite Lifecycle accesses vSphere with the minimum set of required permissions.
  - You can introduce improved accountability in tracking request-response interactions between the components of the SDDC.
- SDDC Manager automates the creation of the service account.
Implication: You must maintain the life cycle and availability of the service account outside of SDDC Manager password rotation.
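
To illustrate the Locker reference called out in VCF-VASL-REQD-CFG-005, the following minimal sketch shows the shape of a product fragment in a VMware Aria Suite Lifecycle API request payload. The locker:license:<id>:<alias> reference pattern follows common VMware Aria Suite Lifecycle usage, and every ID, alias, product name, and version here is an illustrative placeholder; confirm the exact payload schema against the API reference for your version.

# Illustrative fragment only; not a complete environment-creation request.
product_fragment = {
    "id": "vrops",     # hypothetical product identifier in the request payload
    "version": "8.x",  # placeholder version
    "properties": {
        # License referenced by its Locker ID rather than a raw license key
        # (see VCF-VASL-REQD-CFG-005).
        "licenseRef": "locker:license:11111111-2222-3333-4444-555555555555:vrops-license",
        # Certificates follow the same Locker reference pattern.
        "certificate": "locker:certificate:66666666-7777-8888-9999-000000000000:vrops-cert",
    },
}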

Table 15-53. VMware Aria Suite Lifecycle Design Requirements for Stretched Clusters in VMware Cloud Foundation

VCF-VASL-REQD-CFG-006
Design Requirement: For multiple availability zones, add the VMware Aria Suite Lifecycle appliance to the VM group for the first availability zone.
Justification: Ensures that, by default, the VMware Aria Suite Lifecycle appliance is powered on a host in the first availability zone.
Implication: If VMware Aria Suite Lifecycle is deployed after the creation of the stretched management cluster, you must add the VMware Aria Suite Lifecycle appliance to the VM group manually.


Table 15-54. VMware Aria Suite Lifecycle Design Requirements for NSX Federation in VMware Cloud Foundation

VCF-VASL-REQD-CFG-007
Design Requirement: Configure the DNS settings for the VMware Aria Suite Lifecycle appliance to use DNS servers in each instance.
Justification: Improves resiliency in the event of an outage of external services for a VMware Cloud Foundation instance.
Implication: As you scale from a deployment with a single VMware Cloud Foundation instance to one with multiple VMware Cloud Foundation instances, the DNS settings of the VMware Aria Suite Lifecycle appliance must be updated.

VCF-VASL-REQD-CFG-008
Design Requirement: Configure the NTP settings for the VMware Aria Suite Lifecycle appliance to use NTP servers in each VMware Cloud Foundation instance.
Justification: Improves resiliency if an outage of external services for a VMware Cloud Foundation instance occurs.
Implication: As you scale from a deployment with a single VMware Cloud Foundation instance to one with multiple VMware Cloud Foundation instances, the NTP settings on the VMware Aria Suite Lifecycle appliance must be updated.

VCF-VASL-REQD-ENV-004
Design Requirement: Assign the management domain vCenter Server instance in the additional VMware Cloud Foundation instance to the cross-instance data center.
Justification: Supports the deployment of VMware Aria Operations remote collectors in an additional VMware Cloud Foundation instance.
Implication: None.

Table 15-55. VMware Aria Suite Lifecycle Design Recommendations for VMware Cloud Foundation

VCF-VASL-RCMD-CFG-001
Design Recommendation: Protect VMware Aria Suite Lifecycle by using vSphere HA.
Justification: Supports the availability objectives for VMware Aria Suite Lifecycle without requiring manual intervention during a failure event.
Implication: None.

VCF-VASL-RCMD-LCM-001
Design Recommendation: Obtain product binaries for install, patch, and upgrade in VMware Aria Suite Lifecycle from VMware Customer Connect.
Justification:
- You can upgrade VMware Aria Suite products based on their general availability and endpoint interoperability rather than being listed as part of the VMware Cloud Foundation bill of materials (BOM).
- You can deploy and manage binaries in an environment that does not allow access to the Internet or is a dark site.
Implication: The site must have an Internet connection to use VMware Customer Connect. Sites without an Internet connection should use the local upload option instead.

VCF-VASL-RCMD-LCM-002
Design Recommendation: Use product support packs (PSPAKs) for VMware Aria Suite Lifecycle to enable upgrading to later versions of VMware Aria Suite products.
Justification: Enables the upgrade of an existing VMware Aria Suite Lifecycle instance to permit later versions of VMware Aria Suite products without an associated VMware Cloud Foundation upgrade. See VMware Knowledge Base article 88829.
Implication: None.

VCF-VASL-RCMD-SEC-001
Design Recommendation: Enable integration between VMware Aria Suite Lifecycle and your corporate identity source by using the Workspace ONE Access instance.
Justification:
- Enables authentication to VMware Aria Suite Lifecycle by using your corporate identity source.
- Enables authorization through the assignment of organization and cloud services roles to enterprise users and groups defined in your corporate identity source.
Implication: You must deploy and configure Workspace ONE Access to establish the integration between VMware Aria Suite Lifecycle and your corporate identity sources.

VCF-VASL-RCMD-SEC-002
Design Recommendation: Create corresponding security groups in your corporate directory services for the VMware Aria Suite Lifecycle roles: VCF, Content Release Manager, and Content Developer.
Justification: Streamlines the management of VMware Aria Suite Lifecycle roles for users.
Implication:
- You must create the security groups outside of the SDDC stack.
- You must set the desired directory synchronization interval in Workspace ONE Access to ensure that changes are available within a reasonable period.

Workspace ONE Access Design Elements for VMware Cloud Foundation

Use this list of requirements and recommendations for reference related to Workspace ONE Access in an environment with a single or multiple VMware Cloud Foundation instances. The design elements also consider whether the management domain has a single or multiple availability zones.

For full design details, see Chapter 11 Workspace ONE Access Design for VMware Cloud
Foundation.


Table 15-56. Workspace ONE Access Design Requirements for VMware Cloud Foundation

VCF-WSA-REQD-ENV-001
Design Requirement: Create a global environment in VMware Aria Suite Lifecycle to support the deployment of Workspace ONE Access.
Justification: A global environment is required by VMware Aria Suite Lifecycle to deploy Workspace ONE Access.
Implication: None.

VCF-WSA-REQD-SEC-001
Design Requirement: Import certificate authority-signed certificates to the Locker repository for Workspace ONE Access product life cycle operations.
Justification: You can reference and use certificate authority-signed certificates during product life cycle operations, such as deployment and certificate replacement.
Implication: When using the API, you must specify the Locker ID for the certificate to be used in the JSON payload.

VCF-WSA-REQD-CFG-001
Design Requirement: Deploy an appropriately sized Workspace ONE Access instance according to the deployment model you have selected by using VMware Aria Suite Lifecycle in VMware Cloud Foundation mode.
Justification: The Workspace ONE Access instance is managed by VMware Aria Suite Lifecycle and imported into the SDDC Manager inventory.
Implication: None.

VCF-WSA-REQD-CFG-002
Design Requirement: Place the Workspace ONE Access appliances on an overlay-backed or VLAN-backed NSX network segment.
Justification: Provides a consistent deployment model for management applications in an environment with a single or multiple VMware Cloud Foundation instances.
Implication: You must use an implementation in NSX to support this network configuration.

VCF-WSA-REQD-CFG-003
Design Requirement: Use the embedded PostgreSQL database with Workspace ONE Access.
Justification: Removes the need for external database services.
Implication: None.

VCF-WSA-REQD-CFG-004
Design Requirement: Add a VM group for Workspace ONE Access and set VM rules to restart the Workspace ONE Access VM group before any of the VMs that depend on it for authentication.
Justification: You can define the startup order of virtual machines regarding the service dependency. The startup order ensures that vSphere HA powers on the Workspace ONE Access virtual machines in an order that respects product dependencies.
Implication: None.

VCF-WSA-REQD-CFG-005
Design Requirement: Connect the Workspace ONE Access instance to a supported upstream Identity Provider.
Justification: You can integrate your enterprise directory with Workspace ONE Access to synchronize users and groups to the Workspace ONE Access identity and access management services.
Implication: None.

VCF-WSA-REQD-CFG-006
Design Requirement: If using clustered Workspace ONE Access, configure second and third native connectors that correspond to the second and third Workspace ONE Access cluster nodes to support the high availability of directory services access.
Justification: Adding the additional native connectors provides redundancy and improves performance by load-balancing authentication requests.
Implication: Each of the Workspace ONE Access cluster nodes must be joined to the Active Directory domain to use Active Directory with Integrated Windows Authentication with the native connector.

VCF-WSA-REQD-CFG-007
Design Requirement: If using clustered Workspace ONE Access, use the NSX load balancer that is configured by SDDC Manager on a dedicated Tier-1 gateway.
Justification: During the deployment of Workspace ONE Access by using VMware Aria Suite Lifecycle, SDDC Manager automates the configuration of an NSX load balancer for Workspace ONE Access to facilitate scale-out.
Implication: You must use the load balancer that is configured by SDDC Manager and the integration with VMware Aria Suite Lifecycle.


Table 15-57. Workspace ONE Access Design Requirements for Stretched Clusters in VMware Cloud Foundation

VCF-WSA-REQD-CFG-008
Design Requirement: Add the Workspace ONE Access appliances to the VM group for the first availability zone.
Justification: Ensures that, by default, the Workspace ONE Access cluster nodes are powered on a host in the first availability zone.
Implication:
- If the Workspace ONE Access instance is deployed after the creation of the stretched management cluster, you must add the appliances to the VM group manually.
- Clustered Workspace ONE Access might require manual intervention after a failure of the active availability zone occurs.

Table 15-58. Workspace ONE Access Design Requirements for NSX Federation in VMware Cloud Foundation

VCF-WSA-REQD-CFG-009
Design Requirement: Configure the DNS settings for Workspace ONE Access to use DNS servers in each VMware Cloud Foundation instance.
Justification: Improves resiliency if an outage of external services for a VMware Cloud Foundation instance occurs.
Implication: None.

VCF-WSA-REQD-CFG-010
Design Requirement: Configure the NTP settings on Workspace ONE Access cluster nodes to use NTP servers in each VMware Cloud Foundation instance.
Justification: Improves resiliency if an outage of external services for a VMware Cloud Foundation instance occurs.
Implication: If you scale from a deployment with a single VMware Cloud Foundation instance to one with multiple VMware Cloud Foundation instances, the NTP settings on Workspace ONE Access must be updated.


Table 15-59. Workspace ONE Access Design Recommendations for VMware Cloud Foundation

VCF-WSA-RCMD-CFG-001
Design Recommendation: Protect all Workspace ONE Access nodes by using vSphere HA.
Justification: Supports high availability for Workspace ONE Access.
Implication: None for standard deployments. Clustered Workspace ONE Access deployments might require intervention if an ESXi host failure occurs.

VCF-WSA-RCMD-CFG-002
Design Recommendation: When using Active Directory as an Identity Provider, use Active Directory over LDAP as the Directory Service connection option.
Justification: The native (embedded) Workspace ONE Access connector binds to Active Directory over LDAP using a standard bind authentication.
Implication:
- In a multi-domain forest, where the Workspace ONE Access instance connects to a child domain, Active Directory security groups must have global scope. Therefore, members added to the Active Directory global security group must reside within the same Active Directory domain.
- If authentication to more than one Active Directory domain is required, additional Workspace ONE Access directories are required.

VCF-WSA-RCMD-CFG-003
Design Recommendation: When using Active Directory as an Identity Provider, use an Active Directory user account with a minimum of read-only access to Base DNs for users and groups as the service account for the Active Directory bind.
Justification: Provides the following access control features:
- Workspace ONE Access connects to Active Directory with the minimum set of required permissions to bind and query the directory.
- You can introduce improved accountability in tracking request-response interactions between Workspace ONE Access and Active Directory.
Implication:
- You must manage the password life cycle of this account.
- If authentication to more than one Active Directory domain is required, additional accounts are required for the Workspace ONE Access connector to bind to each Active Directory domain over LDAP.

VCF-WSA-RCMD-CFG-004
Design Recommendation: Configure the directory synchronization to synchronize only groups required for the integrated SDDC solutions.
Justification:
- Limits the number of replicated groups required for each product.
- Reduces the replication interval for group information.
Implication: You must manage the groups from your enterprise directory selected for synchronization to Workspace ONE Access.

VCF-WSA-RCMD-CFG-005
Design Recommendation: Activate the synchronization of enterprise directory group members when a group is added to the Workspace ONE Access directory.
Justification: When activated, members of the enterprise directory groups are synchronized to the Workspace ONE Access directory when groups are added. When deactivated, group names are synchronized to the directory, but members of the group are not synchronized until the group is entitled to an application or the group name is added to an access policy.
Implication: None.

VCF-WSA-RCMD-CFG-006
Design Recommendation: Enable Workspace ONE Access to synchronize nested group members by default.
Justification: Allows Workspace ONE Access to update and cache the membership of groups without querying your enterprise directory.
Implication: Changes to group membership are not reflected until the next synchronization event.

VCF-WSA-RCMD-CFG-007
Design Recommendation: Add a filter to the Workspace ONE Access directory settings to exclude users from the directory replication.
Justification: Limits the number of replicated users for Workspace ONE Access within the maximum scale.
Implication: To ensure that replicated user accounts are managed within the maximums, you must define a filtering schema that works for your organization based on your directory attributes.

VCF-WSA-RCMD-CFG-008
Design Recommendation: Configure the mapped attributes included when a user is added to the Workspace ONE Access directory.
Justification: You can configure the minimum required and extended user attributes to synchronize directory user accounts for Workspace ONE Access to be used as an authentication source for cross-instance VMware Aria Suite solutions.
Implication: User accounts in your organization's enterprise directory must have the following required attributes mapped:
- firstName, for example, givenName for Active Directory
- lastName, for example, sn for Active Directory
- email, for example, mail for Active Directory
- userName, for example, sAMAccountName for Active Directory
If you require users to sign in with an alternate unique identifier, for example, userPrincipalName, you must map the attribute and update the identity and access management preferences.

VCF-WSA-RCMD-CFG-009
Design Recommendation: Configure the Workspace ONE Access directory synchronization frequency to a recurring schedule, for example, 15 minutes.
Justification: Ensures that any changes to group memberships in the corporate directory are available for integrated solutions in a timely manner.
Implication: Schedule the synchronization interval to be longer than the time to synchronize from the enterprise directory. If users and groups are being synchronized to Workspace ONE Access when the next synchronization is scheduled, the new synchronization starts immediately after the end of the previous iteration. With this schedule, the process is continuous.

VCF-WSA-RCMD-SEC-001
Design Recommendation: Create corresponding security groups in your corporate directory services for these Workspace ONE Access roles: Super Admin, Directory Admins, and ReadOnly Admin.
Justification: Streamlines the management of Workspace ONE Access roles for users.
Implication:
- You must set the appropriate directory synchronization interval in Workspace ONE Access to ensure that changes are available within a reasonable period.
- You must create the security groups outside of the SDDC stack.

VCF-WSA-RCMD-SEC-002
Design Recommendation: Configure a password policy for the Workspace ONE Access local directory users, admin and configadmin.
Justification: You can set a policy for Workspace ONE Access local directory users that addresses your corporate policies and regulatory standards. The password policy is applicable only to the local directory users and does not impact your organization directory.
Implication: You must set the policy in accordance with your organization policies and regulatory standards, as applicable. You must apply the password policy on the Workspace ONE Access cluster nodes.
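
The attribute mapping in VCF-WSA-RCMD-CFG-008 can be summarized as a simple lookup, shown in the following sketch. This is documentation-as-data for reference only, not a call against any Workspace ONE Access API.

# Workspace ONE Access directory attributes on the left, typical Active
# Directory source attributes on the right (see VCF-WSA-RCMD-CFG-008).
REQUIRED_ATTRIBUTE_MAP = {
    "firstName": "givenName",
    "lastName": "sn",
    "email": "mail",
    "userName": "sAMAccountName",
    # Optional: map an alternate unique sign-in identifier if required,
    # for example "userPrincipalName": "userPrincipalName".
}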

Life Cycle Management Design Elements for VMware Cloud Foundation

Use this list of requirements for reference related to life cycle management in a VMware Cloud
Foundation environment.

For full design details, see Chapter 12 Life Cycle Management Design for VMware Cloud
Foundation.


Table 15-60. Life Cycle Management Design Requirements for VMware Cloud Foundation

VCF-LCM-REQD-001
Design Requirement: Use SDDC Manager to perform the life cycle management of the following components:
- SDDC Manager
- NSX Manager
- NSX Edges
- vCenter Server
- ESXi
Justification: Because the deployment scope of SDDC Manager covers the full VMware Cloud Foundation stack, SDDC Manager performs patching, update, or upgrade of these components across all workload domains.
Implication: The operations team must understand and be aware of the impact of a patch, update, or upgrade operation by using SDDC Manager.

VCF-LCM-REQD-002
Design Requirement: Use VMware Aria Suite Lifecycle to manage the life cycle of the following components:
- VMware Aria Suite Lifecycle
- Workspace ONE Access
Justification: VMware Aria Suite Lifecycle automates the life cycle of VMware Aria Suite Lifecycle and Workspace ONE Access.
Implication:
- You must deploy VMware Aria Suite Lifecycle by using SDDC Manager.
- You must manually apply Workspace ONE Access patches, updates, and hotfixes. Patches, updates, and hotfixes for Workspace ONE Access are not generally managed by VMware Aria Suite Lifecycle.

VCF-LCM-RCMD-001
Design Recommendation: Use vSphere Lifecycle Manager images to manage the life cycle of vSphere clusters.
Justification:
- With vSphere Lifecycle Manager images, firmware updates are carried out through firmware and driver add-ons, which you add to the image you use to manage a cluster.
- You can check the hardware compatibility of the hosts in a cluster against the VMware Compatibility Guide.
- You can validate a vSphere Lifecycle Manager image to check if it applies to all hosts in the cluster. You can also perform a remediation pre-check.
Implication:
- Updating the firmware with images requires an OEM-provided hardware support manager plug-in, which integrates with vSphere Lifecycle Manager.
- An updated vSAN Hardware Compatibility List (vSAN HCL) is required during bring-up.
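
As a reference for inspecting a cluster's desired image outside the vSphere Client, the following minimal sketch reads the vSphere Lifecycle Manager desired state through the vSphere Automation REST API. The vCenter Server FQDN, credentials, and the cluster identifier domain-c8 are hypothetical placeholders, and the endpoints shown should be validated against the API reference for your vCenter Server version.

import requests

VCENTER = "https://vcenter01.rainpole.io"  # hypothetical vCenter Server FQDN

# Create an API session; the returned token goes into vmware-api-session-id.
session_id = requests.post(
    f"{VCENTER}/api/session",
    auth=("[email protected]", "vault-me"),
    verify=False,  # lab convenience only; verify certificates in production
).json()

# Read the desired image (base image, vendor add-on, components) that
# vSphere Lifecycle Manager enforces on the cluster.
image = requests.get(
    f"{VCENTER}/api/esx/settings/clusters/domain-c8/software",
    headers={"vmware-api-session-id": session_id},
    verify=False,
).json()

print(image.get("base_image", {}).get("version"))
print(image.get("add_on"))  # OEM add-on carrying firmware/driver components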


Table 15-61. Life Cycle Management Design Requirements for NSX Federation in VMware Cloud Foundation

VCF-LCM-REQD-003
Design Requirement: Use the upgrade coordinator in NSX to perform life cycle management on the NSX Global Manager appliances.
Justification: The version of SDDC Manager in this design is not currently capable of life cycle operations (patching, update, or upgrade) for NSX Global Manager.
Implication:
- You must explicitly plan upgrades of the NSX Global Manager nodes. An upgrade of the NSX Global Manager nodes might require a cascading upgrade of the NSX Local Manager nodes and underlying SDDC Manager infrastructure before upgrading the NSX Global Manager nodes.
- You must always align the version of the NSX Global Manager nodes with the rest of the SDDC stack in VMware Cloud Foundation.

VCF-LCM-REQD-004
Design Requirement: Establish an operations practice to ensure that prior to the upgrade of any workload domain, the impact of any version upgrades is evaluated in relation to the need to upgrade NSX Global Manager.
Justification: The versions of NSX Global Manager and NSX Local Manager nodes must be compatible with each other. Because SDDC Manager does not provide life cycle operations (patching, update, or upgrade) for the NSX Global Manager nodes, upgrade to an unsupported version cannot be prevented.
Implication: The administrator must establish and follow an operations practice by using a runbook or automated process to ensure a fully supported and compliant bill of materials prior to any upgrade operation.

VCF-LCM-REQD-005
Design Requirement: Establish an operations practice to ensure that prior to the upgrade of the NSX Global Manager, the impact of any version change is evaluated against the existing NSX Local Manager nodes and workload domains.
Justification: The versions of NSX Global Manager and NSX Local Manager nodes must be compatible with each other. Because SDDC Manager does not provide life cycle operations (patching, update, or upgrade) for the NSX Global Manager nodes, upgrade to an unsupported version cannot be prevented.
Implication: The administrator must establish and follow an operations practice by using a runbook or automated process to ensure a fully supported and compliant bill of materials prior to any upgrade operation.

VMware by Broadcom 272


VMware Cloud Foundation Design Guide

Information Security Design Elements for VMware Cloud Foundation

Use this list of requirements and recommendations for reference related to the management of
access controls, certificates and accounts in a VMware Cloud Foundation environment.

For full design details, see Chapter 14 Information Security Design for VMware Cloud Foundation.

Table 15-62. Design Requirements for Account and Password Management for VMware Cloud Foundation

VCF-ACTMGT-REQD-SEC-001
Design Requirement: Enable scheduled password rotation in SDDC Manager for all accounts supporting scheduled rotation.
Justification:
- Increases the security posture of your SDDC.
- Simplifies password management across your SDDC management components.
Implication: You must retrieve new passwords by using the API if you must use accounts interactively.

VCF-ACTMGT-REQD-SEC-002
Design Requirement: Establish an operational practice to rotate passwords by using SDDC Manager on components that do not support scheduled rotation in SDDC Manager.
Justification: Rotates passwords and automatically remediates the SDDC Manager database for those user accounts.
Implication: None.

VCF-ACTMGT-REQD-SEC-003
Design Requirement: Establish an operational practice to manually rotate passwords on components that cannot be rotated by SDDC Manager.
Justification: Maintains password policies across components not handled by SDDC Manager password management.
Implication: None.
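
For components that support on-demand but not scheduled rotation, the rotation can be driven through the SDDC Manager credentials API. The following minimal sketch shows the shape of such a request; the resource name, resource type, and account are placeholders, and the exact specification should be validated against the SDDC Manager API reference for your version.

import requests

SDDC_MANAGER = "https://sddc-manager.rainpole.io"  # hypothetical FQDN
TOKEN = "…"  # obtain via POST /v1/tokens as in the earlier sketch

# On-demand rotation request for a single vCenter Server SSO account.
rotate_spec = {
    "operationType": "ROTATE",
    "elements": [{
        "resourceName": "vcenter01.rainpole.io",  # placeholder resource
        "resourceType": "VCENTER",
        "credentials": [{
            "credentialType": "SSO",
            "username": "[email protected]",
        }],
    }],
}

resp = requests.patch(
    f"{SDDC_MANAGER}/v1/credentials",
    json=rotate_spec,
    headers={"Authorization": f"Bearer {TOKEN}"},
    verify=False,  # lab convenience only; verify certificates in production
)
print(resp.status_code)  # rotation runs asynchronously as a credentials task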


Table 15-63. Certificate Management Design Recommendations for VMware Cloud Foundation

VCF-SDDC-RCMD-SEC-001
Design Recommendation: Replace the default VMCA-signed or self-signed certificates on all management virtual appliances with certificates that are signed by an internal certificate authority.
Justification: Ensures that the communication to all management components is secure.
Implication: Replacing the default certificates with trusted CA-signed certificates from a certificate authority might increase the deployment preparation time because you must generate and submit certificate requests.

VCF-SDDC-RCMD-SEC-002
Design Recommendation: Use a SHA-2 algorithm or higher for signed certificates.
Justification: The SHA-1 algorithm is considered less secure and has been deprecated.
Implication: Not all certificate authorities support SHA-2 or higher.

VCF-SDDC-RCMD-SEC-003
Design Recommendation: Perform SSL certificate life cycle management for all management appliances by using SDDC Manager.
Justification: SDDC Manager supports automated SSL certificate life cycle management rather than requiring a series of manual steps.
Implication: Certificate management for NSX Global Manager instances must be done manually.
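
As a reference for the automated life cycle in VCF-SDDC-RCMD-SEC-003, the following minimal sketch generates certificate signing requests for a workload domain through the SDDC Manager API. The domain name, resource list, and CSR fields are illustrative placeholders; validate the request schema against the SDDC Manager API reference for your version.

import requests

SDDC_MANAGER = "https://sddc-manager.rainpole.io"  # hypothetical FQDN
TOKEN = "…"  # obtain via POST /v1/tokens as in the earlier sketch

# CSR generation request; all organization details below are placeholders.
csr_request = {
    "csrGenerationSpec": {
        "country": "US",
        "state": "CA",
        "locality": "Palo Alto",
        "organization": "Rainpole",
        "organizationUnit": "IT",
        "email": "[email protected]",
        "keySize": "3072",
        "keyAlgorithm": "RSA",
    },
    "resources": [
        {"fqdn": "vcenter01.rainpole.io", "type": "VCENTER"},  # placeholder
    ],
}

resp = requests.put(
    f"{SDDC_MANAGER}/v1/domains/mgmt-domain/csrs",
    json=csr_request,
    headers={"Authorization": f"Bearer {TOKEN}"},
    verify=False,  # lab convenience only; verify certificates in production
)
print(resp.status_code)  # CSR generation runs as an asynchronous task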
