Software-Defined
Data Center Foundation
REFERENCE ARCHITECTURE
VERSION 1.0
The VMware Software-Defined
Data Center Foundation
Table of Contents
Overview
Audience
VMware Software Components
Architectural Overview
    Virtual Management Networks
    Management Cluster
    Edge Cluster
    Compute Clusters
Physical Component Details
    Compute
    Storage
    Network
Software-Defined Data Center Component Details
    vSphere Data Center Design
    vSphere Data Protection
    NSX for vSphere
    Monitoring
        vRealize Operations
Conclusion
The VMware Validated Design Team
About the Author
Overview
This reference architecture describes the implementation of a software-defined data center (SDDC) that
leverages the latest VMware components and capabilities. The reference architecture is built on the
Foundation VMware Validated Design (V2D).
The Foundation V2D is the core of all higher-level VMware Validated Designs. Foundation describes a
scalable, resilient, best-practice configuration on which all additional functionality is layered. The Foundation
configuration uses industry-standard servers, IP-based and VMware Virtual SAN™ storage, and
software-defined networking to support a scalable and redundant architecture.
The V2D process gathers data from customer support, VMware IT, and VMware and partner professional
services to create a standardized configuration that meets the majority of customer requirements. Internally,
VMware engineering teams test new product capabilities, installations, upgrades, and more, against the
standardized configuration. VMware and partner professional services teams build delivery kits based on
this design, knowing that they are deploying with the best possible configuration. Customers planning
“do it yourself” deployments also benefit from following this architecture, confident that future product
upgrades, patches, and so on, have already been tested against a configuration identical to theirs.
Audience
This document will assist those who are responsible for infrastructure services, including enterprise architects,
solution architects, sales engineers, field consultants, advanced services specialists, and customers. This guide
provides an example of a successful deployment of an SDDC.
Architectural Overview
This design uses three cluster types, each with its own distinct function. It provides a management plane
that is separate from the user workload (compute) virtual machines. It also leverages an edge cluster, which
provides dedicated resources for network services such as VMware NSX Edge™ devices, which provide access
to the corporate network and the Internet.
TERM              DEFINITION
NSX for vSphere   Virtual networking and security software that enables the software-defined network.
Virtual switch    The NSX virtual switch abstracts the physical network and provides access-level switching in the hypervisor. It is central to network virtualization because it enables logical networks that are independent of physical constructs such as VLANs.
VPN               A VPN can be used to connect networks to each other or a single machine to a network.
Virtual Management Networks
This design utilizes NSX for vSphere virtual switches in the management cluster. VMware vRealize
Operations™ has its own VMware NSX virtual switch and NSX Edge device. As new management or
monitoring solutions are added, they too are placed on their own VMware NSX virtual switch with a
corresponding NSX Edge device. The grouping of these virtual machines onto a virtual switch is referred
to as a virtual management network.
Virtual management networks are connected to both the vSphere management VLAN and the external
VLAN via an NSX Edge device. Routing is configured between the virtual management network and the
vSphere management VLAN. This enables the solution and virtual machines on the vSphere management
VLAN, such as the vCenter Server instance, to communicate without exposing the management servers
directly to end users.
External connectivity to the business function of the virtual management network is provided by a load
balancer’s virtual IP on the external VLAN. External connectivity to the management function of the virtual
management network is provided, as needed, by whatever method best fits the customer’s environment, such as
routing to the virtual management network over Ethernet, MPLS, or VPN, or through jump hosts.
Another option is to deploy the VMware NSX distributed logical router and to enable a dynamic routing
protocol, such as OSPF, between the router and the top-of-rack switches. This enables access to all virtual
machines on the virtual switches by advertising their IP addresses to the rest of the network. Virtual
management networks still require NSX Edge devices to provide load balancer functionality.
Virtual management networking isolates management solutions from each other and from compute
workloads. More important, it enables disaster recovery of the automation and monitoring stacks without
having to change the IP addresses of the virtual machines. The virtual machines can be moved to precreated
virtual switches in another site that has been configured the same as the primary site, enabling quick recovery
of the solutions.
Figure: Virtual management network connectivity. Each virtual management network sits behind its own NSX Edge
device with internal (192.168.2x.1), vSphere (10.155.168.7x), and external (10.155.170.15x) interfaces. The
top-of-rack switches host SVIs for VLAN 1701 (10.155.170.1) and VLAN 1680 (10.155.168.1) and carry static routes
for 192.168.20.0/24, 192.168.21.0/24, and 192.168.22.0/24 via 10.155.168.75, 10.155.168.76, and 10.155.168.77,
respectively; alternatively, OSPF runs between the NSX Edge devices and the top-of-rack switches.
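To make the static-routing option concrete, the following Python sketch derives the route table shown in the
figure: one static route per virtual management network, with that network's NSX Edge vSphere-facing address as
the next hop. The subnets and addresses come from the figure; the network labels and helper function are
hypothetical, included only for illustration.

import ipaddress

# Virtual management networks and the vSphere-facing address of each NSX Edge,
# taken from the figure above; the labels are hypothetical placeholders.
virtual_mgmt_networks = [
    ("mgmt-network-a", "192.168.20.0/24", "10.155.168.75"),
    ("mgmt-network-b", "192.168.21.0/24", "10.155.168.76"),
    ("mgmt-network-c", "192.168.22.0/24", "10.155.168.77"),
]

def tor_static_routes(networks):
    # One static route per virtual management network, pointing at the
    # corresponding NSX Edge as the next hop.
    routes = []
    for name, subnet, edge_vsphere_ip in networks:
        prefix = ipaddress.ip_network(subnet)
        next_hop = ipaddress.ip_address(edge_vsphere_ip)
        routes.append((name, str(prefix), str(next_hop)))
    return routes

for name, prefix, next_hop in tor_static_routes(virtual_mgmt_networks):
    print(f"{prefix:<18} via {next_hop}   # {name}")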
Management Cluster
The management cluster contains the management and monitoring solutions for the entire design. A single
management cluster can support multiple pods of edge and compute clusters. The minimum number of hosts
required in the management cluster is three, although four hosts are recommended for availability and
performance. The management cluster can scale out as the number of edge and compute pods increases.
With the exception of vSphere Data Protection, which stores all backups on NFS storage, all virtual machines
utilize Virtual SAN storage.
A single vCenter Server instance manages the resources in the management cluster. Additional vCenter Server
instances manage edge and compute clusters. A Platform Services Controller™ is deployed for each vCenter
Server instance. The Platform Services Controllers are joined to the same VMware vCenter Single Sign-On
domain, enabling features such as Enhanced Linked Mode and cross vCenter Server VMware vSphere vMotion®.
For more information on the Platform Services Controller and new enhancements in vSphere 6.0, see the
VMware vCenter Server 6.0 Deployment Guide and the What’s New in vSphere 6.0 white papers.
VMware NSX Manager™ instances, one for each vCenter Server instance, are deployed into the management
cluster. NSX for vSphere components, such as VMware NSX Controller™ instances, are also deployed for and
in the management cluster. VMware NSX Controller instances for the edge and compute clusters are deployed
into the edge cluster.
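The placement described in this section can be summarized as a simple data structure. The sketch below is a
descriptive model only, not a VMware API; the names are informal labels for the components discussed above.

# Informal summary of where the management and NSX components are deployed.
placement = {
    "management cluster": [
        "management vCenter Server and Platform Services Controller",
        "compute vCenter Server and Platform Services Controller",
        "NSX Manager instances (one per vCenter Server)",
        "NSX Controller instances for the management cluster",
        "vSphere Data Protection (backups on NFS)",
        "vRealize Operations",
    ],
    "edge cluster": [
        "NSX Controller instances for the edge and compute clusters",
        "NSX Edge devices providing north-south connectivity",
    ],
    "compute clusters": [
        "user-workload virtual machines",
    ],
}

for cluster, components in placement.items():
    print(cluster)
    for component in components:
        print("  -", component)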
Figure: Management cluster hosts in the leaf-spine network topology, using Virtual SAN storage.
Edge Cluster
The edge cluster simplifies the physical network configuration. It is used to deliver networking services to the
compute cluster virtual machines. All external networking, including corporate and Internet, for user-workload
virtual machines is accessed via the edge cluster.
The minimum edge cluster size is three hosts, but it can scale depending on the volume of services required
by the compute cluster virtual machines.
Figure: Edge cluster hosts in the leaf-spine network topology, using Virtual SAN storage.
Compute Clusters
The compute clusters are the simplest of the three types; they run user-workload virtual machines. Compute
cluster networking is completely virtualized using NSX for vSphere. A single transport zone exists between all
compute clusters and the edge cluster. Virtual switches are created for user-workload virtual machines.
The minimum compute cluster size is 4 hosts; the maximum is 64 hosts. Additional compute clusters can be
created until the maximum number of either hosts (1,000) or virtual machines (10,000) for vCenter Server is
reached. Additional vCenter Server instances can be provisioned in the management cluster to facilitate more
compute clusters.
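The host and virtual machine maximums above determine how many compute clusters a single vCenter Server
instance can support. A minimal sketch of that arithmetic, assuming clusters are built at the 64-host maximum
and assuming a consolidation ratio that is not part of the design:

def max_compute_clusters(hosts_per_cluster=64,
                         vcenter_host_limit=1000,
                         vcenter_vm_limit=10000,
                         vms_per_host=20):
    # Number of full-size compute clusters one vCenter Server can support,
    # constrained by both the host and the virtual machine maximums.
    # vms_per_host is an assumed consolidation ratio, not a value from the design.
    by_hosts = vcenter_host_limit // hosts_per_cluster
    by_vms = vcenter_vm_limit // (hosts_per_cluster * vms_per_host)
    return min(by_hosts, by_vms)

# With 64-host clusters and roughly 20 VMs per host, the 10,000-VM limit
# is reached before the 1,000-host limit.
print(max_compute_clusters())   # 7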
Figure: Compute clusters in the leaf-spine network topology, each with its own Virtual SAN datastore.
Physical Component Details
Compute
COMPONENT         SPECIFICATION
Fans              Redundant
Storage
The management cluster utilizes Virtual SAN in addition to NFS datastores. Virtual machines reside on Virtual SAN;
vSphere Data Protection backups reside on NFS datastores.
The edge cluster utilizes Virtual SAN, which serves the VMware NSX Controller instances for the edge and
compute clusters as well as NSX Edge devices.
The compute clusters utilize Virtual SAN, NFS, and VMware vSphere VMFS datastores. The size and
number, if any, of datastores other than Virtual SAN depend on available capacity, redundancy requirements,
and application I/O needs. Table 4 presents some guidelines for sizing storage.
STORAGE CLASS    IOPS (PER 100GB)    MB/SEC (PER 1TB)    REPLICATION    DEDUPLICATION
Bronze           25                  2                   No             Yes*
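As a worked example of the guideline in Table 4, the sketch below converts a datastore's capacity into the IOPS
and throughput it is expected to deliver. The Bronze figures come from the table; the 4TB datastore size is an
arbitrary example.

def expected_performance(capacity_gb, iops_per_100gb, mbps_per_tb):
    # Translate the per-capacity figures from Table 4 into totals
    # for a datastore of the given size.
    iops = capacity_gb / 100 * iops_per_100gb
    mbps = capacity_gb / 1024 * mbps_per_tb
    return iops, mbps

# Bronze class: 25 IOPS per 100GB and 2 MB/sec per 1TB.
iops, mbps = expected_performance(capacity_gb=4096, iops_per_100gb=25, mbps_per_tb=2)
print(f"4TB Bronze datastore: ~{iops:.0f} IOPS, ~{mbps:.0f} MB/sec")   # ~1024 IOPS, ~8 MB/sec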
Network
Each rack contains a pair of multichassis link aggregation–capable 10 Gigabit Ethernet (10GbE) top-of-rack
switches. Each host has one 10GbE network adapter connected to each top-of-rack switch. The vSphere hosts
utilize the VMware vSphere Distributed Switch™ configured with an 802.3ad Link Aggregation Control
Protocol (LACP) group that services all port groups.
802.1Q trunks carry a small number of VLANs, for example, NSX for vSphere, management, storage, and
vSphere vMotion traffic. The switch terminates and provides default gateway functionality for each respective
VLAN; that is, it has a switch virtual interface (SVI) for each VLAN. Uplinks from the top-of-rack (leaf)
switches to the spine layer are routed point-to-point links. VLAN trunking on the uplinks, even for a single
VLAN, is not allowed. A dynamic routing protocol (OSPF, IS-IS, or BGP) is configured between the top-of-rack
and spine layer switches. Each top-of-rack switch advertises a small set of prefixes, typically one per VLAN
or subnet present, and calculates equal-cost paths to the prefixes received from other top-of-rack switches.
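To illustrate the leaf-layer routing behavior, the sketch below derives the per-VLAN prefixes a top-of-rack
switch advertises toward the spine. The SVI addresses are taken from the virtual management network figure
earlier in this document; the /24 masks are an assumption for illustration.

import ipaddress

# SVI addresses on a top-of-rack switch pair; /24 masks are assumed.
svis = {
    1701: "10.155.170.1/24",   # external VLAN
    1680: "10.155.168.1/24",   # vSphere management VLAN
}

# One prefix per VLAN or subnet is advertised into the dynamic routing
# protocol (OSPF, IS-IS, or BGP) running on the routed point-to-point uplinks.
for vlan, address in svis.items():
    prefix = ipaddress.ip_interface(address).network
    print(f"VLAN {vlan}: advertise {prefix}")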
Figure: Physical network topology. Hosts connect to the top-of-rack (leaf) switches with LACP; routed uplinks
running OSPF connect the leaf layer to the spine layer.
Software-Defined Data Center Component Details
vSphere Data Center Design
ATTRIBUTE          SPECIFICATION
Number of hosts    11
Memory             256GB
The solution uses VMware vSphere High Availability (vSphere HA) and VMware vSphere Distributed Resource
Scheduler™ (vSphere DRS). vSphere HA is set to monitor both hosts and virtual machines. Its admission
control policy reserves a percentage of cluster resources (25 percent in a four-node cluster), guaranteeing
that the cluster can sustain one host failure. To calculate the percentage, divide 100 by the number of hosts
in the cluster; then multiply the result by the number of host failures to tolerate while still guaranteeing
resources for the virtual machines in the cluster. For example, an eight-host cluster designed to sustain a
two-host failure reserves 100 ÷ 8 = 12.5, and 12.5 × 2 = 25 percent. vSphere DRS is set to fully automated mode.
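The admission control calculation can be expressed directly; a minimal sketch of the formula above:

def ha_admission_control_percentage(hosts_in_cluster, host_failures_to_tolerate):
    # Percentage of cluster resources to reserve so the cluster can sustain
    # the given number of host failures, per the formula described above.
    if host_failures_to_tolerate >= hosts_in_cluster:
        raise ValueError("cannot tolerate the failure of every host in the cluster")
    return 100 / hosts_in_cluster * host_failures_to_tolerate

print(ha_admission_control_percentage(4, 1))   # 25.0 - four-node management cluster
print(ha_admission_control_percentage(8, 2))   # 25.0 - eight-host example from the text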
VLAN ID    FUNCTION
1701       External
ATTRIBUTE              SPECIFICATION
Quantity               Four (two Platform Services Controllers, two vCenter Server instances)
Appliance size         Small for management vCenter Server, large for compute vCenter Server
Enhanced Linked Mode   Automatic by joining the same vCenter Single Sign-On domain
NSX for vSphere
SPECIFICATION    VALUE
MTU              9000
VLAN             14 (management), 24 (edge), 34 (compute)
To enable external (north–south) connectivity for the compute workloads, an NSX Edge router is deployed in
HA mode. This NSX Edge instance is referred to as the provider edge. One interface is connected to the
external network; another is connected to an NSX virtual switch, which is also connected to an NSX Edge
router. The NSX Edge routers are configured with OSPF to facilitate the exchange of routing information.
This enables the virtual machines on the NSX virtual switch to communicate with the external network and
vice versa as long as firewall rules permit the communication.
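Whether external traffic actually reaches a workload virtual machine depends on both the routes exchanged over
OSPF and the firewall rules on the edges. The sketch below models that check; the rule set and addresses are
hypothetical, included only to illustrate the evaluation order, and this is not an NSX API.

import ipaddress

# Hypothetical firewall rules on the provider edge, evaluated top to bottom.
rules = [
    {"action": "allow", "dst": "192.168.30.0/24", "port": 443},
    {"action": "deny",  "dst": "192.168.30.0/24", "port": None},   # None matches any port
]

def is_permitted(dst_ip, dst_port, rules):
    # Return True if the first matching rule allows the traffic; implicit deny otherwise.
    for rule in rules:
        if ipaddress.ip_address(dst_ip) in ipaddress.ip_network(rule["dst"]) and \
           rule["port"] in (None, dst_port):
            return rule["action"] == "allow"
    return False

print(is_permitted("192.168.30.10", 443, rules))   # True
print(is_permitted("192.168.30.10", 22, rules))    # False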
Figure: North-south connectivity for compute workloads. The provider NSX Edge in the edge cluster connects the
external VLAN to VXLAN 5001 in the compute transport zone, giving workload virtual machines access to the
external network and the Internet.
Monitoring
Monitoring the performance, capacity, health, and logs in any environment is critical.
vRealize Operations, used along with vRealize Log Insight, provides a unified management solution for
performance management, capacity optimization, and real-time log analytics. Predictive analytics leverages both
structured and unstructured data to enable proactive issue avoidance and faster problem resolution.
The solution extends intelligent operations management beyond vSphere to include operating systems,
physical servers, and storage and networking hardware. It is supported by a broad marketplace of extensions
for third-party tools.
vRealize Operations
vRealize Operations provides operations dashboards, performance analytics, and capacity optimization
capabilities needed to gain comprehensive visibility, proactively ensure service levels, and manage capacity in
dynamic virtual and cloud environments.
vRealize Operations is deployed as a virtual appliance and is distributed in the OVA format. In this
architecture, vRealize Operations is deployed on an NSX virtual switch. Four vRealize Operations appliances
are deployed. The first is configured as the master node, the next as the master replica, and the last two as
data nodes. The four appliances access the vSphere management VLAN via the NSX Edge device configured
with either static or dynamic routing. The NSX Edge device also load-balances the four virtual appliances on
port 443, providing access to the vRealize Operations cluster via a single FQDN.
Figure: vRealize Operations virtual management network. Four vRealize Operations appliances sit behind an
NSX Edge device with internal (192.168.21.1), vSphere (10.155.168.76), and external (10.155.170.152) interfaces;
the appliances are load-balanced behind virtual IP 10.155.170.10.
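A simple way to verify the load-balanced entry point is to check TCP reachability of the virtual IP on port 443,
which is what the NSX Edge load balancer exposes for the cluster. The address below is taken from the figure and
is environment-specific; substitute the FQDN or IP used in your deployment.

import socket

def port_reachable(host, port=443, timeout=5):
    # Return True if a TCP connection to host:port succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Load-balanced virtual IP for the vRealize Operations cluster (from the figure).
vrops_vip = "10.155.170.10"
print(f"{vrops_vip}:443 reachable: {port_reachable(vrops_vip)}")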
To ensure a complete picture of how the environment is running, vRealize Operations is configured to monitor
the management, edge, and compute vCenter Server instances. Additionally, the NSX for vSphere management
pack is installed and configured to provide insight into the virtualized networking environment.
For most organizations, the default monitoring settings in vRealize Operations require adjustment. For more
information on how to customize vRealize Operations for a specific environment, see the vRealize Operations
documentation.
Conclusion
The reference architecture presented in this paper describes the implementation of a software-defined
data center (SDDC) that uses the latest VMware components as its foundation. Customers following this
architecture can be confident that they will have the best possible supported configuration, one that is fully
backed by the VMware Validated Design process.
For a guided tutorial that shows step-by-step instructions for deploying this configuration, see
http://featurewalkthrough.vmware.com/#!/defining-the-sddc/dcv.