Example VDI Solution Architecture
Solution Architecture
Prepared by: Alex St. Amand, VMware Solutions Architect, VCP
Revision: 20160901
You can always find the most up-to-date version of this document on Your Company's SharePoint website.
Your Company, the Your Company logo, and combinations thereof are trademarks of Your Company in the United States and/or other jurisdictions. Other names used in this document are for identification purposes only and may be trademarks of their respective owners.
1.2 Scope
The scope of this document is limited to the installation and configuration of the VMware Horizon View environment, including any VDI-specific networking and storage.
The following topics are considered OUTSIDE of the scope of this document:
• Core vSphere Environment: Except where noted in this document, the installation and configuration of
the core components of vSphere, including the ESXi Hypervisor, VMware Single Sign On, vCenter
Server, and any related database(s), are outside the scope of this document.
• RSA Authentication Manager 8.0 Core Installation: Although RSA Two Factor Authentication is a
mandatory and critical component of this solution, the only configuration steps discussed within this
document are those that are relevant to VMware Horizon View. The installation and configuration of
the RSA Authentication Manager 8.0 Core Infrastructure is outside the scope of this document.
• Windows 8.1 Image Customization: The procedure for building, installing, customizing, and deploying a
Windows 8.1 Custom Image for use with VDI is outside the scope of this document.
Enterprise Class
• Full Virtual 3D Graphics over WAN and LAN
• Storage Acceleration with vSphere Content-Based Read Cache
• Unified Communications Integration for VoIP with Supported Partnerships
• Application Virtualization
• vShield Endpoint
• Integrated Online and Offline Virtual Desktop Management
• Streamlined Installation and Ease of Management
Table 2 provides detailed specifications for the physical servers as configured for this project.
Total Physical Storage: 785GB on NAND Flash / 730GB RAID 5 on 10k SATA
Primary Network Controller: Intel® ET2 82576 Quad Port Gigabit NIC (Embedded)
Secondary Network Controller: Broadcom® NetXtreme II 5709 Quad Port Gigabit NIC
Figure 3 - Fusion-io ioDrive2 785GB MLC High Performance Solid State Drive
Component: Function
vCenter Server: Central administration platform for configuring, provisioning, and managing VMware virtualized datacenters.
View Administrator: Web-based administration platform for View Infrastructure components.
View Composer: A service running on the View Servers used to create pools of virtual desktops from a shared base image to reduce storage capacity requirements.
View Connection Server: A software service that acts as a broker for client connections by authenticating and then directing incoming user requests to the appropriate virtual desktop, physical desktop, or terminal server.
View Agent: A service that runs on all systems used as sources for View desktops and facilitates communication between the View Clients and View Server.
View Client: Software that is used to access View desktops.
Client Devices: Personal computing devices used by end users to run the View Client.
[Figure: Two-host configuration. Dell R810 hosts BED-VDIESXi01 and BED-VDIESXi02 each run DFS and VSA service VMs plus virtual desktops on local shared storage; DFS file servers DI-BEDVDFS01 and DI-BEDVDFS02 host the \\dataintensity.com\Profiles.VDI and \\dataintensity.com\Users.VDI namespaces.]
[Figure: Three-host configuration. R810 hosts BED-VDIESXi01, BED-VDIESXi02, and BOS-VDIESXi01 each present a 700GB SATA RAID5 local datastore (VDI01) for linked clones and a Fusion-io ioDrive2 785GB MLC datastore (VDI02) for the replica VMs; VDI02 is kept in sync across hosts by VSA replication, with scripted replication (replica VM only) to BOS-VDIESXi01.]
[Figure: View connection flow. External DBI users connect over VPN or through the View Security Server (dbi-view.dataintensity.com); the View Connection Server (di-view.dataintensity.com) brokers sessions and communicates with View Composer (TCP 18443), vCenter Server DI-BEDVCS02 (di-bedvcs02.dataintensity.com), the RSA server (VLAN 19), Active Directory, print services, and the database. Client protocols shown include RDP, HTTPS, MMR, USB redirection, AJP13 (Apache JServ Protocol), JMS/JMSIR (Java Message Service), and SSH.]
Tier 1: Fusion-io
The first storage tier is the performance tier and consists of a single Fusion-io ioDrive2 785GB flash memory storage card installed in each host. The Fusion-io card is configured as a single datastore which is then mirrored to all subsequent hosts by means of the vSphere Storage Appliance (see section 5.8). This storage tier is dedicated to storing the VDI desktop replica images, the basis from which every linked clone is spawned. The replica image requires very little space but has the highest IOPS requirement, making it an ideal fit for this flash tier.
The second storage tier is the capacity tier and consists of six 146GB 10k SATA disk drives in a hardware RAID 5 configuration. This storage tier is used to store both the VDI desktop linked clones and the virtual server disks for the Windows CIFS VM used for Persona Management.
In Figure 9 we see that dedicated portgroups have been configured for both vSAN (VMkernel) and NFS (VMkernel) traffic on a separate vSwitch: vSwitch1. The interfaces associated with this vSwitch are dedicated solely to routing storage traffic. In this configuration, the NFS portgroup is configured to use only physical adapter vmnic1 as its primary uplink, with vmnic5 set as the standby uplink. The portgroup dedicated to vSAN is configured in exactly the opposite way, with vmnic5 as its primary uplink and vmnic1 as the standby.
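For reference, this uplink layout can also be applied programmatically. The following is a minimal sketch using pyVmomi (the open-source vSphere Python SDK); the credentials and the portgroup names "NFS" and "vSAN" are illustrative assumptions, not an excerpt from the actual build scripts.

```python
# Minimal pyVmomi sketch: pin explicit active/standby uplinks on the NFS and
# vSAN portgroups of a standard vSwitch, mirroring the layout in Figure 9.
# Credentials and portgroup names below are assumptions for illustration.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def set_uplinks(host, pg_name, active, standby):
    """Rewrite a portgroup's NIC teaming policy with an explicit uplink order."""
    net_sys = host.configManager.networkSystem
    for pg in net_sys.networkInfo.portgroup:
        if pg.spec.name != pg_name:
            continue
        spec = pg.spec
        teaming = spec.policy.nicTeaming or vim.host.NetworkPolicy.NicTeamingPolicy()
        order = teaming.nicOrder or vim.host.NetworkPolicy.NicOrderPolicy()
        order.activeNic = active      # primary uplink(s)
        order.standbyNic = standby    # failover uplink(s)
        teaming.nicOrder = order
        spec.policy.nicTeaming = teaming
        net_sys.UpdatePortGroup(pgName=pg_name, portgrp=spec)

si = SmartConnect(host="di-bedvcs02.dataintensity.com", user="administrator",
                  pwd="...", sslContext=ssl._create_unverified_context())
esxi = si.content.searchIndex.FindByDnsName(dnsName="BED-VDIESXi01",
                                            vmSearch=False)
set_uplinks(esxi, "NFS", active=["vmnic1"], standby=["vmnic5"])
set_uplinks(esxi, "vSAN", active=["vmnic5"], standby=["vmnic1"])
Disconnect(si)
```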
Figure 10 illustrates the portgroup configurations for the production virtual machines. Each category of virtual machine (Servers, Workstations, etc.) has been configured to run inside its own dedicated portgroup. In this configuration each portgroup is configured to use the following physical adapters: vmnic2, vmnic3, vmnic6, and vmnic7.
This datastore stores the base image copies that need to be created and maintained for the virtual desktops. The following formula was used to calculate the capacity required for the Base Image Datastore:
Capacity (GB) = Base Image Size × (2 × VM Memory) × Number of Base Images
For this solution only one parent image is required; however, any future expansion or special-case requirements will require that additional parent images be developed. Therefore we will base our calculations on the storage requirements for three base images.
Capacity (GB) = 24 × (2 × 2.5) × 3 = 360GB
Replica Datastore
This datastore is used to host the replica disk images that are created from the base images during the deployment of the linked clone virtual desktops. The replica is the image from which each linked clone is spawned, and as such it resides on the high-performance storage tier. The space required for the replica images is identical to the space required for the OS images, and the same formula from above can be used.
Capacity (GB) = 24 × (2 × 2.5) × 2 = 240GB
These datastores are used to store the VDI VM images and the disposable disks for all the virtual desktops
created using linked clones. The capacity required to store these virtual desktops depends on the amount of
space reserved for the linked clone delta files and the aggressiveness of the storage overcommit used while
creating the desktop pool. The following formula was used to calculate the capacity required for the Linked
Clone Datastore:
Capacity (GB) = Number of VMs × (2 × VM Memory) × Number of Datastores × Overcommit Factor
To host 100 desktops with a conservative storage overcommit, the capacity required is:
Capacity (GB) = 100 × (2 × 2.5GB) × 2 × 0.25 = 250GB
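As a cross-check of the three worked examples above, the sizing formulas can be expressed in a few lines of code. This is an illustrative sketch only; the function names are ours, and the inputs are the values used in this section.

```python
# Datastore sizing formulas from this section, expressed as simple functions.
def base_image_capacity_gb(image_gb, vm_mem_gb, num_images):
    """Base Image / Replica Datastore: Image Size x (2 x VM Memory) x Images."""
    return image_gb * (2 * vm_mem_gb) * num_images

def linked_clone_capacity_gb(num_vms, vm_mem_gb, num_datastores, overcommit):
    """Linked Clone Datastores: VMs x (2 x VM Memory) x Datastores x Overcommit."""
    return num_vms * (2 * vm_mem_gb) * num_datastores * overcommit

print(base_image_capacity_gb(24, 2.5, 3))           # 360.0 GB (base images)
print(base_image_capacity_gb(24, 2.5, 2))           # 240.0 GB (replicas)
print(linked_clone_capacity_gb(100, 2.5, 2, 0.25))  # 250.0 GB (linked clones)
```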
IMPORTANT: Unless the environment is intended to support only the lightest of users, the solution should be sized for the Power User (Standard) and Power User (Heavy) profiles.
DESIGN NOTE: Boot storm IOPS are calculated only to understand a worst-case scenario for storage demand. In a real-world deployment the View cluster is configured to allow only a predetermined number of desktops to boot at any given time.
Table 9 lists the specific configurations for each of the required VMware Horizon View Infrastructure VMs.
# vCPUs: 4 / 4 / 2 / 2 / 2
vRAM (GB): 24 / 32 / 8 / 8 / 8
SCSI Controller: LSI Logic SAS (all VMs)
Disk Provisioning: Thin Provisioned (all VMs)
Swap File: Store with VM (all VMs)
vRAM Reservation: 50% of vRAM (all VMs)
Enabling the VMware Horizon View Storage Accelerator turns on CBRC on the selected ESXi hosts.
CBRC works by creating a digest file for each VMDK on the VM and storing hash information about the VMDK blocks with the VM itself. The size of this digest file is between 5 and 10MB for each GB of VMDK size. This means that for the 24GB Windows VM replica used in the testing, about 125MB of storage space was used for the digest file. This digest file is loaded into memory when it is accessed for the first time.
When memory overcommit is used to assign more RAM to VMs than there is physical memory available in the host, it is important to note that the CBRC digest can be significant in size. When CBRC is enabled, the digest file increases the memory utilized on the host and could cause increased memory ballooning, impacting the overall performance of the host server.
In the test setup, the base image was 24GB and the replica image had a digest of 125MB. Each VM had a non-persistent disk of 4GB, which created a digest file of 32MB. If a server hosted 64 VMs and a replica disk, the total memory required for CBRC (assuming the maximum 2048MB is used for the CBRC cache) would be:
2048MB + 125MB + (64 × 32MB) = 4221MB
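The same estimate can be expressed as a one-line function. This is an illustrative sketch using the test-setup values above, not a VMware-supplied formula.

```python
# CBRC host memory estimate: cache ceiling + replica digest + per-VM digests.
def cbrc_memory_mb(cache_mb, replica_digest_mb, vm_digest_mb, vm_count):
    return cache_mb + replica_digest_mb + vm_count * vm_digest_mb

# Test-setup values: 2048MB cache cap, 125MB replica digest, 32MB per-VM digest.
print(cbrc_memory_mb(2048, 125, 32, 64))  # 4221 MB
```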
A Desktop Pool is a collection of desktops that is managed as a single entity by the View Administration
interface. View Desktop Pools allow administrators to group users depending on the type of service the
user requires. There are two types of pools – Automated Pools and Manual Pools.
In View, an Automated Pool is a collection of VMs cloned from a base template, while a Manual Desktop
pool is created by the View Manager from existing desktop sources, physical or virtual. For each desktop in
the Manual Desktop pool, the administrator selects a desktop source to deliver View access to the clients.
The following steps to configure each VMware Horizon View server for RSA SecurID authentication are carried out using the web-based View Administrator application.
1) Log into View Administrator using an administrator username and password.
2) From the View Administrator page, expand View Configuration and select Servers. Locate the list of View Connection Servers on the right-hand side, select the appropriate server, and click Edit.
3) Within the Edit View Connection Server Settings window locate and select the Authentication tab.
4) Under RSA SecurID 2-Factor Authentication, select the Enable checkbox.
5) Decide if RSA SecurID usernames must match usernames used in Active Directory. If they should be
forced to match, then select Enforce SecurID and Windows user name matching. In this case, the user
will be forced to use the same RSA SecurID username for Active Directory authentication. If this
option is not selected, the names are allowed to be different.
6) Upload the sdconf.rec file. Click Browse and select the sdconf.rec file, which was earlier exported from the RSA Authentication Manager. It is important that the sdconf.rec file imported is the correct file for this particular server.
NOTE: There is no need to restart VMware Horizon View after making these configuration changes. The
necessary configuration files for each View server are automatically distributed and the RSA SecurID
configuration takes effect immediately.
5.15 Scalability
This solution scales linearly by adding additional hosts to the existing pod. Each host can support up to 250 VDI workstations without loss of performance. When the cluster reaches six hosts, a new cluster should be added to the pod in accordance with VMware's best practices. All management remains centralized. The maximum theoretical VDI workstation limit is ~10,000.
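To make these scaling rules concrete, the short sketch below converts a target desktop count into the number of hosts and clusters required; the per-host and per-cluster figures come from this section, while the function itself is illustrative.

```python
import math

# Scaling rules from this section: up to 250 desktops per host, and a new
# cluster once the current cluster reaches 6 hosts.
def vdi_scale(desktops, desktops_per_host=250, hosts_per_cluster=6):
    hosts = math.ceil(desktops / desktops_per_host)
    clusters = math.ceil(hosts / hosts_per_cluster)
    return hosts, clusters

print(vdi_scale(100))    # (1, 1): the 100-desktop deployment described here
print(vdi_scale(10000))  # (40, 7): approaching the ~10,000 workstation ceiling
```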
• Multi-Factor Authentication:
http://en.wikipedia.org/wiki/Two-factor_authentication
• RSA SecurID:
http://en.wikipedia.org/wiki/SecurID
• VMware Optimization Guide for Windows 7 and Windows 8 Virtual Desktops in Horizon View:
http://www.vmware.com/techpapers/2010/optimization-guide-for-windows-7-and-windows-8-vir-10157.html
B Ballooning
A technique used in VMware ESXi to reclaim the guest memory pages that are considered the least
valuable by the guest operating system. This is accomplished using the vmmemctl driver, which is
installed as part of the VMware Tools suite.
C Clone
A copy of a virtual machine. See also Full Clone and Linked Clone.
Core
A processing unit. Often used to refer to multiple processing units in one package (a so-called “multi-
core CPU”). Also used by Intel to refer to a particular family of processors (with the “Core
microarchitecture”). Note that the Intel “Core” brand did not include the Core microarchitecture.
Instead, this microarchitecture began shipping with the “Core 2” brand.
D DirectPath I/O
A vSphere feature that leverages Intel VT-d and AMD-Vi hardware support to allow guest operating
systems to directly access hardware devices.
Full Clone
A copy of the original virtual machine that has no further dependence on the parent virtual machine.
See also Linked Clone.
G Growable Disk
A type of virtual disk in which only as much host disk space as is needed is initially set aside, and the
disk grows as the virtual machine uses the space. Also called thin disk. See also Preallocated Disk.
Guest
A virtual machine running within VMware Workstation. See also Virtual Machine.
H Heisenberg Compensator
A Heisenberg Compensator is a device which removes the uncertainty from subatomic
measurements, thereby making transporter travel feasible. The compensator works around the
problems caused by the Heisenberg Uncertainty Principle, allowing the transporter sensors to
compensate for their inability to determine both the position and momentum of the target particles
to the same degree of accuracy. This ensures the matter stream remains coherent during transport,
and no data is lost.
Hyper-Threading
A processor architecture feature that allows a single processor to execute multiple independent
threads simultaneously. Hyper-threading was added to Intel's Xeon and Pentium® 4 processors. Intel
uses the term “package” to refer to the entire chip, and “logical processor” to refer to each hardware
thread. Also called simultaneous multithreading (SMT).
L Linked Clone
A copy of the original virtual machine that must have access to the parent virtual machine’s virtual
disk(s). The linked clone stores changes to the virtual disk(s) in a set of files separate from the
parent’s virtual disk files. See also Full Clone.
M Memory Compression
One of a number of techniques used by ESXi to allow memory overcommitment.
NIC
Historically meant “network interface card.” With the recent availability of multi-port network cards,
as well as the inclusion of network ports directly on system boards, the term NIC is now sometimes
used to mean “network interface controller” (of which there might be more than one on a physical
network card or system board).
NIC Team
The association of multiple NICs with a single virtual switch to form a team. Such teams can provide
passive failover and share traffic loads between members of physical and virtual networks.
Nonpersistent Disk
All disk writes issued by software running inside a virtual machine with a nonpersistent virtual disk
appear to be written to disk, but are in fact discarded after the session is powered down. As a result,
a disk in nonpersistent mode is not modified by activity in the virtual machine. See also Persistent
Disk.
P Persistent Disk
All disk writes issued by software running inside a virtual machine are immediately and permanently
written to a persistent virtual disk. As a result, a disk in persistent mode behaves like a conventional
disk drive on a physical computer. See also Nonpersistent Disk.
Physical CPU
A processor within a physical machine. See also Virtual CPU.
Preallocated Disk
A type of virtual disk in which all the host disk space for the virtual machine is allocated at the time
the virtual disk is created. See also Growable Disk.
Socket
A connector that accepts a CPU package. With multi-core CPU packages, this term is no longer
synonymous with the number of cores.
Storage DRS
A vSphere feature that provides I/O load balancing across datastores within a datastore cluster. This
load balancing can avoid storage performance bottlenecks or address them if they occur.
Storage vMotion
A feature allowing running virtual machines to be migrated from one datastore to another with no
downtime.
T Template
A virtual machine that cannot be deleted or added to a team. Setting a virtual machine as a template
protects any linked clones or snapshots that depend on the template from being disabled
inadvertently.
Thick Disk
A virtual disk in which all the space is allocated at the time of creation.
Virtual Disk
A virtual disk is a file or set of files that appears as a physical disk drive to a guest operating system.
These files can be on the host machine or on a remote file system. When you configure a virtual
machine with a virtual disk, you can install a new operating system into the disk file without the need
to repartition a physical disk or reboot the host.
Virtual Machine
A virtualized x86 PC environment in which a guest operating system and associated application
software can run. Multiple virtual machines can operate on the same host system concurrently.
Virtual SMP
A VMware proprietary technology that supports multiple virtual CPUs (vCPUs) in a single virtual
machine.
Virtualization Overhead
The cost difference between running an application within a virtual machine and running the same
application natively. Since running in a virtual machine requires an extra layer of software, there is by
necessity an associated cost. This cost might be additional resource utilization or decreased
performance.
vMotion
A feature allowing running virtual machines to be migrated from one physical server to another with
no downtime.
VMware Tools
A suite of utilities and drivers that enhances the performance and functionality of your guest
operating system. Key features of VMware Tools include some or all of the following, depending on
your guest operating system: an SVGA driver, a mouse driver, the VMware Tools control panel, and
support for such features as shared folders, shrinking virtual disks, time synchronization with the
host, VMware Tools scripts, and connecting and disconnecting devices while the virtual machine is
running.
VMX Swap
A feature allowing ESXi to swap to disk some of the memory it reserves for the virtual machine
executable (VMX) process.
VMXNET
One of the virtual network adapters available in a virtual machine running in ESXi. The VMXNET
adapter is a high performance paravirtualized device with drivers (available in VMware Tools) for
many guest operating systems. See also Enhanced VMXNET, VMXNET3, E1000, vlance, and NIC
Morphing.
VMXNET Enhanced
One of the virtual network adapters available in a virtual machine running in ESXi. The Enhanced
VMXNET adapter is a high-performance paravirtualized device with drivers (available in VMware
Tools) for many guest operating systems. See also VMXNET, VMXNET3, E1000, vlance, and NIC
Morphing.
vSphere Client
A graphical user interface used to manage ESX/ESXi hosts or vCenter servers. Previously called the
VMware Infrastructure Client (VI Client).