
HPE Reference Architecture for Citrix XenApp on HPE Synergy Platform

Delivering cost-effective client virtualization with Citrix XenApp on VMware ESXi Server

Reference Architecture

Contents
Executive summary
Introduction
Solution overview
  Citrix software
Solution components
  Hardware
  Compute block
  Network block
  Management block
  Storage block
  AMD FirePro S7100X GPU
  Solution software
Best practices and configuration guidance for the solution
  Hardware configuration
  Software configuration
  ESXi iSCSI Multipath deployment for Nimble iSCSI Storage access
  Nimble Storage Management and iSCSI Storage Data access
  Nimble iSCSI Multipath for ESXi
  Microsoft Windows Server 2016 session environment
  AMD FirePro and Radeon Pro software
  AMD FirePro passthrough setting
Capacity and sizing
  Testing strategy
  Analysis and recommendations
Summary
Appendix A: Bill of materials
Resources and additional links

Executive summary
Delivering discrete applications from a centralized location, such as a data center, has been a common practice for many years. And today Citrix®
XenApp, one of the leading solutions for application virtualization, is found in a production deployment in just about every sizable enterprise. As
successful as application virtualization has been, the modern digital workplace has created new challenges for administrators due to the
consumerization of IT and increased end-user expectations for high-performance access from anywhere, on any device.

Fortunately, application virtualization technology has consistently advanced. Today, delivering not only apps, but full desktops with 3D graphics
is viable from a centralized environment. The development of graphics processing units (GPUs), from companies such as AMD®, boosts the end-
user graphics experience and expands the types of supported applications. Additionally, the maturation of solid state storage solutions delivers
fast and reliable performance at a much lower cost per seat than previously possible.

To meet IT environment management needs, Citrix has introduced Citrix Workspace Suite. It includes a broad toolset to manage the entire user
connection path - from user connectivity, to authentication, device management, application deployment, and monitoring. It provides IT
administrators the means to easily manage thousands of virtualized users.

Now nearly every user within an enterprise can benefit from the increased performance and ubiquitous access of a virtualized application and
desktop delivery model; and IT administrators can effectively drive improved data security, simplified management, and faster app delivery to a
larger population of users within their organization, including users who had not been ideal candidates for virtualization in the past.

In this Hewlett Packard Enterprise Reference Architecture (RA) we examined end users who require workstation class compute and graphic
rendering, and we conducted a Citrix XenApp Hosted Shared Desktops user density performance study on how their needs can be addressed on
HPE Synergy infrastructure.

This Reference Architecture demonstrates how HPE Synergy facilitates the delivery of Citrix XenApp in a cost-effective and highly manageable
fashion. HPE Synergy is an ideal platform for server-based computing deployments providing enhanced GPU acceleration for optimum user
experience. Using HPE Synergy Image Streamer with Citrix Provisioning Services (PVS), creates a simple way to manage server boot and user
configurations, leveraging multiple user configurations. With HPE Synergy Composer and the HPE OneView API, IT administrators can easily
change the deployment characteristics to meet their current needs.

In this RA, the Hewlett Packard Enterprise solution engineering team used an HPE Synergy system consisting of eight (8) HPE Synergy 480
Gen10 Compute Modules, six (6) HPE 480 Multi MXM Expansion Modules with AMD FirePro S7100X Server GPUs in passthrough mode, across
three (3) HPE Synergy 12000 Frames, an HPE Nimble CS3000 storage array to enable larger user-to-disk ratios than are possible with
traditional HDDs, and HPE FlexFabric 5940 top-of-rack switches that provide low latency and high performance resulting in robust end-user
experiences in high-traffic environments.

Citrix Provisioning Services (PVS) used in this RA can reduce network traffic between PVS clients and the HPE Synergy 480 Gen10 Compute
Modules thereby providing faster boot times during boot storms, and overall improved device performance in a Citrix XenApp 7.17 environment.
Target audience: This document is intended for IT decision makers and channel partners, as well as architects and implementation personnel
who want to understand the HPE Composable Infrastructure capabilities offered by the HPE Synergy platform. The reader should have a solid
understanding of end-user and graphic intensive applications, familiarity with the AMD Multiuser GPU with passthrough technology and Citrix
XenApp products, and an understanding of sizing/characterization concepts and limitations in client virtualization environments.

Introduction
One of the key drivers in the end-user computing market is end-user productivity. Users today expect a fully integrated and seamless experience
that integrates mobile and desktops with applications and connectivity to quickly perform business tasks.

HPE Synergy is an enterprise-level solution designed to be able to service all workloads. Hewlett Packard Enterprise storage, servers, and
networking provide the resilient and integrated infrastructure that meets the reliability, speed, and security needs of client infrastructure
administrators.

Client virtualization, desktop and application delivery, can vary based on use case requirements that range from task workers to workstation
users. Figure 1 below illustrates the client virtualization technology landscape as it exists today.

Figure 1. End-User landscape

The challenge for desktop and application virtualization has been to enable rich user experiences with graphics capabilities in a cost-effective
manner. Over the last 10 years, high-end applications with intensive graphics requirements were not compatible with virtualized environments.
They were expensive to develop and implement. In addition, the Graphics Processing Unit (GPU) resources could not be virtualized and had to
be dedicated to users or applications needing direct access to the GPU, making the solution expensive and difficult to scale. This is no longer true
with the HPE Synergy 480 Gen10 Compute Module via HPE Synergy 480 Multi MXM Expansion Module with up to 6 x AMD FirePro S7100X
GPUs. The GPUs can then be automatically or manually assigned to virtual machines on the VMware® ESXi host using the VMware vDGA
passthrough method, providing a rich user experience in a cost-effective manner.

The Reference Architecture demonstrates an architecture that facilitates the delivery of Citrix XenApp in a cost-effective and highly manageable
fashion. The purpose of this Reference Architecture is to deliver an experience to the broadest spectrum of multimedia-enabled end users with a
minimal set of compromises. HPE Synergy systems are uniquely architected as Composable Infrastructure (CI) to match the powerful
'infrastructure-as-code' capabilities of the HPE intelligent software architecture. Flexible access to compute, storage, and fabric resources allows
for use and repurposing. Linking multiple HPE Synergy Frames efficiently scales the infrastructure with a dedicated single view of the entire
management network.

HPE Synergy Frames, HPE Synergy Compute Modules, HPE Nimble CS3000 storage arrays, and HPE FlexFabric 5940 network switches reduce
complexity and can accelerate workload deployments to provide the resilient and integrated infrastructure that meets the reliability, performance,
and security needs of end-user computing architects. This drives IT efficiency as the business grows and delivers balanced performance across
resources to increase solution effectiveness.

Hewlett Packard Enterprise has tested this solution utilizing Citrix Provisioning Services (PVS) and Citrix XenApp Hosted Shared Desktops. HPE
successfully ran the Login VSI multimedia workload in order to showcase an integrated solution with the latest advancements in HPE
Composable Infrastructure and client virtualization technologies.

Figure 2 illustrates a high-level overview of Citrix XenApp, which enables users to securely access their apps and data from anywhere. The architecture combines Microsoft® Windows® app and desktop delivery from Citrix XenApp with network security from NetScaler, is designed on HPE Synergy, and can be leveraged by Hewlett Packard Enterprise customers and service provider partners to deliver solutions.

The Reference Architecture described in this document focuses on testing AMD GPU-enabled hosted shared desktops with graphics-virtualized workloads within the context of Citrix XenApp to demonstrate that they run as designed on HPE Synergy. All of the Citrix Hosted Shared Desktop resources, applications, and data were hosted on an HPE Nimble CS3000 iSCSI storage network and on compute modules within an HPE Synergy 12000 Frame.

While testing was limited to a Citrix XenApp Hosted Shared Desktops use case, HPE Synergy supports all use cases within the Citrix XenApp
7.17 architecture, regardless of provisioning method.

Figure 2. Solution architecture for Citrix XenApp on HPE Synergy (end users and branch offices connect through NetScaler SSL VPN gateways and load balancers in the DMZ to StoreFront and the Delivery Controllers, which broker a non-GPU-aware and a GPU-aware delivery group of XenApp Windows apps and data)

Solution overview
Citrix software
This Reference Architecture provides an overview of Citrix Hosted Shared Desktops virtualization features, and the ability to provide user
experiences via Citrix XenApp. The solution outlined offers secure, remote access deployed on an HPE Synergy multi-Frame Architecture. It is
not a step-by-step installation and configuration guide. The installation and configuration of the Citrix software layer can be understood by
consulting the documentation for Citrix Workspace Suite at citrix.com/products/xenapp-xendesktop.

Testing for this RA concentrated on the on-premises software that is testable by Login VSI. The tested pieces form the traditional core of the
Citrix XenApp offering. Citrix XenApp provides a unified framework for developing a solution comprised of virtual application resources. This
framework provides a better understanding of the technical architecture for the most common virtual application deployment scenarios.

For this Reference Architecture HPE utilized Citrix XenApp 7.17. Figure 3 describes at a high level the test environment for this Reference
Architecture and the general configuration of the solution deployed. HPE Synergy Image Streamer was used to boot VMware ESXi.
Figure 3. High-level architecture for HPE Synergy with Citrix XenApp 7.17 (the user layer reaches a NetScaler VPX Gateway and StoreFront in the access layer; the resource layer consists of delivery groups of PVS-provisioned hosted shared desktops with AMD graphics; the control layer comprises the Delivery Controllers, Director, Studio, License Server, and SQL database; the hardware layer spans three HPE Synergy 12000 Frames containing eight HPE Synergy 480 Gen10 compute modules (2 + 3 + 3) with AMD FirePro S7100X GPUs, all booting VMware ESXi via iSCSI, with the Citrix XenApp 7.17 infrastructure, hosted shared desktop data, and write cache on HPE Nimble CS3000 storage)

This Reference Architecture is based on a unified and standardized 5-layer model as shown in Figure 3. These layers are as follows:
• User Layer – This layer defines the unique user groups, endpoints, and locations for the solution. There are three distinct delivery groups corresponding to different sets of users defined in the solution.
• Access Layer – This layer defines how a user group gains access to its resources, including secure access policies and desktop/application stores. Users access a list of available resources through Citrix StoreFront. Users not on a protected network must establish an encrypted SSL tunnel across public network links to the NetScaler VPX Gateway, which is deployed within the DMZ of the network.
• Resource Layer – This layer defines the virtual applications, desktops, and data provided to each user group. It defines the graphics-enabled hosted shared desktops and applications, which are delivered from a hosted Microsoft Windows Server® 2016 operating system that is shared among multiple users at runtime.

• Control Layer –This layer defines the Citrix management layer which supports users accessing resources. The Delivery Controllers
authenticate users and enumerate resources from the StoreFront while creating, managing, and maintaining the virtual resources. All
configuration information about the Citrix XenApp site is stored within a SQL database.
• Hardware Layer – This layer defines the physical implementation of the overall solution. The corresponding hosts provide compute and
storage resources to the workloads hosted on the resource layer. One set of hosts centrally delivers virtual servers and virtual applications.

This Reference Architecture leveraged AMD FirePro S7100X GPU passthrough for offloading graphic processing operations from physical CPUs
for Windows Server 2016 hosted shared desktops created by Citrix PVS.

Citrix Provisioning Services minimizes I/O to disk, which can lower the amount of storage required, while graphics processing cycles are offloaded from the physical CPUs to the GPUs.

This Reference Architecture leverages dual Citrix Delivery Controller, Provisioning Services, and StoreFront servers, hosted on two dedicated HPE Synergy 480 Compute Modules for redundancy.

The following Citrix software components were leveraged as part of the testing performed.
• Citrix Provisioning Services (PVS): Citrix PVS allows for the streaming of a single shared vDisk image, rather than copying images to
individual machines. Citrix Provisioning Services enables organizations to reduce the number of disk images that they manage. Even as the
number of machines continues to grow, Citrix PVS provides centralized management and offers distributed processing. An administrator can
update a single image and this update is reflected in all disks associated with that image.
This Reference Architecture leverages PVS with the RAM Overflow to Nimble Storage option turned on in order to stream Microsoft Windows
Server 2016 hosted shared desktops.
• Citrix StoreFront: Citrix StoreFront allows internal users to access Citrix XenDesktop or Citrix XenApp either directly through Citrix Receiver
or via the Citrix StoreFront web page by offering a complete list of available resources for each user. It also allows users to mark certain
applications as favorites which makes them appear prominently to the end user. The subscriptions are synchronized to the other StoreFront
servers automatically. Upon successful authentication, StoreFront contacts the Delivery Controller to receive a list of available resources
(desktops and/or applications) for the user to select. Redundant StoreFront servers should be deployed to provide N+1 redundancy where, in
the event of a failure, the remaining servers have enough spare capacity to fulfill any user access requests.
• Citrix Delivery Controller: The Delivery Controller is the server-side component that is responsible for managing user access, as well as
brokering and optimizing connections. Controllers also provide Citrix Machine Creation Services and Citrix Provisioning Services which create
desktop and server images. Each Controller communicates directly with the site database. In a site with more than one zone, the controllers in
every zone communicate with the site database in the primary zone. Redundant Delivery Controller servers should be deployed to provide
N+1 redundancy where, in the event of a failure, the remaining servers have enough spare capacity to fulfill any user access requests.
• Citrix NetScaler: The NetScaler VPX software delivers reliable application availability, comprehensive L4-L7 load balancing, robust performance optimization, and secure remote access. It adds advanced traffic management, clustering support, stronger security features, extended optimizations, SSO, expanded application acceleration capabilities, and enhanced management and visibility.

Solution components
Hardware
HPE Synergy systems are uniquely architected as Composable Infrastructure (CI) to match the powerful 'infrastructure-as-code' capabilities of
the HPE intelligent software architecture. Flexible access to compute, storage, and fabric resources allows for use and repurposing. The
combination of hardware flexibility with embedded intelligence enables auto-discovery of all available resources for quick deployment and use.
Management of hardware by profiles defined in software allows fast repurposing of compute, storage, and fabric resources to meet workload
demands.

This Reference Architecture focuses on deploying a Citrix XenApp 7.17 graphics-enabled hosted shared desktop environment on VMware
vSphere 6.5. The hardware is viewed as “blocks” of functionality and technology segmentation, namely compute, management, network, and
storage blocks. This Reference Architecture can be viewed as a series of building blocks which are summarized below.

Tested solutions save you time and resources compared to the do-it-yourself approach. This helps reduce deployment risk and can help lower
total cost of ownership. The HPE Composable Infrastructure gives you the foundation to successfully deliver client virtualization solutions to a
wide variety of users across your IT environment. Designed as modular, repeatable, and scalable building blocks, HPE Synergy can easily
integrate into your existing virtualization environment.

Figure 4 depicts the physical layout of the tested configuration.

Figure 4. HPE Synergy sample configuration with Multi MXM Expansion Modules (42U rack with 2 x HPE FlexFabric 5940 top-of-rack switches, HPE Nimble CS3000 storage, and three HPE Synergy 12000 Frames with redundant HPE Synergy Composers and HPE Synergy Image Streamers; 2 x HPE Synergy 480 Gen10 compute modules hold the Citrix XenApp 7.17 infrastructure VMs, and 6 x HPE Synergy 480 Gen10 compute modules with HPE Synergy 480 Multi MXM Expansion Modules, each carrying 6 x AMD FirePro S7100X GPUs, hold the Citrix XenApp 7.17 graphics-enabled hosted applications)

Compute block
The main compute block for this solution is comprised of six (6) HPE Synergy 480 Gen10 Compute Modules plus HPE Synergy 480 Multi MXM
Expansion Modules, with six AMD FirePro GPU graphics cards in each expansion module running within the context of an HPE Synergy 12000
Frame.

Descriptions of all of the components appear in the sections that follow.



HPE Synergy 12000 Frame


The HPE Synergy 12000 Frame is the base infrastructure that ties together compute, storage, network fabric, and power into a scalable solution
that easily addresses and scales with various customer workloads and infrastructure. The Synergy 12000 Frame reduces complexity in the IT
infrastructure by unifying all these resources into a common bus, and with the myriad of available network and storage interconnects, allows the
frame to interoperate with any other IT environment. At a high level the Synergy Frame supports the following:

• 12 half-height or 6 full-height compute modules per frame. The HPE Synergy design allows for the inclusion of double-wide modules as well
as support for internal storage with the HPE Synergy D3940 Storage Module
• Two Frame Link Modules for in-band and out-of-band management
• Up to six 2650 watt power supplies and ten fans
• Up to six interconnect modules for full redundancy of three fabrics

The HPE Synergy 12000 features a fully automated and managed composer module. HPE OneView handles all the setup, provisioning, and
management both at the physical and logical level.
HPE Synergy Composer
HPE Synergy Composer is a hardware management appliance that is powered by HPE OneView. The HPE Synergy Composer provides a single
interface for assembling and reassembling flexible compute, storage, and fabric resources to support business‐critical applications and a variety
of workloads, whether they are bare metal, virtualized, or containerized.

The HPE Synergy Composer provides lifecycle management to deploy, monitor, and update your infrastructure using a single interface or the
Unified API. IT departments can rapidly deploy infrastructure for traditional, virtualized, and cloud environments in just a few minutes -
sometimes in a single step. Resources can be updated, expanded, flexed, and redeployed without service interruptions. Key features of the HPE
Synergy Composer are:

• Simplify deployment and configuration of resources in your environment


• Accelerate updates using templates
• Automate applications and workloads using the Unified API
• Designed for high availability using redundant physical appliances
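
As an illustration of the Unified API mentioned above, the short sketch below authenticates to HPE Synergy Composer (HPE OneView 4.0) over REST and lists the compute modules it manages. It is a minimal example under assumptions: the appliance address and credentials are hypothetical placeholders, and the X-API-Version value should be matched to the installed OneView release.

```python
import requests

ONEVIEW = "https://composer.example.local"      # hypothetical Composer address
HEADERS = {"X-API-Version": "600", "Content-Type": "application/json"}

# Log in once and attach the returned session token to later requests
session = requests.post(f"{ONEVIEW}/rest/login-sessions",
                        json={"userName": "administrator", "password": "changeme"},
                        headers=HEADERS, verify=False)
HEADERS["Auth"] = session.json()["sessionID"]

# Enumerate the compute modules that Composer currently manages
hardware = requests.get(f"{ONEVIEW}/rest/server-hardware",
                        headers=HEADERS, verify=False).json()
for module in hardware.get("members", []):
    print(module["name"], module["model"], module["powerState"])
```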

HPE Synergy Image Streamer


HPE Synergy Image Streamer is a management appliance option for the HPE Synergy solution that is used to deploy stateless compute modules
within the HPE Synergy environment. The HPE Synergy Image Streamer solution offers a stateless deployment experience for bare-metal
compute modules by managing and maintaining the software state (operating system and settings) separate from the physical state (firmware,
BIOS settings, etc.). Boot volumes for the compute modules are hosted and maintained on the HPE Synergy Image Streamer appliance as iSCSI
boot volumes. Image Streamer uses scripts and build plans to generalize and personalize the OS boot volumes during capture and deployment.

HPE Synergy Image Streamer adds a powerful dimension to “infrastructure as code” - the ability to manage physical servers like virtual machines.
In traditional environments, deploying an OS and applications or hypervisor is time-consuming because it requires building or copying the
software image onto individual servers, possibly requiring multiple reboot cycles. In HPE Synergy, the tight integration of HPE Synergy Image
Streamer with HPE Synergy Composer enhances server profiles with images and personalities for true stateless operation.

HPE Synergy Composer, powered by HPE OneView, captures the physical state of the server in the server profile. HPE Synergy Image Streamer
enhances this server profile (and its desired configuration) by capturing your golden image as the “deployed software state” in the form of
bootable image volumes. These enhanced server profiles and bootable OS images, plus application images are software structures (infrastructure
as code) - no compute module hardware is required for these operations. The bootable images are stored on redundant HPE Synergy Image
Streamer appliances, and they are available for fast implementation onto multiple compute modules at any time. This enables bare-metal
compute modules to boot directly into a running OS with applications, and multiple compute modules to be quickly updated.
HPE Image Streamer:

• Manages physical servers like virtual machines


• Enables true stateless operation by capturing software (OS and settings) state separate from the hardware (firmware, BIOS) state

• Deploys, updates, and rolls back compute images rapidly for multiple compute modules
• Enables automation via Unified API

Figure 5 depicts the HPE Synergy Composer, Image Streamer, compute, and server profile configuration.

Figure 5. Server profile for HPE Synergy 480 Compute Module

HPE Synergy 480 Compute Module


The HPE Synergy 480 Compute Module delivers superior capacity, efficiency, and flexibility in a two-socket, half-height, single-wide form factor to support demanding workloads. Powered by the Intel® Xeon® Scalable processor family, with up to 3TB of DDR4 memory, expanded storage capacity and controller options, and a variety of GPU options within a composable architecture, the HPE Synergy 480 Gen10 Compute Module is an ideal platform for general-purpose enterprise workloads now and in the future.

• The most secure server with exclusive HPE Silicon Root of Trust, protecting your applications and assets against downtime associated with hacks and viruses.
• More customer choice for greater performance and flexibility with the Intel Xeon Scalable processor family on the Synergy 480 Gen10 architecture.
• Intelligent System Tuning with processor smoothing and workload matching to improve processor throughput/overall performance by up to 8% over the previous generation.
• Maximum memory of 3TB for large in-memory database and analytics applications.
• New hybrid Smart Array for both RAID and HBA zoning in a single controller; internal M.2 storage options add boot flexibility and additional local storage capacity.

HPE Virtual Connect SE 40Gb F8 Module for HPE Synergy


The HPE Virtual Connect SE 40Gb F8 Module is the master module of the composable fabric and is designed for Composable Infrastructure. Its disaggregated, rack-scale design uses a master/satellite architecture to consolidate data center network connections, reduce hardware, and scale network bandwidth across multiple HPE Synergy 12000 Frames. The HPE Virtual Connect SE 40Gb F8 Module for HPE Synergy eliminates network sprawl at the edge with one device that converges traffic inside the HPE Synergy 12000 Frames and connects directly to external LANs.

HPE Synergy 20Gb Interconnect Link Module


The HPE Synergy 20Gb Interconnect Link Module (satellite module) is designed for Composable Infrastructure. Based on a disaggregated, rack-
scale design, it uses a Master/Satellite architecture to consolidate data center network connections, reduce hardware and scale network
bandwidth across multiple HPE Synergy 12000 Frames.

Network block
This Reference Architecture is built on HPE FlexFabric 5940 network switches configured redundantly as a stacked network, which functioned as intended during solution testing. To provide high availability and maximum performance, VMware vSwitches with multiple active vmnics were created. Figure 6 depicts the logical network design, from the HPE Synergy 480 Gen10 Compute Modules to the HPE top-of-rack switches.
Figure 6. Network layout from the compute block to top of rack (each VMware ESXi host on an HPE Synergy 480 Gen10 compute module uses a NIC team for the Mgmt and Prod port groups and dedicated VMkernel ports for iSCSI A and iSCSI B; traffic flows through redundant HPE Virtual Connect SE 40Gb F8 modules to the stacked pair of HPE FlexFabric 5940 switches and on to the HPE Nimble CS3000 controller iSCSI A/B data interfaces, carrying the Citrix PVS, Delivery Controller, StoreFront, shared desktop, and VMware vCenter VMs)

Solution, production, and storage network VLANs and configuration


HPE OneView manages the infrastructure and is used to define the network and infrastructure related components. Several VLANs are defined
to segment and isolate traffic including separate networks for the management of hardware and hypervisors, end-user productivity resources
and VM migration. An Active/Active configuration is used throughout the network. Brief descriptions of the VLANs follow.
Solution management network (VLAN 21)
The network used for solution management connects all physical components managed by HPE Synergy Composer and HPE Nimble Storage
management components. This network has its own domain/DNS infrastructure and is isolated from users.
Production network (VLAN 223)
The production network is dedicated to virtual desktops, virtual apps, file shares, and user data. This solution leverages a Login VSI environment
hosted on a separate infrastructure in order to generate load by emulating end users. These virtual end users also reside on the production
network. This network is represented by a single VLAN during test but might be tied to a number of VLANs in a production environment.
Storage network (VLANs 22 and 23)
The storage network is dedicated to providing access to the HPE Nimble iSCSI storage, which houses the virtual machines running Citrix software, VMware infrastructure, and end-user applications, file shares, and data. This solution uses two iSCSI network segments (VLANs 22 and 23) so that the VMware ESXi iSCSI software initiator can reach the Nimble iSCSI storage target through dedicated VMkernel interfaces.
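
Because these networks are defined in HPE OneView, their creation can also be scripted. The sketch below is illustrative only: it reuses the session setup from the earlier OneView example, the network names are hypothetical, and the exact `type` string depends on the X-API-Version in use.

```python
import requests

ONEVIEW = "https://composer.example.local"      # hypothetical Composer address
HEADERS = {"X-API-Version": "600", "Content-Type": "application/json"}

login = requests.post(f"{ONEVIEW}/rest/login-sessions",
                      json={"userName": "administrator", "password": "changeme"},
                      headers=HEADERS, verify=False)
HEADERS["Auth"] = login.json()["sessionID"]

# One tagged Ethernet network per VLAN used by the solution (names are hypothetical)
networks = [("Solution-Mgmt", 21), ("iSCSI-A", 22), ("iSCSI-B", 23), ("Production", 223)]
for name, vlan in networks:
    body = {
        "name": name,
        "vlanId": vlan,
        "ethernetNetworkType": "Tagged",
        "purpose": "General",
        "smartLink": True,
        "privateNetwork": False,
        "type": "ethernet-networkV4",   # assumption: adjust to the API version in use
    }
    requests.post(f"{ONEVIEW}/rest/ethernet-networks",
                  json=body, headers=HEADERS, verify=False)
```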

Management block
The management block of the solution is comprised of two HPE Synergy 480 Gen10 Compute Modules that host Citrix XenApp and VMware
infrastructure VMs for management of the solution. The solution management software stack includes HPE Synergy Composer, HPE Nimble
Management WebUI, VMware vCenter, and Citrix XenApp management components. The core enterprise services, such as Active Directory, are
hosted outside of this solution stack.

Note
The solution utilized a Microsoft Windows Storage Server with Failover Clustering enabled connected to HPE Nimble iSCSI Storage for hosting
end-user profiles and data.

Storage block
This Reference Architecture utilizes a 2-node HPE Nimble CS3000 storage array. The HPE Nimble CS3000 iSCSI storage is connected to the HPE
Synergy 480 Gen10 Compute Modules via HPE Virtual Connect and HPE FlexFabric 5940 switching. HPE Nimble virtual volumes are created to
store Citrix and VMware infrastructure VMs, Citrix XenApp Provisioning Services Master Images, Citrix Hosted Shared Desktops with PVS
Overflow Cache disk as well as hosting external Microsoft file services to store end-user data for the solution. Table 1 highlights the configuration
of the volumes utilized for testing.

Note
HPE Nimble Storage uses Triple Parity RAID (or RAID-3P), which allows for greater protection of your data in a drive failure scenario with no impact on performance or usable capacity.

Table 1. Storage volumes used for testing


Volume Name | Volume Function | Volume Size | ESXi Cluster Datastore Name
Hosted Shared Desktops Volume | End-user Windows Server 2016 desktops with applications and write cache hosted on storage | 500GB | Hosted-Shared-Desktop-Vol
PVS Image Virtual Volume | End-user read-only template with pre-installed and configured operating system and applications, assigned on a per-user basis during access of hosted shared desktops | 250GB | PVS-Vol
Citrix and VMware Infrastructure Virtual Volume | Citrix Provisioning Services virtual machines hosted on storage | 500GB | Mgmt-Vol
Citrix User Data Volume | End-user profiles and data hosted on storage | 500GB | N/A (hosted on HPE StoreEasy server)

Figure 7 depicts the storage logical design for deploying the solution.

Figure 7. HPE Nimble iSCSI storage solution design (the Hosted Shared Desktop, PVS Image, Citrix Infrastructure, and Citrix User Data volumes are laid out by the Cache Accelerated Sequential Layout (CASL) file system as logical disks sequentially striped across the SAS drives of the HPE Nimble CS3000)

AMD FirePro S7100X GPU


The AMD FirePro S7100X Server GPU is part of AMD's family of hardware-based virtualized GPUs, bringing the power of a physical GPU to virtual environments and letting users effortlessly run professional applications. Equipped with 8GB of GDDR5 memory, the FirePro S7100X Server GPU can accelerate applications and process computationally complex workflows with ease. It enables consistent, predictable, and secure performance from a virtualized desktop with a workstation-class user experience.

Note
Graphics solution sizing is highly application centric which results in a wide range of sizing scenarios. AMD has performed extensive testing on a
variety of use cases. For more information, search for sizing information at amd.com/mxgpu.

Solution software
Software for this Reference Architecture is segmented primarily into solution software and management software.

Management software
This layer is comprised of the software that IT administrators will use to manage the environment. HPE Synergy utilizes the software outlined in
Table 2, below.
Table 2. Management server software specifications
Management software | Version
VMware vCenter | 6.5
HPE OneView | 4.0

Solution software
This layer is comprised of the software resources that create end-user experiences. This includes Citrix software as well as the individual
applications that make up the end-user virtual machines. Table 3 describes the versions of Citrix and Microsoft software used in the creation of
this Reference Architecture.
Table 3. Citrix and Microsoft software
Software | Version
Citrix XenApp | 7.17
Citrix PVS | 7.17
Citrix NetScaler VPX | 12.0
Citrix Virtual Desktop Agent | 7.17
Radeon Pro and AMD FirePro | 18.Q1
Microsoft SQL Server | 2016
Citrix Receiver | 4.11

Virtual machines
In addition to the pre-installed solution management VMs, several virtual machines have to be created as part of the Citrix software
infrastructure.

Table 4 details the configuration of each of these VMs. VM counts varied based on the test conducted and will vary widely based on the
customer environment and image configuration. As such, the master image configuration is included in the table but no counts are given for the
total VMs deployed from that image.
Table 4. Virtual machine specifications during testing
Virtual Machine (VM) | vCPU | Memory | ESXi Datastore | Storage Size | Networks | Number of VMs | Operating System (OS)
VMware vCenter | 8 | 24GB | Mgmt-Vol | 300GB | Management | 1 | OVA
Citrix Delivery Controller | 4 | 8GB | Mgmt-Vol | 60GB | Management | 2 | Windows Server 2016 Standard
Citrix StoreFront Server | 4 | 8GB | Mgmt-Vol | 60GB | Management, Production | 2 | Windows Server 2016 Standard
Citrix Provisioning Services | 6 | 12GB | Mgmt-Vol, PVdisk-Vol | 100GB, 250GB | Management, Production, iSCSI | 2 | Windows Server 2016 Standard
Citrix NetScaler VPX | 2 | 2GB | Mgmt-Vol | 20GB | Management | 2 | OVA
Citrix License Server | 2 | 4GB | Mgmt-Vol | 40GB | Management | 1 | Windows Server 2016 Standard
Microsoft Windows Server 2016 (Image Template) | 4 | 4GB | Hosted-Shared-Desktop-Vol | 40GB | Production | 1 | Windows Server 2016 Standard
Citrix Hosted Shared Desktops | 12 | 30GB | Hosted-Shared-Desktop-Vol | 1000GB | Production | 36 | Windows Server 2016 Standard

Best practices and configuration guidance for the solution


The success of any end-user computing deployment greatly depends on a robust and fully thought out evaluation plan. Service personnel and IT
shops must be clear about desired outcomes prior to beginning an evaluation. This plan will influence how your hardware and software stacks
will be procured, configured, and tuned. During the development of the evaluation plan for infrastructure deployment, best practices must be
considered for each deployment phase and technology area.

Below are the best practices that Hewlett Packard Enterprise utilized in the testing of this solution.

Hardware configuration
ESXi deployment using Image Streamer
The first step in an HPE Image Streamer deployment is building the golden image from a reference host operating system. At a high level the
steps for deploying ESXi using Image Streamer are:

1. Create server profiles to be used by the HPE Synergy 480 Compute Modules.
   a. Create an empty volume and deploy the server profile.
   b. The OS deployment plan creates an empty volume on the HPE Synergy Image Streamer local storage.
   c. The empty 20 GB volume is mapped to the server profile as an iSCSI volume, onto which VMware ESXi 6.5 will later be installed.
2. Create a golden image for installing VMware ESXi 6.5 on HPE Synergy 480 Compute Modules.
   a. Assign the server profile created in step 1 and power on the server.
   b. Install ESXi 6.5 through iLO onto the iSCSI disk backed by the HPE Synergy Image Streamer OS volume.
   c. Log in to ESXi with user credentials and configure a basic IP address so the HPE Nimble iSCSI multipath software can be downloaded over the network. The download link is below (an HPE Passport account is required to access the website).
   https://infosight.hpe.com/tenant/Nimble.Tenant.0018000000ovsdmAAA/resources/nimble/software/Integration%20Kits/HPE%20Nimble%20Storage%20Connection%20Manager%20(NCM)%20for%20VMware
   d. Shut down the OS.
3. Create deployment and build plans for Image Streamer to install the VMware ESXi 6.5 OS on HPE Synergy 480 Compute Modules.
   a. Choose a build plan for customization. We used the HPE-ESXi-2017-10-06-v3.0 build plan, which was available from the HPE-ESXi artifact bundle at the link below.
   https://github.hpe.com/ImageStreamer/image-streamer-esxi/tree/v3.1/artifact-bundles
   b. Add the HPE-ESXi artifact bundle under HPE Image Streamer Deployment.
   c. Find the golden image OS volume created in step 2 and select the OS build plan extracted from the HPE-ESXi-2017-10-06-v3.0 artifact bundle in step 3a.
   d. On the Image Streamer Golden Image screen, select "Create Golden Image" and specify a name ("ESXi Image"), description, OS volume, and capture OS build plan.
4. Create the server profile template for HPE Image Streamer to deploy the ESXi 6.5 OS on HPE Synergy 480 Compute Modules.
   a. Under the OS deployment plan, choose the deployment build plan that was created in step 3.
   b. Under Connections, create two NICs for Management and Production and two NICs for Nimble iSCSI storage, with their respective VLAN IDs.
   c. Under the Deployment setting, supply the domain name, host name, management NIC IP address, and password.
   d. Under default BIOS settings, configure the settings to take full advantage of the virtualization capabilities of the HPE Synergy 480 Compute Modules and optimize server performance. For a high-end graphics VDI configuration, the BIOS settings were tuned to maximize performance and user experience. See Table 5 below for the BIOS settings.
   e. Create the server profile template for deploying across the eight HPE Synergy 480 Compute Modules; a sketch of automating this step through the HPE OneView REST API follows this list.
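
Step 4e can also be driven programmatically through the HPE OneView REST API exposed by HPE Synergy Composer. The sketch below is a minimal illustration under assumptions: the Composer address, credentials, template name, and profile names are hypothetical placeholders, and the filter expressions and API version should be checked against the installed OneView release.

```python
import requests

ONEVIEW = "https://composer.example.local"      # hypothetical Composer address
HEADERS = {"X-API-Version": "600", "Content-Type": "application/json"}

# Authenticate and reuse the session token for subsequent calls
login = requests.post(f"{ONEVIEW}/rest/login-sessions",
                      json={"userName": "administrator", "password": "changeme"},
                      headers=HEADERS, verify=False)
HEADERS["Auth"] = login.json()["sessionID"]

# Look up the server profile template created in step 4 (name is hypothetical)
tmpl = requests.get(f"{ONEVIEW}/rest/server-profile-templates?filter=\"name='ESXi-XenApp-Template'\"",
                    headers=HEADERS, verify=False).json()["members"][0]

# Ask OneView for a new-profile skeleton pre-populated from the template
skeleton = requests.get(f"{ONEVIEW}{tmpl['uri']}/new-profile",
                        headers=HEADERS, verify=False).json()

# Assign one profile per unprovisioned HPE Synergy 480 Gen10 compute module
servers = requests.get(f"{ONEVIEW}/rest/server-hardware?filter=\"state='NoProfileApplied'\"",
                       headers=HEADERS, verify=False).json()["members"]
for index, server in enumerate(servers[:8], start=1):
    profile = dict(skeleton)
    profile["name"] = f"ESXi-XenApp-{index:02d}"
    profile["serverHardwareUri"] = server["uri"]
    requests.post(f"{ONEVIEW}/rest/server-profiles",
                  json=profile, headers=HEADERS, verify=False)
```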

Server tuning
The default BIOS tunings for the Synergy compute modules were altered. Table 5 below shows the BIOS settings used for the compute modules
in this Reference Architecture.
Table 5. Server BIOS tunings
BIOS Setting | Value
Power Management (Power Profile) | Maximum Performance
Power Management (Power Regulator) | Static High Performance Mode (Default)
Minimum Processor Idle Power Core C-State | No C-states
Minimum Processor Idle Package C-State | No Package States
Intel QPI Link Power Management | Disabled
Energy Performance BIOS | Maximum Performance
QPI Bandwidth Optimization (RTID) | Optimized for I/O (Alternate RTID)
Hyper-Threading | Enabled
Intel Turbo Boost | Enabled

Software configuration
ESXi iSCSI Multipath deployment for Nimble iSCSI Storage access
VMware vSphere offers iSCSI Multipathing that provides both high availability and load distribution for Nimble iSCSI storage targets. This
solution utilized software iSCSI adapters connected to dedicated VMkernel ports within ESXi bound to individual vmnics.

To provide high availability for Nimble iSCSI storage access at the ESXi host level, a dedicated vSwitch is defined with an iSCSI VMkernel port
group and two vmnic interfaces with active/unused failover policy enabled.

Figure 8 shows a screenshot of the ESXi host-level iSCSI software adapter bound to VMkernel, for carrying iSCSI traffic to the HPE Nimble iSCSI
storage target.

Figure 8. iSCSI software adapter network port binding



Figure 9 shows a screenshot of the ESXi host vSwitch iSCSI port group teaming and failover policy settings. This configuration needs to be repeated for each port group shown in Figure 8, ensuring that a different physical network adapter is set to active for each port group.

Figure 9. vSwitch iSCSI adapters failover order policy
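
The port group, VMkernel, and adapter binding configuration shown in Figures 8 and 9 can also be reproduced from the ESXi command line. The following Python sketch wraps esxcli and is illustrative only: the vmnic, vmk, software iSCSI adapter (vmhba64), IP addressing, and VLAN values are assumptions that must be matched to the actual host, and VLAN tagging may instead be handled upstream by the Virtual Connect fabric.

```python
import subprocess

def esxcli(*args):
    """Run an esxcli command on the ESXi host and fail loudly on errors."""
    subprocess.run(["esxcli", *args], check=True)

# Dedicated vSwitch with both storage uplinks
esxcli("network", "vswitch", "standard", "add", "--vswitch-name=vSwitch-iSCSI")
for nic in ("vmnic2", "vmnic3"):
    esxcli("network", "vswitch", "standard", "uplink", "add",
           "--vswitch-name=vSwitch-iSCSI", f"--uplink-name={nic}")

# One port group plus one VMkernel port per iSCSI fabric
for pg, vlan, vmk, ip, nic in (("iSCSI-A", "22", "vmk1", "192.168.22.11", "vmnic2"),
                               ("iSCSI-B", "23", "vmk2", "192.168.23.11", "vmnic3")):
    esxcli("network", "vswitch", "standard", "portgroup", "add",
           f"--portgroup-name={pg}", "--vswitch-name=vSwitch-iSCSI")
    esxcli("network", "vswitch", "standard", "portgroup", "set",
           f"--portgroup-name={pg}", f"--vlan-id={vlan}")
    # Only one active uplink per port group; the other vmnic is left out of the active list
    esxcli("network", "vswitch", "standard", "portgroup", "policy", "failover", "set",
           f"--portgroup-name={pg}", f"--active-uplinks={nic}")
    esxcli("network", "ip", "interface", "add",
           f"--interface-name={vmk}", f"--portgroup-name={pg}")
    esxcli("network", "ip", "interface", "ipv4", "set",
           f"--interface-name={vmk}", f"--ipv4={ip}",
           "--netmask=255.255.255.0", "--type=static")

# Enable the software iSCSI adapter and bind both VMkernel ports to it
esxcli("iscsi", "software", "set", "--enabled=true")
for vmk in ("vmk1", "vmk2"):
    esxcli("iscsi", "networkportal", "add", "--adapter=vmhba64", f"--nic={vmk}")
```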

Nimble Storage Management and iSCSI Storage Data access


To enable dedicated iSCSI data and management access on Nimble Storage, it is recommended to use the 1Gb Ethernet interfaces, eth1 and eth2, for management traffic, and the 10Gb Ethernet interfaces, tg1 and tg3, for iSCSI data storage. Make sure that a private (non-routable) address set is used for connectivity between the ESXi VMkernel ports and the Nimble Storage controller (active/passive) data access interfaces for traffic isolation.

Figure 10 shows a screenshot of the Nimble Ethernet interfaces dedicated to carrying management and iSCSI data traffic.

Figure 10. Rear view of HPE Nimble CS3000 storage active front-end ports

Nimble iSCSI Multipath for ESXi


To enable iSCSI multipathing between the ESXi server and the Nimble iSCSI storage system, which provides the ability to load-balance across multiple paths for performance and to handle the failure of a path at any point between the server, network, and storage, it is recommended to select the Nimble path selection policy (PSP) in iSCSI multipathing environments.

Figure 11 shows selecting the Nimble Path selection policy (PSP).

Figure 11. Manage Path selection policy in vCenter 6.5 under ESXi host Configure > Storage Devices > Device Details > Properties > Multipathing Policies

Figure 12 shows a CLI view of the enabled Nimble Path selection policy (PSP).

Figure 12. CLI view for ESXi host path selection policy
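
The same policy can be applied from the ESXi shell. The sketch below assumes the HPE Nimble Connection Manager is already installed, that its path selection policy is registered as NIMBLE_PSP_DIRECTED, and that Nimble volumes surface as eui.* devices; adjust the device filter to your environment.

```python
import subprocess

def esxcli(*args):
    """Run an esxcli command on the ESXi host and return its output."""
    result = subprocess.run(["esxcli", *args], check=True, capture_output=True, text=True)
    return result.stdout

# Device headers in 'esxcli storage nmp device list' output are not indented;
# Nimble volumes are assumed to appear as eui.* identifiers.
nimble_devices = [line.strip()
                  for line in esxcli("storage", "nmp", "device", "list").splitlines()
                  if line and not line[0].isspace() and line.startswith("eui.")]

for device in nimble_devices:
    esxcli("storage", "nmp", "device", "set",
           f"--device={device}", "--psp=NIMBLE_PSP_DIRECTED")

# Re-list the devices to confirm the active path selection policy
print(esxcli("storage", "nmp", "device", "list"))
```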

Microsoft Windows Server 2016 session environment


Configure the hardware graphics renderer group policy
To enable hardware-based virtualization for all Remote Desktop Services sessions on Microsoft Windows Server 2016, the local group policy
should be enabled with the hardware graphics renderer instead of the Microsoft Basic Render Driver as the default adapter.
The local group policy Remote Session Environment can be found under Local Computer Policy > Computer Configuration > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host > Remote Session Environment.

Figure 13 shows the Microsoft Windows 2016 group policy setting used to enable hardware graphics renderer for Remote Desktop Services
sessions.

Figure 13. Group policy setting to enable hardware GPU graphics rendering for RDS sessions
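
Where editing the local group policy through the GUI is impractical, for example when preparing the PVS master image, the equivalent setting can be written to the registry. This sketch is an assumption-based illustration: the policy is commonly reported to map to the bEnumerateHWBeforeSW value under the Terminal Services policy key, and the script must run elevated on the Windows Server 2016 image.

```python
import winreg

# Policy key for Remote Desktop Session Host settings delivered via local group policy
KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    # 1 enables "Use hardware graphics adapters for all Remote Desktop Services sessions"
    # (value name is an assumption; verify against the ADMX template in use)
    winreg.SetValueEx(key, "bEnumerateHWBeforeSW", 0, winreg.REG_DWORD, 1)
```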

AMD FirePro and Radeon Pro software


To utilize AMD GPUs in passthrough mode from the Guest Citrix XenApp VM, download the Radeon Pro Software, Enterprise Edition for
Microsoft Windows Server 2016, 64-bit platforms from the following link.
http://support.amd.com/en-us/download/workstation?os=Windows%20Server%202016%20-%2064#pro-driver

AMD FirePro passthrough setting


Hardware-based virtualization enables workstation-grade AMD graphics acceleration using the PCI passthrough method. This eliminates proprietary and complex software from the hypervisor and allows each VM to use native Radeon Pro drivers, with natural compatibility and access to all GPU and compute functions on the server. Each physical GPU can be dedicated in passthrough mode directly to a VM supporting users of hosted desktops and applications.

The table below shows the Citrix XenApp Windows 2016 hosted desktop virtual machine CPU, memory, and GPU settings.
Table 6. Windows 2016 Hosted Desktop VM resource specifications
User type | vCPU | System memory (GB) | Number of PCI devices (passthrough) | GPU memory per server (GB)

Multimedia | 12 | 30 | 1 | 8

Note
Each physical AMD FirePro S7100X GPU was assigned in ESXi vDGA passthrough mode directly to the Citrix XenApp Windows Server 2016 hosted
server VMs to render graphics-related tasks.

For detailed configuration information, refer to amd.com/Documents/MxGPU-Setup-Guide-VMware.pdf.



Figure 14 shows the ESXi host-level settings for the AMD FirePro GPUs assigned to the Windows Server 2016 VMs, which were configured by
editing the VM hardware settings for the PCI/PCIe devices.

Figure 14. Enable GPU passthrough mode in ESXi 6.5
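
Before a GPU can be added to a VM it must be marked for passthrough on the host, as shown in Figure 14, and the VM typically requires a full memory reservation for vDGA. As a starting point, the PCI addresses of the AMD GPUs can be listed from the ESXi shell; this is an example only and the exact device description strings may differ.

# List PCI devices and filter for the AMD GPUs; each line includes the PCI address used for passthrough assignment
lspci | grep -i "AMD"
# A more detailed, per-device view is available via esxcli
esxcli hardware pci list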

Capacity and sizing


This section demonstrates the flexibility that HPE channel partners and customers gain by taking advantage of Citrix XenApp, resulting in a
fully functional solution that scales well on HPE Synergy. To demonstrate the validity of the solution, HPE used Login VSI to run user simulations
against a fully configured environment.
About Login VSI
Login VSI is a load-generating test tool designed to test remote computing solutions via a variety of different protocols. HPE maintains an
infrastructure dedicated to running Login VSI against a variety of solutions from Citrix. The test software works by starting a series of launchers,
which are best thought of as virtual end-user access devices. These launchers connect to the end-user computing infrastructure under test via a
connection protocol, and a series of scripts executed on the compute resources then simulates the load of actual users. The test suite utilizes a
series of desktop applications running via automated scripts within the context of the Citrix XenApp virtual environment. A standardized set of
applications is installed within every virtual machine and actions are taken against the installed applications. The set of applications HPE tested
with is listed in the table below, with versions shown where applicable.
Table 7. Standard Login VSI worker software specifications
Software (version where applicable)
Microsoft Windows 2016 Standard x64
Adobe® Acrobat® 11
Adobe Flash Player 11
Adobe Shockwave Player 11
Bullzip PDF printer
Freemind
7-Zip
Microsoft Office Professional 2013 x64
Microsoft Internet Explorer 11

New to Login VSI is the ability to evaluate users with multimedia requirements via a Multimedia workload. The Multimedia workload is designed
to stress the CPU by using software that benefits from graphics acceleration. When a GPU is added, the most compute-intensive sections of an
application are offloaded to the GPU while the CPU processes the remaining code. From a user perspective, this means a more responsive
environment; from an IT administrator perspective, it means that resources are utilized more efficiently and that more users may fit onto any
given resource.

The multimedia workload uses the applications shown in Table 8 for its GPU/CPU-intensive operations.
Table 8. Standard Login VSI Multimedia worker software specifications
Software (version where applicable)
Windows client 2016 Standard x64
Adobe Acrobat 11
Google™ Chrome browser 59.0
Google Earth mapping service 7.1.8
Microsoft Office Professional 2013 x64
Microsoft Internet Explorer 11

Response times are measured for a variety of actions within each session. When response times climb above a certain level on average, the test is
finalized and a score, called VSImax, is created. VSImax represents the number of users at or below the average response time threshold. A
detailed explanation can be found on the Login VSI website at loginvsi.com/documentation/index.php?title=Login_VSI_VSImax.

For the purposes of showing that a solution functions as intended, Hewlett Packard Enterprise does not drive load to a saturation point in order
to achieve a VSImax score. Rather, the goal is to show that the environment behaves as expected, that all paths work, and that users receive the
expected experience under a given load.
Login VSI Multimedia workload
Table 9 shows the characteristics of the Login VSI Multimedia workload utilized in the creation of this Reference Architecture.
Table 9. Login VSI Multimedia workload configuration
Workload | Login VSI version | Workload | vCPU | Memory | Apps open | Video | CPU usage | Estimated IOPS

Multimedia worker | 4.1.25 | Medium | 2 vCPU | 15GB | 8-11 | 360p | 100% | 10

Testing strategy
Citrix XenApp 7.17 was tested via the Login VSI Multimedia workload with and without accelerated graphics.
Benchmarks versus field implementation
Login VSI provides a set of tests that can be used to compare platforms and solutions within a fairly close range, provided that all underlying
variables remain the same, including CPU, memory, disk, software versions, system tunings, VM tunings, networks, and Login VSI version. The test
uses a standardized set of workloads to create those comparison points. In the real world, it is highly unlikely that a customer will be running the
exact set of applications featured in the test. As with most test tools, Login VSI results should be used in conjunction with results from actual
system performance data from the field or via proof-of-concept (POC) or production implementations. Login VSI presents response times from
various tasks and applications that could be used as a primitive baseline in a controlled environment with limited applications and resource
assignments. Although these metrics are useful when comparing systems with similar resource attributes, they can be misleading when used to
simulate real-world implementations. As a result, the numbers in this document are guidelines only.

Hewlett Packard Enterprise recommends a complete analysis of the specific user requirements prior to any VDI implementations and not sizing
implementations based solely on benchmark results. Customers, new or inexperienced with VDI, should undergo a deeper assessment of their
environment prior to implementing VDI to make sure they attain the results they desire. If such an assessment interests you, please engage with
your Hewlett Packard Enterprise account team or find further information on our HPE Mobility and Workplace Services web page,
hpe.com/us/en/services/consulting/mobility-workplace.html.

Single node Hosted Shared Desktops for multimedia worker results


The Citrix Hosted Shared Desktops use case for the multimedia worker was tested on HPE Synergy using a single HPE Synergy 480 Gen10
Compute Module in an HPE Synergy 12000 Frame, with a server pool of Citrix XenApp PVS Hosted Shared Desktops on Windows Server 2016.

Table 10 below shows the configuration and workload results for the multimedia worker user type and summarizes the Login VSI score for the
platform.
Table 10. Test results on Windows Server 2016 Hosted Shared Desktops without GPU acceleration
User type | VM type | Microsoft Office version | Microsoft Windows Server version | Number of users | VM vCPU | VM memory | PVS Write Cache Disk

Multimedia worker | Hosted Shared Desktops | 2013 | 2016 | 119 | 12 | 36GB | 10GB

Figure 15 shows a single server Login VSI score of 119 multimedia workers using Citrix XenApp PVS Hosted Shared Desktops with locally
installed applications on a single-node HPE Synergy 480 Gen10 Compute Module.

Figure 15. Multimedia Workload test result on one HPE Synergy 480 Compute Module without AMD graphics card

Single node Hosted Shared Desktops with AMD GPUs for multimedia worker results
The Citrix Hosted Shared Desktops with AMD GPUs use case for the multimedia worker was tested on HPE Synergy using a single HPE Synergy
480 Gen10 Compute Module with AMD GPUs in an expansion module, in an HPE Synergy 12000 Frame, with a server pool of Citrix XenApp PVS
Hosted Shared Desktops on Windows Server 2016.

Table 11 below shows the configuration and workload results for the multimedia worker user type and summarizes the Login VSI score for the
platform.
Table 11. Test results for Windows Server 2016 Hosted Shared Desktops with AMD GPUs
User type | VM type | Microsoft Office version | Microsoft Windows Server version | Number of users | VM vCPU | VM memory | PVS Write Cache Disk

Multimedia worker | Hosted Shared Desktops | 2013 | 2016 | 165 | 12 | 36GB | 10GB

Figure 16 shows a single server Login VSI score of 165 multimedia workers using Citrix XenApp PVS Hosted Shared Desktops with locally
installed applications on a single-node HPE Synergy 480 Gen10 Compute Module with AMD GPUs in an expansion module.

Figure 16. Multimedia Workload test result on one HPE Synergy 480 Compute Module with AMD GPUs in an expansion module

Multiple nodes Hosted Shared Desktops with AMD GPUs for multimedia worker results
The Citrix Hosted Shared Desktops with AMD GPUs use case for the multimedia worker was also tested at scale on HPE Synergy, using six HPE
Synergy 480 Gen10 Compute Modules with AMD GPUs in expansion modules across multiple HPE Synergy 12000 Frames, with a server pool of
Citrix XenApp PVS Hosted Shared Desktops on Windows Server 2016.

For this test, the Login VSI Multimedia worker was run against non-persistent Windows Server 2016 graphics-enabled application servers
delivered as Citrix XenApp Hosted Shared Desktops through Citrix Provisioning Services (PVS), with the PVS RAM cache with overflow to disk
feature backed by HPE Nimble Storage shared disks.

Table 12 shows the configuration and workload results for the multimedia worker user type and summarizes the Login VSI score for the
platform.
Table 12. Test results for 6-node configuration of Windows Server 2016 Hosted Shared Desktops, with AMD GPUs
User type | VM type | Microsoft Office version | Microsoft Windows version | Number of users | VM vCPU | Memory minimum | PVS Write Cache Disk | Number of compute modules

Multimedia worker | Hosted Shared Desktops | 2013 | Microsoft Windows Server 2016 Datacenter Edition | 949 | 12 | 36GB | 10GB | 6

Figure 17 shows a multiple-server Login VSI score of 949 multimedia workers using Citrix XenApp PVS Hosted Shared Desktops with locally
installed applications on a 6-node configuration of HPE Synergy 480 Gen10 Compute Modules with AMD GPUs in expansion modules.

Figure 17. Multimedia Workload test result on a 6-node configuration of HPE Synergy 480 Compute Modules, with AMD GPUs

Analysis and recommendations


The data presented in the prior sections suggests that HPE Synergy deployed with the VMware ESXi hypervisor and HPE Nimble CS3000 storage is
a flexible platform for running multimedia-enabled end-user computing workloads. The platform offers demonstrated performance, simplified
management, and cost-effective, linear scaling for end-user computing solutions.

Key takeaways:

• Hewlett Packard Enterprise provides industry-leading compute, storage, and networking infrastructure that you can use to deploy Citrix
XenApp solutions in your environment.
• This HPE Reference Architecture is a tested and tuned solution architecture that offers optimal performance for deploying Citrix XenApp 7.17
virtual Hosted Shared Desktops and application environments.
• HPE Nimble CS3000 storage provides uncompromising performance to run Citrix XenApp 7.17 client virtualization workloads.
• HPE Synergy performance testing of Citrix XenApp 7.17 on a single HPE Synergy 480 Gen10 Compute Module exhibited up to 38% higher
scaling with GPU-enabled Hosted Shared Desktops (165 users) compared to Hosted Shared Desktops without a GPU (119 users).

Summary
Delivering both discrete applications and full desktops within the modern digital workplace has been a challenge for many IT administrators due
to the consumerization of IT and increased end-user expectations for high-performance access from anywhere, on any device.

Citrix XenApp 7.17, with or without AMD Multiuser GPU virtualized graphics, can deliver and manage published applications with local desktop-
like performance, from the data center to virtually any device, anywhere.

This Reference Architecture demonstrates how HPE Synergy facilitates the delivery of Citrix XenApp in a cost-effective and highly manageable
fashion. HPE Synergy is an ideal platform for server-based computing deployments, providing enhanced GPU acceleration for an optimum user
experience. Using HPE Synergy Image Streamer with Citrix PVS creates a simple way to manage server boot and user configurations, even across
multiple user configurations. With HPE Synergy Composer and the HPE OneView API, administrators can easily change the deployment
characteristics to meet current needs.

The infrastructure-as-code capability of HPE Synergy accelerates transformation to a hybrid infrastructure and provides on-demand creation
and delivery of applications and services with consistent governance, compliance, and integration. HPE OneView creates, aggregates, and hosts
internal IT resources so automation tools can provision on-demand and programmatically, without needing a detailed understanding of the
underlying physical elements.
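
As an illustration of that programmatic access, the HPE OneView REST API can be called from any automation tool. The sketch below is an example only; the appliance address, credentials, and X-API-Version value are placeholders that depend on the OneView release in use.

# Authenticate to the OneView appliance and capture the returned session token (values are examples)
curl -k -X POST https://oneview.example.local/rest/login-sessions \
  -H "Content-Type: application/json" -H "X-API-Version: 500" \
  -d '{"userName":"administrator","password":"<password>"}'
# Use the returned sessionID in the Auth header to query server profiles
curl -k https://oneview.example.local/rest/server-profiles \
  -H "Auth: <sessionID>" -H "X-API-Version: 500"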

The following points summarize the value of deploying a Citrix XenApp solution on the HPE Synergy and AMD GPU platform.

• HPE Synergy server profiles and templates are a powerful new way to quickly and reliably update and maintain existing infrastructure.
• HPE Synergy Composer uses templates to simplify one-to-many updates and manage HPE Synergy Compute Module profiles. These
templates allow changes to be implemented automatically, significantly reducing manual interactions and errors.
• HPE Synergy Image Streamer enables HPE Synergy to quickly deploy new compute modules or update existing ones by booting them directly
into their desired running OS in minutes.
• HPE Synergy offers AMD Multiuser GPU and passthrough technology with unmatched virtualized graphics density per HPE Synergy 480
Compute Module, via the HPE Synergy 480 Multi MXM Expansion Module with up to six AMD FirePro S7100X GPUs.

Appendix A: Bill of materials


The following table shows the bill of materials (BOM) for this solution.

Note
Part numbers are correct at the time of testing and are subject to change. The bill of materials does not include complete support options or
other rack and power requirements. If you have questions regarding ordering, please consult with your HPE Reseller or HPE Sales Representative
for more details, or see hpe.com/us/en/services/consulting.html.

Table 13. Bill of materials


Qty Part number Description

Frame components
3 797739-B21 HPE Synergy 12000 Frame
2 804353-B21 HPE Synergy Composer
2 804937-B21 HPE Synergy Image Streamer
6 804942-B21 HPE Synergy Frame Link Module
2 794502-B23 HPE Virtual Connect SE 40Gb F8 Module for HPE Synergy
18 798095-B21 HPE 2650 Watts Titanium Hot Plug AC Power Supply
4 779218-B21 HPE Synergy 20Gb Interconnect Link Module
HPE Synergy Compute Module components
8 871940-B21 HPE Synergy 480 Gen10 Configure-to-order Compute Module
8 872139-L21 HPE Synergy 480 Gen10 Intel Xeon-Gold 6140 (2.3GHz/18-core/140W) FIO Processor Kit
8 872139-B21 HPE Synergy 480/660 Gen10 Intel Xeon-Gold 6140 (2.3GHz/18-core/140W) Processor Kit
8 777430-B21 HPE Synergy 3820C 10/20Gb Converged Network Adapter
48 815100-B21 HPE 32GB (1x32GB) Dual Rank x4 DDR4-2666 CAS-19-19-19 Registered Smart Memory Kit
6 872627-B21 HPE Synergy 480 Gen10 Multi MXM FIO Expansion Module
HPE Nimble Storage
1 Q8B39A HPE Nimble Storage CS3000 Hybrid Dual Controller 10GBASE-T 2-port Base Array
1 Q8B64A HPE Nimble Storage CS/SF Hybrid Array 3x1.92TB Cache Bundle
1 Q8B68A HPE Nimble Storage CS/SF Hybrid Array 21x1TB HDD Bundle
1 Q8B80A HPE Nimble Storage CS Hybrid Array 3x240GB Cache Bundle
1 Q8B89A HPE Nimble Storage 4x10GbE 2-port Adapter Kit
1 Q8G27A HPE Nimble Storage NOS Default Software
HPE Switches
1 JH179A HPE FlexFabric 5930 4-slot Switch
AMD FirePro GPU
36 M3X68A HPE AMD FirePro S7100X x2 Accelerator Kit

Resources and additional links


HPE Reference Architectures, hpe.com/info/ra

HPE Servers, hpe.com/servers

HPE Storage, hpe.com/storage


HPE Networking, hpe.com/networking

HPE Technology Consulting Services, hpe.com/us/en/services/consulting.html

HPE OneView User Guide for HPE Synergy, https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c05098314

HPE Nimble Storage User Guide (requires HPE Passport account),
https://infosight.hpe.com/InfoSight/media/cms/active/Nimble_OS_2_1_3_user_guide_2_1_x_PN970_0013_002.pdf

Get Started with Windows Server 2016, https://docs.microsoft.com/en-us/windows-server/get-started/server-basics

AMD MxGPU Radeon Pro Settings for VMware vSphere Client,
https://www2.ati.com/relnotes/radeon_pro_settings_for_vmware_vsphere_client_user_guide_v1.0.pdf

AMD MxGPU and VMware Deployment Guide, https://www2.ati.com/relnotes/amd_mxgpu_deploymentguide_vmware.pdf

Citrix XenDesktop Implementation and Configuration, https://docs.citrix.com/en-us/categories/solution_content/implementation_guides.html

Windows Server Blog, https://blogs.technet.microsoft.com/windowsserver

To help us improve our documents, please provide feedback at hpe.com/contact/feedback.

Sign up for updates

© Copyright 2018 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice.
The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying
such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall
not be liable for technical or editorial errors or omissions contained herein.

Microsoft, Windows, and Windows Server are registered trademarks or trademarks of Microsoft Corporation in the United States and/or
other countries. Citrix and XenDesktop are trademarks of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in
the United States Patent and Trademark Office and in other countries. Intel, Xeon, and Intel Xeon are trademarks of Intel Corporation or
its subsidiaries in the U.S. and/or other countries. Adobe and Acrobat are trademarks of Adobe Systems Incorporated. AMD is a trademark
of Advanced Micro Devices, Inc. VMware is a registered trademark or trademark of VMware, Inc. and its subsidiaries in the United States
and other jurisdictions. Google is a trademark of Google Inc.

a00049199enw, June 2018
