
Creating HPE VMware Solutions

Learner Guide
Rev. 21.31
© Copyright 2021 Hewlett Packard Enterprise Development LP
The information contained herein is subject to change without notice. The only warranties for
Hewlett Packard Enterprise products and services are set forth in the express warranty
statements accompanying such products and services. Nothing herein should be construed
as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for
technical or editorial errors or omissions contained herein.
This is a Hewlett Packard Enterprise copyrighted work that may not be reproduced without
the written permission of Hewlett Packard Enterprise. You may not use these materials to
deliver training to any person outside of your organization without the written permission of
Hewlett Packard Enterprise.
Microsoft, Windows, and Windows Server are registered trademarks of Microsoft Corporation in the
United States and other countries.

Printed in the United States of America


Creating HPE VMware Solutions
Rev. 21.31
Contents

Module 1: Overview of HPE VMware Solutions


Learning objectives .................................................................................................................................... 1
Course map ................................................................................................................................................. 2
Customer scenario: Financial Services 1A .............................................................................................. 3
Cloud computing ......................................................................................................................................... 4
Formal definition of cloud computing ....................................................................................................... 5
Cloud infrastructure and benefits ............................................................................................................. 6
Use cases for cloud services ................................................................................................................... 7
Deployment models ................................................................................................................................. 8
HPE GreenLake: The cloud that comes to customers .......................................................................... 10
SDI, the first step to hybrid cloud ........................................................................................................... 11
What is software-defined infrastructure (SDI)? ...................................................................................... 12
Different types of workloads .................................................................................................................. 13
Hybrid cloud with VMware and HPE ....................................................................................................... 15
VMware and HPE .................................................................................................................................. 16
VMware Cloud Foundation 4 with Tanzu ............................................................................................... 17
VCF components ................................................................................................................................... 18
vSphere LifeCycle Manager: Simpler lifecycle management ................................................................ 20
Deploy VCF or just VMware solutions customer needs ........................................................................ 21
HPE Synergy: Composable infrastructure for VCF and virtualized, containerized, and bare metal
workloads ............................................................................................................................................... 22
Review HPE Compute solutions ............................................................................................................ 24
Review HPE Storage solutions .............................................................................................................. 25
Review HPE Hyperconverged solutions ................................................................................................ 26
VMware compatibility guide ................................................................................................................... 27
HPE OneView and InfoSight .................................................................................................................. 28
Summary .................................................................................................................................................... 29
Activity 1 .................................................................................................................................................... 30


Learning checks ........................................................................................................................................ 33

Module 2: Design an HPE Composable Infrastructure Solution for a Virtualized Environment

Learning objectives .................................................................................................................................. 35
Customer scenario: Financial Services 1A ............................................................................................ 36
Sizing the HPE Synergy solution for VMware vSphere ......................................................................... 37
Gathering information: Migrating an existing vSphere deployment ....................................................... 38
Gathering information: Migrating from physical machines..................................................................... 40
How do you get the information? ........................................................................................................... 41
Considering types of workloads ............................................................................................................. 42
Positioning the HPE Synergy compute module for the workload .......................................................... 44
Sizing the solution .................................................................................................................................. 45
Example sizing for Financial Services 1A cluster 1 ............................................................................... 46
Activity 2.1 .............................................................................................................................................. 47
Best practices for deploying VMware vSphere on HPE Synergy......................................................... 54
VMware vSphere on HPE Synergy best practices guide ...................................................................... 55
Cluster design ........................................................................................................................................ 56
Support scalability with templates .......................................................................................................... 57
Review fabric best practices: Connections ............................................................................................ 59
Review fabric best practices: Mapped VLAN vs Tunneled mode .......................................................... 62
Review fabric best practices: Redundancy ............................................................................................ 64
Review fabric best practices: Internal and private networks.................................................................. 66
HPE Synergy support for key features .................................................................................................. 68
Best practices for server profile template design ................................................................................... 69
Best practices for VMware vSphere ESXi hypervisor provisioning ....................................................... 70
Lifecycle management and VMware vSphere integration with HPE OneView ...................................... 71
One infrastructure for virtualized and bare metal .................................................................................. 74
Summary .................................................................................................................................................... 75
Activity 2.2 ................................................................................................................................................. 76
Learning checks ........................................................................................................................................ 78

Module 3: Design an HPE Software-Defined Storage (SDS) Solution


Learning objectives .................................................................................................................................. 79
Customer scenario: Financial Services 1A ............................................................................................ 80
Virtual environment requirements for SDS ............................................................................................ 81
Going beyond capacity and performance requirements ........................................................................ 82
Evolution of VMware storage integration ............................................................................................... 83


VMware vSAN on HPE Synergy ............................................................................................................... 85


SDS on HPE Synergy ............................................................................................................................ 86
VMware vSAN overview ........................................................................................................................ 87
Why HPE Synergy for vSAN? ................................................................................................................ 88
HPE Synergy D3940—Ideal platform for SDS and vSAN ..................................................................... 89
Right sized provisioning for any workload ............................................................................................. 91
Selecting certified configurations for Synergy and vSAN ...................................................................... 92
Following best practices for vSAN on HPE Synergy: Cluster and network design ............................... 93
Following best practices for vSAN on HPE Synergy: Drivers and controllers ....................................... 94
Following best practices for vSAN on HPE Synergy: Redundant Connectivity for D3940s .................. 95
HPE vSAN Ready Nodes .......................................................................................................................... 96
HPE approach to vSAN Ready Nodes .................................................................................................. 97
HPE ProLiant DL325 All-Flash 6 for virtualization ................................................................................. 98
HPE ProLiant DL360 All-Flash 8 for data management and processing .............................................. 99
HPE ProLiant DL380 8SFF All-Flash 4 for accelerated infrastructure ................................................ 100
HPE ProLiant DL380 24SFF Hybrid 8 for data warehousing .............................................................. 101
Fully automated storage with HPE Synergy and HPE storage arrays ............................................... 102
HPE Synergy fluid resource pools for Tier 1 storage .......................................................................... 103
Extra features supported by HPE storage arrays ................................................................................ 104
Fully automated provisioning ............................................................................................................... 105
Additional HPE storage array benefits for VMware environments .................................................... 106
Overview of HPE storage integrations with VMware ........................................................................... 107
Overview of vVols storage architecture ............................................................................................... 108
How vVols changes storage management .......................................................................................... 109
How vVols transforms storage in vSphere........................................................................................... 110
The HPE Primera and Nimble advantages with vVols ........................................................................ 112
HPE Storage vCenter plugins .............................................................................................................. 113
HPE management and automation portfolio for VMware .................................................................... 114
VMware vCenter Site Recovery Manager introduction ........................................................................ 115
HPE Nimble and Primera array benefits for SRM................................................................................ 116
HPE Recovery Manager Central (RMC) and RMC for VMware (RMC-V) overview ........................... 117
One RMC-V copy policy to orchestrate: Array Snapshot, Express Protect, Catalyst Copy, and Cloud
Copy ..................................................................................................................................................... 118
Additional reasons HPE storage is cloud-ready and automated ......................................................... 120
Typical challenges with cloud block storage ........................................................................................ 121
HPE Cloud Volumes Block .................................................................................................................. 122
Benefits of HPE Cloud Volumes .......................................................................................................... 123


HPE InfoSight—Key distinguishing feature for the HPE SDDC .......................................................... 124
Architecting the AI Recommendation Engine ...................................................................................... 125
Example of HPE InfoSight in action ..................................................................................................... 126
Summary of HPE storage array benefits for VMware environments ................................................... 127
Activity 3 .................................................................................................................................................. 129
Summary .................................................................................................................................................. 131
Learning checks ...................................................................................................................................... 132

Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment

Learning objectives ................................................................................................................................ 133
Network virtualization is at the core of an SDDC approach ............................................................... 134
Common networking challenges ........................................................................................................... 135
VMware NSX ............................................................................................................................................ 136
VMware NSX ....................................................................................................................................... 137
VMware NSX architecture.................................................................................................................... 138
Use case 1: Networking virtualization .................................................................................................. 140
Overlay networking .............................................................................................................................. 141
Overlay segments ................................................................................................................................ 143
Transport zones ................................................................................................................................... 144
Uplink profiles for transit nodes ........................................................................................................... 145
NSX modes for flooding traffic ............................................................................................................. 146
Example: Original network ................................................................................................................... 147
Example: Plan for overlay segments ................................................................................................... 148
Use case 2: Microsegmentation .......................................................................................................... 149
How NSX implements micro-segmentation ......................................................................................... 150
Security extensibility ............................................................................................................................ 151
Use case 3: Network automation with NSX + vRealize ....................................................................... 152
Options for the physical underlay ........................................................................................................ 153
NSX + ArubaOS-CX ................................................................................................................................. 154
Design considerations for the physical infrastructure .......................................................................... 155
More details on MTU............................................................................................................................ 156
Introducing Aruba NetEdit .................................................................................................................... 158
Aruba NetEdit features ......................................................................................................................... 159
NetEdit value summary ........................................................................................................................ 161
Interoperability with third-party Cisco ACI ........................................................................................... 162
Cisco ACI ............................................................................................................................................. 163


Endpoint Groups (EPGs) and other key ACI components .................................................................. 164
Activity 4 .................................................................................................................................................. 166
Summary .................................................................................................................................................. 168
Learning checks ...................................................................................................................................... 169
Appendix: Review VMware networking ................................................................................................ 170
Standard switch (vSwitch).................................................................................................................... 171
How vSwitch forwards traffic ................................................................................................................ 172
VMkernel adapters ............................................................................................................................... 173
Implementing VLANs ........................................................................................................................... 174
vSphere distributed switch (VDS) ........................................................................................................ 175

Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

Learning objectives ................................................................................................................................ 177
Customer scenario: Financial Services 1A .......................................................................................... 178
Automation .............................................................................................................................................. 179
Orchestration ........................................................................................................................................... 180
HPE OneView integration with VMware vSphere and vRealize .......................................................... 181
HPE OneView integration with VMware tools ...................................................................................... 182
HPE Plug-ins simplify management for vSphere admins .................................................................... 183
HPE OV4VC benefits ........................................................................................................................... 184
HPE OV4VC: Server only integration .................................................................................................. 185
HPE OV4VC licensing and managed devices ..................................................................................... 186
HPE OneView Hardware Support Manager for VMware vLCM .......................................................... 187
HPE OV4VC features .......................................................................................................................... 188
HPE OV4VC views .............................................................................................................................. 189
HPE OV4VC features: Cluster imports ................................................................................................ 192
HPE OV4VC features: Grow a cluster ................................................................................................. 193
HPE OV4VC features: Consistency checks and non-disruptive firmware upgrades .......................... 194
HPE OV4VC features: Proactive HA ................................................................................................... 196
HPE Storage Integration Pack for VMware vCenter: Benefits............................................................. 197
HPE Storage Integration Pack for VMware vCenter: Configuration and management ....................... 198
VMware vRealize Suite ........................................................................................................................ 199
vRealize Suite options ......................................................................................................................... 200
HPE Content Packs for vRealize Log Insight ...................................................................................... 201
HPE OneView for vRealize Operations ............................................................................................... 202
VMware vRealize Orchestrator (vRO) ................................................................................................. 204


vRA + vRO ........................................................................................................................................... 205


HPE plug-ins for vRO........................................................................................................................... 206
OV4vRO workflows and actions .......................................................................................................... 207
HPE InfoSight integration with VMware................................................................................................ 211
HPE InfoSight: Industry’s most advanced AI for infrastructure............................................................ 212
Benefits of HPE InfoSight VMware integration .................................................................................... 213
HPE InfoSight’s cross-stack analytics ................................................................................................. 214
Example: Diagnose abnormal latency with VM analytics .................................................................... 215
Data-centric visibility for every VM ....................................................................................................... 216
Other automation and orchestration tools ........................................................................................... 217
GitHub .................................................................................................................................................. 218
HPE DEV ............................................................................................................................................. 219
HPE RESTful API ................................................................................................................................ 220
iLO RESTful API and Redfish conformance ........................................................................................ 221
HPE Python SDK for HPE OneView.................................................................................................... 222
PowerShell for HPE platforms ............................................................................................................. 224
Introduction to Chef, Ansible, and Puppet ........................................................................................... 226
Chef and HPE OneView ...................................................................................................................... 227
HPE OneView and Ansible .................................................................................................................. 229
Puppet Forge ....................................................................................................................................... 232
Terraform providers ............................................................................................................................. 235
Mutable and immutable infrastructures ................................................................................................ 238
Final thoughts on CM and orchestration tools .................................................................................... 239
Summary .................................................................................................................................................. 240
Learning checks ...................................................................................................................................... 241

Module 6: Design an HPE Hyperconverged Solution for a Virtualized Environment

Learning objectives ................................................................................................................................ 243
Customer scenario .................................................................................................................................. 244
Emphasizing the software-defined benefits of HPE SimpliVity ......................................................... 245
Deduplication with HPE SimpliVity ...................................................................................................... 246
HPE SimpliVity Data Virtualization Platform ........................................................................................ 247
HPE SimpliVity Data Virtualization Platform in action ......................................................................... 248
Storage IO reduction ............................................................................................................................ 249
HPE SimpliVity data protection mechanisms: RAIN ............................................................................ 252
HPE SimpliVity data protection mechanisms: RAID ........................................................................... 253


HPE SimpliVity for mission-critical apps .............................................................................................. 254


Why HPE SimpliVity data protection is better ...................................................................................... 255
How HPE SimpliVity localizes data ...................................................................................................... 256
Keeping data local with HPE SimpliVity Intelligent Workload Optimizer ............................................. 257
Speeding up data restores ................................................................................................................... 258
HPE SimpliVity integration with VMware ............................................................................................. 259
HPE SimpliVity plug-ins for VMware .................................................................................................... 260
HPE SimpliVity Deployment Manager with VMware vCenter .............................................................. 261
Why REST API .................................................................................................................................... 263
Using the REST API ............................................................................................................................ 264
HPE SimpliVity Upgrade Manager ....................................................................................................... 267
Sizing an HPE SimpliVity solution ........................................................................................................ 268
HPE SimpliVity design process ........................................................................................................... 269
Data gathering ..................................................................................................................................... 270
Reviewing choices for the HPE SimpliVity platform ............................................................................ 272
Preparing for sizing .............................................................................................................................. 273
Getting started with the HPE SimpliVity Sizing Tool ............................................................................ 274
Inputting information to size the cluster ............................................................................................... 275
Architecting the HPE SimpliVity Solution ............................................................................................ 278
HPE SimpliVity design process ........................................................................................................... 279
Architectural design ............................................................................................................................. 280
Network design .................................................................................................................................... 282
Cluster and federation sizing guidelines .............................................................................................. 283
Determine when to submit a DSR ....................................................................................................... 284
Activity 6 .................................................................................................................................................. 285
Scenario ............................................................................................................................................... 285
Task ..................................................................................................................................................... 285
HPE Nimble Storage dHCI: Emphasizing the Software-Defined Benefits ......................................... 287
HPE Nimble Storage dHCI versus traditional HCI solutions ................................................................ 288
Scaling compute and storage with disaggregated HCI ........................................................................ 289
HPE Nimble Storage dHCI and vVols features.................................................................................... 291
HPE InfoSight integration with Nimble Storage dHCI .......................................................................... 292
HPE Nimble Storage dHCI: Architecting the solution .......................................................................... 293
HPE Nimble Storage dHCI platform building blocks ............................................................................ 294
HPE Nimble Storage dHCI architecture ............................................................................................... 295
HPE Nimble Storage dHCI: Multiple vSphere HA/DRS cluster support .............................................. 296
Guidelines for deploying HPE Nimble Storage dHCI ........................................................................... 297


Required VMware licenses .................................................................................................................. 299


HPE InfoSight Welcome Center: Guided deployments ....................................................................... 300
Two deployment paths ......................................................................................................................... 301
HPE InfoSight integration with Nimble Storage dHCI .......................................................................... 303
HPE Nimble Storage dHCI vCenter plug-in ......................................................................................... 304
HPE Nimble dHCI tools ........................................................................................................................ 305
Summary .................................................................................................................................................. 306
Learning checks ...................................................................................................................................... 307

Module 7: Design an HPE VMware Cloud Foundation (VCF) Solution


Learning objectives ................................................................................................................................ 309
Customer scenario: Financial Services 1A .......................................................................................... 310
VMware Cloud Foundation (VCF) architecture .................................................................................... 311
VCF SDDC Manager and domains ...................................................................................................... 312
VCF architecture: Standard model ...................................................................................................... 313
VCF architecture: Consolidated model ................................................................................................ 315
HPE integration with VCF ....................................................................................................................... 316
First composable platform that seamlessly integrates with SDDC Manager ....................................... 317
Why VCF on HPE Synergy? ................................................................................................................ 318
Deployment options for VCF on HPE Synergy .................................................................................... 319
Sizing HPE Synergy for VCF ............................................................................................................... 320
Order, build, and validate automation on HPE Synergy ...................................................................... 322
VCF Cloud Builder VM ......................................................................................................................... 323
HPE Synergy + VCF guidelines ........................................................................................................... 324
HPE OneView Connector for VCF ....................................................................................................... 325
Automated lifecycle management for VCF on HPE Synergy .............................................................. 326
Learning checks ...................................................................................................................................... 327

Appendix: Answers
Module 1 ................................................................................................................................................... 329
Activity .................................................................................................................................................. 329
Possible answers ................................................................................................................................. 329
Module 1 Learning checks ................................................................................................................... 330
Module 2 ................................................................................................................................................... 331
Activity 2.1 ............................................................................................................................................ 331
Activity 2.2 ............................................................................................................................................ 332
Module 2 Learning checks ................................................................................................................... 333
Module 3 ................................................................................................................................................... 334


Activity 3 ............................................................................................................................................... 334


Module 3 Learning checks ................................................................................................................... 335
Module 4 ................................................................................................................................................... 336
Activity 4 ............................................................................................................................................... 336
Module 4 Learning checks ................................................................................................................... 337
Module 5 ................................................................................................................................................... 338
Module 5 Learning checks ................................................................................................................... 338
Module 6 ................................................................................................................................................... 339
Activity 6 ............................................................................................................................................... 339
Module 6 Learning checks ................................................................................................................... 339
Module 7 ................................................................................................................................................... 340
Module 7 Learning checks ................................................................................................................... 340



Overview of HPE VMware Solutions
Module 1

Learning objectives
This module reviews cloud computing and introduces you to software-defined infrastructure (SDI). It also
highlights the close partnership that HPE and VMware have developed to deliver SDI and hybrid cloud
solutions and then reviews the solutions they offer.
After completing this module, you will be able to:
• Engage customers in a meaningful discussion about cloud computing and cloud management
• Explain the benefits of a software-defined infrastructure
• Describe the HPE Composable Strategy and position the HPE value proposition for SDI and the
software-defined data center (SDDC)


Course map

Figure 1-1: Course map

This course includes the modules shown here. You are starting module 1.


Customer scenario: Financial Services 1A

Figure 1-2: Customer scenario: Financial Services 1A

Throughout this course, you will follow a scenario that demonstrates how a customer transforms its
legacy environment into a software-defined data center.

Financial Services 1A is a prominent institution in its region, but it is facing new competition and its growth
has slowed significantly. The company has one main goal: attract and retain more customers. After
extensive research, C-level executives have determined that the best way to reach this goal is to offer
personalized services, based on each customer’s lifestyle, stage in life, and financial goals. Like many
financial institutions, the company offers self-service options for customers, but the company wants to
add more financial services and also simplify access, while maintaining strict security. IT is also
investigating using AI to make its fraud protection services more reliable.

This customer currently has a highly virtualized deployment with more than 80% of workloads virtualized.
The company uses VMware vSphere version 7.0, but none of the vRealize Suite applications. The CIO
feels that IT has reached a stalling point with the virtualized environment. Admins can provision a new
virtual machine (VM) very quickly, but getting a new host deployed takes a very long time. The same
goes for setting up new storage volumes and datastores.

IT has started using tools such as Ansible to automate. Everyone is enthusiastic about these tools at
first, but when admins try to automate everything, they run into issues. There are always parts of
service deployment, particularly in the physical infrastructure, that resist automation.
Finally, the CIO cannot obtain a good view of the entire environment. The bare metal workloads and
virtual workloads are totally siloed. The vSphere admins do not have a clear idea about what is going on
in the physical infrastructure, and they, the network admins, and the storage admins sometimes struggle
to communicate what the virtual workloads need in terms of physical resources.


Cloud computing
You will first review the options your customers have for deploying cloud computing. Your understanding
of cloud computing will lay the groundwork for learning how you can help your customers achieve a
hybrid cloud deployment with SDI. Note that if you are already familiar with these concepts, you can skip
this section.


Formal definition of cloud computing

Figure 1-3: Formal definition of cloud computing

Cloud computing is computing as-a-service (aaS). It is delivered on-demand, often on a pay-per-use
basis, through a cloud services platform.
According to the National Institute of Standards and Technology (NIST), “cloud computing is a model for
enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing
resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned
and released with minimal management effort or service provider interaction.” (NIST, “Final version of
NIST Cloud Computing Definition Published,” Oct. 25, 2011.)
The NIST also identified five characteristics of cloud computing:
• Broad network access: The computing resources are available over the network. This can be an
internal network or the Internet. The resources can be accessed in a standard way, so that different
platforms such as PCs, mobile devices or thin clients can use the computing resources.
• On-demand self-service: Consumers can provision computing resources themselves as needed, in a
fully automated way, without requiring human interaction with each service provider.
• Rapid elasticity: The computing resources can be elastically provisioned and released, on demand.
To the user, the computing resources might appear to be unlimited.
• Measured service: The computing resources are metered. The information from the metering
system can be used to optimize the resources to the demand, or for billing purposes.
• Resource pooling: The computing resources are pooled, so that they can be served to multiple
users at the same time (multi-tenancy).


Cloud infrastructure and benefits

Figure 1-4: Cloud infrastructure and benefits

Cloud infrastructure itself is no different from typical data center infrastructure, except that it’s consistently
virtualized and offered as a service to be consumed via the network. Servers, storage, compute
resources and security are all key components of cloud infrastructure. This as-a-service consumption
model offers several key benefits:

Financial benefits
Cloud computing provides consumption-based pricing, which allows customers to pay only for the
resources they actually use. There are no upfront costs, and customers can stop paying for resources
when they are no longer needed.
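To make the consumption-based model concrete, the following minimal sketch compares a metered bill with a fixed up-front purchase. The rate, capacity cost, and usage figures are illustrative assumptions, not HPE or cloud-provider pricing.

```python
# Illustrative only: the rate and usage figures below are assumed values.
RATE_PER_VM_HOUR = 0.05      # assumed pay-per-use rate (USD per VM-hour)
FIXED_CAPEX = 1200.00        # assumed up-front cost of equivalent owned capacity

# Metered usage per month (VM-hours actually consumed), mixing busy and quiet months
monthly_vm_hours = [4000, 2500, 800, 6000]

pay_per_use_total = sum(hours * RATE_PER_VM_HOUR for hours in monthly_vm_hours)
print(f"Pay-per-use cost over {len(monthly_vm_hours)} months: ${pay_per_use_total:,.2f}")
print(f"Fixed up-front purchase for the same period:      ${FIXED_CAPEX:,.2f}")
# The quiet month (800 VM-hours) costs proportionally less with metering,
# whereas the fixed purchase charges for capacity even when it sits idle.
```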

Elasticity
Another fundamental aspect of cloud computing is that resources can be increased or decreased on
demand. Resources can be scaled up (adding more CPU, memory, storage, or network capacity to an
existing compute node) or scaled out (adding more compute nodes that work together to run an
application).
Scaling can often be automated, based on rules defined for an application. For instance, if a news website
has an article that is very popular for a period of time, the system can automatically increase network
capacity while demand is high and decrease it again as the article receives fewer and fewer hits.
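The scaling rule itself can be expressed in a few lines. The sketch below assumes a hypothetical requests-per-second metric and made-up thresholds; a real platform would feed the rule from its own monitoring system and call its own scale-out and scale-in APIs.

```python
# Minimal threshold-based autoscaling rule (illustrative sketch with assumed values).
SCALE_OUT_THRESHOLD = 800   # requests/sec per node before adding a node
SCALE_IN_THRESHOLD = 200    # requests/sec per node before removing a node
MIN_NODES, MAX_NODES = 2, 10

def desired_node_count(requests_per_sec: float, current_nodes: int) -> int:
    """Return how many nodes the application should run for the observed load."""
    load_per_node = requests_per_sec / current_nodes
    if load_per_node > SCALE_OUT_THRESHOLD and current_nodes < MAX_NODES:
        return current_nodes + 1      # popular article: add capacity
    if load_per_node < SCALE_IN_THRESHOLD and current_nodes > MIN_NODES:
        return current_nodes - 1      # traffic has dropped: release capacity
    return current_nodes

print(desired_node_count(requests_per_sec=5000, current_nodes=4))  # spike -> 5
print(desired_node_count(requests_per_sec=300, current_nodes=4))   # quiet -> 3
```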

Rapid deployment
Resources can be deployed through easy-to-use interfaces, often automatically. Patches and updates to the
infrastructure can also be deployed automatically, keeping the infrastructure current and more secure.
In addition, cloud service providers often provide pre-packaged application services that make the
deployment of new applications easier and faster.


Use cases for cloud services

Figure 1-5: Use cases for cloud services

Organizations use cloud services for a wide range of use cases, such as:
• Test and develop software applications: Because cloud infrastructures can easily be scaled up
and down, organizations can save costs and time for application development.
• Implement new services and applications: Organizations can quickly gain access to the resources
they need to meet their performance, security, and compliance requirements. Organizations can then
develop, implement and scale applications more easily.
• Deliver software on request: Software-as-a-service provides software on demand, which helps
organizations offer users the software versions and updates whenever they need them.
• Analyze data: The data from all of the organization’s services and users can be collected in the cloud.
Then cloud services, such as machine learning, can be used to analyze the data and produce insights
that support faster, better decisions.
• Save, back up, and restore data: Data protection can be done cost-effectively (and on a very large
scale) by transferring data to an external cloud storage system. The data can then be accessed from
any location.


Deployment models

Figure 1-6: Deployment models

Customers who are considering how to deploy their workloads have to make some difficult decisions,
particularly because they have more options for investing their IT budgets than ever before. Do they
deploy workloads on premises, using “traditional” infrastructure solutions? In a public cloud? In a private
cloud or managed cloud?
These options are outlined below.
• Traditional on-premises infrastructure: IT is responsible for provisioning services for line of
business (LOB). Although companies can deploy solutions that are tailored to the needs of their
organizations, procurement and provisioning cycles can unfortunately take months.
• Public cloud: Public cloud consists of on-demand IT services, delivered with a pay-per-use funding
model. The services are hosted on infrastructure that is owned by the cloud service provider, is
shared by multiple customers, and is more or less transparent to customers.
Common cloud services include:
– Software-as-a-service (SaaS), which allows users to access software applications from the cloud.
Users do not need to install and run a purchased application on their own devices. The service
hides the underlying OS and the infrastructure.
– Infrastructure-as-a-service (IaaS), which offers a computing environment, typically a virtualized
OS, as a service. Companies can add any applications that they desire to the virtual machine
(VM), and the service also includes supporting storage and networking resources. (A VM is a
virtual instance of a computing system.)
– Platform-as-a-service (PaaS), which is similar to IaaS but adds a standard stack of developer tools
that enables developers to write applications designed specifically to run in the cloud.
• Private cloud: A private cloud delivers on-demand IT services, which IT can easily scale and Line of
Business (LOB) users can request using self-service portals. The customer owns the infrastructure
that hosts the services, and the infrastructure is dedicated to the customer.
Typically the customer must build, manage, and maintain the on-prem infrastructure that hosts the
cloud services. However, some service providers offer managed private clouds, in which they take
over many of these responsibilities.


• Hybrid cloud: A hybrid cloud consists of one or more private clouds and one or more public clouds.
With a hybrid cloud, the customer can choose which workloads to deploy in which cloud, based on
the business and workload needs. Some hybrid clouds support “bursting,” which means scaling
services from one cloud to another cloud on-demand.
Most organizations prefer to use a hybrid of public and private cloud, because this strategy allows
companies to match individual workloads to the environment that is best-suited for them.
For example, companies like Financial Services 1A need to meet strict regulatory requirements when
storing their customers’ personal financial data. Such sensitive data is best kept in the safety of an
on-prem environment, where Financial Services 1A has the most control over their data.
However, it might make sense to deploy other business applications, such as those for marketing and
sales, to the public cloud. Perhaps certain times of the year, such as the winter holidays or the start of
the school year, correlate with more activity in these departments. If that is the case, the public cloud
is naturally a better option for seamlessly scaling the environment up or down to meet demand.
Regardless of their specific challenges, all businesses seek to optimize their IT spend while
minimizing operational risks, which is why they prefer the flexibility offered by hybrid cloud.


HPE GreenLake: The cloud that comes to customers

Figure 1-7: HPE GreenLake: The cloud that comes to customers

Cloud has historically been a destination: a public cloud that is “out there” or a private cloud that is on-
premises. With that idea of cloud, you might struggle to see how “cloud-enabled” fits in an edge-centric
world in which 70% of apps run on-prem. But HPE makes the seeming contradiction fade away by
bringing “cloud” on-prem and at the edge.
If cloud isn’t defined by being “out there,” what does define a cloud? A cloud lets businesses obtain the
services that they need on-demand with a self-service process. From this agility springs flexibility. The
company can scale services up and down as needed to meet its requirements at any given time.
Cloud is also characterized by a particular economic model. Companies pay for only the IT resources that
they use when they use them, removing the roadblock of a large up-front capital expenditure and letting
the company invest that capital elsewhere. Finally, from an operational viewpoint, the provider manages
the infrastructure, freeing up the company’s IT staff members for other innovative pursuits.
Companies need a way to bring these cloud characteristics to the locations that a data-driven, edge-
centric world demands. If it makes sense for the business case, then certainly, workloads can reside in a
public cloud such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud. But if business
apps such as enterprise resource planning (ERP) and customer relationship management (CRM)
applications need to run on-prem, the cloud needs to be on-prem, whether in a private or a co-located
data center. If IoT-enabled systems call for intelligence at the edge, the cloud needs to be at the edge.
HPE understands that customers need a cloud that comes to them, anywhere and everywhere they need
it. HPE GreenLake offers a full portfolio of as-a-service solutions, including pre-configured compute and
storage solutions, workload-optimized solutions, and fully customized solutions based on particular
customer requirements. In this way, HPE GreenLake offers customers self-service, scalability, pay-per-
use, and provider management across the complete edge-to-cloud platform.
For more information about HPE GreenLake, you can take the Configure HPE GreenLake Solutions
course.


SDI, the first step to hybrid cloud


Companies with traditional on-premises infrastructure are often looking for ways to move to a hybrid cloud
environment. Implementing a software-defined infrastructure can provide the first step. With SDI,
companies can automate provisioning, monitoring, and management while making their environment self-
healing.


What is software-defined infrastructure (SDI)?

Figure 1-8: What is software-defined infrastructure (SDI)?

Moor Insights defines SDI as “the ability to manage hardware resources (compute, storage, and
networking) in a programmable manner” (Moor Insights and Strategy, “Accelerating Software-Defined
Infrastructure with HPE Synergy,” Mar. 2019). What does programmable mean in more practical terms?
Moor Insights explains that "true SDI" is:
• Self-provisioning—Self-provisioning enables users to provision the resources that they need on the
fly. Whether users deploy workloads using an application, script, or catalog as in a private cloud, one
common factor applies. Users can quickly provision servers with all the accompanying volumes, OS,
and network connections without having to involve manual processes and multiple teams of experts.
• Self-monitoring—Users can easily monitor utilization of compute and other resources. The SDI
enables simple scaling in response to needs for more capacity.
• Self-managing and self-healing—Artificial Intelligence (AI) monitors systems, proactively detects
potential issues, and alerts admins only about important issues. It can even take actions on its own
before problems occur.

As Moor Insights points out, “SDI is the foundational building block to the software-defined datacenter
(SDDC)” (Moor Insights and Strategy, “Accelerating Software-Defined Infrastructure with HPE Synergy,”
Mar. 2019).


Different types of workloads

Figure 1-9: Different types of workloads

A software-defined infrastructure is more than a virtualized one. SDI can support virtualized, bare metal,
and containerized workloads, bringing automation to all of them. Because an SDI can support all three
types of deployment, it lets your focus remain where it should be: on helping customers choose the right
deployment for their individual workloads.
Read the following sections to explore considerations for bare metal, virtualized, and containerized
workload deployment.

Bare metal
While virtualization performance has improved greatly, virtualization always introduces a hypervisor layer
between a workload and the physical resources. In addition, virtualization typically means sharing resources
with other workloads, which might interfere with one another (the “noisy neighbor” problem). Bare metal cannot be beaten
when it comes to pure performance.
Bare metal can also be the preference for customers who are particularly concerned about isolating and
securing a workload.
Traditionally customers have struggled the most with automating deployment of workloads on bare metal,
but, as you will see, an HPE SDI makes such automation possible.

Virtualization
Virtualization offers many benefits for a wide array of workloads. Most workloads do not require the full
resources of a modern server, so sharing the resources is more efficient.
Admins often find it much simpler to apply standard and automated processes to a virtualized
environment than a bare metal one. They can clone VMs and script the deployment of more VMs from a
template. They can easily stop and start VMs. They can snapshot VMs and revert to snapshots. VMs can
be moved from one location to another (although, without extra help from network virtualization, live
migrations are often limited in extent).
In addition, admins can consolidate Windows and Linux operating systems on the same physical server.
This gives them the freedom to deploy workloads on the operating system that is best suited for each
particular workload. They can deploy these workloads using familiar virtualization management tools. In a
VMware environment, for example, they can use vSphere.
Compared to containers, virtualization is a mature technology with which most customers are very
familiar.
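
Much of that scripted VM management can be driven through VMware's public APIs. The short sketch below uses pyVmomi (VMware's Python SDK for the vSphere API) simply to list the VMs in an inventory; the vCenter address and credentials are placeholders, and certificate checking is disabled only for illustration.

# Minimal sketch: list VMs in a vCenter inventory with pyVmomi.
# The hostname and credentials are placeholders, not values from this scenario.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()   # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    for vm in view.view:
        print(vm.name, vm.runtime.powerState, vm.summary.config.numCpu)
finally:
    Disconnect(si)

The same SDK, or tools such as PowerCLI and Ansible modules, can clone VMs from templates and script the other lifecycle operations mentioned above.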

Containers
Containers are designed to make it easier to move applications from one server to another without the
risk of missing dependencies causing issues.


Traditionally, moving an application from one system to another could lead to major problems. To
understand why, you need to understand a little bit about an application’s “runtime system” and why any
changes to that system can cause problems. The runtime system is the environment in which an
application runs. It includes the binaries that translate human readable code to machine code for
execution. The runtime system also includes libraries, which are common pieces of code that multiple
applications can call on and run. For example, Python has a math library with many mathematical
functions already defined within it so that every developer does not have to recreate these functions. Most
modern applications use dynamic libraries, which are linked to the code when it is compiled (if the
application uses a compiled language), but only have their code loaded into the application when the
application starts to run.
As developers create an application, they set up the runtime system with all the binaries and libraries that
the application needs. Now imagine that the code moves from one server to another—for example, from a
server in the development environment to a server in the production environment. If the new server’s
runtime system does not exactly mirror the development one, the application might link to a dynamic
library that does not exist—causing it to fail to load or run.
A container combines an application with its runtime system so that the application always has the correct
binaries and libraries to run successfully.
A container platform can run on either bare metal hosts or virtual machines (VMs), as companies choose.

Orchestration and Kubernetes


Containers are portable and scalable, but containers cannot move or scale themselves. For simple lab
environments, users can run Docker CLI commands to deploy containers. For an enterprise solution,
however, customers need container orchestration.
The orchestration solution handles the tasks around deploying and scheduling containers. It can bundle
containers with supporting storage, networking, and other services. It can schedule containers for
deployment and scale the deployment up and down. The container orchestration solution might also
move containers across worker nodes based on factors such as node health and load. In this way the
orchestration solution can help to provide high availability. For example, if a node fails, the orchestration
solution can move or restart the containers on other nodes.
A container orchestration solution might also help integrate the container environment with other
automation tools.
Kubernetes is designed to help customers more easily deploy and scale their containerized applications.
It automates the selection of the node on which a container runs based on specifications input by users
and integrated tools, as well as on dynamic factors such as load and node health. It also makes the
environment self-healing by restarting containers on a new node if a node fails.
Kubernetes does not include its own container runtime but rather integrates with a broad range of
container runtimes such as Docker, containerd, and CRI-O. Kubernetes is also not designed to provide
middleware such as messaging systems; however, some platforms built on Kubernetes do provide these
services.
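
To make that scheduling behavior concrete, the minimal sketch below uses the official Kubernetes Python client to report which worker node each pod was placed on. It assumes a reachable cluster and a local kubeconfig; the namespace is only an example.

# Minimal sketch: show where the Kubernetes scheduler placed each pod.
# Assumes the official "kubernetes" Python client and a local kubeconfig.
from kubernetes import client, config

config.load_kube_config()                 # use load_incluster_config() when running inside a pod
v1 = client.CoreV1Api()
pods = v1.list_namespaced_pod(namespace="default")
for pod in pods.items:
    print(pod.metadata.name, "->", pod.spec.node_name, pod.status.phase)

If a node fails, Kubernetes reschedules the affected pods, and the same query would show them running on the surviving nodes.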


Hybrid cloud with VMware and HPE


This section explains the unique partnership between VMware and HPE and then introduces some of the
solutions they offer.


VMware and HPE

Figure 1-10: VMware and HPE

HPE and VMware have been collaborating on delivering solutions for their customers for more than 20
years. With the delivery of HPE Synergy, the world’s first composable infrastructure solution, HPE made it
even easier for HPE VMware customers to move to a software-defined infrastructure (SDI).

The two companies have also collaborated to help companies simplify their hybrid cloud environment. By
integrating VMware Cloud Foundation (VCF) and HPE solutions, the two companies have made it easier
to design, install, validate, deploy, and manage a hybrid cloud solution.

Learn more about this ongoing alliance by visiting the HPE and VMware Alliance page.


VMware Cloud Foundation 4 with Tanzu

Figure 1-11: VMware Cloud Foundation 4 with Tanzu

VMware Cloud Foundation (VCF) is a hybrid cloud platform, which can be deployed on-premises as a
private cloud or can run as a service within a public cloud. This integrated software stack combines
compute virtualization (VMware vSphere), storage virtualization (VMware vSAN), network virtualization
(VMware NSX), and cloud management and monitoring (VMware vRealize Suite) into a single platform.
In the version 4 release of VCF, VMware added Tanzu, which embeds the Kubernetes runtime within
vSphere. VMware has also optimized its infrastructure and management tools for Kubernetes, providing a
single hybrid cloud platform for managing containers and VMs.


VCF components

Figure 1-12: VCF components

SDDC Manager is the management platform for VCF, enabling admins to configure and maintain the
logical infrastructure. It also automatically provisions VCF hosts.
In addition to SDDC Manager, VCF includes the following components:

vSphere (compute)
VMware vSphere hypervisor technology lets organizations run applications in a common operating
environment, across clouds and devices. vSphere includes key features such as:
• VM migration
• Predictive load balancing
• High availability and fault tolerance
• Centralized administration and management

vSAN (storage)
vSAN is a storage solution that is embedded in vSphere. It delivers storage for virtual machines, with
features like:
• Hyper-converged object storage
• All flash or hybrid
• Deduplication and compression data services
• Data protection and replication

NSX-T (networking)
Networking is often the last part of the stack to virtualize, but virtualizing the network is key to achieving
the full benefits of a software-defined data center. Without it, network and security services still require
manual configuration and provisioning, ultimately becoming the bottleneck to faster delivery of IT
resources.
VMware NSX-T has been updated to support hybrid cloud environments. In addition to supporting ESXi
servers, NSX-T supports containers and bare-metal servers. It also supports Kubernetes and OpenShift,
as well as AWS and Azure. Furthermore, it is not tied to a specific hypervisor, so it also supports Microsoft Hyper-V
environments.
NSX-T also supports:
• Distributed switching/routing
• Micro-segmentation
• Load balancing
• L2-L7 networking services
• Distributed firewall
• Analytics

vRealize Suite
vRealize Suite is the integrated management environment for the hybrid cloud.
For example, within VMware’s vRealize Suite, operational capabilities continuously optimize workload
placement for running services based on policies that reflect business requirements. Automation
capabilities within the Suite can leverage those same policies when deciding where to place a newly
requested service.


vSphere LifeCycle Manager: Simpler lifecycle management

Figure 1-13: vSphere LifeCycle Manager: Simpler lifecycle management

VMware introduced vSphere Lifecycle Manager (vLCM) in vSphere 7. As the name suggests, vLCM is
designed to help customers manage the entire lifecycle of ESXi hosts. For example, vLCM helps
customers deploy clusters more easily and quickly and then helps IT admins monitor and manage them.
With vLCM, IT admins can establish a “desired state,” and vLCM will automatically check to ensure hosts
meet that state. vLCM also supports vendor add-ons, which allow vendors to integrate their products
tightly with vLCM and vSphere. You will learn more about the HPE Hardware Support Manager plug-in
later in this course.


Deploy VCF or just the VMware solutions the customer needs

Figure 1-14: Deploy Hybrid Cloud Platform or Just Products Customer Needs

Customers can deploy VCF on:


• Composable systems such as HPE Synergy
• vSAN ReadyNodes and networking switches
They can also purchase VCF as a service from a public cloud provider or as part of a VMware Cloud
Universal subscription. These options will not be covered in this course.
Rather than purchasing the complete hybrid cloud platform, customers can also purchase VMware
components separately. For example, customers may just want to virtualize compute and storage. In this
case, they would purchase only vSphere and vSAN.
As you will learn in Module 3, vSAN is not the only option customers have for making storage more
software-defined and better integrated within a VMware environment. In addition to vSAN solutions, HPE
provides HPE SAN arrays, including Nimble and Primera, for integrating with VMware.


HPE Synergy: Composable infrastructure for VCF and virtualized, containerized, and bare metal workloads

Figure 1-15: HPE Synergy: Composable infrastructure for VCF and virtualized, containerized, and bare
metal workloads

You will now consider how the HPE Composable Infrastructure empowers SDI and hybrid cloud in more
detail. A composable infrastructure is designed to be programmed from the hardware up.
A Composable Infrastructure supports bare metal and containerized workloads, as well as virtualized
ones. But it also abstracts resources into fluid pools created from underlying physical resources.
Customers can then dynamically assign and release resources from these pools. These resource pools
must be programmable by open-standards-based APIs, which allow for scripting and automation of
resource allocation. Automation enables real-time resource allocation, which helps companies to support
their on-demand applications and services, particularly for developer environments, but for other use cases as well.
Customers often find themselves pulled between the demands of their traditional applications, which
require stability and are carefully managed by IT operations teams, and the demands of emerging cloud
apps, which are driven by developers’ requirements and the need for speed. An HPE Composable
Infrastructure helps customers simplify because it is a single infrastructure that supports both types of
apps. Whether customers need to deploy workloads on bare metal, as VMs, or in containers—or some
mixture of the three—the HPE Composable Infrastructure provides the same fluid resource pools that can
be composed for the current needs and the programmable processes to ease deployment.
HPE Synergy Gen10 compute modules also support the HPE Silicon Root of Trust.
The key benefits are described in more depth below.

Fluid resource pools


• Single infrastructure of disaggregated resource pools: Admins can map packages of storage and
compute resources, called modules, to each other within the Synergy frame; this creates miniature
networks of resources that have just the right proportion of storage and compute for a given workload.
• Physical, virtual, and containers: Thanks to the Unified API and SDI, you can provision and automate
bare metal with the same smooth simplicity that you would expect from VMs.
• Auto-integrating of resource capacity: the composer, which controls a Synergy system, will discover
new modules that you add to the system, and can be configured to provision them.


Unified API
• Single line of code to abstract every element of infrastructure: the API, which is hosted by the
composer, allows admins to write and run scripts that tell any part of the infrastructure what to do.
• Full infrastructure programmability: Because admins can script their commands, they can automate
management work that they previously had to perform manually.

Software-defined intelligence
• Template-driven workload composition: Admins can dynamically compose workloads; for example,
they can write a script that directs Synergy to support virtual desktop infrastructure (VDI) in the day,
and perform analytics at night.
• Frictionless operations: By automating so many processes, Synergy helps IT teams reduce the cost
of human error, which commonly occurs when admins have to perform repetitive tasks manually.


Review HPE Compute solutions

Figure 1-16: HPE Compute Solutions for virtualized environments

HPE has a diverse selection of servers optimized to accommodate virtualized workloads.


HPE ProLiant servers are the most popular HPE servers. In addition to providing firmware-level security,
ProLiant servers automate workload optimization. These servers include features such as Workload
Matching and Workload Performance Advisor.
HPE Apollo Systems meet the needs of companies with data-driven workloads such as big data
analytics and artificial intelligence/machine learning (AI/ML). Apollo systems are modular, easily scalable,
and high-performance.
For companies who need to deploy virtualized workloads at the edge—often as part of an Internet of
Things (IoT) solution—HPE offers HPE Edgeline. Edgeline is rugged, so it can be deployed wherever the
customer’s network edge may be, and it converges operational technology (OT)—such as data
acquisition, control systems, and industrial networks—with IT.
All HPE Gen10 servers give customers a secure foundation with firmware-level security delivered by HPE
Silicon Root of Trust. They also support HPE InfoSight (which is described later in this module).


Review HPE Storage solutions

Figure 1-17: HPE Storage for virtualized environments

HPE offers a variety of storage options for customers deploying VMware solutions.
HPE MSA is an entry-level SAN storage solution, designed for businesses with 100 to 250 employees
and remote office/branch offices (ROBOs). MSA offers the speed and efficiency of flash and hybrid
storage and advanced features such as Automated Tiering.
Most customers can benefit from HPE Nimble to meet their storage needs. HPE Nimble provides
99.9999% guaranteed availability. It also uses Triple+ Parity RAID for resiliency, which allows a Nimble
array to withstand three simultaneous drive failures in one group.
Nimble simplifies the storage lifecycle. For example, Nimble is simple to install and provision, so IT
generalists can deploy it. HPE Nimble can also scale up or scale out as needed, without disruption to the
customer.
HPE Primera redefines what’s possible in mission-critical storage with three key areas of unique value.
First, it delivers a simple user experience that enables on-demand mission-critical storage, reducing the
time it takes to manage storage. Second, HPE Primera delivers app-aware resiliency backed with 100%
availability, guaranteed. Third, HPE Primera delivers predictable performance for unpredictable workloads
so the customer’s apps and business are always fast.
As this course was being developed, HPE announced two new storage solutions: HPE Alletra 6000 and
Alletra 9000.
Please note that this course does not cover these solutions, but HPE expects to provide the same
VMware integration for these solutions that it provides for HPE Nimble and Primera.
HPE Alletra is engineered to be tightly coupled with the HPE Data Services Cloud Console. Together,
they deliver a common, cloud operational experience across workload-optimized systems on-premises
and in the cloud. Alletra solutions deliver the same agility and simplicity for every application across their
entire lifecycle, from edge to cloud. Customers can deploy, provision, manage, and scale storage in
significantly less time. For example, the platform can be set up in minutes, and provisioning is automated.
HPE Alletra 6000 is designed for business-critical workloads that require fast, consistent performance. It
guarantees 99.9999% availability and scales easily. HPE Alletra 9000, on the other hand, is designed for
mission-critical workloads that have stringent latency and availability requirements. It guarantees 100%
availability.
HPE also offers data protection solutions. HPE StoreOnce meets the needs of customers who require
comprehensive, low-cost backup for a broad range of applications and systems. It provides extensive
support for applications and ISVs so customers can consolidate backups from multiple sources.


Review HPE Hyperconverged solutions

Figure 1-18: HPE Storage for virtualized environments

HPE SimpliVity takes convergence to a new level by assimilating eight to twelve core data center
activities, including solid state drive (SSD) arrays for all-flash storage; appliances for replication, backup
and data recovery; real-time deduplication; WAN optimization; cloud gateways; backup software; and
more. And all of these functions are accessible under a global, unified management interface.
With the convergence of all infrastructure below the hypervisor, HPE SimpliVity allows businesses of all
sizes to completely virtualize the IT environment while continuing to deliver enterprise-grade performance
for mission-critical applications.
A core set of values unites all HPE SimpliVity models. Customers gain simple VM-centric management
and VM mobility. As they add nodes, capacity and performance scale linearly, delivering peak and
predictable performance. Best-in-class data services, powered by the SimpliVity Data Virtualization
Platform, deliver data protection, resiliency, and efficiency.
HPE Nimble Storage dHCI provides a disaggregated hyperconverged infrastructure solution. It allows
customers to scale compute and storage separately, while providing a low-latency, high-performance
solution.


VMware compatibility guide

Figure 1-19: HPE vSAN ReadyNode configurations

HPE offers solutions, including vSAN ReadyNodes, that are validated to be compatible with VMware
products. Visit the VMware Compatibility Guide and select Hewlett Packard Enterprise as the vendor. You
will see a list of available solutions and can select each solution to view more information about it.


HPE OneView and InfoSight

Figure 1-20: HPE OneView and InfoSight

HPE InfoSight and HPE OneView work hand in hand to establish a data center that can run itself.
HPE InfoSight helps to deliver the self-monitoring and self-healing components of an SDI, with predictive
analytics that support automation and provide customers with helpful AI-based recommendations for
resolving issues and optimizing. In some cases, InfoSight can predict and mitigate issues before they
occur without human intervention. You will learn more about InfoSight in Module 3.
HPE OneView helps to make the SDI self-provisioning and self-managing with template-based
provisioning and management and a Unified API.
HPE OneView is the engine for the HPE automated data center. Much of HPE OneView’s power comes
from the Unified API, which enables OneView to communicate with infrastructure devices and users to
reprogram servers, storage, and networking. HPE OneView conceals the complexity of infrastructure
management from upper layer applications while exposing functionality to an ecosystem of tools,
infrastructure applications, and business applications.
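
To make the Unified API concrete, here is an illustrative Python sketch that authenticates to a OneView appliance and lists its server profiles over the REST API. The appliance address, credentials, and API version are placeholders, and the X-API-Version value should be checked against the customer's OneView release; HPE also publishes SDKs (for example, Python and Ansible libraries) that wrap these same calls.

# Minimal sketch: read server profiles from HPE OneView through its REST (Unified) API.
# The address, credentials, and API version below are placeholders.
import requests

ONEVIEW = "https://oneview.example.local"
HEADERS = {"X-API-Version": "2000"}      # confirm the version supported by the appliance

login = requests.post(f"{ONEVIEW}/rest/login-sessions",
                      json={"userName": "administrator", "password": "password"},
                      headers=HEADERS, verify=False)   # lab use only; validate certificates in production
HEADERS["Auth"] = login.json()["sessionID"]

profiles = requests.get(f"{ONEVIEW}/rest/server-profiles", headers=HEADERS, verify=False)
for profile in profiles.json().get("members", []):
    print(profile["name"], profile.get("serverHardwareUri"))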


Summary

Figure 1-21: Summary

In this module you have learned about the benefits that businesses stand to gain by transforming to an
SDI or hybrid cloud environment. You also learned that HPE and VMware have a long-standing alliance,
working together to integrate their solutions. Together they provide the SDI and hybrid cloud solutions
customers need.


Activity 1

Figure 1-22: Activity 1

You will now return to the customer scenario introduced at the beginning of the module and learn more
about it.
Financial Services 1A is a prominent institution in its region, but it is facing new competition, and its
numbers are flagging. The company has one top goal: attract and retain more customers. After extensive
research, C-level executives have determined that the best way to do so is to offer personalized services
based on each customer's lifestyle, stage in life, and financial goals. Like many financial institutions, the
company offers digital self-service options for customers, but the company wants to add more financial
services and also simplify access, while maintaining strict security. IT is also investigating using AI to
make its fraud protection services more reliable.
The new initiatives will require the customer to scale up services and accommodate changing workloads
more aggressively.
This customer currently has a highly virtualized deployment with more than 80% of workloads virtualized.
The company uses VMware vSphere version 6.7, but none of the vRealize Suite applications. The CIO
feels that IT has reached a stalling point with the virtualized environment. The CIO has shared issues
such as these with you:
• The virtual environment and the physical environment are out of sync. Admins can provision a new
VM very quickly, but getting a new host deployed takes a very long time. The same goes for setting
up new storage volumes and datastores.
• IT has started using tools such as Ansible to start automating. Everyone is enthusiastic at first, but
when admins get down to trying to automate everything, they run into issues. There are always parts
of service deployment, particularly with the physical infrastructure, that resist automation.
• The CIO does not have a good view of the entire environment. The bare metal workloads and virtual
workloads are totally siloed.
• The vSphere admins do not have a firm idea about what is going on in the physical infrastructure.
They and the network and storage admins sometimes seem to struggle to communicate what the
virtual workloads need as far as physical resources.

Additional background on the company


Financial Services 1A is primarily a credit union (a member-owned banking association). It has 120
branches and about 850,000 members who have accounts with it. In addition to savings and checking
accounts, the company offers loans such as mortgages and auto loans. The company has about 1200
employees, including a sizeable development and IT staff.
Based on your research, Financial Services 1A has about US$10 billion in assets.
The customer has one primary data center and a disaster recovery (DR) site. About 10 years ago, the
customer consolidated services in a VMware vSphere deployment. The primary data center has 30
VMware hosts in 6 clusters, running a variety of workloads including:
• General Active Directory services
• General enterprise solutions
• An extensive web farm for both internal and external sites
• Development platforms
• The Web front end interacts with a number of applications, including
– Customer banking and self-service applications
– Investment management
– Loan management
– Inventory management
– Business management
The company also has about 20 bare metal servers running more intensive data analysis and risk
management applications. The company further has several load balancing appliances and security
appliances such as firewalls and an intrusion detection system/intrusion prevention system (IDS/IPS).
While the vSphere deployment hosts some business management solutions, the company moved some
of its customer relationship management (CRM), HR, and payroll services to the cloud about 3 years ago.
The customer also archives some less sensitive data in Amazon Web Services (AWS).
The ESXi hosts and bare metal servers are mostly HPE ProLiant DL servers (primarily 300 series and
Gen8). The customer also has about a dozen legacy Dell servers. The storage backend for the vSphere
deployment currently consists of Dell EMC storage arrays.
The data center has a leaf and spine network using HPE FlexFabric 5840 switches. Traffic is routed at
the top of the rack.
After reviewing the scenario, take about 20 minutes to create a presentation about the HPE approach to
SDDC and how it applies to the customer’s pain points and goals.
You can use the space below to record ideas for your presentation.

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________


__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________


Learning checks
1. What is one feature of a software-defined infrastructure (SDI) according to Moor Insights?
a. It monitors and heals itself.
b. It is 100 percent virtualized.
c. It is 100 percent containerized.
d. It requires a hybrid environment.
2. Which are benefits that HPE Synergy provides? (Select two.)
a. Synergy converges all of the infrastructure below the hypervisor, providing an ideal
platform for VMs.
b. Synergy is a density-optimized solution that is designed for IoT solutions.
c. Synergy provides a unified API, which enables companies to use tools such as Chef
and Ansible to automate tasks.
d. Synergy includes HPE OneView, which automates the management of both Synergy
and VCF, replacing SDDC Manager in a VCF deployment.
e. Synergy enables companies to deploy virtualized, containerized, and bare metal
workloads on the same infrastructure.




Design an HPE Composable
Infrastructure Solution for a Virtualized
Environment
Module 2

Learning objectives
In this module, you will learn how to size an HPE Synergy solution for VMware vSphere. You will then
look at best practices for deploying VMware vSphere on HPE Synergy.
After completing this module, you will be able to:
• Given a set of customer requirements, position software-defined infrastructure (SDI) solutions to
solve the customer’s requirements
• Given a set of customer requirements, determine the appropriate software defined platform (such as
virtualization farm, scale out database, VDI, streaming analytics, and scale out storage).
• Given a set of customer requirements for a virtualized environment, determine the appropriate
software defined compute technology


Customer scenario: Financial Services 1A

Figure 2-1: Customer scenario: Financial Services 1A

Working with Financial Services 1A’s CIO and top decision makers, you have created a plan for
accelerating the company’s efforts to attract and retain customers. You are going to revitalize the
customer’s vSphere deployment by moving it to the composable HPE Synergy. This plan will help make
the customer’s network more automated and orchestrated from the physical infrastructure to the virtual
infrastructure.


Sizing the HPE Synergy solution for VMware vSphere


You will begin by learning how to gather the information that you need to plan the solution. You will then
explore how to apply that information to size the HPE Synergy solution properly.


Gathering information: Migrating an existing vSphere deployment

Figure 2-2: Gathering information: Migrating an existing vSphere deployment

When you are planning to migrate an existing vSphere deployment to Synergy, you need to collect as
much information about that environment as you can. You also need to understand the customer’s
expectations for the environment.

VM profiles
VM profiles allow you to standardize the configuration of VMs. You can establish a VM profile for each
type of VM. As you plan the migration, you must catalog the resources that are required for each type of VM:
• Number of vCPUs
• Allocated RAM
• Disk size
You should also attempt to determine the input/output operations per second (IOPS) and disk throughput
requirements for each type of VM.
In addition to documenting the VM profiles, you should track how many of each type of VM are required.

Storage capacity requirements


You need to know how much space VMware datastores are using. These datastores hold the virtual disks
as well as snapshots, logs, and config files.

Subscription expectations
The virtual resources allocated to VMs consume physical resources on the ESXi host. Because not every
VM will operate at 100% utilization at the same time, resources can be oversubscribed. However, too
much oversubscription can compromise performance. Based on the "VMware vSphere ESXi Solution on
HPE Synergy: Best practices for sizing vSphere and ESXi on HPE Synergy" white paper, a 4:1 vCPU-to-
processor core ratio provides ample performance for most environments. In other words, a host with 32
cores could support VMs with 128 vCPUs total. If the customer has lower priority or lighter VMs, the ratio
could even go as high as 8:1 (which might create a small degradation in performance).
If VMs have higher CPU utilization, the ratio could be lower.
The same white paper suggests that 125% oversubscription for memory should be conservative, based
on memory sharing and other technologies provided by VMware. However, some customers might not be
comfortable oversubscribing memory.


You need to work with the customer to specifically define the amount of oversubscription that the
customer will tolerate. If the customer has mission-critical VMs, you will need to assess their specific
requirements and plan for them without oversubscription.
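
As a quick worked example of these ratios (the VM counts here are illustrative, not taken from the scenario):

# 120 VMs x 4 vCPUs = 480 vCPUs; at a 4:1 vCPU-to-core ratio they need 480 / 4 = 120 cores.
# 120 VMs x 16 GB = 1920 GB allocated RAM; at 125% memory oversubscription the hosts need
# 1920 / 1.25 = 1536 GB of physical RAM (or the full 1920 GB with no oversubscription).
total_vcpus, total_ram_gb = 120 * 4, 120 * 16
print(total_vcpus / 4, total_ram_gb / 1.25)   # -> 120.0 cores, 1536.0 GB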

Cluster plans
You need to know whether the vSphere environment uses clusters. A cluster consists of multiple ESXi
hosts. VMs are deployed to the cluster rather than to the individual host. VMware Distributed Resource
Scheduler (DRS) assigns the VM to a host based on considerations such as load, as well as
configurable affinity and non-affinity rules for VMs. A cluster can also implement high availability (HA).
Among other features, HA ensures that, if a host within the cluster fails, its VMs restart on another host in
the cluster.
If the customer uses clusters, you need to know which clusters will support which VMs. You also need to
define the availability requirements. Should the cluster be able to tolerate the failure of only one host (N+1
redundancy), or of more than one host?
Also find out whether the cluster will apply Fault Tolerance (FT) to any VMs. FT creates a standby copy of
the VM, so it will essentially double the requirements for that VM.

Current host profiles


It can be useful to profile the legacy ESXi hosts. What processors do they use and how many cores do
those processors have? What is the average CPU, memory, IOPS, disk throughput, and network
utilization on the hosts? The last three values will be particularly important to know so that you can plan
adequate storage performance. However, it can also be useful to look at the current average CPU and
memory utilization. If those exceed 80%, then the customer’s VMs are currently oversubscribed. You
should take that into account when you discuss the desired oversubscription levels with the customer.

Growth requirements
Discuss how quickly the solution is expanding. Agree on a growth rate per year and a number of years for
which the solution will accommodate that growth. For example, you might size the solution to
accommodate 5% growth for 3 years.


Gathering information: Migrating from physical machines

Figure 2-3: Gathering information: Migrating from physical machines

Rather than migrate an existing vSphere deployment, you might be working with a customer who wants to
virtualize physical workloads, migrating them to VMware vSphere on HPE Synergy. In this case, you
should profile each physical machine. Here you see information that you should collect. You can then
work with the customer to convert that information into a profile for a VM that can handle the same
workload. For example, if the physical machine has 16 cores and currently operates at 15-20 percent
utilization, you and the customer might decide that 4 vCPUs is sufficient for the VM.
Similar to a migration from an existing vSphere environment, you should also discuss desired
oversubscription levels, plans for using VMware clustering, and expected growth.


How do you get the information?

Figure 2-4: How do you get the information?

In addition to interviewing the customer, you can obtain the information that you need from a number of
tools. It is strongly recommended that you use one or more of these tools to collect information, as
customer documentation can be spotty or outdated, leading you to undersize a solution if you rely on
them alone.

HPE Assessment Foundry (SAF)


This free suite of tools helps you collect data from customer environments. It analyzes configuration and
workloads, generating detailed reports. It also helps you size HPE solutions. For more information, click
HPE Assessment Foundry.

HPE Software-Defined Opportunity Engine (SDOE)


SDOE is an AI-enabled tool, which is available through HPE InfoSight. Using AI and deep learning,
SDOE provides insights into customers’ storage environment and then offers recommendations for
technology solutions. In less than a minute, SDOE auto-generates customer proposals, which include
sizing, configuration, and total cost of ownership analysis. It also adds optional HPE Financial Services
and Pointnext information. For more information about SDOE, visit Seismic or HPE Products and
Solutions Now.

Perfmon
If you are migrating physical workloads to vSphere on Synergy, you can track resource utilization on
Windows machines using Perfmon. Perfmon shows utilization for any hardware resources on the
machine, including CPU, memory, and disk drives. The figure above shows an example in which you are
monitoring a number of disk related counters. You can also create a Data Collector Set for System
Performance to collect data on an ongoing basis.
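
Perfmon is the native Windows option; if you prefer to capture comparable counters from a script (for example, to collect the same data across many machines), one possible approach is the cross-platform psutil library, as in the sketch below. It is an illustrative alternative, not part of the HPE toolset.

# Minimal sketch: sample CPU, memory, and disk counters with the psutil library.
import psutil

for _ in range(5):                                   # five one-second samples
    cpu = psutil.cpu_percent(interval=1)             # % CPU over the sampling interval
    mem = psutil.virtual_memory().percent            # % physical memory in use
    disk = psutil.disk_io_counters()                 # cumulative read/write counters
    print(cpu, mem, disk.read_count, disk.write_count)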

Microsoft Assessment and Planning (MAP) Toolkit


If you are migrating physical workloads to vSphere on Synergy, MAP can provide a more sophisticated
tool for planning the migration. It lets you export reports with information about inventory and resource
utilization.


Considering types of workloads


You need to consider types of workloads that will be running as VMs. The following sections will help you
review characteristics of common workloads.

Traditional databases (OLTP)


These workloads tend to:
• Scale up (but parallelized on the scale up server)
• Exhibit a high number of random reads and writes
• Require data integrity
• Be mission critical

Business management
These applications run on databases, which might be traditional or in-memory. These workloads tend to:
• Be latency sensitive
• Be mission critical
• Require high IOPS

Object storage
Common characteristics include:
• Scale out
• IOPS intensive

In-memory database (e.g. SAP HANA)


Common characteristics include:
• Scale up
• Memory intensive
• Write heavy
• High IOPS
• Latency sensitive
• Data integrity required

Big data and analytics


Common characteristics include:
• Scale out
• Compute and IOPS intensive (balance depends on application)
– More IO heavy: Sorting and searching
– More compute heavy : Classification, feature extraction, data mining
• Latency sensitive
• Sometimes memory intensive (Spark, Hive)


EUC or VDI
End user computing (EUC) refers to any solution for allowing users to access compute resources
remotely. Virtual desktop infrastructure (VDI) is a common example. Common characteristics include:
• Latency sensitive
• Possible need for GPU acceleration (power users using applications like CAD)


Positioning the HPE Synergy compute module for the workload

Figure 2-5: Positioning the HPE Synergy compute module for the workload

You can mix and match compute modules for the Synergy frames based on the workloads that the
customer needs to support. Use the figure above to match your customers’ workload to an appropriate
Synergy compute module.
As you can see, the Synergy 480 Gen10 is a great go-to option for many workloads, including VDI, email,
collaboration, system management, web serving, engineering, object storage, networking services, and
content or application development. It can even support SAP and business management workloads if
they are on the lighter end in terms of the number of users and requests. For similar applications with more
demanding requirements, recommend the HPE Synergy 660 Gen10.


Sizing the solution

Figure 2-6: Sizing the solution

You are now ready to input the information that you gathered and turn that into a BOM, specifying the
type and number of Synergy compute modules that you need, as well as their configuration and
accompanying components such as D3940 modules, Synergy frames, Composers, Frame Link Modules,
and interconnect modules.
This course assumes that you are familiar with the components of a Synergy solution and focuses on
sizing the compute modules for the vSphere deployment.
As you size such a solution, keep some additional best practices in mind. You should size to keep VM
load on the host’s resources at 80 percent or under. You also need to consider redundancy if the
customer uses HA clusters. For example, if the customer wants N+1 redundancy, you should scope the
solution with an extra module so that the remaining modules can support the load if one module fails. If
the customer plans to use fault tolerance, you should double the requirements for each FT-protected VM.
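
The arithmetic behind these rules can be captured in a rough helper such as the sketch below; it simply applies the 80 percent ceiling, the chosen vCPU ratio, and an N+1 (or larger) spare count, and it is not a substitute for the HPE sizers listed next.

# Rough host-count estimate for an ESXi cluster (sketch only, not an HPE sizer).
import math

def hosts_needed(total_vcpus, total_vm_ram_gb, cores_per_host, ram_per_host_gb,
                 vcpu_ratio=4.0, max_util=0.80, ha_spares=1):
    # Keep planned load at or below max_util of each host; count FT-protected VMs twice
    # in the inputs before calling, because FT maintains a full standby copy.
    usable_cores = cores_per_host * max_util
    usable_ram_gb = ram_per_host_gb * max_util
    by_cpu = (total_vcpus / vcpu_ratio) / usable_cores
    by_ram = total_vm_ram_gb / usable_ram_gb       # assumes no memory oversubscription
    return math.ceil(max(by_cpu, by_ram)) + ha_spares

# Illustrative numbers: 100 VMs x 4 vCPUs and 16 GB RAM on 40-core, 384 GB hosts
print(hosts_needed(100 * 4, 100 * 16, cores_per_host=40, ram_per_host_gb=384))   # -> 7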
Whenever possible use an HPE sizer to size the solution. You can look for sizers at these links:
HPE Assessment Foundry
HPE SSET
Note that HPE SSET provides guidance on sizing VMware ESXi and VMware Cloud Foundation (VCF) on
HPE Synergy.
HPE Products and Solutions Now
HPE Tech Pro Community


Example sizing for Financial Services 1A cluster 1

Figure 2-7: Example sizing for Financial Services 1A cluster 1

You will now consider the example scenario again.


The customer’s vSphere deployment runs mostly generic enterprise applications, particularly web
servers, and supports application development environments. You recommend that the customer
standardize on SY480 Gen10 compute modules for the ESXi hosts. In this example, you are sizing just one of the
customer’s clusters.
With the help of the sizer, you might decide on two Intel Xeon Scalable Gold-6222V processors. These
processors are optimized for supporting VMs and provide 20 cores each. The modules are also
provisioned with 384GB RAM and a Smart Array Controller.
You will focus on the storage solution in Module 3, including when you would use a vSAN or backend
SAN arrays.
Four of these modules provide 160 cores, which meets the customer's vCPU needs based on a 4:1 ratio.
They also provide 1536 GB of RAM, which exceeds the requirements as well. (In this example, the customer
wanted 100% memory subscription—or, in other words, no oversubscription.) You will recommend five
modules to provide N+1 redundancy for the cluster.
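
As a quick arithmetic check of the capacity that configuration provides (the per-VM requirements themselves come from the figure and are not repeated here):

# Four active modules (the fifth is the N+1 spare), each with 2 x 20-core processors and 384 GB RAM
cores = 4 * 2 * 20            # = 160 physical cores
vcpus_at_4_to_1 = cores * 4   # = 640 vCPUs supported at a 4:1 ratio
ram_gb = 4 * 384              # = 1536 GB with no memory oversubscription
print(cores, vcpus_at_4_to_1, ram_gb)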


Activity 2.1

Figure 2-8: Activity 2.1

For this activity, you will return to the Financial Services 1A customer scenario.
After your discussion of the plan for helping the company transform to an SDI, Financial Services 1A has
decided to have you propose migrating vSphere to HPE Synergy.
Earlier in this module, you reviewed how to size one cluster for Financial Services 1A. Now you will look
at a second cluster for the customer. (The environment has additional clusters, but you do not need to
consider them for the purposes of this activity.) This second cluster supports a variety of Web applications
and services for the customer's website and mobile banking apps.
The customer has told you that this cluster must support 60 VMs with this per-VM profile:
• 4 vCPUs
• 16 GB RAM
• 60 GB disk

Task 1
What additional information do you need to collect in order to properly size the deployment?

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________


__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________


Task 2
In response to your questions, the customer has indicated that a 4:1 vCPU-to-core oversubscription ratio is
acceptable, along with 100% RAM subscription (no memory oversubscription). The customer wants N+1
redundancy for the cluster (one host can fail without impacting performance). You used Lanamark and vCenter to discover this information:
• VM count, vCPUs, and allocated RAM given by customer are confirmed as correct
• Total IOPS = 2034 write; 4325 read
• Datastore Total = 5600 GB
• Datastore Provisioned = 3600GB
• Datastore Used = 3023 GB
Create a BOM for this cluster. Use the HPE Synergy sizer for VMware vSphere, which you can find by
following the steps below.
1. Access https://psnow.ext.hpe.com

Figure 2-9: Task 2: Products & Solutions Now

2. Log in with your credentials.


3. Click the arrow next to Tools & Resources.

Figure 2-10: Task 2: Access Tools & Resources


4. Select the check box next to 4. Sizing.

Figure 2-11: Task 2: Select Sizing

5. Scroll down and select the HPE SSET (Solution Sales Enablement Tool).

Figure 2-12: Task 2: Select HPE SSET

6. If prompted, log in again.


7. Select Start New Guidance.

Figure 2-13: Task 2: Start New Guidance

8. Choose the appropriate sizing from the list (VMware ESXi on HPE Synergy).

Figure 2-14: Task 2: Choose sizing from list


9. Click Start.
10. Fill in the information that the customer provided you and click Review. (Indicate no preference for
storage at this point.)
11. Export the BOM and take notes on how you will present the BOM to the customer.

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________


Best practices for deploying VMware vSphere on HPE Synergy
You will now review best practices for deploying the VMware vSphere solution on HPE Synergy.


VMware vSphere on HPE Synergy best practices guide

Figure 2-15: VMware vSphere on HPE Synergy best practices guide

The best practices outlined in this section are based on the "VMware vSphere on HPE Synergy Best
Practices Guide." It provides a set of standard practices for deploying VMware vSphere clusters on HPE
Synergy infrastructure. You will now look at each step in more detail. If, after you have completed this
section, you want to learn more, you can download this guide and read it in full.


Cluster design

Figure 2-16: Cluster design

You have already looked at many topics related to cluster design, as these were relevant to sizing the
solution. For the cluster design step, you simply follow one best practice for Synergy. Distribute nodes in
the same cluster across frames.
For example, your solution for Financial Services 1A might have six clusters: one with three modules, two
with five modules, and three with six modules. The figure above illustrates how you could distribute those
clusters across 3 frames. Distributing the nodes evenly minimizes the impact if a full frame fails.
As another example, you might have sized Financial Services 1A as requiring two three-module clusters and
several five-module clusters. You should distribute those modules as evenly across the frames as
possible. You should also clarify the customer requirements: should the cluster operate without
degradation of services if one compute module fails, or if an entire frame fails? In the latter case, you
might want to add another module to each five-module cluster, making it a six-module cluster, so that it
can tolerate the loss of the two modules that a frame failure would take with it.
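
A minimal sketch of that check follows, assuming nodes are spread as evenly as possible across frames; the cluster sizes and the three-frame layout are simply the example numbers from this page.

# Check whether a cluster spread across frames still has enough nodes for its
# workload after an entire frame fails. Example numbers only.
def max_nodes_per_frame(cluster_size, frames):
    # Spread nodes as evenly as possible; return the largest per-frame count.
    base, extra = divmod(cluster_size, frames)
    return base + (1 if extra else 0)

def survives_frame_failure(cluster_size, required_nodes, frames):
    return cluster_size - max_nodes_per_frame(cluster_size, frames) >= required_nodes

print(survives_frame_failure(5, 4, 3))   # False: a frame failure can remove 2 of 5 nodes
print(survives_frame_failure(6, 4, 3))   # True: 2 nodes per frame, 4 remain after a frame loss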


Support scalability with templates

Figure 2-17: Support scalability with templates

For the next two steps, you will look at best practices for fabric and server profile design. First, though,
consider a basic best-practice rule: support scalability by using templates, including logical
interconnect groups (LIGs) and server profile templates (SPTs).
For example, look at a case in which a customer has one rack with three Synergy frames. Now the
customer wants to add another rack. Review each step to see how templates make it easy for customers
to scale the solution.

Step 1
You open up the management ring on the existing rack and easily integrate the frame link modules on the
three new frames into the ring. The existing Composer will now manage both racks. You could move the
redundant Composer to a frame in the new rack for rack-level redundancy.

Step 2
You power on everything, and Composer auto-discovers the new frames.

Step 3
Admins can apply the existing enclosure group (EG) and LIG templates to the new frames, which quickly
establishes the correct connectivity and network settings. The new frames just need to have their
conductor interconnect modules cabled into the row switches, following a similar layout as used in the
original rack.

Step 4
The logical enclosure settings are applied in tandem with firmware updates. Within a few hours and with
minimal admin work, the new Synergy frames are available.

Step 5
Admins can apply the existing SPTs to compute modules in the new frames to quickly scale up the
desired workloads.
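
The same scaling flow can be scripted. The sketch below uses Python and the OneView REST API to create a server profile from an existing SPT for a compute module in one of the new frames. The appliance address, credentials, template name, and server hardware URI are placeholders, and the endpoint paths reflect the OneView REST API as described here only as an illustration; verify them against the API reference for your Composer version.

# A minimal sketch: create a server profile from an existing SPT through the
# OneView REST API. Addresses, credentials, and names are placeholders.
import requests

APPLIANCE = "https://composer.example.local"     # placeholder appliance address
HEADERS = {"X-API-Version": "2000", "Content-Type": "application/json"}

# 1. Authenticate and capture the session token.
session = requests.post(f"{APPLIANCE}/rest/login-sessions", headers=HEADERS,
                        json={"userName": "administrator", "password": "secret"},
                        verify=False)
HEADERS["Auth"] = session.json()["sessionID"]

# 2. Find the existing ESXi SPT by name.
templates = requests.get(f"{APPLIANCE}/rest/server-profile-templates",
                         headers=HEADERS, verify=False).json()["members"]
spt = next(t for t in templates if t["name"] == "esxi-host-spt")

# 3. Generate a profile body from the SPT, point it at the new compute module,
#    and create it. OneView then applies connections, BIOS, and storage settings.
profile = requests.get(f"{APPLIANCE}{spt['uri']}/new-profile",
                       headers=HEADERS, verify=False).json()
profile["name"] = "esxi-host-07"
profile["serverHardwareUri"] = "/rest/server-hardware/<new-module-id>"  # placeholder
requests.post(f"{APPLIANCE}/rest/server-profiles", headers=HEADERS,
              json=profile, verify=False)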


Review fabric best practices: Connections


To ensure smooth functioning for the Synergy fabric, you should follow a few rules. Read each section to
review best practices for the compute module connections.

Figure 2-18: Review fabric best practices—Mezzanine and ICM connections

Mezzanine and interconnect module (ICM) connections


The figure above reviews which interconnect module bay connects to which mezzanine on compute
modules. Bays 1-3 connect to the first port on the mezzanines, and bays 4-6 to the second port.
If you want to connect compute modules to D3940 modules for local storage, to storage arrays using fibre
channel (FC), and to the Ethernet data center network for management and production traffic, follow
these population rules:
• Install SAS Interconnect Modules in bays 1 and 4; install Smart Array controllers in compute modules’
mezzanine 1
• Install your choice of FC interconnect modules in bays 2 and 5; install FC HBAs in compute modules’
mezzanine 2
• Install your choice of Ethernet interconnect or satellite modules in bays 3 and 6; install Ethernet or
FlexFabric NICs in compute modules’ mezzanine 3
Virtual Connect FC modules, for FC, and Virtual Connect FlexFabric modules, for Ethernet plus optional
Fibre Channel over Ethernet (FCoE), are recommended to unlock the full benefits of Synergy composability.


Multiple FlexNICs

Figure 2-19: Review fabric best practices—Multiple FlexNICs

You will now focus on mezzanine 3. A Converged Network Adapter (CNA) plus a VC SE 40Gb F8 ICM
together unlock the full benefits of composable networking. A CNA can be divided into multiple FlexNICs
or connections, each of which looks like a physical port to the OS running on the compute module. The
number of supported FlexNICs per port is the lower of what the VC module and the CNA support. The VC
SE 40Gb F8 module supports eight per port, as does the 4820C CNA, but the 3820C CNA supports only four.
Admins can set bandwidth policies per connection. For our purposes the compute module is an ESXi
host, so virtual switches or virtual distributed switches (vDS) own the FlexNICs and connect VMkernel
adaptors or VM port groups to them.
As just one example, with a single two-port CNA on the compute module, the ESXi host can have
redundant deployment, management, vMotion/FT, and production ports.
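
As an illustration of that layout, the Python sketch below builds the connection list you might place in a server profile template for the two-port CNA: one FlexNIC per port for each function, so every function gets a redundant pair, with a per-connection bandwidth value. The names, network URIs, and Mbps values are placeholders, and the field names follow the OneView server profile schema only as an assumption to verify for your API version.

# Illustrative connection layout for a two-port CNA: redundant deployment,
# management, vMotion/FT, and production pairs. Placeholder names and URIs.
import json

def pair(conn_id, name, flexnic, network_uri, mbps):
    # One FlexNIC (same letter) on each CNA port gives the ESXi host a redundant pair.
    return [
        {"id": conn_id, "name": f"{name}-a", "functionType": "Ethernet",
         "portId": f"Mezz 3:1-{flexnic}", "networkUri": network_uri,
         "requestedMbps": mbps},
        {"id": conn_id + 1, "name": f"{name}-b", "functionType": "Ethernet",
         "portId": f"Mezz 3:2-{flexnic}", "networkUri": network_uri,
         "requestedMbps": mbps},
    ]

connections = (pair(1, "deploy", "a", "/rest/ethernet-networks/deploy", "1000")
               + pair(3, "mgmt", "b", "/rest/ethernet-networks/mgmt", "1000")
               + pair(5, "vmotion", "c", "/rest/ethernet-networks/vmotion", "4000")
               + pair(7, "prod", "d", "/rest/network-sets/prod", "10000"))

print(json.dumps({"connectionSettings": {"connections": connections}}, indent=2))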

Synergy FC convergence

Figure 2-20: Review fabric best practices—Synergy FC convergence


Admins can also configure one of the FlexNICs on a CNA port, and the paired FlexNIC on the other port,
to use FC or enhanced iSCSI; the FlexNICs are then called FlexHBAs. In this example, 3:1c and 3:2c
operate in FCoE mode and are assigned to Synergy FC networks. The ports appear as storage adapters
on the ESXi host, which it can use to connect to SAN storage arrays, accessible through the VC ICMs,
which require FC licenses.
This design could eliminate the need for a mezzanine 2 and interconnect modules in bays 2 and 5. On
the other hand, fewer FlexNICs are available for other purposes.


Review fabric best practices: Mapped VLAN vs Tunneled mode


Synergy controls how compute module ports connect to each other and to other devices in the data
center through networks. You will now consider some best practices for using either mapped VLAN mode
or tunneled mode for networks.

Mapped VLANs

Figure 2-21: Mapped VLANs

The example you see in the figure above has fewer connections for simplicity, but the same principles
apply even if you are using more connections.
You assign each compute module connection to a network. Interconnect modules have uplink sets that
own one or more external ports on the interconnect module. The uplink set also has networks assigned to
it. The compute module connection can send traffic to any other compute module connections in the
same network and over the uplink ports assigned an uplink set with its network.
With mapped VLANs, every network is assigned a VLAN ID. An uplink set can support multiple networks
so that those networks can share the uplinks. To maintain network divisions, traffic for all of the networks,
except the one marked as the native network, is tagged with the network VLAN ID as it is sent over the
uplink. If a compute module connection is assigned to a single network, the traffic is untagged on the
connection. But a downlink connection can also support multiple networks, which are bundled in a
network set. Again traffic for all networks, except the network set’s native network, is tagged on the
downlink. This is useful for connecting to virtual switches that send tagged traffic for multiple port groups.
Mapped VLANs give Synergy the most control, and are recommended in most circumstances. However,
they do require VMware admins to coordinate the VLANs that they set up in VMware and in Synergy.
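
The pyVmomi sketch below shows that coordination point from the VMware side: creating a distributed port group whose VLAN ID matches the VLAN ID assigned to the corresponding mapped network in Synergy. The vCenter address, credentials, switch name, and VLAN 210 are placeholders; treat this as a minimal sketch rather than a complete provisioning workflow.

# Minimal pyVmomi sketch: a distributed port group whose VLAN ID must match
# the VLAN ID of the mapped network defined on Synergy. Placeholder values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Locate the distributed switch that owns the Synergy-facing uplinks.
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "dvs-synergy")

# The port group VLAN (210 here) must equal the Synergy network's VLAN ID.
pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
pg_spec.name = "prod-vlan-210"
pg_spec.type = "earlyBinding"
pg_spec.numPorts = 64
port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_cfg.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(vlanId=210,
                                                                  inherited=False)
pg_spec.defaultPortConfig = port_cfg

dvs.AddDVPortgroup_Task([pg_spec])
Disconnect(si)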


Tunneled mode

Figure 2-22: Tunneled mode

Tunneled mode opens up the network to support any VLAN tags. If a virtual switch uses a connection
with a tunneled mode network, admins can add new port groups and VLANs without needing to change
the Synergy configuration. However, tunneled mode causes all networks to share the same broadcast
domain and ARP table. If upstream switches bridge VLANs, this will cause MAC addresses to be learned
incorrectly and disrupt traffic. Therefore, tunneled mode is only recommended for very changeable
environments such as with DevOps.
And you will learn how to create an even better solution with NSX in Module 4; that solution will keep
mapped VLAN networks stable on Synergy while allowing VMware admins to add new VM networks
flexibly.


Review fabric best practices: Redundancy


Now that you understand how networks link compute module connections to uplink sets on ICMs, you can
look at some best practices for using redundant links. The following sections describe the two main ways
to establish multiple connections to the data center LAN.

M-LAG and LACP-S

Figure 2-23: M-LAG and LACP-S

For most Ethernet networks, it is recommended that you use LACP-S, or S-channel, to create link
aggregations between pairs of compute module connections. Pairs of connections are defined as
FlexNICs with the same letter on different ports. Connections in the same LAG are assigned to the same
network, and the OS that runs on the compute module must define the connections as a LAG too. For
ESXi this means that a distributed switch configured with a LAG must own the connections. LACP-S
provides faster fault recovery and better load balancing compared to traditional NIC teaming with OS load
balancing.
LACP-S works best when the connected ICMs use an M-LAG to carry the connections’ networks. The
ICMs automatically establish an M-LAG when the same uplink set has ports on both ICMs. The two ICMs
present themselves as a single entity to the devices connected to those ports. They could connect to one
data center switch or to two switches in a stack that also supports M-LAG. VC SE 40Gb F8 modules support
up to eight active links per M-LAG. (Each module has six 40GbE uplinks, which can each be split into four
10GbE links. All links in the M-LAG must be the same speed.)
When you use LACP-S and M-LAG together, whichever ICM receives traffic from the downlink LACP-S
LAG forwards the traffic across a local link in the M-LAG. Similarly when an ICM receives traffic from
upstream, destined to the compute module connection, it forwards the traffic on its local downlink in the
S-channel. This reduces traffic on the links between ICMs.
Note also that this view shows the compute module connected directly to the ICMs for simplicity. In reality
the compute module might connect to satellite modules, which connect to the conductor VC ICMs in
another frame. Only conductor ICMs have uplinks. Logically, though, the topology is the same.


Single ICM LAGs and Smart Link

Figure 2-24: Single ICM LAGs and Smart Link

For iSCSI, a different configuration is recommended. The compute module’s pair of iSCSI connections
should be assigned to two different networks with no aggregation. To decrease unnecessary traffic
over the conductor-to-conductor links, the VC conductor modules should have different uplink sets, each
supporting only its own downlink’s network. Each module can establish a LAG to the uplink switch with its own
links, but not an M-LAG.
This design requires Smart Link to handle failures. Without Smart Link, if all uplinks on an interconnect
module fail but the downlinks are still operational, the compute modules will continue to send traffic on the
iSCSI network that has the failure, causing disruption. Smart Link shuts down the downlinks in a network if all
the uplinks fail, allowing the compute module to detect the failure and fail over to the other connection.
You might also choose to use this design to permit an active/active configuration if the data center
switches do not support a stacking technology such as IRF, DRNI, or VSX. The virtual switch could load
balance with originating source port (by VM), for example, so some VMs would use the uplinks on ICM 3
and some would use the uplinks on ICM 6.
Although the last two figures have shown the two approaches separately for clarity, the same CNA can
combine the two approaches on different FlexNICs. For example, you can have the iSCSI connections
using Smart Link and no link aggregation while the management and production connections use LAGs.
Similarly, the ICMs can have some uplink sets that use LAGs and some that use M-LAGs, but each uplink
set owns ports exclusively.


Review fabric best practices: Internal and private networks


As a final topic for fabric design, you should know the use cases for internal networks and private
networks.

Internal networks

Figure 2-25: Internal networks

Internal networks are not assigned to uplink sets on interconnect modules, but are assigned to downlink
ports on compute modules. That means that compute modules can communicate with each other through
the interconnect modules, but their traffic does not pass out into the data center network. The traffic
extends as far as the connected conductor and satellite modules, which could be three frames.
If a cluster is confined to three frames, internal networks can be useful for functions like vMotion and FT.
A production network, to which VMs connect, can also be an internal network, but only if the VMs in that
network only need to communicate within the three-frame Synergy solution. Also remember that VC
modules are not routers. Consider whether VMs need to communicate at Layer 3, even with VMs on
hosts in the same Synergy frames. If the data center network is providing the routing, the VMs' networks
must be carried on an uplink set.


Private networks

Figure 2-26: Private networks

A private network blocks connections between downlinks, but permits traffic out uplinks. This can be
useful if the network includes less trusted or more vulnerable VMs. Many hackers attempt to move from
one compromised machine to others, seeking to find more privileges and sensitive information as they go.
Preventing VMs from accessing VMs on another host can limit the extent of an attack. Of course, a
private network does not work when VMs need to communicate together as part of their functionality.
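
The illustrative sketch below ties the last few pages together by expressing three Synergy Ethernet network definitions as Python dictionaries: a mapped (tagged) VLAN for production, an iSCSI network with Smart Link enabled, and a private network for less-trusted VMs. The names, VLAN IDs, and purposes are examples, and the field names follow the OneView ethernet-networks schema only as an assumption; check the exact payload (including any required type/version string) for your API version.

# Illustrative OneView Ethernet network definitions. Example values only.
import json

networks = [
    {"name": "prod-210", "vlanId": 210, "ethernetNetworkType": "Tagged",
     "purpose": "General", "smartLink": False, "privateNetwork": False},
    {"name": "iscsi-a", "vlanId": 301, "ethernetNetworkType": "Tagged",
     "purpose": "ISCSI", "smartLink": True, "privateNetwork": False},
    {"name": "dmz-guests", "vlanId": 400, "ethernetNetworkType": "Tagged",
     "purpose": "General", "smartLink": False, "privateNetwork": True},
]

for net in networks:
    print(json.dumps(net, indent=2))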


HPE Synergy support for key features

Figure 2-27: HPE Synergy support for key features

The Synergy adapters support some key functions for the virtualization workload.
Single root input/output virtualization (SR-IOV) enables network traffic to bypass the software switch layer
typical in a hypervisor stack, which results in less network overhead and performance that more closely
mimics a non-virtualized environment. To make this feature available to the customer, you must choose an
Ethernet adapter that supports it. You must also deploy compatible ICMs for the selected adapter.
The SR-IOV architecture on VC allows up to 512 VFs, but the Ethernet adapter itself might support fewer.
When admins create a connection in a Synergy server profile or SPT, they can enable VFs and set the
number of VFs from 8 to the max supported by the adapter. Admins can then assign individual VMs on
that host to a port group and the SR-IOV-enabled adapter. Each VM is assigned its own VF on the
adapter and has its own IP address and dynamic MAC Address; VLAN settings come from the port group
and should match what is configured for the network on Synergy. In this way, admins can continue to
manage VM connections in a mostly familiar way, but the VMs experience dramatically improved
performance.
Many Synergy adapters also support DirectPath IO. This technology improves performance and
decreases the CPU load on the hypervisor by allowing VMs direct access to the hardware. However, this
technology is only recommended for workloads that need maximum network performance as it comes
with some significant drawbacks. It is not compatible with HA, vMotion, or snapshots.


Best practices for server profile template design

Figure 2-28: Best practices for server profile template design

Here you see some best practices for designing SPTs for ESXi hosts. When the Synergy frame uses
VCs, an SPT can include connections, which define the correct networks for the compute modules’
adapters. You already saw some typical designs for these in the previous section. Admins can set
bandwidth reservations on each connection from the SPT. They can use Network I/O Control (NetIOC) in
VMware to set limits on distributed port groups; NetIOC also supports bandwidth limits at the VM virtual adapter level.
The SPT can also define BIOS settings, which include workload profiles that customize server operations
so as to optimize for the expected workload. VMware recommends setting the workload profile to either
"Virtualization – Power Efficient" or "Virtualization – Max Performance" depending on whether the
customer prioritizes efficiency or performance.
You can also use the SPT to create volumes on local drives on D3940 modules and attach them to the
compute module. And you can even manage HPE storage arrays in Synergy, create volumes on them,
and attach those volumes to the compute module through the SPT. Synergy handles all of the
complexities in the background. This feature is a great value add for customers who are used to having to
coordinate with storage experts to get ESXi hosts attached to volumes. You will learn more about both
local and SAN options in Module 3.
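
The Python sketch below shows two of the SPT fragments just described: a BIOS override that selects a virtualization workload profile, and a bandwidth reservation on a connection. The option ID "WorkloadProfile" and the value string are placeholders; pull the exact IDs and allowed values from the server hardware type resource on your OneView appliance before using anything like this.

# Minimal SPT fragment sketch: BIOS workload profile plus one connection with
# a bandwidth reservation. Placeholder names, URIs, and option values.
import json

spt_fragment = {
    "name": "esxi-host-spt",
    "bios": {
        "manageBios": True,
        "overriddenSettings": [
            {"id": "WorkloadProfile", "value": "Virtualization-MaxPerformance"}
        ],
    },
    "connectionSettings": {
        "connections": [
            {"id": 1, "name": "prod-a", "functionType": "Ethernet",
             "portId": "Mezz 3:1-d", "networkUri": "/rest/network-sets/prod",
             "requestedMbps": "10000"}   # bandwidth reservation for this FlexNIC
        ]
    },
}
print(json.dumps(spt_fragment, indent=2))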


Best practices for VMware vSphere ESXi hypervisor provisioning

Figure 2-29: Best practices for VMware vSphere ESXi hypervisor provisioning

HPE provides a custom ESXi image for deploying on HPE Synergy compute modules (as well as other
HPE ProLiant servers). This image comes pre-loaded with HPE management tools, utilities, and drivers,
which help to ensure that Synergy modules can perform tasks such as boot from SAN correctly.
Customers can obtain the HPE Custom Image for Synergy compute modules from HPE's support downloads.
You should also make sure that Synergy compute modules’ firmware is updated to align with the driver
versions used in the HPE Custom Image. See the Service Pack for ProLiant (SPP) documentation at
https://hpe.com/info/spp and the “HPE ProLiant server and option firmware and driver support recipe”
document on http://vibsdepot.hpe.com for information on SPP releases supported with HPE Custom
Images.

If customers want to customize the image further, they can use VMware Image Builder, which is included
with the vSphere PowerCLI. They can add vSphere Installation Bundles (VIBs) with additional drivers,
HPE components, or third party tools. They also have the option of downloading HPE ESXi Offline
Bundles and third-party driver bundles and applying them to the image supplied by VMware. Or
companies can choose from the ESXi Offline Bundles and third-party drivers to create their own custom
ESXi image.

If VMware updates the image in the future, HPE supports application of the update or patches to the HPE
Custom Image. However, HPE does not issue an updated Custom Image every time that VMware
updates. Instead, it updates the image on its own cadence.


Lifecycle management and VMware vSphere integration with HPE OneView

HPE offers multiple OneView integrations with VMware vSphere tools. These integrations enable
customers to monitor, manage, optimize, and troubleshoot the virtual and physical environments together.
In this way, customers orchestrate host provisioning with VM deployment—for example—and automate
the complete lifecycle of their virtual environment. The following sections provide a brief preview of just
some of the benefits. Module 5 explains more.

Step 1

Figure 2-30: HPE OneView for vCenter plugin

Using the HPE OneView for vCenter (OV4VC) plugin, VMware admins can monitor the physical
infrastructure with the virtual infrastructure. They can view information such as utilization or see a map of
the network connectivity from virtual switch to data center switch.

Step 2

Figure 2-31: Cluster-aware firmware upgrades

Cluster-aware firmware upgrades make it simple to upgrade ESXi hosts’ software without disrupting
services.


Step 3

Figure 2-32: Expanding a cluster

If the admins need to expand a cluster, all they have to do is install the new compute module, and a
simple wizard gets the OS deployed and the new host joined to the cluster in a few clicks.

Step 4

Figure 2-33: Viewing alerts in VMware vRealize Operations

Admins can look in VMware vRealize Operations and see alerts about potential issues related both to the
virtual and physical environment. They can troubleshoot more quickly and with a lot less frustration.


Step 5

Figure 2-34: OV4VC’s Proactive HA capabilities

With OV4VC’s Proactive HA capabilities, if OneView detects an issue with a Synergy ESXi host, it alerts
vCenter, which moves the host’s VMs to other hosts in the cluster. This protects the VMs in case the host
fails. The infrastructure is one step closer to zero downtime and to driving itself.


One infrastructure for virtualized and bare metal

Figure 2-35: One infrastructure for virtualized and bare metal

Many customers, even ones with highly virtualized environments, have some workloads that need to stay
on bare metal, whether because of performance requirements or the customer’s preference. With HPE
Synergy, however, customers can consolidate bare metal workloads and virtualized workloads in
infrastructure. Customers can use many of the same features to manage the bare metal workloads as
they do the virtualized ones. They can define SPTs to deploy the OS to the bare metal, define networks,
and attach volumes. They can automate deploying SPTs with the OneView API. While each workload
remains deployed in the ideal environment for it, customers can have a single infrastructure for both.


Summary

Figure 2-36: Summary

This module has guided you through taking a customer from a traditional virtualized environment to a
software-defined environment on the composable HPE Synergy. You learned about sizing and design
considerations, as well as deployment best practices.


Activity 2.2

Figure 2-37: Activity 2.2

You will now practice implementing some of what you learned.


Make a list of things to discuss with Financial Services 1A on this page and on the next page.
• Information that you need to help the deployment to run smoothly

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________
• Best practices for the deployment


__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________


Learning checks
1. What does VMware recommend as a typical good starting place for vCPU-to-core ratio?
a. 1:1
b. 1:2
c. 4:1
d. 16:1
2. You are advising a customer about how to deploy VMware vSphere on HPE Synergy.
The customer wants to use redundant ESXi host adapters to carry VMs’ production
traffic. What is a best practice for providing faster failover and best load sharing of traffic
over the redundant adapters? (Select two.)
a. Use an LACP LAG on the VMware virtual distributed switch.
b. Use a Network Set with multiple networks on the uplink set that supports the
production traffic.
c. Make sure to enable Smart Link on the uplink set that supports the production traffic.
d. Set up one link aggregation on one interconnect module and another link
aggregation on the other interconnect module.
e. Use LACP-S on a pair of connections on the compute modules on which ESXi hosts
are deployed.
3. You are advising a customer about how to deploy VMware vSphere on HPE Synergy.
What is a simple way to ensure that the ESXi host has the proper HPE monitoring tools
and drivers?
a. Provision the hosts with the HPE custom image for ESXi.
b. Use Insight Control server provisioning to deploy the ESXi image to the hosts.
c. Manage the ESXi hosts exclusively through Synergy, rather than in vCenter.
d. Customize a Service Pack for ProLiant and upload it to Synergy Composer before
using Composer to deploy the image.
4. How far can an HPE Synergy internal network extend?
a. Within a single Synergy frame
b. Up to the ICM and on its uplink sets, but not back to any downstream ports
c. Across multiple Synergy frames, as long as they are in the same data center
d. Across multiple Synergy frames that are connected with conductor and satellite
modules

You can check the correct answers in “Appendix: Answers.”



Module 3: Design an HPE Software-Defined Storage (SDS) Solution

Learning objectives
This module gives you the opportunity to explore multiple HPE solutions for making storage more
software-defined and better integrated within a VMware environment. You will first look at VMware vSAN,
the VMware option for software-defined storage (SDS) and in particular how you can implement vSAN on
HPE Synergy. You will then look at the options that HPE SAN arrays, including Nimble and Primera,
provide for integrating with VMware.
After completing this module, you will be able to:
• Position supported software-defined storage solutions
• Given a set of customer requirements, determine the appropriate storage virtualization technologies
and solutions


Customer scenario: Financial Services 1A

Figure 3-1: Customer scenario: Financial Services 1A

You are still in the process of helping a company migrate its vSphere deployment to HPE Synergy, and
you need to propose an HPE storage component of the solution. Customer discussions have revealed a
few key requirements. The customer is tired of endless struggles with storage being a black box that
VMware admins have little insight into and that slows down provisioning processes. For the upgrade, they
want a storage solution that provides tight integration with VMware. Ideally, VMware admins should be
able to provision and manage volumes on demand.
Because Financial Services 1A runs mission critical services on vSphere, the company is also concerned
with protecting its and its customers' data. Their current backup processes are too time consuming and
complex, and the customer is concerned that the complexity will lead to mistakes—and lost data.


Virtual environment requirements for SDS


In the first section of this module, you will build on what you learned in the previous module about
gathering customers' storage requirements for virtualization. You will then explore customers' pressing
need for more software-defined storage.


Going beyond capacity and performance requirements

Figure 3-2: Going beyond capacity and performance requirements

In the previous module, you learned about scoping a customer’s requirements for a VMware vSphere
deployment, including storage capacity and performance requirements. But to deliver a truly software-
defined solution, you must go beyond those requirements to help customers solve their vexing problems.
Some of the issues customers face are outlined below.

Finding performance issues


Customers need insight into their storage performance and assurance that they can meet service level
agreements (SLAs). However, most customers do not have the visibility that they need to understand the
true cause of performance gaps, nor to assess whether their environment is operating efficiently.

Obtaining VM-aware storage


When VMware admins need to add new VMs, they typically have to coordinate with storage admins,
sometimes across multiple storage siloes. The average customer is frustrated with managing complex
LUNs and the holdups caused by coordinating storage provisioning.

Gaining visibility
Most customers struggle to correlate and analyze usage data. Isolating and solving issues can take
weeks. They even lack the visibility into their environment that they need to know when they are running
out of disk space.


Evolution of VMware storage integration

Figure 3-3: Evolution of VMware storage integration

A brief look at how VMware storage has evolved can help you understand the challenges that customers
have faced in managing storage for their virtual environments. The focus for this course will be vSAN and
vVols, as well as the unique storage automation features enabled by HPE Synergy and HPE OneView.
The following sections outline these technologies.

VMFS
A VM's drive is traditionally backed by a virtual machine disk (VMDK). This VMDK is a file, which can be
stored on a SAN array. Virtual Machine File System (VMFS) is the file system imposed on the SAN array
for storing the VMDKs. VMware created VMFS in ESX 1.0 in 2001 to fulfill the special requirements of
block storage and impose a file structure on block storage. This file structure was initially flat, but became
clustered in later versions. VMFS enables multiple devices to access the same block storage, locking
each individual VM's VMDKs for that VM's exclusive access.
VMware added support for Network File System (NFS) volumes, which use an NFS server rather than
block storage to store VMDKs, as an alternative to VMFS in VMware Infrastructure 3 (VI3).
With vSphere 7.0, VMware introduced clustered VMDKs. Clustered VMDKs require VMFS 6; they are
useful for supporting clustered applications such as Microsoft Windows Server Failover Cluster (WSFC).
Many customers still use VMFS datastores, but VMFS can be challenging and require a lot of
coordination with storage admins.

VAAI
vStorage API for Array Integration (VAAI) was introduced in ESX 4.1 in 2010 to enhance functionality for
VMFS datastores; it was extended with more primitives in ESX 5.0. VAAI aimed to enlist the storage as
an ally to vSphere by offloading certain storage operations to the storage hardware. For example, cloning an
image requires xcopy operations. With VAAI, a VAAI primitive requests that the storage array perform the
operations, freeing up ESXi host CPU cycles. Other VAAI primitives include unmap and block zero. VAAI
also introduced a better locking mechanism called atomic test and set (ATS).
VAAI is an important enhancement, which is fully supported out-of-the-box on HPE Nimble and HPE
Primera arrays. However, all vendors that support VAAI do so in the same way. In addition to supporting
VAAI, HPE extends its VMware integration to vSAN and vVols, which you will learn more about in this
module, and vCenter, which you will learn more about later in this course.


VASA
vStorage APIs for Storage Awareness (VASA) was introduced in vSphere 5.0 in 2011. VASA APIs let the
storage array communicate its attributes to vCenter. This lets VMware recognize capabilities on storage
arrays such as RAID, data compression, and thin provisioning. While VASA 1.0 was basic, admins can
now create VASA storage profiles to define different tiers of storage, helping them to choose the correct
datastore on which to deploy a VM.
However, VASA only characterizes capabilities at the datastore level. Admins cannot, for example, select
different services for VMDKs stored within the same datastore.

vSAN
VMware introduced virtual SAN (vSAN) in vSphere 5.5 U1 in 2014. This software-defined storage solution
is VMware's second try at virtual storage. vSAN transforms physical servers and their local disks into a
VMware-centric storage service. It is integrated in vSphere and does not require separate virtual storage
appliances (VSAs). In vSAN, VMs write objects to the disks provided by the vSAN nodes without the
requirement of a file system. vSAN features an advanced storage policy based management engine.
You will look at HPE platforms for supporting vSAN throughout this module.

vVols
VMware introduced Virtual Volumes (vVols) in vSphere 6.0 in 2015 as an alternative to VMFS and NFS
datastores. With this solution, a VM's drive can be a vVol—which is an actual volume on the SAN array—
rather than a VMDK file.
The vVol technology provides a similar level of sophistication and VMware-integration as vSAN but for
customers who want to use a storage array backend rather than servers with local drives. Building on
VASA 2.0/3.0, vVols transforms storage to be VM-centric. VMs can write natively to the vVols instead of
through a VMFS file system. As of vSphere 6.5, replication is supported with vVols, and, as of vSphere
7.0, Site Recovery Manager (SRM) integrates with vVols. These features make vVols much more
attractive to enterprises for which availability and disaster recovery (DR) are critical.
Storage vendors create their own vVols solutions to plug into vSphere so vendors such as HPE can
provide a lot of value adds to customers. You will look at the benefits of HPE's solutions for vVols later in
this module.


VMware vSAN on HPE Synergy


In this section, you examine how to deploy VMware vSAN on HPE Synergy.


SDS on HPE Synergy

Figure 3-4: SDS on HPE Synergy

The HPE Synergy D3940 modules fully support SDS, including VMware vSAN. Use cases for SDS on
Synergy include supporting a VM farm, as you are examining for the Financial Services 1A scenario, as
well as supporting virtual desktop infrastructure (VDI). SDS can provide the flexible support for shared
DevOps volumes that app development environments need, and also work well for Web development.
You can also deploy SDS on Synergy to provide managed data services for mid-tier storage.


VMware vSAN overview

Figure 3-5: VMware vSAN overview

VMware vSAN is VMware's integrated SDS solution. It enables a cluster of ESXi hosts to contribute their
local HDDs, SSDs, or NVMe drives to create a unified vSAN datastore. VMs that run on the cluster can
then be deployed on this datastore. The vSAN cluster can also present the datastore for use by other
hosts and clusters using iSCSI. vSAN provides benefits such as establishing a high-speed cache tier and
automatically moving more frequently accessed data to that tier.
Because vSAN eliminates the need for a SAN backend, it can save customers money and simplify their
data center administration. VMware vSAN appeals to customers who want the simplicity of a storage
solution that is integrated with the compute solution and is easy to install with their existing vCenter
server. A vSAN solution can also provide simplicity of scaling; to expand, you simply add another host to
the vSAN cluster.


Why HPE Synergy for vSAN?

Figure 3-6: Why HPE Synergy for vSAN?

What are the benefits of running VMware vSAN on Synergy?

Disaggregated compute and storage


HPE Synergy allows customers to scale compute and storage independently. Because they do not have
to purchase more compute just to get more storage, they can save upfront expenditure. They have the
power to optimize the storage-to-compute ratio to meet the needs of their specific workloads. And because
they can easily re-provision compute modules or recompose how compute and storage connect together,
they can repurpose extra compute and storage for other use cases if their needs change in the future.

Single infrastructure for any workload


With HPE Synergy, customers can deploy an SDS solution like vSAN on some compute modules while
running traditional workloads that use SAN connected storage on other modules. They obtain a standard
architecture for SDS, SAN connected storage, virtualization, containers, and bare-metal—all managed
and monitored by OneView.

High speed interconnect between frames


HPE Synergy frames feature built-in, high-speed, redundant 20Gbps fabric. The iSCSI traffic used for
vSAN can flow east-west without being routed through ToR switches, yielding low latency for server to
storage access.

Reduce complexity and cost


You already learned how HPE Synergy helps customers reduce overprovisioning.
In addition, HPE Virtual Connect (VC) modules help customers deploy a rack-scale fabric that eliminates
ToR switches in favor of end of row (EoR) switches only. This reduces costs and simplifies cabling and
networking.
Synergy also provides a consistent operational experience that lets customers leverage existing tools,
processes and people as they deploy new workloads.


HPE Synergy D3940—Ideal platform for SDS and vSAN

Figure 3-7: HPE Synergy D3940—Ideal platform for SDS and vSAN

You will now consider what makes the D3940 the ideal platform for SDS solutions like vSAN in more
detail.

Flexibility
The D3940 provides a flexible ratio of zoned drives to compute nodes. That means that customers can
choose to assign as many drives to each node as makes sense for their business needs. This flexibility
represents a vast improvement over legacy blade solutions in which storage blades were tied to a single
server blade, causing inefficient use of resources.
Each D3940 storage module provides up to 40 drives and 600 TB capacity. With a fluid pool of up to five
storage modules per frame, up to 200 drives can be zoned to any compute module in the frame.
Each compute module uses its own Smart Array controller to manage the drives zoned to it, so a single
module can support File, Block and Object storage formats together.
The conductor-satellite fabric enabled by VC modules also creates a flat, high-speed iSCSI network for
vSAN that extends over multiple frames, which means that vSAN clusters can extend over multiple
frames, too.

Performance
A non-blocking SAS fabric provides optimal performance between vSAN hosts and the drives zoned to
them on D3940 modules. HPE tests showed that the non-blocking SAS fabric delivers up to 2M IOPS for a
4KB random read workload using SSDs. (The 2M IOPS figure is for a single storage module connected to
multiple compute modules in a DAS scenario.)
Because HPE Synergy enables customers to deploy a customized mix of compute and storage resources
and to scale those separately, it provides an ideal SDS platform.


Figure 3-8: HPE Synergy D3940—Ideal platform for SDS and vSAN


Right sized provisioning for any workload

Figure 3-9: Right sized provisioning for any workload

The flexibility in drive-to-compute module ratio means that the D3940 can deliver the right-sized
provisioning to any workload, including the SDS scenarios that you are examining.
This graph depicts three scenarios with different combinations of half-height compute modules and
D3940 modules in a frame. In the first scenario, the frame has 10 compute modules and one D3940
module, meaning that each compute module can have an average of 4 SFFs zoned to it. This scenario is
ideal for small databases and file sharing servers.
In the second scenario, the frame has six half-height compute modules and three D3940s, giving each
compute module an average of 20 SFFs. This configuration could work for SDS cluster nodes. The final
configuration has four half-height compute modules and four D3940s, meaning that each compute
module can have 40 SFFs dedicated to it, which is ideal for mail and collaboration services, VDI or VM
farms, and mid-sized databases.
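
The averages in those three scenarios come from simple arithmetic, as the short sketch below shows: each D3940 holds up to 40 SFF drives, shared across the half-height compute modules in the frame.

# Average drives per compute module for the three scenarios above.
DRIVES_PER_D3940 = 40

for compute_modules, d3940s in ((10, 1), (6, 3), (4, 4)):
    avg = d3940s * DRIVES_PER_D3940 / compute_modules
    print(f"{compute_modules} compute + {d3940s} x D3940 -> {avg:.0f} drives per module")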


Selecting certified configurations for Synergy and vSAN

Figure 3-10: Selecting certified configurations for Synergy and vSAN

You will now move on to looking at ways to ensure a successful vSAN deployment for your customers,
beginning with proposing Synergy module configurations that HPE has tested and validated with VMware.
To find a certified vSAN Ready Node configuration, use the VMware Compatibility Guide, available on the
VMware website. Select vSAN as what you are looking for and choose Hewlett Packard Enterprise as the vSAN
Ready Node Vendor. You can also choose a vSAN Ready Node Profile. Select HY for hybrid HDD and
flash or AF for all flash. The profile also has a number that indicates its general scale.
Then select Update and View Results. You can scroll through the results and find a Synergy compute
module model and components that are certified for your profile.


Following best practices for vSAN on HPE Synergy: Cluster and network design

Figure 3-11: Following best practices for vSAN on HPE Synergy: Cluster and network design

You should follow a few best practices to ensure that the vSAN cluster, deployed on HPE Synergy,
functions optimally. Use a minimum 3-node cluster. All nodes in the cluster must act as vSAN nodes. As
mentioned earlier, though, the vSAN cluster can present datastores to other clusters.
You should provide redundant connections for the vSAN network and raise the bandwidth limit on each
connection to at least 10 Gbps. The vSAN network can be an internal network as long as the cluster is
confined within a logical frame, which can include multiple Synergy frames connected with a
conductor/satellite architecture. If the cluster extends beyond the logical frame, the vSAN networks
should be carried in conductor module uplink sets, following the guidelines for iSCSI networks laid out in
the previous module.
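
A minimal sketch of a design check that encodes these rules follows; the thresholds come from the best practices in this section, and the inputs are example values only.

# Minimal design check for the vSAN-on-Synergy guidelines described above.
def check_vsan_design(nodes, vsan_nics_per_host, vsan_nic_mbps, spans_logical_frame):
    issues = []
    if nodes < 3:
        issues.append("vSAN cluster needs at least 3 nodes")
    if vsan_nics_per_host < 2:
        issues.append("provide redundant connections for the vSAN network")
    if vsan_nic_mbps < 10000:
        issues.append("raise the vSAN connection bandwidth to at least 10 Gbps")
    if spans_logical_frame:
        issues.append("carry the vSAN networks on conductor uplink sets (iSCSI guidelines)")
    return issues or ["design meets the basic vSAN-on-Synergy guidelines"]

print(check_vsan_design(nodes=4, vsan_nics_per_host=2, vsan_nic_mbps=10000,
                        spans_logical_frame=False))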


Following best practices for vSAN on HPE Synergy: Drivers and controllers

Figure 3-12: Following best practices for vSAN on HPE Synergy: Drivers and controllers

Each vSAN node should use a P416ie-m Smart Array controller operating in HBA only mode to access
D3940 drives (through two SAS Connection Modules in bays 1 and 4). The controller should configure
these drives as just a bunch of disks (JBODs). It is important not to use RAID for these drives.
VMware requires a caching (SSD) drive and one or more capacity drives per node. The Compatibility
Guide will indicate the number and type of drives for each tier. In the SPT or server profile for the vSAN
nodes you should configure the recommended set of caching drives as a single caching logical JBOD.
You can configure the capacity drives as one or more capacity logical JBODs.
You should help the customer understand that vSAN has some restrictions on the boot options. The
compute node can boot from internal M.2 hard drives (mirrored) but it requires a P204i storage controller.
PXE boots are also supported, as are USB boots. However, with USB boots, VMware requires the
customer to make other accommodations for log files so that they are stored in persistent storage.
You cannot configure the P416ie-m in mixed mode and create a boot volume from D3940 drives.
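
The Python sketch below illustrates how this might look as the localStorage section of a vSAN node's server profile: the P416ie-m in HBA mode, one caching logical JBOD of SSDs, and one capacity logical JBOD. Drive counts, sizes, and technology values are examples, and the field names follow the OneView server profile localStorage schema only as an assumption to verify for your API version.

# Illustrative localStorage fragment for a vSAN node's server profile.
import json

local_storage = {
    "controllers": [
        {"deviceSlot": "Mezz 1", "mode": "HBA", "initialize": False}
    ],
    "sasLogicalJBODs": [
        {"id": 1, "deviceSlot": "Mezz 1", "name": "vsan-cache",
         "numPhysicalDrives": 2, "driveMinSizeGB": 400, "driveMaxSizeGB": 800,
         "driveTechnology": "SasSsd"},
        {"id": 2, "deviceSlot": "Mezz 1", "name": "vsan-capacity",
         "numPhysicalDrives": 6, "driveMinSizeGB": 1920, "driveMaxSizeGB": 3840,
         "driveTechnology": "SasSsd"},
    ],
}
print(json.dumps(local_storage, indent=2))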


Following best practices for vSAN on HPE Synergy: Redundant connectivity for D3940s

Figure 3-13: Following best practices for vSAN on HPE Synergy: Redundant connectivity for D3940s

It is also best practice to provide redundant connectivity for the D3940s used in the vSAN solution. You
should install two I/O adapters on each D3940. You must also install two Synergy 12Gb SAS Connection
Modules in the Synergy frame, one in ICM bay 1 and one in ICM bay 4.


HPE vSAN Ready Nodes


You now understand how to design HPE vSAN on HPE Synergy. In this section, you will look at a
different use case: a customer who needs a ready-made solution for VMware vSAN outside of a Synergy
environment.


HPE approach to vSAN Ready Nodes

Figure 3-14: HPE approach to vSAN Ready Nodes

The HPE process for meeting customer needs with vSAN Ready nodes begins by using our expertise to
define a ProLiant DL-based configuration that is optimized for a particular workload. We work with
VMware to certify the configuration. We then add the new node to the catalog for our partners to
recommend to customers.
The VMware Compatibility Guide gives you certified options, but there are many options to choose from
without much guidance as to when you would choose one over the other. HPE has added just a few
vSAN Ready Nodes to the catalog, on the other hand, and we have listed those by workloads. When you
select a VSAN Ready node, OCA only permits you to customize its configuration with a limited set of
certified options, helping to prevent you from making mistakes.
HPE has done this to better help you as a partner, as you know that you should always begin with the
workload to help you position the correct vSAN solution for a customer.
HPE now provides configurations for each supported platform (HPE ProLiant DL325, HPE ProLiant
DL360, and HPE ProLiant DL380 Gen10) that cover all vSAN profiles (HY2, HY4, HY6, HY8, AF4, AF6,
and AF8). There are also four workload-optimized solutions available. The next several pages cover the
four workload-optimized solutions in more detail.


HPE ProLiant DL325 All-Flash 6 for virtualization

Figure 3-15: HPE ProLiant DL325 All-Flash 6 for virtualization

The HPE ProLiant DL325 All-Flash 6 solution is optimized for heavily virtualized and/or web infrastructure
environments. It offers balanced compute, memory, and network resources to support exceptional VM
density.

Workloads or use cases


HPE has optimized this solution for a broad range of use cases related to virtualization, IT infrastructure,
and Web infrastructure. Example workloads include:
• Web Serving
• App/dev (Microservices)
• Collaboration
• Content Delivery Network
• Streaming
• Security
• Systems Management
• Data Management
• Customer Relationship Management (CRM)

Processors
This node uses AMD EPYC processors with 24 to 32 cores.


HPE ProLiant DL360 All-Flash 8 for data management and processing

Figure 3-16: HPE ProLiant DL360 All-Flash 8 for data management and processing

The HPE ProLiant DL360 All-Flash 8 node is optimized for data management and processing. It provides
high disk throughput, low latency, and very high random IO performance.

Workloads or use cases


This node provides high disk throughput and low latency. Turn to this solution to support data-heavy
workloads such as structured analytics and structured data management. Examples of these workloads
include:
• Data analytics
• Data management
• Search
• Electronic Design Automation

Processor
This node uses Intel Xeon Gold processors with 28 to 40 cores (total on two processors).


HPE ProLiant DL380 8SFF All-Flash 4 for accelerated infrastructure

Figure 3-17: HPE ProLiant DL380 8SFF All-Flash 4 for accelerated infrastructure

The HPE ProLiant DL380 8SFF All-Flash 4 is optimized for accelerated infrastructure use cases. It
provides dedicated co-processors to support high-end workloads.

Workloads or use cases


This node provides excellent performance for high-end workloads such as:
• 2D/3D visualization
• VDI

Processor
This node uses Intel Xeon Silver processors with 20 to 24 cores (total on two processors).


HPE ProLiant DL380 24SFF Hybrid 8 for data warehousing

Figure 3-18: HPE ProLiant DL380 24SFF Hybrid 8 for data warehousing

The HPE ProLiant DL380 24SFF Hybrid 8 model is intended for data warehousing and storage use
cases. It is capacity optimized with options for storage expansions.

Workloads or use cases


This node is optimized for workloads such as:
• Data management for colder data (warehousing)
• Long term storage and archival
• Collaboration
• Large analytics

Processor
This node uses Intel Xeon Silver processors with 24 to 32 cores (total on two processors).


Fully automated storage with HPE Synergy and HPE storage arrays

You will now look at another option for HPE Synergy storage: connecting to HPE storage arrays. HPE
storage arrays offer a number of features that help them integrate tightly with HPE Synergy and with
VMware environments.


HPE Synergy fluid resource pools for Tier 1 storage

Figure 3-19: HPE Synergy fluid resource pools for Tier 1 storage

HPE Synergy also offers fluid resource pools for Tier 1 storage through a backend connection to
enterprise flash arrays. HPE storage arrays can provide managed data services such as Quality of
Service (QoS). They are preferable to SDS on D3940 modules when customers need a highly available
solution with disaster recovery capabilities. They are also a top choice for workloads such as CRM, ERP,
Oracle, and SQL, which require low latency and high IO.
Nimble is positioned for business-critical storage and mid-sized companies. HPE Primera provides
mission-critical storage. Designed for ease of use and performance, HPE Primera provides a 100%
availability guarantee and an architecture designed for NVMe.
This section gives more details about how HPE Nimble and Primera arrays provide value-adds for an
VMware environment.


Extra features supported by HPE storage arrays

Figure 3-20: Extra features supported by HPE storage arrays

This figure compares the features supported by the HPE Synergy D3940 to the features supported by
HPE Primera arrays, as an example. Extra features such as advanced replication, the ability to support
stretched cluster, and snapshots explain why customers with mission-critical applications often prefer an
HPE storage array-based solution.


Fully automated provisioning

Figure 3-21: Fully automated provisioning

Traditionally, getting a volume hosted on a storage array attached to an ESXi host involves many relatively complex steps. Storage admins must create the volume. They need to find out the ESXi host's WWNs, add the host to the array, and export the volume to it. SAN admins must also zone the SAN to permit the server's WWNs to reach the array. Server admins must then rescan, find the exported volume by its LUN, and add it. HPE Synergy provides fully automated volume provisioning for volumes on Primera or Nimble.
In the steps below, you can see how Synergy simplifies provisioning volumes.

Step 1
Synergy admins can add SAN Managers such as Cisco, Brocade, and HPE to bring SAN switches into
Synergy. Admins can then create networks for the SANs and manage servers' SAN connectivity using
templates and profiles, as they do servers' Ethernet connectivity.

Step 2
Synergy admins can also add Primera and Nimble arrays to Synergy and create volumes on them from Synergy. They can use storage pools and volume templates to apply policies to volume management.

Step 3
When admins create server profiles and server profile templates, they can add connections for the
servers in the managed SANs. They can also attach volumes to the servers. When the profile is applied
to a compute module bay, Synergy will automate all the heavy lifting of configuring the SAN zoning, as
well as exporting and attaching the volume.
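To make these steps more concrete, the sketch below shows how the same provisioning flow could be driven programmatically against the HPE OneView REST API, which manages HPE Synergy. The endpoint paths are standard OneView resource URIs, but the appliance address, credentials, the URIs in angle brackets, and the payload fields are simplified, illustrative assumptions rather than a complete recipe; consult the OneView API reference for the exact fields your API version requires.

import requests

ONEVIEW = "https://oneview.example.local"          # assumed appliance address
session = requests.Session()
session.verify = False                             # lab use only; verify certificates in production

# Authenticate and reuse the session token on later calls
auth = session.post(f"{ONEVIEW}/rest/login-sessions",
                    json={"userName": "administrator", "password": "secret"}).json()
session.headers.update({"Auth": auth["sessionID"], "X-API-Version": "2000"})

# Step 2: create a volume on a managed Primera/Nimble storage pool (fields abbreviated)
volume = session.post(f"{ONEVIEW}/rest/storage-volumes", json={
    "properties": {"name": "vmfs-datastore-01",
                   "storagePool": "<storage-pool-URI>",
                   "size": 2 * 1024**4,                        # 2 TiB, in bytes
                   "provisioningType": "Thin"},
    "templateUri": "<volume-template-URI>",
}).json()

# Step 3: create a server profile with a SAN connection and the volume attached;
# Synergy then automates the zoning, export, and attachment when the profile is applied
session.post(f"{ONEVIEW}/rest/server-profiles", json={
    "name": "esxi-host-01",
    "serverHardwareUri": "<compute-module-URI>",
    "sanStorage": {"volumeAttachments": [{"volumeUri": volume["uri"]}]},
})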


Additional HPE storage array benefits for VMware environments
You will now look at further benefits of HPE storage arrays for VMware environments. These benefits
apply whether the storage arrays support an HPE Synergy solution or another HPE compute solution
such as HPE ProLiant DL servers.


Overview of HPE storage integrations with VMware

Figure 3-22: Overview of HPE storage integrations with VMware

Whatever the compute solution underlying the VMware environment, HPE storage arrays can make the
VMware environment work more efficiently, deliver simpler management, and provide higher performance
and availability.
Key features that you will examine in this topic include vVols, plugins for vCenter, integration with
VMware Site Recovery Manager (SRM), and integration with HPE Recovery Manager Central for VMware
(RMC-V).


Overview of vVols storage architecture

Figure 3-23: Overview of vVols storage architecture

You examined one option by which HPE automates storage provisioning. Next, you will examine an
alternative solution that is specific to VMware. vVols represents the culmination of the evolution of
VMware and storage. You will now look at vVols in more detail.

Protocol endpoint
Logical I/O proxy that serves as the data path between ESXi hosts (and their VMs) and the respective vVols

VASA provider
Software component that mediates out-of-band communication about vVols between the vCenter Server, ESXi hosts, and the storage array

Storage container
Pool of raw storage capacity that becomes a logical grouping of vVols, seen as a virtual datastore by
ESXi hosts

Virtual Volume (vVol)


Container that encapsulates VM files, virtual disks and their derivatives

Storage Policy-Based Management (SPBM)


Set of rules that define storage requirements for VMs based on capabilities advertised by the storage array; this is the same policy framework used by vSAN
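As a hedged illustration of how these objects surface on the vSphere side, the short sketch below uses the vSphere Automation REST API to list datastores and print those of type VVOL; each such datastore corresponds to a storage container exposed by the array's VASA provider. The vCenter address and credentials are placeholders.

import requests

VCENTER = "https://vcenter.example.local"          # placeholder address
s = requests.Session()
s.verify = False                                   # lab use only

# Create an API session; the token is returned in the response body
token = s.post(f"{VCENTER}/rest/com/vmware/cis/session",
               auth=("administrator@vsphere.local", "secret")).json()["value"]
s.headers["vmware-api-session-id"] = token

# Each datastore of type VVOL maps to a storage container on the array
for ds in s.get(f"{VCENTER}/rest/vcenter/datastore").json()["value"]:
    if ds["type"] == "VVOL":
        print(ds["name"], ds["capacity"], ds["free_space"])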


How vVols changes storage management

Figure 3-24: How vVols changes storage management

vVols empowers vSphere admins to control the functions that they need to control. They get to choose when to create a VM snapshot, thin provision a VM, create a virtual disk, or delete a VM. At the same time, vSphere ESXi hosts no longer need to spend CPU cycles copying or deleting data. Under vSphere's direction, the storage array executes the task automatically. For example, when admins delete a VM, the array deletes the VM's vVols in the storage container and reclaims the space. This automation eliminates common tasks for storage admins and frees up their time for more sophisticated optimization.


How vVols transforms storage in vSphere


vVols transforms the VMware storage environment.
VMFS is LUN-centric. Storage pools are siloed away from VMware management, and because the
storage array cannot see inside the VMFS datastore, it can only apply features to the entire LUN. For
example, array-based snapshots are a great value add to a VMware environment, but with VMFS, the
array must take a snapshot of the entire LUN. Customers cannot set up different snapshot policies for
different VMs. vVols, on the other hand, breaks down the siloes, and array services are aligned to VMs.
Because vVols lets arrays see VMs as objects, arrays can apply features on a granular basis based on
the company's needs and priorities.

Figure 3-25: How vVols transforms storage in vSphere

With VMFS, storage volumes are pre-allocated, which typically means that companies must over-provision resources, leading to inefficiency. With vVols, vSphere admins can dynamically allocate storage only when they need it.

Figure 3-26: How vVols transforms storage in vSphere

With VMFS, provisioning storage is complicated because it requires vendor-specific tools for the storage array. vVols provides simple provisioning and management through vSphere interfaces. vSphere admins can easily add a vVol datastore based on a storage container created on an HPE storage array and attach the datastore to ESXi hosts. The underlying storage is managed in the background, making the process much simpler and more intuitive for non-storage experts.


In addition to reducing the lengthy VMFS provisioning processes, as you saw before, vVols enables
vSphere decisions to automate actions on HPE Nimble and Primera arrays. For example, when a
vSphere admin deletes a VM, the array automatically reclaims space.

Figure 3-27: How vVols transforms storage in vSphere


The HPE Primera and Nimble advantages with vVols

Figure 3-28: The HPE Primera and Nimble advantages with vVols

It is important to understand that vVols is not, strictly speaking, a VMware product; rather, it is a design specification that storage vendors can use to plug their functionality into vSphere. Therefore, vendors like
HPE have a great opportunity to innovate and prove their value in this space. HPE has among the most
mature solutions in this area. The following sections explain the differentiating benefits of HPE Primera
and Nimble solutions for vVols.

Solid and mature


HPE has already taken vVols well beyond the growing pains stage. Its mature vVols solutions have had
six plus years of development. HPE is an integral VMware design partner.

Simple and reliable


Unlike some competitors, HPE Primera and Nimble both have internal VASA Providers built into the solution, rather than requiring external appliances. For customers, this means that they gain the benefits of vVols with zero additional installation requirements. This approach also increases solution availability because it avoids introducing another failure point.

Innovative and efficient


The HPE storage arrays' innovative vVols features help customers to operate more efficiently. For
example, they support placing snapshots on different tiers. The vVol-based snapshots are highly efficient.
Nimble, for example, can quickly snapshot a vVol without actually copying any data. This efficiency helps customers to snapshot more frequently while reducing VMs' footprint in storage capacity. HPE storage arrays also allow admins to manage vVols easily, folder by folder.


HPE Storage vCenter plugins

Figure 3-29: HPE Storage vCenter plugins

Some customers are not ready to shift to vVols. HPE plugins for vCenter allow customers to enjoy a simpler provisioning process for both VMFS and vVol datastores. The Nimble Storage vCenter plug-in supports both the vSphere Web Client and the HTML5 client. Customers can easily create datastores based on Nimble volumes and then attach them to hosts directly without having to search for LUNs.
HPE Storage Integration Pack for VMware vCenter provides similar benefits for HPE Primera. Admins
can create and manage VMFS and vVol based datastores on their Primera arrays directly from VMware.


HPE management and automation portfolio for VMware

Figure 3-30: HPE management and automation portfolio for VMware

In addition to the plugins for vCenter, which you just examined, HPE provides an extensive management
and automation portfolio for integrating with VMware. You will look at much of this portfolio in Module 5,
which covers orchestration of management and monitoring. Over the next part of this module, you will
focus on the data protection portions of the portfolio, examining how HPE arrays integrate with VMware
Site Recovery Manager and also looking at HPE Recovery Manager Central for VMware.


VMware vCenter Site Recovery Manager introduction

Figure 3-31: VMware vCenter Site Recovery Manager introduction

The VMware vCenter Site Recovery Manager (SRM) is a plugin to the vCenter Server that enables you to
create disaster recovery plans for a VMware environment. The recovery plan automates bringing up VMs
in a recovery site to replace failed VMs at a primary site. Because such plans can be complex and require
precise ordering to function correctly, SRM provides a testing feature that lets admins test their plans in
advance. SRM also supports sub-site failover scenarios and failback to move services back to the primary
site again.
SRM can work in scenarios without stretched clusters (a stretched cluster has ESXi hosts in the same cluster at two sites), in which case it brings VMs back up on a new cluster after some downtime. As of version 6.1, SRM can also work with stretched clusters.


HPE Nimble and Primera array benefits for SRM

Figure 3-32: HPE Nimble and Primera array benefits for SRM

SRM requires storage array replication to ensure that VMs can access the correct data at the recovery
site if the primary site fails.
Both HPE Nimble and HPE Primera arrays support Storage Replication Adapters (SRAs) for SRM. These
SRAs integrate the arrays' volume replication features with SRM. The Nimble SRA brings the inherent
efficiency of Nimble replication. Nimble also supports zero-copy clones for DR testing. In other words,
Nimble can create the clones without copying any data, making them highly space efficient and fast to
create.
The Primera SRA supports a broad range of features:
• Synchronous, asynchronous periodic, and asynchronous streaming replication (Remote Copy [RC])
modes
• Synchronous Long Distance (SLD) operation in which an array uses synchronous replication to a
secondary array at a metro distance and asynchronous replication to a tertiary array at long distance
• Peer Persistence with synchronous replication and 3 Data Center Peer Persistence (3DC PP) with
SLD
• VMware SRM stretched storage with 2-to-1 remote copy
Refer to the VMware Compatibility Site to look up the SRA versions compatible with various SRM
versions.
You should also be aware that VMware SRM v8.3 has added support for vVols. Now SRM can replicate
and restore vVols and include vVols in DR plans. When companies use SRM with vVols, SRM can handle
the replication natively and seamlessly. No SRA is required.
HPE provided day 0 integration with this feature on Nimble and has also added support for HPE Primera.
Companies can use SRM with vVols on the HPE storage arrays in a vSphere 6.5/6.7 or 7 environment.
Because SRM is so important to companies, the ability to use vVols with SRM will encourage many more
enterprises to start using vVols and leveraging the other benefits of this technology. HPE remains one of
the few vendors to support SRM with vVols, positioning HPE storage well in the VMware space.


HPE Recovery Manager Central (RMC) and RMC for VMware (RMC-V) overview

Figure 3-33: HPE Recovery Manager Central (RMC) and RMC for VMware (RMC-V) overview

Next, you will look at HPE Recovery Manager Central. RMC is a software solution for integrating HPE
storage arrays with HPE StoreOnce Systems. RMC enables customers to enhance array-based
snapshots, which they love for their ease and speed, but which do not provide true 3-2-1 data protection,
as they are stored in a single location. With RMC, snapshots are easily copied to StoreOnce and even to
the cloud for painless backup and recovery.
RMC can protect several types of applications, including SQL. For this course, though, your main focus is
on RMC for VMware (RMC-V). RMC-V provides backup and replication for VMware environments. It
enables application-consistent and crash-consistent snapshots of VMware virtual machine disks and
datastores. Backups are stored on a StoreOnce system and can be restored to the original or a different
HPE storage array. With HPE StoreOnce Catalyst Copy, customers can even copy backups to a remote
StoreOnce Catalyst or to the cloud.
RMC-V 6.3 supports both HPE Primera and HPE Nimble arrays.


One RMC-V copy policy to orchestrate: Array Snapshot, Express Protect, Catalyst Copy, and Cloud Copy
You will now consider an example of how RMC-V helps to automate data protection. In this example, a
customer has a vSphere environment with a datastore based on a Primera LUN. You are now adding
RMC-V to the solution. You add two StoreOnce Systems, one at the local site and one at a remote site.
The customer also decides to create cloud copies. You deploy HPE RMC and the RMC-V plugin on the vCenter Server. The customer simply needs to create a copy policy, and the data is snapshotted and protected accordingly. The steps are outlined below.

Step 1
vCenter tells the ESXi host to freeze the VMs, and a snapshot is taken of the datastore.

Figure 3-34: One RMC-V copy policy to orchestrate: Array Snapshot, Express Protect, Catalyst Copy
and Cloud Copy

Step 2
The RMC-V plugin contacts RMC, which contacts the Primera array. The array uses Express Protect to
copy the snapshot as backup data to the HPE StoreOnce-A Catalyst.

Figure 3-35: One RMC-V copy policy to orchestrate: Array Snapshot, Express Protect, Catalyst Copy
and Cloud Copy


Step 3
The StoreOnce-A system uses Catalyst Copy to copy the data to the StoreOnce-B system. It uses Cloud
Copy to copy the data to HPE Cloud Bank, which is supported on Azure, AWS, and Scality.

Figure 3-36: One RMC-V copy policy to orchestrate: Array Snapshot, Express Protect, Catalyst Copy
and Cloud Copy

Step 4
Customers can define a variety of rules for each type of copy in their copy policy.

Figure 3-37: One RMC-V copy policy to orchestrate: Array Snapshot, Express Protect, Catalyst Copy
and Cloud Copy
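The copy policy is essentially a set of per-tier rules. The structure below is a purely illustrative sketch, not the actual RMC-V schema; the field names are assumptions chosen to show the kinds of schedule and retention rules the policy in this example might capture.

# Illustrative only: the field names below are assumptions, not the RMC-V schema
copy_policy = {
    "name": "vsphere-prod-datastore-protection",
    "array_snapshot":  {"schedule": "hourly", "retention_days": 2,
                        "app_consistent": True},                 # Step 1
    "express_protect": {"target": "StoreOnce-A", "schedule": "daily",
                        "retention_days": 30},                   # Step 2
    "catalyst_copy":   {"source": "StoreOnce-A", "target": "StoreOnce-B",
                        "retention_days": 90},                   # Step 3
    "cloud_copy":      {"target": "HPE Cloud Bank (Azure)",
                        "retention_days": 365},                  # Step 3
}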


Additional reasons HPE storage is cloud-ready and automated

Figure 3-38: Additional reasons HPE storage is cloud-ready and automated

You will now look at two more key distinguishing features for HPE storage solutions. HPE Nimble arrays
are cloud-ready with the ability to migrate data to and from HPE Cloud Volumes. And both HPE Nimble
and Primera benefit from the AI-driven optimization of InfoSight. The next several pages guide you
through the benefits of these solutions in more detail.


Typical challenges with cloud block storage

Figure 3-39: Typical challenges with cloud block storage

You have explored how HPE storage solutions help to protect the VMware environment on-prem. But
now it is relatively common for customers to keep at least some of their data in the cloud. How safe is
data there? Customers may face some additional challenges when they use cloud block storage such as
Amazon EBS or Azure Disks.

Lack of durability and features


Data loss on EBS is real; Amazon EBS has an annual failure rate of up to 1 in 500 volumes. In addition, cloud block
storage can lack data services. While cloud services offer snapshots, backups and recoveries can take
many hours and cause large drops in performance.
Traditional applications need a variety of enterprise features, such as the ability to share storage to
support failover and load balancing, to resize volumes, and to encrypt volumes. These features are often missing from native cloud block storage.

Lack of visibility and costs


Customers using cloud block storage often complain of huge “surprise” bills. Clouds are a black box, so it
is hard for customers to know how much storage they are using, and usage can easily spin out of control.
Without proper monitoring and tracking tools, customers inevitably run into such unexpected bills. While third-party vendors offer monitoring tools, deploying them can be costly and complex. And they are usually designed for the cloud, so they do not give customers a full view of their data in the cloud and on-prem.

Lock-in
Customers do not want to be locked into services that are increasing in cost or that no longer make sense
for them. But once customers move their data into the cloud, it is difficult and expensive to move the data
out. Cloud providers often hit customers with egress charges if they want to remove their data.


HPE Cloud Volumes Block

Figure 3-40: HPE Cloud Volumes Block

HPE helps customers to surmount these challenges with HPE Cloud Volumes. HPE Cloud Volumes is a
suite of enterprise cloud data services that help customers unlock the potential of hybrid cloud.
Cloud Volumes Block helps customers move their data to the cloud to be near their cloud workloads with
greater ease and less risk. HPE Cloud Volumes Block provides as-a-service block storage for workloads
that run in Microsoft Azure or AWS. Customers can easily migrate their data from on-prem Nimble arrays
to Cloud Volumes Block and then attach the data to Azure or AWS services. Cloud Volumes Block stores
customers’ data in an HPE cloud, with locations that are strategically near Azure and AWS locations to
deliver low latency.
Here you see an example of how HPE Nimble arrays and HPE Cloud Volumes Block provide a simple
and consistent hybrid solution for a variety of workloads. On-prem HPE ProLiant DL servers and Nimble
arrays can support production databases on VMs and cloud-native apps on Kubernetes-managed
containers. The VMs use vVols and the Kubernetes containers use Persistent Volumes (PVs), both
provisioned dynamically on Nimble arrays. The company can have a hybrid solution that spans multiple
public clouds with database workloads in AWS and cloud-native apps in Google Cloud. The Nimble
arrays also hook into the cloud with bi-directional mobility to HPE Cloud Volumes.


Benefits of HPE Cloud Volumes

Figure 3-41: Benefits of HPE Cloud Volumes

Cloud Volumes Block helps customers achieve their goals for cloud storage. HPE also provides a solution
called Cloud Volumes Backup for backup and restoration use cases. Both solutions provide enterprise-grade availability, ease of mobility, and visibility.

Enterprise grade
You manage Cloud Volumes through a simple web portal, just as you do with AWS or Azure, but it provides the enterprise-grade reliability you expect. Compare Nimble's proven six-9s storage availability and Triple+ Parity RAID protection with native cloud storage's three to four 9s of uptime and high annual failure rates. Cloud Volumes delivers data durability that is millions of times better.
Its enterprise grade backups occur in seconds, not hours, so customers can back up their data as often
as they need. Nimble also supports instant clones for use cases such as testing, analytics, or bursting. In
addition, Nimble's efficient snapshots mean that customers are not paying for full copies, but just for
incrementally changed data, which typically adds just a few percentage points of overhead.
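To put those availability figures in perspective, the quick calculation below shows the expected downtime per year at each level.

# Expected downtime per year for a given availability level
MINUTES_PER_YEAR = 365 * 24 * 60

for label, availability in [("three 9s", 0.999), ("four 9s", 0.9999), ("six 9s", 0.999999)]:
    downtime = (1 - availability) * MINUTES_PER_YEAR
    print(f"{label}: about {downtime:.1f} minutes of downtime per year")

# three 9s: about 525.6 minutes (roughly 8.8 hours)
# four 9s:  about 52.6 minutes
# six 9s:   about 0.5 minutes (roughly 32 seconds)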

Ease of mobility
Cloud Volumes gives customers a faster on-ramp to the cloud without requiring drawn out data migration
projects. Customers can migrate data to the cloud without worrying about their infrastructure not being
compatible with the cloud. Cloud Volumes also enables easy mobility between cloud providers so
customers can use multiple clouds and avoid lock-in. If customers find that they need to switch providers,
they do not experience the pain of complex data migration or costly egress charges. They just choose the
new provider in Cloud Volumes, and Cloud Volumes automatically switches the connection to the new cloud provider instantly, without moving a single byte of data. The same ease applies if customers decide to
move data back on-prem—no egress charges.

Global visibility
The Cloud Volumes portal allows customers to track current usage and estimate future costs easily.
Powered by InfoSight, Cloud Volumes gives customers visibility across the cloud and on-prem—without
requiring complex and expensive third party monitoring tools.


HPE InfoSight—Key distinguishing feature for the HPE SDDC

Figure 3-42: HPE InfoSight—Key distinguishing feature for the HPE SDDC

You cannot leave your examination of how HPE arrays make the infrastructure more software-defined
without examining HPE InfoSight. HPE InfoSight is the AI-driven engine behind HPE Nimble, HPE
Primera, and HPE server solutions, helping the data center to manage and monitor itself and leading to
79 percent lower storage operational expenses. It is a game-changer for customers, transforming the
support experience. With InfoSight, 86 percent of issues are automatically opened and resolved. Because
InfoSight can solve problems proactively before the dire consequences occur, HPE storage systems can
deliver six nines or even 100% availability.


Architecting the AI Recommendation Engine

Figure 3-43: Architecting the AI Recommendation Engine

This figure illustrates the architecture for the InfoSight AI Recommendation Engine.

Predictive models
Good predictive models require good data. InfoSight has been collecting and correlating data from
millions of sensors every minute across many installed solutions for years. Because understanding why
applications are not performing as they should requires a broad view, InfoSight collects metrics across
compute and storage. VMVision lets customers choose to send vSphere data along with the other data
packages periodically sent to InfoSight. InfoSight analyzes and correlates that data with the rest, giving
customers deeper insight into their complete environment.
Good predictive models also require guidance, so InfoSight is also expert-trained by the PEAK team of
data scientists.

Recommendation
Too many competing solutions act as if giving customers visibility means giving them more data. But data
without guidance can leave admins with more questions than answers. If IOPS suddenly increase, for
example—what does that mean? Have application demands changed? Has something changed in the
infrastructure? Is it a normal fluctuation or something to worry about? InfoSight gives customers answers.
Its prioritization matrix helps them to understand what their real issues are.

Customer environments (Automatic)


InfoSight is even able to take some actions proactively to mitigate an issue before it causes downtime.
For example, after detecting an issue in one customer's environment, InfoSight can predict that a similar
issue could occur on other systems.


Example of HPE InfoSight in action

Figure 3-44: Example of HPE InfoSight in action

Review an example of how InfoSight has protected customer environments.


• HPE InfoSight detected that a controller in a storage array went down unexpectedly. Because HPE
arrays include built-in redundancy, this failure did not cause an impact, but because it indicated a serious issue, InfoSight flagged the event for analysis.
• Analysis revealed a bug, for which HPE Nimble engineers created and pushed out a fix within 24
hours.
• HPE InfoSight then provided 40 customers a non-disruptive update to avoid potential issues of the
same kind.


Summary of HPE storage array benefits for VMware environments

Figure 3-45: Summary of HPE storage array benefits for VMware environments

Before moving on to the next topic, review the HPE storage solution benefits for VMware environments.

Application aware
vVols on Nimble and Primera enables storage VM-level awareness that helps customers to align storage
resources with VMs and their workload requirements. InfoSight also gives customers clear visibility into
VMs with VMVision.

Deeply integrated
HPE arrays provide full VAAI & VASA 1.0, 2.0 & 3.0 support. HPE Primera also supports VASA 4.0. HPE
Nimble and Primera provide SRAs to enhance SRM's disaster recovery capabilities. HPE also provides plugins for vCenter to help customers manage the storage environment from vCenter.

Predictive
HPE InfoSight delivers predictive AI for the data center. It supports a broad array of HPE infrastructure,
including Nimble arrays, HPE Primera arrays, and HPE servers. Its ability to proactively solve issues and
help the data center manage itself represents a key value add for HPE solutions. InfoSight, as well as
other technologies embedded in the HPE storage solutions, help HPE deliver 6-nines uptime on HPE
Nimble and a 100% availability guarantee on HPE Primera. In this way, HPE storage helps to protect
critical VMs.

Leadership
HPE has partnered with VMware for over 20 years, delivering proven solutions from the datacenter to the
desktop to the cloud. HPE was the first vendor to support the vVols array-based replication capability that
was first available in vSphere 6.5, and one of only three vendors to support replication as of 2021. HPE
also supported vVols in SRM, as soon as v8.3 added that feature to SRM. Because replication and SRM
are key features for many enterprises, HPE storage provides the natural choice for companies who want
the benefits of vVols.
HPE continues to lead in the vVol space. HPE telemetry shows that HPE vVols support over 160,000
VMs as of April 2021.


Rich and powerful


HPE storage arrays have rich vVols features and an architecture optimized for vVols, rather than having
vVols bolted on. The rich features help customers to enhance their data protection. For example, HPE
arrays offer application-consistent snapshots for applications such as SQL Server or Exchange. HPE
arrays also distinguish themselves with their replication features, and, in the case of a failure or disaster,
customers can perform VM Recovery directly from vCenter.


Activity 3

Figure 3-46: Activity 3

Begin this activity by reviewing more details for the scenario.

Scenario
You are still in the process of helping Financial Services 1A migrate its vSphere deployment to HPE
Synergy, and you need to propose an HPE storage component of the solution. Customer discussions
have revealed a few key requirements. The customer is tired of endless issues with storage being a black
box that VMware admins have little insight into and that slows down provisioning processes. For the
upgrade, they want a storage solution that provides tight integration with VMware. Ideally, VMware
admins should be able to provision and manage volumes on demand.
Because Financial Services 1A runs mission critical services on vSphere, the company is also concerned
with protecting its own data, as well as its customers' data. The company's current backup processes are
too time consuming and complex, and the customer is concerned that the complexity will lead to
mistakes—and lost data.
In sizing the Synergy solution, you determined these total requirements for all of the clusters in the
vSphere deployment:
• Total IOPS = 13,000 write; 26,000 read
• Datastore Total = 40 TB
• Datastore Provisioned = 48 TB
• Datastore Used = 36 TB

Task
Prepare a presentation on the relative benefits of vSAN or an HPE storage array as the storage solution
for this customer. In your presentation, note the advantages and disadvantages of both solutions. Also
emphasize the particular distinguishing benefits of HPE for either solution.

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________


__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________


Summary

Figure 3-47: Summary

This module has guided you through designing HPE storage solutions for VMware environments. You focused in particular on how you can deploy SDS as part of a composable infrastructure with HPE Synergy, but you also learned about other HPE vSAN Ready Nodes. Finally, you learned about using HPE storage arrays and the many benefits that these arrays provide for VMware environments.


Learning checks
1. What is one benefit of HPE Synergy D3940 modules?
a. A single D3940 module can provide up to 40 SFF drives each to 10 half-height
compute modules.
b. Customers can assign drives to connected compute modules without fixed ratios of
the number per module.
c. A D3940 module provides advanced data services like Peer Persistence.
d. D3940 modules offload drive management from compute modules, removing the
need for controllers on compute modules.
2. What is one rule about boot options for a VMware vSAN node deployed on HPE
Synergy?
a. The node must boot from a volume stored on the same D3940 module that supplies
the drives for vSAN.
b. The node must use HPE Virtual Connect to boot.
c. The node cannot boot using PXE.
d. The node can boot from internal M.2 drives with an internal P204i storage controller.
3. What is one strength of HPE Nimble and Primera for vVols?
a. They help the customer unify management of vVol and vSAN solutions.
b. They have mature vVols solutions that support replication.
c. They automatically convert VMFS datastores into simpler vVol datastores.
d. They provide AI-based optimization for Nimble volumes exported to VMware ESXi
hosts.
You can check the correct answers in “Appendix: Answers.”



Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment

Learning objectives
This module outlines options for making the network as software-defined as the rest of the data center.
The scenario for this course features a VMware environment, so in this module you will learn how to use
a combination of VMware and HPE technologies to virtualize and automate the network.
You will first learn about NSX and specifically NSX-T, which is the network component for VMware Cloud
Foundation (VCF). You will then look at using ArubaOS-CX switches as the underlay for the data center
and how Aruba NetEdit helps companies automate. Finally, you will briefly review Cisco ACI for cases in
which you need to integrate with this third-party solution.
After completing this module, you will be able to:
• Position HPE software-defined networking (SDN) solutions based on use case
• Design HPE SDN solutions


Network virtualization is at the core of an SDDC approach

Figure 4-1: Network virtualization is at the core of an SDDC approach

Network virtualization is at the core of an SDDC approach. In this module, you will learn about strategies
for virtualizing and automating the network. You will learn how you can create software-defined network
management and control planes that let companies use GUIs and scripts to reconfigure the network to
support new workloads on the fly. This "network hypervisor" overlays the virtualization layer, if present,
and can be programmed to orchestrate network provisioning in sync with workload deployment.


Common networking challenges

Figure 4-2: Common networking challenges

Companies with highly virtualized environments can face issues with making the physical network as
flexible as they need.
Consider a simple example. A company might run a Web service on an ESXi cluster, shown here as
"compute cluster." The company wants to expand the number of Web service VMs, but needs more hosts
to support them. After evaluating the data center, IT finds a place for the new hosts—across a Layer 3
boundary in a new section of the data center. The networking team has a strict rule about terminating
VLANs at the ToR switch. Team members say that trying to change this will cause instability throughout
the data center. Traditionally, this restriction poses a problem because the company wants to keep VMs
of the same type in the same subnet.
Throughout this module, you will look at how network virtualization technologies can help companies
deploy workloads without having to consider the underlying physical topology.
You will also learn about how companies can increase network automation and orchestration so that they
can deploy new workloads, or move workloads, without long delays for network provisioning.


VMware NSX
You will now learn more about VMware NSX. While step-by-step implementation instructions and detailed
technology dives are beyond the scope of this course, by the end of this section, you should understand
the most important capabilities of NSX and be able to make key design decisions for integrating NSX into
your data center solutions.


VMware NSX

Figure 4-3: VMware NSX

VMware NSX comes in two versions.


NSX-V was specific to ESXi hosts controlled by VMware vSphere. However, VMware has announced that
it will end general support for NSX-V in January of 2022. This course focuses on NSX-T, which works with
ESXi hosts, KVM hosts, and bare metal hosts, enabling companies to orchestrate networking for
virtualized, containerized, and bare metal workloads.
NSX-T helps customers virtualize networking and then to automate and orchestrate networks in sync with
their compute workloads. It moves networking to software, creating never-before-seen levels of flexibility.
It fundamentally transforms the data center’s network operational model as server virtualization did 10
years ago. In just minutes admins can move VMs and all of their associated networks across Layer 3
boundaries within a data center and also between data centers. No interruption to the application occurs,
enabling active-active data centers and immediate disaster recovery options.
On the security front, NSX-T brings firewall capabilities inside hosts with automated, fine-grained policies
tied to VMs or container workloads. NSX-T enables micro-segmentation, in which security policies can be
enforced between every VM or workload to significantly reduce the lateral spread of threats inside the
data center. By making network micro-segmentation operationally feasible, NSX-T brings an inherently
better security model to the data center.


VMware NSX architecture

Figure 4-4: VMware NSX architecture

A brief look at the VMware NSX architecture will give you the foundation you need to understand the NSX
features. Review each section to learn about that component of the architecture.

Management plane
The management plane consists of the NSX Manager, which holds and manages the configuration. It integrates with vCenter, the NSX Container Plugin, and the Cloud Service Manager.

NSX Manager
Admins can access the NSX manager through a GUI, as well as through a plugin to vCenter, and
configure and monitor NSX functions. The NSX Manager also provides an API, which enables it to
integrate with third-party applications. By allowing these applications to program network connectivity, the
NSX API provides the engine for wide-scale network orchestration.
You deploy an NSX Manager together with a Controller in an NSX Manager Appliance VM. VMware
recommends deploying a cluster of three NSX Manager Appliances for redundancy.

Control plane
The control plane builds up MAC forwarding tables and routing tables.

NSX Controller
Each NSX Manager Appliance also includes an NSX Controller. The controllers form the Central Control Plane (CCP). They perform tasks such as building MAC forwarding tables and routing tables, which they
send to the Local Control Plane.
Control plane objects are distributed redundantly across controllers such that they can be reconstructed if
one controller fails.

Local Control Plane (LCP)


ESXi hosts, KVM hosts, and bare metal servers in the data plane are collectively called "transport nodes."
The transport nodes receive forwarding information from the controllers in the CCP, enabling them to
forward traffic in a more efficient, distributed fashion. They also receive firewall rules from the CCP.


Data plane
The data plane consists of the transport nodes. They are responsible for receiving traffic in logical
networks, switching the traffic toward its destination, and implementing any encapsulation necessary for
tunneling the traffic through the underlay network. The data plane also routes traffic and applies edge
services.

NSX virtual switch


Each transport node has one or more NSX virtual switches.
On ESXi hosts, the NSX virtual switch was originally a specialized NSX Virtual Distributed Switch (N-VDS). Now the NSX virtual switch can be a familiar VDS, provided that it is VDS 7.0 or above. On any
other non-ESXi host, such as KVM hosts, the NSX virtual switch is based on Open vSwitch (OVS). The
focus for this course will be on ESXi hosts, however.

Edge services
NSX-T provides a distributed router (DR) in the data plane for routing traffic directly on transport nodes.
However, some services such as NAT, DHCP, and VPNs are not distributed. A Services Router (SR),
which is deployed in an edge cluster, provides these services. The SR is also responsible for routing
traffic outside of the NSX domain into the physical data center network.
NSX-T also supports edge bridges, which can connect physical servers into the NSX networks.


Use case 1: Networking virtualization

Figure 4-5: Use case 1: Networking virtualization

You will now examine NSX-T features in more detail, starting with the network virtualization use case.
Network virtualization enables VMs to connect into a common logical network regardless of where their
hosts are located in the physical network. The physical network can implement routing at the top of the
rack without compromising the portability of VMs.


Overlay networking

Figure 4-6: Overlay networking

NSX-T uses overlay networking to provide network virtualization. A brief discussion of overlay networking
in general will be useful.
When designing a data center network, network architects typically prioritize values such as stability,
load-sharing across redundant links, and fast failover. They have found that an architecture that routes
between each network infrastructure device delivers these values well. However, such an architecture
can make it harder to extend application networks wherever they need to go.
With overlay networking, the physical infrastructure remains as it is: scalable, stable, and load-balancing.
Virtualized networks, or overlay networks, lie over the physical infrastructure, or underlay network. An
overlay network can be extended without regard to the architecture of the underlay network. Companies
can then deploy workloads in any location, but still the workloads can belong to the same subnet and
communicate at Layer 2. VMware managers can also deploy overlay networks on demand, without
having to coordinate IP addressing and other settings with the data center network admins.
Overlay networking technologies are also highly scalable, typically offering millions of IDs for the virtual (overlay) networks.
There are many strategies to build an overlay network. Here you are focusing on one of the most
common. Tunnel endpoints (TEPs) create tunnels between them. The tunnels are based on UDP
encapsulation. When a TEP needs to deliver Layer 2 traffic in an overlay network, it encapsulates the
traffic with a header specific to the overlay technology. It also adds a delivery header, which directs the
traffic to the TEP behind which the destination resides. The underlay network only needs to know how to
route traffic between TEPs, and has no visibility into the addressing used for the overlay networks.
Common overlay technologies include Virtual Extensible LAN (VXLAN), Network Virtualization using
Generic Routing Encapsulation (NVGRE), and Generic Network Virtualization Encapsulation (Geneve).
Geneve is a newer standard that supports the capabilities of VXLAN, NVGRE and other network
virtualization techniques; NSX-T uses this technology.
Like VXLAN, Geneve encapsulates L2 frames into UDP segments and uses a 24-bit Virtual Network
Identifier. In Geneve, however, the header is variable in length, making it possible to add extra
information to the header. This information can be used by the underlay network to decide how to handle
the traffic in the best way.
The Geneve header is also extensible. This means that it will be easier to add new optional features to
the protocol by adding new fields to the header.
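The difference in scale between a 12-bit VLAN ID and a 24-bit VNI is easy to quantify, as this short calculation shows.

# Address space of a 12-bit VLAN ID versus a 24-bit overlay VNI (VXLAN/Geneve)
vlan_ids = 2 ** 12      # 4,096 traditional VLAN IDs
vni_ids = 2 ** 24       # 16,777,216 virtual network identifiers
print(vlan_ids, vni_ids, vni_ids // vlan_ids)     # 4096 16777216 4096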


Technologies such as VXLAN and Geneve do not provide automation on their own. However, NSX-T
provides the orchestration layer, enabling admins to simplify and automate the configuration of overlay
networks.


Overlay segments

Figure 4-7: Overlay segments

NSX-T calls overlay networks "overlay segments."


In a vSphere deployment without NSX, a distributed port group (dvportgroup) on a VDS creates a
network, or VLAN, to which VMs on multiple hosts can connect. However, those hosts must all be in the
same Layer 2 domain; otherwise, the VMs cannot connect on the same network.
An NSX-T overlay segment, on the other hand, creates a logical network that interconnects VMs at Layer
2, even when their hosts are divided by Layer 3 boundaries. The overlay segment is analogous to the
dvportgroup. It is associated with a VDS or VDSes through a transport zone, and VMs connect to the overlay segment. In fact, the overlay segment even appears as an NSX-T dvportgroup inside of vCenter
(when the ESXi host is using a VDS as the NSX virtual switch). However, rather than define a VLAN, the
overlay segment defines a logical, overlay network that can extend anywhere.
NSX-T also supports traditional networks, called VLAN segments.
Note that segments used to be called logical switches, and you might sometimes still hear this term.
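As a hedged sketch of what creating an overlay segment looks like in practice, the example below uses the NSX-T Policy REST API. The URI follows the standard /policy/api/v1/infra/segments pattern, but the manager address, credentials, transport zone path, and subnet are illustrative placeholders.

import requests

NSX = "https://nsx-manager.example.local"          # placeholder address
s = requests.Session()
s.auth = ("admin", "secret")
s.verify = False                                   # lab use only

# Create (or update) an overlay segment; attaching it to an overlay transport zone
# makes it available to every transport node in that zone
s.patch(f"{NSX}/policy/api/v1/infra/segments/web-front-end", json={
    "display_name": "web_front-end",
    "transport_zone_path": "/infra/sites/default/enforcement-points/default"
                           "/transport-zones/<overlay-tz-id>",
    "subnets": [{"gateway_address": "172.16.10.1/24"}],
})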


Transport zones

Figure 4-8: Transport zones

NSX-T uses transport zones to group segments. An overlay transport zone includes one or more overlay
segments, while a VLAN transport zone contains one or more VLAN segments.
NSX admins assign transport nodes to the transport zone, which makes the segments in that zone
available to those nodes. In this example, a compute ESXi cluster and an edge ESXi cluster have been
assigned to the overlay transport zone, "my overlays." Admins can then connect VMs running on those
clusters to the overlay segments in the transport zone.
A gateway (which consists of DR and SR components) can route traffic between the overlay segments in
the same zone.


Uplink profiles for transport nodes

Figure 4-9: Uplink profiles for transport nodes

NSX-T provides uplink profiles for defining the uplink used for transporting overlay traffic. It is important
for you to understand these settings because you need to coordinate them with the physical network
infrastructure.
The uplink profile defines a transport VLAN ID. The transport node's TEP component uses this VLAN to
communicate with other TEPs. For example, a transport node might use transport VLAN ID 100, and this
VLAN is associated with subnet 10.5.100.0/24 in the data center network. The transport node might
receive IP address 10.5.100.5, and it would send and receive encapsulated traffic for overlay networks
using this address.
To account for encapsulation, the uplink needs a larger MTU. The minimum MTU is 1600, but VMware
recommends at least 1700 to account for future expansions to the Geneve header.
The uplink profile also includes the names of active uplinks and standby uplinks (if any), as well as the
NIC teaming settings. The NIC teaming options are similar to those available for traditional VDSes. An
uplink can be a link aggregation group (LAG), as shown in this example.
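The MTU requirement follows directly from the encapsulation overhead. The rough calculation below adds up the headers carried in front of the inner frame; the Geneve option length is an assumption, which is why the guidance leaves headroom above the 1600-byte minimum.

# Rough underlay MTU needed to carry a Geneve-encapsulated frame (sizes in bytes)
OUTER_IPV4 = 20
OUTER_UDP = 8
GENEVE_BASE = 8
GENEVE_OPTIONS = 16      # assumption; actual option length varies by deployment
INNER_ETHERNET = 14      # the entire inner frame rides as payload

inner_payload = 1500
required_mtu = inner_payload + INNER_ETHERNET + OUTER_IPV4 + OUTER_UDP + GENEVE_BASE + GENEVE_OPTIONS
print(required_mtu)      # 1566, hence the 1600 minimum and the 1700 recommendation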


NSX modes for flooding traffic

Figure 4-10: NSX modes for flooding traffic

NSX-T floods broadcast, unknown unicast, and multicast (BUM) traffic to ensure that all VMs, containers,
and other endpoints in the overlay segment receive it. Each segment can use one of two modes for
flooding.
Two-tier hierarchical mode is the default, and typically recommended, mode. The figure illustrates it.
Transport node 1 receives BUM traffic in overlay segment 110. It replicates the traffic and sends it to
every transport node that:
• Is attached to the same overlay segment
• Is in the same transport subnet as it (10.5.5.0/24 in this example)
In this example, only transport node 2 is in the same subnet, but in the real world, more nodes will
typically reside in the subnet.
Transport node 1 also sends one copy to an arbitrary node in each other subnet used by nodes attached
to this segment. In this example, transport nodes 3 and 4 are in 10.5.6.0/24. Transport node 1 sends the
BUM traffic to transport node 3. Transport node 3 then replicates the traffic and sends it to all the other transport nodes in its transport subnet.
Alternatively, NSX-T can use headend replication mode, in which transport node 1 would send a copy of
the traffic to all other transport nodes attached to the overlay segment. Two-tier hierarchical mode
distributes the burden of replication and tends to reduce rack-to-rack traffic. (Note that NSX-T implements
mechanisms to ensure that MAC forwarding tables are programmed correctly, regardless of the mode.)
Unlike some modes supported in NSX-V, neither NSX-T mode requires the data center network to
support multicast routing.
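To make the replication logic easier to follow, here is a purely conceptual sketch (not NSX-T code) that groups TEPs by transport subnet, using the addresses from the figure, and shows which nodes receive direct copies and which act as proxies for remote subnets.

from collections import defaultdict
from ipaddress import ip_network

# TEP addresses of the transport nodes attached to the overlay segment (as in the figure)
teps = {"node1": "10.5.5.1", "node2": "10.5.5.2", "node3": "10.5.6.1", "node4": "10.5.6.2"}

def replication_targets(source):
    """Show which TEPs receive direct copies and which act as proxies for remote subnets."""
    by_subnet = defaultdict(list)
    for node, addr in teps.items():
        by_subnet[ip_network(f"{addr}/24", strict=False)].append(node)

    src_subnet = ip_network(f"{teps[source]}/24", strict=False)
    direct = [n for n in by_subnet[src_subnet] if n != source]
    proxies = [nodes[0] for subnet, nodes in by_subnet.items() if subnet != src_subnet]
    return {"direct": direct, "proxies": proxies}

print(replication_targets("node1"))
# {'direct': ['node2'], 'proxies': ['node3']}; node3 then replicates to node4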


Example: Original network

Figure 4-11: Example: Original network

You will now look at a simplified example of how NSX-T can alter the network architecture.
In this example, a company has an ESXi cluster called "compute cluster" with a VDS called
"Compute_VDS." Compute_VDS has a port group for "web_front-end" VMs and for "web_app" VMs. It
also has networks for vMotion and management traffic.


Example: Plan for overlay segments

Figure 4-12: Example: Plan for overlay segments

Now the company is deploying NSX-T. NSX-T will enable the company to virtualize the production
networks with the Web front-end and Web app VMs.
The company creates overlay segments for "web_front-end" and "web_app" and places them in an
overlay transport zone. They attach the compute cluster to that zone.
Now the company can remove the VLANs that used to be associated with these networks from the
Compute_VDS uplinks, as well as from the connected physical infrastructure. Instead the uplink carries
VLAN 100, which is the transport VLAN in this example. Even more importantly, admins can add new
overlay segments in the future without having to add corresponding VLANs and subnets in the physical
infrastructure.
Note that it is typically best practice to leave the management and vMotion networks in traditional VLAN-
backed segments. The same holds true for storage networks such as for vSAN or other iSCSI traffic.


Use case 2: Microsegmentation

Figure 4-13: Use case 2: Microsegmentation

You will now learn about how NSX-T fulfills the microsegmentation use case, helping customers to
enhance their control over their virtualized workloads more easily and more flexibly. VMware outlines what it means by micro-segmentation in terms of the capabilities below. Read each section for a summary of the key features.

Topology agnostic
With traditional security solutions, traffic must pass through the firewall to be filtered and the firewall
location determines the extent of security zones. But as workloads become more portable, companies
need more flexibility in creating security zones based on business need, not location. NSX micro-
segmentation deploys an instance of the firewall to each host, enabling companies to implement
topology-agnostic controls.

Centralized control
While firewall functionality is distributed to the ESXi hosts, the firewall is controlled centrally. Admins
create security policies for their distributed services through an API or management platform, and those
policies are implemented everywhere.

Granular control based on high-level policies


NSX micro-segmentation uses a policy-based approach. The distributed firewall can filter traffic at many
levels and based on criteria—such as OS type—beyond the traditional packet-header related criteria.

Network overlay based segmentation


Companies can use network overlays to divide VMs into logical groups based on security policy and
business need.

Policy-driven service insertion


NSX can insert third-party applications into security policies to provide enhanced IDS/IPS and other
security features.


How NSX implements micro-segmentation

Figure 4-14: How NSX implements micro-segmentation

NSX-T includes two types of firewall. The distributed firewall (DFW) empowers microsegmentation for the
complete NSX domain. Defined centrally, the DFW is instantiated on every transport node and filters all traffic that enters and leaves every VM or container. An edge (gateway) firewall is implemented on the edge nodes' Services Router, and it filters traffic between the NSX domain and external networks. The stateful firewalls use rules that should be
familiar to you from other firewall applications. Rules specify the source and destination for traffic, the
service (defined by protocol and possibly TCP or UDP port), a direction, and an action—either allow or
deny. However, NSX-T permits great flexibility in defining the source and destination, making it easy for
admins to group devices together based on the company's security requirements.
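
As an illustration of how such rules might be defined programmatically, the following Python sketch publishes a simple security policy through the NSX-T Policy API, allowing the web front-end group to reach the web app group over HTTPS and dropping everything else destined to it. The manager address, credentials, and group paths are placeholders for this example.

import requests

NSX_MANAGER = "https://nsx-mgr.example.local"   # placeholder address
AUTH = ("admin", "password")

policy = {
    "display_name": "web-tier-policy",
    "category": "Application",
    "rules": [
        {
            "display_name": "allow-frontend-to-app",
            "source_groups": ["/infra/domains/default/groups/web-front-end"],
            "destination_groups": ["/infra/domains/default/groups/web-app"],
            "services": ["/infra/services/HTTPS"],
            "action": "ALLOW",
            "direction": "IN_OUT",
        },
        {
            "display_name": "deny-other-to-app",
            "source_groups": ["ANY"],
            "destination_groups": ["/infra/domains/default/groups/web-app"],
            "services": ["ANY"],
            "action": "DROP",
            "direction": "IN_OUT",
        },
    ],
}

# Publish the policy; the DFW then enforces it on every transport node
resp = requests.put(
    f"{NSX_MANAGER}/policy/api/v1/infra/domains/default/security-policies/web-tier-policy",
    json=policy,
    auth=AUTH,
    verify=False,   # lab/self-signed certificates only
)
resp.raise_for_status()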

Rev. 21.31 150 Confidential – For Training Purposes Only


Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment

Security extensibility

Figure 4-15: Security extensibility

NSX-T provides a platform for bringing the industry’s leading networking and security solutions into the
SDDC. By taking advantage of tight integration with the NSX-T platform, third-party products can not only
deploy automatically as needed, but also adapt dynamically to changing conditions in the data center.
NSX enables two types of integration. With network introspection, a third-party security solution such as
an IDS/IPS registers with NSX-T. A third-party service VM is then deployed on each ESXi host and
connected to the VDS used as the NSX virtual switch. The host then redirects all traffic from vNICs to the
service VM. The service VM filters the traffic, which is then redirected back to the VDS. Examples of
supported next-generation firewalls and IDS/IPSes are listed in the figure above.
The second type of security extensibility is guest introspection. Guest introspection installs a thin third-
party agent directly on the VM, and this agent then takes over monitoring for viruses or vulnerabilities on
the VM. The figure above lists examples of supported solutions in this area.

Rev. 21.31 151 Confidential – For Training Purposes Only


Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment

Use case 3: Network automation with NSX + vRealize

Figure 4-16: Use case 3: Network automation with NSX + vRealize

Companies can deploy vRealize, which fully integrates with NSX-T, to permit orchestrated delivery of
virtualized services, including compute, storage, and networking components, through ordered workflows
and API calls. Companies can create policies to govern how resources are allocated to services to ensure
that applications are matched to the correct service level, based on business priorities. IT can deliver a
private cloud experience, allowing users to obtain their own services through an IT catalog. The vRealize
solution also provides extensibility through an API, allowing customers to integrate the applications of
their own choice and use those applications to dynamically provision workloads. You will learn more
about integrating HPE OneView with vRealize in the next module.

Rev. 21.31 152 Confidential – For Training Purposes Only


Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment

Options for the physical underlay

Figure 4-17: Options for the physical underlay

You have explored the major use cases for NSX-T and understand at a high level how NSX-T provides
software-defined networking and security for your customer's VMware-centric data center.
While NSX-T is meant to be deployed over any underlay, that does not mean that the underlay is
immaterial to the success of the solution. The tunneled traffic still ultimately crosses the underlay network,
and issues there can compromise traffic delivery or network performance. Because different teams
usually manage the virtual and physical networks, no one team has all of the information that they need,
and IT staff can find it difficult to troubleshoot.
In short, the physical data center network matters. In the next section, you will learn how ArubaOS-CX
switches fulfill this role, integrating with and enhancing an NSX solution.
Aruba also provides an SDN solution called Aruba Fabric Composer, which provides tight integration with
VMware and enhanced visibility across physical and virtual networks. See Aruba training for more
information about Aruba Fabric Composer.

Rev. 21.31 153 Confidential – For Training Purposes Only


Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment

NSX + ArubaOS-CX
This section explains how to set up an ArubaOS-CX environment to integrate with NSX.

Rev. 21.31 154 Confidential – For Training Purposes Only


Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment

Design considerations for the physical infrastructure

Figure 4-18: Design considerations for the physical infrastructure

You just need to check a few settings on your ArubaOS-CX switches to ensure that they work well for the
NSX-T environment.
Determine the settings in the uplink profile for each transport node. You will need to match those settings on the ToR switches that connect to those nodes. The ToR switches must support the transport node's transport VLAN ID on the links connected to that node. Typically, these switches will also be the default gateway for that VLAN. Also make sure that the MTU configured on the switch for this VLAN matches the MTU in the uplink profile. The next page explains more.
Also remember VLANs for any non-overlay networks, such as management, vMotion, and storage. The
physical infrastructure will need to be tagged for the correct VLANs.
Also make sure that the link aggregation settings are consistent with the VMware NIC teaming settings, both on the overlay (Geneve) transport network and on other networks. You will generally deploy ToR switches in pairs for
redundancy. You should deploy ArubaOS-CX switches with VSX, which unifies the data plane between
two switches, but leaves the control plane separate. A LAG on a transport node can connect to both
switches in the VSX group. The switches use an M-LAG technology to make this possible.

Rev. 21.31 155 Confidential – For Training Purposes Only


Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment

More details on MTU

Figure 4-19: More details on MTU

You will now look at the MTU requirements in a bit more detail.

Standard Ethernet
The standard Ethernet payload or Maximum Transmission Unit (MTU) is 1500 bytes.
The Ethernet protocol adds a header and a frame check sequence to the payload, so for standard Ethernet II/IEEE 802.3 frames the default maximum frame size is 1518 bytes. An 802.1Q VLAN tag adds 4 bytes to the Ethernet frame header, raising the maximum tagged frame size to 1522 bytes.

Jumbo frames
Ethernet frames between 1500 and 1600 bytes are called baby giant (or baby jumbo) frames, and Ethernet frames up to 9216 bytes are called jumbo frames.
Jumbo frames can cause problems in the underlay network because every component in the path, end to end, must support them. That means careful planning and careful implementation. In other words, you must increase the MTU on the ToR switches that connect to transport nodes and on all network infrastructure devices in between.

Advantages of jumbo frames


Jumbo frames can be more efficient than smaller frames because they need only one header to transport a larger payload.
In theory, compared to a 1500-byte MTU, a 9000-byte MTU can carry six times as much data in the same number of frames, which means less frame handling. For example, a 9.78 Gbps transfer at an 8900-byte MTU generates roughly the same number of frames per second as a 1.65 Gbps transfer at a 1500-byte MTU.
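
The following short Python sketch illustrates the arithmetic behind this comparison; the throughput figures are the ones used above.

def frames_per_second(throughput_gbps, payload_bytes):
    """Approximate frames per second needed to carry a given payload throughput."""
    return throughput_gbps * 1e9 / (payload_bytes * 8)

print(frames_per_second(1.65, 1500))   # ~137,500 frames/s at a 1500-byte MTU
print(frames_per_second(9.78, 8900))   # ~137,400 frames/s at an 8900-byte MTU

# Geneve adds roughly 50 or more bytes of headers, which is why the underlay MTU
# for NSX-T overlay traffic is commonly set to at least 1600 (or 9000 in jumbo-frame designs).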

Disadvantages of jumbo frames


On the other hand, jumbo frames can also introduce issues.
When a jumbo frame is lost, more data is lost than with a regular frame. Therefore, in unreliable networks,
jumbo frames can be counterproductive.
Remember that encapsulation adds extra bytes to each frame, and applications might also increase the size of their payloads. For instance, Geneve can add 50 or more bytes of headers. Adding that overhead to an already enlarged payload can push traffic over the MTU, so the payload must be split across two frames, which makes the transport very inefficient. Some network components might even drop frames that are too large, which would result in no communication at all.

Rev. 21.31 157 Confidential – For Training Purposes Only


Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment

Introducing Aruba NetEdit

Figure 4-20: Introducing Aruba NetEdit

You will now look at ways that Aruba, a Hewlett Packard Enterprise company, makes managing the physical infrastructure simpler and more automated.
Network operators are often slowed down as they make configurations because they do not have all the
relevant information at their fingertips. For example, they might not know the IP address of a server or
what address is available on the management network for a new switch. And even expert operators can
make mistakes, which can cause serious repercussions for the network. Fully 74% of companies report
that configuration errors cause problems more than once a year.
For ArubaOS-CX switches, Aruba offers NetEdit, which provides orchestration through a familiar CLI-style editor. It gives operators the intelligent assistance and continuous validation they need to ensure that device configurations are consistent, compliant, and error free. IT operators edit configurations in much the way they are used to, working within a CLI, so no scripting knowledge or retraining is necessary.
However, they create the configuration safely in advance with all the help tools they need. They can
search through multiple configurations and quickly find information such as the IP addresses that other
switches are using. They can also tag devices based on role or location. The editor also provides
validation so that a simple error does not get in the way of the successful application of a configuration.
Admins can then deploy the configuration with confidence. An audit trail helps admins easily track
changes for simpler change management and troubleshooting.

Rev. 21.31 158 Confidential – For Training Purposes Only


Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment

Aruba NetEdit features


Read about each point on the time line to see how NetEdit can help admins as they go about completing
some common tasks.

Conformance

Figure 4-21: Aruba NetEdit features

With NetEdit, admins can:


• Customize rules for each network
• Monitor whether devices follow the company policy
• Audit changes against the policy

Planning and orchestration

Figure 4-22: Aruba NetEdit features

• Ability to view and edit multiple configurations at the same time


• Contextual insights
• Command completion
• Syntax checking

Rev. 21.31 159 Confidential – For Training Purposes Only


Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment

Change validation

Figure 4-23: Aruba NetEdit features

• Quickly verify that changes had the desired effect on the network
• Verify connectivity of devices
• Check that third-party devices did not lose connectivity

Rev. 21.31 160 Confidential – For Training Purposes Only


Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment

NetEdit value summary

Figure 4-24: NetEdit value summary

Beyond making life easier for operators, NetEdit delivers key business benefits to your customers. Read
the sections below to learn more.

Simpler device orchestration


Configuration changes can be pushed to multiple devices at the same time without any knowledge of
APIs.

Improved configuration consistency


Configurations can be easily unified using conformance rules.

Reduced outage window


The ability to deploy and validate configurations on multiple devices at once reduces outage windows.

Better change management


Large configuration changes can be easily rolled back with the click of a button.

Rev. 21.31 161 Confidential – For Training Purposes Only


Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment

Interoperability with third-party Cisco ACI


This section teaches you a little about Cisco Application Centric Infrastructure (or ACI), which is Cisco's
SDN solution. The knowledge will help you to attach your software-defined compute and storage solution
to a Cisco network, if necessary.

Rev. 21.31 162 Confidential – For Training Purposes Only


Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment

Cisco ACI

Figure 4-25: Cisco ACI

While the solutions covered earlier are the preferred SDN solutions for HPE SDDCs, some customers
have Cisco entrenched as their data center networking solution. If you cannot dislodge Cisco in the
network, you can still win the compute and storage components of the SDDC and integrate them with
Cisco.
In Cisco ACI, Cisco Nexus 9000 series switches, deployed in a leaf-spine topology, provide the data plane. They also provide the control plane, using a routed IP underlay and VXLAN as the overlay protocol. However, management of the 9000 switches is completely taken over by Application
Policy Infrastructure Controllers (APICs). The APICs manage all aspects of the fabric. Instead of
configuring OSPF, VXLAN, VLANs, and other features manually, admins configure policies about how
they want to group endpoints and handle their traffic. The APICs then configure the underlying protocols
as required to implement desired functions.
For customers with VMware-centric environments, APICs can integrate with VMware.

Rev. 21.31 163 Confidential – For Training Purposes Only


Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment

Endpoint Groups (EPGs) and other key ACI components

Figure 4-26: Endpoint Groups (EPGs) and other key ACI components

In Cisco ACI, the endpoint group (or EPG) serves as the fundamental block for controlling endpoint
communications. It can act like a VLAN, VXLAN, or subnet; however, it is not exactly any of those things.
This map shows the components that relate to EPGs in the ACI policy universe. To learn more about
some of the key components, read about them below.
This is, of course, just a brief introduction to ACI. If you need more information, refer to Cisco
documentation.

Endpoint Group (EPG)


The EPG defines endpoints that should be treated in the same way with regard to security and Quality of Service (QoS). Endpoints in the EPG can communicate at Layer 2, and an EPG is often associated with a subnet and VLAN. However, there is not a one-to-one correspondence between an EPG and a subnet.

Domain profile
The EPG is associated with one or more domain profiles. An access policy applies domain profiles to leaf
edge ports to control how traffic from endpoints is assigned to EPGs. The domain profile includes VLAN
instance profiles, which differ depending on its type. A physical domain might specify a specific VLAN ID
for the EPG. A VMM domain has a dynamic VLAN pool, which, through VMware integration, is presented
in VMware for configuration on VM networks.

Attachable Entity Profile (AEP)


A domain profile's VLAN instance policy can include one or more AEPs. The AEP defines interface
settings, including permitted VLANs. It indicates to which leaf switch ports the settings are applied based
on switch and interface profiles.
With VMware integration, VM state changes can dynamically add and remove VLANs from ports.

Access policy
An access policy can include multiple domain profiles.

Rev. 21.31 164 Confidential – For Training Purposes Only


Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment

Application profile
The application profile helps admins to group workloads by application. The profile can be associated with
one or more EPGs, as well as with AEPs.
For traffic to flow, the leaf port needs an AEP that permits VLANs, and an appropriate EPG needs to be
applied to the port. Application profiles can help to correlate the two. When an AEP is applied to a port, the EPG in its application policy is automatically applied as well.

Bridge domain
The bridge domain defines the Layer 2 boundary for communications. It is associated with one (or more)
subnets within a virtual routing and forwarding (VRF) instance. (The VRF enables the establishment of
completely separate routing domains.)
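
For orientation only, the following Python sketch shows how an EPG might be created programmatically through the APIC REST API using the objects described above. The APIC address, credentials, tenant, application profile, bridge domain, and VMM domain names are all placeholders; consult Cisco documentation for the authoritative object model.

import requests

APIC = "https://apic.example.local"   # placeholder address
session = requests.Session()

# Authenticate; the APIC returns a session cookie used by later calls
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login, verify=False).raise_for_status()

# Create (or update) an EPG named "web" under an existing tenant and application profile
epg = {
    "fvAEPg": {
        "attributes": {"name": "web"},
        "children": [
            # Bind the EPG to its bridge domain
            {"fvRsBd": {"attributes": {"tnFvBDName": "web-bd"}}},
            # Associate the EPG with a VMM domain so a matching port group appears in vCenter
            {"fvRsDomAtt": {"attributes": {"tDn": "uni/vmmp-VMware/dom-vc-domain"}}},
        ],
    }
}
resp = session.post(f"{APIC}/api/mo/uni/tn-Tenant1/ap-web-app.json", json=epg, verify=False)
resp.raise_for_status()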

Rev. 21.31 165 Confidential – For Training Purposes Only


Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment

Activity 4

Figure 4-27: Activity 4

After moving to Synergy, Financial Services 1A's ESXi hosts are using the plan shown in this figure for
networking. The box on the left is a single ESXi host compute module, but the plan is the same for all of
the hosts in the clusters that you are examining.
The pairs of FlexNICs that connect to the Mgmt and vMotion VDSes each support a single network with
the same name as the VDS. The pair of FlexNICs that connect to the Prod VDS supports a Network Set
with multiple production networks.
Now the customer wants to implement NSX-T and move the production VLANs to overlay segments for
greater flexibility in extending clusters across multiple Synergy racks.
What are some of the considerations for integrating NSX-T with the Synergy networking? Consider
questions such as:
• How will the connections and networks on compute modules need to change?
• What settings will you need to check and synchronize with the switches at the top of rack?

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

Rev. 21.31 167 Confidential – For Training Purposes Only


Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment

Summary

Figure 4-28: Summary

In the module, you learned about NSX-T and how its overlay capabilities make data center networks
more flexible and aligned with virtualized workload requirements. You also learned how to use ArubaOS-
CX switches as the physical underlay and how to use Aruba NetEdit to automate management.

Rev. 21.31 168 Confidential – For Training Purposes Only


Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment

Learning checks
1. What benefit do overlay segments provide to companies?
a. They provide encryption to enhance security.
b. They provide admission controls on connected VMs.
c. They enhance performance, particularly for demanding and data-driven workloads.
d. They enable companies to place VMs in the same network regardless of the
underlying architecture.
2. What is one way that NetEdit helps to provide orchestration for ArubaOS-CX switches?
a. It provides the API documentation and helps developers easily create scripts to
monitor and manage the switches.
b. It lets admins view and configure multiple switches at once and makes switch
configurations easily searchable.
c. It integrates the ArubaOS-CX switches into HPE IMC and creates a single pane of
glass management environment.
d. It virtualizes the switch functionality and enables the switches to integrate with
VMware NSX.
You can check the correct answers in “Appendix: Answers.”

Rev. 21.31 169 Confidential – For Training Purposes Only


Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment

Appendix: Review VMware networking


This appendix covers some foundational VMware networking concepts, which might be helpful if you are
not familiar with traditional VMware networking.

Rev. 21.31 170 Confidential – For Training Purposes Only


Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment

Standard switch (vSwitch)

Figure 4-29: Standard switch (vSwitch)

You will start with a standard virtual switch (or vSwitch), which is deployed on a single ESXi host. A
vSwitch is responsible for connecting VMs to each other and to the data center LAN. When you define a
vSwitch on an ESXi host, you can associate one or more physical NICs with that switch. The vSwitch
owns those NICs—no other vSwitch is allowed to send or receive traffic on them. You should define a
new vSwitch for every set of NICs that you want to devote to a specific purpose. For example, if you want
to use a pair of NICs for traffic associated with one tenant's VMs and a different pair of NICs for another
tenant's VMs, you should define two vSwitches. However, if you want the tenants to share physical NICs,
you should connect them to the same vSwitch using port groups to separate them.
In the vSphere client, adding a port group is called adding a network of the VM type. The port group
defines settings such as the NIC teaming policy, which determines how traffic is distributed over multiple
physical NICs associated with the vSwitch, and the VLAN assignment—more on that later. The port group
controls traffic shaping settings and other features such as promiscuous mode.
When you deploy a VM, you can add one or more vNICs to the VM, and connect each vNIC to a port
group. Each vNIC connects to a virtual port on exactly one port group on one vSwitch.
The figure above shows how the vCenter client presents the vSwitch and connected components.

Rev. 21.31 171 Confidential – For Training Purposes Only


Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment

How vSwitch forwards traffic

Figure 4-30: How vSwitch forwards traffic

Like a physical Ethernet switch, a vSwitch creates a MAC forwarding table that maps each MAC address
to the port that should receive traffic destined to that address. However, the vSwitch does not build up the
MAC table by learning MAC addresses from traffic. Instead the hypervisor already knows the VMs' MAC
addresses. The vSwitch forwards any traffic not destined to a virtual NIC MAC address out its physical
NICs.
The vSwitch also knows, based on the hypervisor, for which multicast groups VMs are listening. It
replicates and forwards multicasts to the correct VMs accordingly. (In vSphere 6 and above, you can
enable multicast filtering, which includes IGMP snooping, to ensure that the vSwitch always assesses the
multicast group memberships correctly). The vSwitch does flood broadcasts.
The way that vSwitches handle unicasts and multicasts ensures better security. Because the switch does
not need to flood unicasts to unknown destinations, it does not ever need to forward traffic destined to
one VM's MAC address to another VM. And it helps to prevent reconnaissance and eavesdropping
attacks in which a hacker overloads the MAC table and forces a switch to flood all packets out all ports.
The figure above provides an example of the traffic flow. Assume that VMs' ARP tables are already
populated. Now VM 1 sends traffic to VM 2's IP address and MAC address. The vSwitch forwards the
traffic to VM 2, based on the MAC forwarding table. When VM 3 sends traffic to a device at 10.2.20.15,
which is in a different subnet, VM 3 uses its default gateway MAC address as the destination. The default
gateway is not on this host, so the vSwitch forwards this traffic out its physical NIC.

Rev. 21.31 172 Confidential – For Training Purposes Only


Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment

VMkernel adapters

Figure 4-31: VMkernel adapters

You can create a second type of network connection on an ESXi host—a VMkernel adapter. The
VMkernel adapter is somewhat analogous to a port group. However, instead of connecting to VMs and
carrying their traffic, it carries traffic for the hypervisor. A VMkernel adapter can carry all the types of traffic
that you see in the figure above. When you create the adapter, you choose the function for which the
adapter carries traffic. In the figure above, you are creating a VMkernel adapter for the ESXi host's
management connection. You also give the adapter an IP address.
The figure above shows how VMware shows the settings after you have created the VMkernel and
connected it to a switch.
You can make the same VMkernel port carry multiple types of this traffic—you simply select multiple
types when you create the adapter. However, some functions, such as vMotion, should have a dedicated
adapter with its own IP address. In the past, admins preferred to dedicate a pair of 1GbE interfaces to
each VMkernel adapter. With 10GbE to the server edge so common now, though, you might connect
multiple VMkernel adapters to the same switch and consolidate traffic.
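
As a rough illustration of these steps in code, the following Python sketch (using the pyVmomi SDK) creates a vMotion port group on a standard vSwitch and adds a VMkernel adapter to it. The vCenter address, host name, VLAN, and IP settings are placeholders, and real code would include proper certificate handling and error checking.

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

# Connect to vCenter (lab-style certificate handling; names are placeholders)
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

# Locate the ESXi host object
content = si.RetrieveContent()
host = content.searchIndex.FindByDnsName(dnsName="esxi01.example.local", vmSearch=False)
net_sys = host.configManager.networkSystem

# Create a port group for vMotion on an existing standard vSwitch
pg_spec = vim.host.PortGroup.Specification(
    name="vMotion", vlanId=20, vswitchName="vSwitch1", policy=vim.host.NetworkPolicy())
net_sys.AddPortGroup(portgrp=pg_spec)

# Add a VMkernel adapter on that port group with its own IP address
nic_spec = vim.host.VirtualNic.Specification(
    ip=vim.host.IpConfig(dhcp=False, ipAddress="10.0.20.11", subnetMask="255.255.255.0"))
vmk = net_sys.AddVirtualNic(portgroup="vMotion", nic=nic_spec)

# Tag the new adapter for vMotion traffic
host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", vmk)

Disconnect(si)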

Rev. 21.31 173 Confidential – For Training Purposes Only


Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment

Implementing VLANs

Figure 4-32: Implementing VLANs

VMware vSwitches define a VLAN for each port group and VMkernel adapter. Like a physical switch, the vSwitch enforces VLAN boundaries, only forwarding traffic between ports in the same VLAN. A vSwitch can take one of three approaches to defining the VLAN for a port group or VMkernel adapter. Read each section to learn more about each approach; a short scripted example follows the list.

Virtual switch tagging (VST)


• VLAN ID: Any ID between 1 and 4094
• Device that determines network's VLAN assignment: vSwitch (no awareness on VMs)
• Where traffic is tagged: Between vSwitch and physical switch
• Typical use: Permitting VMs in multiple subnets, and even VMkernel adapters, to share the same physical NICs with logical separation

External switch tagging (EST)


• VLAN ID: 0
• Device that determines network's VLAN assignment: Physical switch, based on the physical port's
native (untagged) VLAN
• Typical use: A vSwitch that supports a single network such as a vSwitch dedicated to the
management VMkernel adapter

Virtual guest tagging (VGT)


• VLAN ID: 4095
• Device that determines network's VLAN assignment: VMs (and physical switches)
• Where traffic is tagged: All the way between VM and physical switch
• Typical use: A network with VMs that must support multiple VLANs on a single vNIC (802.1Q support
in guest OS required)
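
As a small illustration, the following Python sketch (reusing the pyVmomi networkSystem handle from the earlier VMkernel example) shows how the three tagging modes differ only in the VLAN ID assigned to the port group; the port group and vSwitch names are placeholders.

from pyVmomi import vim

def add_port_group(net_sys, name, vswitch, vlan_id):
    """vlan_id 0 = EST (untagged), 1-4094 = VST, 4095 = VGT (trunk passed to the guest)."""
    spec = vim.host.PortGroup.Specification(
        name=name, vlanId=vlan_id, vswitchName=vswitch, policy=vim.host.NetworkPolicy())
    net_sys.AddPortGroup(portgrp=spec)

# Examples of the three modes on the same vSwitch (uncomment with a real net_sys handle)
# add_port_group(net_sys, "Mgmt-EST", "vSwitch0", 0)        # external switch tagging
# add_port_group(net_sys, "Prod-VLAN110", "vSwitch0", 110)  # virtual switch tagging
# add_port_group(net_sys, "Trunk-VGT", "vSwitch0", 4095)    # virtual guest tagging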

Rev. 21.31 174 Confidential – For Training Purposes Only


Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment

vSphere distributed switch (VDS)

Figure 4-33: vSphere distributed switch (VDS)

For deployments with many hosts and clusters, defining standard vSwitches individually on each is
tedious and error prone. If an admin forgets to define a network on one host, moving a VM that requires
that network to that host will fail. A vSphere distributed switch (VDS) provides a centralized way to
manage network connections, simplifying administrators’ duties and reducing these risks. The
management plane for the VDS resides centrally on vCenter. There you create distributed port groups,
which include the familiar VLAN and NIC teaming policies. You also define a number of uplinks based on
the maximum number of physical NICs that a host should dedicate to this VDS.
You deploy the VDS to hosts, each of which replicates the VDS in its hypervisor. The individual instances
of the VDS hold the data and control plane and perform the actual switching. When you associate a host
to the VDS, you must associate a physical NIC with at least one uplink. Each uplink can be associated
with only one NIC, but if the VDS has additional uplinks defined, you can associate other physical NICs
with them. The multiple NICs act as a team much as they do on an individual virtual switch, using the
settings selected on the VDS centrally.
The VDS’s distributed port groups are available on the hosts for attaching VMs or VMkernel adapters.
Note that for VDSes, the VMkernel adapter attaches to a distributed port group, rather than directly to the
switch.

Rev. 21.31 175 Confidential – For Training Purposes Only


Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment


Rev. 21.31 176 Confidential – For Training Purposes Only


Use Orchestration and Configuration
Management to Deploy and Manage the
HPE SDI Solution
Module 5

Learning objectives
In this module, you will explore orchestration tools for software-defined data centers (SDDCs). You will
first look at HPE OneView integrations with VMware vSphere. You will then consider the integration
between HPE InfoSight and VMware. Finally, you will review scripting and automation tools that integrate
with HPE OneView.
After completing this module, you will be able to:
• Provision and deploy an HPE SDI solution using orchestration tools
• Manage and monitor an HPE SDI solution
• Demonstrate an understanding of the HPE integrations for given automation tools and scripting tools
• Explain the benefits of HPE DEV resources

Rev. 21.31 | © Copyright 2021 Hewlett Packard Enterprise Development LP | Confidential – For Training Purposes Only
Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

Customer scenario: Financial Services 1A

Figure 5-1: Customer scenario: Financial Services 1A

Financial Services 1A has invested in a highly virtualized data center and taken steps to transform compute, storage, and networking with software-defined technologies. But the company still needs help bringing all of the components together. IT knows that it needs to respond to line of business (LOB) requests more quickly; for example, it would like to be able to deploy VMs faster. IT would also like to detect and resolve issues before they cause outages.

Rev. 21.31 178 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

Automation

Figure 5-2: Automation

Ultimately, Financial Services 1A is looking for automation and orchestration solutions.

Automation means creating a single task that can run on its own. Automated tasks can be combined to create a sequence. Automation works in one area, in other words a single domain (for instance, automatically checking email, installing an operating system on a server, or an automated welding machine in a car factory).

Creating an automated process takes time and money, but the benefit is that you only have to do it once; after successful testing, the automated process can be reused many times.

Automation can:

• Increase productivity—Once created and tested, the automated task can run repeatedly
• Increase quality and consistency—Automation ensures that tasks are performed identically, which results in consistent results of high quality. Consistency also means that tasks are performed in ways that comply with corporate governance or legislation.
• Decrease turnaround times—By creating an optimal workflow and eliminating unnecessary tasks, IT admins can complete tasks more quickly. They can also meet performance goals within tighter budget constraints.

Rev. 21.31 179 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

Orchestration

Figure 5-3: Orchestration

Orchestration starts with automation, but it takes the concept a step further. Orchestration is creating a
workflow of automated tasks to arrange, coordinate and manage IT resources.

Where automation works on a single domain, orchestration works on multiple domains. It can work on
the hardware, the middleware and the services that are needed on top of the infrastructure. The
orchestration tool coordinates all the tasks, like a conductor leading an orchestra.

As an example, an orchestration tool could provide a web portal for end users. When an end user needs an IT resource, they can make a request in the portal. Once the request is approved (approval can also be an automated task), the orchestration tool starts the automated provisioning of hardware. When the hardware is ready, the automated installation of the OS and software services can be scheduled.

Rev. 21.31 180 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

HPE OneView integration with VMware vSphere and vRealize


In this topic, you will learn how HPE and VMware have worked together to integrate their products to
provide the automation and orchestration customers are looking for.

Rev. 21.31 181 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

HPE OneView integration with VMware tools

Figure 5-4: HPE Plug-ins simplify management for vSphere admins

Customers can achieve a true software-defined data center (SDDC) by taking advantage of extensive
HPE OneView integrations with the VMware solutions. With OneView integration, VMware admins can
continue to use the VMware interfaces with which they are familiar but gain access to HPE’s deep
management ecosystem. The single-console access simplifies administration. IT can further reduce their
efforts by automating responses to hardware events. Customers can take control of the software-defined
data center (SDDC) by launching trusted HPE management tools directly from vCenter and proactively
managing changes with detailed relationship dashboards that extend across the virtual and physical
infrastructure. By automating hardware and virtualization together, IT can deliver on-demand server and
storage capacity.
Customers can also achieve a more stable and reliable environment with automation that enables online
firmware updates and workload deployment. They can also integrate information collected by OneView
into VMware vRealize Operations, Orchestrator, and Log Insight for deep analytics, automation, and
troubleshooting.

Rev. 21.31 182 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

HPE Plug-ins simplify management for vSphere admins

Figure 5-5: HPE Plug-ins simplify management for vSphere admins

HPE provides several plug-ins for VMware integration. You will learn more about these plug-ins in this
module.

Rev. 21.31 183 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

HPE OV4VC benefits

Figure 5-6: HPE OneView for VMware vCenter (OV4VC)

HPE OneView for vCenter (OV4VC) brings the power of HPE OneView to VMware environments. The
sections below summarize the key benefits.

Simplify operations and increase productivity


OV4VC simplifies management by integrating the physical and virtual infrastructure. It provides features
such as:
• Comprehensive health monitoring and alerting
• Firmware and driver updates

Deploy faster
OV4VC simplifies on-demand provisioning. Template-based tools let customers:
• Leverage the HPE OneView automation engine
• Quickly and easily create or expand a VMware cluster

Impose configuration consistency


OV4VC integrates directly into VMware consoles to give admins a consistent experience. Admins can use
familiar VMware tools for HPE management tasks. They can also launch HPE tools directly from the
vCenter console.

Increase visibility into environment


OV4VC provides customers with non-disruptive insight into the complete (virtual and physical) environment.

Rev. 21.31 184 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

HPE OV4VC: Server only integration

Figure 5-7: HPE OneView for VMware vCenter (OV4VC): Separation of server and storage integration

HPE OV4VC 9.6 and below support both server and storage integration in vCenter. With HPE OV4VC 10, however, the plug-in supports servers only.
You can download OV4VC from the HPE Software Depot.
Storage integration is provided in the HPE Storage Integration Pack for VMware. You will learn more about this plug-in later in this course.
When upgrading from OV4VC 9.6, be aware that version 9.x backups cannot be restored using version 10.x.

Rev. 21.31 185 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

HPE OV4VC licensing and managed devices

Figure 5-8: OV4VC Licensing and Managed Devices

You deploy HPE OV4VC as a VM. The OV4VC VM must have access to vCenter and OneView, and you
must register it with vCenter. All vCenter clients connected to this vCenter Server can then access the
OV4VC views and features.

Licensing
OV4VC can be licensed with OneView standard or advanced licenses:
• Standard—Supports basic health and inventory features
• Advanced—Supports advanced features such as server profiles.
HPE Synergy includes the Advanced license, so no additional license is required when using HPE
Synergy.

Managed devices
With OV4VC 9.4 and above, all servers, enclosures, and Virtual Connect devices must be managed by
HPE OneView. OV4VC will report an error when trying to manage non-OneView managed devices. If
companies want to use OV4VC to manage devices that are not managed by OneView, they can use
OV4VC 9.3 (rather than upgrading to a later version).
As of the release of this course, supported devices include:
• HPE ProLiant BladeSystem c-Class
• HPE ProLiant 100, 300, 500, 700, or 900 series ML or DL servers
• HPE Synergy D3940 Storage Module
• HPE Synergy 12Gb SAS Connection Module
• HPE Synergy Server

Rev. 21.31 186 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

HPE OneView Hardware Support Manager for VMware vLCM

Figure 5-9: HPE OneView Hardware Support Manager for VMware vLCM

HPE OV4VC 10.1 and above include an additional plug-in: HPE OneView Hardware Support Manager for
VMware vLCM. As the name suggests, this plug-in integrates with vLCM, providing one-click lifecycle
management for ESXi, HPE drivers and firmware, directly in the vSphere user interface. With the
OneView Hardware Support Manager plug-in, IT admins can:
• Set baselines for images and firmware versions
• Automatically check and validate that components meet the baseline
• Update components that do not comply
This plug-in supports any HPE Gen10 server certified for ESXi 7.0 and HPE OneView. In addition, one HPE OV4VC instance supports multiple vCenter/OneView instances and an external HPE firmware repository.

Rev. 21.31 187 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

HPE OV4VC features


Below is a list of OV4VC features:
• Views
– Server Hardware Detail
– Server Firmware Inventory
– Server Monitoring and Alerts
– Server Port Reporting
• Launch links to HPE tools
• Enhanced Link Mode
– Network Diagram
– Enclosure Summary
• Cluster Consistency Check
• Cluster Remediation (including firmware updates via OV SPT)
• SBAC and OV domain accounts
• Proactive HA
• Grow Cluster
Only the basic monitoring and inventory features are available with a standard HPE OneView license.
Make sure that devices have advanced licenses (or use Synergy) to obtain the full features.
The next part of this section helps you dig deeper into the benefits of some of the key features.

Rev. 21.31 188 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

HPE OV4VC views


HPE OV4VC provides links to iLO interfaces on HPE servers, enclosures' onboard administrators, and
OneView. It also adds a broad spectrum of information about the infrastructure directly to vCenter in an
HPE Server Hardware tab. When admins select the tab, they can choose from several views. Read the
sections below to see some of the information contained in each view.

Hardware overview

Figure 5-10: Hardware overview

The OneView Hardware views display detailed information about server processors, memory, and
physical adapters.

Firmware

Figure 5-11: Firmware

This view shows the firmware version installed on every server component.

Rev. 21.31 189 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

Ports

Figure 5-12: Ports

This view lists network adapters and helps admins correlate physical and virtual settings.

Network diagram

Figure 5-13: Network diagram

The network diagram helps admins set up and troubleshoot networking with a complete view of
connections between virtual switches, server adapters, Virtual Connect modules, and uplinks.

Rev. 21.31 190 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

Enclosures

Figure 5-14: Enclosures

This view shows information about the enclosure in which the server is installed.

Remote support

Figure 5-15: Remote support

Customers can use this view to check the status of the server's support services, including the:
• Warranty expiration date for Server Hardware and Enclosures
• Remote Support contract type and status
HPE OV4VC also shows:
• Support/contract about to expire
• Support/contract already expired
The remote support page provides information about the Remote Support status. IT admins can use it to
create or manage a Remote Support case.

Rev. 21.31 191 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

HPE OV4VC features: Cluster imports

Figure 5-16: HPE OV4VC benefits: Cluster imports

When the infrastructure that underlies VMware consists of an HPE Synergy or HPE BladeSystem solution that uses Virtual Connect (VC) modules, customers can import VMware clusters into OneView. Admins can then implement cluster-aware maintenance on the clusters from OneView. Integrating management within OneView enables admins to automate tasks that would otherwise require hopping between tools. For example, admins can choose to grow a cluster, and OneView handles all the steps, from deploying the OS to adding the host to the cluster. Similarly, admins can use OneView cluster management to shrink a cluster, check cluster members' consistency with the server profile template, and apply cluster-aware firmware updates.

Rev. 21.31 192 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

HPE OV4VC features: Grow a cluster

Figure 5-17: HPE OV4VC benefits: Grow a cluster

HPE OV4VC makes it simple for admins to expand a cluster.

Process initiation
IT admins initiate the grow cluster process in the Grow Cluster wizard. The cluster is associated with a
server profile template and OS deployment plan, which already define many required settings, including
OS build plans, that are stored and managed on OV4VC itself.
IT admins simply need to indicate the cluster, the new hardware, and the networking settings for the new
host. The networking settings can include vDS settings for particular functions such as management, FT,
and vMotion. They can also configure multi-NIC vMotion.

Server and OS deployment


HPE OneView configures the server settings, and then OV4VC installs the ESXi OS on the server.
Deployment takes about 30 minutes, and OV4VC can run 8 concurrent deployments.

Addition to the cluster


OneView uses its VMware integration to automatically add the new ESXi host to the proper cluster.
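
The OV4VC wizard drives these steps for you, but the same building blocks are exposed by the HPE OneView REST API. The following Python sketch outlines how a profile could be generated from a server profile template and applied to a compute module; the appliance address, template name, and hardware URI are placeholders, and the API version should match your appliance.

import requests

ONEVIEW = "https://oneview.example.local"        # placeholder address
API_VERSION = {"X-Api-Version": "2000"}          # adjust to the appliance's supported version

# Authenticate and capture the session token
auth = requests.post(f"{ONEVIEW}/rest/login-sessions", verify=False, headers=API_VERSION,
                     json={"userName": "administrator", "password": "password"})
auth.raise_for_status()
headers = {**API_VERSION, "Auth": auth.json()["sessionID"]}

# Find the server profile template used by the ESXi cluster (placeholder name)
spt = requests.get(f"{ONEVIEW}/rest/server-profile-templates?filter=\"name='ESXi-Cluster-SPT'\"",
                   headers=headers, verify=False).json()["members"][0]

# Ask OneView for a new profile pre-populated from the template, then assign hardware
profile = requests.get(f"{ONEVIEW}{spt['uri']}/new-profile", headers=headers, verify=False).json()
profile["name"] = "esxi-host-04"
profile["serverHardwareUri"] = "/rest/server-hardware/<bay-uri>"   # placeholder hardware URI

# Creating the profile applies BIOS, firmware, and connection settings to the compute module
requests.post(f"{ONEVIEW}/rest/server-profiles", headers=headers,
              json=profile, verify=False).raise_for_status()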

Rev. 21.31 193 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

HPE OV4VC features: Consistency checks and non-disruptive


firmware upgrades
HPE OV4VC helps workloads stay online with non-disruptive firmware upgrades for the cluster. Look
through each step to see the simple process.

Firmware baseline
Admins can choose the new firmware baseline in the HPE Server Hardware tab on vCenter—no need to
jump to OneView to make edits there.

Figure 5-18: HPE OV4VC benefits: Host and cluster consistency check and remediation

Consistency check

Figure 5-19: Consistency check

Admins can easily determine which hosts are not on the new firmware by running a consistency check against the selected server profile template. HPE OV4VC also supports clusters not managed by HPE OneView cluster profiles, but automated remediation is not available for these clusters.

Rev. 21.31 194 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

Upgrade through remediation


Upgrading the hosts is as simple as choosing to remediate the deviation from the SPT. OV4VC upgrades
each host in a cluster one at a time, first moving that host's VMs to another host to avoid disruption.

Rev. 21.31 195 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

HPE OV4VC features: Proactive HA

Figure 5-20: HPE OV4VC benefits: Proactive HA

HPE OV4VC enhances VMware's HA capabilities to prevent downtime. When selected as a partial failure
provider in the cluster's HA settings, OV4VC monitors hosts' health and notifies vCenter of impending
issues on a host. Admins can choose from a broad range of failure conditions for OV4VC to monitor,
including issues with memory, storage, networking adapters, fans, and power. When OV4VC informs
vCenter of an issue, the cluster can then move VMs to other hosts or take another remediation action, as
specified in the cluster HA settings. In this way, workloads move to a fully operational host before a
hardware issue causes an outage.

Rev. 21.31 196 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

HPE Storage Integration Pack for VMware vCenter: Benefits

Figure 5-21: HPE Storage Integration Pack for VMware vCenter

As mentioned earlier, HPE OV4VC 10 supports only server integration. Storage integration is provided in
the HPE Storage Integration Pack for VMware. This plug-in provides context-aware information about
HPE storage solutions and integrates the virtual and physical infrastructure.

Rev. 21.31 197 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

HPE Storage Integration Pack for VMware vCenter: Configuration and


management

Figure 5-22: HPE Storage Integration Pack for VMware vCenter

With HPE Storage Integration Pack for VMware vCenter, admins can access context-aware information
about HPE Storage within vCenter. They can view storage information such as:
• Heath status
• Storage volumes and paths
• Performance details
• Alerts
They can also provision their HPE storage, completing tasks such as:
• Create, delete, and expand datastores
• Create VMs from a template
• Switch primary and standby roles for Peer Persistence
Admins can perform operations, such as:
• Set up quality of service policies on VMFS datastores
• Restore snapshots
• Check configurations to ensure they meet best practices

Rev. 21.31 198 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

VMware vRealize Suite

Figure 5-23: VMware vRealize Suite

You now understand how HPE OneView enhances what customers can do with vSphere. You will next
examine VMware vRealize Suite and then HPE plugins for it.
As an optional add-on for VCF, the VMware vRealize Suite transforms the SDDC into a true private cloud.
It enhances the intelligence of operations across the SDDC. Users can now obtain services through easy-
to-use catalogs. And cloud costing capabilities enable customers to track and optimize utilization across a
multi-cloud environment.
Read the sections below to learn more about the vRealize solutions that make all of this possible.

vRealize Suite Lifecycle Manager


vRealize Suite Lifecycle Manager helps customers to deploy and manage the components of the suite. It
automates all aspects of the suite's lifecycle, including deployment, configuration, and upgrades.

vRealize Log Insight


vRealize Log Insight provides admins with real-time visibility across the SDDC. Dashboards and analytics
help admins monitor and troubleshoot more effectively. Third-party extensibility enables admins to
achieve an integrated view of many facets of the SDDC.

vRealize Operations (vROPS)


vRealize Operations helps customers to gain a data center that runs itself, delivering insights across the
physical, virtual, and cloud infrastructure. It correlates data from apps, servers, and storage into one
easy-to-use tool, supporting proactive issue remediation. Customers can choose how they want to optimize, whether based on performance, consolidation, or another factor. Using Artificial Intelligence (AI) and machine learning (ML), vRealize Operations then automates management tasks such as workload placement and capacity management to help customers achieve their goals.

vRealize Business for Cloud


This component of the vRealize Suite helps customers track consumption across private and public
clouds. It helps customers manage their spending and align consumption with their priorities.

vRealize Automation
vRealize Automation provides a self-service catalog that allows users to select services in the private and
public clouds. With vRealize Automation, customers can dramatically accelerate workload delivery while
allowing IT to maintain control of the environment.

Rev. 21.31 199 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

vRealize Suite options

Figure 5-24: vRealize Suite options

Customers have three options for purchasing the vRealize Suite: Standard, Advanced, and Enterprise. All
three options include the Lifecycle Manager, Log Insight, and Operations. However, the Standard and
Advanced Suite provide the Advanced version of vRealize Operations while the Enterprise Suite features
the Enterprise version of this component. As compared to the Advanced version, the Enterprise version
of vRealize Operations provides performance monitoring, analytics, remediation, and troubleshooting
over more extensive hybrid cloud and multi-cloud environments, as well as containerized environments. It
also includes application, database, and middleware monitoring.
Only vRealize Suite Advanced and Enterprise deliver the private cloud features, including vRealize
Business for Cloud and vRealize Automation (vRA). vRA also comes in an Advanced or Enterprise
version. Both versions provide a self-service catalog with a variety of IaaS and other services. Both also
support multi-vendor virtual, physical, and public cloud services, but vRA Enterprise adds application
authoring capabilities.

Rev. 21.31 200 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

HPE Content Packs for vRealize Log Insight

Figure 5-25: HPE Content Packs for vRealize Log Insight

You will now look at the HPE plugins for vRealize Suite components, starting with Log Insight.
HPE provides two content packs for vRealize Log Insight. The free content packs add dashboards,
extracted fields, saved queries, and alerts that are specific to the server and storage hardware. HPE
OneView for VMware vRealize Log Insight dashboards summarize and analyze log information from iLO
and Onboard Administrator (OA). The StoreFront Analytics Content Pack for vRealize Log Insight adds
dashboards and information specific to 3PAR. With operational intelligence and deep visibility across all
tiers of their IT infrastructure and applications, admins have a more complete picture of all the factors
behind performance and possible issues. They can troubleshoot and optimize more quickly using Log
Insight's intuitive, easy-to-use GUI to run searches and queries. And analytics help admins to find the
patterns behind data.

Rev. 21.31 201 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

HPE OneView for vRealize Operations

Figure 5-26: HPE OneView for vRealize Operations

HPE OneView for vRealize Operations enhances the solution’s monitoring capabilities, helping customers
to gain visibility into their complete environment and solve problems more quickly. Read the examples
below to see what the HPE integration adds.

Infrastructure view

Figure 5-27: Infrastructure view

Admins can browse through the infrastructure tree, checking each device’s health and efficiency. Risk
alerts are clearly shown, ready to grab admins’ attention.

Rev. 21.31 202 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

Risk details

Figure 5-28: Risk details

Admins can drill in on alerts to quickly discover potential issues for faster troubleshooting.

Dashboard

Figure 5-29: Dashboard

The view on the left shows the relationships between VMware and HPE OneView resources. On the right,
admins can click to open alerts and health trees. Metric graphs show historical data so that admins can
easily track trends over time.

Rev. 21.31 203 Confidential – For Training Purposes Only


Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution

VMware vRealize Orchestrator (vRO)

Figure 5-30: VMware vRealize Orchestrator (vRO)

VMware Realize Orchestrator (vRO) helps customers to automate complex IT tasks and standardize
operations with workflows. A library of building block actions defines functions such as powering on or
stopping a VM. A wide array of plug-ins, including third-party ones, define various actions. Admins can
easily drag and drop actions to define a workflow, which ensures repeatable and reliable operations. The
workflow can feature logical constructions such as if/then statements or an order to wait for a particular
event to occur.
Admins can create workflows using the vRealize Orchestrator Client.


vRA + vRO

Figure 5-31: vRA + vRO

vRO reveals its true power, however, by making its workflows available to other VMware solutions, such as vSphere and vRA, which can use the workflows as part of their own orchestration functions. Here you see how vRA, in particular, interacts with vRO. vRA communicates with vRO through vRO's RESTful API and can invoke vRO workflows that execute when users select a particular service from the self-service catalog. In this way, vRA and vRO work together to provide IT services and lifecycle management for private and hybrid cloud services.


HPE plug-ins for vRO

Figure 5-32: HPE plug-ins for vRO

HPE offers two plugins for vRO. The HPE 3PAR plug-in for vRO provides predefined workflows for 3PAR
storage while the OneView for vRO (OV4vRO) plug-in offers actions and workflows for vRO to perform
server-focused functions. While offering many predefined workflows and actions, OV4vRO also permits
admins to customize and extend the workflows so that they can automate based on their company’s
needs. The HPE plug-ins for vRO help admins to easily automate the lifecycle of OneView-managed
hardware from deployment to firmware update to other maintenance tasks. Customers can make their
existing workflows more powerful by incorporating HPE OneView’s advanced management capabilities
within them. For example, a cloud service might allow deployment of the workload on bare metal servers.
A vRO workflow could manage the service deployment with OV4vRO furnishing the capability for tasks
such as deploying an OS to the bare metal server and updating its drivers.


OV4vRO workflows and actions

Figure 5-33: OV4vRO workflows and actions

The following sections provide examples of the workflows and actions supported by OV4vRO.


Server-focused workflows

Figure 5-34: OV4vRO workflows and actions

HPE OV4vRO has workflows for performing actions across clusters, configuring OneView instances,
managing hypervisors and clusters imported in OneView, managing hardware on single servers, and
using utilities to customize workflows.
Most workflows work on any HPE-managed server, but the following workflows require blade or Synergy compute modules connected to Virtual Connect (VC) modules or HPE Composable Cloud: Import Hypervisor, Import Hypervisor Cluster Profile, and Configure Host Networking from Server Profile.


Server-focused actions

Figure 5-35: OV4vRO workflows and actions

This figure shows many of the predefined actions for managing the lifecycle of HPE servers.


Workflows for storage

Figure 5-36: OV4vRO workflows and actions

OV4vRO also provides workflows for automating the lifecycle of OneView-managed 3PAR systems. Admins can automate storage provisioning, as well as configuration of Remote Copy—a technology that helps to provide disaster recovery by replicating volumes to remote systems.


HPE InfoSight integration with VMware


In the next section you will focus on how HPE InfoSight integrates with VMware.


HPE InfoSight: Industry’s most advanced AI for infrastructure

Figure 5-37: HPE InfoSight: Industry’s most advanced AI for infrastructure

HPE InfoSight gives customers a new way to approach troubleshooting and optimization. Collecting
millions of pieces of data a day from deployments across the world, this AI-based solution can detect
potential issues, and recommend solutions, before the issues grow into larger problems. InfoSight
extends across the HPE storage, compute, and hyperconverged infrastructure. And it extends to the
virtualization layer. With the breadth and depth of insight delivered by InfoSight, customers can home in on the true causes of issues and better optimize their infrastructure.


Benefits of HPE InfoSight VMware integration

Figure 5-38: Benefits of HPE InfoSight VMware integration

HPE InfoSight’s integration with VMware provides greater insight into the environment. InfoSight can look at
the entire VMware infrastructure and provide detailed advice on both optimizing the environment and
mitigating and avoiding problems. With the in-depth analysis of its cross-stack telemetry, InfoSight
provides in-depth VMware analysis and troubleshooting. As shown in this figure, InfoSight can report
symptoms of an issue, pinpoint the root cause, and then suggest a solution.


HPE InfoSight’s cross-stack analytics

Figure 5-39: HPE InfoSight’s cross-stack analytics

In addition, InfoSight’s cross-stack analytics identifies VM noisy neighbors. Noisy neighbors are VMs or
applications that consume most of the resources and cause performance issues for other VMs. By
identifying high-consuming VMs, InfoSight allows companies to take corrective actions.
InfoSight provides information about resource utilization, providing visibility into host CPU and memory
usage. InfoSight not only identifies latency issues but also helps IT admins pinpoint the root causes
across hosts or storage. It also reveals inactive VMs, allowing IT admins to repurpose or reclaim their
resources. IT admins can also view reports showing the “top performing” VMs, based on IOPS and
latency.
By providing this detailed visibility into their environment and offering recommendations for optimizing performance and remedying issues, InfoSight helps IT admins better manage their environment, ensure they have the necessary resources, and optimize the distribution of workloads across the physical infrastructure.


Example: Diagnose abnormal latency with VM analytics

Figure 5-40: Example: Diagnose abnormal latency with VM analytics

Consider just one example of how InfoSight enables admins to discover the root cause of an issue. A
customer's applications were experiencing issues with excessive latency. InfoSight VMVision pulls data
from the VMware environment and correlates it with data from across the infrastructure. Admins no longer
need to run extensive tests to determine whether the storage, network, or another factor lies behind the
latency. They can pinpoint the true root cause and then take steps to resolve the issue.


Data-centric visibility for every VM

Figure 5-41: Example: Diagnose abnormal latency with VM analytics

With InfoSight VMVision admins can examine and compare performance for all VMs. A heat map helps
the admins to quickly detect which VMs are experiencing issues. InfoSight further helps admins with
explicit root cause diagnostics for the underperforming VMs. It even provides recommendations for
improving the performance.


Other automation and orchestration tools


In this section, you will learn about two developer communities, GitHub and HPE DEV. You will also focus
on common programming languages that allow companies to further automate and orchestrate
operations. Developers can develop their code from scratch, or they can reuse code that is developed by
others and shared in a community.


GitHub

Figure 5-42: GitHub

Git is a free and open-source distributed version control system. It can handle small to very large
projects. Git tracks the history of the projects that are stored in a repository.
GitHub is a website that uses Git for version control. GitHub is mostly used to publish software code, but
it can be used for other projects that need version control. GitHub offers a large variety of code projects,
ranging from small on-premises projects to large cloud-based infrastructures.
Developers can place their code in a repository and can allow others to collaborate on their projects.
Projects can be public (for instance for open-source software), or private (for instance, to allow only
specific team members to work on a project).
HPE has more than 200 repositories on GitHub, ranging from the OneView provider for Terraform to a project for HPE Azure Stack on HPE Nimble Storage. Use the following link to access the HPE page:
https://github.com/HewlettPackard


HPE DEV

Figure 5-43: HPE DEV

HPE DEV is a website for developers in the HPE ecosystem. It is a hub that serves a community of
individuals and partners that want to share open-source software for HPE products and services. It offers
numerous resources to help developers learn and connect with each other, such as blogs, monthly
newsletters, technical articles with sample code, links to GitHub projects, and on demand workshops.
You can find HPE DEV at:

https://developer.hpe.com


HPE RESTful API

Figure 5-44: HPE RESTful API

The HPE APIs are a critical component of its ability to deliver a software-defined infrastructure.
HPE uses a Representational State Transfer (REST) model for its APIs. REST is a web service that
allows clients to use basic HTTP commands to perform create, read, update, and delete (CRUD)
operations on resources. When an application provides a RESTful API, it is called a RESTful application.
A RESTful API makes infrastructure programmable in ways that CLIs and GUIs cannot. For example, a CLI show command produces output that an admin can read but that a script cannot easily parse. A simple GET call to an API, on the other hand, returns information in JSON format that a script can easily extract.
With RESTful APIs, developers can use their favorite scripting or programming language to script HTTP calls for automating tasks such as inventorying servers, updating BIOS settings, and much more. Because RESTful APIs provide a simple, stateless, and scalable approach to automation, they are common in modern web environments, and customers' staff should be quite familiar with developing to them.
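
As a simple illustration, the following Python sketch issues a GET call with the popular requests library and extracts fields from the JSON response. The endpoint, header, and field names are placeholders for illustration only, not taken from a specific HPE API.

import requests

# Hypothetical endpoint and session token, for illustration only
url = "https://appliance.example.com/rest/servers"
headers = {"Auth": "<session-token>"}

response = requests.get(url, headers=headers)
response.raise_for_status()

# The JSON body parses into Python data structures, so a script can
# extract individual fields directly
for item in response.json().get("members", []):
    print(item.get("name"), item.get("status"))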


iLO RESTful API and Redfish conformance

Figure 5-45: iLO RESTful API and Redfish conformance

Redfish is an open, industry-standard RESTful API sponsored and controlled by the Distributed Management Task Force (DMTF), an industry-recognized standards body. Redfish provides a schema for managing heterogeneous servers in today’s cloud and web-based data center infrastructures, helping organizations to transform to a software-defined data center.
In accordance with HPE’s commitment to open standards, the iLO API, used by OneView and other tools to manage HPE ProLiant servers, is Redfish conformant. The Redfish API offers many advantages over earlier interfaces such as IPMI because Redfish is designed for security and scalability.
The iLO RESTful API in iLO 5 has several new features, some of which keep it in conformance with the
latest Redfish developments and some of which add to its management capabilities. New features for
Gen10 include the ability to configure Smart Array controllers through the API and to stage and update
components using the iLO Repository.
A Software Development Kit with libraries and rich sample code helps developers to easily create scripts
for their own environments. Refer your customers to https://developer.hpe.com.
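
As a brief illustration, the following Python sketch reads basic system information from an iLO through the Redfish API. The iLO address and credentials are placeholders, and certificate checking is disabled only to keep this lab-style example short.

import requests

ilo = "https://ilo.example.com"             # placeholder iLO address
creds = ("Administrator", "<password>")    # placeholder credentials

# Read the ComputerSystem resource exposed by the Redfish data model
resp = requests.get(ilo + "/redfish/v1/Systems/1/", auth=creds, verify=False)
resp.raise_for_status()
system = resp.json()

# Standard Redfish properties describing the server
print(system.get("Model"), system.get("PowerState"))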


HPE Python SDK for HPE OneView

Figure 5-46: HPE Python SDK for HPE OneView

HPE has a Software Development Kit (SDK) for OneView that is available for Python. The SDK provides
a pure Python interface to the HPE OneView REST APIs.

The figure shows an example of the SDK used in a Python script. The script adds a ProLiant server to
OneView. The referenced Python script (add-server.py) instructs OneView to connect to the iLO of the
server (172.18.6.31) using the credentials Administrator/HP1nvent. Then the script will instruct OneView
to add the server to the OneView database.
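
A minimal sketch of such a script, assuming the hpeOneView package from the repository below is installed, might look like the following. The appliance address is a placeholder, and the option names should be checked against the SDK's server_hardware examples for the customer's SDK and API version.

from hpeOneView.oneview_client import OneViewClient

config = {
    "ip": "oneview.example.com",   # OneView appliance address (placeholder)
    "credentials": {"userName": "Administrator", "password": "<password>"}
}
client = OneViewClient(config)

# Ask OneView to contact the server's iLO and add the server to its database
options = {
    "hostname": "172.18.6.31",
    "username": "Administrator",
    "password": "HP1nvent",
    "licensingIntent": "OneView"
}
client.server_hardware.add(options)
print("Server added to the OneView database")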

You can find the HPE OneView Python SDK at:

https://github.com/HewlettPackard/oneview-python

Understanding Python
Python is a high-level programming language that works well with automation and orchestration tools. One of Python's design goals is readability: the consistent use of white space and indentation makes Python code easy to follow.

Python is an interpreted language. Interpreted languages execute instructions directly, without a separate compilation step. Python interpreters are available for many operating systems, such as Windows, Linux, and macOS.

Another benefit of Python is its extensibility: the core language can be extended with custom modules.

Example of Python code


The following is an example of Python code. Note how blocks in the code use the same indentation. This
makes the code easier to read.
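
One illustrative possibility (the function and data in this snippet are made up for the example):

def report_power_state(servers):
    # The function body is indented one level
    for server in servers:
        # The loop body is indented a further level
        if server["powered_on"]:
            print(server["name"], "is powered on")
        else:
            print(server["name"], "is powered off")

report_power_state([
    {"name": "esx-01", "powered_on": True},
    {"name": "esx-02", "powered_on": False},
])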


PowerShell for HPE platforms

Figure 5-47: PowerShell for HPE platforms

Many cmdlets are built into PowerShell, and more can be added by importing modules. HPE provides PowerShell modules for many HPE platforms, including a PowerShell module for HPE OneView.

This PowerShell module provides access to the HPE OneView REST API with cmdlets that can be used interactively, like a CLI, or in scripts. Installing the module adds dozens of new cmdlets, one of which is the Copy-HPOVServerProfile cmdlet.

The Copy-HPOVServerProfile cmdlet copies a OneView server profile to a profile with another name.
For example:
Copy-HPOVServerProfile -SourceName "Profile 1" -DestinationName "Profile 2"

You can find the HPE PowerShell modules at: https://github.com/HewlettPackard.

Background information on PowerShell


PowerShell is a scripting language that has its origins in Windows. Like Python, PowerShell is interpreted rather than compiled. By importing custom modules, developers can combine multiple scripts, which makes code easier to manage and maintain.
PowerShell Core is cross-platform by nature. That means that one script will run on all supported
operating systems. For instance, a PowerShell script can be developed on a Windows system and used
on a supported Linux system.
PowerShell Core can be installed side-by-side with Windows PowerShell. This makes it easier for
organizations to migrate to PowerShell Core gradually.

Cmdlets
Cmdlets are typical for PowerShell. A cmdlet is a lightweight PowerShell script designed to perform a
single function.


PowerShell cmdlets have a simple verb-noun syntax: the verb specifies which kind of action to take, and the noun specifies the type of object the action applies to. Some cmdlets can run without parameters; others need additional parameters to run properly.
For example:
• Get-Command: Gets all the cmdlets that are registered in the PowerShell environment
• Get-Help Get-Process: Displays help about the Get-Process cmdlet


Introduction to Chef, Ansible, and Puppet

Figure 5-48: Introduction to Chef, Ansible, and Puppet

Chef, Ansible, and Puppet are examples of configuration management (CM) tools. They provide an
environment for automated provisioning and configuration of IT resources, such as VMs, containers,
applications, and patches.
One of the goals of the DevOps concept is to shorten software development cycles. Automation of all the
components in building software (from integration, to test, to release, to deployment) is essential in
DevOps. This is the reason that automation and automated CM tools are an essential component in a
DevOps and hybrid cloud environment.
Automation and CM tools change the way departments work together and change the human workflow.
Application developers can write, test and deploy applications without having to wait for the operations
department to supply resources.
In short, the benefits of CM tools such as Chef, Ansible and Puppet are:
• Reusability: Create reusable building blocks that can be used in multiple stacks
• Speed: Validate code on non-critical systems with fast feedback loops to catch issues earlier
• Uptime: Ensure changes are tested against downstream dependencies to prevent unforeseen
failures in production
• Common workflow: Ensure all changes are tested and approved with the same rigor and speed.
The configuration management tools ensure changes are only deployed once properly approved.
• Increased reliability: For instance, after integration with HPE OneView, bare‑metal servers are
configured the same way every time and maintain infrastructure compliance with automated rolling
upgrades.
• Automation and compliance: Automatically ensure that code matches the state of the
infrastructure. Automatically test that systems remain in compliance. Automatically test, review, build,
and deploy changes on commit


Chef and HPE OneView

Figure 5-49: Chef and HPE OneView

The unified API in HPE OneView provides a programmatic interface for higher-level configuration
management and orchestration tools. HPE OneView brings infrastructure as code to bare-metal through
templates that unify the processes for provisioning compute, connectivity, storage, and OS deployment in
a single step, just like provisioning VM instances from the public cloud.

Chef enables rapid and reliable deployment and updates of application infrastructure, using recipes that
can be versioned and tested just like application software.

HPE OneView can act as an infrastructure provider for Chef, bringing the speed of the public cloud to
internal IT processes. By using Chef and OneView in combination, developers can provision hardware
resources using infrastructure as code.

Background information on Chef Automate


Chef Automate can support any environment, from applications that run on bare metal in the data center
to container-based micro services in the cloud.

Chef Automate is powered by three open source engines:

• Chef: Is the engine for infrastructure automation.


• InSpec: Lets you specify compliance and security requirements as executable code.
• Habitat: Automates modern applications such as those that run in containers and are composed of
micro services.
The key features of Chef are:
• Chef scales to thousands of nodes per client.
• Chef offers compliance as code (with InSpec) to help a customer manage, update, and keep pace with new governmental requirements and mandates; Chef also offers audit reporting capabilities.


• Chef offers application automation (with Habitat), helping customers stand up and maintain applications, correct issues, and fix errors.

Chef recipes

Figure 5-50: Chef Recipes

To automate the infrastructure, Chef administrators write declarative scripts, called recipes, which are
stored in cookbooks. Chef recipes are relatively easy to write and can be shared among IT organizations
through the Chef Supermarket.

The cookbooks can be used to automate software processes. Chef recipes are more efficient and reliable
than standard shell scripts or manual processes, because they are repeatable, testable, and versionable.


HPE OneView and Ansible

Figure 5-51: HPE OneView and Ansible

Provisioning hardware stacks is a multi-step process, requiring many tools to manage provisioning tasks.
Provisioning infrastructure can easily become a bottleneck in the continuous delivery of applications. HPE
OneView manages all provisioning functions through a single API, leveraging pre-existing profiles and
templates.

Binding Ansible with HPE OneView allows DevOps teams to introduce physical provisioning into the same playbook used to deploy the software stack. Adding a single additional line of code to an Ansible playbook directs HPE OneView to provision hardware and to load the operating system using specified templates.

The Ansible role for HPE OneView is available for download at:
https://github.com/HewlettPackard/oneview-ansible

Example of HPE OneView with Ansible


The following playbook calls the Ansible role for HPE OneView to physically provision servers from bare metal and configure networks, storage, BIOS, and firmware. The Ansible playbook then configures the OS and application stack and assigns the servers to a load balancer, all in a single flow. Using Ansible with HPE OneView also allows automated non-disruptive upgrades from the physical bare metal all the way up through the software stack.
# Deploy physical servers with an OS
- hosts: all-servers
  gather_facts: no
  roles:
    - hp-oneview-server

# Configure and deploy the web servers.
- hosts: webservers
  remote_user: root
  roles:
    - base-apache
    - web

# Configure and deploy the load balancer.
- hosts: lbservers
  remote_user: root
  roles:
    - haproxy

HPE OneView and Ansible provide a software-defined approach to the management of the entire
hardware and software stack, giving IT the ability to deliver new or updated services on an as-needed or
on-demand basis.

Background information on Ansible


Ansible is an open-source community project sponsored by Red Hat. Ansible provides a simple IT
automation engine that automates cloud provisioning, configuration management, application
deployment, intra-service orchestration, and many other IT needs.
Ansible works by connecting to server nodes and pushing out small programs, called “Ansible Modules”
to those nodes. The modules describe the desired state of the system. Ansible executes these modules
(over SSH by default) and removes them when finished.

A library of modules can reside on any machine, and there are no servers, daemons, or databases
required. Typically, admins work with their favorite terminal program, a text editor, and a version control
system to keep track of changes to the content.

Ansible playbooks

Figure 5-52: Ansible playbooks


Ansible uses YAML (YAML Ain't Markup Language), a simple, human-readable data serialization language, in playbooks to automate and orchestrate the build, deployment, and management of an application’s software stack.

Ansible playbooks can be version-controlled and tested just like application software, providing
repeatable and reliable installations and upgrades.

Using a simple Ansible playbook, like the one shown in the figure, DevOps can automate a task such as
the creation of a web server cluster with a load balancer. This playbook assumes that servers are ready
with hardware configured and the OS installed and that they are waiting to land the application stack.


Puppet Forge

Figure 5-53: Puppet Forge

Puppet Forge (https://forge.puppet.com) is a community repository of modules. It features modules for OpenStack, Docker, Kubernetes, OpenShift, Azure, and many others.
One of the available modules is the HPE OneView module. Like the integration modules for Chef and
Ansible, this module allows for management of HPE OneView Appliances. The integration module uses
Puppet manifests and resource declarations to make internal use of the HPE OneView Ruby SDK and
HPE OneView API.

https://forge.puppet.com/hewlettpackard/oneview

Background information on Puppet


Puppet is an open-source software configuration management and deployment tool. Puppet, like Chef
and Ansible, goes beyond traditional system administration. Puppet enables a DevOps environment.

Puppet has its own language, also called Puppet. Puppet is more than just a shell language, such as
Windows PowerShell, or a pure programming language, such as Python. Puppet uses a declarative,
model-based approach. In this way, Puppet can be used to define infrastructure as code and enforce
system configuration.

Figure 5-54: Background information on Puppet

Puppet treats everything in the environment as data: the compute node’s current state, the desired end
state, and all the actions needed to move from one state to the other.

Each Puppet-managed server instance gets a catalog of all the resources and their relationships. It
compares that catalog with the desired system state and will make changes as necessary to bring the
system in accordance with the desired state.


Puppet code

Figure 5-55: Puppet code

The Puppet hierarchy lets you write relatively simple, re-usable code using the following:

• Classes: Blocks of Puppet code that are stored in modules for later use.
• Manifests: Puppet programs are called manifests. A manifest is a collection of classes.
• Modules: Manifests are stored in modules. Puppet modules are Puppet's fundamental building blocks. To keep code reusable and clear, each module should act on a single technology type (for instance, a module for Microsoft SQL or a module for Apache web server).
• Profiles: Profiles are classes that use multiple modules to configure a layered technology stack. For example, you can create a profile to set up a web service, including a load balancer and other components.
• Roles: Roles are classes that wrap multiple profiles to build a complete system configuration. For instance, a web server role might specify that the server should use standard profiles like “base operating system profile” and “base web server profile.” In this example, the first profile could specify that the server should run a specific version of Ubuntu, while the second could specify that it should use NGINX.
This hierarchical approach makes data easier to use and re-use, makes system configurations easier to
read, and makes refactoring easier. Classes, defined types, and plugins should all be related, and the
module should be as self-contained as possible.

Puppet resource declarations


Puppet code consists of resource declarations. A resource describes a specific part of a system's desired
state. For example, it can specify that a specific file should exist, or a package should be installed.
Puppet can do more than just describe the desired state of a system. Using Puppet’s declarative, model-
driven language, admins can:

• Enable simulated configuration changes before enforcing them.


• Enforce the deployed desired state automatically, correcting anomalies in a system configuration.
• Report on the differences between actual and desired states and the changes made when the
desired state was enforced.
Puppet uses a client-server approach to provide configuration management. Servers that are managed
with Puppet run an agent to connect with the Puppet server. At intervals, for instance every 30 minutes,
the agent pulls down its updated system configuration from the server.


Puppet Bolt
Puppet Bolt is an open-source tool that automates infrastructure maintenance. It is not so much about getting or keeping a system in a desired state as about automating tasks that need to be executed on an as-needed basis, for instance, stopping or starting a service, running an update, or running a troubleshooting script.

Puppet Bolt can run on its own or be part of a larger orchestration tool.


Terraform providers

Figure 5-56: Terraform Providers

HashiCorp Terraform is not so much a configuration management tool, like Chef, Ansible, or Puppet, as an infrastructure orchestration tool. Terraform can be used to create, manage, and update infrastructure resources. These resources may be physical machines, VMs, network switches, containers, or others. Almost any infrastructure type can be represented as a resource in Terraform.
Although Terraform is not a Configuration Management tool, it can use such tools as providers. The basic
idea is that Terraform is an orchestrator that uses providers to do the jobs they are good at.
The list of providers is long and ranges from large-scale cloud providers such as AWS, Azure and Google
Cloud, to tools like Chef and Puppet, to very specific providers such as HPE OneView. Each provider is
responsible for the underlying APIs and the interaction with the resources.


Terraform configuration

Figure 5-57: Terraform Configuration

A Terraform infrastructure configuration is defined in .tf files, which are written in HashiCorp Configuration Language (HCL) or JSON. The .tf files are simple to read and write.
Terraform supports variable interpolation based on different sources such as files, environment variables, other resources, and so on.
The figure shows an example of a Terraform .tf file that uses OneView as a provider.

Terraform apply
The terraform apply command is used to apply the changes. Terraform first creates an execution plan, which shows all the actions needed to bring the infrastructure into the desired state. If the plan is created successfully, Terraform pauses and waits for approval. The plan can be aborted if needed; if it looks good, it can be accepted and executed.
The following is an example of the terraform apply command being executed.
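
The exact output depends on the configuration being applied; an abbreviated, illustrative run looks similar to this:

$ terraform apply

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.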


Mutable and immutable infrastructures

Figure 5-58: Mutable and immutable infrastructures

Mutable means liable to change. A mutable infrastructure is an infrastructure that can be modified or updated in place. Server architectures traditionally have been mutable infrastructures. For instance, patches for the OS can be deployed into the existing OS. This is very flexible but can cause inconsistencies in the infrastructure as a whole. In particular, updates layered on top of earlier updates can cause configuration drift.
On the other hand, an immutable infrastructure is an infrastructure whose resources cannot be changed once they are deployed. If anything needs to be changed or updated, a completely new instance of the resource is deployed. Containers are an example of a resource in an immutable infrastructure. In the cloud, where new environments can be created in minutes, an immutable infrastructure can be a feasible strategy.


Final thoughts on CM and orchestration tools

Figure 5-59: Final thoughts on CM and orchestration tools

Orchestration tools are focused on the end result, the desired state. If anything in the current state is missing, the orchestration tool can automatically provide the missing resource. This is very useful in environments that require a steady state. In this sense, an orchestration tool fits the concept of an immutable infrastructure.
CM tools configure the resources in the environment. If there is a problem with a resource, a configuration management tool will attempt to repair the resource instead of simply replacing it. This fits the idea of a mutable infrastructure.
In theory, the distinction between CM tools and orchestration tools is clear. In daily practice, however, it can be hard to decide whether a tool is a CM tool or an orchestration tool, or whether an infrastructure is mutable or immutable.
For instance, Chef is a CM tool, but it can work with OneView to replace servers by applying server profiles. Chef can also work with Docker containers to provision and replace complete container resources. This could be seen as replacing a complete resource, as in an immutable infrastructure. By using the OneView integration, Chef can also act on the hardware infrastructure layer, and thus acts as an orchestration tool.


Summary

Figure 5-60: Summary

This module has shown you the power of automation and orchestration. HPE OneView integrates with vSphere vCenter and the vRealize Suite. HPE InfoSight offers AI-driven insights and optimization for the complete environment, from the infrastructure to the VM. You also looked at various automation and orchestration tools that your customers might be using so that you understand their role in an SDDC.


Learning checks
1. What is one benefit of HPE OneView for vRealize Orchestrator (OV4vRO)?
a. It integrates a dashboard with information and events from HPE servers into
vRealize.
b. It provides an end-to-end view of servers' storage (fabric) connectivity within the
vRealize interface.
c. It adds pre-defined workflows for HPE servers to vRealize.
d. It integrates multi-cloud management into the VMware Cloud Foundation (VCF)
environment.
2. Which is an option for licensing HPE OneView for vCenter (OV4VC)?
a. InfoSight licenses
b. Remote Support licenses
c. Composable Rack licenses
d. OneView licenses
3. What is one benefit of OV4VC that is available with the OneView standard license?
a. An easy-to-use wizard for growing a cluster from a single tool
b. Non-disruptive cluster firmware updates from within vCenter
c. An inventory of servers and basic monitoring of them in vCenter
d. Workflows for managing servers and storage
You can check the correct answers in “Appendix: Answers.”



Module 6: Design an HPE Hyperconverged Solution for a Virtualized Environment

Learning objectives
In this module, you will review the importance of emphasizing the benefits of HPE hyperconverged
solutions to your customers.
After completing this module, you will be able to:
• Given a set of customer requirements, position hyperconverged SDI solutions to solve the
customer’s requirements
• Given a set of customer requirements, determine the appropriate hyperconverged platform

• Explain the integration points between HPE hyperconverged solutions and VMware solutions

• Use the HPE SimpliVity Upgrade Manager


Customer scenario

Figure 6-1: Customer scenario

A small community college is struggling to maintain its data center, which has grown organically over the
years. The data center has a lot of aging equipment that is difficult for the limited IT staff to manage. The
college has shifted some services to the cloud, and, while the college wants to maintain other services
on-prem, the customer has made simplifying the data center a priority. The customer has already begun
virtualizing with VMware; your contact originally brought you in to help with a server refresh to handle the
consolidated workloads.
In this discussion, you have discovered some more issues. The CIO wants to improve availability by
adding VMware clustering. He realizes that clustering requires shared storage, but the data center does
not have a SAN—and the CIO does not want to add one. The IT staff doesn't have the expertise to run a
SAN. The CIO also has received complaints about the organization's current manual processes for
backups. But—he tells you—he doesn't have the budget for another project at this point.


Emphasizing the software-defined benefits of HPE SimpliVity


You need to explain to the customer how HPE SimpliVity makes their data center more software-defined
with an emphasis on simplicity. This topic shows how the SimpliVity Data Virtualization Platform simply
delivers responsive, always-there storage without a lot of tuning.


Deduplication with HPE SimpliVity

Figure 6-2: Deduplication with HPE SimpliVity

Some legacy hyperconverged vendors support either inline deduplication or post-process deduplication.
While their inline deduplication does have the intended effect of reducing IOPS and capacity demands on
their drives, it is CPU-intensive, taking power away from production VMs and reducing available IOPS.
Post-process deduplication does the same while also adding IOPS demands on the disk drives.
The HPE SimpliVity Data Virtualization Platform delivers inline deduplication and compression for all data
without compromising the performance of the application VMs running on the same hardware platform.


HPE SimpliVity Data Virtualization Platform

Figure 6-3: HPE SimpliVity Data Virtualization Platform

The HPE SimpliVity nodes look like standard x86 servers with components such as SSDs, DRAM, and
CPUs. And like any virtualized hosts, they run ESXi or Hyper-V.
But the Data Virtualization Platform empowers simple software-defined storage (SDS), built into the
solution. In logical architecture, it sits between the hardware and the hypervisor, abstracting the hardware
from the VMs and apps that are running on top.
The following sections summarize each part of the architecture.

Presentation Layer
The Presentation Layer interacts with the VMware hypervisor and presents datastores to the hypervisor.
From the point of view of hypervisors—and VMs and apps running on top of them—each datastore is full
of all of the data written to it. However, this layer does not contain any actual data or metadata.

Data Management Layer: File System


The Data Management Layer links the Presentation Layer with the disks that store the actual data. The
top part of this layer is the File System, which stores containers representing VMs and VM backups. The
File System does not store any actual data, only metadata, pointing to data in the object store. To back
up or clone a VM, the File System simply creates a new container with the same metadata. Because no
data is actually copied, the process completes very quickly.

Data Management Layer: Object Store


The Object Store forms the rest of the Data Management Layer. The Object Store stores deduplicated
data. As you see, if VMs (and the backups) have identical data blocks, the object store contains only one
copy of the data. The metadata in the File System simply refers to the same object more than once. The
Object Store data is physically stored on local drives contributed from each node.


HPE SimpliVity Data Virtualization Platform in action

Figure 6-4: HPE SimpliVity Data Virtualization Platform in action

This figure shows the Data Virtualization Platform in action. The figure simplifies a bit by collapsing the two parts of the data management layer. As you see, the data management layer only writes to the disk
when a VM sends a write request with a new block. If the block already exists, the data management
layer simply updates metadata, and no IO actually occurs on the disk. Because the best IO is the one that
you don't have to do, HPE SimpliVity doesn't just dramatically reduce capacity requirements, it also
improves performance.


Storage IO reduction

Figure 6-5: Storage IO reduction

In a legacy solution, workload IO makes up only a fraction of the total IO requirements. Snapshots, data
mirroring, and backups all add IOs too. With its ultra lightweight approach to protecting data and by
applying inline deduplication for all data, HPE SimpliVity helps customers to reduce their storage IO and
improve performance with less infrastructure.
Read the following sections to see how SimpliVity makes IO disappear.

Backups
When backups run, any data that has been changed since the last backup (at the very least) needs to be
read off the array and sent across the network to the backup storage location. In traditional solutions, this
creates a major spike every night, which is the reason backups are generally only scheduled in the
evenings. By taking local backups via metadata, HPE SimpliVity is able to take full backups with
essentially no I/O, thus eliminating the largest chunk of I/O.

Figure 6-6: Storage IO reduction—Backups

Mirror
To replicate data to a remote site, a traditional solution must read data from the array and send it across
the WAN. This results in additional I/O. By intelligently only moving unique data between data centers,
HPE SimpliVity dramatically reduces the amount of data moved.


Figure 6-7: Storage IO reduction—Mirror

Snapshots
Array-level or vSphere snapshots are quick and often used as a short-term recovery point. While their
effect is relatively small, these snapshots do add to IO requirements. Because HPE SimpliVity backups
can be taken in seconds and have no IO impact, they make an easy replacement for local snapshots.

Figure 6-8: Storage IO reduction—Snapshots


Workload
HPE SimpliVity leaves just the primary application workload, with just a bit of data protection overhead.
And remember that SimpliVity deduplicates and compresses all data, not just data protection. This
reduces the I/O profile even further.

Figure 6-9: Storage IO reduction—Workload

Final result
HPE SimpliVity has dramatically reduced IO requirements while delivering data protection as good or
better than the legacy solution.

Figure 6-10: Storage IO reduction—Final result


HPE SimpliVity data protection mechanisms: RAIN

Figure 6-11: HPE SimpliVity data protection mechanisms: RAIN

HPE SimpliVity clusters combine two ways of protecting data: redundant array of independent nodes
(RAIN) and redundant array of independent disks (RAID). RAIN is described below, and RAID is
described on the next page.

RAIN
The cluster assigns every VM to a replica set with two nodes. Each node has a copy of the VM’s data,
and writes to the VM’s virtual drive are synchronously replicated to both nodes.
To decrease latency, the OmniStack Virtual Controller (OVC) on the node receiving replicated data sends an acknowledgment (Ack) as soon as it receives a write request. The original OVC then sends an Ack to the VM. Meanwhile, both nodes individually deduplicate and compress the data and write it to each node’s local drives.
The RAIN function described above is SimpliVity's typical behavior. However, as of OmniStack v4.0.1,
customers can choose to create single-replica datastores. VMs created on single-replica datastores are
single-replica VMs, for which the cluster maintains a copy on only one node. The company might choose
to use single-replica VMs for non-critical apps.


HPE SimpliVity data protection mechanisms: RAID

Figure 6-12: HPE SimpliVity data protection mechanisms: RAID

SimpliVity further protects data by having each node use RAID to store data. A single node can lose one
drive without losing any data. By combining RAID and RAIN, the cluster can lose at least two, and
possibly more, drives without losing any data.


HPE SimpliVity for mission-critical apps

Figure 6-13: HPE SimpliVity for mission-critical apps

Many customers want the simplicity of hyperconvergence for mission-critical applications, but they can
only deploy such applications on solutions that they can trust. Many competing hyperconverged vendors
use only RAIN to protect data in case of drive failures. HPE SimpliVity's RAIN + RAID can withstand
many more drive failures, making it the clear winner in the mission critical space.


Why HPE SimpliVity data protection is better


Review the graphics to see how HPE SimpliVity protects customers’ data better than solutions that
employ only RAIN.

One failed drive on one node


In the figure below, you see one failed drive on one node. Note:
• Other vendor RAIN -> Data still available
• HPE SimpliVity RAIN + RAID -> Data still available

Figure 6-14: One failed drive on one node

One failed drive on two nodes


In the figure below, you see one failed drive on two nodes. Note:
• Other vendor RAIN -> Data gone
• HPE SimpliVity RAIN + RAID -> Data still available

Figure 6-15: One failed drive on two nodes


How HPE SimpliVity localizes data

Figure 6-16: How HPE SimpliVity localizes data

For any solution that features SDS, data localization can become an important consideration.
Hyperconverged solutions transform local drives on the clusters’ nodes into an abstracted pool of storage,
which is good from the point of view of simplicity and management. However, from the point of view of
performance, it is best when a VM’s virtual disk is stored on the local drives that belong to the node that
hosts that VM. At the same time, the data also needs to be stored on one or more other nodes to protect
against failures.
The solution could take a few different approaches. In the primary data localization approach, the VM’s
primary data is localized on its node while copies are distributed across multiple other nodes. The RF2
approach makes one copy (in addition to the original) while RF3 makes two. In either case, the peak
performance when all nodes are up is good because the VM’s data is localized. However, replication
takes a toll on performance because the primary node needs to calculate to write each copied block. And
performance becomes poor when a VM moves because data is no longer localized. The system can
rebalance and move data to the VM’s current node, but this takes time and generates IOs that can
decrease performance across the system. In short, these approaches cannot deliver consistent,
predictable performance.
Having no data localization improves predictability because the performance is the same whether all nodes are up or one has failed. However, without data localization, the performance is only fair.
HPE SimpliVity takes a full data localization approach so that it provides the best peak performance and
the best predictability. A VM’s data is localized on the node that hosts it, and all of its data is also
replicated to the same other node. Replication takes less of a toll on performance because the primary
node knows that it always replicates to the same other node.
If the first node fails, or if its local drives fail, the VM can move to the second node and continue to receive exactly the same excellent performance without any data rebalancing.


Keeping data local with HPE SimpliVity Intelligent Workload Optimizer

Figure 6-17: Keeping data local with HPE SimpliVity Intelligent Workload Optimizer

If the VM needs to move, how does the HPE SimpliVity cluster guarantee that it moves to the node that
already has its data? You will look at an HPE SimpliVity solution built on VMware as an example. The
HPE SimpliVity cluster is a VMware cluster that uses VMware Distributed Resource Scheduler (DRS) and
High Availability (HA). DRS handles choosing the node to which each VM is deployed while HA helps the
cluster restart VMs on a new node if the original host fails. DRS can take factors such as CPU and RAM
load into account when it schedules where to deploy or move a VM. However, DRS does not have insight
into where the SimpliVity DVP stores data. It assumes all data is external to the hosts and, therefore,
moves VMs around freely within the cluster with no regard to where the data may be.
Some competing hyperconvergence solutions simply react to DRS. After DRS moves the VM, the solution
moves data around until it is local again. However, this “follow the VM” approach takes time and impacts
performance with a lot of extra network traffic and IOs. The SimpliVity Intelligent Workload Optimizer
takes a proactive approach. It integrates with DRS and creates DRS rules to ensure that each VM is
deployed on one of the two nodes that stores its data.
This allows VMs to have the peak and predictable performance that data locality and DRS can both
provide, while avoiding the extra I/O and network load of the "follow the VM" approach. The HPE
SimpliVity DVP handles the configuration automatically. In fact, SimpliVity self-heals the configuration
even if an admin changes the groups or rules.


Speeding up data restores

Figure 6-18: Speeding up data restores

SimpliVity’s restore capabilities really set it apart from the competition, allowing companies to restore data
in seconds.
The Town of Mansfield’s experience shows how quickly data can be restored. As a new HPE SimpliVity
customer, the Town of Mansfield noticed the gains in application performance almost immediately. They
also knew they were saving storage space and backup times had decreased significantly. But it was not
until they needed to restore data that they fully appreciated HPE SimpliVity’s built-in resiliency.
The Town of Mansfield had a network issue that unfortunately corrupted their primary SQL Server. The
problem occurred around 9:30 a.m. When they could not resolve the issue, the organization knew they
had to restore their SQL Server from the backup. Before HPE SimpliVity, restoring the server’s 950 GB
would have taken 5 hours, and the Town would have lost more than half a day in productivity.
With HPE SimpliVity, however, they were able to restore their 8:15 a.m. SimpliVity backup, and it took
only 40 seconds to restore the 950 GB SQL Server. The organization was “up and running in under a
minute.” (“The Town of Mansfield’s Unexpected Journey into Hyperconvergence,” Upshot, Oct. 14, 2019.)


HPE SimpliVity integration with VMware


The next section focuses on how HPE SimpliVity is integrated with VMware.


HPE SimpliVity plug-ins for VMware

Figure 6-19: HPE SimpliVity plug-ins for VMware

In addition to providing simple, out-of-the-box SDS, HPE SimpliVity integrates with the virtualization
solution to help customers manage SimpliVity from a single interface. The HPE SimpliVity plug-in for
VMware enables admins to manage SimpliVity nodes as VMware hosts just as they are used to doing, but it also adds extra functionality specific to SimpliVity. For example, admins can monitor the SimpliVity Federation as a whole. They can also manage automatic backup policies and initiate manual backups and recoveries. The plug-in also lets admins monitor datastores and the underlying storage from a single tool.
They can create new datastores and expand existing ones. With a single view for monitoring resource
utilization, they can more quickly find and resolve issues. Finally, the SimpliVity plug-in for VMware
includes a Deploy Management Virtual wizard, which allows you to convert a peer-managed federation to
a centrally managed federation. The wizard gives you more flexibility in deploying and managing
federations.
HPE SimpliVity also offers seamless integration with vRealize Automation (vRA) and vRealize Orchestrator. In the previous module, you learned how these solutions help companies use powerful workflows to orchestrate their services. HPE has developed workflows specific to HPE SimpliVity to accelerate companies' efforts to use vRA to automate their SimpliVity environment. The figure above shows a list of the tasks customers can automate with the workflows. If the customer wants to use vRA in a SimpliVity environment, HPE recommends deploying vCenter, vCenter Single Sign-On, the vRA appliances, and vRealize Orchestrator.


HPE SimpliVity Deployment Manager with VMware vCenter

Figure 6-20: HPE SimpliVity Deployment Manager with VMware vCenter

After admins install the HPE SimpliVity nodes in the data center on Day 0, the HPE SimpliVity
Deployment Manager helps to automate the deployment of the solution. Read the sections below to see a
high-level overview of the process.

1. vCenter pre-setup
First, admins should establish on vCenter the clusters to which they want to add HPE SimpliVity nodes.

Figure 6-21: 1. vCenter pre-setup

2. Beginning to use Deployment Manager


The admin enters settings on the Deployment Manager to connect to vCenter. The admin chooses a
cluster and specifies whether to use an existing Federation or create a new one.
The admin can then define network settings or import them from an XML.


Figure 6-22: 2. Beginning to use Deployment Manager

3. Node discovery
Admins first discover and add a single node to the cluster. They can then add more.
Here you see that the first node receives a DHCP address. The admin then just needs to scan for hosts,
and the Deployment Manager automatically discovers it.

Figure 6-23: 3. Node discovery

4. Node deployment
The admin now tells the Deployment Manager to deploy network settings and the ESXi hypervisor to the
host.
After adding the first node to the cluster, admins can quickly deploy the same settings to add more nodes.

Figure 6-24: 4. Node deployment


Why REST API

Figure 6-25: Why REST API

As you have seen, admins can quickly complete common tasks for managing SimpliVity clusters in a GUI.
But sometimes admins need to repeat the same task many times. For example, they might need to clone
multiple VMs every morning for a test team, so clicking through a GUI would be tedious. That’s why HPE
has created the HPE SimpliVity REST API: to allow companies to script the most common administrative
tasks available in the GUI.


Using the REST API

Figure 6-26: Using the REST API

This figure provides an example of a PowerShell function that utilizes the SimpliVity Rapid Clone
functionality. But developers can use any scripting language that can execute a REST API call, including
Python, Java, or orchestration platforms like vRealize Orchestrator.
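As a complement to the PowerShell example in the figure, here is a minimal Python sketch of the same pattern: request an OAuth token from an OmniStack Virtual Controller (OVC) and then call an API endpoint with it. The OVC address, credentials, and endpoint details shown are illustrative assumptions, not values from this course; use the documentation interface described next to confirm the exact endpoints and parameters for your software version.

# Minimal sketch of calling the HPE SimpliVity REST API from Python.
# The OVC address, credentials, and endpoint paths are placeholders/assumptions.
import requests

OVC = "https://192.0.2.10"                 # management IP of an OVC (example value)
USER, PASSWORD = "svtuser", "svtpass"      # placeholder credentials

# 1. Request an OAuth token.
token_resp = requests.post(
    f"{OVC}/api/oauth/token",
    auth=("simplivity", ""),               # assumed built-in API client
    data={"grant_type": "password", "username": USER, "password": PASSWORD},
    verify=False,                          # lab only; use a trusted certificate in production
)
token_resp.raise_for_status()
token = token_resp.json()["access_token"]
headers = {"Authorization": f"Bearer {token}"}

# 2. List the virtual machines known to the Federation (assumed endpoint).
vms = requests.get(f"{OVC}/api/virtual_machines", headers=headers, verify=False).json()
for vm in vms.get("virtual_machines", []):
    print(vm.get("name"), vm.get("state"))

A clone or backup could then be triggered with a POST to the corresponding action endpoint for a specific VM; the documentation interface described in the following sections lists the exact URIs, parameters, and payloads.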
To make it easier for users to develop and prototype automation scripts, SimpliVity offers a
documentation interface.
The following sections explain how programmers can use this interface to help them create the script.

Finding objects in the interface

Figure 6-27: Finding objects in the interface

Users can navigate through the interface and easily find object types such as virtual machines and
functions that execute on those objects.


Viewing documentation and trying out values

Figure 6-28: Viewing documentation and trying out values

Clicking on any function shows documentation about the function and parameters available for the
function.
After entering values into this screen, admins can click “Try it out” to actually execute the function.

Viewing and using results

Figure 6-29: Viewing and using results


They will then see the actual results. They can copy the URL to use within custom written code or an
orchestration platform. It’s a very easy and convenient way to test and prototype automation actions.


HPE SimpliVity Upgrade Manager

Figure 6-30: HPE SimpliVity Upgrade Manager

The HPE SimpliVity Upgrade Manager helps customers to quickly upgrade a complete Federation to new
software without impacting services. Admins choose the new software and run the Upgrade Manager.
The Upgrade Manager upgrades one node in each cluster at a time, first moving that node's VMs to other
nodes. After upgrading one node, the Upgrade Manager moves the VMs for the next node and upgrades
that node until all nodes are on the same software. As you see, if the Federation has multiple clusters, the
Upgrade Manager can upgrade multiple clusters at once.
After the upgrade is complete, admins can choose to roll back the upgrade on all nodes or individual
nodes. While all nodes in a Federation typically have to be on the same version, they are permitted to be
on different versions while the Federation is in this state. Once admins are sure that all nodes are running
the new software and the upgraded Federation is working as expected, they can commit the upgrade, after which point they can no longer roll back.


Sizing an HPE SimpliVity solution


This section focuses on sizing an HPE SimpliVity solution.


HPE SimpliVity design process

Figure 6-31: HPE SimpliVity design process

This section covers the first two steps of the HPE SimpliVity design process. You will first review
strategies and tools for collecting the data necessary for sizing the solution. You will then look at how to input
what you have learned into the HPE SimpliVity Sizing Tool in order to determine the number and type of
nodes to deploy.
Please note that you will need HPE employee or partner credentials to access some of the tools
referenced in this section.


Data gathering

Figure 6-32: Data gathering

Begin by reviewing the data gathering process. Read the following sections to review.

Basic information to put into the sizer


• Number of VMs
• Compute (processor): number of vCPUs, desired vCPU to physical core ratio (V:P), and peak
GHz
• Compute (memory): vRAM requirements
• Storage capacity: Used vs allocated
• Backup requirements
• Storage performance
• IOPS
• Percentage read vs write
• IO sizes
• Growth
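To show how a few of these inputs interact, the short sketch below works through the basic compute, memory, and capacity arithmetic with made-up values. It is only an illustration of the relationships, not a substitute for the HPE SimpliVity Sizing Tool, which also accounts for deduplication, compression, backup retention, and HA.

import math

# Illustrative customer inputs (not from a real sizing exercise)
vm_count          = 120
total_vcpus       = 480       # across all VMs
vcpu_ratio        = 4         # desired vCPU-to-physical-core ratio (V:P)
vram_per_vm_gib   = 16
used_capacity_gib = 24_000
annual_growth     = 0.20      # 20% growth per year
planning_years    = 3

# Physical cores needed to honor the V:P ratio
physical_cores = math.ceil(total_vcpus / vcpu_ratio)

# Total vRAM before hypervisor and OVC overhead (the sizer adds that)
total_vram_gib = vm_count * vram_per_vm_gib

# Used capacity after compounded growth over the planning horizon
future_capacity_gib = used_capacity_gib * (1 + annual_growth) ** planning_years

print(f"Physical cores required: {physical_cores}")
print(f"Total vRAM (GiB):        {total_vram_gib}")
print(f"Capacity after {planning_years} years (GiB): {future_capacity_gib:,.0f}")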

Additional information about files


• Are file systems large? Do file servers represent a significant percentage of the total storage?
• Do hosts or applications implement compression or deduplication? Do they implement
encryption?
• Do VMs have a lot of media and archive files that are already compressed (such as .tz, .gz, .zip,
.mp3, .mp4)?

Additional information about backups


• What are backup vs archival needs?
• What are the primary SLAs? (will be fulfilled with datastore-level policies)


o Where will data be backed up (local and remote)?
o How frequently will backups be taken?
 Most frequent = 10 minutes (more frequent requires a DSR)
o How long will backups be maintained?
o What is the change rate?
• Does the customer have any workload-specific SLAs? (will be fulfilled by individual VM-level
policies)

Data gathering tools


• Interviews
• HPE Assessment Foundry (SAF)—This free suite of tools helps you collect data about customer environments. It analyzes configurations and workloads, generating detailed reports. It also helps you size HPE solutions. For more information, click HPE Assessment Foundry.
• HPE Software-Defined Opportunity Engine (SDOE)—This AI-enabled tool is available through
HPE InfoSight. Using AI and deep learning, SDOE provides insights into customers’ storage
environment and then offers recommendations for technology solutions. In less than a minute,
SDOE auto-generates customer proposals, which include sizing, configuration, and total cost of
ownership analysis. It also adds optional HPE Financial Services and Pointnext information. For
more information about SDOE, visit Seismic or HPE Products and Solutions Now.


Reviewing choices for the HPE SimpliVity platform

Figure 6-33: Reviewing choices for the HPE SimpliVity platform

HPE offers a wide array of SimpliVity platforms optimized for customers' particular requirements,
workloads, and preferences. Read the sections below for a brief review of each model.

HPE SimpliVity 380G


• Option for extra PCIe card (GPU or NIC)
• GPU acceleration useful for workloads such as CAD and analytics
• VDI deployments

HPE SimpliVity 380H


• Lower cost backup
• Hybrid flash and HDD configurations for lower cost storage
• Backup, archive, and DR for other clusters
• High-capacity mixed workload
• General-purpose virtualization

HPE SimpliVity 2600


• Optimized for space constrained areas that need high compute and moderate storage
• Edge computing
• High-density VDI

HPE SimpliVity 325


• Powered by AMD EPYC for 2P-like performance with 1P
• All flash XS and S options
• Ideal for ROBO and edge
• Small-to-medium businesses
• VDI


Preparing for sizing

Figure 6-34: Preparing for sizing

Several factors affect how many clusters you plan. Location can play a role. For example, if you are
designing a solution for a customer with several branch offices, each site might have its own cluster.
Stretched clusters can span WAN links and multiple sites. However, you should only use a stretched
cluster when you want to distribute services across the two sites. For the ROBO solution, it can make
more sense to deploy a separate cluster at each site so that VMs for that site stay local. Clusters can
back up data to a cluster at another site for higher availability.
You might need to plan multiple clusters at the same site if you need a large solution with more than the recommended number of nodes per cluster.
You might also want to create multiple clusters even if you have fewer than 16 nodes. It can be beneficial to isolate latency-sensitive applications such as VDI on their own clusters. When in doubt, separate your workload types for optimal performance.
Finally, consider the need for separate compute nodes, which you might want to deploy for power users in a VDI solution or to support processor-hungry applications.


Getting started with the HPE SimpliVity Sizing Tool

Figure 6-35: Getting started with the HPE SimpliVity Sizing Tool

If you are an HPE Partner, you can access the HPE SimpliVity Sizing Tool. (Click here to access the
sizer. If you have trouble accessing the sizer at this link, check HPE Products and Solutions Now for
updated information about it.)
The figure above shows the first sizer window, which will show any saved sizings.
1. Click the Create New Sizing button to begin sizing a new solution.
2. In the pop-up window that is displayed, enter a name in the Sizing Name field and select the type of
deployment: You can choose Infrastructure Cluster for general virtualization and End-User
Computing (GPU) or End-User Computing (Non-GPU) for VDI.
3. Click Create Sizing.
4. Click Add Cluster.
5. In the pop-up window that is displayed, enter a name in the Cluster Name field and click Add
Cluster.


Inputting information to size the cluster

Figure 6-36: Inputting information to size the cluster

When you add a cluster, you will see a window like the one shown in the figure above. Each cluster
consists of one or more VM groups. You enter sizing information for each VM group separately. Read the
following sections for more information.
After configuring all the settings, click Add Cluster.

Basic Mode/Advanced Mode


When you first enter this window, you see the Basic Mode settings. Click Advanced mode to show more
advanced settings. Click Basic Mode to hide those options again.

Basic Mode settings


Version number
Choose the version of software that you want to use for the sizing.

VM Count through Latency Tolerance


Fill in these values based on the information that you collected:
• VM count = Number of VMs to deploy on this VM group
• Total vCPU count = Total number of vCPUs on those VMs
• vCPU ratio = Max number of vCPUs desired per physical core
• Allocated memory (GiB) = Total memory allocated to those VMs
• Used Capacity (GiB) = Total storage required for this VM group
• IOPS 95% Percentile = Total read and write IOPS at 95% of the max
• Latency Tolerance (ms) = Max latency tolerated by workloads that will run on this VM
group
Add VM Group
Click this button to add another VM group to the cluster.

Additional cluster settings


• Cluster Growth—Specify expected growth by percentage. The fields refer to growth for compute,
memory, storage, and IOPS respectively.
• Stretch Cluster—Select N+N if a stretch cluster is needed


• Compute HA—Select this check box if you want the sizer to take HA into account. The sizer will
ensure that the cluster still meets requirements if a node fails.
After configuring all the settings, click Add Cluster.

Advanced Mode
This section outlines the additional settings available when you select Advanced Mode.

Backup Policies
Click the Backup Policies button to create a backup policy that you can reference in the Local Backup
Policy or Remote Backup Policy menu.

Recommendations
Click the Recommendations button to see the products recommended for your clusters.

Deduplication and compression ratios


Choose the compression and deduplication ratios. You should generally stay with the ratios recommended by default in the Sizing Tool. However, some workloads, in particular VDI, do permit a higher deduplication ratio. Note that you set ratios for hardware-optimized nodes and software-optimized nodes separately.
Future updates to the sizing tool might change the default recommended values.

VM Count through Latency Tolerance


Fill in these values based on the information that you collected:
• VM count = Number of VMs to deploy on this VM group
• Total vCPU count = Total number of vCPUs on those VMs
• vCPU ratio = Max number of vCPUs desired per physical core
• Peak pCPU (GHz) = Specify the peak CPU requirements across the VMs
• Allocated memory (GiB) = Total memory allocated to those VMs
• Peak Capacity (GiB) = Total storage required for this VM group
• 95% Peak Storage IOPS = Total read and write IOPS at 95% of the max
• Latency Tolerance (ms) = Max latency tolerated by workloads that will run on this VM
group
Hourly Change Rate through Yearly Change Rate
Set the expected percentage that data will change over the frequency that you have selected for backups.

Backup policies
If you plan to store local backups, select a Local Backup Policy.
If you plan to back up to another cluster, select a Remote Backup Policy and also click Configure to
choose the cluster.
You should have created the policies by clicking the Backup Policies button.

Cluster Growth
Specify expected growth by percentage. The fields refer to growth for compute, memory, storage, and
IOPS respectively.

External Compute
If you plan to recommend external compute nodes, specify their CPU and memory.


Compute HA
Select this check box if you want the sizer to take HA into account. The sizer will ensure that the cluster
still meets requirements if a node fails.


Architecting the HPE SimpliVity Solution


This topic reviews architectural designs and decisions for an HPE SimpliVity solution.


HPE SimpliVity design process

Figure 6-37: HPE SimpliVity design process

You are at the third step of the design process. You will now review elements of the HPE SimpliVity
architecture and best practices for designing them. Finally, you will review situations in which you need to
submit a Deal Specific Request (DSR).


Architectural design

Figure 6-38: Architectural design

You should create an architecture diagram that shows the components within each cluster and how
clusters connect together. Read the sections below to review the different components.

Cluster
Include each cluster and the site at which it is located. Indicate the number of nodes and the model.
Attached to the diagram, you can add more information about the model such as processor choices and
amount of memory.

vCenter (site 1)
For vCenter and vSphere VDI deployments, you should indicate where vCenter servers are located. They
can be deployed on a separate management SimpliVity cluster, which is generally preferred for larger
deployments. For small deployments, you can place vCenter on the same SimpliVity cluster that hosts
production VMs. You can also deploy vCenter outside of SimpliVity. If you choose to deploy vCenter on a
SimpliVity cluster that it manages, you must deploy vCenter first and then move it to the cluster.
For Hyper-V deployments, you should similarly indicate where Microsoft System Center (MSSC) is deployed.

vCenter (site 2)
A single vCenter server can manage multiple HPE SimpliVity clusters in a Federation. However, the
Federation can also include up to 5 vCenter servers. In this example, site 2 has its own vCenter server for
resiliency. When a Federation has multiple vCenter servers, they must connect with Enhanced Linked
mode.
For Hyper-V, a single MSSC instance is supported, but MSSC can use Microsoft clustering.

Arbiter
An Arbiter helps to break ties in failover situations. HPE SimpliVity 3.7.10 or earlier always required the
installation of an arbiter. For OmniStack v4 and above, Arbiters are only required for two-node clusters or
for any stretch clusters. However, they are also recommended for four-node clusters.


An Arbiter can never be deployed on a cluster for which it acts as Arbiter. However, it can be deployed on
a different cluster. It can also act as Arbiter for multiple clusters.

Federation
A Federation includes multiple HPE SimpliVity clusters that are managed by the same vCenter
infrastructure. This infrastructure could consist of one vCenter server or multiple vCenter servers
operating in Linked mode.

Site-to-Site links
You need to indicate the link between sites, specifying their bandwidth and latency. This example has
separate clusters at each site, so the latency requirements are less strict. A link used by a stretched
cluster, which has members at multiple sites, must have round trip time (RTT) latency of 2ms or less.


Network design

Figure 6-39: Network design

Every cluster requires three networks: a management, storage, and federation network.

Management
The Management network is the network on which external devices reach the SimpliVity cluster and on
which SimpliVity communicates with vCenter. This network has a default gateway, and it should be
advertised in the routing protocol used by the network so that it is reachable from other subnets.
It can use 1, 10, or 25 GbE NICs, which are shared by VMs' production networks using tagged VLANs.

Storage
Each node has a VMkernel adapter for storage traffic. This adapter connects to the Storage network, as
does each OVC. The Storage network carries NFS traffic for mounting the SimpliVity datastore to the host
and handles IO requests from VMs.
If the cluster has compute nodes, their VMkernel adapters should connect to this network, too.
This network should be dedicated to this purpose; it is not routed. It requires an MTU of 9000 and a
latency of 2ms or under. It can be 10GbE or 25GbE.
With v 4.1.0, HPE SimpliVity allows IT admins to control how much bandwidth HPE SimpliVity uses for
backup and restore operations. This feature is particularly useful for customers who deploy HPE
SimpliVity at branches, remote locations, or any location that has limited bandwidth.

Federation
The Federation network carries OVC-to-OVC communications between nodes. Only OVCs should be
connected to this network.
This network should be dedicated to this purpose; it is not routed. It requires an MTU of 9000. It should
use 10GbE.
OVCs contact OVCs in other clusters on their Federation IP addresses, but the traffic is routed out the
Management network, which has the default gateway.


Cluster and federation sizing guidelines

Figure 6-40: General sizing guidelines

To properly plan a SimpliVity solution, you need to understand the maximum number of nodes supported
for clusters and federations.
SimpliVity supports single-node clusters, which provide only RAID protection for data. However, HPE
generally recommends that clusters consist of at least two nodes. The maximum recommended cluster
size is 16 nodes.
If the customer wants HA and remote backups, the federation needs at least two clusters. A federation
supports up to 96 nodes. For large ROBO environments, a federation could consist of 48 2-node clusters.
In v4.0.1 and above, companies can deploy the HPE SimpliVity Management Virtual Appliance to help
manage the federation. A federation managed with the SimpliVity Management Virtual Appliance is called
a centrally managed federation, while other federations are called peer-to-peer federations. A centrally
managed federation supports up to 96 nodes, all managed by the same vCenter. A peer-to-peer
federation requires at least 3 vCenter instances to manage 96 nodes.
You should look for updates to these guidelines if you are using a software version above 4.1.0.


Determine when to submit a DSR

Figure 6-41: Determine when to submit a DSR: Based on other factors

You should submit a DSR if your solution has special requirements and circumstances:
• Backup period under 10 minutes
• Storage network latency over 2ms (or Management network latency over 300ms)
• Individual VMs larger than 3 TB in size
• Unusual storage requirements
– Significant multimedia files
– Data compressed, deduplicated, or encrypted before entering SimpliVity
• No data collection before sizing
• For VDI
– VDI and other workloads in same cluster
– > 500 users
• Any EUC opportunity
• Additional PCIe hardware (except NIC)
Note that stretch clusters are now supported for more use cases. HPE SimpliVity systems running 3.7.10
and above can be configured in 8+8 node stretch clusters. HPE SimpliVity nodes running software 4.0.1
or above can run linked clone VDI desktops in stretch clusters.


Activity 6

Figure 6-42: Activity 6

This activity uses a new scenario.

Scenario
A small community college is struggling to maintain its data center, which has grown organically over the
years. The data center has a lot of aging equipment that is difficult for the limited IT staff to manage. The
college has shifted some services to the cloud, and, while the college wants to maintain other services
on-prem, the customer has made simplifying the data center a priority. The customer has already begun
virtualizing with VMware; your contact originally brought you in to help with a server refresh to handle the
consolidated workloads.
In this discussion, you have discovered some more issues. The CIO wants to improve availability by
adding VMware clustering. He realizes that clustering requires shared storage, but the data center does
not have a SAN—and the CIO does not want to add one. The IT staff doesn't have the expertise to run a
SAN. The CIO also has received complaints from IT staff about the organization's current manual
processes for backups. But—he tells you—he doesn't have the budget for another project at this point.

Additional background information


The community college has approximately 5,000 students and 500 faculty and employees. The IT staff is
small with few full-time positions. The virtualized environment currently has just five hosts, which support
about 10-12 VMs each. The customer is using Dell servers.

Task
Take a few minutes to reflect on and list the reasons that HPE SimpliVity will be a good solution for this
customer.

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________


__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________

__________________________________________________________________________


HPE Nimble Storage dHCI: Emphasizing the Software-Defined Benefits

You will now review HPE Nimble Storage dHCI (which stands for disaggregated hyperconverged
infrastructure).


HPE Nimble Storage dHCI versus traditional HCI solutions

Figure 6-43: HPE Nimble Storage dHCI versus traditional HCI solutions

Customers need to simplify how they deploy and manage the infrastructure that supports their virtualized
workloads. Although hyperconverged infrastructure (or HCI) solutions offer an attractive solution, HCI
solutions can have one drawback. Traditional HCI stacks consist of servers, which contribute both the
compute and storage resources, in the form of local storage drives. When customers need to scale the
solution, they add another server, and compute and storage scale uniformly. However, many workloads
feature complex architectures that scale less cleanly. For these unpredictable workloads, the
requirements for the storage-hungry database layer might grow more quickly than requirements for the
compute-intensive application layer. With traditional HCI, the customer must invest in more compute
power than needed simply to obtain the required storage. Or, the opposite may occur.
In either case, customers with unpredictable workloads face a difficult choice. Do they deploy a
converged bundle of servers and storage arrays so that they can scale storage and compute separately,
but miss out on the simplicity and operational benefits of HCI? Or do they deploy HCI and end up over-
provisioning?
HPE Nimble Storage dHCI provides the flexibility of converged infrastructure with the simplicity of HCI. It
enables customers to deploy ProLiant DL servers and Nimble arrays, which automatically discover each
other and form a stack. Customers manage the stack from an intuitive management UI and integrate it
into vCenter, as easily as a traditional HCI stack.
HPE Nimble Storage dHCI is designed to deliver high performance and availability while allowing
customers to scale compute and storage separately.


Scaling compute and storage with disaggregated HCI

Figure 6-44: Scaling compute and storage with disaggregated HCI

You will now consider how HPE Nimble Storage dHCI enables customers to scale compute and storage
precisely as they want. However, from the admins’ perspective, dHCI is a single stack.
In the figure above, the customer initially deploys a pool of 32 compute nodes, but far less
storage. In the figure below, you can see how the customer can begin to scale storage.

Figure 6-45: Scaling compute and storage with disaggregated HCI


When adding storage, the customer can scale up capacity within a single chassis with mixed capacities. The customer can scale up further by attaching capacity expansion shelves—each one being its own independent RAID group. The customer can also scale out storage and cluster up to four array platforms in a single instance for aggregated performance and capacities up to 9 PB.

Figure 6-46: Scaling compute and storage with disaggregated HCI


HPE Nimble Storage dHCI and vVols features

Figure 6-47: HPE Nimble Storage dHCI and vVols features

HPE Nimble Storage dHCI supports all the same features as HPE Nimble Storage, including support for VMware vVols.

As you recall, a vVol is a volume on a SAN array, which a VM uses to back its disk rather than a VMDK file within a VMFS datastore. vVols can simplify storage management. When a VMware admin performs a
task like creating a new virtual disk, or snapshotting a disk, the storage array automatically provisions the
vVol or takes the snapshot. The vVol approach also enables admins to apply policies at a VM-level rather
than a LUN-level.

Nimble arrays offer mature vVol support with features such as QoS, thin provisioning, data encryption and
deduplication. Nimble snapshots are fast and efficient. Nimble supports application-aware snapshots for
vVols, which help ensure consistency for data backed up with Volume Shadow Copy (VSS). The VM
recycle bin helps to protect companies from mistakes. Nimble defers deleting VMs for 72 hours, allowing
admins to reclaim the VMs within that time period, if necessary.

Companies using vVols can also take advantage of Nimble replication features and Nimble integration
with HPE Cloud Volumes.


HPE InfoSight integration with Nimble Storage dHCI

Figure 6-48: HPE InfoSight integration with Nimble Storage dHCI

HPE InfoSight provides cross-stack recommendations for HPE Nimble Storage dHCI, just as it does for
other HPE storage solutions. One of the major benefits with the dHCI platform is that InfoSight provides
end-to-end full-stack analytics and AI-Ops. HPE Nimble Storage dHCI automatically collects statistics from the storage arrays, the ESXi hosts, and HPE iLO. It collates all the statistics within the array and submits them to HPE InfoSight. Admins can then see all the statistics for Nimble Storage dHCI in the context of an integrated solution.
InfoSight cross-stack analytics gives customers insight into applications and workloads, VMware objects, and the storage layer. InfoSight provides a granular view of the resources every VM uses. This information makes it possible to correlate the performance of VMs in a datastore with host resource constraints such as vCPU, memory, and network.
InfoSight provides performance and wellness information across the complete Nimble Storage dHCI solution. It not only helps customers detect common issues such as under-performing VMs but also helps them identify the root cause of such issues. Further, InfoSight provides customized recommendations for the entire environment, including VMs, hosts, storage, and networks.
Finally, InfoSight applies deep data analytics to telemetry data gathered from the HPE Nimble Storage array. This enables InfoSight to identify even rare issues, determine when an issue occurred, and begin to pinpoint the causes.


HPE Nimble Storage dHCI: Architecting the solution


In this section, you will focus on designing and deploying a Nimble Storage dHCI solution for a VMware
environment.


HPE Nimble Storage dHCI platform building blocks

Figure 6-49: HPE Nimble Storage dHCI platform building blocks

HPE Nimble Storage dHCI supports a range of products, which can be combined into a disaggregated
HCI platform. This figure shows the products that customers can use to build the solution. (Note that this
information was current when this course was created; please check the HPE web site for up-to-date
information: https://www.hpe.com.)

Storage
You can use HPE Nimble Storage all-flash or adaptive flash models for iSCSI only. Customers can also
use HPE Alletra 6000 for storage although this option is not covered in this course. (As mentioned earlier,
the Alletra storage solutions were announced as this course was being developed and are not covered in
this course.)

Compute
Nimble Storage dHCI supports the servers listed in the figure. The Gen 9 models are supported only in brownfield deployments, which enables customers to use their existing servers for a Nimble dHCI deployment. You will learn more about both greenfield and brownfield Nimble Storage dHCI deployments later in this module.

Hypervisor
Nimble Storage dHCI supports VMware vSphere 7.0 or 6.7 for greenfield deployments or VMware
vSphere 6.5 for brownfield deployments.

Management
For management, Nimble Storage dHCI enables admins to use the familiar VMware vCenter. It also
includes tools to set up, manage, and upgrade the stack.

Network
For greenfield deployments, Nimble Storage dHCI supports HPE StoreFabric M-Series, FlexFabric
57x0/59x0, and Aruba 6300/83xx switches.


HPE Nimble Storage dHCI architecture

Figure 6-50: HPE Nimble Storage dHCI architecture

HPE Nimble Storage dHCI integrates ProLiant hosts running vSphere, 10GbE switches, and a Nimble
Storage imaged array into a single stack. As this figure shows, the integrated solution has a single
management plane, which is VMware vCenter.
Before this integrated stack can be created, the HPE Nimble Storage Connection Manager (NCM) must
be installed on each host where the HPE Nimble Storage dHCI solution will be deployed.
HPE provides a number of tools to help admins integrate the individual products into a disaggregated HCI
solution:
• dHCI Stack Setup—This wizard runs after admins set up the dHCI-enabled array and guides
admins through the process of setting up the complete solution. In a greenfield deployment, the
wizard guides admins through the process of creating a vCenter server, setting up data stores
and clusters, setting up new switches, and adding and configuring new ProLiant servers. In a
brownfield deployment, the wizard guides admins through the process of adding a Nimble array
to an existing vCenter server, and specifying and discovering the ProLiant servers and switches
that will become part of Nimble dHCI.
• Stack Management—Stack management is implemented as a vCenter plug-in, allowing admins
to manage and monitor Nimble Storage dHCI from within the familiar vCenter interface.
• dHCI DNA Collector—The Collector gathers information about the storage system, including
configuration settings, health, and statistics. This information is reported in the vCenter plug-in.
• dHCI Stack Upgrades—This tool manages and streamlines the process of upgrading the devices
in the integrated stack.
As you can see, the devices use heartbeats to ensure that the stack remains healthy and intact.


HPE Nimble Storage dHCI: Multiple vSphere HA/DRS cluster support

Figure 6-51: HPE Nimble Storage dHCI: Multiple vSphere HA/DRS cluster support

At the time this course was released, HPE Nimble Storage dHCI supported a maximum of one vSphere
cluster in the integrated management plane of the solution. You can create additional separate, isolated
vSphere clusters using standard iSCSI shared storage backed by the HPE Nimble Storage dHCI array.
This would be provisioned via the array GUI and managed as a standard vSphere solution. The dHCI
management plane would have no visibility of these servers.
This design is much more flexible and adaptable than that of classic HCI vendors, which support only a single vSphere cluster in the management plane and cannot provision storage outside of that cluster for other services or requirements.


Guidelines for deploying HPE Nimble Storage dHCI

Figure 6-52: Guidelines for deploying HPE Nimble Storage dHCI

VMware ESXi image for HPE server


To be able to discover and use your existing HPE ProLiant server with HPE Nimble Storage dHCI, you
must use the VMware ESXi image for HPE servers.

Management and iSCSI networks


Nimble Storage dHCI requires three networks: one management and two iSCSI networks. The iSCSI
networks should be 10 Gbps or faster.

Recommended switch settings


The switches should be configured with these settings:
• Enable jumbo frames on the iSCSI networks
• Ensure that the devices across the network are using the same MTU settings
• Enable Link Layer Discovery Protocol (LLDP) on switches
• Enable flow control

Other settings
• Enable flow control on hosts and array ports as well
• Configure DNS server with proper forward and reverse DNS entries
• Configure all dHCI components to use the same NTP server, ensuring that they are all set to the
same time
• Include a DHCP server in the management VLAN for the initialization. After the dHCI solution is
set up, it will be assigned new IP addresses, and the DHCP server will no longer be needed.
• Configure the HPE Nimble Storage Connection Manager on each host on which the Nimble dHCI
solution will be deployed
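A quick way to validate the DNS prerequisite above before running the setup wizard is to confirm that each dHCI component resolves consistently in both directions. The sketch below uses only the Python standard library; the hostnames are placeholders, and the check is illustrative rather than an official validation step.

# Check forward and reverse DNS resolution for the dHCI components.
# Hostnames are placeholders; replace them with your vCenter, ESXi, and array names.
import socket

HOSTS = ["vcenter.example.local", "esxi01.example.local", "nimble-array.example.local"]

for name in HOSTS:
    try:
        ip = socket.gethostbyname(name)              # forward lookup
        reverse_name = socket.gethostbyaddr(ip)[0]   # reverse lookup
        match = reverse_name.lower().startswith(name.split(".")[0].lower())
        print(f"{name} -> {ip} -> {reverse_name} ({'OK' if match else 'check reverse record'})")
    except OSError as err:
        print(f"{name}: lookup failed ({err})")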

Firewall
Make sure that your firewall allows communication in both directions:


• HPE Nimble Storage array communication to the vCenter instance through ports 443 and 8443
• VMware vCenter communication to the HPE Nimble Storage array through ports 443 and 8443
• HPE Nimble Storage array to ESXi over SSH port 22
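If you want to confirm these paths before deployment, a simple TCP reachability test can save troubleshooting time later. The sketch below is a generic check using the Python standard library; the host addresses are placeholders, and a successful connection only shows that a port is open from where the script runs, not that the firewall rules are complete in both directions.

# Pre-flight check of the firewall ports listed above (443, 8443, and 22).
# Host addresses are placeholders; replace them with your array, vCenter, and ESXi addresses.
import socket

CHECKS = [
    ("nimble-array.example.local", 443),
    ("nimble-array.example.local", 8443),
    ("vcenter.example.local", 443),
    ("vcenter.example.local", 8443),
    ("esxi01.example.local", 22),     # SSH from the array to ESXi
]

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in CHECKS:
    status = "open" if port_open(host, port) else "blocked or unreachable"
    print(f"{host}:{port} -> {status}")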


Required VMware licenses


VMware vCenter Server license
The HPE Nimble Storage dHCI solution requires VMware vCenter Server for Essentials or VMware
vCenter Server Standard™.

VMware vSphere license


The HPE Nimble Storage dHCI solution requires a VMware vSphere license that supports high-availability functionality and the APIs for Array Integration and Multipathing. Hewlett Packard Enterprise recommends using VMware vSphere Enterprise Plus.


HPE InfoSight Welcome Center: Guided deployments

Figure 6-53: HPE InfoSight Welcome Center: Guided deployments

The HPE InfoSight Welcome Center is designed to help you quickly and easily deploy HPE storage
solutions. In addition to HPE Nimble Storage dHCI, the InfoSight Welcome Center supports:
• HPE Nimble Storage
• HPE Primera
• HPE Alletra Storage
The sections that follow describe the guidance the Welcome Center provides for Nimble Storage dHCI.

Getting started
The “Getting started” section provides a preinstallation checklist for both greenfield (new) and brownfield
(existing) installations.
For Nimble Storage dHCI, the preinstallation checklist helps you prepare so that you can install the actual solution in 30 to 45 minutes. For example, the preinstallation checklist covers:
• Required components
• Recommendations for location
• Power sources
• Network layout
• Network ports and cabling
• Guidelines for creating firewall policies to allow Nimble dHCI traffic
• Storage and server configuration
• Network configuration

Physical installation
The Welcome Center also guides you through the installation. For Nimble Storage dHCI, it provides
videos to walk you through the steps of physically installing and cabling the storage array, servers, and
switches.

Software configuration
This section explains the process of configuring the switch, preparing the environment, discovering the
array, setting up the array, configuring the vCenter, and validating the array.


Two deployment paths

Figure 6-54: Two deployment paths

With HPE Nimble Storage dHCI, you have two deployment options: greenfield or brownfield.

Greenfield deployment
As the name suggests, a greenfield solution is a new deployment. For switches, customers can choose
from HPE StoreFabric M-Series, HPE FlexFabric 57x0/59x0, or Aruba 6300/83xx switches.
At the time this course was created, new Nimble Storage dHCI deployments supported the following
servers:
• HPE ProLiant DL325 Gen10 and Gen10+
• HPE ProLiant DL385 Gen10 and Gen10+
• HPE ProLiant DL360 Gen 10
• HPE ProLiant DL380 Gen 10
• HPE ProLiant DL560 Gen 10
• HPE ProLiant DL580 Gen 10
As always, you should check for updated information.
You can build Nimble Storage dHCI using all-flash or adaptive flash models for iSCSI only.

Brownfield deployment
Brownfield deployments allow customers to use existing good quality switches as well as existing HPE
ProLiant servers. At the time this course was created, the following servers were supported:
• HPE ProLiant DL325 Gen10 and Gen10+
• HPE ProLiant DL385 Gen10 and Gen10+
• HPE ProLiant DL360 Gen 10 and Gen 9
• HPE ProLiant DL380 Gen 10 and Gen 9
• HPE ProLiant DL560 Gen 10 and Gen 9


• HPE ProLiant DL580 Gen 10 and Gen 9


As always, you should check for updated information.
You can build Nimble Storage dHCI using all-flash or adaptive flash models for iSCSI only.

Understanding greenfield and brownfield deployments


The figures below show how Nimble Storage dHCI greenfield and brownfield deployments are built.
With greenfield deployments, the VMware vSphere image and HPE Nimble Connection Manager can be
installed on servers and the NimbleOS dHCI can be installed on the storage arrays at the factory. At the
customer site, admins then just need to complete the network initialization and run the Nimble dHCI
Setup Wizard.

For brownfield deployments, admins must ensure that the network and server components meet the requirements for being part of Nimble dHCI. For example, they must install the VMware vSphere 6.7 dHCI image and the Nimble Connection Manager on each host.
Figure 6-55: Greenfield deployment

Figure 6-56: Brownfield deployment


HPE InfoSight integration with Nimble Storage dHCI

Figure 6-57: HPE InfoSight integration with Nimble Storage dHCI

To integrate HPE Nimble Storage dHCI with HPE InfoSight, you must visit the HPE InfoSight portal and register HPE Nimble Storage dHCI.
Once Nimble Storage dHCI is registered, you must enable telemetry streaming for HPE InfoSight and cross-stack analysis:
1. From the settings menu (the gear icon) on the HPE InfoSight Portal, select Telemetry Settings.
2. Locate the array you want to monitor and set the Streaming button to On. This button enables data streaming from the array.
3. In the same row, set the VMware button to On. This button allows data to be collected from VMware. Wait for HPE InfoSight to process the vCenter registration and start streaming VMware and array data (up to 48 hours).


HPE Nimble Storage dHCI vCenter plug-in

Figure 6-58: HPE Nimble Storage dHCI vCenter plug-in

Once Nimble Storage dHCI is set up, admins can manage it using the dHCI vCenter plug-in. They can
complete tasks such as:
• Add new servers
• Create a new VMFS datastore
• Grow the VMFS datastore
• Clone a VMFS datastore
• Create a snapshot of a VMFS datastore
• Create a vVol datastore
Because admins are using the familiar vCenter interface, managing Nimble Storage dHCI is
straightforward.
The vCenter plug-in also allows admins to perform a consistency check to ensure their Nimble Storage
dHCI is set up correctly.


HPE Nimble dHCI tools

Figure 6-59: HPE Nimble dHCI tools

Below is a list of Nimble dHCI tools and the URL where you can access them:
• HPE Assessment Foundry (SAF): HPE Assessment Foundry Portal
• Primary storage and compute sizing: HPE Infosight Resources
• dHCI Networking Tools: HPE Infosight Downloads
• NinjaSTARS: HPE Assessment Foundry Portal


Summary

Figure 6-60: Summary

In this module, you reviewed how HPE SimpliVity helps your customers protect their data in their SDDC. You also focused on sizing and designing HPE SimpliVity solutions to meet customers' needs for a software-defined data center (SDDC).
You also learned more about Nimble Storage dHCI, focusing on its integration with VMware.


Learning checks
1. On which network do HPE SimpliVity nodes have their default gateway address?
a. Storage
b. Management
c. Cluster
d. Federation
2. How does an HPE SimpliVity cluster protect data from loss in case of drive failure?
a. Only RAIN (replicating data to at least three nodes)
b. Only RAID (with the level depending on the number of drives)
c. Both RAID (with the level depending on the number of drives) and RAIN (replicating data to two
nodes)
d. Only RAID (always RAID 10)

You can check the correct answers in “Appendix: Answers.”




Module 7: Design an HPE VMware Cloud Foundation (VCF) Solution

Learning objectives
In this module, you will learn how to design an HPE solution for VMware Cloud Foundation (VCF).
After completing this module, you will be able to:
• Describe the HPE Composable Strategy and position the HPE value proposition for VMware Cloud Foundation
• Given a set of customer requirements or use case, position VCF on HPE Composable Infrastructure
to solve the customer’s requirements
• Describe the integration points between VCF and HPE Synergy


Customer scenario: Financial Services 1A

Figure 7-1: Customer scenario: Financial Services 1A

Financial Services 1A has invested in a highly virtualized data center and taken steps to transform
compute, storage, and networking with software-defined technologies. But the company still needs help
bringing all of the components together. IT knows that it needs to respond to line of business (LOB)
requests more quickly. Ideally IT would like to give developers a cloud experience without moving
workloads off-prem. All of these needs point towards a private cloud solution, and the CIO is looking into
VMware Cloud Foundation (VCF).


VMware Cloud Foundation (VCF) architecture


In the first section in this module, you will review VCF architecture. (Please note that this course was
written based on VCF v4.1. You should check the VMware web site for more detailed information about
the latest version.)


VCF SDDC Manager and domains

Figure 7-2: VCF SDDC Manager and domains

In a VCF deployment, admins use SDDC Manager to configure and manage the logical infrastructure.
SDDC Manager also automates some tasks, such as provisioning hosts.
VCF domains are used to create logical pools across compute, storage, and networking. VCF includes
two types of domains: the management domain and virtual infrastructure workload domains.

Management domain
The management domain is created during the VCF “bring-up,” or installation, process. The management
domain contains all the components that are needed to manage the environment, such as one or more
instances of vCenter Server, the required NSX components, and the components of the VMware vRealize
Suite. The management domain uses vSAN storage.
You can set up availability zones to protect the management domain from hosts failing. Regions enable
you to locate workloads near users. Regions help you apply and enforce local privacy laws and
implement disaster recovery solutions for the SDDC.

Virtual infrastructure workload domain


Virtual infrastructure (VI) workload domains are reserved for user workloads. A workload domain consists
of one or more vSphere clusters. Each cluster must include a minimum of three hosts and can scale up to
a maximum of 64 hosts. (Check the current version of VCF for up-to-date information about scalability.)
SDDC Manager automates the creation of the workload domain and the underlying vSphere cluster(s).
Within a cluster, all the servers must be homogeneous. That is, they must be the same model and type. If
the VI domain contains more than one cluster, however, the servers in the different clusters do not need
to be homogeneous. For example, if the domain has two clusters, the servers in cluster 1 must be the
same model and type, and the servers in cluster 2 must be the same model and type. However, the
servers in cluster 1 do not need to be the same model and type as the servers in cluster 2.
When the first VI workload domain is created, SDDC Manager creates a vCenter server and an NSX Manager, which are placed in the management domain. For each additional VI workload domain, another vCenter server is deployed to the management domain. You can choose to have the additional VI workload domain share an existing NSX Manager cluster or deploy a new NSX Manager cluster.
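Although this course describes SDDC Manager through its GUI, much of the domain lifecycle can also be queried or driven programmatically through the SDDC Manager public API. The sketch below shows the general pattern of authenticating and listing domains; the endpoint paths and field names are assumptions based on the published VMware Cloud Foundation API and should be verified against the API reference for the VCF version in use.

# Sketch of listing workload domains from SDDC Manager over its REST API.
# Endpoint paths and field names are assumptions; verify them in the VCF API reference.
import requests

SDDC_MANAGER = "https://sddc-manager.example.local"    # placeholder FQDN
CREDS = {"username": "administrator@vsphere.local", "password": "example-password"}

# 1. Obtain an access token (assumed /v1/tokens endpoint).
token = requests.post(f"{SDDC_MANAGER}/v1/tokens", json=CREDS, verify=False).json()["accessToken"]
headers = {"Authorization": f"Bearer {token}"}

# 2. List the management and VI workload domains (assumed /v1/domains endpoint).
domains = requests.get(f"{SDDC_MANAGER}/v1/domains", headers=headers, verify=False).json()
for domain in domains.get("elements", []):
    print(domain.get("name"), domain.get("type"), domain.get("status"))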


VCF architecture: Standard model

Figure 7-3: VCF architecture: Standard model

VMware supports two architecture models: standard and consolidated.


Most installations will use the standard model. As this figure shows, the standard model includes a
dedicated management domain, which hosts only management workloads.
The standard model also includes at least one VI workload domain, which hosts the user workloads. As
mentioned earlier, one vCenter server is required for each VI workload domain.
VMware recommends companies use the standard model because it separates management workloads
from user workloads and provides greater flexibility and scalability.
It is important to know that you do not select the architecture model you will use when you bring up VCF.
For every VCF deployment, you start by deploying the management domain. If you are using the
standard architecture, you continue by setting up a VI workload domain and deploying the user workloads
in this domain.

Guidelines for deploying VCF


Below are some general guidelines for deploying VCF in a standard model:
• Management domain:
– vSAN storage
– One vCenter server
• VI workload domain:
– Deploy one vCenter server for each VI workload domain.
– When setting up a cluster, ensure all servers in the cluster are the same model and type. As noted
earlier:
 Servers in cluster 1 must be the same model and type
 Servers in cluster 2 must be the same model and type
 Servers in cluster 1 do not need to be the same as servers in clusters 2


– You can use SAN arrays to enhance performance for a VI workload domain. Supported storage
includes vSAN, vVols, NFS, or VMFS on FC.
– For vSAN-backed VI workload domains, vSAN ReadyNode configurations are required.
You will also need the necessary VMware vSphere, vSAN, and NSX-T licenses to support the specific VI workload domain deployment.


VCF architecture: Consolidated model

Figure 7-4: VCF architecture: Consolidated model

The consolidated model is designed for companies that have a small VCF deployment or special use
cases that do not require many hosts. With the consolidated model, both management and user
workloads run in the management domain. You manage the VCF environment from a single vCenter
server. You can use resource pools to isolate the management workloads and the user workloads.
Remember that when you bring up VCF, you do not select the architecture model. No matter which
architecture model you are using, you first deploy and bring up the management domain. If you are
using a consolidated architecture, you then deploy the user workloads in that management domain, using
resource pools to isolate them from the management workloads.
If you later want to migrate a consolidated architecture to a standard architecture, the process is fairly
straightforward. You create a VI workload domain and then move the workload VMs to the new domain.


HPE integration with VCF


You will now review how HPE has integrated HPE Synergy with VCF, making it the ideal platform for
deploying this solution.


First composable platform that seamlessly integrates with SDDC Manager

Figure 7-5: First composable platform that seamlessly integrates with SDDC Manager

HPE and VMware have tightly integrated SDDC Manager and HPE OneView powering HPE Synergy to
deliver simplicity in managing composable infrastructure and private cloud environments. By introducing
the HPE OneView Connector for VCF, HPE brings composability features to VCF. Through this unique
integration and enhanced automation, customers can dynamically compose resources from a single
console, SDDC Manager, to meet the needs of VCF workloads, saving time and increasing efficiency.
This integration simplifies infrastructure management by making it possible to add capacity on demand
directly from SDDC Manager in response to business needs. It does so seamlessly, increasing business
agility and helping reduce the cost of overprovisioning or underprovisioning resources.


Why VCF on HPE Synergy?

Figure 7-6: Why VCF on HPE Synergy?

Think about how Synergy delivers these benefits in a bit more detail. As you see here, Synergy eliminates
Top of Rack (ToR) switches by bringing networking inside the frame; in this way it greatly reduces
infrastructure cost and complexity.

Synergy Virtual Connect (VC) modules provide profile-based network configuration, designed for server
admins. Because server admins no longer need to wait for network admins to reconfigure the
infrastructure, they can move server profiles from one Synergy compute module to another as required,
making infrastructure management simpler and more flexible.

HPE Synergy also stands out from other solutions because it disaggregates storage and compute. In other
words, rather than each server having its own local drives, forcing companies to scale compute and
storage together, Synergy has separate compute modules and storage modules. Admins can use profiles
to flexibly connect or disconnect compute modules from drives on the storage modules. Because Synergy
provides the same flexibility and profile-driven management to both virtualized and bare metal workloads,
customers can consolidate traditional data center applications and their VCF-based private cloud on the
same infrastructure, reducing management complexity and costs.


Deployment options for VCF on HPE Synergy

Figure 7-7: Deployment options for VCF on HPE Synergy

Customers have several options for deploying VCF on HPE Synergy:


• Do It Yourself (DIY)—The customer can deploy it themselves. This option offers the most flexibility
but also requires the highest level of skill and knowledge. Rather than doing the deployment
themselves, customers can also work with an HPE Partner to implement VCF.
• Customized offering with HPE Pointnext Services—Customers can also use HPE Pointnext Services
to deploy VCF. HPE Pointnext Services focuses on decreasing the time to deploy the solution, setting
up the solution “first time right.”
• HPE GreenLake as-a-service—Customers can obtain VCF as-a-service through HPE GreenLake.


Sizing HPE Synergy for VCF

Figure 7-8: Sizing HPE Synergy for VCF

The Solution Sales Enablement Tool (SSET) helps you size the HPE Synergy solution for VCF. This tool
gives you three options for sizing a VCF solution: quickstart configuration, basic option, and expert option.

Quickstart configuration
Designed to eliminate the guesswork and complexity from the ordering process, the quickstart
configuration shortens the quote time and simplifies the process of sizing the solution. It relies on
predefined solutions, offering the simplest configuration process with the highest “guidance” level.
You can configure:
• Number of VMs
• VM types—small, medium, or large
Based on the size of the VM you select, SSET adjusts:
• vCPUs per VM
• vRAM per VM (GB)
• Storage per VM (GB)
• Storage preference
You can select Review and wait while the tool sizes the solution. SSET then displays the proposed HPE
Synergy solution for VCF.
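
To make the quickstart arithmetic concrete, here is a minimal sizing sketch in Python. It is illustrative only: the per-VM profiles, the 4:1 vCPU-to-core starting ratio, and the storage overhead factor are assumptions for the example, not values taken from SSET.

    # Minimal sizing sketch (assumptions only; not SSET's actual rules or data).
    VM_PROFILES = {
        "small":  {"vcpus": 2, "vram_gb": 8,  "storage_gb": 100},   # assumed profile
        "medium": {"vcpus": 4, "vram_gb": 16, "storage_gb": 250},   # assumed profile
        "large":  {"vcpus": 8, "vram_gb": 32, "storage_gb": 500},   # assumed profile
    }

    def size_cluster(vm_count, vm_type, vcpu_per_core=4, storage_overhead=1.3):
        """Aggregate raw compute, memory, and storage needs for a set of VMs."""
        profile = VM_PROFILES[vm_type]
        return {
            "physical_cores": (vm_count * profile["vcpus"]) / vcpu_per_core,
            "ram_gb": vm_count * profile["vram_gb"],
            "raw_storage_gb": vm_count * profile["storage_gb"] * storage_overhead,
        }

    # Example: 200 medium VMs.
    print(size_cluster(vm_count=200, vm_type="medium"))

A real sizing exercise also accounts for N+1 host redundancy, vSAN failures-to-tolerate policy, and per-host memory limits, which SSET handles for you.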

Basic option
Like the quickstart configuration, the basic option is designed to simplify the process of sizing a VCF
solution. The basic option offers a simple configuration process with guidance to help you gather the
information needed to size the solution. The basic option offers more flexibility than the quickstart
configuration, allowing you to customize more options.
You can select Review to display the proposed HPE Synergy solution for VCF.


Expert option
The expert option is designed for architects who have experience scoping VCF deployments. It provides
a more detailed configuration process. You have the flexibility to specify more options but still receive
guidance in scoping the solution. As with the other two options, SSET allows you to review the solution.

Access SSET
You can access SSET at:
https://sset.ext.hpe.com/


Order, build, and validate automation on HPE Synergy

Figure 7-9: Order, build, and validate automation on HPE Synergy

In addition to helping you size the VCF solution, HPE provides the tools and the support you need, from
ordering and validating the solution to bringing up VCF.
The HPE VCF solution is tightly integrated to help reduce deployment errors while also reducing
operational and maintenance costs. You have already seen how SSET helps you size the solution.
You can then use HPE Smart CID to create a Customer Intent Document (CID) that contains system
requirements and configuration information. You can import the sizing guidance from SSET so that Smart
CID starts from the sized solution.
HPE Smart CID also integrates with the HPE Solution Automation toolkit (SAT). SAT provides
prevalidated VCF configurations and helps automate the ordering process. Based on customer inputs in
the CID, SAT builds the underlying HPE Synergy infrastructure according to best practices, with pre- and
post-validations, to create the VMware Cloud Foundation management and workload domains. It helps
eliminate the guesswork of designing a VCF solution while reducing human errors.
SAT-Build (S-Build) and SAT-Validate (S-Validate) are automation plug-ins for SAT. These plug-ins
run within the SAT framework and assist in build automation and validation of the HPE Synergy VCF
solution.


VCF Cloud Builder VM

Figure 7-10: VCF Cloud Builder VM

The VCF Cloud Builder VM is designed to help you bring up VCF. Using information you provide in the
VCF deployment parameter workbook, the Cloud Builder VM deploys and configures the first cluster in
the management domain. Once the management domain cluster is installed, the Cloud Builder VM
transfers inventory information and control to SDDC Manager.
Before running the Cloud Builder VM, you must enter comprehensive configuration information into the
VCF deployment parameter workbook. This information includes:
• Network information, such as IP addresses for hosts, IP addresses for gateways, VLAN settings,
MTU settings, management IP addresses, DHCP settings, and DNS settings
• VMware license keys (for ESXi, vSAN, vCenter server, NSX-T and SDDC Manager)
• Passwords for VCF components
• Configuration settings for the VCF management domain, including NSX-T configuration settings and
SDDC configuration settings (host name, IP addresses, and network pool name)
As the Cloud Builder VM deploys the management domain cluster, it validates configuration information
provided in the deployment parameter workbook. To verify this information, the Cloud Builder VM requires
network connectivity to the ESXi hosts for the management network (VLAN). It also needs to
communicate with DNS and NTP servers so it can validate configuration information in the VCF
deployment parameter workbook.
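
As an informal precheck before running the Cloud Builder VM, you can verify that the names you plan to enter in the deployment parameter workbook resolve in DNS and that the ESXi management interfaces answer on the network. The following Python sketch is only an illustration using assumed placeholder host names; it is not part of Cloud Builder itself, which performs its own validation.

    import socket

    # Placeholder names -- substitute the entries from your own deployment parameter workbook.
    ESXI_HOSTS = ["esxi-01.example.local", "esxi-02.example.local",
                  "esxi-03.example.local", "esxi-04.example.local"]
    SUPPORT_SERVERS = ["dns.example.local", "ntp.example.local"]

    def resolves(name):
        """True if the name resolves in DNS."""
        try:
            socket.gethostbyname(name)
            return True
        except socket.gaierror:
            return False

    def reachable(name, port=443, timeout=3):
        """True if a TCP connection to name:port succeeds (ESXi management interface)."""
        try:
            with socket.create_connection((name, port), timeout=timeout):
                return True
        except OSError:
            return False

    for name in ESXI_HOSTS + SUPPORT_SERVERS:
        print(f"{name}: DNS {'ok' if resolves(name) else 'FAILED'}")
    for name in ESXI_HOSTS:
        print(f"{name}: management reachability {'ok' if reachable(name) else 'FAILED'}")
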
For VCF, you should download the VMware base image and the HPE add-on to create the desired cluster
image.


HPE Synergy + VCF guidelines

Figure 7-11: HPE Synergy + VCF guidelines

When deploying VCF on HPE Synergy, use the following general guidelines:
• Scalability up to 256 nodes (maximum scale) per VCF instance
• Cache and data drive sizes are dictated by VM sizing prior to purchase (VCF does not mandate a fixed
set of disks)
• Physical layout of frames and racks depends on HA requirements and VM sizing, with drives local to
the D3940 storage module
• High availability—Design redundancy within the HPE Synergy frame and provide two or more frames.
• All nodes in the same cluster—equivalent configurations of memory and equivalent configurations of
vSAN (see the sketch after this list)
• Compute, memory, and storage—vSAN-certified and part of the Synergy vSAN ReadyNodes
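
The sketch below shows one simple way to apply the "equivalent nodes" guideline during a design review: group candidate nodes by configuration and flag any mismatch. The inventory data is hypothetical and typed in by hand; in practice you would pull it from HPE OneView or vCenter.

    from collections import defaultdict

    # Hypothetical candidate nodes for one cluster (illustration only).
    nodes = [
        {"name": "frame1-bay1", "model": "SY480 Gen10", "memory_gb": 768,
         "vsan_disks": "2x800GB cache + 6x1.92TB capacity"},
        {"name": "frame1-bay2", "model": "SY480 Gen10", "memory_gb": 768,
         "vsan_disks": "2x800GB cache + 6x1.92TB capacity"},
        {"name": "frame2-bay1", "model": "SY480 Gen10", "memory_gb": 512,   # mismatch
         "vsan_disks": "2x800GB cache + 6x1.92TB capacity"},
    ]

    groups = defaultdict(list)
    for node in nodes:
        key = (node["model"], node["memory_gb"], node["vsan_disks"])
        groups[key].append(node["name"])

    if len(groups) == 1:
        print("All nodes in the cluster are equivalent.")
    else:
        print("Cluster nodes are NOT equivalent:")
        for config, members in groups.items():
            print(f"  {config}: {', '.join(members)}")
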


HPE OneView Connector for VCF

Figure 7-12: HPE OneView Connector for VCF

HPE and VMware collaborated to integrate SDDC Manager and HPE OneView. The HPE OneView
Connector provides the interface between HPE OneView and SDDC Manager, using DMTF’s Redfish
APIs to communicate with SDDC Manager. HPE OneView Connector for VCF 4.0 includes support for
HPE Primera and HPE Nimble Storage.
You install the OneView Connector on a Linux VM. As part of the installation process, you import the
OneView Connector’s certificate into SDDC Manager. After the Connector is installed, you must register it
with SDDC Manager.
The OneView connector for VCF enables you to complete tasks such as:
• Create server profile templates that are visible in SDDC Manager
• Compose resources, which includes allocating resources to servers, storage, and networking
interfaces
• Decompose resources, returning them to Synergy’s fluid resource pools
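
To give a sense of the Redfish interface style the Connector relies on, the following Python sketch issues standard DMTF Redfish requests against a management endpoint. The appliance address and credentials are placeholders, and this is not the Connector's own code; it simply demonstrates a generic Redfish session and resource query.

    import requests

    APPLIANCE = "https://mgmt.example.local"   # placeholder management endpoint

    # Create a Redfish session (standard DMTF session-based authentication).
    resp = requests.post(f"{APPLIANCE}/redfish/v1/SessionService/Sessions",
                         json={"UserName": "admin", "Password": "password"},
                         verify=False)        # lab only; use trusted certificates in production
    resp.raise_for_status()
    token = resp.headers["X-Auth-Token"]

    # Enumerate systems under the standard Redfish service root.
    systems = requests.get(f"{APPLIANCE}/redfish/v1/Systems",
                           headers={"X-Auth-Token": token},
                           verify=False).json()
    for member in systems.get("Members", []):
        print(member["@odata.id"])
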


Automated lifecycle management for VCF on HPE Synergy

Figure 7-13: Automated lifecycle management for VCF on HPE Synergy

HPE OneView for VMware vRealize Orchestrator (vRO) helps automate IT tasks in an extensible and
repeatable manner. It provides a predefined collection of HPE OneView tasks and workflows that can be
used in vRO, with easy-to-use, drag-and-drop access to automation of HPE OneView-managed hardware
deployment, firmware updates, and other lifecycle tasks. HPE OneView for VMware vRO allows the
advanced management features of HPE OneView to be incorporated into larger IT workflows. HPE
OneView workflows and actions can also be integrated with VMware vRealize Automation using
vRO.


Learning checks
1. What is one difference between a VCF standard architecture and a consolidated architecture?
a. The standard architecture supports more than one VI domain while the consolidated
supports only one VI domain.
b. The consolidated architecture supports SAN arrays to improve storage performance,
but the standard architecture does not.
c. The standard architecture separates management workloads from user workloads.
d. The consolidated architecture uses a wizard to simplify the installation process
rather than requiring the Cloud Builder VM.
2. What is the purpose of the Deployment Parameter Workbook?
a. Helps automate the process of ordering a Synergy solution for VCF
b. Helps you order the necessary licenses for VCF components
c. Imports configuration information about the VCF environment into the HPE OneView
Connector for VCF
d. Provides the network information and configuration settings the Cloud Builder VM
requires to bring up VCF
3. Which correctly describes the HPE OneView Connector for VCF?
a. It uses Redfish APIs to communicate with SDDC Manager.
b. It uses workflows to automate updates on HPE Synergy.
c. It must be installed on HPE Synergy before VCF is deployed.
d. It is deployed with Cloud Builder VM.

You can check the correct answers in “Appendix: Answers.”



Appendix: Answers

Module 1
Activity
Possible answers
Your presentation might have mentioned ideas such as these.
Right now, IT is struggling because some processes are software-defined while infrastructure
management remains manual. For agility, especially for speeding up application development, the company
needs a software-defined infrastructure that automates and orchestrates the physical with the virtual.
The customer should start by moving its virtualized environment to composable infrastructure with fluid
resource pools—HPE Synergy provides the capabilities that the customer needs. The fluid resource pools
mean the company can scale compute and storage separately so that it does not need to overprovision
one to get the other. The company can easily compose storage and compute together for different workloads
as needs change. This will help the physical infrastructure “catch up” with the virtual infrastructure.
The customer needs to be able to easily deploy workloads on those fluid resource pools. HPE OneView
within Synergy has a Unified API. The OneView templates help to consolidate hundreds of lines of code
into one. Instead of trying to coordinate many components with scripts, customers just need to use the
script to deploy the template. The template ensures that the right settings are applied every time.


Module 1 Learning checks


1. What is one feature of a software-defined infrastructure (SDI) according to Moor Insights?
a. It monitors and heals itself. ***Correct answer***
b. It is 100 percent virtualized.
c. It is 100 percent containerized.
d. It requires a hybrid environment.
If you missed this question, please review “Section 2: SDI, the First step to Hybrid Cloud.”
2. Which are benefits that HPE Synergy provides? (Select two.)
a. Synergy converges all of the infrastructure below the hypervisor, providing an ideal
platform for VMs.
b. Synergy is a density-optimized solution that is designed for IoT solutions.
c. Synergy provides a unified API, which enables companies to use tools such as Chef
and Ansible to automate tasks. ***Correct answer***
d. Synergy includes HPE OneView, which automates the management of both Synergy
and VCF, replacing SDDC Manager in a VCF deployment.
e. Synergy enables companies to deploy virtualized, containerized, and bare metal
workloads on the same infrastructure. ***Correct answer***
If you missed this question, please review “Section 3: Hybrid cloud with VMware and HPE.”


Module 2
Activity 2.1
Task 1
Some of the information that you might have listed includes:
• What level of oversubscription is acceptable? (vCPU-to-core? RAM subscription?)
• What level of redundancy does the customer require?
• More data about current hosts and resource utilization
– HPE Assessment Foundry (SAF)
– HPE vCenter
– Perfmon for Windows

Task 2
As you created your BOM for the cluster, you should have found that you need five SY480 modules, but
you could plan one more for redundancy. The BOM includes all the frames and accompanying components.
Remember that you planned one cluster for simplicity, but in the real world, you would be planning all of
the clusters.


Activity 2.2
Some of the ideas that you might have had are listed below.
• Ask what networks the customer wants to deploy on the ESXi hosts (Management, vMotion, FT,
production, etc.)
– Explain how to divide a port into multiple FlexNICs
– Can use LACP-S to enhance resiliency and load balancing (supported with virtual distributed
switches)
• Discuss integration with data center network (possibly eliminate ToR switches and have EoR
switches only)
• Discuss the importance of a template-based approach to management
• Explain how to get the HPE Custom image for ESXi (can be further customized)


Module 2 Learning checks


1. What does VMware recommend as a typical good starting place for vCPU-to-core ratio?
a. 1:1
b. 1:2
c. 4:1 ***Correct answer***
d. 16:1
If you missed this question, please review “Section 1: Sizing the HPE Synergy solution for VMware
vSphere.”
2. You are advising a customer about how to deploy VMware vSphere on HPE Synergy.
The customer wants to use redundant ESXi host adapters to carry VMs’ production
traffic. What is a best practice for providing faster failover and best load sharing of traffic
over the redundant adapters? (Select two.)
a. Use an LACP LAG on the VMware virtual distributed switch. ***Correct answer***
b. Use a Network Set with multiple networks on the uplink set that supports the
production traffic.
c. Make sure to enable Smart Link on the uplink set that supports the production traffic.
d. Set up one link aggregation on one interconnect module and another link
aggregation on the other interconnect module.
e. Use LACP-S on a pair of connections on the compute modules on which ESXi hosts
are deployed. ***Correct answer***
If you missed this question, please review “Section 2: Best practices for deploying VMware vSphere
on HPE Synergy.”
3. You are advising a customer about how to deploy VMware vSphere on HPE Synergy.
What is a simple way to ensure that the ESXi host has the proper HPE monitoring tools
and drivers?
a. Provision the hosts with the HPE custom image for ESXi. ***Correct answer***
b. Use Insight Control server provisioning to deploy the ESXi image to the hosts.
c. Manage the ESXi hosts exclusively through Synergy, rather than in vCenter.
d. Customize a Service Pack for ProLiant and upload it to Synergy Composer before
using Composer to deploy the image.
If you missed this question, please review “Section 2: Best practices for deploying VMware vSphere
on HPE Synergy.”
4. How far can an HPE Synergy internal network extend?
a. Within a single Synergy frame
b. Up to the ICM and on its uplink sets, but not back to any downstream ports
c. Across multiple Synergy frames, as long as they are in the same data center
d. Across multiple Synergy frames that are connected with conductor and satellite
modules ***Correct answer***
If you missed this question, please review “Section 2: Best practices for deploying VMware vSphere
on HPE Synergy.”


Module 3
Activity 3
Below are listed some of the ideas that you might have had.
• vSAN benefits
– Cost effective
– Highly integrated with VMware
– Relatively simple to deploy
• HPE benefits for vSAN
– Flexibility on D3940 (no fixed number of drives per compute module)
– High performance flat iSCSI network across frames
• HPE storage array benefits
– Advanced services such as QoS, snapshotting, and replication (important for mission critical web
and business management services)
– Tight integration with VMware
– Simplified provisioning with vVols and/or vCenter plugins
– Automated backups to StoreOnce Catalyst or cloud with RMC (3PAR)
– HPE InfoSight and VMVision


Module 3 Learning checks


1. What is one benefit of HPE Synergy D3940 modules?
a. A single D3940 module can provide up to 40 SFF drives each to 10 half-height
compute modules.
b. Customers can assign drives to connected compute modules without fixed ratios of
the number per module. ***Correct answer***
c. A D3940 module provides advanced data services like Peer Persistence.
d. D3940 modules offload drive management from compute modules, removing the
need for controllers on compute modules.
If you missed this question, please review “Section 2: VMware vSAN on HPE Synergy.”
2. What is one rule about boot options for a VMware vSAN node deployed on HPE
Synergy?
a. The node must boot from a volume stored on the same D3940 module that supplies
the drives for vSAN.
b. The node must use HPE Virtual Connect to boot.
c. The node cannot boot using PXE.
d. The node can boot from internal M.2 drives with an internal P204i storage controller.
***Correct answer***
If you missed this question, please review “Section 2: VMware vSAN on HPE Synergy.”
3. What is one strength of HPE Nimble and Primera for vVols?
a. They help the customer unify management of vVol and vSAN solutions.
b. They have mature vVols solutions that support replication. ***Correct answer***
c. They automatically convert VMFS datastores into simpler vVol datastores.
d. They provide AI-based optimization for Nimble volumes exported to VMware ESXi
hosts.
If you missed this question, please review “Section 5: Additional HPE storage array benefits for
VMware environments.”


Module 4
Activity 4

Figure Appendix-1: Possible answers

Ideas for things to discuss include:


• Need to sync up VLAN ID and MTU for the VXLAN transport connection
• VXLAN transport will remain stable no matter what logical networks are added in VMware (no need
for later coordination between VMware and Synergy admins)
• Need to discuss how BUM traffic is forwarded


Module 4 Learning checks


1. What benefit do overlay segments provide to companies?
a. They provide encryption to enhance security.
b. They provide admission controls on connected VMs.
c. They enhance performance, particularly for demanding and data-driven workloads.
d. They enable companies to place VMs in the same network regardless of the
underlying architecture. ***Correct answer***
If you missed this question, please review “Section 1: VMware NSX.”
2. What is one way that NetEdit helps to provide orchestration for ArubaOS-CX switches?
a. It provides the API documentation and helps developers easily create scripts to
monitor and manage the switches.
b. It lets admins view and configure multiple switches at once and makes switch
configurations easily searchable. ***Correct answer***
c. It integrates the ArubaOS-CX switches into HPE IMC and creates a single pane of
glass management environment.
d. It virtualizes the switch functionality and enables the switches to integrate with
VMware NSX.
If you missed this question, please review “Section 2: NSX + ArubaOS-CX.”


Module 5
Module 5 Learning checks
1. What is one benefit of HPE OneView for vRealize Orchestrator (OV4vRO)?
a. It integrates a dashboard with information and events from HPE servers into
vRealize.
b. It provides an end-to-end view of servers' storage (fabric) connectivity within the
vRealize interface.
c. It adds pre-defined workflows for HPE servers to vRealize. ***Correct answer***
d. It integrates multi-cloud management into the VMware Cloud Foundation (VCF)
environment.
If you missed this question, please review “Section 1: HPE OneView integration with VMware
vSphere and vRealize.”
2. Which is an option for licensing HPE OneView for vCenter (OV4VC)?
a. InfoSight licenses
b. Remote Support licenses
c. Composable Rack licenses
d. OneView licenses ***Correct answer***
If you missed this question, please review “Section 1: HPE OneView integration with VMware
vSphere and vRealize.”
3. What is one benefit of OV4VC that is available with the OneView standard license?
a. An easy-to-use wizard for growing a cluster from a single tool
b. Non-disruptive cluster firmware updates from within vCenter
c. An inventory of servers and basic monitoring of them in vCenter ***Correct
answer***
d. Workflows for managing servers and storage
If you missed this question, please review “Section 1: HPE OneView integration with VMware
vSphere and vRealize.”


Module 6
Activity 6
This customer craves simplicity, and the simple-to-deploy HPE SimpliVity also simplifies the virtualized
environment. SimpliVity has built-in software-defined storage. Non-storage experts like the college's IT
staff can easily deploy VMs across the cluster without having to worry about attaching LUNs. This
customer does not want to have to think about and fuss with storage. The OmniStack Data Virtualization
Platform provides always-on data reduction to minimize capacity requirements without extra effort. It also
provides built-in data protection and easy-to-use local backups, so the CIO can start to simplify the
backup process without necessarily having to add another solution.
You might have also mentioned:

• Integration with VMware plug-ins


• Simple deployment with the Deployment Manager
• Fast, built-in backups
• Efficient backups
• RAIN + RAID protection
• Better performance with fewer IOs required and data locality

Module 6 Learning checks


1. On which network do HPE SimpliVity nodes have their default gateway address?
a. Storage
b. Management ***Correct answer***
c. Cluster
d. Federation
If you missed this question, please review “Section 3: Architecting the HPE SimpliVity solution.”
2. How does an HPE SimpliVity cluster protect data from loss in case of drive failure?
a. Only RAIN (replicating data to at least three nodes)
b. Only RAID (with the level depending on the number of drives)
c. Both RAID (with the level depending on the number of drives) and RAIN (replicating
data to two nodes) ***Correct answer***
d. Only RAID (always RAID 10)
If you missed this question, please review “Section 1: Emphasizing the software-defined benefits of
HPE SimpliVity.”


Module 7
Module 7 Learning checks
1. What is one difference between a VCF standard architecture and a consolidated
architecture?
a. The standard architecture supports more than one VI domain while the consolidated
supports only one VI domain.
b. The consolidated architecture supports SAN arrays to improve storage performance,
but the standard architecture does not.
c. The standard architecture separates management workloads from user workloads.
***Correct answer***
d. The consolidated architecture uses a wizard to simplify the installation process
rather than requiring the Cloud Builder VM.
If you missed this question, please review “Section 1: VMware Cloud Foundation (VCF) architecture.”
2. What is the purpose of the Deployment Parameter Workbook?
a. Helps automate the process of ordering a Synergy solution for VCF
b. Helps you order the necessary licenses for VCF components
c. Imports configuration information about the VCF environment into the HPE OneView
Connector for VCF
d. Provides the network information and configuration settings the Cloud Builder VM
requires to bring up VCF ***Correct answer***
If you missed this question, please review “Section 2: HPE integration with VCF.”
3. Which correctly describes the HPE OneView Connector for VCF?
a. It uses Redfish APIs to communicate with SDDC Manager. ***Correct answer***
b. It uses workflows to automate updates on HPE Synergy.
c. It must be installed on HPE Synergy before VCF is deployed.
d. It is deployed with Cloud Builder VM.
If you missed this question, please review “Section 2: HPE integration with VCF.”



To learn more about HPE solutions, visit
www.hpe.com
