Creating HPE VMware Solutions
Learner Guide
Rev. 21.31
Copyright 2021 Hewlett Packard Enterprise Development LP
The information contained herein is subject to change without notice. The only warranties for
Hewlett Packard Enterprise products and services are set forth in the express warranty
statements accompanying such products and services. Nothing herein should be construed
as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for
technical or editorial errors or omissions contained herein.
This is a Hewlett Packard Enterprise copyrighted work that may not be reproduced without
the written permission of Hewlett Packard Enterprise. You may not use these materials to
deliver training to any person outside of your organization without the written permission of
Hewlett Packard Enterprise.
Microsoft, Windows, and Windows Server are registered trademarks of Microsoft Corporation in the
United States and other countries.
Contents
HPE InfoSight—Key distinguishing feature for the HPE SDDC
Architecting the AI Recommendation Engine
Example of HPE InfoSight in action
Summary of HPE storage array benefits for VMware environments
Activity 3
Summary
Learning checks
Endpoint Groups (EPGs) and other key ACI components
Activity 4
Summary
Learning checks
Appendix: Review VMware networking
Standard switch (vSwitch)
How vSwitch forwards traffic
VMkernel adapters
Implementing VLANs
vSphere distributed switch (VDS)
Appendix: Answers
Module 1
Activity
Possible answers
Module 1 Learning checks
Module 2
Activity 2.1
Activity 2.2
Module 2 Learning checks
Module 3
Learning objectives
This module reviews cloud computing and introduces you to software-defined infrastructure (SDI). It also
highlights the close partnership that HPE and VMware have developed to deliver SDI and hybrid cloud
solutions and then reviews the solutions they offer.
After completing this module, you will be able to:
• Engage customers in a meaningful discussion about cloud computing and cloud management
• Explain the benefits of a software-defined infrastructure
• Describe the HPE Composable Strategy and position the HPE value proposition for SDI and the
software-defined data center (SDDC)
Module 1: Overview of HPE VMware Solutions
Course map
This course includes the modules shown here. You are starting module 1.
Throughout this course, you will follow a scenario, which demonstrates how a customer transformed their
legacy environment to a software-defined data center.
Financial Services 1A is a prominent institution in its region, but it is facing new competition and its growth
has slowed significantly. The company has one main goal: attract and retain more customers. After
extensive research, C-level executives have determined that the best way to reach this goal is to offer
personalized services, based on each customer’s lifestyle, stage in life, and financial goals. Like many
financial institutions, the company offers self-service options for customers, but the company wants to
add more financial services and also simplify access, while maintaining strict security. IT is also
investigating using AI to make its fraud protection services more reliable.
This customer currently has a highly virtualized deployment with more than 80% of workloads virtualized.
The company uses VMware vSphere version 7.0, but none of the vRealize Suite applications. The CIO
feels that IT has reached a stalling point with the virtualized environment. Admins can provision a new
virtual machine (VM) very quickly, but getting a new host deployed takes a very long time. The same
goes for setting up new storage volumes and datastores.
IT has started using tools such as Ansible to start automating. Everyone is enthusiastic about using these
tools at first, but when admins get down to trying to automate everything, they run into issues. There are
always parts of service deployment, particularly with the physical infrastructure, that resist automation.
Finally, the CIO cannot obtain a good view of the entire environment. The bare metal workloads and
virtual workloads are totally siloed. The vSphere admins do not have a clear idea about what is going on
in the physical infrastructure. They and the network and storage admins sometimes seem to struggle to
communicate what the virtual workloads need in terms of physical resources.
Cloud computing
You will first review the options your customers have for deploying cloud computing. Your understanding
of cloud computing will lay the groundwork for learning how you can help your customers achieve a
hybrid cloud deployment with SDI. Note that if you are already familiar with these concepts, you can skip
this section.
Cloud infrastructure itself is no different from typical data center infrastructure, except that it’s consistently
virtualized and offered as a service to be consumed via the network. Compute, storage, networking, and
security resources are all key components of cloud infrastructure. This as-a-service consumption
model offers several key benefits:
Financial benefits
Cloud computing provides consumption-based pricing, which allows customers to pay only for the
resources they actually use. There are no upfront costs and the consumption-based model allows
customers to stop paying for resources if they become unnecessary.
Elasticity
Another fundamental aspect of cloud computing is that resources can be increased or decreased on
demand. The resources can be scaled up (adding more CPU, memory, storage, or network capacity to an
existing compute node) or scaled out (adding more compute nodes that work together to run an
application).
Scaling can often be automated, based on rules defined for an application. For instance, if a news website
has an article that is very popular for a period of time, the system can automatically increase capacity
when needed and decrease it as the article receives fewer hits.
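As a simple illustration of such a rule, the following Python sketch shows a generic threshold-based
scaling policy. It is not tied to any specific cloud provider's autoscaling API, and the thresholds and
instance counts are hypothetical values chosen only for the example.

def desired_instance_count(current_count, avg_cpu_pct, min_count=2, max_count=10):
    # Scale out when sustained utilization is high; scale in when it is low.
    if avg_cpu_pct > 75:
        return min(current_count + 1, max_count)
    if avg_cpu_pct < 25:
        return max(current_count - 1, min_count)
    return current_count

print(desired_instance_count(current_count=3, avg_cpu_pct=82))  # prints 4: add one node

A cloud platform evaluates rules like this continuously and provisions or releases instances
automatically, which is what lets the news site in the example absorb a traffic spike without manual
intervention.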
Rapid deployment
Resources can be deployed through easy-to-use interfaces, often automatically. Patches and updates to the
infrastructure can also be deployed automatically, keeping the infrastructure current and more secure.
In addition, cloud service providers often provide pre-packaged application services that make the
deployment of new applications easier and faster.
Organizations use cloud services for a wide range of use cases, such as:
• Test and develop software applications: Because cloud infrastructures can easily be scaled up
and down, organizations can save costs and time for application development.
• Implement new services and applications: Organizations can quickly gain access to the resources
they need to meet their performance, security, and compliance requirements. Organizations can then
develop, implement and scale applications more easily.
• Deliver software on request: Software-as-a-service provides software on demand, which helps
organizations offer users the software versions and updates whenever they need them.
• Analyze data: The data from all the organization’s services and users can be collected in the cloud.
Then, cloud services, such as machine learning, can be used to analyze the data and get better
insights for more and better decisions.
• Save, back up, and restore data: Data protection can be done cost-effectively (and on a very large
scale) by transferring data to an external cloud storage system. The data can then be accessed from
any location.
Deployment models
Customers who are considering how to deploy their workloads have to make some difficult decisions,
particularly because they have more options for investing their IT budgets than ever before. Do they
deploy workloads on premises, using “traditional” infrastructure solutions? In a public cloud? In a private
cloud or managed cloud?
These options are outlined below.
• Traditional on-premises infrastructure: IT is responsible for provisioning services for line of
business (LOB). Although companies can deploy solutions that are tailored to the needs of their
organizations, procurement and provisioning cycles can unfortunately take months.
• Public cloud: Public cloud consists of on-demand IT services, delivered with a pay-per-use funding
model. The services are hosted on infrastructure that is owned by the cloud service provider, is
shared by multiple customers, and is more or less transparent to customers.
Common cloud services include:
– Software-as-a-service (SaaS), which allows users to access software applications from the cloud.
Users do not need to install and run a purchased application on their own devices. The service
hides the underlying OS and the infrastructure.
– Infrastructure-as-a-service (IaaS), which offers a computing environment, typically a virtualized
OS, as a service. Companies can add any applications that they desire to the virtual machine
(VM), and the service also includes supporting storage and networking resources. (A VM is a
virtual instance of a computing system.)
– Platform-as-a-service (PaaS), which is similar to IaaS but adds a standard stack of developer tools
that enables developers to write applications designed specifically to run in the cloud.
• Private cloud: A private cloud delivers on-demand IT services, which IT can easily scale and Line of
Business (LOB) users can request using self-service portals. The customer owns the infrastructure
that hosts the services, and the infrastructure is dedicated to the customer.
Typically the customer must build, manage, and maintain the on-prem infrastructure that hosts the
cloud services. However, some service providers offer managed private clouds, in which they take
over many of these responsibilities.
• Hybrid cloud: A hybrid cloud consists of one or more private clouds and one or more public clouds.
With a hybrid cloud, the customer can choose which workloads to deploy in which cloud, based on
the business and workload needs. Some hybrid clouds support “bursting,” which means scaling
services from one cloud to another cloud on-demand.
Most organizations prefer to use a hybrid of public and private cloud, because this strategy allows
companies to match individual workloads to the environment that is best-suited for them.
For example, companies like Financial Services 1A need to meet strict regulatory requirements when
storing their customers’ personal financial data. Such sensitive data is best kept in the safety of an
on-prem environment, where Financial Services 1A has the most control over their data.
However, it might make sense to deploy other business applications, such as those for marketing and
sales, to the public cloud. Perhaps certain times of the year, such as the winter holidays or the start of
the school year, correlate with more activity in these departments. If that is the case, the public cloud
is naturally a better option for seamlessly scaling the environment up or down to meet demand.
Regardless of their specific challenges, all businesses seek to optimize their IT spend while
minimizing operational risks, which is why they prefer the flexibility offered by hybrid cloud.
Cloud has historically been a destination: a public cloud that is “out there” or a private cloud that is on-
premises. With that idea of cloud, you might struggle to see how “cloud-enabled” fits in an edge-centric
world in which 70% of apps run on-prem. But HPE makes the seeming contradiction fade away by
bringing “cloud” on-prem and at the edge.
If cloud isn’t defined by being “out there,” what does define a cloud? A cloud lets businesses obtain the
services that they need on demand through a self-service process. From this agility springs flexibility. The
company can scale services up and down as it makes sense to meet the company’s needs at that time.
Cloud is also characterized by a particular economic model. Companies pay for only the IT resources that
they use when they use them, removing the roadblock of a large up-front capital expenditure and letting
the company invest that capital elsewhere. Finally, from an operational viewpoint, the provider manages
the infrastructure, freeing up the company’s IT staff members for other innovative pursuits.
Companies need a way to bring these cloud characteristics to the locations that a data-driven, edge-
centric world demands. If it makes sense for the business case, then certainly, workloads can reside in a
public cloud such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. But if business
apps such as enterprise resource planning (ERP) and customer relationship management (CRM)
applications need to run on-prem, the cloud needs to be on-prem, whether in a private or a co-located
data center. If IoT-enabled systems call for intelligence at the edge, the cloud needs to be at the edge.
HPE understands that customers need a cloud that comes to them, anywhere and everywhere they need
it. HPE GreenLake offers a full portfolio of as-a-service solutions, including pre-configured compute and
storage solutions, workload-optimized solutions, and fully customized solutions based on particular
customer requirements. In this way, HPE GreenLake offers customers self-service, scalability, pay-per-
use, and provider management across the complete edge-to-cloud platform.
For more information about HPE GreenLake, you can take the Configure HPE GreenLake Solutions
course.
Moor Insights defines SDI as “the ability to manage hardware resources (compute, storage, and
networking) in a programmable manner” (Moor Insights and Strategy, “Accelerating Software-Defined
Infrastructure with HPE Synergy,” Mar. 2019). What does programmable mean in more practical terms?
Moor Insights explains that "true SDI" is:
• Self-provisioning—Self-provisioning enables users to provision the resources that they need on the
fly. Whether users deploy workloads using an application, script, or catalog as in a private cloud, one
common factor applies. Users can quickly provision servers with all the accompanying volumes, OS,
and network connections without having to involve manual processes and multiple teams of experts.
• Self-monitoring—Users can easily monitor utilization of compute and other resources. The SDI
enables simple scaling in response to needs for more capacity.
• Self-managing and self-healing—Artificial Intelligence (AI) monitors systems, proactively detects
potential issues, and alerts admins only about important issues. It can even take actions on its own
before problems occur.
As Moor Insights points out, “SDI is the foundational building block to the software-defined datacenter
(SDDC)” (Moor Insights and Strategy, “Accelerating Software-Defined Infrastructure with HPE Synergy,”
Mar. 2019).
A software-defined infrastructure is more than a virtualized one. SDI can support virtualized, bare metal,
and containerized workloads, bringing automation to all of them. Because an SDI can support all three
types of deployment, it lets your focus remain where it should be: on helping customers choose the right
deployment for their individual workloads.
Read the following sections to explore considerations for bare metal, virtualized, and containerized
workload deployment.
Bare metal
Although virtualization performance has improved over the years, virtualization always introduces a hypervisor layer
between a workload and physical resources. In addition, virtualization typically means sharing resources
with other workloads, which might interfere (the “noisy neighbor” problem). Bare metal cannot be beaten
when it comes to pure performance.
Bare metal can also be the preference for customers who are particularly concerned about isolating and
securing a workload.
Traditionally customers have struggled the most with automating deployment of workloads on bare metal,
but, as you will see, an HPE SDI makes such automation possible.
Virtualization
Virtualization offers many benefits for a wide array of workloads. Most workloads do not require the full
resources of a modern server, so sharing the resources is more efficient.
Admins often find it much simpler to apply standard and automated processes to a virtualized
environment than a bare metal one. They can clone VMs and script the deployment of more VMs from a
template. They can easily stop and start VMs. They can snapshot VMs and revert to snapshots. VMs can
be moved from one location to another (although, without extra help from network virtualization, live
migrations are often limited in extent).
In addition, admins can consolidate Windows and Linux operating systems on the same physical server.
This gives them the freedom to deploy workloads on the operating system that is best suited for each
particular workload. They can deploy these workloads using familiar virtualization management tools. In a
VMware environment, for example, they can use vSphere.
Compared to containers, virtualization is a mature technology with which most customers are very
familiar.
Containers
Containers are designed to make it easier to move applications from one server to another without the
risk of missing dependencies causing issues.
Traditionally, moving an application from one system to another could lead to major problems. To
understand why, you need to understand a little bit about an application’s “runtime system” and why any
changes to that system can cause problems. The runtime system is the environment in which an
application runs. It includes the binaries that translate human readable code to machine code for
execution. The runtime system also includes libraries, which are common pieces of code that multiple
applications can call on and run. For example, Python has a math library with many mathematical
functions already defined so that developers do not have to re-create them. Most
modern applications use dynamic libraries, which are linked to the code when it is compiled (if the
application uses a compiled language), but only have their code loaded into the application when the
application starts to run.
As developers create an application, they set up the runtime system with all the binaries and libraries that
the application needs. Now imagine that the code moves from one server to another—for example, from a
server in the development environment to a server in the production environment. If the new server’s
runtime system does not exactly mirror the development one, the application might link to a dynamic
library that does not exist, causing it to fail to load or run.
A container combines an application with its runtime system so that the application always has the correct
binaries and libraries to run successfully.
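To make the runtime-system dependency concrete, consider the minimal Python sketch below. The
module name is hypothetical and stands in for any third-party library the application expects its runtime
system to provide; the point is that the failure surfaces only when the application starts on a host whose
runtime system differs from the one it was developed on.

try:
    import reporting  # hypothetical dependency installed in the development environment
except ImportError as exc:
    # On a server whose runtime system was not built identically, the missing
    # library is discovered only at startup, not when the code was written.
    raise SystemExit(f"Runtime system is missing a required library: {exc}")

print("All runtime dependencies resolved; the application can start.")

Packaging the application together with its runtime system in a container image avoids this class of
failure, because the image carries the exact binaries and libraries the code was developed against.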
A container platform can run on either bare metal hosts or virtual machines (VMs), as companies choose.
HPE and VMware have been collaborating on delivering solutions for their customers for more than 20
years. With the delivery of HPE Synergy, the world’s first composable infrastructure solution, HPE made it
even easier for HPE VMware customers to move to a software-defined infrastructure (SDI).
The two companies have also collaborated to help companies simplify their hybrid cloud environment. By
integrating VMware Cloud Foundation (VCF) and HPE solutions, the two companies have made it easier
to design, install, validate, deploy, and manage a hybrid cloud solution.
Learn more about this ongoing alliance by visiting the HPE and VMware Alliance page.
VMware Cloud Foundation (VCF) is a hybrid cloud platform, which can be deployed on-premises as a
private cloud or can run as a service within a public cloud. This integrated software stack combines
compute virtualization (VMware vSphere), storage virtualization (VMware vSAN), network virtualization
(VMware NSX), and cloud management and monitoring (VMware vRealize Suite) into a single platform.
In the version 4 release of VCF, VMware added Tanzu, which embeds the Kubernetes runtime within
vSphere. VMware has also optimized its infrastructure and management tools for Kubernetes, providing a
single hybrid cloud platform for managing containers and VMs.
VCF components
SDDC Manager is the management platform for VCF, enabling admins to configure and maintain the
logical infrastructure. It also automatically provisions VCF hosts.
In addition to SDDC Manager, VCF includes the following components:
vSphere (compute)
VMware vSphere hypervisor technology lets organizations run applications in a common operating
environment, across clouds and devices. vSphere includes key features such as:
• VM migration
• Predictive load balancing
• High availability and fault tolerance
• Centralized administration and management
vSAN (storage)
vSAN is a storage solution that is embedded in vSphere. It delivers storage for virtual machines, with
features like:
• Hyper-converged object storage
• All flash or hybrid
• Deduplication and compression data services
• Data protection and replication
NSX-T (networking)
Networking is often the last part of the stack to virtualize, but virtualizing the network is key to achieving
the full benefits of a software-defined data center. Without it, network and security services will still be a
manual configuration and provisioning process, ultimately becoming the bottleneck to faster delivery of IT
resources.
VMware NSX-T has been updated to support hybrid cloud environments. In addition to supporting ESXi
servers, NSX-T supports containers and bare-metal servers. It also supports Kubernetes and OpenShift
as well as AWS and Azure. Furthermore, it is not tied to a single hypervisor, so it supports Microsoft
Hyper-V environments.
NSX-T also supports:
• Distributed switching/routing
• Micro-segmentation
• Load balancing
• L2-L7 networking services
• Distributed firewall
• Analytics
vRealize Suite
vRealize Suite is the integrated management environment for hybrid cloud environments.
For example, within VMware’s vRealize Suite, operational capabilities continuously optimize workload
placement for running services based on policies that reflect business requirements. Automation
capabilities within the Suite can leverage those same policies when deciding where to place a newly
requested service.
VMware introduced vSphere Lifecycle Manager (vLCM) in vSphere 7. As the name suggests, vLCM is
designed to help customers manage the entire lifecycle of ESXi hosts. For example, vLCM helps
customers deploy clusters more easily and quickly and then helps IT admins monitor and manage them.
With vLCM, IT admins can establish a “desired state,” and vLCM will automatically check to ensure hosts
meet that state. vLCM also supports vendor add-ins, which allow vendors to integrate their products
tightly with vLCM and vSphere. You will learn more about the HPE Hardware Support Manager plug-in
later in this course.
Figure 1-14: Deploy Hybrid Cloud Platform or Just Products Customer Needs
Figure 1-15: HPE Synergy: Composable infrastructure for VCF and virtualized, containerized, and bare
metal workloads
You will now consider how the HPE Composable Infrastructure empowers SDI and hybrid cloud in more
detail. A composable infrastructure is designed to be programmed from the hardware up.
A Composable Infrastructure supports bare metal and containerized workloads, as well as virtualized
ones. But it also abstracts resources into fluid pools created from underlying physical resources.
Customers can then dynamically assign and release resources from these pools. These resource pools
must be programmable by open-standards-based APIs, which allow for scripting and automation of
resource allocation. Automation enables real-time resource allocation, which helps companies support
their on-demand applications and services, particularly for developer environments, but for other use
cases as well.
Customers often find themselves pulled between the demands of their traditional applications, which
require stability and are carefully managed by IT operations teams, and the demands of emerging cloud
apps, which are driven by developers’ requirements and the need for speed. An HPE Composable
Infrastructure helps customers simplify because it is a single infrastructure that supports both types of
apps. Whether customers need to deploy workloads on bare metal, as VMs, or in containers—or some
mixture of the three—the HPE Composable Infrastructure provides the same fluid resource pools that can
be composed for the current needs and the programmable processes to ease deployment.
HPE Synergy Gen10 compute modules also support the HPE Silicon Root of Trust.
The key benefits are described in more depth below.
Unified API
• Single line of code to abstract every element of infrastructure: The API, which is hosted by the
Composer, allows admins to write and run scripts that tell any part of the infrastructure what to do.
• Full infrastructure programmability: Because admins can script their commands, they can automate
management work that they previously had to perform manually.
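As an illustration of scripting against the Unified API, the sketch below authenticates to HPE OneView's
REST interface and lists server profiles. The /rest/login-sessions and /rest/server-profiles resources are
standard OneView REST endpoints, but the appliance address, credentials, and API version shown here
are placeholder assumptions you would replace for a real environment; treat this as a discussion sketch
rather than production code.

import requests

ONEVIEW = "https://oneview.example.local"  # hypothetical Composer/OneView appliance address
HEADERS = {"X-API-Version": "2000", "Content-Type": "application/json"}  # assumed API version

# Authenticate to obtain a session token.
login = requests.post(
    f"{ONEVIEW}/rest/login-sessions",
    json={"userName": "administrator", "password": "changeme"},  # placeholder credentials
    headers=HEADERS,
    verify=False,  # lab convenience only; use trusted certificates in production
)
login.raise_for_status()
HEADERS["Auth"] = login.json()["sessionID"]

# List server profiles; the same pattern applies to networks, volumes, and enclosures.
profiles = requests.get(f"{ONEVIEW}/rest/server-profiles", headers=HEADERS, verify=False)
profiles.raise_for_status()
for profile in profiles.json().get("members", []):
    print(profile["name"], profile.get("state"))

HPE also publishes SDKs and Ansible modules that wrap these same REST resources, which is how
automation tools such as Ansible, mentioned in the customer scenario, integrate with OneView.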
Software-defined intelligence
• Template-driven workload composition: Admins can dynamically compose workloads; for example,
they can write a script that directs Synergy to support virtual desktop infrastructure (VDI) during the
day and perform analytics at night.
• Frictionless operations: By automating so many processes, Synergy helps IT teams reduce the cost
of human error, which commonly occurs when admins have to perform repetitive tasks manually.
HPE offers a variety of storage options for customers deploying VMware solutions.
HPE MSA is an entry-level SAN storage solution, designed for businesses with 100 to 250 employees
and remote office/branch offices (ROBOs). MSA offers the speed and efficiency of flash and hybrid
storage and advanced features such as Automated Tiering.
Most customers can benefit from HPE Nimble to meet their storage needs. HPE Nimble provides
99.9999% guaranteed availability. It also uses Triple+ Parity RAID for resiliency, which allows a Nimble
array to withstand three simultaneous drive failures in one group.
Nimble simplifies the storage lifecycle. For example, Nimble is simple to install and provision, so IT
generalists can deploy it. HPE Nimble can also scale up or scale out as needed, without disruption to the
customer.
HPE Primera redefines what’s possible in mission-critical storage with three key areas of unique value.
First, it delivers a simple user experience that enables on-demand mission-critical storage, reducing the
time it takes to manage storage. Second, HPE Primera delivers app-aware resiliency backed with 100%
availability, guaranteed. Third, HPE Primera delivers predictable performance for unpredictable workloads
so the customer’s apps and business are always fast.
As this course was being developed, HPE announced two new storage solutions: HPE Alletra 6000 and
Alletra 9000.
Please note that this course does not cover these solutions, but HPE expects to provide the same
VMware integration for these solutions that it provides for HPE Nimble and Primera.
HPE Alletra is engineered to be tightly coupled with the HPE Data Services Cloud Console. Together,
they deliver a common, cloud operational experience across workload-optimized systems on-premises
and in the cloud. Alletra solutions deliver the same agility and simplicity for every application across their
entire lifecycle, from edge to cloud. Customers can deploy, provision, manage, and scale storage in
significantly less time. For example, the platform can be set up in minutes, and provisioning is automated.
HPE Alletra 6000 is designed for business-critical workloads that require fast, consistent performance. It
guarantees 99.9999% availability and scales easily. HPE Alletra 9000, on the other hand, is designed for
mission-critical workloads that have stringent latency and availability requirements. It guarantees 100%
availability.
HPE also offers data protection solutions. HPE StoreOnce meets the needs of customers who require
comprehensive, low-cost backup for a broad range of applications and systems. It provides extensive
support for applications and ISVs so customers can consolidate backups from multiple sources.
HPE SimpliVity takes convergence to a new level by assimilating eight to twelve core data center
activities, including solid state drive (SSD) arrays for all-flash storage; appliances for replication, backup
and data recovery; real-time deduplication; WAN optimization; cloud gateways; backup software; and
more. And all of these functions are accessible under a global, unified management interface.
With the convergence of all infrastructure below the hypervisor, HPE SimpliVity allows businesses of all
sizes to completely virtualize the IT environment while continuing to deliver enterprise-grade performance
for mission-critical applications.
A core set of values unites all HPE SimpliVity models. Customers gain simple VM-centric management
and VM mobility. As they add nodes, capacity and performance scale linearly, delivering peak and
predictable performance. Best-in-class data services, powered by the SimpliVity Data Virtualization
Platform, deliver data protection, resiliency, and efficiency.
HPE Nimble Storage dHCI provides a disaggregated hyperconverged infrastructure solution. It allows
customers to scale compute and storage separately, while providing a low-latency, high-performance
solution.
HPE offers solutions, including vSAN ReadyNodes, that are validated to be compatible with VMware
products. Visit the VMware Compatibility Guide and select Hewlett Packard Enterprise as the vendor. You
will see a list of available solutions and can select each solution to view more information about it.
HPE InfoSight and HPE OneView work hand in hand to establish a data center that can run itself.
HPE InfoSight helps to deliver the self-monitoring and self-healing components of an SDI, with predictive
analytics that support automation and provide customers with helpful AI-based recommendations for
resolving issues and optimizing. In some cases, InfoSight can predict and mitigate issues before they
occur without human intervention. You will learn more about InfoSight in Module 3.
HPE OneView helps to make the SDI self-provisioning and self-managing with template-based
provisioning and management and a Unified API.
HPE OneView is the engine for the HPE automated data center. Much of HPE OneView’s power comes
from the Unified API, which enables OneView to communicate with infrastructure devices and users to
reprogram servers, storage, and networking. HPE OneView conceals the complexity of infrastructure
management from upper layer applications while exposing functionality to an ecosystem of tools,
infrastructure applications, and business applications.
Summary
In this module you have learned about the benefits that businesses stand to gain by transforming to an
SDI or hybrid cloud environment. You also learned that HPE and VMware have a long-standing alliance,
working together to integrate their solutions. Together they provide the SDI and hybrid cloud solutions
customers need.
Activity 1
You will now return to the customer scenario introduced at the beginning of the module and learn more
about it.
Financial Services 1A is a prominent institution in its region, but it is facing new competition, and its
numbers are flagging. The company has one top goal: attract and retain more customers. After extensive
research, C-level executives have determined that the best way to do so is to offer personalized services
based on each customer's lifestyle, stage in life, and financial goals. Like many financial institutions, the
company offers digital self-service options for customers, but the company wants to add more financial
services and also simplify access, while maintaining strict security. IT is also investigating using AI to
make its fraud protection services more reliable.
The new initiatives will require the customer to scale up services and accommodate changing workloads
more aggressively.
This customer currently has a highly virtualized deployment with more than 80% of workloads virtualized.
The company uses VMware vSphere version 6.7, but none of the vRealize Suite applications. The CIO
feels that IT has reached a stalling point with the virtualized environment. The CIO has shared issues
such as these with you:
• The virtual environment and the physical environment are out of sync. Admins can provision a new
VM very quickly, but getting a new host deployed takes a very long time. The same goes for setting
up new storage volumes and datastores.
• IT has started using tools such as Ansible to start automating. Everyone is enthusiastic at first, but
when admins get down to trying to automate everything, they run into issues. There are always parts
of service deployment, particularly with the physical infrastructure, that resist automation.
• The CIO does not have a good view of the entire environment. The bare metal workloads and virtual
workloads are totally siloed.
• The vSphere admins do not have a firm idea about what is going on in the physical infrastructure.
They and the network and storage admins sometimes seem to struggle to communicate what the
virtual workloads need in terms of physical resources.
In addition to its banking accounts, the company offers loans such as mortgages and auto loans. The
company has about 1,200 employees, including a sizeable development and IT staff.
Based on your research, Financial Services 1A has about US$10 billion in assets.
The customer has one primary data center and a disaster recovery (DR) site. About 10 years ago, the
customer consolidated services in a VMware vSphere deployment. The primary data center has 30
VMware hosts in 6 clusters, running a variety of workloads including:
• General Active Directory services
• General enterprise solutions
• An extensive web farm for both internal and external sites
• Development platforms
• The Web front end interacts with a number of applications, including
– Customer banking and self-service applications
– Investment management
– Loan management
– Inventory management
– Business management
The company also has about 20 bare metal servers running more intensive data analysis and risk
management applications. The company further has several load balancing appliances and security
appliances such as firewalls and an intrusion detection system/intrusion prevention system (IDS/IPS).
While the vSphere deployment hosts some business management solutions, the company moved some
of its customer relationship management (CRM), HR, and payroll services to the cloud about 3 years ago.
The customer also archives some less sensitive data in Amazon Web Services (AWS).
The ESXi hosts and bare metal servers are mostly HPE ProLiant DL servers (primarily 300 series and
Gen8). The customer also has about a dozen legacy Dell servers. The storage backend for the vSphere
deployment currently consists of Dell EMC storage arrays.
The data center has a leaf and spine network using HPE FlexFabric 5840 switches. Traffic is routed at
the top of the rack.
After reviewing the scenario, take about 20 minutes to create a presentation about the HPE approach to
SDDC and how it applies to the customer’s pain points and goals.
You can use the space below to record ideas for your presentation.
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
Learning checks
1. What is one feature of a software-defined infrastructure (SDI) according to Moor Insights?
a. It monitors and heals itself.
b. It is 100 percent virtualized.
c. It is 100 percent containerized.
d. It requires a hybrid environment.
2. Which are benefits that HPE Synergy provides? (Select two.)
a. Synergy converges all of the infrastructure below the hypervisor, providing an ideal
platform for VMs.
b. Synergy is a density-optimized solution that is designed for IoT solutions.
c. Synergy provides a unified API, which enables companies to use tools such as Chef
and Ansible to automate tasks.
d. Synergy includes HPE OneView, which automates the management of both Synergy
and VCF, replacing SDDC Manager in a VCF deployment.
e. Synergy enables companies to deploy virtualized, containerized, and bare metal
workloads on the same infrastructure.
Learning objectives
In this module, you will learn how to size an HPE Synergy solution for VMware vSphere. You will then
look at best practices for deploying VMware vSphere on HPE Synergy.
After completing this module, you will be able to:
• Given a set of customer requirements, position software-defined infrastructure (SDI) solutions to
solve the customer’s requirements
• Given a set of customer requirements, determine the appropriate software-defined platform (such as a
virtualization farm, scale-out database, VDI, streaming analytics, or scale-out storage)
• Given a set of customer requirements for a virtualized environment, determine the appropriate
software-defined compute technology
Module 2: Design an HPE Composable Infrastructure Solution for a Virtualized Environment
Working with Financial Services 1A’s CIO and top decision makers, you have created a plan for
accelerating the company’s efforts to attract and retain customers. You are going to revitalize the
customer’s vSphere deployment by moving it to HPE Synergy composable infrastructure. This plan will help make
the customer’s network more automated and orchestrated from the physical infrastructure to the virtual
infrastructure.
When you are planning to migrate an existing vSphere deployment to Synergy, you need to collect as
much information about that environment as you can. You also need to understand the customer’s
expectations for the environment.
VM profiles
VM profiles allow you to standardize the configuration of VMs. You can establish a VM profile for each
type of VM. As you plan the migration, you must catalog the resources that are required for each type of VM:
• Number of vCPUs
• Allocated RAM
• Disk size
You should also attempt to determine the input/output operations per second (IOPS) and disk throughput
requirements for each type of VM.
In addition to documenting the VM profiles, you should track how many of each type of VM are required.
Subscription expectations
The virtual resources allocated to VMs consume physical resources on the ESXi host. Because not every
VM will operate at 100% utilization at the same time, resources can be oversubscribed. However, too
much oversubscription can compromise performance. Based on the "VMware vSphere ESXi Solution on
HPE Synergy: Best practices for sizing vSphere and ESXi on HPE Synergy" white paper, a 4:1 vCPU-to-
processor core ratio provides ample performance for most environments. In other words, a host with 32
cores could support VMs with 128 vCPUs total. If the customer has lower-priority or lighter VMs, the ratio
could even go as high as 8:1 (which might cause a small degradation in performance).
If VMs have higher CPU utilization, the ratio could be lower.
The same white paper suggests that 125% memory oversubscription is conservative, based on memory
sharing and other technologies provided by VMware. However, some customers might not be comfortable
oversubscribing memory.
You need to work with the customer to specifically define the amount of oversubscription that the
customer will tolerate. If the customer has mission-critical VMs, you will need to assess their specific
requirements and plan for them without oversubscription.
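To show how these ratios translate into physical resources, here is a minimal sizing sketch in Python.
The VM count and profile values are hypothetical placeholders, and the 4:1 vCPU and 125% memory
figures simply restate the white paper guidance above; a real engagement would rely on the HPE sizers
discussed later in this module.

# Hypothetical VM profile: 40 VMs, each with 4 vCPUs and 16 GB RAM (illustrative only).
vm_count = 40
vcpus_per_vm = 4
ram_gb_per_vm = 16

vcpu_ratio = 4.0             # 4:1 vCPU-to-core ratio from the sizing white paper
mem_oversubscription = 1.25  # 125% memory oversubscription (conservative per the same paper)

total_vcpus = vm_count * vcpus_per_vm          # 160 vCPUs
total_vm_ram_gb = vm_count * ram_gb_per_vm     # 640 GB allocated to VMs

cores_needed = total_vcpus / vcpu_ratio                  # 160 / 4 = 40 physical cores
ram_gb_needed = total_vm_ram_gb / mem_oversubscription   # 640 / 1.25 = 512 GB physical RAM

print(f"Physical cores needed: {cores_needed:.0f}")
print(f"Physical RAM needed:   {ram_gb_needed:.0f} GB")

Mission-critical VMs that must not be oversubscribed would be added to these totals at a 1:1 ratio.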
Cluster plans
You need to know whether the vSphere environment uses clusters. A cluster consists of multiple ESXi
hosts. VMs are deployed to the cluster rather than to an individual host. VMware Distributed Resource
Scheduler (DRS) assigns each VM to a host based on considerations such as load, as well as
configurable affinity and anti-affinity rules for VMs. A cluster can also implement high availability (HA).
Among other features, HA ensures that, if a host within the cluster fails, its VMs restart on another host in
the cluster.
If the customer uses clusters, you need to know which clusters will support which VMs. You also need to
define the availability requirements. Should the cluster be able to tolerate the failure of one host for N+1
redundancy or more than one host?
Also find out whether the cluster will apply Fault Tolerance (FT) to any VMs. FT creates a standby copy of
the VM, so it will essentially double the requirements for that VM.
Growth requirements
Discuss how quickly the customer expects the environment to grow. Agree on a growth rate per year and a number of years for
which the solution will accommodate that growth. For example, you might size the solution to
accommodate 5% growth for 3 years.
Rather than migrate an existing vSphere deployment, you might be working with a customer who wants to
virtualize physical workloads, migrating them to VMware vSphere on HPE Synergy. In this case, you
should profile each physical machine. Here you see information that you should collect. You can then
work with the customer to convert that information into a profile for a VM that can handle the same
workload. For example, if the physical machine has 16 cores and currently operates at 15-20 percent
utilization, you and the customer might decide that 4 vCPUs is sufficient for the VM.
Similar to a migration from an existing vSphere environment, you should also discuss desired
oversubscription levels, plans for using VMware clustering, and expected growth.
In addition to interviewing the customer, you can obtain the information that you need from a number of
tools. It is strongly recommended that you use one or more of these tools to collect information, as
customer documentation can be spotty or outdated, leading you to undersize a solution if you rely on
them alone.
Perfmon
If you are migrating physical workloads to vSphere on Synergy, you can track resource utilization on
Windows machines using Perfmon. Perfmon shows utilization for any hardware resources on the
machine, including CPU, memory, and disk drives. The figure above shows an example in which you are
monitoring a number of disk-related counters. You can also create a Data Collector Set for System
Performance to collect data on an ongoing basis.
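Perfmon is the native Windows tool for this data. As a cross-platform illustration of the kind of utilization
sample you would collect, the short Python sketch below uses the psutil library; it is an alternative shown
purely for discussion and is not part of the sizing tooling referenced in this course.

import psutil  # third-party library: pip install psutil

# Sample the counters that matter most for building a VM profile: CPU, memory, and disk.
cpu_pct = psutil.cpu_percent(interval=5)   # average CPU utilization over a 5-second window
mem = psutil.virtual_memory()              # total and used physical memory
disk_io = psutil.disk_io_counters()        # cumulative read/write operation counts

print(f"CPU utilization:   {cpu_pct:.1f}%")
print(f"Memory used:       {mem.used / 2**30:.1f} GiB of {mem.total / 2**30:.1f} GiB")
print(f"Disk reads/writes: {disk_io.read_count} / {disk_io.write_count}")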
Business management
These applications run on databases, which might be traditional or in-memory. Characteristics include:
• Latency sensitive
• Mission critical
• High IOPS requirements
Object storage
Common characteristics include:
• Scale out
• IOPS intensive
EUC or VDI
End user computing (EUC) refers to any solution for allowing users to access compute resources
remotely. Virtual desktop infrastructure (VDI) is a common example. Common characteristics include:
• Latency sensitive
• Possible need for GPU acceleration (power users using applications like CAD)
Figure 2-5: Positioning the HPE Synergy compute module for the workload
You can mix and match compute modules for the Synergy frames based on the workloads that the
customer needs to support. Use the figure above to match your customers’ workload to an appropriate
Synergy compute module.
As you can see, the Synergy 480 Gen10 is a great go-to option for many workloads, including VDI, email,
collaboration, system management, web serving, engineering, object storage, networking services, and
content or application development. It can even support SAP and business management workloads if
they are on the lighter end in terms of the number of users and requests. For similar applications with more
demanding requirements, recommend the HPE Synergy 660 Gen10.
You are now ready to input the information that you gathered and turn that into a BOM, specifying the
type and number of Synergy compute modules that you need, as well as their configuration and
accompanying components such as D3940 modules, Synergy frames, Composers, Frame Link Modules,
and interconnect modules.
This course assumes that you are familiar with the components of a Synergy solution and focuses on
sizing the compute modules for the vSphere deployment.
As you size such a solution, keep some additional best practices in mind. You should size to keep VM
load on the host’s resources at 80 percent or under. You also need to consider redundancy if the
customer uses HA clusters. For example, if the customer wants N+1 redundancy, you should scope the
solution with an extra module so that the remaining modules can support the load if one module fails. If
the customer plans to use fault tolerance, you should double the requirements for each FT-protected VM.
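The following sketch shows, under stated assumptions, how those rules combine into a module count. It
continues the hypothetical profile from the earlier sizing sketch; the per-module core and memory figures
are placeholders rather than a specific SKU, and an HPE sizer remains the authoritative tool.

import math

# Requirement carried over from the earlier hypothetical sketch.
cores_needed = 40     # physical cores after applying the 4:1 vCPU ratio
ram_gb_needed = 512   # physical RAM after applying memory oversubscription

# Hypothetical compute module configuration (placeholder values only).
cores_per_module = 32
ram_gb_per_module = 384

target_utilization = 0.80  # keep VM load at or below 80% of host resources
modules_for_cpu = math.ceil(cores_needed / (cores_per_module * target_utilization))
modules_for_ram = math.ceil(ram_gb_needed / (ram_gb_per_module * target_utilization))

# Take the larger of the two requirements, then add one module for N+1 redundancy.
modules = max(modules_for_cpu, modules_for_ram) + 1
print(f"Compute modules to quote: {modules}")  # 2 modules for capacity + 1 spare = 3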
Whenever possible, use an HPE sizer to size the solution. You can look for sizers at these links:
HPE Assessment Foundry
HPE SSET
Note that HPE SSET provides guidance on sizing VMware ESXi and VMware Cloud Foundation (VCF) on
HPE Synergy.
HPE Products and Solutions Now
HPE Tech Pro Community
Activity 2.1
For this activity, you will return to the Financial Services 1A customer scenario.
After your discussion of the plan for helping the company transform to an SDI, Financial Services 1A has
decided to have you propose migrating vSphere to HPE Synergy.
Earlier in this module, you reviewed how to size one cluster for Financial Services 1A. Now you will look
at a second cluster for the customer. (The environment has additional clusters, but you do not need to
consider them for the purposes of this activity.) This second cluster supports a variety of Web applications
and services for the customer's website and mobile banking apps.
The customer has told you that this cluster must support 60 VMs with this per-VM profile:
• 4 vCPUs
• 16 GB RAM
• 60 GB disk
Task 1
What additional information do you need to collect in order to properly size the deployment?
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
Task 2
In response to your questions, the customer has indicated that a 4:1 vCPU-to-core oversubscription ratio
and 100% RAM subscription (no memory oversubscription) are acceptable. The customer wants N+1 redundancy for the cluster (one host
can fail without impacting performance). You used Lanamark and vCenter to discover this information:
• VM count, vCPUs, and allocated RAM given by customer are confirmed as correct
• Total IOPS = 2034 write; 4325 read
• Datastore Total = 5600 GB
• Datastore Provisioned = 3600 GB
• Datastore Used = 3023 GB
Create a BOM for this cluster. Use the HPE Synergy sizer for VMware vSphere, which you can find by
following the steps below.
1. Access https://psnow.ext.hpe.com
5. Scroll down and select the HPE SSET (Solution Sales Enablement Tool).
8. Choose the appropriate sizing from the list (VMware ESXi on HPE Synergy).
9. Click Start.
10. Fill in the information that the customer provided and click Review. (Indicate no preference for
storage at this point.)
11. Export the BOM and take notes on how you will present the BOM to the customer.
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
The best practices outlined in this section are based on the "VMware vSphere on HPE Synergy Best
Practices Guide." It provides a set of standard practices for deploying VMware vSphere clusters on HPE
Synergy infrastructure. You will now look at each step in more detail. If, after you have completed this
section, you want to learn more, you can download this guide and read it in full.
Cluster design
You have already looked at many topics related to cluster design, as these were relevant to sizing the
solution. For the cluster design step, you simply follow one key best practice for Synergy: distribute the
nodes of a cluster across frames.
For example, your solution for Financial Services 1A might have six clusters: one with three modules, two
with five modules, and three with six modules. The figure above illustrates how you could distribute those
clusters across 3 frames. Distributing the nodes evenly minimizes the impact if a full frame fails.
For example, you might have sized Financial Services 1A as requiring two three-module clusters and
several five-module clusters. You should distribute those modules as evenly across the frames as
possible. You should also clarify the customer requirements. Should the cluster operate without
degradation of services if one compute module fails or if an entire frame fails? In the latter case, you might want to add another module to each five-module cluster, making it a six-module cluster, so that the cluster can tolerate the loss of the two modules that would occur if a frame failed.
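The arithmetic behind that recommendation is easy to sketch. The short Python snippet below is purely illustrative (the function, the three-frame assumption, and the even-spread placement are ours); it finds the smallest cluster size that still leaves the required number of modules running after a whole frame fails.

import math

def modules_needed(required_working: int, frames: int = 3) -> int:
    """Smallest cluster size that still leaves required_working modules
    running after the loss of one whole frame, assuming the modules are
    spread as evenly as possible across the frames."""
    total = required_working
    while True:
        worst_case_loss = math.ceil(total / frames)  # the largest frame holds this many
        if total - worst_case_loss >= required_working:
            return total
        total += 1

# A cluster that must keep four modules running after a frame failure
# needs six modules in total (two per frame).
print(modules_needed(4))  # -> 6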
For the next two steps, you will look at best practices for fabric and server profile design. First, though, consider a basic best-practice rule: support scalability by using templates, including logical interconnect groups (LIGs) and server profile templates (SPTs).
For example, look at a case in which a customer has one rack with three Synergy frames. Now the
customer wants to add another rack. Review each step to see how templates make it easy for customers
to scale the solution.
Step 1
You open up the management ring on the existing rack and easily integrate the frame link modules on the
three new frames into the ring. The existing Composer will now manage both racks. You could move the
redundant Composer to a frame in the new rack for rack-level redundancy.
Step 2
You power on everything, and Composer auto-discovers the new frames.
Step 3
Admins can apply the existing enclosure group (EG) and LIG templates to the new frames, which quickly
establishes the correct connectivity and network settings. The new frames just need to have their
conductor interconnect modules cabled into the row switches, following a layout similar to the one used in the original rack.
Step 4
The logical enclosure settings are applied in tandem with firmware updates. Within a few hours and with
minimal admin work, the new Synergy frames are available.
Step 5
Admins can apply the existing SPTs to compute modules in the new frames to quickly scale up the
desired workloads.
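To give a concrete sense of how an SPT turns scale-out into a repeatable operation, the sketch below shows how an admin or a script could stamp out a new ESXi host profile from the existing template through the HPE OneView REST API. It is illustrative only: the Composer address, credentials, template and profile names, and the bay URI are placeholders, and the endpoint paths and payload fields should be verified against the API version running on your Composer.

import requests

COMPOSER = "https://composer.example.local"   # placeholder Composer address
HEADERS = {"X-Api-Version": "2000", "Content-Type": "application/json"}

# 1. Authenticate and reuse the session token on later calls.
auth = requests.post(f"{COMPOSER}/rest/login-sessions",
                     json={"userName": "administrator", "password": "secret"},
                     headers=HEADERS, verify=False).json()
HEADERS["Auth"] = auth["sessionID"]

# 2. Look up the existing ESXi server profile template by name.
spt = requests.get(f"{COMPOSER}/rest/server-profile-templates",
                   params={"filter": "name='ESXi-Production-SPT'"},
                   headers=HEADERS, verify=False).json()["members"][0]

# 3. Ask Composer for a profile pre-populated from the template, point it at
#    a compute module bay in the new frame, and create it.
profile = requests.get(f"{COMPOSER}{spt['uri']}/new-profile",
                       headers=HEADERS, verify=False).json()
profile["name"] = "esxi-host-07"
profile["serverHardwareUri"] = "/rest/server-hardware/<new-bay-id>"  # placeholder
requests.post(f"{COMPOSER}/rest/server-profiles", json=profile,
              headers=HEADERS, verify=False)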
Multiple FlexNICs
You will now focus on mezzanine 3. A Converged Network Adapter (CNA) plus a VC SE 40Gb F8 ICM
together unlock the full benefits of composable networking. A CNA can be divided into multiple FlexNICs
or connections, each of which looks like a physical port to the OS running on the compute module. The
number of supported FlexNICs per port depends on the capabilities of the VC module and the CNA; the lower of the two limits applies. The VC SE 40Gb F8 module supports eight per port, as does the 4820C CNA, but the 3820C CNA
supports only four.
Admins can set bandwidth policies per connection. For our purposes the compute module is an ESXi
host, so virtual switches or virtual distributed switches (vDS) own the FlexNICs and connect VMkernel
adapters or VM port groups to them.
As just one example, with a single two-port CNA on the compute module, the ESXi host can have
redundant deployment, management, vMotion/FT, and production ports.
Synergy FC convergence
Admins can also configure one of the FlexNICs on a CNA port, and the paired FlexNIC on the other port,
to use FC or enhanced iSCSI; the FlexNICs are then called FlexHBAs. In this example, 3:1c and 3:2c
operate in FCoE mode and are assigned to Synergy FC networks. The ports appear as storage adapters
on the ESXi host, which the host can use to connect to SAN storage arrays through the VC ICMs (the ICMs require FC licenses).
This design could eliminate the need for a mezzanine 2 adapter and for interconnect modules in bays 2 and 5. On
the other hand, fewer FlexNICs are available for other purposes.
Mapped VLANs
The example you see in the figure above has fewer connections for simplicity, but the same principles
apply even if you are using more connections.
You assign each compute module connection to a network. Interconnect modules have uplink sets that
own one or more external ports on the interconnect module. The uplink set also has networks assigned to
it. The compute module connection can send traffic to any other compute module connection in the same network, and out the uplink ports of any uplink set that carries its network.
With mapped VLANs, every network is assigned a VLAN ID. An uplink set can support multiple networks
so that those networks can share the uplinks. To maintain network divisions, traffic for all of the networks,
except the one marked as the native network, is tagged with the network VLAN ID as it is sent over the
uplink. If a compute module connection is assigned to a single network, the traffic is untagged on the
connection. But a downlink connection can also support multiple networks, which are bundled in a
network set. Again traffic for all networks, except the network set’s native network, is tagged on the
downlink. This is useful for connecting to virtual switches that send tagged traffic for multiple port groups.
Mapped VLANs give Synergy the most control, and are recommended in most circumstances. However,
they do require VMware admins to coordinate the VLANs that they set up in VMware and in Synergy.
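The coordination point for VMware admins is simple: any port group VLAN ID must match the VLAN ID of the corresponding mapped-VLAN network in Synergy. The pyVmomi sketch below illustrates the idea on a standard vSwitch; the host name, credentials, port group name, and VLAN ID are placeholders, and a distributed switch would use a different (dvPort group) API.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi-host-01.example.local", user="root",
                  pwd="password", sslContext=ctx)

# Standalone host: the first datacenter/compute resource contains the host itself.
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

spec = vim.host.PortGroup.Specification()
spec.name = "Production-A"      # port group carried on the Synergy connection
spec.vlanId = 110               # must equal the Synergy network's VLAN ID
spec.vswitchName = "vSwitch0"
spec.policy = vim.host.NetworkPolicy(
    security=vim.host.NetworkPolicy.SecurityPolicy())

host.configManager.networkSystem.AddPortGroup(portgrp=spec)
Disconnect(si)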
Tunneled mode
Tunneled mode opens up the network to support any VLAN tags. If a virtual switch uses a connection
with a tunneled mode network, admins can add new port groups and VLANs without needing to change
the Synergy configuration. However, tunneled mode causes all networks to share the same broadcast
domain and ARP table. If upstream switches bridge VLANs, this will cause MAC addresses to be learned
incorrectly and disrupt traffic. Therefore, tunneled mode is recommended only for highly dynamic environments, such as DevOps.
In Module 4, you will learn how to create an even better solution with NSX; that solution keeps
mapped VLAN networks stable on Synergy while allowing VMware admins to add new VM networks
flexibly.
For most Ethernet networks, it is recommended that you use LACP-S, or S-channel, to create link
aggregations between pairs of compute module connections. Pairs of connections are defined as
FlexNICs with the same letter on different ports. Connections in the same LAG are assigned to the same
network, and the OS that runs on the compute module must define the connections as a LAG too. For
ESXi this means that a distributed switch configured with a LAG must own the connections. LACP-S
provides faster fault recovery and better load balancing compared to traditional NIC teaming with OS load
balancing.
LACP-S works best when the connected ICMs use an M-LAG to carry the connections’ networks. The
ICMs automatically establish an M-LAG when the same uplink set has ports on both ICMs. The two ICMs
present themselves as a single entity to the devices connected to those ports. They could connect to one
data center switch or two switches in a stack that also support M-LAG. VC SE 40Gb F8 modules support
up to eight active links per M-LAG. (Each module has six 40GbE uplinks, which can be split into four
10GbE links each. All links in the M-LAG must be the same speed).
When you use LACP-S and M-LAG together, whichever ICM receives traffic from the downlink LACP-S
LAG forwards the traffic across a local link in the M-LAG. Similarly when an ICM receives traffic from
upstream, destined to the compute module connection, it forwards the traffic on its local downlink in the
S-channel. This reduces traffic on the links between ICMs.
Note also that this view shows the compute module connected directly to the ICMs for simplicity. In reality
the compute module might connect to satellite modules, which connect to the conductor VC ICMs in
another frame. Only conductor ICMs have uplinks. Logically, though, the topology is the same.
For iSCSI a different configuration is recommended. The compute module’s pair of iSCSI connections
should be assigned to two different networks with no aggregation. To decrease unnecessary traffic
over the conductor-to-conductor links, the VC conductor modules should have different uplink sets, which
only support their own downlink’s network. They can establish a LAG to the uplink switch with their own
links, but not an M-LAG.
This design requires Smart Link to handle failures. Without Smart Link, if all uplinks on an interconnect module fail but the downlinks are still operational, the compute modules will continue to send traffic on the iSCSI network that has failed upstream, causing disruption. Smart Link shuts down the downlinks in a network if all
the uplinks fail, allowing the compute module to detect the failure and fail over to the other connection.
You might also choose to use this design to permit an active/active configuration if the data center
switches do not support a stacking technology such as IRF, DRNI, or VSX. The virtual switch could load
balance with originating source port (by VM), for example, so some VMs would use the uplinks on ICM 3
and some would use the uplinks on ICM 6.
Although the last two figures have shown the two approaches separately for clarity, the same CNA can
combine the two approaches on different FlexNICs. For example, you can have the iSCSI connections
using Smart Link and no link aggregation, while the management and production connections use LAGs.
Similarly, the ICMs can have some uplink sets that use LAGs and some that use M-LAGs, but each uplink
set owns ports exclusively.
Internal networks
Internal networks are not assigned to uplink sets on interconnect modules, but are assigned to downlink
ports on compute modules. That means that compute modules can communicate with each other through
the interconnect modules, but their traffic does not pass out into the data center network. The traffic
extends as far as the connected conductor and satellite modules, which can span up to three frames.
If a cluster is confined to three frames, internal networks can be useful for functions like vMotion and FT.
A production network, to which VMs connect, can also be an internal network, but only if the VMs in that
network only need to communicate within the three-frame Synergy solution. Also remember that VC
modules are not routers. Consider whether VMs need to communicate at Layer 3, even with VMs on
hosts in the same Synergy frames. If the data center network is providing the routing, the VMs' networks
must be carried on an uplink set.
Private networks
A private network blocks connections between downlinks, but permits traffic out uplinks. This can be
useful if the network includes less trusted or more vulnerable VMs. Many hackers attempt to move from
one compromised machine to others, seeking to find more privileges and sensitive information as they go.
Preventing VMs from accessing VMs on another host can limit the extent of an attack. Of course, a
private network does not work when VMs need to communicate with each other as part of their functionality.
The Synergy adapters support some key functions for the virtualization workload.
Single root input/output virtualization (SR-IOV) enables network traffic to bypass the software switch layer
typical in a hypervisor stack, which results in less network overhead and performance that more closely
mimics non-virtualized environments. To make this feature available to the customer, you must choose an
Ethernet adapter that supports it. You must also deploy compatible ICMs for the selected adapter.
The SR-IOV architecture on VC allows up to 512 VFs, but the Ethernet adapter itself might support fewer.
When admins create a connection in a Synergy server profile or SPT, they can enable VFs and set the
number of VFs from 8 to the max supported by the adapter. Admins can then assign individual VMs on
that host to a port group and the SR-IOV-enabled adapter. Each VM is assigned its own VF on the
adapter and has its own IP address and dynamic MAC Address; VLAN settings come from the port group
and should match what is configured for the network on Synergy. In this way, admins can continue to
manage VM connections in a mostly familiar way, but the VMs experience dramatically improved
performance.
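As a rough illustration of where that setting lives, the fragment below shows what an SR-IOV-enabled connection in a server profile (or SPT) might look like when expressed as the JSON-style structure that the OneView API consumes. The network URI is a placeholder, and the field names, while modeled on the OneView server profile schema, should be treated as assumptions to check against your API version.

# Illustrative server profile connection with SR-IOV virtual functions enabled.
sr_iov_connection = {
    "id": 3,
    "name": "prod-sriov-a",
    "functionType": "Ethernet",
    "portId": "Mezz 3:1-a",
    "networkUri": "/rest/ethernet-networks/<production-network-id>",  # placeholder
    "requestedMbps": "10000",
    "requestedVFs": "64",   # anywhere from 8 up to the adapter's supported maximum
}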
Many Synergy adapters also support DirectPath IO. This technology improves performance and
decreases the CPU load on the hypervisor by allowing VMs direct access to the hardware. However, this
technology is only recommended for workloads that need maximum network performance as it comes
with some significant drawbacks. It is not compatible with HA, vMotion, or snapshots.
Here you see some best practices for designing SPTs for ESXi hosts. When the Synergy frame uses
VCs, an SPT can include connections, which define the correct networks for the compute modules’
adapters. You already saw some typical designs for these in the previous section. Admins can set
bandwidth reservations on each connection from the SPT. They can use NetIOC in VMware to set limits
on dvPort groups; NetIOC supports bandwidth limits down to the VM virtual adapter level.
The SPT can also define BIOS settings, which include workload profiles that customize server operations
so as to optimize for the expected workload. VMware recommends setting the workload profile to either
"Virtualization – Power Efficient" or "Virtualization – Max Performance" depending on whether the
customer prioritizes efficiency or performance.
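For example, the BIOS portion of an SPT might look something like the fragment below, again sketched as the structure the OneView API consumes. The attribute name and values follow the HPE ProLiant BIOS (Redfish) attribute registry, but you should confirm the exact strings for the compute module generation you are deploying.

# Illustrative BIOS section of an SPT selecting a virtualization workload profile.
bios_settings = {
    "manageBios": True,
    "overriddenSettings": [
        # Use "Virtualization-PowerEfficient" instead if the customer
        # prioritizes efficiency over performance.
        {"id": "WorkloadProfile", "value": "Virtualization-MaxPerformance"},
    ],
}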
You can also use the SPT to create volumes on local drives on D3940 modules and attach them to the
compute module. And you can even manage HPE storage arrays in Synergy, create volumes on them,
and attach those volumes to the compute module through the SPT. Synergy handles all of the
complexities in the background. This feature is a great value add for customers who are used to having to
coordinate with storage experts to get ESXi hosts attached to volumes. You will learn more about both
local and SAN options in Module 3.
Figure 2-29: Best practices for VMware vSphere ESXi hypervisor provisioning
HPE provides a custom ESXi image for deploying on HPE Synergy compute modules (as well as other
HPE ProLiant modules). This image comes pre-loaded with HPE management tools, utilities, and drivers,
which help to ensure that Synergy modules can perform tasks such as boot from SAN correctly.
Customers can obtain the HPE Custom Image for the Synergy compute modules at this link.
You should also make sure that Synergy compute modules’ firmware is updated to align with the driver
versions used in the HPE Custom Image. See the Service Pack for ProLiant (SPP) documentation at
https://hpe.com/info/spp and the “HPE ProLiant server and option firmware and driver support recipe”
document on http://vibsdepot.hpe.com for information on SPP releases supported with HPE Custom
Images.
If customers want to customize the image further, they can use VMware Image Builder, which is included
with the vSphere PowerCLI. They can add vSphere Installation Bundles (VIBs) with additional drivers,
HPE components, or third party tools. They also have the option of downloading HPE ESXi Offline
Bundles and third-party driver bundles and applying them to the image supplied by VMware. Or
companies can choose from the ESXi Offline Bundles and third-party drivers to create their own custom
ESXi image.
If VMware updates the image in the future, HPE supports application of the update or patches to the HPE
Custom Image. However, HPE does not issue an updated Custom Image every time that VMware
updates. Instead, it updates the image on its own cadence.
Step 1
Using the HPE OneView for vCenter (OV4VC) plugin, VMware admins can monitor the physical
infrastructure alongside the virtual infrastructure. They can view information such as utilization or see a map of
the network connectivity from virtual switch to data center switch.
Step 2
Cluster-aware firmware upgrades make it simple to upgrade ESXi hosts’ software without disrupting
services.
Step 3
If the admins need to expand a cluster, all they have to do is install the new compute module, and a
simple wizard gets the OS deployed and the new host joined to the cluster in a few clicks.
Step 4
Admins can look in VMware vRealize Operations and see alerts about potential issues related both to the
virtual and physical environment. They can troubleshoot more quickly and with a lot less frustration.
Step 5
With OV4VC’s Proactive HA capabilities, if OneView detects an issue with a Synergy ESXi host, it alerts
vCenter, which moves the host’s VMs to other hosts in the cluster. This protects the VMs in case the host
fails. The infrastructure is one step closer to zero downtime and to driving itself.
Many customers, even ones with highly virtualized environments, have some workloads that need to stay
on bare metal, whether because of performance requirements or the customer’s preference. With HPE
Synergy, however, customers can consolidate bare metal workloads and virtualized workloads in
the same infrastructure. Customers can use many of the same features to manage the bare metal workloads as
they do the virtualized ones. They can define SPTs to deploy the OS to the bare metal, define networks,
and attach volumes. They can automate deploying SPTs with the OneView API. While each workload
remains deployed in the ideal environment for it, customers can have a single infrastructure for both.
Summary
This module has guided you through taking a customer from a traditional virtualized environment to a
software-defined environment on the composable HPE Synergy. You learned about sizing and design
considerations, as well as deployment best practices.
Activity 2.2
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
• Best practices for the deployment
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
Learning checks
1. What does VMware recommend as a typical good starting place for vCPU-to-core ratio?
a. 1:1
b. 1:2
c. 4:1
d. 16:1
2. You are advising a customer about how to deploy VMware vSphere on HPE Synergy.
The customer wants to use redundant ESXi host adapters to carry VMs’ production
traffic. What is a best practice for providing faster failover and best load sharing of traffic
over the redundant adapters? (Select two.)
a. Use an LACP LAG on the VMware virtual distributed switch.
b. Use a Network Set with multiple networks on the uplink set that supports the
production traffic.
c. Make sure to enable Smart Link on the uplink set that supports the production traffic.
d. Set up one link aggregation on one interconnect module and another link
aggregation on the other interconnect module.
e. Use LACP-S on a pair of connections on the compute modules on which ESXi hosts
are deployed.
3. You are advising a customer about how to deploy VMware vSphere on HPE Synergy.
What is a simple way to ensure that the ESXi host has the proper HPE monitoring tools
and drivers?
a. Provision the hosts with the HPE custom image for ESXi.
b. Use Insight Control server provisioning to deploy the ESXi image to the hosts.
c. Manage the ESXi hosts exclusively through Synergy, rather than in vCenter.
d. Customize a Service Pack for ProLiant and upload it to Synergy Composer before
using Composer to deploy the image.
4. How far can an HPE Synergy internal network extend?
a. Within a single Synergy frame
b. Up to the ICM and on its uplink sets, but not back to any downstream ports
c. Across multiple Synergy frames, as long as they are in the same data center
d. Across multiple Synergy frames that are connected with conductor and satellite
modules
Learning objectives
This module gives you the opportunity to explore multiple HPE solutions for making storage more
software-defined and better integrated within a VMware environment. You will first look at VMware vSAN,
the VMware option for software-defined storage (SDS) and in particular how you can implement vSAN on
HPE Synergy. You will then look at the options that HPE SAN arrays, including Nimble and Primera,
provide for integrating with VMware.
After completing this module, you will be able to:
• Position supported software-defined storage solutions
• Given a set of customer requirements, determine the appropriate storage virtualization technologies
and solutions
Module 3: Design an HPE Software-Defined Storage (SDS) Solution
You are still in the process of helping a company migrate its vSphere deployment to HPE Synergy, and
you need to propose an HPE storage component of the solution. Customer discussions have revealed a
few key requirements. The customer is tired of endless struggles with storage being a black box that
VMware admins have little insight into and that slows down provisioning processes. For the upgrade, they
want a storage solution that provides tight integration with VMware. Ideally, VMware admins should be
able to provision and manage volumes on demand.
Because Financial Services 1A runs mission critical services on vSphere, the company is also concerned
with protecting its and its customers' data. Their current backup processes are too time consuming and
complex, and the customer is concerned that the complexity will lead to mistakes—and lost data.
In the previous module, you learned about scoping a customer’s requirements for a VMware vSphere
deployment, including storage capacity and performance requirements. But to deliver a truly software-
defined solution, you must go beyond those requirements to help customers solve their vexing problems.
Some of the issues customers face are outlined below.
Gaining visibility
Most customers struggle to correlate and analyze usage data. Isolating and solving issues can take
weeks. They even lack the visibility into their environment that they need to know when they are running
out of disk space.
A brief look at how VMware storage has evolved can help you understand the challenges that customers
have faced in managing storage for their virtual environments. The focus for this course will be vSAN and
vVols, as well as the unique storage automation features enabled by HPE Synergy and HPE OneView.
The following sections outline these technologies. If you want more information, you can click here.
VMFS
A VM's drive is traditionally backed by a virtual machine disk (VMDK). This VMDK is a file, which can be
stored on a SAN array. Virtual Machine File System (VMFS) is the file system imposed on the SAN array
for storing the VMDKs. VMware created VMFS in ESX 1.0 in 2001 to fulfill the special requirements of
block storage and impose a file structure on block storage. This file structure was initially flat, but became
clustered in later versions. VMFS enables multiple devices to access the same block storage, locking
each individual VM's VMDKs for that VM's exclusive access.
VMware added support for Network File System (NFS) volumes, which use an NFS server rather than
block storage to store VMDKs, as an alternative to VMFS in VMware Infrastructure 3 (VI3).
With vSphere 7.0, VMware introduced clustered VMDKs. Clustered VMDKs require VMFS 6; they are
useful for supporting clustered applications such as Microsoft Windows Server Failover Cluster (WSFC).
Many customers still use VMFS datastores, but VMFS can be challenging and require a lot of
coordination with storage admins.
VAAI
vStorage API for Array Integration (VAAI) was introduced in ESX 4.1 in 2010 to enhance functionality for
VMFS datastores; it was extended with more primitives in ESX 5.0. VAAI aimed to enlist the storage as
an ally to vSphere by offloading certain storage options to the storage hardware. For example, cloning an
image requires xcopy operations. With VAAI, a VAAI primitive requests that the storage array perform the
operations, freeing up ESXi host CPU cycles. Other VAAI primitives include unmap and block zero. VAAI
also introduced a better locking mechanism called atomic test and set (ATS).
VAAI is an important enhancement, which is fully supported out-of-the-box on HPE Nimble and HPE
Primera arrays. However, all vendors that support VAAI do so in much the same way, so it is not a differentiator on its own. In addition to supporting
VAAI, HPE extends its VMware integration to vSAN and vVols, which you will learn more about in this
module, and vCenter, which you will learn more about later in this course.
VASA
vStorage APIs for Storage Awareness (VASA) was introduced in vSphere 5.0 in 2011. VASA APIs let the
storage array communicate its attributes to vCenter. This lets VMware recognize capabilities on storage
arrays such as RAID, data compression, and thin provisioning. While VASA 1.0 was basic, admins can
now create VASA storage profiles to define different tiers of storage, helping them to choose the correct
datastore on which to deploy a VM.
However, VASA only characterizes capabilities at the datastore level. Admins cannot, for example, select
different services for VMDKs stored within the same datastore.
vSAN
VMware introduced virtual SAN (vSAN) in vSphere 5.5 U1 in 2014. This software-defined storage solution
is VMware's second try at virtual storage. vSAN transforms physical servers and their local disks into a
VMware-centric storage service. It is integrated in vSphere and does not require separate virtual storage
appliances (VSAs). In vSAN, VMs write objects to the disks provided by the vSAN nodes without the
requirement of a file system. vSAN features an advanced storage policy-based management engine.
You will look at HPE platforms for supporting vSAN throughout this module.
vVols
VMware introduced Virtual Volumes (vVols) in vSphere 6.0 in 2015 as an alternative to VMFS and NFS
datastores. With this solution, a VM's drive can be a vVol—which is an actual volume on the SAN array—
rather than a VMDK file.
The vVol technology provides a similar level of sophistication and VMware-integration as vSAN but for
customers who want to use a storage array backend rather than servers with local drives. Building on
VASA 2.0/3.0, vVols transforms storage to be VM-centric. VMs can write natively to the vVols instead of
through a VMFS file system. As of vSphere 6.5, replication is supported with vVols, and, as of vSphere
7.0, Site Recovery Manager (SRM) integrates with vVols. These features make vVols much more
attractive to enterprises for which availability and disaster recovery (DR) are critical.
Storage vendors create their own vVols solutions to plug into vSphere so vendors such as HPE can
provide a lot of value adds to customers. You will look at the benefits of HPE's solutions for vVols later in
this module.
The HPE Synergy D3940 modules fully support SDS, including VMware vSAN. Use cases for SDS on
Synergy include supporting a VM farm, as you are examining for the Financial Services 1A scenario, as
well as supporting virtual desktop infrastructure (VDI). SDS can provide the flexible support for shared
DevOps volumes that app development environments need, and also work well for Web development.
You can also deploy SDS on Synergy to provide managed data services for mid-tier storage.
VMware vSAN is VMware's integrated SDS solution. It enables a cluster of ESXi hosts to contribute their
local HDDs, SSDs, or NVMe drives to create a unified vSAN datastore. VMs that run on the cluster can
then be deployed on this datastore. The vSAN cluster can also present the datastore for use by other
hosts and clusters using iSCSI. vSAN provides benefits such as establishing a high-speed cache tier and
automatically moving more frequently accessed data to that tier.
Because vSAN eliminates the need for a SAN backend, it can save customers money and simplify their
data center administration. VMware vSAN appeals to customers who want the simplicity of a storage
solution that is integrated with the compute solution and is easy to install with their existing vCenter
server. A vSAN solution can also provide simplicity of scaling; to expand, you simply add another host to
the vSAN cluster.
Figure 3-7: HPE Synergy D3940—Ideal platform for SDS and vSAN
You will now consider what makes the D3940 the ideal platform for SDS solutions like vSAN in more
detail.
Flexibility
The D3940 provides a flexible ratio of zoned drives to compute nodes. That means that customers can
choose to assign as many drives to each node as makes sense for their business needs. This flexibility
represents a vast improvement over legacy blade solutions in which storage blades were tied to a single
server blade, causing inefficient use of resources.
Each D3940 storage module provides up to 40 drives and 600 TB capacity. With a fluid pool of up to five
storage modules per frame, up to 200 drives can be zoned to any compute module in the frame.
Each compute module uses its own Smart Array controller to manage the drives zoned to it, so a single
module can support File, Block and Object storage formats together.
The conductor-satellite fabric enabled by VC modules also creates a flat, high-speed iSCSI network for
vSAN that extends over multiple frames, which means that vSAN clusters can extend over multiple
frames, too.
Performance
A non-blocking SAS fabric provides optimal performance between vSAN hosts and the drives zoned to
them on D3940 modules. HPE tests showed that the non-blocking SAS fabric delivers up to 2M IOPS for a 4 KB random read workload using SSDs. (The 2M IOPS figure is for a single storage module connected to multiple compute modules in a DAS scenario.)
Because HPE Synergy enables customers to deploy a customized mix of compute and storage resources and to scale them separately, it provides an ideal SDS platform.
Figure 3-8: HPE Synergy D3940—Ideal platform for SDS and vSAN
The flexibility in drive-to-compute module ratio means that the D3940 can deliver the right-sized
provisioning to any workload, including the SDS scenarios that you are examining.
This graph depicts three scenarios with different combinations of half-height compute modules and
D3940 modules in a frame. In the first scenario, the frame has 10 compute modules and one D3940
module, meaning that each compute module can have an average of 4 SFFs zoned to it. This scenario is
ideal for small databases and file sharing servers.
In the second scenario, the frame has six half-height compute modules and three D3940s, giving each
compute module an average of 20 SFFs. This configuration could work for SDS cluster nodes. The final
configuration has four half-height compute modules and four D3940s, meaning that each compute
module can have 40 SFFs dedicated to it, which is ideal for mail and collaboration services, VDI or VM
farms, and mid-sized databases.
You will now move on to looking at ways to ensure a successful vSAN deployment for your customers,
beginning with proposing Synergy module configurations that HPE has tested and validated with VMware.
To find a certified vSAN Ready Node configuration, use the VMware Compatibility Guide, available by
clicking here. Select that you are looking for vSAN and choose Hewlett Packard Enterprise as the vSAN
Ready Node Vendor. You can also choose a vSAN Ready Node Profile. Select HY for hybrid HDD and
flash or AF for all flash. The profile also has a number that indicates its general scale.
Then select Update and View Results. You can scroll through the results and find a Synergy compute
module model and components that are certified for your profile.
Figure 3-11: Following best practices for vSAN on HPE Synergy: Cluster and network design
You should follow a few best practices to ensure that the vSAN cluster, deployed on HPE Synergy,
functions optimally. Use a minimum 3-node cluster. All nodes in the cluster must act as vSAN nodes. As
mentioned earlier, though, the vSAN cluster can present datastores to other clusters.
You should provide redundant connections for the vSAN network and raise the bandwidth limit on each
connection to at least 10 Gbps. The vSAN network can be an internal network as long as the cluster is
confined within a logical frame, which can include multiple Synergy frames connected with a
conductor/satellite architecture. If the cluster extends beyond the logical frame, the vSAN networks
should be carried in conductor module uplink sets, following the guidelines for iSCSI networks laid out in
the previous module.
Figure 3-12: Following best practices for vSAN on HPE Synergy: Drivers and controllers
Each vSAN node should use a P416ie-m Smart Array controller operating in HBA only mode to access
D3940 drives (through two SAS Connection Modules in bays 1 and 4). The controller should configure
these drives as just a bunch of disks (JBODs). It is important not to use RAID for these drives.
VMware requires a caching (SSD) drive and one or more capacity drives per node. The Compatibility
Guide will indicate the number and type of drives for each tier. In the SPT or server profile for the vSAN
nodes you should configure the recommended set of caching drives as a single caching logical JBOD.
You can configure the capacity drives as one or more capacity logical JBODs.
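As a rough sketch, the local storage section of such an SPT might resemble the fragment below, with one caching logical JBOD of SSDs and one capacity logical JBOD, and the controller left in HBA-only mode. Names, drive counts, and field names are illustrative and modeled on the OneView server profile schema; validate them against your API version and the Compatibility Guide profile you selected.

# Illustrative localStorage section of a vSAN node SPT (drive counts are examples).
local_storage = {
    "sasLogicalJBODs": [
        {"id": 1, "deviceSlot": "Mezz 1", "name": "vsan-cache",
         "numPhysicalDrives": 2, "driveTechnology": "SasSsd"},
        {"id": 2, "deviceSlot": "Mezz 1", "name": "vsan-capacity",
         "numPhysicalDrives": 6, "driveTechnology": "SasSsd"},
    ],
}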
You should help the customer understand that vSAN has some restrictions on the boot options. The
compute node can boot from internal M.2 hard drives (mirrored) but it requires a P204i storage controller.
PXE boots are also supported, as are USB boots. However, with USB boots, VMware requires the
customer to make other accommodations for log files so that they are stored in persistent storage.
You cannot configure the P416ie-m in mixed mode and create a boot volume from D3940 drives.
Figure 3-13: Following best practices for vSAN on HPE Synergy: Drivers and controllers
It is also best practice to provide redundant connectivity for the D3940s used in the vSAN solution. You
should install two I/O adapters on each D3940. You must also install two Synergy 12Gb SAS Connection
Modules in the Synergy frame, one in ICM bay 1 and one in ICM bay 4.
The HPE process for meeting customer needs with vSAN Ready nodes begins by using our expertise to
define a ProLiant DL-based configuration that is optimized for a particular workload. We work with
VMware to certify the configuration. We then add the new node to the catalog for our partners to
recommend to customers.
The VMware Compatibility Guide gives you certified options, but there are many options to choose from and little guidance as to when you would choose one over another. HPE, on the other hand, has added just a few vSAN Ready Nodes to the catalog and has listed them by workload. When you select a vSAN Ready Node, OCA only permits you to customize its configuration with a limited set of certified options, helping to prevent you from making mistakes.
HPE has done this to better help you as a partner, as you know that you should always begin with the
workload to help you position the correct vSAN solution for a customer.
HPE now provides configurations for each supported platform (HPE ProLiant DL325, HPE ProLiant
DL360, and HPE ProLiant DL380 Gen10) that cover all vSAN profiles (HY2, HY4, HY6, HY8, AF4, AF6,
and AF8). There are also four workload-optimized solutions available. The next several pages cover the
four workload-optimized solutions in more detail.
The HPE ProLiant DL325 All-Flash 6 solution is optimized for heavily virtualized and/or web infrastructure
environments. It offers balanced compute, memory, and network resources to support exceptional VM
density.
Processors
This node uses AMD EPYC processors with 24 to 32 cores.
Figure 3-16: HPE ProLiant DL360 All-Flash 8 for data management and processing
The HPE ProLiant DL360 All-Flash 8 node is optimized for data management and processing. It provides
high disk throughput, low latency, and very high random IO performance.
Processor
This node uses Intel Xeon Gold processors with 28 to 40 cores (total on two processors).
Figure 3-17: HPE ProLiant DL380 8SFF All-Flash 4 for accelerated infrastructure
The HPE ProLiant DL380 8SFF All-Flash 4 is optimized for accelerated infrastructure use cases. It
provides dedicated co-processors to support high-end workloads.
Processor
This node uses Intel Xeon Silver processors with 20 to 24 cores (total on two processors).
Figure 3-18: HPE ProLiant DL380 24SFF Hybrid 8 for data warehousing
The HPE ProLiant DL380 24SFF Hybrid 8 model is intended for data warehousing and storage use
cases. It is capacity optimized with options for storage expansions.
Processor
This node uses Intel Xeon Silver processors with 24 to 32 cores (total on two processors).
Figure 3-19: HPE Synergy fluid resource pools for Tier 1 storage
HPE Synergy also offers fluid resource pools for Tier 1 storage through a backend connection to
enterprise flash arrays. HPE storage arrays can provide managed data services such as Quality of
Service (QoS). They are preferable to SDS on D3940 modules when customers need a highly available
solution with disaster recovery capabilities. They are also the top choice for workloads such as CRM, ERP,
Oracle, and SQL, which require low latency and high IO.
Nimble is positioned for business-critical storage and mid-sized companies. HPE Primera provides
mission-critical storage. Designed for ease of use and performance, these arrays provide a 100%
availability guarantee and an architecture designed for NVMe.
This section gives more details about how HPE Nimble and Primera arrays provide value-adds for a
VMware environment.
This figure compares the features supported by the HPE Synergy D3940 to the features supported by
HPE Primera arrays, as an example. Extra features such as advanced replication, the ability to support
stretched clusters, and snapshots explain why customers with mission-critical applications often prefer an
HPE storage array-based solution.
Traditionally, getting a volume hosted on a storage array attached to an ESXi host involves many
relatively complex steps. Storage admins must create the volume. They need to find out the ESXi host's
WWNs, add the host to the array, and export the volume to it. SAN admins must also zone the SAN to
permit the server's WWNs to reach the array. Server admins must find the exported volume by LUN and
add it. HPE Synergy provides fully automated volume provisioning for volumes on Primera or Nimble.
In the steps below, you can see how Synergy simplifies provisioning volumes.
Step 1
Synergy admins can add SAN Managers such as Cisco, Brocade, and HPE to bring SAN switches into
Synergy. Admins can then create networks for the SANs and manage servers' SAN connectivity using
templates and profiles, as they do servers' Ethernet connectivity.
Step 2
Synergy admins can also add Primera and Nimble arrays to Synergy and create volumes on them from
Synergy. They can use server pools and templates to apply policies to volume management.
Step 3
When admins create server profiles and server profile templates, they can add connections for the
servers in the managed SANs. They can also attach volumes to the servers. When the profile is applied
to a compute module bay, Synergy will automate all the heavy lifting of configuring the SAN zoning, as
well as exporting and attaching the volume.
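To make the idea concrete, the fragment below sketches the SAN storage section that a server profile or SPT might carry for an ESXi host with two FlexHBA connections; Synergy then performs the zoning, export, and attachment automatically when the profile is applied. The volume URI and connection IDs are placeholders, and the field names, while modeled on the OneView server profile schema, should be verified against your API version.

# Illustrative sanStorage section attaching a managed Primera/Nimble volume.
san_storage = {
    "manageSanStorage": True,
    "hostOSType": "VMware (ESXi)",
    "volumeAttachments": [
        {
            "id": 1,
            "volumeUri": "/rest/storage-volumes/<volume-id>",  # placeholder
            "lunType": "Auto",
            "storagePaths": [
                {"connectionId": 4, "isEnabled": True},  # FlexHBA on port 1
                {"connectionId": 5, "isEnabled": True},  # FlexHBA on port 2
            ],
        },
    ],
}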
Whatever the compute solution underlying the VMware environment, HPE storage arrays can make the
VMware environment work more efficiently, deliver simpler management, and provide higher performance
and availability.
Key features that you will examine in this topic include vVols, plugins for vCenter, integration with
VMware Site Recovery Manager (SRM), and integration with HPE Recovery Manager Central for VMware
(RMC-V).
You examined one option by which HPE automates storage provisioning. Next, you will examine an
alternative solution that is specific to VMware. vVols represents the culmination of the evolution of
VMware and storage. You will now look at vVols in more detail.
Protocol endpoint
Logical I/O proxy that serves as the data path between ESXi hosts and VMs on one side and their respective vVols on the other
VASA provider
Software component that mediates out-of-band communication for vVols' traffic between the vCenter
Server, ESXi hosts, and the storage array
Storage container
Pool of raw storage capacity that becomes a logical grouping of vVols, seen as a virtual datastore by
ESXi hosts
vVols empowers vSphere admins to control the functions that they need to control. They get to choose to
create a VM snapshot, to thin provision a VM, to create a virtual disk, or to delete a VM. At the same time,
vSphere ESXi hosts should not spend CPU cycles copying or deleting. Under vSphere's direction, the
storage array executes the task automatically. For example, when admins delete a VM, the array deletes
the VM in the vVols Container and reclaims space. This automation eliminates common tasks for storage
admins and frees up their time for more sophisticated optimization.
With VMFS, storage volumes are pre-allocated, which typically means that companies must over-provision resources, leading to inefficiency. With vVols, vSphere admins can instead dynamically allocate storage only when they need it.
With VMFS, provisioning storage is complicated because it requires vendor-specific tools for the storage array.
vVols provides simple provisioning and management through vSphere interfaces. The vSphere admins
can easily add a vVol datastore based on a vVol created on an HPE storage array and attach the
datastore to ESXi hosts. The LUNs are managed in the background, making the process much simpler
and more intuitive for non-storage experts.
In addition to reducing the lengthy VMFS provisioning processes, as you saw before, vVols enables
vSphere decisions to automate actions on HPE Nimble and Primera arrays. For example, when a
vSphere admin deletes a VM, the array automatically reclaims space.
Figure 3-28: The HPE Primera and Nimble advantages with vVols
It is important to understand that vVols is not strictly a VMware product; rather, it is a design
specification that storage vendors can use to plug their functionality into vSphere. Therefore, vendors like
HPE have a great opportunity to innovate and prove their value in this space. HPE has among the most
mature solutions in this area. The following sections explain the differentiating benefits of HPE Primera
and Nimble solutions for vVols.
Some customers are not ready to shift to vVols. HPE plugins for vCenter allow customers to enjoy a simpler provisioning process for both VMFS and vVol datastores. The Nimble Storage vCenter plug-in
supports both vSphere Web Client and HTML5. Customers can easily create datastores based on Nimble
volumes and then attach those to hosts directly without having to search for LUNs.
HPE Storage Integration Pack for VMware vCenter provides similar benefits for HPE Primera. Admins
can create and manage VMFS and vVol based datastores on their Primera arrays directly from VMware.
In addition to the plugins for vCenter, which you just examined, HPE provides an extensive management
and automation portfolio for integrating with VMware. You will look at much of this portfolio in Module 5,
which covers orchestration of management and monitoring. Over the next part of this module, you will
focus on the data protection portions of the portfolio, examining how HPE arrays integrate with VMware
Site Recovery Manager and also looking at HPE Recovery Manager Central for VMware.
The VMware vCenter Site Recovery Manager (SRM) is a plugin to the vCenter Server that enables you to
create disaster recovery plans for a VMware environment. The recovery plan automates bringing up VMs
in a recovery site to replace failed VMs at a primary site. Because such plans can be complex and require
precise ordering to function correctly, SRM provides a testing feature that lets admins test their plans in
advance. SRM also supports sub-site failover scenarios and failback to move services back to the primary
site again.
SRM can work in scenarios without stretched clusters (a stretched cluster has ESXi hosts in the same
cluster at two sites), in which case it brings VMs back up on a new cluster after some downtime. As of
version 6.1 SRM can also work with stretched clusters.
Figure 3-32: HPE Nimble and Primera array benefits for SRM
SRM requires storage array replication to ensure that VMs can access the correct data at the recovery
site if the primary site fails.
Both HPE Nimble and HPE Primera arrays support Storage Replication Adapters (SRAs) for SRM. These
SRAs integrate the arrays' volume replication features with SRM. The Nimble SRA brings the inherent
efficiency of Nimble replication. Nimble also supports zero-copy clones for DR testing. In other words,
Nimble can create the clones without copying any data, making them highly space efficient and fast to
create.
The Primera SRA supports a broad range of features:
• Synchronous, asynchronous periodic, and asynchronous streaming replication (Remote Copy [RC])
modes
• Synchronous Long Distance (SLD) operation in which an array uses synchronous replication to a
secondary array at a metro distance and asynchronous replication to a tertiary array at long distance
• Peer Persistence with synchronous replication and 3 Data Center Peer Persistence (3DC PP) with
SLD
• VMware SRM stretched storage with 2-to-1 remote copy
Refer to the VMware Compatibility Site to look up the SRA versions compatible with various SRM
versions.
You should also be aware that VMware SRM v8.3 has added support for vVols. Now SRM can replicate
and restore vVols and include vVols in DR plans. When companies use SRM with vVols, SRM can handle
the replication natively and seamlessly. No SRA is required.
HPE provided day 0 integration with this feature on Nimble and has also added support for HPE Primera.
Companies can use SRM with vVols on the HPE storage arrays in a vSphere 6.5/6.7 or 7 environment.
Because SRM is so important to companies, the ability to use vVols with SRM will encourage many more
enterprises to start using vVols and leveraging the other benefits of this technology. HPE remains one of
the few vendors to support SRM with vVols, positioning HPE storage well in the VMware space.
HPE Recovery Manager Central (RMC) and RMC for VMware (RMC-V)
overview
Figure 3-33: HPE Recovery Manager Central (RMC) and RMC for VMware (RMC-V) overview
Next, you will look at HPE Recovery Manager Central. RMC is a software solution for integrating HPE
storage arrays with HPE StoreOnce Systems. RMC enables customers to enhance array-based
snapshots, which they love for their ease and speed, but which do not provide true 3-2-1 data protection,
as they are stored in a single location. With RMC, snapshots are easily copied to StoreOnce and even to
the cloud for painless backup and recovery.
RMC can protect several types of applications, including SQL. For this course, though, your main focus is
on RMC for VMware (RMC-V). RMC-V provides backup and replication for VMware environments. It
enables application-consistent and crash-consistent snapshots of VMware virtual machine disks and
datastores. Backups are stored on a StoreOnce system and can be restored to the original or a different
HPE storage array. With HPE StoreOnce Catalyst Copy, customers can even copy backups to a remote
StoreOnce Catalyst or to the cloud.
RMC-V 6.3 supports both HPE Primera and HPE Nimble arrays.
Step 1
vCenter tells the ESXi host to freeze the VMs, and a snapshot is taken of the datastore.
Figure 3-34: One RMC-V copy policy to orchestrate: Array Snapshot, Express Protect, Catalyst Copy
and Cloud Copy
Step 2
The RMC-V plugin contacts RMC, which contacts the Primera array. The array uses Express Protect to
copy the snapshot as backup data to the HPE StoreOnce-A Catalyst.
Figure 3-35: One RMC-V copy policy to orchestrate: Array Snapshot, Express Protect, Catalyst Copy
and Cloud Copy
Step 3
The StoreOnce-A system uses Catalyst Copy to copy the data to the StoreOnce-B system. It uses Cloud
Copy to copy the data to HPE Cloud Bank, which is supported on Azure, AWS, and Scality.
Figure 3-36: One RMC-V copy policy to orchestrate: Array Snapshot, Express Protect, Catalyst Copy
and Cloud Copy
Step 4
Customers can define a variety of rules for each type of copy in their copy policy.
Figure 3-37: One RMC-V copy policy to orchestrate: Array Snapshot, Express Protect, Catalyst Copy
and Cloud Copy
You will now look at two more key distinguishing features for HPE storage solutions. HPE Nimble arrays
are cloud-ready with the ability to migrate data to and from HPE Cloud Volumes. And both HPE Nimble
and Primera benefit from the AI-driven optimization of InfoSight. The next several pages guide you
through the benefits of these solutions in more detail.
You have explored how HPE storage solutions help to protect the VMware environment on-prem. But
now it is relatively common for customers to keep at least some of their data in the cloud. How safe is
data there? Customers may face some additional challenges when they use cloud block storage such as
Amazon EBS or Azure Disks.
Lock-in
Customers do not want to be locked into services that are increasing in cost or that no longer make sense
for them. But once customers move their data into the cloud, it is difficult and expensive to move the data
out. Cloud providers often hit customers with egress charges if they want to remove their data.
HPE helps customers to surmount these challenges with HPE Cloud Volumes. HPE Cloud Volumes is a
suite of enterprise cloud data services that help customers unlock the potential of hybrid cloud.
Cloud Volumes Block helps customers move their data to the cloud to be near their cloud workloads with
greater ease and less risk. HPE Cloud Volumes Block provides as-a-service block storage for workloads
that run in Microsoft Azure or AWS. Customers can easily migrate their data from on-prem Nimble arrays
to Cloud Volumes Block and then attach the data to Azure or AWS services. Cloud Volumes Block stores
customers’ data in an HPE cloud, with locations that are strategically near Azure and AWS locations to
deliver low latency.
Here you see an example of how HPE Nimble arrays and HPE Cloud Volumes Block provide a simple
and consistent hybrid solution for a variety of workloads. On-prem HPE ProLiant DL servers and Nimble
arrays can support production databases on VMs and cloud-native apps on Kubernetes-managed
containers. The VMs use vVols and the Kubernetes containers use Persistent Volumes (PVs), both
provisioned dynamically on Nimble arrays. The company can have a hybrid solution that spans multiple
public clouds with database workloads in AWS and cloud-native apps in Google Cloud. The Nimble
arrays also hook into the cloud with bi-directional mobility to HPE Cloud Volumes.
Cloud Volumes Block helps customers achieve their goals for cloud storage. HPE also provides a solution
called Cloud Volumes Backup for backup and restoration use cases. Both solutions provide enterprise-
grade availability, ease of mobility, and visibility.
Enterprise grade
You manage Cloud Volumes through a simple web portal, just as you do with AWS or Azure, but it provides you with the enterprise-grade reliability you expect. Compare Nimble's proven six-nines storage availability and Triple+ Parity RAID protection with native cloud storage's three-to-four-nines uptime and high annual failure rates. Cloud Volumes delivers data durability that is millions of times better.
Its enterprise grade backups occur in seconds, not hours, so customers can back up their data as often
as they need. Nimble also supports instant clones for use cases such as testing, analytics, or bursting. In
addition, Nimble's efficient snapshots mean that customers are not paying for full copies, but just for
incrementally changed data, which typically adds just a few percentage points of overhead.
Ease of mobility
Cloud Volumes gives customers a faster on-ramp to the cloud without requiring drawn out data migration
projects. Customers can migrate data to the cloud without worrying about their infrastructure not being
compatible with the cloud. Cloud Volumes also enables easy mobility between cloud providers so
customers can use multiple clouds and avoid lock-in. If customers find that they need to switch providers,
they do not experience the pain of complex data migration or costly egress charges. They just choose the
new provider in Cloud Volumes, and Cloud Volumes automatically switches the connection to the new
cloud provider instantly, without moving a single byte of data. The same ease applies if customers decide to
move data back on-prem—no egress charges.
Global visibility
The Cloud Volumes portal allows customers to track current usage and estimate future costs easily.
Powered by InfoSight, Cloud Volumes gives customers visibility across the cloud and on-prem—without
requiring complex and expensive third party monitoring tools.
Figure 3-42: HPE InfoSight—Key distinguishing feature for the HPE SDDC
You cannot leave your examination of how HPE arrays make the infrastructure more software-defined
without examining HPE InfoSight. HPE InfoSight is the AI-driven engine behind HPE Nimble, HPE
Primera, and HPE server solutions, helping the data center to manage and monitor itself and leading to
79 percent lower storage operational expenses. It is a game-changer for customers, transforming the
support experience. With InfoSight, 86 percent of issues are automatically opened and resolved. Because
InfoSight can solve problems proactively before dire consequences occur, HPE storage systems can
deliver six nines or even 100% availability.
This figure illustrates the architecture for the InfoSight AI Recommendation Engine.
Predictive models
Good predictive models require good data. InfoSight has been collecting and correlating data from
millions of sensors every minute across many installed solutions for years. Because understanding why
applications are not performing as they should requires a broad view, InfoSight collects metrics across
compute and storage. VMVision lets customers choose to send vSphere data along with the other data
packages periodically sent to InfoSight. InfoSight analyzes and correlates that data with the rest, giving
customers deeper insight into their complete environment.
Good predictive models also require guidance, so InfoSight is also expert-trained by the PEAK team of
data scientists.
Recommendation
Too many competing solutions act as if giving customers visibility means giving them more data. But data
without guidance can leave admins with more questions than answers. If IOPS suddenly increase, for
example—what does that mean? Have application demands changed? Has something changed in the
infrastructure? Is it a normal fluctuation or something to worry about? InfoSight gives customers answers.
Its prioritization matrix helps them to understand what their real issues are.
Figure 3-45: Summary of HPE storage array benefits for VMware environments
Before moving on to the next topic, review the HPE storage solution benefits for VMware environments.
Application aware
vVols on Nimble and Primera enables storage VM-level awareness that helps customers to align storage
resources with VMs and their workload requirements. InfoSight also gives customers clear visibility into
VMs with VMVision.
Deeply integrated
HPE arrays provide full VAAI & VASA 1.0, 2.0 & 3.0 support. HPE Primera also supports VASA 4.0. HPE
Nimble and Primera provide SRAs to enhance SRM's disaster recovery capabilities. HPE also provides
plugins for vCenter to help customers manage the storage environment from vCenter.
Predictive
HPE InfoSight delivers predictive AI for the data center. It supports a broad array of HPE infrastructure,
including Nimble arrays, HPE Primera arrays, and HPE servers. Its ability to proactively solve issues and
help the data center manage itself represents a key value add for HPE solutions. InfoSight, along with
other technologies embedded in HPE storage solutions, helps HPE deliver six-nines uptime on HPE
Nimble and a 100% availability guarantee on HPE Primera. In this way, HPE storage helps to protect
critical VMs.
Leadership
HPE has partnered with VMware for over 20 years, delivering proven solutions from the data center to the
desktop to the cloud. HPE was the first vendor to support the vVols array-based replication capability that
was first available in vSphere 6.5, and one of only three vendors to support replication as of 2021. HPE
also supported vVols in SRM as soon as SRM 8.3 added that feature. Because replication and SRM
are key features for many enterprises, HPE storage provides the natural choice for companies who want
the benefits of vVols.
HPE continues to lead in the vVol space. HPE telemetry shows that HPE vVols support over 160,000
VMs as of April 2021.
Activity 3
Scenario
You are still in the process of helping Financial Services 1A migrate its vSphere deployment to HPE
Synergy, and you need to propose an HPE storage component of the solution. Customer discussions
have revealed a few key requirements. The customer is tired of endless issues with storage being a black
box that VMware admins have little insight into and that slows down provisioning processes. For the
upgrade, they want a storage solution that provides tight integration with VMware. Ideally, VMware
admins should be able to provision and manage volumes on demand.
Because Financial Services 1A runs mission critical services on vSphere, the company is also concerned
with protecting its own data, as well as its customers' data. The company's current backup processes are
too time consuming and complex, and the customer is concerned that the complexity will lead to
mistakes—and lost data.
In sizing the Synergy solution, you determined these total requirements for all of the clusters in the
vSphere deployment:
• Total IOPS = 13,000 write; 26,000 read
• Datastore Total = 40 TB
• Datastore Provisioned = 48 TB
• Datastore Used = 36 TB
Task
Prepare a presentation on the relative benefits of vSAN or an HPE storage array as the storage solution
for this customer. In your presentation, note the advantages and disadvantages of both solutions. Also
emphasize the particular distinguishing benefits of HPE for either solution.
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
Summary
This module has guided you through designing HPE storage solutions for VMware environments. You
focused in particular on how you can deploy SDS as part of a composable infrastructure with HPE
Synergy. You also learned about other HPE vSAN Ready Nodes, as well as about using HPE
storage arrays and the many benefits that these arrays provide for VMware environments.
Learning checks
1. What is one benefit of HPE Synergy D3940 modules?
a. A single D3940 module can provide up to 40 SFF drives each to 10 half-height
compute modules.
b. Customers can assign drives to connected compute modules without fixed ratios of
the number per module.
c. A D3940 module provides advanced data services like Peer Persistence.
d. D3940 modules offload drive management from compute modules, removing the
need for controllers on compute modules.
2. What is one rule about boot options for a VMware vSAN node deployed on HPE
Synergy?
a. The node must boot from a volume stored on the same D3940 module that supplies
the drives for vSAN.
b. The node must use HPE Virtual Connect to boot.
c. The node cannot boot using PXE.
d. The node can boot from internal M.2 drives with an internal P204i storage controller.
3. What is one strength of HPE Nimble and Primera for vVols?
a. They help the customer unify management of vVol and vSAN solutions.
b. They have mature vVols solutions that support replication.
c. They automatically convert VMFS datastores into simpler vVol datastores.
d. They provide AI-based optimization for Nimble volumes exported to VMware ESXi
hosts.
You can check the correct answers in “Appendix: Answers.”
Learning objectives
This module outlines options for making the network as software-defined as the rest of the data center.
The scenario for this course features a VMware environment, so in this module you will learn how to use
a combination of VMware and HPE technologies to virtualize and automate the network.
You will first learn about NSX and specifically NSX-T, which is the network component for VMware Cloud
Foundation (VCF). You will then look at using ArubaOS-CX switches as the underlay for the data center
and how Aruba NetEdit helps companies automate. Finally, you will briefly review Cisco ACI for cases in
which you need to integrate with this third-party solution.
After completing this module, you will be able to:
• Position HPE software-defined networking (SDN) solutions based on use case
• Design HPE SDN solutions
Rev. 21.31 | © Copyright 2021 Hewlett Packard Enterprise Development LP | Confidential – For Training Purposes Only
Module 4: Design an HPE Software-Defined Networking (SDN) Solution for a Virtualized Environment
Network virtualization is at the core of an SDDC approach. In this module, you will learn about strategies
for virtualizing and automating the network. You will learn how you can create software-defined network
management and control planes that let companies use GUIs and scripts to reconfigure the network to
support new workloads on the fly. This "network hypervisor" overlays the virtualization layer, if present,
and can be programmed to orchestrate network provisioning in sync with workload deployment.
Companies with highly virtualized environments can face issues with making the physical network as
flexible as they need.
Consider a simple example. A company might run a Web service on an ESXi cluster, shown here as
"compute cluster." The company wants to expand the number of Web service VMs, but needs more hosts
to support them. After evaluating the data center, IT finds a place for the new hosts—across a Layer 3
boundary in a new section of the data center. The networking team has a strict rule about terminating
VLANs at the ToR switch. Team members say that trying to change this will cause instability throughout
the data center. Traditionally, this restriction poses a problem because the company wants to keep VMs
of the same type in the same subnet.
Throughout this module, you will look at how network virtualization technologies can help companies
deploy workloads without having to consider the underlying physical topology.
You will also learn about how companies can increase network automation and orchestration so that they
can deploy new workloads, or move workloads, without long delays for network provisioning.
VMware NSX
You will now learn more about VMware NSX. While step-by-step implementation instructions and detailed
technology dives are beyond the scope of this course, by the end of this section, you should understand
the most important capabilities of NSX and be able to make key design decisions for integrating NSX into
your data center solutions.
VMware NSX
A brief look at the VMware NSX architecture will give you the foundation you need to understand the NSX
features. Review each section to learn about that component of the architecture.
Management plane
The management plane consists of the NSX Manager, which holds and manages the configuration. It
plugs into vCenter, as well as the NSX Container Plugin and the Cloud Service Manager.
NSX Manager
Admins can access the NSX manager through a GUI, as well as through a plugin to vCenter, and
configure and monitor NSX functions. The NSX Manager also provides an API, which enables it to
integrate with third-party applications. By allowing these applications to program network connectivity, the
NSX API provides the engine for wide-scale network orchestration.
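To make the API concrete, the following Python sketch lists and creates overlay segments through the NSX-T Policy API. It assumes a reachable NSX Manager, valid credentials, and the hypothetical segment name shown; field names can vary between NSX-T versions, so treat this as an illustration rather than a definitive integration.

import requests

NSX_MANAGER = "https://nsx-manager.example.local"   # hypothetical address
AUTH = ("admin", "VMware1!")                         # replace with real credentials or a session token

# List the segments that NSX Manager currently holds (Policy API).
resp = requests.get(f"{NSX_MANAGER}/policy/api/v1/infra/segments", auth=AUTH, verify=False)
resp.raise_for_status()
for segment in resp.json().get("results", []):
    print(segment["display_name"])

# Create (or update) a segment by ID; only a minimal body is shown here.
body = {"display_name": "web_front-end"}
resp = requests.put(f"{NSX_MANAGER}/policy/api/v1/infra/segments/web-front-end",
                    json=body, auth=AUTH, verify=False)
resp.raise_for_status()

A third-party orchestration tool would issue the same kinds of calls when it needs to program connectivity for a new workload.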
You deploy an NSX Manager together with a Controller in an NSX Manager Appliance VM. VMware
recommends deploying a cluster of three NSX Manager Appliances for redundancy.
Control plane
The control plane builds up MAC forwarding tables and routing tables.
NSX Controller
Each NSX Manager Appliance also includes an NSX Controller. The controllers form the Central Control
Plane (CCP). They perform tasks such as building MAC forwarding tables and routing tables, which they
send to the Local Control Plane.
Control plane objects are distributed redundantly across controllers such that they can be reconstructed if
one controller fails.
Data plane
The data plane consists of the transport nodes. They are responsible for receiving traffic in logical
networks, switching the traffic toward its destination, and implementing any encapsulation necessary for
tunneling the traffic through the underlay network. The data plane also routes traffic and applies edge
services.
Edge services
NSX-T provides a distributed router (DR) in the data plane for routing traffic directly on transport nodes.
However, some services such as NAT, DHCP, and VPNs are not distributed. A Services Router (SR),
which is deployed in an edge cluster, provides these services. The SR is also responsible for routing
traffic outside of the NSX domain into the physical data center network.
NSX-T also supports edge bridges, which can connect physical servers into the NSX networks.
You will now examine NSX-T features in more detail, starting with the network virtualization use case.
Network virtualization enables VMs to connect into a common logical network regardless of where their
hosts are located in the physical network. The physical network can implement routing at the top of the
rack without compromising the portability of VMs.
Overlay networking
NSX-T uses overlay networking to provide network virtualization. A brief discussion of overlay networking
in general will be useful.
When designing a data center network, network architects typically prioritize values such as stability,
load-sharing across redundant links, and fast failover. They have found that an architecture that routes
between each network infrastructure device delivers these values well. However, such an architecture
can make it harder to extend application networks wherever they need to go.
With overlay networking, the physical infrastructure remains as it is: scalable, stable, and load-balancing.
Virtualized networks, or overlay networks, lie over the physical infrastructure, or underlay network. An
overlay network can be extended without regard to the architecture of the underlay network. Companies
can then deploy workloads in any location, and the workloads can still belong to the same subnet and
communicate at Layer 2. VMware managers can also deploy overlay networks on demand, without
having to coordinate IP addressing and other settings with the data center network admins.
Overlay networking technologies are also highly scalable, typically offering millions of IDs for the virtual
(overlay) networks.
There are many strategies to build an overlay network. Here you are focusing on one of the most
common. Tunnel endpoints (TEPs) create tunnels between them. The tunnels are based on UDP
encapsulation. When a TEP needs to deliver Layer 2 traffic in an overlay network, it encapsulates the
traffic with a header specific to the overlay technology. It also adds a delivery header, which directs the
traffic to the TEP behind which the destination resides. The underlay network only needs to know how to
route traffic between TEPs, and has no visibility into the addressing used for the overlay networks.
Common overlay technologies include Virtual Extensible LAN (VXLAN), Network Virtualization using
Generic Routing Encapsulation (NVGRE), and Generic Network Virtualization Encapsulation (Geneve).
Geneve is a newer standard that supports the capabilities of VXLAN, NVGRE and other network
virtualization techniques; NSX-T uses this technology.
Like VXLAN, Geneve encapsulates L2 frames into UDP segments and uses a 24-bit Virtual Network
Identifier. In Geneve, however, the header is variable in length, making it possible to add extra
information to the header. This information can be used by the underlay network to decide how to handle
the traffic in the best way.
The Geneve header is also extensible. This means that it will be easier to add new optional features to
the protocol by adding new fields to the header.
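To make the encapsulation more concrete, this Python sketch packs the 8-byte Geneve base header with a 24-bit VNI, as described in RFC 8926. It illustrates the header layout only, not a complete tunneling implementation; the placeholder frame and VNI value are hypothetical.

import struct

def geneve_header(vni, protocol_type=0x6558, opt_len_words=0):
    # Version 0 (2 bits) plus option length in 4-byte words (6 bits).
    ver_optlen = (0 << 6) | (opt_len_words & 0x3F)
    flags = 0                              # O (control) and C (critical options) bits cleared
    vni_field = (vni & 0xFFFFFF) << 8      # 24-bit VNI followed by an 8-bit reserved field
    return struct.pack("!BBHI", ver_optlen, flags, protocol_type, vni_field)

inner_frame = b"\x00" * 60                 # placeholder for the original Layer 2 frame
overlay_payload = geneve_header(5001) + inner_frame
print(len(overlay_payload))                # 8-byte Geneve header + inner frame, carried in UDP between TEPs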
Technologies such as VXLAN and Geneve do not provide automation on their own. However, NSX-T
provides the orchestration layer, enabling admins to simplify and automate the configuration of overlay
networks.
Overlay segments
Transport zones
NSX-T uses transport zones to group segments. An overlay transport zone includes one or more overlay
segments, while a VLAN transport zone contains one or more VLAN segments.
NSX admins assign transport nodes to the transport zone, which makes the segments in that zone
available to those nodes. In this example, a compute ESXi cluster and an edge ESXi cluster have been
assigned to the overlay transport zone, "my overlays." Admins can then connect VMs running on those
clusters to the overlay segments in the transport zone.
A gateway (which consists of DR and SR components) can route traffic between the overlay segments in
the same zone.
NSX-T provides uplink profiles for defining the uplink used for transporting overlay traffic. It is important
for you to understand these settings because you need to coordinate them with the physical network
infrastructure.
The uplink profile defines a transport VLAN ID. The transport node's TEP component uses this VLAN to
communicate with other TEPs. For example, a transport node might use transport VLAN ID 100, and this
VLAN is associated with subnet 10.5.100.0/24 in the data center network. The transport node might
receive IP address 10.5.100.5, and it would send and receive encapsulated traffic for overlay networks
using this address.
To account for encapsulation, the uplink needs a larger MTU. The minimum MTU is 1600, but VMware
recommends at least 1700 to account for future expansions to the Geneve header.
The uplink profile also includes the names of active uplinks and standby uplinks (if any), as well as the
NIC teaming settings. The NIC teaming options are similar to those available for traditional VDSes. An
uplink can be a link aggregation group (LAG), as shown in this example.
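As a simple illustration of the settings you need to coordinate, the sketch below captures hypothetical uplink profile values in a Python dictionary and turns them into a checklist for the team managing the ToR switches. The profile name, VLAN, and teaming values are examples only.

uplink_profile = {
    "name": "overlay-uplink-profile",
    "transport_vlan": 100,       # VLAN the TEPs use to reach each other
    "mtu": 1700,                 # headroom for the Geneve encapsulation
    "teaming": {
        "policy": "LOADBALANCE_SRCID",
        "active_uplinks": ["uplink-1", "uplink-2"],
        "standby_uplinks": [],
    },
}

def underlay_checklist(profile):
    # Items the physical network team must mirror on the connected ToR switches.
    yield f"Tag VLAN {profile['transport_vlan']} on the server-facing ports"
    yield f"Set an MTU of at least {profile['mtu']} on every underlay device in the path"
    yield f"Match link aggregation to the NIC teaming policy: {profile['teaming']['policy']}"

for item in underlay_checklist(uplink_profile):
    print(item)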
NSX-T floods broadcast, unknown unicast, and multicast (BUM) traffic to ensure that all VMs, containers,
and other endpoints in the overlay segment receive them. Each segment can use one of two modes for
flooding.
Two-tier hierarchical mode is the default, and typically recommended, mode. The figure illustrates it.
Transport node 1 receives BUM traffic in overlay segment 110. It replicates the traffic and sends it to
every transport node that:
• Is attached to the same overlay segment
• Is in the same transport subnet as it (10.5.5.0/24 in this example)
In this example, only transport node 2 is in the same subnet, but in the real world, more nodes will
typically reside in the subnet.
Transport node 1 also sends one copy to an arbitrary node in each other subnet used by nodes attached
to this segment. In this example, transport nodes 3 and 4 are in 10.5.6.0/24. Transport node 1 sends the
BUM traffic to transport node 3. Transport node 3 then replicates the traffic and sends it to all the
transport nodes in its own subnet.
Alternatively, NSX-T can use headend replication mode, in which transport node 1 would send a copy of
the traffic to all other transport nodes attached to the overlay segment. Two-tier hierarchical mode
distributes the burden of replication and tends to reduce rack-to-rack traffic. (Note that NSX-T implements
mechanisms to ensure that MAC forwarding tables are programmed correctly, regardless of the mode.)
Unlike some modes supported in NSX-V, neither NSX-T mode requires the data center network to
support multicast routing.
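The replication decision can be sketched in a few lines of Python. The example assumes, for illustration, that every TEP sits in a /24 transport subnet and that the source node knows its peers' TEP addresses; it simply returns the TEPs that receive a direct copy of the BUM traffic in two-tier hierarchical mode.

from collections import defaultdict
import ipaddress

def two_tier_targets(source_tep, peer_teps, prefixlen=24):
    source_net = ipaddress.ip_network(f"{source_tep}/{prefixlen}", strict=False)
    local, remote = [], defaultdict(list)
    for tep in peer_teps:
        if ipaddress.ip_address(tep) in source_net:
            local.append(tep)                      # same transport subnet: send a copy directly
        else:
            net = ipaddress.ip_network(f"{tep}/{prefixlen}", strict=False)
            remote[net].append(tep)                # remote subnet: only one proxy gets a copy
    proxies = [teps[0] for teps in remote.values()]
    return local + proxies

# Transport node 1 (10.5.5.1) replicating toward nodes 2, 3, and 4:
print(two_tier_targets("10.5.5.1", ["10.5.5.2", "10.5.6.3", "10.5.6.4"]))
# ['10.5.5.2', '10.5.6.3'] -- node 3 then re-replicates to node 4 within 10.5.6.0/24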
You will now look at a simplified example of how NSX-T can alter the network architecture.
In this example, a company has an ESXi cluster called "compute cluster" with a VDS called
"Compute_VDS." Compute_VDS has a port group for "web_front-end" VMs and for "web_app" VMs. It
also has networks for vMotion and management traffic.
Now the company is deploying NSX-T. NSX-T will enable the company to virtualize the production
networks with the Web front-end and Web app VMs.
The company creates an overlay network for "web_front-end" and "web_app" and places them in an
overlay transport zone. They attach the compute cluster to that zone.
Now the company can remove the VLANs that used to be associated with these networks from the
Compute_VDS uplinks, as well as from the connected physical infrastructure. Instead the uplink carries
VLAN 100, which is the transport VLAN in this example. Even more importantly, admins can add new
overlay segments in the future without having to add corresponding VLANs and subnets in the physical
infrastructure.
Note that it is typically best practice to leave the management and vMotion networks in traditional VLAN-
backed segments. The same holds true for storage networks such as for vSAN or other iSCSI traffic.
You will now learn about how NSX-T fulfills the microsegmentation use case, helping customers to
control their virtualized workloads more easily and more flexibly. In this blog, VMware
outlines what it means by micro-segmentation. Read each section for a summary of the key features.
Topology agnostic
With traditional security solutions, traffic must pass through the firewall to be filtered and the firewall
location determines the extent of security zones. But as workloads become more portable, companies
need more flexibility in creating security zones based on business need, not location. NSX micro-
segmentation deploys an instance of the firewall to each host, enabling companies to implement
topology-agnostic controls.
Centralized control
While firewall functionality is distributed to the ESXi hosts, the firewall is controlled centrally. Admins
create security policies for their distributed services through an API or management platform, and those
policies are implemented everywhere.
NSX-T includes two types of firewall. The distributed firewall (DFW) empowers microsegmentation for the
complete NSX domain. Defined centrally, the DFW is instantiated on every transport node and filters all traffic
that enters and leaves every VM or container. An edge firewall is implemented on an edge node, and it filters
traffic between the NSX domain and external networks. The stateful firewalls use rules that should be
familiar to you from other firewall applications. Rules specify the source and destination for traffic, the
service (defined by protocol and possibly TCP or UDP port), a direction, and an action—either allow or
deny. However, NSX-T permits great flexibility in defining the source and destination, making it easy for
admins to group devices together based on the company's security requirements.
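The following Python sketch models a simplified rule of this kind and a first-match lookup. The group members, service, and rule shown are hypothetical, and a real DFW evaluates far richer object types, but the sketch shows how source, destination, service, direction, and action combine into a decision.

from dataclasses import dataclass

@dataclass
class Rule:
    source: set           # members of a source group, such as "web_front-end" VMs
    destination: set      # members of a destination group
    service: tuple        # (protocol, port), for example ("tcp", 8443)
    direction: str        # "in", "out", or "in_out"
    action: str           # "allow" or "deny"

def evaluate(rules, src, dst, service, direction):
    # Return the action of the first matching rule; anything unmatched is denied.
    for rule in rules:
        if (src in rule.source and dst in rule.destination
                and service == rule.service
                and rule.direction in (direction, "in_out")):
            return rule.action
    return "deny"

web_to_app = Rule({"web-01", "web-02"}, {"app-01"}, ("tcp", 8443), "in_out", "allow")
print(evaluate([web_to_app], "web-01", "app-01", ("tcp", 8443), "in"))   # allow
print(evaluate([web_to_app], "web-01", "app-01", ("tcp", 22), "in"))     # deny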
Security extensibility
NSX-T provides a platform for bringing the industry’s leading networking and security solutions into the
SDDC. By taking advantage of tight integration with the NSX-T platform, third-party products can not only
deploy automatically as needed, but also adapt dynamically to changing conditions in the data center.
NSX enables two types of integration. With network introspection, a third-party security solution such as
an IDS/IPS registers with NSX-T. A third-party service VM is then deployed on each ESXi host and
connected to the VDS used as the NSX virtual switch. The host then redirects all traffic from vNICs to the
service VM. The service VM filters the traffic, which is then redirected back to the VDS. Examples of
supported next-generation firewalls and IDS/IPSes are listed in the figure above.
The second type of security extensibility is guest introspection. Guest introspection installs a thin third-
party agent directly on the VM, and this agent then takes over monitoring for viruses or vulnerabilities on
the VM. The figure above lists examples of supported solutions in this area.
Companies can deploy vRealize, which fully integrates with NSX-T, to permit orchestrated delivery of
virtualized services, including compute, storage, and networking components, through ordered workflows
and API calls. Companies can create policies to govern how resources are allocated to services to ensure
that applications are matched to the correct service level, based on business priorities. IT can deliver a
private cloud experience, allowing users to obtain their own services through an IT catalog. The vRealize
solution also provides extensibility through an API, allowing customers to integrate the applications of
their own choice and use those applications to dynamically provision workloads. You will learn more
about integrating HPE OneView with vRealize in the next module.
You have explored the major use cases for NSX-T and understand at a high level how NSX-T provides
software-defined networking and security for your customer's VMware-centric data center.
While NSX-T is meant to be deployed over any underlay, that does not mean that the underlay is
immaterial to the success of the solution. The tunneled traffic still ultimately crosses the underlay network,
and issues there can compromise traffic delivery or network performance. Because different teams
usually manage the virtual and physical networks, no one team has all of the information that they need,
and IT staff can find it difficult to troubleshoot.
In short, the physical data center network matters. In the next section, you will learn how ArubaOS-CX
switches fulfill this role, integrating with and enhancing an NSX solution.
Aruba also provides an SDN solution called Aruba Fabric Composer, which provides tight integration with
VMware and enhanced visibility across physical and virtual networks. See Aruba training for more
information about Aruba Fabric Composer.
NSX + ArubaOS-CX
This section explains how to set up an ArubaOS-CX environment to integrate with NSX.
You just need to check a few settings on your ArubaOS-CX switches to ensure that they work well for the
NSX-T environment.
Determine the settings in the uplink profile for each transport node. You will need to match those settings in
the ToR switches that connect to those nodes. The ToR switches must support the transport node's
transport VLAN ID on the links connected to it. Typically, these switches will also be the default gateway
for that VLAN. Also make sure that the switch's MTU settings for this VLAN match those in the uplink profile.
The next page explains more.
Remember, too, the VLANs for any non-overlay networks, such as management, vMotion, and storage. The
physical infrastructure will need to be tagged for the correct VLANs.
Finally, make sure that the link aggregation settings sync with the VMware NIC teaming settings, both on the
overlay transport network and on other networks. You will generally deploy ToR switches in pairs for
redundancy. You should deploy ArubaOS-CX switches with VSX, which unifies the data plane between
two switches, but leaves the control plane separate. A LAG on a transport node can connect to both
switches in the VSX group. The switches use an M-LAG technology to make this possible.
You will now look at the MTU requirements in a bit more detail.
Standard Ethernet
The standard Ethernet payload or Maximum Transmission Unit (MTU) is 1500 bytes.
The Ethernet protocol adds a header and a checksum to the payload. In total, according to the IEEE
802.3 Ethernet Version 2, the default maximum frame size is 1518 bytes. The 802.1Q tag adds 4 bytes to
the standard Ethernet frame header, so the default maximum frame size is 1522 bytes.
Jumbo frames
Ethernet frames between 1500 and 1600 bytes in size are called baby giant frames (or baby
jumbo frames), and Ethernet frames up to 9216 bytes are called jumbo frames.
Jumbo frames can cause problems in the underlay network because all components in the network, from
end to end, must support them. That means careful planning and careful implementation. In other words, you
must increase the MTU on the ToR switches that connect to transport nodes and on all network
infrastructure devices in between.
Leaving the MTU at the default can cause problems as well. For instance, Geneve can add 50 or more bytes of headers. Adding bytes to an
already full payload could make the traffic exceed the MTU. The payload must then be sent in two
frames.
This would make the transport very inefficient. Some network components might even drop frames that
are too large, which would result in no communication at all.
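A quick calculation shows where the extra bytes come from. The option allowance below is an assumption for illustration; actual overhead depends on the options carried, which is why the 1600-byte minimum and 1700-byte recommendation leave headroom.

INNER_PAYLOAD  = 1500   # standard guest MTU
INNER_ETHERNET = 14     # original frame header carried inside the tunnel
OUTER_IPV4     = 20
OUTER_UDP      = 8
GENEVE_BASE    = 8      # fixed base header
GENEVE_OPTIONS = 8      # illustrative allowance; options can grow this further

overhead = INNER_ETHERNET + OUTER_IPV4 + OUTER_UDP + GENEVE_BASE + GENEVE_OPTIONS
print(overhead)                          # 58 bytes of encapsulation in this illustration
print(INNER_PAYLOAD + overhead)          # 1558: already above 1500, so the underlay MTU must grow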
You will now look at ways that Aruba, a Hewlett Packard Enterprise company, makes managing the
physical infrastructure simpler and more automated.
Network operators are often slowed down as they make configurations because they do not have all the
relevant information at their fingertips. For example, they might not know the IP address of a server or
what address is available on the management network for a new switch. And even expert operators can
make mistakes, which can cause serious repercussions for the network. Fully 74% of companies report
that configuration errors cause problems more than once a year.
ArubaOS-CX switches offer Aruba NetEdit, which provides orchestration through the familiar CLI. It gives
operators the intelligent assistance and continuous validation they need to ensure that device
configurations are consistent, compliant, and error free. IT operators edit configurations in much the way
that they are used to, working within a CLI, so no knowledge of scripting or retraining is necessary.
However, they create the configuration safely in advance with all the help tools they need. They can
search through multiple configurations and quickly find information such as the IP addresses that other
switches are using. They can also tag devices based on role or location. The editor also provides
validation so that a simple error does not get in the way of the successful application of a configuration.
Admins can then deploy the configuration with confidence. An audit trail helps admins easily track
changes for simpler change management and troubleshooting.
Beyond making life easier for operators, NetEdit delivers key business benefits to your customers. Read
the sections below to learn more.
Conformance
Change validation
Cisco ACI
While the solutions covered earlier are the preferred SDN solutions for HPE SDDCs, some customers
have Cisco entrenched as their data center networking solution. If you cannot dislodge Cisco in the
network, you can still win the compute and storage components of the SDDC and integrate them with
Cisco.
In Cisco ACI, Cisco Nexus 9000 series switches, deployed in a leaf-spine topology, provide the data
plane. They also provide the control plane, using OSPF as the underlay protocol and VXLAN as the
overlay protocol. However, management of the 9000 switches is completely taken over by Application
Policy Infrastructure Controllers (APICs). The APICs manage all aspects of the fabric. Instead of
configuring OSPF, VXLAN, VLANs, and other features manually, admins configure policies about how
they want to group endpoints and handle their traffic. The APICs then configure the underlying protocols
as required to implement desired functions.
For customers with VMware-centric environments, APICs can integrate with VMware.
Figure 4-26: Endpoint Groups (EPGs) and other key ACI components
In Cisco ACI, the endpoint group (or EPG) serves as the fundamental block for controlling endpoint
communications. It can act like a VLAN, VXLAN, or subnet; however, it is not exactly any of those things.
This map shows the components that relate to EPGs in the ACI policy universe. To learn more about
some of the key components, read about them below.
This is, of course, just a brief introduction to ACI. If you need more information, refer to Cisco
documentation.
Domain profile
The EPG is associated with one or more domain profiles. An access policy applies domain profiles to leaf
edge ports to control how traffic from endpoints is assigned to EPGs. The domain profile includes VLAN
instance profiles, which differ depending on the domain type. A physical domain might specify a specific VLAN ID
for the EPG. A VMM domain has a dynamic VLAN pool, which, through VMware integration, is presented
in VMware for configuration on VM networks.
Access
An access policy can include multiple domain profiles.
Application profile
The application profile helps admins to group workloads by application. The profile can be associated with
one or more EPGs, as well as with Attachable Access Entity Profiles (AEPs).
For traffic to flow, the leaf port needs an AEP that permits VLANs, and an appropriate EPG needs to be
applied to the port. Application profiles can help to correlate the two. When an AEP is applied to a port, the
EPG in its application policy is automatically applied as well.
Bridge domain
The bridge domain defines the Layer 2 boundary for communications. It is associated with one (or more)
subnets within a virtual routing and forwarding (VRF) instance. (The VRF enables the establishment of
completely separate routing domains.)
Activity 4
After moving to Synergy, Financial Services 1A's ESXi hosts are using the plan shown in this figure for
networking. The box on the left is a single ESXi host compute module, but the plan is the same for all of
the hosts in the clusters that you are examining.
The pairs of FlexNICs that connect to the Mgmt and vMotion VDSes each support a single network with
the same name as the VDS. The pair of FlexNICs that connect to the Prod VDS supports a Network Set
with multiple production networks.
Now the customer wants to implement NSX-T and move the production VLANs to overlay segments for
greater flexibility in extending clusters across multiple Synergy racks.
What are some of the considerations for integrating NSX-T with the Synergy networking? Consider
questions such as:
• How will the connections and networks on compute modules need to change?
• What settings will you need to check and synchronize with the switches at the top of rack?
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
Summary
In the module, you learned about NSX-T and how its overlay capabilities make data center networks
more flexible and aligned with virtualized workload requirements. You also learned how to use ArubaOS-
CX switches as the physical underlay and how to use Aruba NetEdit to automate management.
Learning checks
1. What benefit do overlay segments provide to companies?
a. They provide encryption to enhance security.
b. They provide admission controls on connected VMs.
c. They enhance performance, particularly for demanding and data-driven workloads.
d. They enable companies to place VMs in the same network regardless of the
underlying architecture.
2. What is one way that NetEdit helps to provide orchestration for ArubaOS-CX switches?
a. It provides the API documentation and helps developers easily create scripts to
monitor and manage the switches.
b. It lets admins view and configure multiple switches at once and makes switch
configurations easily searchable.
c. It integrates the ArubaOS-CX switches into HPE IMC and creates a single pane of
glass management environment.
d. It virtualizes the switch functionality and enables the switches to integrate with
VMware NSX.
You can check the correct answers in “Appendix: Answers.”
You will start with a standard virtual switch (or vSwitch), which is deployed on a single ESXi host. A
vSwitch is responsible for connecting VMs to each other and to the data center LAN. When you define a
vSwitch on an ESXi host, you can associate one or more physical NICs with that switch. The vSwitch
owns those NICs—no other vSwitch is allowed to send or receive traffic on them. You should define a
new vSwitch for every set of NICs that you want to devote to a specific purpose. For example, if you want
to use a pair of NICs for traffic associated with one tenant's VMs and a different pair of NICs for another
tenant's VMs, you should define two vSwitches. However, if you want the tenants to share physical NICs,
you should connect them to the same vSwitch using port groups to separate them.
In the vSphere client, adding a port group is called adding a network of the VM type. The port group
defines settings such as the NIC teaming policy, which determines how traffic is distributed over multiple
physical NICs associated with the vSwitch, and the VLAN assignment—more on that later. The port group
controls traffic shaping settings and other features such as promiscuous mode.
When you deploy a VM, you can add one or more vNICs to the VM, and connect each vNIC to a port
group. Each vNIC connects to a virtual port on exactly one port group on one vSwitch.
The figure above shows how the vCenter client presents the vSwitch and connected components.
Like a physical Ethernet switch, a vSwitch creates a MAC forwarding table that maps each MAC address
to the port that should receive traffic destined to that address. However, the vSwitch does not build up the
MAC table by learning MAC addresses from traffic. Instead the hypervisor already knows the VMs' MAC
addresses. The vSwitch forwards any traffic not destined to a virtual NIC MAC address out its physical
NICs.
The vSwitch also knows, based on the hypervisor, for which multicast groups VMs are listening. It
replicates and forwards multicasts to the correct VMs accordingly. (In vSphere 6 and above, you can
enable multicast filtering, which includes IGMP snooping, to ensure that the vSwitch always assesses the
multicast group memberships correctly). The vSwitch does, however, flood broadcasts.
The way that vSwitches handle unicasts and multicasts ensures better security. Because the switch does
not need to flood unicasts to unknown destinations, it does not ever need to forward traffic destined to
one VM's MAC address to another VM. This behavior also helps to prevent reconnaissance and eavesdropping
attacks in which a hacker overloads the MAC table and forces a switch to flood all packets out all ports.
The figure above provides an example of the traffic flow. Assume that VMs' ARP tables are already
populated. Now VM 1 sends traffic to VM 2's IP address and MAC address. The vSwitch forwards the
traffic to VM 2, based on the MAC forwarding table. When VM 3 sends traffic to a device at 10.2.20.15,
which is in a different subnet, VM 3 uses its default gateway MAC address as the destination. The default
gateway is not on this host, so the vSwitch forwards this traffic out its physical NIC.
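The forwarding decision can be modeled in a few lines of Python. The MAC addresses are hypothetical; the point is that the table is populated by the hypervisor, so a destination that is not a local VM simply goes out the physical NICs instead of being flooded to other VMs.

def vswitch_forward(dst_mac, mac_table, uplink="physical-NICs"):
    # The hypervisor programs mac_table with the MACs of VMs on this host.
    return mac_table.get(dst_mac, uplink)

mac_table = {
    "00:50:56:aa:00:01": "port-VM1",
    "00:50:56:aa:00:02": "port-VM2",
}
print(vswitch_forward("00:50:56:aa:00:02", mac_table))   # VM 1 to VM 2 stays on the host
print(vswitch_forward("00:50:56:bb:ff:01", mac_table))   # default gateway MAC leaves via the physical NICs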
VMkernel adapters
You can create a second type of network connection on an ESXi host—a VMkernel adapter. The
VMkernel adapter is somewhat analogous to a port group. However, instead of connecting to VMs and
carrying their traffic, it carries traffic for the hypervisor. A VMkernel adapter can carry all the types of traffic
that you see in the figure above. When you create the adapter, you choose the function for which the
adapter carries traffic. In the figure above, you are creating a VMkernel adapter for the ESXi host's
management connection. You also give the adapter an IP address.
The figure above shows how VMware shows the settings after you have created the VMkernel and
connected it to a switch.
You can make the same VMkernel adapter carry multiple types of traffic—you simply select multiple
types when you create the adapter. However, some functions, such as vMotion, should have a dedicated
adapter with its own IP address. In the past, admins preferred to dedicate a pair of 1GbE interfaces to
each VMkernel adapter. With 10GbE to the server edge so common now, though, you might connect
multiple VMkernel adapters to the same switch and consolidate traffic.
Implementing VLANs
VMware vSwitches define a VLAN for each port group and VMkernel adapter. Like a physical switch, the
vSwitch enforces VLAN boundaries, only forwarding traffic between ports in the same VLAN. A vSwitch
can take one of three approaches in defining the VLAN for a port group or VMkernel adapter. Read each
section to learn more about that approach.
For deployments with many hosts and clusters, defining standard vSwitches individually on each is
tedious and error prone. If an admin forgets to define a network on one host, moving a VM that requires
that network to that host will fail. A vSphere distributed switch (VDS) provides a centralized way to
manage network connections, simplifying administrators’ duties and reducing these risks. The
management plane for the VDS resides centrally on vCenter. There you create distributed port groups,
which include the familiar VLAN and NIC teaming policies. You also define a number of uplinks based on
the maximum number of physical NICs that a host should dedicate to this VDS.
You deploy the VDS to hosts, each of which replicates the VDS in its hypervisor. The individual instances
of the VDS hold the data and control plane and perform the actual switching. When you associate a host
to the VDS, you must associate a physical NIC with at least one uplink. Each uplink can be associated
with only one NIC, but if the VDS has additional uplinks defined, you can associate other physical NICs
with them. The multiple NICs act as a team much as they do on an individual virtual switch, using the
settings selected on the VDS centrally.
The VDS’s distributed port groups are available on the hosts for attaching VMs or VMkernel adapters.
Note that for VDSes, the VMkernel adapter attaches to a distributed port group, rather than directly to the
switch.
Learning objectives
In this module, you will explore orchestration tools for software-defined data centers (SDDCs). You will
first look at HPE OneView integrations with VMware vSphere. You will then consider the integration
between HPE InfoSight and VMware. Finally, you will review scripting and automation tools that integrate
with HPE OneView.
After completing this module, you will be able to:
• Provision and deploy an HPE SDI solution using orchestration tools
• Manage and monitor an HPE SDI solution
• Demonstrate an understanding of the HPE integrations for given automation tools and scripting tools
• Explain the benefits of HPE DEV resources
Rev. 21.31 | © Copyright 2021 Hewlett Packard Enterprise Development LP | Confidential – For Training Purposes Only
Module 5: Use Orchestration and Configuration Management to Deploy and Manage the HPE SDI Solution
Financial Services 1A has invested in a highly virtualized data center and taken steps to transform
compute, storage, and networking with software-defined technologies. But the company still needs help
bringing all of the components together. IT knows that it needs to respond to line of business (LOB)
requests more quickly. For example, IT would like to be able to deploy VMs more quickly. They would
also like to detect and resolve issues before they create outages.
Automation
Automation is creating a single task that can run on its own. Automated tasks can be combined to create
a sequence. Automation works in a single area, or domain (for instance, automatically
checking email, installing an operating system on a server, or an automated welding machine in a car
factory).
Creating an automated process takes time and money. But the benefit is that you only have to do it once:
after successful testing, the automated process can be used multiple times.
Automation can:
• Increase productivity—Once created and tested, the automated task can run repeatedly
• Increase quality and consistency—Automation ensures that tasks are performed identically. This will
result in consistent, high-quality results. Consistency also means that tasks will be performed in
ways that are needed to comply with corporate governance or legislation.
• Decrease turnaround times—By creating an optimal workflow and eliminating
unnecessary tasks, IT admins can complete tasks more quickly. They can also meet performance
goals within tighter budget constraints.
Orchestration
Orchestration starts with automation, but it takes the concept a step further. Orchestration is creating a
workflow of automated tasks to arrange, coordinate and manage IT resources.
Where automation works on a single domain, orchestration works on multiple domains. It can work on
the hardware, the middleware and the services that are needed on top of the infrastructure. The
orchestration tool coordinates all the tasks, like a conductor leading an orchestra.
As an example, an orchestration tool could provide a web portal for an end-user. When the end user
needs an IT resource, they can make a request in the portal. Once the request is approved (this can also be
an automated task), the orchestration tool starts the automated provisioning of hardware. When the
hardware is ready, the automated installation of OS and software services could be scheduled.
Customers can achieve a true software-defined data center (SDDC) by taking advantage of extensive
HPE OneView integrations with the VMware solutions. With OneView integration, VMware admins can
continue to use the VMware interfaces with which they are familiar but gain access to HPE’s deep
management ecosystem. The single-console access simplifies administration. IT can further reduce their
efforts by automating responses to hardware events. Customers can take control of the software-defined
data center (SDDC) by launching trusted HPE management tools directly from vCenter and proactively
managing changes with detailed relationship dashboards that extend across the virtual and physical
infrastructure. By automating hardware and virtualization together, IT can deliver on-demand server and
storage capacity.
Customers can also achieve a more stable and reliable environment with automation that enables online
firmware updates and workload deployment. They can also integrate information collected by OneView
into VMware vRealize Operations, Orchestrator, and Log Insight for deep analytics, automation, and
troubleshooting.
HPE provides several plug-ins for VMware integration. You will learn more about these plug-ins in this
module.
HPE OneView for vCenter (OV4VC) brings the power of HPE OneView to VMware environments. The
sections below summarize the key benefits.
Deploy faster
OV4VC simplifies on-demand provisioning. Template-based tools let customers:
• Leverage the HPE OneView automation engine
• Quickly and easily create or expand a VMware cluster
Figure 5-7: HPE OneView for VMware vCenter (OV4VC): Separation of server and storage integration
HPE OV4VC 9.6 and below supports both server and storage integration in vCenter. With HPE OV4VC
10, however, the plug-in includes support for servers only.
You can download OV4VC from the Software Depot.
Storage integration is provided in the HPE Storage Integration Pack for VMware. You will learn more
about this plug-in later in this course.
When upgrading from OV4VC 9.6, be aware that version 9.x backups cannot be restored using
version 10.x.
You deploy HPE OV4VC as a VM. The OV4VC VM must have access to vCenter and OneView, and you
must register it with vCenter. All vCenter clients connected to this vCenter Server can then access the
OV4VC views and features.
Licensing
OV4VC can be licensed with OneView standard or advanced licenses:
• Standard—Supports basic health and inventory features
• Advanced—Supports advanced features such as server profiles.
HPE Synergy includes the Advanced license, so no additional license is required when using HPE
Synergy.
Managed devices
With OV4VC 9.4 and above, all servers, enclosures, and Virtual Connect devices must be managed by
HPE OneView. OV4VC will report an error when trying to manage non-OneView managed devices. If
companies want to use OV4VC to manage devices that are not managed by OneView, they can use
OV4VC 9.3 (rather than upgrading to a later version).
As of the release of this course, supported devices include:
• HPE ProLiant BladeSystem c-Class
• HPE ProLiant 100, 300, 500, 700, or 900 series ML or DL servers
• HPE Synergy D3940 Storage Module
• HPE Synergy 12Gb SAS Connection Module
• HPE Synergy Server
Figure 5-9: HPE OneView Hardware Support Manager for VMware vLCM
HPE OV4VC 10.1 and above include an additional plug-in: HPE OneView Hardware Support Manager for
VMware vLCM. As the name suggests, this plug-in integrates with vLCM, providing one-click lifecycle
management for ESXi, HPE drivers and firmware, directly in the vSphere user interface. With the
OneView Hardware Support Manager plug-in, IT admins can:
• Set baselines for images and firmware versions
• Automatically check and validate that components meet the baseline
• Update components that do not comply
This plug-in supports any HPE Gen10 server certified for ESXi 7.0 and HPE OneView. In addition, one
HPE OV4VC instance supports multiple vCenters/OneViews and an external HPE firmware repository.
Hardware overview
The OneView Hardware views display detailed information about server processors, memory, and
physical adapters.
Firmware
This view shows the firmware version installed on every server component.
Ports
This view lists network adapters and helps admins correlate physical and virtual settings.
Network diagram
The network diagram helps admins set up and troubleshoot networking with a complete view of
connections between virtual switches, server adapters, Virtual Connect modules, and uplinks.
Enclosures
This view shows information about the enclosure in which the server is installed.
Remote support
Customers can use this view to check the status of the server's support services. You can use this view to
check the:
• Warranty expiration date for Server Hardware and Enclosures
• Remote Support contract type and status
HPE OV4VC also shows:
• Support/contract about to expire
• Support/contract already expired
The remote support page provides information about the Remote Support status. IT admins can use it to
create or manage a Remote Support case.
When the infrastructure that underlies VMware consists of an HPE Synergy or HPE BladeSystem solution
that uses Virtual Connect (VC) modules, customers can import VMware clusters into OneView. Admins
can then implement cluster-aware maintenance on the clusters from OneView. Integrating management
within OneView enables admins to automate tasks that would otherwise require hopping between tools.
For example, admins can choose to grow a cluster, and OneView handles all the steps from deploying
the OS to adding the host to the cluster. Similarly, admins can use OneView cluster management to
shrink a cluster, check cluster members' consistency with the server profile template, and apply cluster-aware
firmware updates.
Process initiation
IT admins initiate the grow cluster process in the Grow Cluster wizard. The cluster is associated with a
server profile template and OS deployment plan, which already define many required settings, including
the OS build plans that are stored and managed on OV4VC itself.
IT admins simply need to indicate the cluster, the new hardware, and the networking settings for the new
host. The networking settings can include vDS settings for particular functions such as management, FT,
and vMotion. They can also configure multi-NIC vMotion.
Firmware baseline
Admins can choose the new firmware baseline in the HPE Server Hardware tab on vCenter—no need to
jump to OneView to make edits there.
Figure 5-18: HPE OV4VC benefits: Host and cluster consistency check and remediation
Consistency check
Admins can easily determine which hosts are not on the new firmware by running a consistency check
against the selected server profile template. HPE OV4VC also supports clusters not managed by HPE
OneView Cluster Profiles, but automated remediation is not available for these clusters.
HPE OV4VC enhances VMware's HA capabilities to prevent downtime. When selected as a partial failure
provider in the cluster's HA settings, OV4VC monitors hosts' health and notifies vCenter of impending
issues on a host. Admins can choose from a broad range of failure conditions for OV4VC to monitor,
including issues with memory, storage, networking adapters, fans, and power. When OV4VC informs
vCenter of an issue, the cluster can then move VMs to other hosts or take another remediation action, as
specified in the cluster HA settings. In this way, workloads move to a fully operational host before a
hardware issue causes an outage.
As mentioned earlier, HPE OV4VC 10 supports only server integration. Storage integration is provided in
the HPE Storage Integration Pack for VMware. This plug-in provides context-aware information about
HPE storage solutions and integrates the virtual and physical infrastructure.
With HPE Storage Integration Pack for VMware vCenter, admins can access context-aware information
about HPE Storage within vCenter. They can view storage information such as:
• Health status
• Storage volumes and paths
• Performance details
• Alerts
They can also provision their HPE storage, completing tasks such as:
• Create, delete, and expand datastores
• Create VMs from a template
• Switch primary and standby roles for Peer Persistence
Admins can perform operations, such as:
• Set up quality of service policies on VMFS datastores
• Restore snapshots
• Check configurations to ensure they meet best practices
You now understand how HPE OneView enhances what customers can do with vSphere. You will next
examine VMware vRealize Suite and then HPE plugins for it.
As an optional add-on for VCF, the VMware vRealize Suite transforms the SDDC into a true private cloud.
It enhances the intelligence of operations across the SDDC. Users can now obtain services through easy-
to-use catalogs. And cloud costing capabilities enable customers to track and optimize utilization across a
multi-cloud environment.
Read the sections below to learn more about the vRealize solutions that make all of this possible.
vRealize Automation
vRealize Automation provides a self-service catalog that allows users to select services in the private and
public clouds. With vRealize Automation, customers can dramatically accelerate workload delivery while
allowing IT to maintain control of the environment.
Customers have three options for purchasing the vRealize Suite: Standard, Advanced, and Enterprise. All
three options include the Lifecycle Manager, Log Insight, and Operations. However, the Standard and
Advanced Suite provide the Advanced version of vRealize Operations while the Enterprise Suite features
the Enterprise version of this component. As compared to the Advanced version, the Enterprise version
of vRealize Operations provides performance monitoring, analytics, remediation, and troubleshooting
over more extensive hybrid cloud and multi-cloud environments, as well as containerized environments. It
also includes application, database, and middleware monitoring.
Only vRealize Suite Advanced and Enterprise deliver the private cloud features, including vRealize
Business for Cloud and vRealize Automation (vRA). vRA also comes in an Advanced or Enterprise
version. Both versions provide a self-service catalog with a variety of IaaS and other services. Both also
support multi-vendor virtual, physical, and public cloud services, but vRA Enterprise adds application
authoring capabilities.
You will now look at the HPE plugins for vRealize Suite components, starting with Log Insight.
HPE provides two content packs for vRealize Log Insight. The free content packs add dashboards,
extracted fields, saved queries, and alerts that are specific to the server and storage hardware. HPE
OneView for VMware vRealize Log Insight dashboards summarize and analyze log information from iLO
and Onboard Administrator (OA). The StoreFront Analytics Content Pack for vRealize Log Insight adds
dashboards and information specific to 3PAR. With operational intelligence and deep visibility across all
tiers of their IT infrastructure and applications, admins have a more complete picture of all the factors
behind performance and possible issues. They can troubleshoot and optimize more quickly using Log
Insight's intuitive, easy-to-use GUI to run searches and queries. And analytics help admins to find the
patterns behind data.
HPE OneView for vRealize Operations enhances the solution’s monitoring capabilities, helping customers
to gain visibility into their complete environment and solve problems more quickly. Read the examples
below to see what the HPE integration adds.
Infrastructure view
Admins can browse through the infrastructure tree, checking each device’s health and efficiency. Risk
alerts are clearly shown, ready to grab admins’ attention.
Risk details
Admins can drill in on alerts to quickly discover potential issues for faster troubleshooting.
Dashboard
The view on the left shows the relationships between VMware and HPE OneView resources. On the right,
admins can click to open alerts and health trees. Metric graphs show historical data so that admins can
easily track trends over time.
VMware vRealize Orchestrator (vRO) helps customers to automate complex IT tasks and standardize
operations with workflows. A library of building block actions defines functions such as powering on or
stopping a VM. A wide array of plug-ins, including third-party ones, define various actions. Admins can
easily drag and drop actions to define a workflow, which ensures repeatable and reliable operations. The
workflow can feature logical constructions such as if/then statements or an order to wait for a particular
event to occur.
Admins can create workflows using the vRealize Orchestrator Client.
vRA + vRO
However, vRO reveals its true power by making its workflows available to other VMware solutions, such
as vSphere and vRA, which use the workflows as part of their orchestration functions. Here you see how
vRA, in particular, interacts with vRO. vRA communicates with vRO through vRO's RESTful API. vRA
can invoke vRO workflows that execute when users select a particular service from the self-service
catalog. In this way, vRA and vRO work together to provide IT services and lifecycle management for
private and hybrid cloud services.
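To make the interaction concrete, the following Python sketch shows how an external caller such as vRA might start a vRO workflow through that RESTful API. It is illustrative only: the vRO hostname, workflow ID, credentials, and input parameter are assumptions, and the /vco/api/workflows/{id}/executions path reflects the commonly documented vRO REST interface rather than a specific customer configuration.

import requests

VRO_HOST = "https://vro.example.local:8281"          # hypothetical vRO appliance address
WORKFLOW_ID = "1234abcd-0000-0000-0000-000000000000"  # hypothetical workflow ID

# vRO workflow inputs are passed by name and type in the request body
body = {
    "parameters": [
        {"name": "vmName", "type": "string",
         "value": {"string": {"value": "web-01"}}}
    ]
}

response = requests.post(
    f"{VRO_HOST}/vco/api/workflows/{WORKFLOW_ID}/executions",
    json=body,
    auth=("vro-user", "password"),   # illustrative credentials
    verify=False)                    # lab only: self-signed certificate
response.raise_for_status()

# A successful call returns the location of the new workflow execution
print("Execution started:", response.headers.get("Location"))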
HPE offers two plugins for vRO. The HPE 3PAR plug-in for vRO provides predefined workflows for 3PAR
storage while the OneView for vRO (OV4vRO) plug-in offers actions and workflows for vRO to perform
server-focused functions. While offering many predefined workflows and actions, OV4vRO also permits
admins to customize and extend the workflows so that they can automate based on their company’s
needs. The HPE plug-ins for vRO help admins to easily automate the lifecycle of OneView-managed
hardware from deployment to firmware update to other maintenance tasks. Customers can make their
existing workflows more powerful by incorporating HPE OneView’s advanced management capabilities
within them. For example, a cloud service might allow deployment of the workload on bare metal servers.
A vRO workflow could manage the service deployment with OV4vRO furnishing the capability for tasks
such as deploying an OS to the bare metal server and updating its drivers.
The following sections provide examples of the workflows and actions supported by OV4vRO.
Server-focused workflows
HPE OV4vRO has workflows for performing actions across clusters, configuring OneView instances,
managing hypervisors and clusters imported in OneView, managing hardware on single servers, and
using utilities to customize workflows.
Most workflows work on any HPE-managed servers, but the following workflows require blade or Synergy
modules connected to VC modules or HPE Composable Cloud: Import Hypervisor, Import Hypervisor
Cluster Profile, and Configure Host Networking from Server Profile.
Server-focused actions
This figure shows many of the predefined actions for managing HPE Servers' lifecycle.
OV4vRO also provides workflows for automating the lifecycle of OneView-managed 3PAR systems.
Admins can automate storage provisioning, as well as configuration of Remote Copy—a technology that
helps to provide disaster recovery by replicating volumes to remote systems.
HPE InfoSight gives customers a new way to approach troubleshooting and optimization. Collecting
millions of pieces of data a day from deployments across the world, this AI-based solution can detect
potential issues, and recommend solutions, before the issues grow into larger problems. InfoSight
extends across the HPE storage, compute, and hyperconverged infrastructure. And it extends to the
virtualization layer. With the breadth and depth of insight delivered by InfoSight, customers can home in
on the true causes of issues and better optimize their infrastructure.
HPE InfoSight’s integration with VMware provides greater insight into the environment. InfoSight can look at
the entire VMware infrastructure and provide detailed advice on both optimizing the environment and
mitigating and avoiding problems. By analyzing its cross-stack telemetry, InfoSight delivers in-depth
VMware analysis and troubleshooting. As shown in this figure, InfoSight can report
symptoms of an issue, pinpoint the root cause, and then suggest a solution.
In addition, InfoSight’s cross-stack analytics identifies VM noisy neighbors. Noisy neighbors are VMs or
applications that consume a disproportionate share of resources and cause performance issues for other VMs. By
identifying high-consuming VMs, InfoSight allows companies to take corrective actions.
InfoSight provides information about resource utilization, providing visibility into host CPU and memory
usage. InfoSight not only identifies latency issues but also helps IT admins pinpoint the root causes
across hosts or storage. It also reveals inactive VMs, allowing IT admins to repurpose or reclaim their
resources. IT admins can also view reports showing the “top performing” VMs, based on IOPS and
latency.
By providing this detailed visibility into their environment and offering recommendations for optimizing
performance and remedying issues, InfoSight helps IT admins better manage their environment, ensure
they have the necessary resources, and optimize the distribution of workloads across the physical
infrastructure.
Consider just one example of how InfoSight enables admins to discover the root cause of an issue. Suppose
a customer's applications are experiencing excessive latency. InfoSight VMVision pulls data
from the VMware environment and correlates it with data from across the infrastructure. Admins no longer
need to run extensive tests to determine whether the storage, network, or another factor lies behind the
latency. They can pinpoint the true root cause and then take steps to resolve the issue.
With InfoSight VMVision admins can examine and compare performance for all VMs. A heat map helps
the admins to quickly detect which VMs are experiencing issues. InfoSight further helps admins with
explicit root cause diagnostics for the underperforming VMs. It even provides recommendations for
improving the performance.
GitHub
Git is a free and open-source distributed version control system. It can handle small to very large
projects. Git tracks the history of the projects that are stored in a repository.
GitHub is a website that uses Git for version control. GitHub is mostly used to publish software code, but
it can be used for other projects that need version control. GitHub hosts a large variety of code projects,
ranging from small on-premises tools to large cloud-based infrastructure projects.
Developers can place their code in a repository and can allow others to collaborate on their projects.
Projects can be public (for instance for open-source software), or private (for instance, to allow only
specific team members to work on a project).
HPE has more than 200 repositories on GitHub, varying from the OneView provider for Terraform, to a
project for HPE Azure Stack on HPE Nimble Storage. Use the following link to access the HPE page:
https://github.com/HewlettPackard
HPE DEV
HPE DEV is a website for developers in the HPE ecosystem. It is a hub that serves a community of
individuals and partners that want to share open-source software for HPE products and services. It offers
numerous resources to help developers learn and connect with each other, such as blogs, monthly
newsletters, technical articles with sample code, links to GitHub projects, and on-demand workshops.
You can find HPE DEV at:
https://developer.hpe.com
The HPE APIs are a critical component of HPE's ability to deliver a software-defined infrastructure.
HPE uses a Representational State Transfer (REST) model for its APIs. REST is an architectural style for
web services that allows clients to use basic HTTP commands to perform create, read, update, and delete
(CRUD) operations on resources. When an application provides a RESTful API, it is called a RESTful application.
A RESTful API makes infrastructure programmable in ways that CLIs and GUIs cannot. For example, a
CLI show command provides output that an admin can read, but a script cannot. On the other hand, a
simple GET call to an API returns information in JSON format that is easily extractable for a script.
With RESTful APIs, developers can use their favorite scripting or programming language to script HTTP
calls for automating tasks such as inventorying, updating BIOS settings, and many more. Because
RESTful APIs provide a simple, stateless, and scalable approach to automating, they are common to
many modern web environments, and customers' staff should be quite familiar with developing against them.
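As a simple, hedged illustration of that difference, the short Python sketch below issues a single GET request and parses the JSON response. The iLO address, credentials, and the Redfish-style /redfish/v1/Systems/1 path are assumptions for the example, not values from a specific HPE environment.

import requests

ILO = "https://192.0.2.10"                          # hypothetical iLO address

# One GET call returns structured JSON instead of screen-formatted text
response = requests.get(
    f"{ILO}/redfish/v1/Systems/1",
    auth=("Administrator", "password"),             # illustrative credentials
    verify=False)                                   # lab only: self-signed certificate
response.raise_for_status()

system = response.json()                            # easily extractable by a script
print(system.get("Model"), system.get("PowerState"))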
Redfish is an open, standards-based RESTful API sponsored and controlled by the Distributed Management
Task Force (DMTF), an industry-recognized standards body. Redfish provides a schema for managing
heterogeneous servers in today’s cloud and web-based data center infrastructures, helping organizations
to transform to a software-defined data center.
In accordance with HPE’s commitment to open standards, the iLO API, used by OneView and other tools
to manage HPE ProLiant servers, is Redfish conformant. The Redfish API offers many advantages over
earlier interfaces such as IPMI because Redfish is designed for security and scalability.
The iLO RESTful API in iLO 5 has several new features, some of which keep it in conformance with the
latest Redfish developments and some of which add to its management capabilities. New features for
Gen10 include the ability to configure Smart Array controllers through the API and to stage and update
components using the iLO Repository.
A Software Development Kit with libraries and rich sample code helps developers to easily create scripts
for their own environments. Refer your customers to https://developer.hpe.com.
HPE has a Software Development Kit (SDK) for OneView that is available for Python. The SDK provides
a pure Python interface to the HPE OneView REST APIs.
The figure shows an example of the SDK used in a Python script. The script adds a ProLiant server to
OneView. The referenced Python script (add-server.py) instructs OneView to connect to the iLO of the
server (172.18.6.31) using the credentials Administrator/HP1nvent. Then the script will instruct OneView
to add the server to the OneView database.
https://github.com/HewlettPackard/oneview-python
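To give a sense of what a script like the referenced add-server.py might look like, here is a minimal sketch built on the oneview-python SDK. It is illustrative only: it assumes the SDK's OneViewClient class and server_hardware.add() call, and the OneView appliance address and credentials are placeholders; only the iLO address and credentials come from the example above.

from hpeOneView.oneview_client import OneViewClient

# Connection details for the OneView appliance (illustrative values)
config = {
    "ip": "oneview.example.local",
    "credentials": {"userName": "administrator", "password": "secret"},
}
oneview_client = OneViewClient(config)

# Ask OneView to contact the server's iLO and add the server to its database
options = {
    "hostname": "172.18.6.31",
    "username": "Administrator",
    "password": "HP1nvent",
    "licensingIntent": "OneView",
    "configurationState": "Managed",
}
server = oneview_client.server_hardware.add(options)
print("Added server:", server.get("name"))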
Understanding Python
Python is a high-level programming language that can be used to work with automation and
orchestration tools. One of the design goals of Python is code readability; the structured use of
whitespace makes Python code easy to read.
Another benefit of Python is its extensibility: the core language can be extended with
custom modules. Python is also available for many operating systems.
Many cmdlets are built into PowerShell, and additional cmdlets can be added by installing modules. HPE
provides PowerShell modules for many HPE platforms. One of these modules is the PowerShell module
for HPE OneView.
This PowerShell module provides access to the HPE OneView REST API with cmdlets that can be used
interactively like a CLI or in scripts. It installs dozens of new cmdlets. An example of one of these new
cmdlets is the Copy-HPOVServerProfile cmdlet.
The Copy-HPOVServerProfile cmdlet copies a OneView server profile to a new profile with a different name.
For example:
Copy-HPOVServerProfile -SourceName "Profile 1" -DestinationName "Profile 2"
Cmdlets
Cmdlets are specific to PowerShell. A cmdlet is a lightweight PowerShell command designed to perform a
single function.
PowerShell cmdlets have a simple syntax. They use a verb-noun structure: the verb specifies the kind of
action to take, and the noun specifies the type of object on which the action takes place.
Cmdlets can run without parameters; however, some cmdlets need additional parameters to run properly.
For example:
• Get-Command: Gets all the cmdlets that are registered in the PowerShell environment
• Get-Help Get-Process: Displays help about the Get-Process cmdlet
Chef, Ansible, and Puppet are examples of configuration management (CM) tools. They provide an
environment for automated provisioning and configuration of IT resources, such as VMs, containers,
applications, and patches.
One of the goals of the DevOps concept is to shorten software development cycles. Automation of all the
components in building software (from integration, to test, to release, to deployment) is essential in
DevOps. This is the reason that automation and automated CM tools are an essential component in a
DevOps and hybrid cloud environment.
Automation and CM tools change the way departments work together and change the human workflow.
Application developers can write, test and deploy applications without having to wait for the operations
department to supply resources.
In short, the benefits of CM tools such as Chef, Ansible and Puppet are:
• Reusability: Create reusable building blocks that can be used in multiple stacks
• Speed: Validate code on non-critical systems with fast feedback loops to catch issues earlier
• Uptime: Ensure changes are tested against downstream dependencies to prevent unforeseen
failures in production
• Common workflow: Ensure all changes are tested and approved with the same rigor and speed.
The configuration management tools ensure changes are only deployed once properly approved.
• Increased reliability: For instance, after integration with HPE OneView, bare‑metal servers are
configured the same way every time and maintain infrastructure compliance with automated rolling
upgrades.
• Automation and compliance: Automatically ensure that code matches the state of the
infrastructure. Automatically test that systems remain in compliance. Automatically test, review, build,
and deploy changes on commit
The unified API in HPE OneView provides a programmatic interface for higher-level configuration
management and orchestration tools. HPE OneView brings infrastructure as code to bare-metal through
templates that unify the processes for provisioning compute, connectivity, storage, and OS deployment in
a single step, just like provisioning VM instances from the public cloud.
Chef enables rapid and reliable deployment and updates of application infrastructure, using recipes that
can be versioned and tested just like application software.
HPE OneView can act as an infrastructure provider for Chef, bringing the speed of the public cloud to
internal IT processes. By using Chef and OneView in combination, developers can provision hardware
resources using infrastructure as code.
Chef also offers application automation (with Habitat), helping customers stand up applications, maintain
them, and correct issues.
Chef recipes
To automate the infrastructure, Chef administrators write declarative scripts, called recipes, which are
stored in cookbooks. Chef recipes are relatively easy to write and can be shared among IT organizations
through the Chef Supermarket.
The cookbooks can be used to automate software processes. Chef recipes are more efficient and reliable
than standard shell scripts or manual processes, because they are repeatable, testable, and versionable.
Provisioning hardware stacks is a multi-step process, requiring many tools to manage provisioning tasks.
Provisioning infrastructure can easily become a bottleneck in the continuous delivery of applications. HPE
OneView manages all provisioning functions through a single API, leveraging pre-existing profiles and
templates.
Binding Ansible with HPE OneView allows DevOps to introduce physical provisioning into the same
playbook used to deploy the software stack. Adding an additional line of code to the Ansible playbook
directs HPE OneView to provision hardware and to load the operating system using specified templates.
The Ansible role for HPE OneView is available for download at:
https://github.com/HewlettPackard/oneview-ansible
- hosts: webservers
  remote_user: root
  roles:
    - base-apache
    - web
HPE OneView and Ansible provide a software-defined approach to the management of the entire
hardware and software stack, giving IT the ability to deliver new or updated services on an as-needed or
on-demand basis.
A library of modules can reside on any machine, and there are no servers, daemons, or databases
required. Typically, admins work with their favorite terminal program, a text editor, and a version control
system to keep track of changes to the content.
Ansible playbooks
Ansible uses YAML (YAML Ain't Markup Language), a simple, human readable markup language, in
playbooks to automate and orchestrate the build, deployment, and management of an application’s
software stack.
Ansible playbooks can be version-controlled and tested just like application software, providing
repeatable and reliable installations and upgrades.
Using a simple Ansible playbook, like the one shown in the figure, DevOps can automate a task such as
the creation of a web server cluster with a load balancer. This playbook assumes that servers are ready
with hardware configured and the OS installed and that they are waiting to land the application stack.
Puppet Forge
https://forge.puppet.com/hewlettpackard/oneview
Puppet has its own language, also called Puppet. Puppet is more than just a shell language, such as
Windows PowerShell, or a pure programming language, such as Python. Puppet uses a declarative,
model-based approach. In this way, Puppet can be used to define infrastructure as code and enforce
system configuration.
Puppet treats everything in the environment as data: the compute node’s current state, the desired end
state, and all the actions needed to move from one state to the other.
Each Puppet-managed server instance receives a catalog of all the resources and their relationships. It
compares that catalog with the system's current state and makes changes as necessary to bring the
system into accordance with the desired state.
Puppet code
The Puppet hierarchy lets you write relatively simple, re-usable code using the following:
• Classes: Blocks of Puppet code that are stored in modules for later use.
• Manifest: Puppet programs are called manifests. A manifest is a collection of classes.
• Modules: Manifests are stored in modules. Puppet modules are Puppet's fundamental building
blocks. To keep code reusable and clear, modules should act on the same technology type (for
instance, a module for Microsoft SQL or a module for Apache web server).
• Profiles: Profiles are classes that use multiple modules to configure a layered technology stack. For
example, you can create a profile to set up a web service, including load balancer etc.
• Roles: Roles are classes that wrap multiple profiles to build a complete system configuration. For
instance, a web server role specifies that the server should use standard profiles like “base operating
system profile” and “base web server profile.” In this example, the first profile could specify that the
server should run a specific version of Ubuntu, while the second profile could specify that it should
use NGINX.
This hierarchical approach makes data easier to use and re-use, makes system configurations easier to
read, and makes refactoring easier. Classes, defined types, and plugins should all be related, and the
module should be as self-contained as possible.
Puppet Bolt
Puppet Bolt is an open source tool that automates infrastructure maintenance. It is not so much about
getting or keeping a system in a desired state as about automating tasks that need to be executed on
an as-needed basis, for instance, stopping or starting a service, running an update, or running a
troubleshooting script.
Puppet Bolt can run on its own or be part of a larger orchestration tool.
Terraform providers
HashiCorp Terraform is not so much a configuration management tool like Chef, Ansible or Puppet, but
an infrastructure orchestration tool. Terraform can be used to create, manage, and update infrastructure
resources. These resources may be physical machines, VMs, network switches, containers, or others.
Almost any infrastructure type can be represented as a resource in Terraform.
Although Terraform is not a Configuration Management tool, it can use such tools as providers. The basic
idea is that Terraform is an orchestrator that uses providers to do the jobs they are good at.
The list of providers is long and ranges from large-scale cloud providers such as AWS, Azure and Google
Cloud, to tools like Chef and Puppet, to very specific providers such as HPE OneView. Each provider is
responsible for the underlying APIs and the interaction with the resources.
Terraform configuration
A Terraform infrastructure configuration is defined in .tf files, which are written in HashiCorp
Configuration Language (HCL) or JSON. The .tf files are simple to read and write.
Terraform supports variable interpolation based on different sources such as files, environment variables,
other resources, and so on.
The figure shows an example of a Terraform tf file that uses OneView as a provider.
Terraform apply
The terraform apply command is used to apply the changes. First, Terraform creates an execution
plan. The execution plan shows all the actions that are needed to bring the infrastructure into the desired
state. If the plan is created successfully, Terraform pauses and waits for approval. If needed, the plan
can be aborted, but if the plan looks good, it can be accepted and executed.
The following is an example of the terraform apply command being executed.
Mutable means liable to change. A mutable infrastructure is an infrastructure that can be modified or
updated. Server architectures traditionally have been mutable infrastructures. For instance, patches for
the OS can be deployed into the existing OS. This is very flexible but can cause inconsistencies in the
infrastructure as a whole. In particular, applying update after update can cause configuration
drift.
On the other hand, an immutable infrastructure is an infrastructure with resources that cannot be changed
once they are deployed. If anything needs to be changed or updated, a completely new instance of a resource
will be deployed. Containers are an example of a resource in an immutable infrastructure. In the cloud,
where new environments can be created in minutes, an immutable infrastructure could be a feasible
strategy.
Orchestration tools are focused on the end result, the desired state. If anything in the current state is
missing, the orchestration tool can automatically provision the missing resource. This is very useful in
environments that require a steady state. In this sense, an orchestration tool fits the concept of an
immutable infrastructure.
CM tools configure the resources in the environment. If there is a problem with a resource, a configuration
management tool will attempt to repair the resource instead of just replacing it. This fits the idea of a
mutable infrastructure.
In theory, the distinction between CM tools and orchestration tools is clear. In daily practice however, it
can be hard to decide whether a tool is a CM tool or an orchestration tool, or whether an infrastructure is
mutable or immutable.
For instance, Chef is a CM tool. But it can work with OneView to replace servers by applying server
profiles. Also, Chef can work with Docker containers to provision and replace complete container
resources. This could be seen as replacing a complete resource, as in an immutable infrastructure. By
using the OneView integration, Chef can also act on the hardware infrastructure layer, and thus act as an
orchestration tool.
Summary
This module has shown you the power of automation and orchestration. HPE OneView integrates with
vSphere vCenter and vRealize Suite. HPE InfoSight offers AI-driven insights and optimization for the
complete environment from the infrastructure to the VM. You also looked at various automation and
orchestration tools that your customers might be using so that you understand their role in an SDDC.
Learning checks
1. What is one benefit of HPE OneView for vRealize Orchestrator (OV4vRO)?
a. It integrates a dashboard with information and events from HPE servers into
vRealize.
b. It provides an end-to-end view of servers' storage (fabric) connectivity within the
vRealize interface.
c. It adds pre-defined workflows for HPE servers to vRealize.
d. It integrates multi-cloud management into the VMware Cloud Foundation (VCF)
environment.
2. Which is an option for licensing HPE OneView for vCenter (OV4VC)?
a. InfoSight licenses
b. Remote Support licenses
c. Composable Rack licenses
d. OneView licenses
3. What is one benefit of OV4VC that is available with the OneView standard license?
a. An easy-to-use wizard for growing a cluster from a single tool
b. Non-disruptive cluster firmware updates from within vCenter
c. An inventory of servers and basic monitoring of them in vCenter
d. Workflows for managing servers and storage
You can check the correct answers in “Appendix: Answers.”
Learning objectives
In this module, you will review the importance of emphasizing the benefits of HPE hyperconverged
solutions to your customers.
After completing this module, you will be able to:
• Given a set of customer requirements, position hyperconverged SDI solutions to solve the
customer’s requirements
• Given a set of customer requirements, determine the appropriate hyperconverged platform
• Explain the integration points between HPE hyperconverged solutions and VMware solutions
Module 6: Design an HPE Hyperconverged Solution for a Virtualized Environment
Customer scenario
A small community college is struggling to maintain its data center, which has grown organically over the
years. The data center has a lot of aging equipment that is difficult for the limited IT staff to manage. The
college has shifted some services to the cloud, and, while the college wants to maintain other services
on-prem, the customer has made simplifying the data center a priority. The customer has already begun
virtualizing with VMware; your contact originally brought you in to help with a server refresh to handle the
consolidated workloads.
In this discussion, you have discovered some more issues. The CIO wants to improve availability by
adding VMware clustering. He realizes that clustering requires shared storage, but the data center does
not have a SAN—and the CIO does not want to add one. The IT staff doesn't have the expertise to run a
SAN. The CIO also has received complaints about the organization's current manual processes for
backups. But—he tells you—he doesn't have the budget for another project at this point.
Some legacy hyperconverged vendors support either inline deduplication or post-process deduplication.
While their inline deduplication does have the intended effect of reducing IOPS and capacity demands on
their drives, it is CPU-intensive, taking power away from production VMs and reducing available IOPS.
Post-process deduplication does the same while also adding IOPS demands on the disk drives.
The HPE SimpliVity Data Virtualization Platform delivers inline deduplication and compression for all data
without compromising the performance of the application VMs running on the same hardware platform.
The HPE SimpliVity nodes look like standard x86 servers with components such as SSDs, DRAM, and
CPUs. And like any virtualized hosts, they run ESXi or Hyper-V.
But the Data Virtualization Platform provides simple software-defined storage (SDS), built into the
solution. In the logical architecture, it sits between the hardware and the hypervisor, abstracting the hardware
from the VMs and apps that are running on top.
The following sections summarize each part of the architecture.
Presentation Layer
The Presentation Layer interacts with the VMware hypervisor and presents datastores to the hypervisor.
From the point of view of hypervisors, and of the VMs and apps running on top of them, each datastore
appears to contain all of the data written to it. However, this layer does not itself hold any actual data or metadata.
This figure shows the Data Virtualization Platform in action. The figure simplifies a bit by collapsing the
two parts of the data management layer. As you see, the data management layer only writes to the disk
when a VM sends a write request with a new block. If the block already exists, the data management
layer simply updates metadata, and no IO actually occurs on the disk. Because the best IO is the one that
you don't have to do, HPE SimpliVity doesn't just dramatically reduce capacity requirements, it also
improves performance.
Storage IO reduction
In a legacy solution, workload IO makes up only a fraction of the total IO requirements. Snapshots, data
mirroring, and backups all add IOs too. With its ultra-lightweight approach to protecting data and by
applying inline deduplication for all data, HPE SimpliVity helps customers to reduce their storage IO and
improve performance with less infrastructure.
Read the following sections to see how SimpliVity makes IO disappear.
Backups
When backups run, any data that has been changed since the last backup (at the very least) needs to be
read off the array and sent across the network to the backup storage location. In traditional solutions, this
creates a major spike every night, which is the reason backups are generally only scheduled in the
evenings. By taking local backups via metadata, HPE SimpliVity is able to take full backups with
essentially no I/O, thus eliminating the largest chunk of I/O.
Mirror
To replicate data to a remote site, a traditional solution must read data from the array and send it across
the WAN. This results in additional I/O. By intelligently only moving unique data between data centers,
HPE SimpliVity dramatically reduces the amount of data moved.
Snapshots
Array-level or vSphere snapshots are quick and often used as a short-term recovery point. While their
effect is relatively small, these snapshots do add to IO requirements. Because HPE SimpliVity backups
can be taken in seconds and have no IO impact, they make an easy replacement for local snapshots.
Workload
HPE SimpliVity leaves just the primary application workload, with only a small amount of data protection overhead.
And remember that SimpliVity deduplicates and compresses all data, not just data protection. This
reduces the I/O profile even further.
Final result
HPE SimpliVity has dramatically reduced IO requirements while delivering data protection as good or
better than the legacy solution.
Figure 6-10: Storage IO reduction—Final result
HPE SimpliVity clusters combine two ways of protecting data: redundant array of independent nodes
(RAIN) and redundant array of independent disks (RAID). RAIN is described below, and RAID is
described on the next page.
RAIN
The cluster assigns every VM to a replica set with two nodes. Each node has a copy of the VM’s data,
and writes to the VM’s virtual drive are synchronously replicated to both nodes.
To decrease latency, the OmniStack Virtual Controller (OVC) on the node receiving replicated data sends an
acknowledgment (Ack) as soon as it receives a write request. The original OVC then sends an Ack to the VM.
Meanwhile, both nodes individually
deduplicate and compress data and write it to each node’s local drives.
The RAIN function described above is SimpliVity's typical behavior. However, as of OmniStack v4.0.1,
customers can choose to create single-replica datastores. VMs created on single-replica datastores are
single-replica VMs, for which the cluster maintains a copy on only one node. The company might choose
to use single-replica VMs for non-critical apps.
SimpliVity further protects data by having each node use RAID to store data. A single node can lose one
drive without losing any data. By combining RAID and RAIN, the cluster can lose at least two, and
possibly more, drives without losing any data.
Many customers want the simplicity of hyperconvergence for mission-critical applications, but they can
only deploy such applications on solutions that they can trust. Many competing hyperconverged vendors
use only RAIN to protect data in case of drive failures. HPE SimpliVity's RAIN + RAID can withstand
many more drive failures, making it the clear winner in the mission critical space.
For any solution that features SDS, data localization can become an important consideration.
Hyperconverged solutions transform local drives on the clusters’ nodes into an abstracted pool of storage,
which is good from the point of view of simplicity and management. However, from the point of view of
performance, it is best when a VM’s virtual disk is stored on the local drives that belong to the node that
hosts that VM. At the same time, the data also needs to be stored on one or more other nodes to protect
against failures.
The solution could take a few different approaches. In the primary data localization approach, the VM’s
primary data is localized on its node while copies are distributed across multiple other nodes. The RF2
approach makes one copy (in addition to the original) while RF3 makes two. In either case, the peak
performance when all nodes are up is good because the VM’s data is localized. However, replication
takes a toll on performance because the primary node needs to calculate where to write each copied block. And
performance becomes poor when a VM moves because data is no longer localized. The system can
rebalance and move data to the VM’s current node, but this takes time and generates IOs that can
decrease performance across the system. In short, these approaches cannot deliver consistent,
predictable performance.
Having no data localization improves predictability because the performance is the same when all nodes
are up or when one fails. However, without data localization, the performance is only fair.
HPE SimpliVity takes a full data localization approach so that it provides the best peak performance and
the best predictability. A VM’s data is localized on the node that hosts it, and all of its data is also
replicated to the same other node. Replication takes less of a toll on performance because the primary
node knows that it always replicates to the same other node.
If the first node fails, or if its local drives fail, the VM can move to the second node and continue to
receive exactly the same excellent performance without any data rebalancing.
Figure 6-17: Keeping data local with HPE SimpliVity Intelligent Workload Optimizer
If the VM needs to move, how does the HPE SimpliVity cluster guarantee that it moves to the node that
already has its data? You will look at an HPE SimpliVity solution built on VMware as an example. The
HPE SimpliVity cluster is a VMware cluster that uses VMware Distributed Resource Scheduler (DRS) and
High Availability (HA). DRS handles choosing the node to which each VM is deployed while HA helps the
cluster restart VMs on a new node if the original host fails. DRS can take factors such as CPU and RAM
load into account when it schedules where to deploy or move a VM. However, DRS does not have insight
into where the SimpliVity DVP stores data. It assumes all data is external to the hosts and, therefore,
moves VMs around freely within the cluster with no regard to where the data may be.
Some competing hyperconvergence solutions simply react to DRS. After DRS moves the VM, the solution
moves data around until it is local again. However, this “follow the VM” approach takes time and impacts
performance with a lot of extra network traffic and IOs. The SimpliVity Intelligent Workload Optimizer
takes a proactive approach. It integrates with DRS and creates DRS rules to ensure that each VM is
deployed on one of the two nodes that stores its data.
This allows VMs to have the peak and predictable performance that data locality and DRS can both
provide, while avoiding the extra I/O and network load of the "follow the VM" approach. The HPE
SimpliVity DVP handles the configuration automatically. In fact, SimpliVity self-heals the configuration
even if an admin changes the groups or rules.
SimpliVity’s restore capabilities really set it apart from the competition, allowing companies to restore data
in seconds.
The Town of Mansfield’s experience shows how quickly data can be restored. As a new HPE SimpliVity
customer, the Town of Mansfield noticed the gains in application performance almost immediately. They
also knew they were saving storage space and backup times had decreased significantly. But it was not
until they needed to restore data that they fully appreciated HPE SimpliVity’s built-in resiliency.
The Town of Mansfield had a network issue that unfortunately corrupted their primary SQL Server. The
problem occurred around 9:30 a.m. When they could not resolve the issue, the organization knew they
had to restore their SQL Server from the backup. Before HPE SimpliVity, restoring the server’s 950 GB
would have taken 5 hours, and the Town would have lost more than half a day in productivity.
With HPE SimpliVity, however, they were able to restore their 8:15 a.m. SimpliVity backup, and it took
only 40 seconds to restore the 950 GB SQL Server. The organization was “up and running in under a
minute.” (“The Town of Mansfield’s Unexpected Journey into Hyperconvergence,” Upshot, Oct. 14, 2019.)
In addition to providing simple, out-of-the-box SDS, HPE SimpliVity integrates with the virtualization
solution to help customers manage SimpliVity from a single interface. The HPE SimpliVity plug-in for
VMware enables admins to manage SimpliVity nodes as VMware hosts just as they are used to doing,
but also adds extra functionality specific to SimpliVity. For example, admins can monitor the SimpliVity
Federation as a whole. They can also manage automatic backup policies and initiate manual backup and
recoveries. The plug-in also lets admins monitor databases and the underlying storage from a single tool.
They can create new datastores and expand existing ones. With a single view for monitoring resource
utilization, they can more quickly find and resolve issues. Finally, the SimpliVity plug-in for VMware
includes a Deploy Management Virtual Appliance wizard, which allows you to convert a peer-managed federation to
a centrally managed federation. The wizard gives you more flexibility in deploying and managing
federations.
HPE SimpliVity also offers seamless integration with vRealize Automation (vRA) and vRealize
Orchestrator. In the previous module, you learned about how these solutions help companies use
powerful workflows to orchestrate their services. HPE has developed workflows specific to HPE SimpliVity
to accelerate companies' efforts to use vRA to automate their SimpliVity environment. The figure above
shows a list of the tasks customers can automate with the workflows. If the customer wants to use vRA in
a SimpliVity environment, HPE recommends deploying vCenter, vCenter Single Sign on, the vRA
appliances, and vRealize Orchestrator.
After admins install the HPE SimpliVity nodes in the data center on Day 0, the HPE SimpliVity
Deployment Manager helps to automate the deployment of the solution. Read the sections below to see a
high-level overview of the process.
1. vCenter pre-setup
First, admins should establish on vCenter the clusters to which they want to add HPE SimpliVity nodes.
2. Node discovery
Admins first discover and add a single node to the cluster. They can then add more.
Here you see that the first node receives a DHCP address. The admin then just needs to scan for host,
and the Deployment Manager automatically discovers it.
3. Node deployment
The admin now tells the Deployment Manager to deploy network settings and the ESXi hypervisor to the
host.
After adding the first node to the cluster, admins can quickly deploy the same settings to add more nodes.
As you have seen, admins can quickly complete common tasks for managing SimpliVity clusters in a GUI.
But sometimes admins need to repeat the same task many times. For example, they might need to clone
multiple VMs every morning for a test team, so clicking through a GUI would be tedious. That’s why HPE
has created the HPE SimpliVity REST API: to allow companies to script the most common administrative
tasks available in the GUI.
This figure provides an example of a PowerShell function that utilizes the SimpliVity Rapid Clone
functionality. But developers can use any scripting language that can execute a REST API call, including
Python, Java, or orchestration platforms like vRealize Orchestrator.
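As a rough Python equivalent, the sketch below follows the same pattern: it requests a bearer token from an OmniStack Virtual Controller and then posts a clone request for a VM. This is a minimal sketch, not HPE reference code; the OVC address, credentials, VM ID, media type, and endpoint paths are assumptions for illustration, and the documentation interface described next is the place to confirm the exact calls for a given software version.

import requests

OVC = "https://ovc.example.local"         # hypothetical OmniStack Virtual Controller

# Step 1: obtain an OAuth bearer token (endpoint and client name are assumptions)
token_response = requests.post(
    f"{OVC}/api/oauth/token",
    auth=("simplivity", ""),               # assumed OAuth client for the REST API
    data={"grant_type": "password",
          "username": "[email protected]",
          "password": "password"},
    verify=False)                          # lab only: self-signed certificate
token = token_response.json()["access_token"]

# Step 2: clone a VM by ID (VM ID and media type shown are illustrative)
clone_response = requests.post(
    f"{OVC}/api/virtual_machines/<vm-id>/clone",
    headers={"Authorization": f"Bearer {token}",
             "Content-Type": "application/vnd.simplivity.v1+json"},
    json={"virtual_machine_name": "sql-clone-01", "app_consistent": False},
    verify=False)
print(clone_response.status_code, clone_response.json())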
To make it easier for users to develop and prototype automation scripts, SimpliVity offers a
documentation interface.
The following sections explain how programmers can use this interface to help them create the script.
Users can navigate through the interface and easily find object types such as virtual machines and
functions that execute on those objects.
Clicking on any function shows documentation about the function and parameters available for the
function.
After entering values into this screen, admins can click “Try it out” to actually execute the function.
They will then see the actual results. They can copy the URL to use within custom written code or an
orchestration platform. It’s a very easy and convenient way to test and prototype automation actions.
The HPE SimpliVity Upgrade Manager helps customers to quickly upgrade a complete Federation to new
software without impacting services. Admins choose the new software and run the Upgrade Manager.
The Upgrade Manager upgrades one node in each cluster at a time, first moving that node's VMs to other
nodes. After upgrading one node, the Upgrade Manager moves the VMs for the next node and upgrades
that node until all nodes are on the same software. As you see, if the Federation has multiple clusters, the
Upgrade Manager can upgrade multiple clusters at once.
After the upgrade is complete, admins can choose to roll back the upgrade on all nodes or individual
nodes. While all nodes in a Federation typically have to be on the same version, they are permitted to be
on different versions while the Federation is in this state. Once admins are sure that all nodes are running
the new software and the upgraded Federation is working as expected, they can commit the upgrade
after which point they can no longer roll back.
This section covers the first two steps of the HPE SimpliVity design process. You will first review
strategies and tools for collecting the data necessary for sizing the solution. You will then look at how to input
what you have learned into the HPE SimpliVity Sizing Tool in order to determine the number and type of
nodes to deploy.
Please note that you will need HPE employee or partner credentials to access some of the tools
referenced in this section.
Data gathering
Begin by reviewing the data gathering process. Read the following sections to review.
HPE offers a wide array of SimpliVity platforms optimized for customers' particular requirements,
workloads, and preferences. Read the sections below for a brief review of each model.
Several factors affect how many clusters you plan. Location can play a role. For example, if you are
designing a solution for a customer with several branch offices, each site might have its own cluster.
Stretched clusters can span WAN links and multiple sites. However, you should only use a stretched
cluster when you want to distribute services across the two sites. For the ROBO solution, it can make
more sense to deploy a separate cluster at each site so that VMs for that site stay local. Clusters can
back up data to a cluster at another site for higher availability.
You might need to plan multiple clusters at the same site if you need a large solution with more than the
recommended number of nodes per cluster.
And you might also want to create multiple clusters even if you have fewer than 16 nodes. It can be
beneficial to isolate latency-sensitive applications such as VDI on their own clusters. When in doubt,
separate your workload types for optimal performance.
Finally, consider the need for separate compute nodes, which you might want to deploy for power users
in a VDI solution or to support processor-hungry applications.
Figure 6-35: Getting started with the HPE SimpliVity Sizing Tool
If you are an HPE Partner, you can access the HPE SimpliVity Sizing Tool. (Click here to access the
sizer. If you have trouble accessing the sizer at this link, check HPE Products and Solutions Now for
updated information about it.)
The figure above shows the first sizer window, which will show any saved sizings.
1. Click the Create New Sizing button to begin sizing a new solution.
2. In the pop-up window that is displayed, enter a name in the Sizing Name field and select the type of
deployment: You can choose Infrastructure Cluster for general virtualization and End-User
Computing (GPU) or End-User Computing (Non-GPU) for VDI.
3. Click Create Sizing.
4. Click Add Cluster.
5. In the pop-up window that is displayed, enter a name in the Cluster Name field and click Add
Cluster.
When you add a cluster, you will see a window like the one shown in the figure above. Each cluster
consists of one or more VM groups. You enter sizing information for each VM group separately. Read the
following sections for more information.
• Compute HA—Select this check box if you want the sizer to take HA into account. The sizer will
ensure that the cluster still meets requirements if a node fails.
After configuring all the settings, click Add Cluster.
Advanced Mode
This section outlines the additional settings available when you select Advanced Mode.
Backup Policies
Click the Backup Policies button to create a backup policy that you can reference in the Local Backup
Policy or Remote Backup Policy menu.
Recommendations
Click the Recommendations button to see the products recommended for your clusters.
Backup policies
If you plan to store local backups, select a Local Backup Policy.
If you plan to back up to another cluster, select a Remote Backup Policy and also click Configure to
choose the cluster.
You should have created the policies by clicking the Backup Policies button.
Cluster Growth
Specify expected growth by percentage. The fields refer to growth for compute, memory, storage, and
IOPS respectively.
External Compute
If you plan to recommend external compute nodes, specify their CPU and memory.
Compute HA
Select this check box if you want the sizer to take HA into account. The sizer will ensure that the cluster
still meets requirements if a node fails.
You are at the third step of the design process. You will now review elements of the HPE SimpliVity
architecture and best practices for designing them. Finally, you will review situations in which you need to
submit a Deal Specific Request (DSR).
Architectural design
You should create an architecture diagram that shows the components within each cluster and how
clusters connect together. Read the sections below to review the different components.
Cluster
Include each cluster and the site at which it is located. Indicate the number of nodes and the model.
Attached to the diagram, you can add more information about the model such as processor choices and
amount of memory.
vCenter (site 1)
For vCenter and vSphere VDI deployments, you should indicate where vCenter servers are located. They
can be deployed on a separate management SimpliVity cluster, which is generally preferred for larger
deployments. For small deployments, you can place vCenter on the same SimpliVity cluster that hosts
production VMs. You can also deploy vCenter outside of SimpliVity. If you choose to deploy vCenter on a
SimpliVity cluster that it manages, you must deploy vCenter first and then move it to the cluster.
For Hyper-V deployments, you should similarly indicate where Microsoft System Center (MSSC) is
deployed.
vCenter (site 2)
A single vCenter server can manage multiple HPE SimpliVity clusters in a Federation. However, the
Federation can also include up to 5 vCenter servers. In this example, site 2 has its own vCenter server for
resiliency. When a Federation has multiple vCenter servers, they must connect with Enhanced Linked
mode.
For Hyper-V, a single MSSC instance is supported, but MSSC can use Microsoft clustering.
Arbiter
An Arbiter helps to break ties in failover situations. HPE SimpliVity 3.7.10 or earlier always required the
installation of an arbiter. For OmniStack v4 and above, Arbiters are only required for two-node clusters or
for any stretch clusters. However, they are also recommended for four-node clusters.
An Arbiter can never be deployed on a cluster for which it acts as Arbiter. However, it can be deployed on
a different cluster. It can also act as Arbiter for multiple clusters.
Federation
A Federation includes multiple HPE SimpliVity clusters that are managed by the same vCenter
infrastructure. This infrastructure could consist of one vCenter server or multiple vCenter servers
operating in Linked mode.
Site-to-Site links
You need to indicate the link between sites, specifying their bandwidth and latency. This example has
separate clusters at each site, so the latency requirements are less strict. A link used by a stretched
cluster, which has members at multiple sites, must have round trip time (RTT) latency of 2ms or less.
Network design
Every cluster requires three networks: a management, storage, and federation network.
Management
The Management network is the network on which external devices reach the SimpliVity cluster and on
which SimpliVity communicates with vCenter. This network has a default gateway, and it should be
advertised in the routing protocol used by the network so that it is reachable from other subnets.
It can use 1, 10, or 25 GbE NICs, which are shared by VMs' production networks using tagged VLANs.
Storage
Each node has a VMkernel adapter for storage traffic. This adapter connects to the Storage network, as
does each OVC. The Storage network carries NFS traffic for mounting the SimpliVity datastore to the host
and handles IO requests from VMs.
If the cluster has compute nodes, their VMkernel adapters should connect to this network, too.
This network should be dedicated to this purpose; it is not routed. It requires an MTU of 9000 and a
latency of 2ms or under. It can be 10GbE or 25GbE.
With v4.1.0, HPE SimpliVity allows IT admins to control how much bandwidth HPE SimpliVity uses for
backup and restore operations. This feature is particularly useful for customers who deploy HPE
SimpliVity at branches, remote locations, or any location that has limited bandwidth.
Federation
The Federation network carries OVC-to-OVC communications between nodes. Only OVCs should be
connected to this network.
This network should be dedicated to this purpose; it is not routed. It requires an MTU of 9000. It should
use 10GbE.
OVCs contact OVCs in other clusters on their Federation IP addresses, but the traffic is routed out the
Management network, which has the default gateway.
To properly plan a SimpliVity solution, you need to understand the maximum number of nodes supported
for clusters and federations.
SimpliVity supports single-node clusters, which provide only RAID protection for data. However, HPE
generally recommends that clusters consist of at least two nodes. The maximum recommended cluster
size is 16 nodes.
If the customer wants HA and remote backups, the federation needs at least two clusters. A federation
supports up to 96 nodes. For large ROBO environments, a federation could consist of 48 2-node clusters.
In v4.0.1 and above, companies can deploy the HPE SimpliVity Management Virtual Appliance to help
manage the federation. A federation managed with the SimpliVity Management Virtual Appliance is called
a centrally managed federation, while other federations are called peer-to-peer federations. A centrally
managed federation supports up to 96 nodes, all managed by the same vCenter. A peer-to-peer
federation requires at least 3 vCenter instances to manage 96 nodes.
You should look for updates to these guidelines if you are using a software version above 4.1.0.
You should submit a DSR if your solution has special requirements and circumstances:
• Backup period under 10 minutes
• Storage network latency over 2ms (or Management network latency over 300ms)
• Individual VMs larger than 3 TB in size
• Unusual storage requirements
– Significant multimedia files
– Data compressed, deduplicated, or encrypted before entering SimpliVity
• No data collection before sizing
• For VDI
– VDI and other workloads in same cluster
– > 500 users
• Any EUC opportunity
• Additional PCIe hardware (except NIC)
Note that stretch clusters are now supported for more use cases. HPE SimpliVity systems running 3.7.10
and above can be configured in 8+8 node stretch clusters. HPE SimpliVity nodes running software 4.0.1
or above can run linked clone VDI desktops in stretch clusters.
Activity 6
Scenario
A small community college is struggling to maintain its data center, which has grown organically over the
years. The data center has a lot of aging equipment that is difficult for the limited IT staff to manage. The
college has shifted some services to the cloud, and, while the college wants to maintain other services
on-prem, the customer has made simplifying the data center a priority. The customer has already begun
virtualizing with VMware; your contact originally brought you in to help with a server refresh to handle the
consolidated workloads.
In this discussion, you have discovered some more issues. The CIO wants to improve availability by
adding VMware clustering. He realizes that clustering requires shared storage, but the data center does
not have a SAN—and the CIO does not want to add one. The IT staff doesn't have the expertise to run a
SAN. The CIO also has received complaints from IT staff about the organization's current manual
processes for backups. But—he tells you—he doesn't have the budget for another project at this point.
Task
Take a few minutes to reflect on and list the reasons that HPE SimpliVity will be a good solution for this
customer.
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
__________________________________________________________________________
Figure 6-43: HPE Nimble Storage dHCI versus traditional HCI solutions
Customers need to simplify how they deploy and manage the infrastructure that supports their virtualized
workloads. Although hyperconverged infrastructure (or HCI) solutions offer an attractive solution, HCI
solutions can have one drawback. Traditional HCI stacks consist of servers, which contribute both the
compute and storage resources, in the form of local storage drives. When customers need to scale the
solution, they add another server, and compute and storage scale uniformly. However, many workloads
feature complex architectures that scale less cleanly. For these unpredictable workloads, the
requirements for the storage-hungry database layer might grow more quickly than requirements for the
compute-intensive application layer. With traditional HCI, the customer must invest in more compute
power than needed simply to obtain the required storage. Or, the opposite may occur.
In either case, customers with unpredictable workloads face a difficult choice. Do they deploy a
converged bundle of servers and storage arrays so that they can scale storage and compute separately,
but miss out on the simplicity and operational benefits of HCI? Or do they deploy HCI and end up over-
provisioning?
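To make this trade-off concrete, here is a rough Python sketch. The node specifications, workload numbers, and function name are purely hypothetical and are not HPE sizing figures; the sketch only illustrates why uniform HCI scaling can strand compute when storage demand dominates.

```python
# Hypothetical node specs and workload sizes, used only to illustrate the
# overprovisioning effect described above (not HPE sizing data).
def hci_nodes_required(cores_needed, storage_tb_needed,
                       cores_per_node=32, usable_tb_per_node=8):
    """Traditional HCI: each node adds a fixed slice of compute AND storage,
    so the node count is driven by whichever resource runs out first."""
    nodes_for_cpu = -(-cores_needed // cores_per_node)          # ceiling division
    nodes_for_storage = -(-storage_tb_needed // usable_tb_per_node)
    return max(nodes_for_cpu, nodes_for_storage)

cores_needed = 96        # compute the workload actually requires
storage_tb_needed = 200  # storage the workload actually requires

nodes = hci_nodes_required(cores_needed, storage_tb_needed)
idle_cores = nodes * 32 - cores_needed
print(f"Traditional HCI: {nodes} nodes, {idle_cores} cores bought but idle")
# With dHCI, compute nodes and storage capacity scale independently, so the
# customer could buy only 3 compute nodes plus the storage it needs.
```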
HPE Nimble Storage dHCI provides the flexibility of converged infrastructure with the simplicity of HCI. It
enables customers to deploy ProLiant DL servers and Nimble arrays, which automatically discover each
other and form a stack. Customers manage the stack from an intuitive management UI and integrate it
into vCenter, as easily as a traditional HCI stack.
HPE Nimble Storage dHCI is designed to deliver high performance and availability while allowing
customers to scale compute and storage separately.
You will now consider how HPE Nimble Storage dHCI enables customers to scale compute and storage
precisely as they want. However, from the admins’ perspective, dHCI is a single stack.
In the figure above, the customer initially deploys a pool of 32 compute nodes, but far less
storage. In the figure below, you can see how the customer can begin to scale storage.
When adding storage nodes, the customer can scale up capacity within a single chassis with mixed
capacities. The customer can scale up further by attaching capacity expansion shelves—each one being
its own independent RAID group. The customer can also scale out storage and cluster up to four array
platforms in a single instance for aggregated performance and capacities up to 9 PB.
HPE Nimble Storage dHCI supports all the same features of Nimble, including its support for VMware
vVols.
As you recall, a vVol is a volume on a SAN array, which a VM uses to back its disk rather than a VMDK file
within a VMFS datastore. vVols can simplify storage management. When a VMware admin performs a
task like creating a new virtual disk, or snapshotting a disk, the storage array automatically provisions the
vVol or takes the snapshot. The vVol approach also enables admins to apply policies at a VM level rather
than a LUN level.
Nimble arrays offer mature vVol support with features such as QoS, thin provisioning, data encryption and
deduplication. Nimble snapshots are fast and efficient. Nimble supports application-aware snapshots for
vVols, which help ensure consistency for data backed up with the Volume Shadow Copy Service (VSS). The VM
recycle bin helps to protect companies from mistakes. Nimble defers deleting VMs for 72 hours, allowing
admins to reclaim the VMs within that time period, if necessary.
Companies using vVols can also take advantage of Nimble replication features and Nimble integration
with HPE Cloud Volumes.
HPE InfoSight provides cross-stack recommendations for HPE Nimble Storage dHCI, just as it does for
other HPE storage solutions. One of the major benefits of the dHCI platform is that InfoSight provides
end-to-end, full-stack analytics and AIOps. HPE Nimble Storage dHCI automatically collects statistics
from the storage arrays and the ESXi hosts, as well as from HPE iLO. It collates all the statistics within the
array and submits them to HPE InfoSight. Admins can then see all the statistics for Nimble Storage dHCI
in the context of an integrated solution.
InfoSight cross-stack analytics gives customers insight into applications and workloads, VMware objects,
and the storage layer for VMware as well. InfoSight provides a granular view of the resources every VM
uses. This information makes it possible to correlate the performance of VMs in a datastore with insights
into host resource constraints such as vCPU, memory, and network.
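As a hedged illustration of this kind of cross-stack correlation (not InfoSight's actual implementation), the following Python fragment checks whether invented per-VM latency samples track invented host CPU-ready samples:

```python
# Invented metric samples; only the correlation technique is illustrated here,
# not how InfoSight actually analyzes telemetry.
from statistics import correlation  # Python 3.10+

vm_read_latency_ms = [1.2, 1.4, 3.8, 4.1, 1.3, 4.5]    # hypothetical VM metric
host_cpu_ready_pct = [2.0, 2.5, 9.0, 10.5, 2.2, 11.0]  # hypothetical host metric

r = correlation(vm_read_latency_ms, host_cpu_ready_pct)  # Pearson's r
if r > 0.8:
    print(f"Latency tracks host CPU ready (r={r:.2f}): suspect a compute bottleneck")
else:
    print(f"Weak correlation (r={r:.2f}): investigate storage or network instead")
```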
InfoSight provides performance and wellness information across the complete Nimble Storage dHCI
solution. It not only helps customers detect common issues such as under-performing VMs but also helps
them identify the root cause for such issues. Further, InfoSight provides customized recommendations for
the entire environment, including VMs, hosts, storage, and networks.
Finally, InfoSight applies deep data analytics to telemetry data gathered from the HPE Nimble Storage
array. This enables InfoSight to identify even rare issues, determine when an issue occurred, and begin to
pinpoint its causes.
HPE Nimble Storage dHCI supports a range of products, which can be combined into a disaggregated
HCI platform. This figure shows the products that customers can use to build the solution. (Note that this
information was current when this course was created; please check the HPE web site for up-to-date
information: https://www.hpe.com.)
Storage
You can use HPE Nimble Storage all-flash or adaptive flash models, for iSCSI only. Customers can also
use HPE Alletra 6000 for storage, although this option is not covered in this course. (As mentioned earlier,
the Alletra storage solutions were announced as this course was being developed.)
Compute
Nimble Storage dHCI supports the servers listed in the figure. The Gen 9 models are supported only in
brownfield deployments, which enables customers to use their existing servers for a Nimble dHCI
deployment. You will learn more about both greenfield and brownfield Nimble Storage dHCI deployments
later in this module.
Hypervisor
Nimble Storage dHCI supports VMware vSphere 7.0 or 6.7 for greenfield deployments or VMware
vSphere 6.5 for brownfield deployments.
Management
For management, Nimble Storage dHCI enables admins to use the familiar VMware vCenter. It also
includes tools to set up, manage, and upgrade the stack.
Network
For greenfield deployments, Nimble Storage dHCI supports HPE StoreFabric M-Series, FlexFabric
57x0/59x0, and Aruba 6300/83xx switches.
HPE Nimble Storage dHCI integrates ProLiant hosts running vSphere, 10GbE switches, and a Nimble
Storage imaged array into a single stack. As this figure shows, the integrated solution has a single
management plane, which is VMware vCenter.
Before this integrated stack can be created, the HPE Nimble Storage Connection Manager (NCM) must
be installed on each host where the HPE Nimble Storage dHCI solution will be deployed.
HPE provides a number of tools to help admins integrate the individual products into a disaggregated HCI
solution:
• dHCI Stack Setup—This wizard runs after admins set up the dHCI-enabled array and guides
admins through the process of setting up the complete solution. In a greenfield deployment, the
wizard guides admins through the process of creating a vCenter server, setting up datastores
and clusters, setting up new switches, and adding and configuring new ProLiant servers. In a
brownfield deployment, the wizard guides admins through the process of adding a Nimble array
to an existing vCenter server, and specifying and discovering the ProLiant servers and switches
that will become part of Nimble dHCI.
• Stack Management—Stack management is implemented as a vCenter plug-in, allowing admins
to manage and monitor Nimble Storage dHCI from within the familiar vCenter interface.
• dHCI DNA Collector—The Collector gathers information about the storage system, including
configuration settings, health, and statistics. This information is reported in the vCenter plug-in.
• dHCI Stack Upgrades—This tool manages and streamlines the process of upgrading the devices
in the integrated stack.
As you can see, the devices use heartbeats to ensure that the stack is healthy and intact.
Figure 6-51: HPE Nimble Storage dHCI: Multiple vSphere HA/DRS cluster support
At the time this course was released, HPE Nimble Storage dHCI supported a maximum of one vSphere
cluster in the integrated management plane of the solution. You can create additional separate, isolated
vSphere clusters using standard iSCSI shared storage backed by the HPE Nimble Storage dHCI array.
This would be provisioned via the array GUI and managed as a standard vSphere solution. The dHCI
management plane would have no visibility of these servers.
This design is much more flexible and adaptable than those of classic HCI vendors, which support only a
single vSphere cluster in the management plane and cannot provision storage outside of that cluster for
other services or requirements.
Other settings
• Enable flow control on host and array ports
• Configure the DNS server with proper forward and reverse DNS entries (a quick verification
sketch follows this list)
• Configure all dHCI components to use the same NTP server, ensuring that they are all set to the
same time
• Include a DHCP server in the management VLAN for the initialization. After the dHCI solution is
set up, it will be assigned new IP addresses, and the DHCP server will no longer be needed.
• Configure the HPE Nimble Storage Connection Manager on each host on which the Nimble dHCI
solution will be deployed
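The following Python sketch shows one way an admin might verify the forward and reverse DNS entries mentioned above before initialization. The hostnames and addresses are placeholders, and this check is not part of the dHCI tooling itself:

```python
# Pre-flight DNS sanity check; hostnames and IPs below are placeholders.
import socket

components = {                                    # hypothetical dHCI components
    "dhci-esxi-01.example.local": "10.0.10.11",
    "dhci-esxi-02.example.local": "10.0.10.12",
    "dhci-array.example.local": "10.0.10.20",
}

for fqdn, expected_ip in components.items():
    try:
        forward = socket.gethostbyname(fqdn)            # forward (A) lookup
        reverse = socket.gethostbyaddr(expected_ip)[0]  # reverse (PTR) lookup
        ok = forward == expected_ip and reverse.split(".")[0] == fqdn.split(".")[0]
        print(f"{fqdn}: forward={forward}, reverse={reverse} -> {'OK' if ok else 'MISMATCH'}")
    except OSError as err:                              # covers gaierror/herror
        print(f"{fqdn}: DNS lookup failed ({err})")
```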
Firewall
Make sure that your firewall allows communication in both directions:
• HPE Nimble Storage array communication to the vCenter instance through ports 443 and 8443
• VMware vCenter communication to the HPE Nimble Storage array through ports 443 and 8443
• HPE Nimble Storage array to ESXi over SSH port 22
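As a quick way to confirm these rules before deployment, the sketch below attempts TCP connections to the required ports. The hostnames are placeholders; run equivalent checks from each side of the firewall to cover both directions:

```python
# TCP reachability checks for the ports listed above; hostnames are placeholders.
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

checks = [
    ("vcenter.example.local", 443),         # array <-> vCenter management traffic
    ("vcenter.example.local", 8443),
    ("nimble-array.example.local", 443),
    ("nimble-array.example.local", 8443),
    ("esxi-01.example.local", 22),          # array-to-ESXi SSH
]

for host, port in checks:
    state = "open" if port_open(host, port) else "blocked or unreachable"
    print(f"{host}:{port} -> {state}")
```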
The HPE InfoSight Welcome Center is designed to help you quickly and easily deploy HPE storage
solutions. In addition to HPE Nimble Storage dHCI, the InfoSight Welcome Center supports:
• HPE Nimble Storage
• HPE Primera
• HPE Alletra Storage
The sections that follow describe the guidance the Welcome Center provides for Nimble Storage dHCI.
Getting started
The “Getting started” section provides a preinstallation checklist for both greenfield (new) and brownfield
(existing) installations.
For Nimble Storage dHCI, the preinstallation checklist helps you prepare so that you can install the actual
solution in 30 to 45 minutes. For example, the preinstallation checklist provides details on:
• Required components
• Recommendations for location
• Power sources
• Network layout
• Network ports and cabling
• Guidelines for creating firewall policies to allow Nimble dHCI traffic
• Storage and server configuration
• Network configuration
Physical installation
The Welcome Center also guides you through the installation. For Nimble Storage dHCI, it provides
videos to walk you through the steps of physically installing and cabling the storage array, servers, and
switches.
Software configuration
This section explains the process of configuring the switch, preparing the environment, discovering the
array, setting up the array, configuring vCenter, and validating the array.
With HPE Nimble Storage dHCI, you have two deployment options: greenfield or brownfield.
Greenfield deployment
As the name suggests, a greenfield solution is a new deployment. For switches, customers can choose
from HPE StoreFabric M-Series, HPE FlexFabric 57x0/59x0, or Aruba 6300/83xx switches.
At the time this course was created, new Nimble Storage dHCI deployments supported the following
servers:
• HPE ProLiant DL325 Gen10 and Gen10+
• HPE ProLiant DL385 Gen10 and Gen10+
• HPE ProLiant DL360 Gen 10
• HPE ProLiant DL380 Gen 10
• HPE ProLiant DL560 Gen 10
• HPE ProLiant DL580 Gen 10
As always, you should check for updated information.
You can build Nimble Storage dHCI using all-flash or adaptive flash models for iSCSI only.
Brownfield deployment
Brownfield deployments allow customers to use existing good-quality switches as well as existing HPE
ProLiant servers. At the time this course was created, the following servers were supported:
• HPE ProLiant DL325 Gen10 and Gen10+
• HPE ProLiant DL385 Gen10 and Gen10+
• HPE ProLiant DL360 Gen 10 and Gen 9
• HPE ProLiant DL380 Gen 10 and Gen 9
• HPE ProLiant DL560 Gen 10 and Gen 9
For brownfield deployments, admins must ensure that the network and server components meet the
requirements for being part of Nimble dHCI. For example, they must install the VMware vSphere 6.7 dHCI
image and the Nimble Connection Manager on each host.
Figure 6-55: Greenfield deployment
To integrate HPE Nimble Storage dHCI with HPE InfoSight, you must visit the HPE InfoSight portal and
register the HPE Nimble Storage dHCI solution.
Once Nimble Storage dHCI is registered, you must enable telemetry streaming for HPE InfoSight and
cross-stack analysis:
1. From the settings menu (the gear icon) on the HPE InfoSight Portal, select Telemetry Settings.
2. Locate the array you want to monitor and set the Streaming button to On. This button enables data
streaming from the array.
3. In the same row, set the VMware button to On. This button allows data to be collected from
VMware. Wait for HPE InfoSight to process the vCenter registration and start streaming VMware and
array data (this can take up to 48 hours).
Once Nimble Storage dHCI is set up, admins can manage it using the dHCI vCenter plug-in. They can
complete tasks such as:
• Add new servers
• Create a new VMFS datastore
• Grow the VMFS datastore
• Clone a VMFS datastore
• Create a snapshot of a VMFS datastore
• Create a vVol datastore
Because admins are using the familiar vCenter interface, managing Nimble Storage dHCI is
straightforward.
The vCenter plug-in also allows admins to perform a consistency check to ensure their Nimble Storage
dHCI is set up correctly.
Below is a list of Nimble dHCI tools and the URL where you can access them:
• HPE Assessment Foundry (SAF): HPE Assessment Foundry Portal
• Primary storage and compute sizing: HPE InfoSight Resources
• dHCI Networking Tools: HPE InfoSight Downloads
• NinjaSTARS: HPE Assessment Foundry Portal
Summary
In this module, you reviewed how HPE SimpliVity helps your customer protect their data in their SDDC.
You also focused on sizing and designing HPE SimpliVity solutions to meet customers' needs for a
software-defined data center (SDDC).
You also learned more about Nimble Storage dHCI, focusing on its integration with VMware.
Learning checks
1. On which network do HPE SimpliVity nodes have their default gateway address?
a. Storage
b. Management
c. Cluster
d. Federation
2. How does an HPE SimpliVity cluster protect data from loss in case of drive failure?
a. Only RAIN (replicating data to at least three nodes)
b. Only RAID (with the level depending on the number of drives)
c. Both RAID (with the level depending on the number of drives) and RAIN (replicating data to two
nodes)
d. Only RAID (always RAID 10)
Learning objectives
In this module, you will learn how to design an HPE solution for VMware Cloud Foundation (VCF).
After completing this module, you will be able to:
• Describe the HPE Composable Strategy and position HPE value proposition for VMware Cloud
Foundation
• Given a set of customer requirements or use case, position VCF on HPE Composable Infrastructure
to solve the customer’s requirements
• Describe the integration points between VCF and HPE Synergy
Module 7: Design an HPE VMware Cloud Foundation (VCF) Solution
Financial Services 1A has invested in a highly virtualized data center and taken steps to transform
compute, storage, and networking with software-defined technologies. But the company still needs help
bringing all of the components together. IT knows that it needs to respond to line of business (LOB)
requests more quickly. Ideally IT would like to give developers a cloud experience without moving
workloads off-prem. All of these needs point towards a private cloud solution, and the CIO is looking into
VMware Cloud Foundation (VCF).
In a VCF deployment, admins use SDDC Manager to configure and manage the logical infrastructure.
SDDC Manager also automates some tasks, such as provisioning hosts.
VCF domains are used to create logical pools across compute, storage, and networking. VCF includes
two types of domains: the management domain and virtual infrastructure workload domains.
Management domain
The management domain is created during the VCF “bring-up,” or installation, process. The management
domain contains all the components that are needed to manage the environment, such as one or more
instances of vCenter Server, the required NSX components, and the components of the VMware vRealize
Suite. The management domain uses vSAN storage.
You can set up availability zones to protect the management domain from host failures. Regions enable
you to locate workloads near users. Regions help you apply and enforce local privacy laws and
implement disaster recovery solutions for the SDDC.
– You can use SAN arrays to enhance performance for a VI workload domain. Supported storage
includes vSAN, vVols, NFS, or VMFS on FC.
– For vSAN-backed VI workload domains, vSAN ReadyNode configurations are required.
You will also need the necessary VMware vSphere, vSAN, and NSX-T licenses to support the specific VI
workload domain deployment.
The consolidated model is designed for companies that have a small VCF deployment or special use
cases that do not require many hosts. With the consolidated model, both management and user
workloads run in the management domain. You manage the VCF environment from a single vCenter
server. You can use resource pools to isolate the management workloads and the user workloads.
Remember that when you bring up VCF, you do not select the architecture model. No matter which
architecture model you are using, you first deploy and bring up the management domain. If you are
using a consolidated architecture, you then deploy the user workloads in that management domain, using
resource pools to isolate them from the management workloads.
If you later want to migrate a consolidated architecture to a standard architecture, the process is fairly
straightforward. You create a VI workload domain and then move the workload VMs to the new domain.
Figure 7-5: First composable platform that seamlessly integrates with SDDC Manager
HPE and VMware have tightly integrated SDDC Manager and HPE OneView powering HPE Synergy to
deliver simplicity in managing composable infrastructure and the private cloud environments. By
introducing the HPE OneView Connector for VCF, HPE brings composability features to VCF. Through
this unique integration and enhanced automation, customers can dynamically compose resources within a
single console using SDDC Manager to meet the needs of VCF workloads, thus saving time and
increasing efficiency. This integration simplifies management of infrastructure by providing the ability to
quickly respond to business needs to add capacity on demand directly from SDDC Manager. It does so
seamlessly to increase business agility and help reduce cost from overprovisioning or under provisioning
of resources.
Think about how Synergy delivers these benefits in a bit more detail. As you see here, Synergy eliminates
Top of Rack (ToR) switches by bringing networking inside the frame; in this way it greatly reduces
infrastructure cost and complexity. Synergy Virtual Connect (VC) modules provide profile-based network
configuration, designed for server admins. Because server admins no longer need to wait for network
admins to reconfigure the infrastructure, they can move server profiles from one Synergy compute
module to another as required, making infrastructure management simpler and more flexible. HPE
Synergy also stands out from other solutions because it disaggregates storage and compute. In other
words, rather than each server having its own local drives, forcing companies to scale compute and
storage together, Synergy has separate compute modules and storage modules. Admins can use profiles
to flexibly connect or disconnect compute modules from drives on the storage modules. Because Synergy
provides the same flexibility and profile-driven management to both virtualized and bare metal workloads,
customers can consolidate traditional data center applications and their VCF-based private cloud on the
same infrastructure, reducing management complexity and costs.
The Solution Sales Enablement Tool (SSET) helps you size the HPE Synergy solution for VCF. This tool
gives you three options for sizing a VCF solution: quickstart configuration, basic option, and expert option.
Quickstart configuration
Designed to eliminate the guesswork and complexity from the ordering process, the quickstart
configuration shortens the quote time and simplifies the process of sizing the solution. It relies on
predefined solutions, offering the simplest configuration process with the highest “guidance” level.
You can configure:
• Number of VMs
• VM types—small, medium, or large
Based on the size of the VM you select, SSET adjusts:
• vCPUs per VM
• vRAM per VM (GB)
• Storage per VM (GB)
• Storage preference
You can select Review and wait while the tool sizes the solution. SSET then displays the proposed HPE
Synergy solution for VCF.
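To make the quickstart inputs concrete, the following back-of-the-envelope Python calculation totals the resources for a hypothetical set of VMs. The per-VM profile, oversubscription ratio, and capacity overhead are assumptions for illustration only; SSET applies its own predefined profiles and rules:

```python
# Assumed per-VM profile, oversubscription ratio, and protection overhead;
# real SSET profiles and sizing rules differ.
vm_count = 200
profile = {"vcpus": 4, "vram_gb": 16, "storage_gb": 200}  # "medium" VM assumption
vcpu_to_core_ratio = 4       # assumed vCPU-to-physical-core oversubscription
capacity_overhead = 1.33     # assumed vSAN protection/slack overhead

cores_needed = vm_count * profile["vcpus"] / vcpu_to_core_ratio
ram_needed_gb = vm_count * profile["vram_gb"]
raw_storage_tb = vm_count * profile["storage_gb"] * capacity_overhead / 1024

print(f"~{cores_needed:.0f} physical cores, {ram_needed_gb} GB RAM, "
      f"~{raw_storage_tb:.1f} TB raw capacity before data reduction")
```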
Basic option
Like the quickstart configuration, the basic option is designed to simplify the process of sizing a VCF
solution. The basic option offers a simple configuration process with guidance to help you gather the
information needed to size the solution. The basic option offers more flexibility than the quickstart
configuration, allowing you to customize more options.
You can select Review to display the proposed HPE Synergy solution for VCF.
Expert option
The expert option is designed for architects who have experience scoping VCF deployments. It provides
a more detailed configuration process. You have the flexibility to specify more options but still receive
guidance in scoping the solution. As with the other two options, SSET allows you to review the solution.
Access SSET
You can access SSET at:
https://sset.ext.hpe.com/
In addition to helping you size the VCF solution, HPE provides the tools and the support you need, from
ordering and validating the solution to bringing up VCF.
The HPE VCF solution is tightly integrated to help reduce deployment errors while also reducing
operational and maintenance costs. You have already seen how SSET helps you size the solution.
You can then use HPE Smart CID to create a Customer Intent Document (CID) that contains system
requirements and configuration information. You can import the “guidance” from SSET so that Smart CID
imports the sized solution.
HPE Smart CID also integrates with the HPE Solution Automation tool kit (SAT). SAT provides
prevalidated VCF configurations and helps automate the ordering process. Based on customer inputs in
CID, the SAT builds the underlying HPE Synergy infrastructure, as per best practices, with pre- and
post-validations to create VMware Cloud Foundation management and domain workloads. It helps
eliminate the guesswork of designing a VCF solution while reducing human errors.
The SAT-Build (S-Build) and SAT-Validate (S-Validate) are automation plug-ins for SAT. These plug-ins
run within the SAT framework and assist in build automation and validation of HPE Synergy VCF
Solution.
The VCF Cloud Builder VM is designed to help you bring up VCF. Using information you provide in the
VCF deployment parameter workbook, the Cloud Builder VM deploys and configures the first cluster in
the management domain. Once the management domain cluster is installed, the Cloud Builder VM
transfers inventory information and control to SDDC Manager.
Before running the Cloud Builder VM, you must enter comprehensive configuration information into the
VCF deployment parameter workbook. This information includes:
• Network information, such as IP addresses for hosts, IP addresses for gateways, VLAN settings,
MTU settings, management IP addresses, DHCP settings, and DNS settings
• VMware license keys (for ESXi, vSAN, vCenter server, NSX-T and SDDC Manager)
• Passwords for VCF components
• Configuration settings for the VCF management domain, including NSX-T configuration settings and
SDDC configuration settings (host name, IP addresses, and network pool name)
As the Cloud Builder VM deploys the management domain cluster, it validates configuration information
provided in the deployment parameter workbook. To verify this information, the Cloud Builder VM requires
network connectivity to the ESXi hosts for the management network (VLAN). It also needs to
communicate with DNS and NTP servers so it can validate configuration information in the VCF
deployment parameter workbook.
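The sketch below illustrates the kind of pre-checks this validation involves, using Python to test IP syntax, DNS resolution, and NTP reachability for a few invented workbook entries. It is not the Cloud Builder VM's actual validation logic, and the workbook structure shown is only an example:

```python
# Illustrative workbook entries and checks; not the Cloud Builder VM's own logic.
import ipaddress
import socket

workbook = {
    "mgmt_gateway": "172.16.10.1",
    "dns_server": "172.16.10.4",
    "ntp_server": "ntp.example.local",
    "esxi_hosts": ["mgmt-esx01.example.local", "mgmt-esx02.example.local"],
}

# 1. Address fields must be well-formed IPs (raises ValueError otherwise).
ipaddress.ip_address(workbook["mgmt_gateway"])
ipaddress.ip_address(workbook["dns_server"])

# 2. Each management ESXi host must resolve in DNS.
for host in workbook["esxi_hosts"]:
    print(host, "->", socket.gethostbyname(host))

# 3. The NTP server must answer on UDP port 123 (minimal SNTP client request).
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.settimeout(3)
    s.sendto(b"\x1b" + 47 * b"\0", (workbook["ntp_server"], 123))
    s.recvfrom(512)

print("Basic workbook checks passed")
```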
For VCF, you should download the VMware base image and the HPE add-on to create the desired cluster
image.
When deploying VCF on HPE Synergy, use the following general guidelines:
• Scalability up to 256 nodes (max scale) per VCF instance
• Cache and data drive sizes dictated by VM sizing prior to purchase (no set “only use these disks” in
VCF)
• Physical layout of frames and racks dependent upon HA and VM sizing (local to D3940) with specific
drives
• High availability—For high availability, design redundancy within the HPE Synergy frame and provide
two or more frames.
• All nodes in the same cluster—equivalent configurations of memory and equivalent configurations of
vSAN
• Compute, memory and storage—vSAN-certified and part of the Synergy vSAN ReadyNodes
HPE and VMware collaborated to integrate SDDC Manager and HPE OneView. The HPE OneView
Connector provides the interface between HPE OneView and SDDC Manager, using DMTF’s Redfish
APIs to communicate with SDDC Manager. HPE OneView Connector for VCF 4.0 includes support for
HPE Primera and HPE Nimble Storage.
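Because Redfish is a standard DMTF REST API, a generic call looks like the Python sketch below. The endpoint and credentials are placeholders, and this is not the Connector's own code; it simply shows the style of API on which the integration is built:

```python
# Generic DMTF Redfish GET; endpoint and credentials are placeholders.
import requests

REDFISH_HOST = "https://redfish-endpoint.example.local"  # hypothetical endpoint

session = requests.Session()
session.verify = False                  # lab only; use a proper CA bundle in production
session.auth = ("admin", "changeme")    # placeholder credentials

# /redfish/v1/Systems is the standard Redfish systems collection.
resp = session.get(f"{REDFISH_HOST}/redfish/v1/Systems", timeout=10)
resp.raise_for_status()

for member in resp.json().get("Members", []):
    print("System resource:", member.get("@odata.id"))
```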
When you install the OneView Connector, you install it on a Linux VM. As part of the installation process,
you import the OneView Connector’s certificate into SDDC Manager. After the Connector is installed, you
must register it with SDDC Manager.
The OneView connector for VCF enables you to complete tasks such as:
• Create server profile templates that are visible in SDDC Manager
• Compose resources, which includes allocating resources to servers, storage, and networking
interfaces
• Decompose resources, returning them to Synergy’s fluid resource pools
HPE OneView for VMware vRealize Orchestrator (vRO) helps automate IT tasks in an extensible and
repeatable manner. It provides a predefined collection of HPE OneView tasks and workflows that can be
used in vRO with easy-to-use, drag and drop access to the automation of HPE OneView managed
hardware deployment, firmware updates, and other life cycle tasks. HPE OneView for VMware vRO
allows the advanced management features of HPE OneView to be incorporated into larger IT workflows.
HPE OneView workflows and actions can also be integrated with VMware vRealize Automation using
vRO.
Learning checks
1. What is one difference between a VCF standard architecture and a consolidated architecture?
a. The standard architecture supports more than one VI domain while the consolidated
supports only one VI domain.
b. The consolidated architecture supports SAN arrays to improve storage performance,
but the standard architecture does not.
c. The standard architecture separates management workloads from user workloads.
d. The consolidated architecture uses a wizard to simplify the installation process
rather than requiring the Cloud Builder VM.
2. What is the purpose of the Deployment Parameter Workbook?
a. Helps automate the process of ordering a Synergy solution for VCF
b. Helps you order the necessary licenses for VCF components
c. Imports configuration information about the VCF environment into the HPE OneView
Connector for VCF
d. Provides the network information and configuration settings the Cloud Builder VM
requires to bring up VCF
3. Which correctly describes the HPE OneView Connector for VCF?
a. It uses Redfish APIs to communicate with SDDC Manager.
b. It uses workflows to automate updates on HPE Synergy.
c. It must be installed on HPE Synergy before VCF is deployed.
d. It is deployed with Cloud Builder VM.
Module 1
Activity
Possible answers
Your presentation might have mentioned ideas such as these.
Right now, IT is struggling because some processes are software-defined while infrastructure management
remains manual. For agility, especially for speeding up application development, the company
needs a software-defined infrastructure that automates and orchestrates the physical with the virtual.
The customer should start by moving its virtualized environment to composable infrastructure with fluid
resource pools—HPE Synergy provides the capabilities that the customer needs. The fluid resource pools
mean the company can scale compute and storage separately so that it does not need to overprovision
one to get the other. The company can easily compose storage and compute together for different workloads as
needs change. This will help the physical infrastructure “catch up” with the virtual infrastructure.
The customer needs to be able to easily deploy workloads on those fluid resource pools. HPE OneView
within Synergy has a Unified API. The OneView templates help to consolidate hundreds of lines of code
into one. Instead of trying to coordinate many components with scripts, customers need only a single script
to deploy the template. The template ensures that the right settings are applied every time.
Appendix: Answers
Module 2
Activity 2.1
Task 1
Some of the information that you might have listed includes:
• What level of oversubscription is acceptable? (vCPU-to-core? RAM subscription?)
• What level of redundancy does the customer require?
• More data about current hosts and resource utilization
– HPE Assessment Foundry (SAF)
– VMware vCenter
– Perfmon for Windows
Task 2
As you created your BOM for the cluster, you should have found that you need 5 SY480 modules, but you
could plan one more for redundancy. The BOM includes all the frames and accompanying components.
Remember that you planned one cluster for simplicity, but in the real world, you would be planning all of
the clusters.
Activity 2.2
Some of the ideas that you might have had are listed below.
• Ask what networks the customer wants to deploy on the ESXi hosts (Management, vMotion, FT,
production, etc.)
– Explain how to divide a port into multiple FlexNICs
– Can use LACP-S to enhance resiliency and load balancing (supported with vSphere distributed
switches)
• Discuss integration with data center network (possibly eliminate ToR switches and have EoR
switches only)
• Discuss the importance of a template-based approach to management
• Explain how to get the HPE Custom image for ESXi (can be further customized)
Module 3
Activity 3
Below are listed some of the ideas that you might have had.
• vSAN benefits
– Cost effective
– Highly integrated with VMware
– Relatively simple to deploy
• HPE benefits for vSAN
– Flexibility on D3940 (no fixed number of drives per compute module)
– High performance flat iSCSI network across frames
• HPE storage array benefits
– Advanced services such as QoS, snapshotting, and replication (important for mission-critical web
and business management services)
– Tight integration with VMware
– Simplified provisioning with vVols and/or vCenter plugins
– Automated backups to StoreOnce Catalyst or the cloud with RMC (3PAR)
– HPE InfoSight and VMVision
Module 4
Activity 4
Module 5
Module 5 Learning checks
1. What is one benefit of HPE OneView for vRealize Orchestrator (OV4vRO)?
a. It integrates a dashboard with information and events from HPE servers into
vRealize.
b. It provides an end-to-end view of servers' storage (fabric) connectivity within the
vRealize interface.
c. It adds pre-defined workflows for HPE servers to vRealize. ***Correct answer***
d. It integrates multi-cloud management into the VMware Cloud Foundation (VCF)
environment.
If you missed this question, please review “Section 1: HPE OneView integration with VMware
vSphere and vRealize.”
2. Which is an option for licensing HPE OneView for vCenter (OV4VC)?
a. InfoSight licenses
b. Remote Support licenses
c. Composable Rack licenses
d. OneView licenses ***Correct answer***
If you missed this question, please review “Section 1: HPE OneView integration with VMware
vSphere and vRealize.”
3. What is one benefit of OV4VC that is available with the OneView standard license?
a. An easy-to-use wizard for growing a cluster from a single tool
b. Non-disruptive cluster firmware updates from within vCenter
c. An inventory of servers and basic monitoring of them in vCenter ***Correct
answer***
d. Workflows for managing servers and storage
If you missed this question, please review “Section 1: HPE OneView integration with VMware
vSphere and vRealize.”
Module 6
Activity 6
This customer craves simplicity, and the simple-to-deploy HPE SimpliVity also simplifies the virtualized
environment. SimpliVity has built-in software-defined storage. Non-storage experts like the college's IT
staff can easily deploy VMs across the cluster without having to worry about attaching LUNs. This
customer does not want to have to think about and fuss with storage. The OmniStack Data Virtualization
Platform provides always-on data reduction to minimize capacity requirements without extra effort. It also
provides built-in data protection and easy-to-use local backups, so the CIO can start to simplify the
backup process without necessarily having to add another solution.
You might have also mentioned:
Module 7
Module 7 Learning checks
1. What is one difference between a VCF standard architecture and a consolidated
architecture?
a. The standard architecture supports more than one VI domain while the consolidated
supports only one VI domain.
b. The consolidated architecture supports SAN arrays to improve storage performance,
but the standard architecture does not.
c. The standard architecture separates management workloads from user workloads.
***Correct answer***
d. The consolidated architecture uses a wizard to simplify the installation process
rather than requiring the Cloud Builder VM.
If you missed this question, please review “Section 1: VMware Cloud Foundation (VCF) architecture.”
2. What is the purpose of the Deployment Parameter Workbook?
a. Helps automate the process of ordering a Synergy solution for VCF
b. Helps you order the necessary licenses for VCF components
c. Imports configuration information about the VCF environment into the HPE OneView
Connector for VCF
d. Provides the network information and configuration settings the Cloud Builder VM
requires to bring up VCF ***Correct answer***
If you missed this question, please review “Section 2: HPE integration with VCF.”
3. Which correctly describes the HPE OneView Connector for VCF?
a. It uses Redfish APIs to communicate with SDDC Manager. ***Correct answer***
b. It uses workflows to automate updates on HPE Synergy.
c. It must be installed on HPE Synergy before VCF is deployed.
d. It is deployed with Cloud Builder VM.
If you missed this question, please review “Section 2: HPE integration with VCF.”