VCP-MC Full Course

VCP-MC Blueprint

Friday, January 13, 2023 9:38 AM

VMware Cloud Professional

Exam Details (Last Updated: 09/23/2022)


The VMware Cloud Professional (2V0-33.22) exam, which leads to the VMware Certified Professional - VMware Cloud 2022 (VCP-VMC
2022) certification, is a 70-item exam with a passing score of 300 using a scaled scoring method. Candidates are given 135
minutes to complete the exam, which includes adequate time for non-native English speakers.

Exam Delivery
This is a proctored exam delivered through Pearson VUE. For more information, visit the Pearson VUE website.

Certification Information
For details and a complete list of requirements and recommendations for attainment, please reference the VMware Education
Services – Certification website.

Minimally Qualified Candidate


The minimally qualified candidate (MQC) can successfully identify the key components of a VMware Cloud solution and how to
support migrations from an on-premises data center into VMware Cloud (across different hyperscaler partners, including
VMware Cloud on AWS). The successful candidate understands the networking requirements to support VMware Cloud use
cases and the benefit of VMware Tanzu Kubernetes Grid (TKG). The successful candidate also has a basic understanding of
storage concepts, security, business continuity and disaster recovery, as well as monitoring and troubleshooting, in a VMware
Cloud environment. The successful candidate possesses most of the knowledge in the exam blueprint sections detailed below
and may need to research some topics and require occasional assistance in carrying out some tasks.

Products/Technologies
This exam validates breadth of knowledge of VMware Cloud across different hyperscalers including:

VMware Cloud on AWS

VMware Cloud on Dell EMC

VMware Cloud on AWS Outposts

Google Cloud VMware Engine

Azure VMware Solution

Introduction Page 1
Exam Sections
VMware exam blueprint sections are now standardized to the seven sections below, some of which may NOT be included in the
final exam blueprint depending on the exam objectives.
Section 1 – Architecture and Technologies
Section 2 – VMware Products and Solutions
Section 3 – Planning and Designing
Section 4 – Installing, Configuring, and Setup
Section 5 – Performance-tuning and Optimization
Section 6 – Troubleshooting and Repairing
Section 7 – Administrative and Operational Tasks

If a section does not have testable objectives in this version of the exam, it will be noted below, accordingly. The objective
numbering may be referenced in your score report at the end of your testing event for further preparation should a retake of
the exam be necessary.

Sections Included in this Exam


Section 1 – Architecture and Technologies
Objective 1.1 – Explain the benefits of cloud computing
Objective 1.2 – Describe the functional components of a VMware Cloud solution
Objective 1.3 – Differentiate between VMware Cloud connectivity options
Objective 1.4 – Describe a cloud network architecture
Objective 1.5 – Describe networking in the software-defined data center (SDDC)
Objective 1.6 – Describe VMware SDDC components
Objective 1.7 – Explain Hybrid Linked Mode for the VMware SDDC
Objective 1.8 – Describe virtual machine components
Objective 1.9 – Describe VMware vSphere vMotion and vSphere Storage vMotion technology
Objective 1.10 – Explain a high availability and resilient infrastructure
Objective 1.11 – Describe the different backup and disaster recovery options for VMware Cloud
Objective 1.12 – Explain scaling options in VMware Cloud environments
Objective 1.13 – Identify authentication options for the VMware Cloud Services Portal
Objective 1.14 – Describe the purpose of using Kubernetes
Objective 1.15 – Describe use cases for VMware Cloud on Dell EMC and VMware Cloud on AWS Outposts
Section 2 – VMware Products and Solutions
Objective 2.1 – Describe the VMware Cloud operating model
Objective 2.2 – Identify the role of other cloud services
Objective 2.3 – Explain the VMware multi-cloud vision
Objective 2.4 – Identify the appropriate backup or disaster recovery method for VMware Cloud given a scenario
Objective 2.5 – Describe how VMware and its hyperscaler partners address IT challenges
Objective 2.6 – Recognize VMware Cloud use cases
Objective 2.7 – Describe the function of VMware HCX
Objective 2.8 – Explain the NSX architecture in VMware Cloud
Objective 2.9 – Explain the functions of Kubernetes components
Objective 2.10 – Describe the functions of VMware Tanzu products in Kubernetes life cycle management
Objective 2.11 – Explain Tanzu Kubernetes Grid concepts
Section 3 – Planning and Designing
Objective 3.1 – Understand configuration sizing requirements for a VMware Cloud SDDC
Objective 3.2 – Understand considerations for installing VMware Cloud on Dell EMC and VMware Cloud on AWS
Outposts on-premises
Section 4 – Installing, Configuring, and Setup

Objective 4.1 – Deploy and configure VMware HCX appliances
Objective 4.2 – Configure connectivity between clouds (VPN, AWS Direct Connect, VMware Managed Transit Gateway)
Objective 4.3 – Set up Hybrid Linked Mode using the VMware Cloud Gateway Appliance
Objective 4.4 – Deploy and configure cloud business continuity and disaster recovery (BC/DR) solutions
Objective 4.5 – Assess the requirements for cloud onboarding within a VMware single- or multi-cloud environment
Objective 4.6 – Assess the required account access and privileges for an SDDC deployment within a VMware single- or
multi-cloud environment
Objective 4.7 – Understand the concept of different types of segments (compute and management)
Objective 4.8 – Understand hyperscaler networking considerations
Objective 4.9 – Understand the concept of dynamic SDDC scale-out
Objective 4.10 – Complete cluster operations

Section 5 – Performance-tuning and Optimization


Objective 5.1 – Determine networking performance
Objective 5.2 – Determine storage performance
Objective 5.3 – Optimize the guest OS configuration
Section 6 – Troubleshooting and Repairing
Objective 6.1 – Troubleshoot networking issues
Objective 6.2 – Troubleshoot internetworking
Objective 6.3 – Troubleshoot security
Objective 6.4 – Troubleshoot workloads
Objective 6.5 – Troubleshoot storage
Section 7 – Administrative and Operational Tasks
Objective 7.1 – Create and manage user account and role permissions
Objective 7.2 – Create a content library
Objective 7.3 – Create and manage network segments
Objective 7.4 – Create and manage VM snapshots
Objective 7.5 – Monitor VMware NSX networking within VMware Cloud
Objective 7.6 – Determine the appropriate network connectivity option for connecting to and from VMware Cloud
Objective 7.7 – Recognize management and operational responsibilities in VMware Cloud on AWS
Objective 7.8 – Describe elements of the service management process
Objective 7.9 – Recognize update and upgrade responsibilities of various components for VMware Cloud on AWS

Recommended Training
Designing, Configuring, and Managing the VMware Cloud

References*
In addition to the recommended course modules listed above, item writers used the following references for information when
writing exam questions. It is recommended that you study the reference content as you prepare to take the exam, in addition
to any recommended training.
Link - Topic
• https://blogs.vmware.com/ - Introduction to the VMware Cloud Operating Model
• https://kb.vmware.com/ - [VMC on AWS] Cannot change Storage Policy applied to any data except VM (83392)
• https://docs.vmware.com/ - VMware HCX Product Documentation, VMware Cloud Services Product Documentation, VMware Tanzu Service Mesh Product Documentation, VMware Tanzu Product Documentation, vSphere with Tanzu Configuration and Management Documentation, VMware Cloud on AWS Product Documentation, VMware Cloud Disaster Recovery Product Documentation, VMware Site Recovery Product Documentation, VMware Cloud on AWS Operating Principles, VMware NSX-T Data Center Product Documentation
• https://www.vmware.com/topics/glossary.html - Kubernetes Namespace

*The content in this exam covers breadth of knowledge of VMware Cloud across
different hyperscalers including VMware Cloud on AWS, VMware Cloud on Dell
EMC, VMware Cloud on AWS Outposts, Google Cloud VMware Engine, and Azure
VMware Solution.

Sample Questions
Sample questions presented here are examples of the types of questions candidates may encounter and should not be used as
a resource for exam preparation.
Sample Question 1
When creating a hybrid cloud solution using Google Cloud VMware Engine, which inter-connectivity option would a cloud
administrator choose to provide the most secure layer 3 connection with the greatest possible throughput for application
connectivity?
A. Partner Interconnect
B. Partner VPN
C. Dedicated Interconnect
D. Cloud VPN

Answer: C

Sample Question 2
An administrator will be implementing a third-party, cloud-based backup solution to provide backup services to virtual
machines running in VMware Cloud on AWS.

What is the recommended approach?

A. Deploy the solution inside the VMware Cloud on AWS environment to take advantage of the existing capacity of the
service.
B. Deploy the solution into the customer-owned virtual private cloud (VPC) that is connected to the SDDC. This allows use
of a high-speed, low latency ENI connection for data backup and recovery.
C. Deploy the solution on-premises. This affords the greatest degree of recoverability in the event that VMware Cloud on
AWS becomes unavailable.
D. Deploy the solution into a virtual private cloud (VPC) located in another AWS availability zone (AZ). This provides
increased resiliency in the event of a localized AZ failure that may impact the VMware Cloud on AWS environment.

Answer: B

Sample Question 3
A cloud administrator is managing an Azure VMware Solution environment. Currently, the environment consists of a single
cluster. Due to increased demand, the cloud administrator is tasked with adding an additional six hosts to the environment.
The newly provisioned hosts must be able to provide access to existing VMware NSX networks.
What should the administrator do to achieve this goal?

A. Provision a new cluster.


B. Provision a new private cloud.
C. Create a new Azure VMware Solution tenant.
D. Contact VMware support to request a cluster expansion.

Answer: A

Sample Question 4
Which three strategies are key when transitioning to a cloud operating model? (Choose three.)

A. Continuity
B. Endpoint
C. Application
D. Financial
E. Migration
F. Cloud

Answers: C, D, F

Sample Question 5
A cloud administrator needs to deploy a three-tiered application that must comply with the following security policies:
• The web layer should be accessible only from testing networks
• The application layer should be accessible only by the web services
• The database layer should be accessible only by the application services

Based on the given scenario, which three VMware NSX components would be necessary at a minimum to provide a compliant
architecture for the application to be deployed on VMware Cloud? (Choose three.)

A. Tier-1 gateway
B. Segments
C. VP services
D. Endpoint protection rules
E. Security group
F. Distributed firewall rules

Answers: A, B, F

Sample Question 6
What are the two authentication options supported when using Hybrid Linked mode with the vCenter Cloud Gateway
Appliance? (Choose two.)
A. Security Assertion Markup Language (SAML)
B. Open Authorization (OAuth) 2.0
C. Integrated Windows Authentication (IWA)
D. Windows NT LAN Manager (NTLM)
E. Lightweight Directory Access Protocol (LDAP)

Answers: C, E

Sample Question 7
A cloud administrator is tasked with ensuring a dedicated, secure, high-speed, and low-latency connection exists between an
on-premises environment and Azure VMware Solution.

Which solution should be configured?

A. ExpressRoute gateway
B. Dedicated Microsoft Enterprise Edge
C. Global Reach
D. ExpressRoute

Answer: D

Sample Question 8
A cloud administrator would like to limit bandwidth from a particular virtual machine that is connected to a network segment
using a Quality of Service (QoS) segment profile.

Which action should a cloud administrator take to meet this objective?


A. Attach the virtual machine to a segment port and configure the egress limit.
B. Attach the virtual machine to a network segment and configure the egress limit.
C. Attach the virtual machine to a segment port and configure the ingress limit.
D. Attach the virtual machine to a network segment and configure Differentiated Service Code Point (DSCP) class of
service.

Answer: A

Sample Question 9

A cloud administrator is experiencing an issue with VMware vMotion failing between two of its hosts.

Which VMware solution could the administrator use to gather further information about the failure?

A. VMware vRealize Lifecycle Manager
B. VMware Cloud Director
C. VMware vRealize Orchestrator
D. VMware vRealize Log Insight Cloud

Answer: D

Sample Question 10
A company is using AWS Direct Connect to access VMware Cloud on AWS. The autonomous system number (ASN) configured
on AWS and the software-defined data center (SDDC) is 65225. The connection is unsuccessful.

What could be causing this issue?

A. They are using External Border Gateway Protocol (EBGP).
B. The ASN numbers CANNOT be the same.
C. They are using Internal Border Gateway Protocol (IBGP).
D. The ASN number is outside of the acceptable range.

Answer: B
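The reasoning behind this answer can be sketched as a small validation routine. This is an illustrative check only: it assumes the standard 16-bit private ASN range from RFC 6996 (64512-65534), and the exact ASN ranges accepted by AWS Direct Connect or a given SDDC may differ.

```python
# Illustrative sketch: why identical ASNs break an eBGP session.
# Assumes the RFC 6996 16-bit private ASN range (64512-65534); the
# exact ranges accepted by AWS Direct Connect or an SDDC may differ.
PRIVATE_ASN_16BIT = range(64512, 65535)  # 64512..65534 inclusive

def check_ebgp_peering(local_asn, peer_asn):
    """Return a list of configuration problems for an eBGP session."""
    problems = []
    if local_asn == peer_asn:
        # eBGP is "external" BGP: each side must belong to a different
        # autonomous system, so the ASNs cannot match.
        problems.append("eBGP peers must use different ASNs")
    for asn in (local_asn, peer_asn):
        if asn not in PRIVATE_ASN_16BIT:
            problems.append(f"ASN {asn} is outside the private range 64512-65534")
    return problems

# The scenario from the question: both sides configured with ASN 65225.
print(check_ebgp_peering(65225, 65225))  # ['eBGP peers must use different ASNs']
# Distinct private ASNs on each side: no problems reported.
print(check_ebgp_peering(65225, 64900))  # []
```

In the question's scenario, 65225 is a valid private ASN, so the only failure is that both ends of the external peering claim the same autonomous system.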

Certification Alignment

VCP-VMC 2022 Exam Content Contributors

Carla Gavalakis
Christopher Lewis
Chris Vallee
Cosmin Trif
Emad Younis
Frances Wong
James Potts
Jamie Maillart
Kim Delgado
Mateusz Konopnicki
Paul Irwin

Ranjna Aggarwal
Scott Bowe
Tiago Baeta Neves

VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com © 2022 VMware, Inc. All rights reserved. The product or workshop materials are protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/download/patents.html. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.
VMware warrants that it will perform these workshop services in a reasonable manner using generally accepted industry standards and practices. THE EXPRESS WARRANTY SET FORTH IS IN LIEU OF ALL OTHER WARRANTIES, EXPRESS, IMPLIED, STATUTORY OR OTHERWISE, INCLUDING IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE WITH RESPECT TO THE SERVICES AND DELIVERABLES PROVIDED BY VMWARE, OR AS TO THE RESULTS WHICH MAY BE OBTAINED THEREFROM. VMWARE WILL NOT BE LIABLE FOR ANY THIRD-PARTY SERVICES OR PRODUCTS IDENTIFIED OR REFERRED TO CUSTOMER. All materials provided in this workshop are copyrighted by VMware ("Workshop Materials"). VMware grants the customer of this workshop a license to use and make reasonable copies of any Workshop Materials strictly for the purpose of facilitating such company's internal understanding, utilization and operation of its licensed VMware product(s). Except as set forth expressly in the sentence above, there is no transfer of any intellectual property rights or any other license granted under the terms of this workshop. If you are located in the United States, the VMware contracting entity for the service will be VMware, Inc., and if outside of the United States, the VMware contracting entity will be VMware International Limited.
REV. 12/2022

Introduction
Wednesday, January 11, 2023 9:59 AM

VMware Cloud helps organizations to navigate the shift to the cloud. It consists of infrastructure and
management components, and their supporting technologies.

In this course, you learn how VMware Cloud solutions help you manage and support multi-cloud
environments.

Topics covered include:

• VMware cross-cloud services


• SDDC planning
• SDDC networking
• Workload management
• VMware Tanzu on VMware Cloud
• Application migration
• Disaster recovery
• Maintenance and troubleshooting

Understanding Multi-Cloud
Wednesday, January 11, 2023 10:04 AM

Learner Objectives

After completing this lesson, you should be able to:


• Identify benefits of cloud computing
• Explain the VMware multi-cloud vision

Why Multi-Cloud?
This lesson starts with a solution: VMware Cloud™.

It explores the solution's origins by addressing several questions: Why cloud? What is multi-cloud? What is the
VMware multi-cloud vision? What are the benefits of cloud? What are the challenges it presents? And finally,
how does VMware Cloud address those challenges?

Kit Colbert Transcript:

"Multi-cloud is all about the power of and. In fact, we hear that clearly from our customers. Most of them
today are by far multi-cloud already. Seventy-five percent have two or more clouds, and almost half have
three or more clouds. When we talk to customers and ask them, what are your goals with multi-cloud? The
biggest thing that came back was all about getting access to best-of-breed cloud services.

When we think about cloud services, you think about things like databases, data warehouses, messaging,
streaming, analytics. And every public cloud has at least one of those, if not many of those. And so what
customers do is that they compare and contrast. They look across to see which of those cloud services best
meets their application needs.
So how can you get all the benefits of these best-of-breed cloud services without the complexity?

Well, it's all about having the right architecture. It's about standardizing in certain places so that you can have
choice and flexibility in other places.

In particular, standardizing at the DevSecOps level and at the infrastructure level is crucial here to this
architecture.

VMware Tanzu is laser-focused on the DevSecOps space, and VMware Cloud, by contrast, is really focused on
the infrastructure space, delivering a set of consistent cloud infrastructure services available across all clouds.

So, first question, what is VMware Cloud? You hear us talk about it, but what really is it at its heart? Well, at its
heart, it's fundamentally two different components. The first of which is this infrastructure building block that
we call VMware Cloud Foundation. The beauty of VMware Cloud Foundation is that you can place it anywhere
you want, in a public cloud, at a colo, in your data center, at the edge. It's very flexible and yet also powerful
and consistent. It gives you that consistent infrastructure layer.

Second is our multi-cloud management layer with vRealize. The goal here is to pull together all these different
infrastructure pieces, infrastructure building blocks, and give you a single view into them all, a single way of
managing them, of governing them, of securing them.

So what are the benefits of VMware Cloud? First of all, it's the fastest path to cloud: consistent
infrastructure from on-prem to the cloud, leveraging tools like HCX. Unprecedented levels of speed.

We're also focused on application modernization, getting you those best-of-breed cloud services that your app
teams are looking for in the easiest and yet most secure way possible.

Speaking of security, really focused on that, across clouds, governance, operations, standardizing these things
because that is so important to your business. And finally, it's got great economic value, dramatically reducing
total cost of ownership, much lower than you see in other places, and great ROI as well."

Cloud Evolution

How did we get to a point where the cloud is the model for doing business?

Cloud and container technologies have changed the way that IT and businesses operate. Consider the
following interlinking shifts in the evolution of the cloud.

Starting as a shadow IT novelty, cloud has become a core strategic initiative.

The cloud delivers a range of advanced services, including integrated Kubernetes, artificial intelligence,
machine learning, Internet of Things, and more.

Kubernetes and containers accelerated the adoption of a microservices architecture and of DevOps
methodologies, making Linux and open source more mainstream and strategic to the enterprise.

Public cloud providers now bring their stacks on premises, and data center and server vendors offer cloudlike
managed services with OpEx financial models.

Cloud Computing and Operations Page 10


Transforming with the Cloud

To unlock the potential of cloud and applications, businesses must transform in three main ways, each of
which supports digital business and application modernization at different levels:
• Embrace microservices and APIs and improve the developer experience, speeding innovation and the
delivery of business services.
• Accelerate the path to new and modernized applications and deliver critical business services to
production quickly, securely, and continuously.
• Redefine IT with cloud capabilities, modern architectures, and a cloud operating model that spans from
the data center to any cloud and edge for all applications.

As organizations develop and deliver modern and traditional applications in the cloud, they redefine the
nature of IT.

Benefits of Cloud Computing

Organizations that use cloud capabilities and technologies to transform their business can reap several
benefits:
• Increased agility: Agility encompasses scalability, customizability, and access to the cloud service from
anywhere and on any device. In addition, you can have the same level of security regardless of the scale
of your business or services.
• Cost reductions: You save on capital expenses (hardware) and can use a flexible payment structure
where you pay only for the resources that you use.
• Increased innovation and developer productivity: With cloud computing, organizations do not need to
worry about managing IT infrastructures and can focus on application development and other priorities
using the most up-to-date technology.
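The pay-for-use point can be made concrete with a small sketch. All figures below are hypothetical, chosen only to illustrate how a variable, usage-based cost compares with a fixed, amortized one; they are not real cloud or hardware prices.

```python
# Hypothetical cost comparison: fixed on-premises spend vs. pay-as-you-go
# cloud pricing. All rates and figures are illustrative, not real pricing.

def onprem_monthly_cost(capex, amortization_months, opex_per_month):
    """Fixed cost: hardware is paid for whether or not it is fully used."""
    return capex / amortization_months + opex_per_month

def cloud_monthly_cost(instance_hours, rate_per_hour):
    """Variable cost: you pay only for the resources you actually consume."""
    return instance_hours * rate_per_hour

# A lightly used workload: 200 instance-hours per month at $0.50/hour.
print(cloud_monthly_cost(200, 0.50))        # 100.0
# The same workload on owned hardware: $36,000 amortized over 36 months,
# plus $400/month in operating expenses, regardless of utilization.
print(onprem_monthly_cost(36000, 36, 400))  # 1400.0
```

The fixed cost is incurred even at zero utilization, which is why usage-based pricing favors workloads that are intermittent or hard to size in advance.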

Knowledge Check: Cloud Benefits

Given what you know about cloud, which examples illustrate its benefits? (Select all options that apply)

In seconds, you receive a large amount of storage using a cloud option.


A developer codes an application in a cloud-based environment, and, with a few simple commands, deploys
the application on the business website.
A business stores infrequently accessed data in the cloud to benefit from reduced on-premises storage costs.
An organization manages its cloud resources by using different cloud providers that are separate and isolated
from each other.
An organization requires fewer developers when it uses the cloud.

Cloud Challenges

Even with all its benefits, the cloud also presents challenges as IT struggles to balance the needs of new
and existing applications.

Pressure to provide reliability, availability, security, and governance can be compounded by a growing portfolio
of application architectures, infrastructure and cloud vendors, tools, and processes.


Consider cloud challenges in more detail.

Challenge: Siloed Teams


As you add more cloud services and providers, incompatibility between them creates new silos and
challenges. Your organization might find it has separate teams and skill sets, different management tools, and
different operating processes.

How do you avoid silos and unify multiple clouds?

You can address this challenge by applying network virtualization technology to public clouds:
• Connect public clouds and services securely.
• Deploy secure network architectures that span multiple clouds with VMware NSX®.

Challenge: Managing Across Clouds


You want to manage resources in different clouds, but how can you manage what you cannot see?

The solution is a common operating environment that gives you visibility and tools to view and manage
resources, workloads, and operations across clouds.

In this way, you can avoid cloud vendor lock-in, monitor operations, and manage to specific service-level
agreements (SLAs).

Although organizations might not outright choose a multi-cloud architecture, they often find themselves using
a multi-cloud approach to increase innovation.

VMware Multi-Cloud Vision


VMware offers a vision for multi-cloud that helps to address cloud challenges. In its view, cloud is less about
where you run apps, and more about how you deliver business innovation.

A multi-cloud environment is where apps are built and deployed quickly, and new capabilities are continuously
added.

You can deploy apps anywhere across a distributed cloud and move apps freely to the best cloud.

Implementing the Vision

Do you recall the solution that this lesson started with, namely, VMware Cloud? It was developed to help
address cloud challenges.

VMware Cloud delivers multi-cloud services that span the data center, edge, and any cloud. It has two main
parts:

• Infrastructure: You redefine IT with cloud capabilities, modern architectures, and consistent, global
operations in the data center, cloud, and edge.
• Management: The goal for the management layer is to bring together the architecture components in a
common set of management and security tools. So you have a single view into all of them: a single way
of managing, governing, and securing them.

Knowledge Check: VMware Multi-Cloud Vision

Which statement best describes the VMware multi-cloud vision? (Select one option)

Provide cloud services through the infrastructure using existing tools and outsource the management of the
infrastructure
Deliver infrastructure across all clouds and in the datacenter and edge and manage and secure the
infrastructure with a common set of tools.
Modernize applications in the cloud of your choice, using the cloud-native services of that cloud provider,
including their management services.



VMware Cloud Operating Model
Wednesday, January 11, 2023 10:02 AM

Learner Objectives

After completing this lesson, you should be able to:


• Describe the principles of the VMware Cloud operating model
• Describe the benefits of the VMware Cloud operating model

Everyone is impacted by the digital transformation. It is happening in every company and in every industry.

The goal of digital transformation is to become a digital business. The transformation process changes how
companies deliver services and products to their consumers.

Banks were one of the first businesses to go on the digital transformation journey.

Most banks now have a web or mobile application that customers can use to access
their bank accounts.

Customers choose a bank based on their experience of interacting with it. So banks
must continuously innovate and transform themselves to keep up with changing
customer demands.

Why Digital Transformation?

What drives a business toward digital transformation? Why are companies embarking on this journey?

• Improve Customer Experience - Nowadays, people expect to consume services digitally, through a web
app or mobile app.

• Sustain Growth and Increase Revenue - Becoming digital is key to remaining relevant and sustaining
growth. And with the right strategy and investments in digital, the transformation to digital can become
an accelerator for increasing revenue.
• Protect Brand and Decrease Risk - It is becoming more and more important for a company to protect its
digital assets. This need only increases with digital transformation. Digital transformation should drive
security and compliance as a result, decrease risk and protect the company's brand from negative press.
• Decrease Time to Market and Increase Competitive Advantage - The key goal of digital transformation
is to add capabilities that will help a company move its business forward.

Digital Transformation is Powered By Applications

Source: VMware Market Insights Study, March 2021, based on research of 1,200 organizations globally

To become a digital business, a company must be able to:

• Deliver modern applications at the speed that the business demands.


• Operate across any cloud, with flexibility to run applications in the data center, at the edge, or in the
cloud.
• Rapidly transform the business while delivering enterprise-level resiliency, security, and operations.

Knowledge Check: Digital Transformation

Why are companies transforming into digital businesses? (Select three options)

To enable customers to consume services through web or mobile apps.


To create and promote a new brand and thereby increase sales.

To continue running business-critical applications in on-premises data centers.
To remain relevant and sustain growth.
To decrease the time to market for services and increase their competitive advantage.

IT Challenges

Traditionally, services come from the data center and are managed by customers themselves. But more
services are now being consumed from multiple public clouds and service providers.

A multi-cloud strategy presents challenges for IT, which is still responsible for delivering IT services.

How can IT control multiple services from multiple cloud providers? And how can IT manage multiple
cloud environments so that consumers get the cloud resources and services that they require?

Enterprise IT organizations should be able to provide IT services from different cloud providers and still remain in
control.

Complexity and Its Challenges

Multiple clouds bring complexity. And complexity can hinder organizations from getting the benefits of multi-
cloud.



Instead of agility, lower costs, and fewer risks, IT organizations experience the opposite of these benefits:

• Decreased Agility - Due to bureaucratic processes and complexity, agility might decrease, and IT might be
perceived as too slow, as the bottleneck. It can take 7.4 years to refactor and migrate 100 apps to the
cloud. (Hybrid Cloud Trends Survey, The Enterprise Strategy Group, March 2019)
• Higher Costs - Costs typically increase due to limited visibility and IT not being able to transform toward
cloud. Along the way, efficiency is reduced. It can cost 1 million USD to move 1,000 workloads from one
cloud to another. (VMware white paper: Six Ways Application Requirements Drive Your Infrastructure
Decisions, Sept 2019)
• Higher Risk - Risk is introduced if a siloed approach is taken when consuming multiple clouds. The
environment becomes complex and disjointed, resulting in an increase in risk for the business. 90% of
organizations reported skills shortages in cloud-related tasks. (2019 Trends in Cloud Transformation, 451
Research, Nov 2018)

IT must rethink how it provides services.

Traditional and Cloud-Based IT

What are the differences between traditional and cloud-based IT?

The cloud operating model provides a framework for adopting the cloud.

In a cloud operating model, services are automatically delivered to those who consume them:
application developers and the application owners in each line of business (LOB). These service
consumers can consume any type of service from any type of cloud: edge, private, hybrid, or public.
The IT organization delivers these services while maintaining control of the environment.

Digital transformation is a business strategy. Application, cloud, and financial strategies work
together to build a digital business.

A key part of implementing the cloud operating model is aligning your applications, clouds, and investments
with your business strategy.

• Application Strategy
○ Your application strategy defines what applications you need to support the business.
○ The strategy might involve building new, modern applications. Or it might involve using the
existing applications, and possibly modernizing them as well.
• Cloud Strategy
○ Your cloud strategy should define the cloud resources needed to align to the business outcomes
and application requirements.
• Financial Strategy
○ Your financial strategy is used to manage your investments.
○ You must ensure that your costs are under control, while providing the right resources at the right
cost for your applications.
○ Cost governance and compliance become important in a multi-cloud environment.



Application Strategy: The Five Rs

A business consumes applications, and applications can run on any cloud. In a multi-cloud environment,
existing and new applications must run on their clouds of choice.

As you move toward a multi-cloud environment, you must determine your application strategy.

One helpful tool is the five Rs:

• Retain
○ Retain applications that already exist and ensure that they are optimally supported based on key
requirements such as security, privacy, performance, and data gravity.
• Rehost
○ Rapidly relocate (migrate) applications, without recoding or refactoring, to any cloud, based on the
organization's goals. You match the needs of each app to the best cloud environment.
• Replatform
○ Leverage Kubernetes for new and existing applications to improve application deployment speed
while evolving to a more flexible, more reliable architecture. For example, you can move
applications from virtual machines in your data center to containers in a public cloud.
• Refactor
○ Use modern application design, microservices, and cloud-native principles to refactor (restructure)
existing applications or build new applications.
• Retire or Replace
○ Decommission an existing application, or replace the application with software as a service (SaaS).

The cloud operating model brings together application and cloud strategies so that both new and existing
applications are managed and operated in a multi-cloud environment.

To move to a cloud operating model, you must transform the people, processes, and technology.

The cloud operating model encompasses the people, processes, and technology that are required for
implementing the business, application, and cloud strategies.

• People
○ Moving from the traditional IT model to a cloud operating model affects the people in the
organization.
○ The IT team must rethink how to manage and provide services to the business.
○ People must align to the organization's objectives and business-level KPIs (key performance
indicators).
• Process
○ The IT team must rethink their current processes and adopt a model for cloud operations and
management that focuses on delivering services.
○ IT must automatically deliver services, from development to consumption.
• Technology
○ The IT team must have the right technology from an infrastructure and cloud management
perspective, to align with application and business requirements.

With the VMware cloud operating model, you can build a cloud on VMware technology, or unify an existing
multi-cloud infrastructure.



• Build VMware Clouds
○ Many organizations are moving away from traditional IT. With VMware virtualization technology
and cloud management that supports application modernization, these organizations are building
clouds.
○ VMware cloud management technology provides a cloud experience that organizations can
manage and maintain themselves or through SaaS services. With this key technology, customers
can transform their traditional IT into cloud operations.
• Embrace Multi-Cloud
○ VMware empowers organizations to embrace public clouds.
○ Your cloud operations team must manage and govern cloud resources from public cloud providers.

Embracing Multi-Cloud

How can you create a cloud operating model for managing multiple clouds?

Video Transcript



VMware makes it possible for IT organizations to embrace the way of multi-cloud and create a cloud
operating model that has the ability to manage multiple clouds.

The way we do this is by looking at what is required for both existing and modern applications to run on this
hybrid stack. Your applications could be built on top of VMs, containers, Kubernetes, or native public cloud
services

These applications could be hosted inside an edge, private, public, or hybrid cloud infrastructure. And no
matter where your workloads are hosted, or what your workloads look like, you must have a
consistent management experience across all of your infrastructure.

Customers who consume public cloud services from AWS, Azure, Google, and other public cloud providers
have their compute and hardware hosted in the providers' data centers. This experience is something that
cloud consumers have become accustomed to: your infrastructure is easily accessible when needed.

In order to get that same experience within the data center we need to standardize and modernize the
infrastructure across your multi-cloud landscape.

VMware has a long history of doing this through our software-defined data center, or SDDC, approach to
data center design. Now, VMware also delivers the SDDC as a service.

While it is powerful, SDDC as a service won’t provide a full picture across all of your compute and
infrastructure needs. For that, we need to have a cloud management platform that will transform your
infrastructure from disparate clouds into a true VMware cloud.

How do we build this VMware cloud?

First, let’s look at the SDDC itself. A variation of VMware Cloud Foundation™ is used to automatically deploy
instances of VMware vSphere®, VMware vSAN™, and VMware NSX®. These three components are the
foundational parts of a vSphere SDDC.

Instead of using VMware Cloud Foundation on your own infrastructure, the SDDC can also be consumed as a
service by leveraging VMware services, such as VMware Cloud™ on AWS or VMware Cloud™ on Dell. You can
also spin up an SDDC as a service using one of our partners, such as Azure VMware Solution, Google Cloud
VMware Engine, or one of the more than 4,500 VMware certified partners worldwide.

In addition to this more traditional stack, we utilize VMware vRealize® Cloud Management™ together with
our VMware Tanzu® application modernization technology to add cloud capabilities to our platform.

All of these components work together to create VMware Cloud. VMware Cloud gives you a unified cloud
experience with the public clouds. It allows you to provide services to the consumers of the cloud. Your
developers and lines of business will see increased flexibility and scalability as a result.

With so many different options for compute both on premises and in multiple clouds, you must have the
ability to manage them all together. vRealize and VMware Tanzu have multi-cloud capabilities. And we enrich
that with CloudHealth® for cost management and cloud security.

All of these products and solutions snap together like a puzzle, working in tandem to make the VMware cloud
operating model.

Knowledge Check: Multi-Cloud Environment

Which technologies support the main components of a multi-cloud environment? Match each component to
the solutions that support it.



The VMware cloud operating model delivers three main competencies for multi-cloud management: service
delivery, operations, and governance.

• Service Delivery: Accelerate
  ○ This competency focuses on the delivery of services to accelerate the ability to deliver and update
    applications. It focuses on delivering a modern applications platform by leveraging the services
    from the VMware Cloud Management stack.
  ○ When done successfully, your organization will deliver services with speed, unlock app innovation,
    and support DevOps best practices.

Benefits to Consumers and Cloud Providers

The VMware cloud operating model provides benefits to both the consumer (application developers and lines
of business) and the cloud service provider.

Benefits to the Consumer:

• Delivers a modern developer platform to unlock app innovation
• Integrates infrastructure into DevOps and application pipelines
• Empowers developers with app-context view across resources
• Empowers developers with application discovery and mapping
• Empowers developers to make informed placement decisions
• Enables developers with protection from security vulnerabilities

Benefits to the Cloud Provider:

• Automatically delivers consistent services across VMs, Kubernetes, containers, and native clouds
• Applies DevOps best practices to cloud infrastructure management
• Adopts cloud best practices for proactive and intelligent operations
• Gains app-aware visibility across cloud infrastructure and resources
• Provides cost governance, accountability, and clarity for the whole organization
• Allows platform teams to enforce compliance across cloud workloads

Knowledge Check: Cloud Operating Model

True or False: The cloud operating model is a framework for implementing a cloud strategy.

True
False

Which benefits does the VMware cloud operating model provide? (Select two options)

Helps organizations to transition to cloud operations through the use of public clouds only
Defines the people, processes, and technology that are required to deliver a cloud strategy
Adopts a siloed approach from traditional IT organizations by managing each cloud separately
Delivers services, intelligent operations, and governance for multi-cloud management



VMware Cross-Cloud Solutions
Wednesday, January 11, 2023 1:04 PM

Learner Objectives

After completing this lesson, you should be able to:


• Recognize the benefits of VMware cross-cloud solutions
• Identify types of cross-cloud solutions

Addressing Challenges of Multiple Clouds


Using an integrated portfolio of software-as-a-service (SaaS) solutions, you can balance speed,
complexity, and control in a multi-cloud environment. Such solutions provide a unified way to
build, deploy, manage, and connect applications across multiple clouds.

Consider how cross-cloud solutions can provide control and consistency as you develop and
manage applications across different clouds.

• App Platform
○ With a flexible application platform, developers can build and deploy applications in
different types of clouds in a consistent way.
• Cloud Infrastructure
○ The cloud infrastructure helps you to operate and run enterprise applications.
• Cloud Management
○ Cloud management tools help you monitor and manage the performance and cost of
applications across different clouds.
• Security & Networking
○ Security and networking span entire multi-cloud operations so that you can connect
and better secure all applications, and set security policies across all clouds.
• Digital Workspace & Edge
○ A digital workspace service empowers a distributed workforce to deploy and
manage edge-native applications.

Knowledge Check: Benefits of Cross-Cloud Solutions

Which examples demonstrate the benefits of cross-cloud solutions? (Select two options)

Developers build and deploy apps faster


Clouds work independently of one another.
Cloud administrators view and manage operations, security, and compliance across all
clouds.
IT administrators use on-premises hardware to create redundancy solutions for cloud
provider outages.



Multi-Cloud Management with vRealize
Wednesday, January 11, 2023 1:12 PM

Learner Objectives:
After completing this lesson, you should be able to:

• Explain how vRealize Log Insight Cloud supports multi-cloud environments


• Explain how vRealize Automation Cloud supports multi-cloud environments
• Recognize use cases for vRealize Operations Cloud
• Recognize use cases for vRealize Network Insight Cloud

Managing Across Clouds

A retail organization is pursuing a multi-cloud strategy. It wants to use different cloud providers
for data storage and for its online store applications.

Cloud administrators want visibility and tools to view and manage the resources and operations
across these different cloud environments.

And developers are concerned with application performance and cost, and pushing new code as
fast as possible.

How do you manage and develop applications in multiple clouds?



vRealize Cloud Management provides services across private and public clouds.

Cloud management solutions can help organizations manage their cloud infrastructure.

vRealize® Cloud Management™ provides a core set of products and services for managing cloud
environments.

vRealize Suite

vRealize Cloud Management includes the VMware vRealize® Suite products, which provide a
comprehensive management stack for IT services on vSphere and other hypervisors, physical
infrastructure, and multiple public clouds.

• vRealize Operations
○ Automates IT management, providing full-stack visibility across various
infrastructures.
• vRealize Automation
○ Automates multiple clouds with secure, self-service provisioning.
• vRealize Network Insight
○ Helps you build a secure network infrastructure across cloud environments.
• vRealize Log Insight
○ Collects and analyzes logs for troubleshooting and problem resolution.

Every vRealize product is available on premises and as a cloud service.

To address the volume of log data that is generated across multiple clouds, you use vRealize Log
Insight Cloud.

This service collects and analyzes logs from cloud, virtual, and physical
infrastructures in a central location. You can view log data and actionable
insights and query data quickly, using several features.

Features:

• VMware-Authored Insights
○ Insights provide useful information for troubleshooting and auditing events in your
multi-cloud environment. Insights provide information about what is happening in
your SDDC, including critical information about VMware ESXi.
• Query Facility
○ A query facility supports troubleshooting for novice and experienced administrators.
• Alerts
○ You can access built-in alerts or create custom alerts.
• Notifications
○ You can get notifications in different ways, including Syslog forwarding and email.
• Authentication Support
○ Support for local or federated authentication is available, depending on your
security environment.

How vRealize Log Insight Works

Events and logs can be forwarded between on-premises vRealize Log Insight, Syslog, and other
logging tools, and vRealize Log Insight Cloud.

A cloud proxy receives log and event information from monitored sources and sends this
information to vRealize Log Insight Cloud, where it can be queried and analyzed.

vRealize Log Insight Cloud includes the cloud proxy as an OVA file, which you can
download and install as a VM. The cloud proxy is also available as an Amazon Machine
Image (AMI) for deployment in Amazon Elastic Compute Cloud (Amazon EC2).

Subscribing to vRealize Log Insight Cloud

You can sign up for the vRealize Log Insight Cloud service and set up your organization, billing,
and subscription in VMware Cloud.

You can subscribe to vRealize Log Insight Cloud in different ways:

VMware Cloud Core Subscription


The vRealize Log Insight Cloud core subscription is included for all VMware Cloud
organizations using VMware Cloud on AWS.

The subscription includes the following features:


• VMware Cloud on AWS audit logs, activity logs, events, and alarms
• Combined logs (1 GB):
▪ VMware Cloud on AWS NSX-T Data Center firewall logs
▪ VMware Cloud services platform audit logs
▪ VMware on-premises product logs
▪ Other logs (third party)
• Seven days of queryable log retention
• Visualization dashboard
• Searching and saving queries

Trial and Standalone Subscriptions


When you start using the vRealize Log Insight Cloud service, you enter a free trial
subscription period of 30 days. During this period, you can use all the features in the
service.

The trial tier includes the following features:


• Unlimited collection of the following logs:
▪ VMware Cloud on AWS NSX-T Data Center firewall logs
▪ VMware Cloud services platform audit logs
▪ VMware on-premises product logs
▪ Other logs (third party)
• 30 days of queryable log retention
• Additional nonaudit log collection (pay per GB per month with the paid tier)
• VMware log content packs (vSphere, vSAN, NSX-T Data Center, and more)
• Third-party log content packs (AWS, Microsoft, and more)

After the trial ends, you have the following options:


• If you are not a VMware Cloud on AWS user, you must upgrade to a standalone
premium subscription to continue using vRealize Log Insight Cloud.
• If you are a VMware Cloud on AWS user, you can get an extension beyond the free
trial subscription period (30 days + 15 days grace period) or after the expiration of
the trial.
• If you do not need an extension, you can continue with the VMware Cloud core
subscription or upgrade to a premium subscription.

You can access vRealize Log Insight Cloud at


https://www.mgmt.cloud.vmware.com/li/.



Knowledge Check: vRealize Log Insight Cloud

Which description most accurately explains the function of vRealize Log Insight Cloud in a multi-cloud
environment? (Select one option)

Collects and analyzes log data from your entire environment so you can view and resolve
problems from one place.
Collects log data on premises and exports the information to the cloud environments for cloud
providers to interpret.
Analyzes data that is collected by cloud providers and uses the data to determine financial costs
of each provider.

vRealize Automation Cloud automates the delivery of virtual machines, applications, and
personalized IT services across a multivendor, multi-cloud infrastructure.

vRealize Automation Architecture

Administrators, developers, and business users can access a common service catalog to request
IT services, including infrastructure, applications, and desktops.



For example, you might request cloud templates to deploy an app so that you can work on a
new feature.

vRealize Automation Cloud Services

vRealize Automation Cloud includes the following services: VMware Cloud Assembly™,
VMware Service Broker™, and VMware Code Stream™.

In addition, vRealize Automation Cloud contains an embedded VMware vRealize®
Orchestrator™ instance.

SaltStack Config

VMware vRealize® Automation SaltStack® Config is tightly integrated with vRealize Automation
and is one of its key product features. SaltStack Config is available for both the on-premises and
cloud versions of vRealize Automation.

SaltStack Config Interface

SaltStack Config provisions, configures, and deploys software to your virtual machines at any
scale using event-driven automation.

You can also use SaltStack Config to define and enforce optimal, compliant software states
across your entire environment.

If you have an active vRealize Automation Cloud license, you are eligible for a SaltStack Config
cloud integration. You can request a SaltStack Config cloud integration using the VMware Cloud
Services console.

You can purchase an enhanced license that includes SaltStack SecOps, which includes two
libraries of content: Compliance and Vulnerability.
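SaltStack Config enforces desired states written as Salt state (SLS) files, which are plain YAML. As an illustration only (the state name, file path, and package are hypothetical examples, not part of this course), a minimal state that keeps a web server installed and running might look like this:

```yaml
# /srv/salt/nginx/init.sls -- hypothetical example Salt state
# Ensure the nginx package is installed and its service is enabled and running.
nginx:
  pkg.installed: []
  service.running:
    - enable: True
    - require:
      - pkg: nginx        # start the service only after the package exists
```

Applying a state like this to a fleet of machines enforces the same software state everywhere, which is how SaltStack Config helps define and maintain compliant configurations across an environment.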

Cloud Assembly is a cloud-based service that you use to create and deploy machines,
applications, and services to your cloud infrastructure.

This service includes the following features:

• Multiple Cloud Accounts
  ○ Cloud accounts can include vCenter Server, Amazon AWS, Microsoft Azure, and
    Google Cloud Platform.
• Templates
  ○ You can use Cloud Assembly to create and manage VMware Cloud Templates as
    code in YAML format.
• Self-Service Provisioning
  ○ You can download and manage blueprints and appliances from VMware
    Marketplace.
• Extensibility
  ○ You can use the built-in vRealize Orchestrator instance to design and manage
    workflows for custom IT resources.
• Kubernetes Integration
  ○ You can use Tanzu Kubernetes Grid Integrated Edition to provision and manage
    Kubernetes clusters.
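The Templates feature can be sketched with a minimal VMware Cloud Template in YAML. This is a simplified example; the resource name and the image and flavor values are illustrative placeholders whose mappings a cloud administrator would define per cloud account:

```yaml
# Minimal VMware Cloud Template (illustrative values)
formatVersion: 1
inputs:
  size:
    type: string
    default: small          # flavor mapping defined by the cloud administrator
resources:
  web-vm:
    type: Cloud.Machine     # cloud-agnostic machine resource
    properties:
      image: ubuntu         # image mapping defined per cloud account
      flavor: '${input.size}'
```

Because Cloud.Machine is cloud-agnostic, the same template can be deployed to vCenter Server, AWS, Azure, or Google Cloud Platform, depending on where the project's cloud zones place it.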

Using Cloud Assembly



Cloud Assembly Templates

Cloud administrators and cloud template developers use Cloud Assembly in different ways.

Cloud administrators:

• Configure the cloud vendor infrastructure to support cloud-agnostic template development
  and deployment for multiple clouds.
• Set up projects, add groups or users, and enable access to resources in cloud accounts or
  regions.
• Import or develop cloud templates, or delegate development to the project administrators
  and members.

Cloud template developers:

• Create and iterate on templates until they meet development needs.
• Deploy templates to the supporting cloud vendors based on project membership.
• Manage the deployed resources throughout the development life cycle.

Service Broker supports multi-cloud environments by providing predefined services that run on
different cloud environments, including VMware-based private and hybrid clouds, and native
public clouds.

Example of Catalog Items page in Service Broker interface

In Service Broker, the Catalog Items page includes sample templates.

How Service Broker Works

Using Service Broker

Cloud administrators:

• Provide Service Broker as a portal for users, such as operations and development teams.
• Import content such as Cloud Assembly cloud templates, AWS CloudFormation templates,
  and extensibility actions.
• Configure governance in the form of projects to control accessibility of resources and
  deployment location.

Users:

• Request and monitor the provisioning process.
• After deployment, manage the deployed catalog items throughout the deployment life
  cycle.

Code Stream is continuous integration and continuous delivery (CI/CD) software that delivers
software rapidly and reliably, with little overhead.

Code Stream works in the following way:

• You create CI/CD pipelines that automate your entire DevOps life cycle, using existing
development tools.
• Code Stream runs your software through each stage of the pipeline until it is ready to be
released.
• You can integrate the pipeline with one or more DevOps tools, which provide data for the
pipeline to run.



For example, when a developer checks in code to a Git repository, Code Stream can trigger the
pipeline and automate the build, test, and deployment of an application.
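The flow described above can be pictured as a pipeline of stages, each containing tasks. The YAML below is a simplified, hypothetical outline of that stage/task structure, not an exact Code Stream pipeline export; the pipeline, stage, and task names are invented for illustration:

```yaml
# Hypothetical outline of a CI/CD pipeline triggered by a Git check-in
pipeline: app-release
stages:
  Build:
    tasks:
      compile:         # build the application from the checked-in code
        type: CI
  Test:
    tasks:
      unit-tests:      # run the automated test suite against the build
        type: CI
  Deploy:
    tasks:
      deploy-app:      # roll the validated build out to the target endpoint
        type: Deploy
```

The software moves through each stage in order, and the run stops if a task fails, so only builds that pass every stage reach the release step.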

Code Stream can be integrated with other vRealize Automation services.

Integrating Code Stream with Other Services



You can integrate Code Stream with other cloud services.

For example, you can publish your Code Stream pipeline to Service Broker as a catalog item that
can be requested and deployed on cloud accounts or regions.

Or you can deploy a Cloud Assembly cloud template and use the parameter values the cloud
template exposes.

vRealize Orchestrator is a development and process-automation platform that provides an
extensive library of workflows and a workflow engine.



vRealize Orchestrator interface shows a hierarchical tree that supports easy management of workloads
at scale.

With the workflow engine, you can create and run workflows that automate orchestration
processes.

You run workflows on objects of different technologies that vRealize Orchestrator accesses
through a series of plug-ins:

• vRealize Orchestrator provides a standard set of plug-ins, including a plug-in for vCenter
Server, with which you can orchestrate tasks in the different environments that the plug-
ins expose.

• vRealize Orchestrator also presents an open architecture for plugging in external third-
party applications to the orchestration platform.


Knowledge Check: vRealize Automation Cloud Services

Your organization is using vRealize Automation Cloud to support its multi-cloud strategy. Which
automation services do you use for each multi-cloud task?

Connecting vRealize Automation Cloud and Cloud SDDCs

To connect vRealize Automation Cloud to a cloud SDDC, you must define resource
infrastructure and cloud template settings for deployment to the cloud SDDC environment.

For example, to connect VMware Cloud on AWS and vRealize Automation Cloud, you perform
the following general procedure.

Note: The procedure requires that the VMware Cloud on AWS SDDC is configured with basic
networking and other parameters.

1. Configure a basic VMware Cloud on AWS workflow.

In this procedure, you configure a VMC on AWS workflow in vRealize Automation Cloud:
• Deploy a new cloud proxy to your VMC on AWS SDDC in vCenter.
• Create a VMC on AWS cloud account that accesses the proxy.
• Configure infrastructure that supports cloud template deployment to resources in
your VMC on AWS environment.


2. Configure an isolated network.

In this procedure, you add an isolated network for your VMC on AWS deployment in
vRealize Automation Cloud.

This procedure expands on the basic VMC on AWS workflow:

• Define an isolated network for a VMC on AWS deployment in vRealize Automation


Cloud.

You can configure network isolation for a VMC on AWS deployment by using either
of the following procedures:

▪ Configure on-demand network-based isolation in vRealize Automation Cloud


▪ Configure on-demand security group-based isolation in vRealize Automation
Cloud

• Define a network component in a cloud template to support network isolation for


VMC on AWS in vRealize Automation Cloud.

In this step, you drag a network machine component onto a vRealize Automation
Cloud cloud template canvas and add settings for an isolated network deployment
to your target VMC on AWS environment.

Connecting vRealize Automation Cloud

For more information about prerequisites and procedures, access Tutorial: Configure VMware
Cloud on AWS for vRealize Automation Cloud in the VMware vRealize Automation Cloud
documentation.

Knowledge Check: Connecting vRealize Automation Cloud

You want to connect vRealize Automation Cloud and your VMware Cloud on AWS SDDC. Which
steps do you take? (Select two options)

Configure on-demand network-based isolation in vRealize Automation Cloud


Deploy a new cloud proxy to your VMware Cloud on AWS SDDC in vCenter and then create a
VMware Cloud on AWS cloud account that accesses the proxy.
Disable management gateway firewall rules in the SDDC's VMware Cloud on AWS console to
support cloud proxy communication.
Configure an encrypted network for VMC on AWS.

vRealize Operations Cloud automates and simplifies IT management with full-stack visibility
across physical, virtual, and cloud environments.



You can manage, monitor, and troubleshoot VMs across multiple SDDCs. And you can
manage those environments from a single console.

Use Cases

With its unified operations platform, vRealize Operations Cloud supports several use cases:

• Continuous performance optimization


Real-time predictive analytics and AI help to automatically balance workloads and
proactively avoid contention.

Workload balancing and placement can be automated on the following platforms:


▪ VMware Cloud Foundation
▪ vSphere
▪ vSAN
▪ VMware Cloud on AWS

• Efficient capacity and cost management


Using a real-time, forward-looking capacity analytics engine, vRealize Operations can
predict future demand and provide actionable recommendations.

vRealize Operations provides the following options:


▪ Reclamation of resources
▪ Cloud migration planning

• Intelligent remediation
With vRealize Operations, you can access data across your environment, from one
place.

You can predict, prevent, and troubleshoot problems faster using actionable insights that
correlate metrics and logs.

Alerts notify you when objects in your environment experience problems.

• Integrated configuration and compliance


To ensure your environment's adherence to common requirements, vRealize
Operations uses compliance templates.

You can set compliance on your objects to meet defined standards, and vRealize
Operations determines the compliance of your objects with those standards.

Workload Optimization

Using the Workload Optimization feature, you can move virtual compute resources and their
file systems dynamically across datastore clusters in a data center.

For example, you can perform the following tasks:


• Rebalance VMs and storage across clusters to relieve demand on an overloaded individual
cluster and maintain or improve cluster performance.

• Set automated rebalancing policies to emphasize VM consolidation, which potentially


frees up hosts and reduces the resource demand.

• Automate a significant portion of your data center compute and storage optimization
efforts.

How Workload Optimization Works

vRealize Operations Cloud monitors virtual objects and collects and analyzes related data,
which is presented in graphical form on the Workload Optimization page.

You use the information on this page to determine whether an action is required. If an action is
required, you can select the appropriate optimization function to help resolve the issue.

Which statements do you think describe possible actions for optimizing workloads? (Select
three options)



Move compute and storage resources from some clusters to other clusters in a data
center to free up a host on one cluster.
Run a regularly scheduled optimization action on volatile compute and storage resources
in a given data center.
Create a rebalancing plan that automatically runs an optimization action on strained
compute and storage resources.
Automatically identify all architectural constraints and run an optimization plan based on
the constraints.

Configuring and Using Workload Optimization


For information about configuring and managing the Workload Optimization feature, access the
vRealize Operations Cloud documentation.

Managing Costs

With its capacity and cost management features, vRealize Operations Cloud can predict future
demand and provide actionable recommendations to help in managing costs.

• Reclamation of Existing Resources


Assess workload status and resource contention in data centers across your environment.
• Determine the time remaining until CPU, memory, or storage resources run out.

• Future Infrastructure Requirements


Run what-if scenarios:
• Identify how much capacity remains after you add or remove VMs or hosts.
• Add hyperconverged infrastructure (HCI) nodes.

• Cloud Migration Planning


Migration planning shows you the capacity and cost information after the migration to a
cloud-based infrastructure.
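The "time remaining" calculation described above can be approximated with a simple linear trend over recent usage samples. This sketch is illustrative only; the sample data is invented, and the real analytics engine uses far richer forecasting:

```python
# Minimal sketch of "time remaining": fit a linear trend to recent usage and
# extrapolate to the point where capacity runs out. Illustrative only.

def time_remaining(usage, capacity):
    """usage: list of (day, used) samples; returns days until full, or None."""
    n = len(usage)
    mean_x = sum(d for d, _ in usage) / n
    mean_y = sum(u for _, u in usage) / n
    slope = sum((d - mean_x) * (u - mean_y) for d, u in usage) / \
            sum((d - mean_x) ** 2 for d, _ in usage)
    if slope <= 0:
        return None                      # usage flat or shrinking: no runout
    _, latest_used = usage[-1]
    return (capacity - latest_used) / slope

# Storage grows roughly 10 GB/day toward a 500 GB datastore.
samples = [(0, 400), (1, 410), (2, 420), (3, 430)]
days_left = time_remaining(samples, capacity=500)   # 7.0 days
```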

Cost Overview

vRealize Operations Cloud supports costing for private clouds, public clouds, and VMware Cloud
infrastructure.

You can track expenses for a single virtual machine and identify how these expenses
contribute to the overall cost associated with your private cloud accounts and VMware Cloud
infrastructure accounts.

On the Cost Overview home page in vRealize Operations Cloud, you can find details about the
costs associated with your VMware Cloud infrastructure accounts, public cloud accounts, and
your private cloud accounts.

You can view the Total Cost of Ownership, Potential Savings, and Realized Savings for your
VMware Cloud infrastructure cloud accounts and vSphere private cloud accounts, and Total
Cost of Ownership for your private cloud accounts.



You can view the cost details for the following private and public cloud accounts in vRealize
Operations Cloud:
• vSphere on-prem
• VMC on AWS
• Azure VMware Solutions
• Amazon Web Services
• Microsoft Azure
• Google Cloud

Knowledge Check: Managing Costs with vRealize Operations Cloud

How can you use vRealize Operations Cloud to analyze and manage costs? (Select three options)

Identify underutilized VMs and reclaim them.


Determine how much capacity remains after you add VMs.
Use migration planning to set limits on capacity of cloud-based infrastructure.
Track costs associated with the private cloud but not the public cloud.
Plan the migration of existing workloads from a private vSphere cloud environment to a
multi-cloud environment for comparison.

Cost Management with vRealize Operations Cloud


For more information about managing costs with vRealize Operations Cloud, access the
VMware product documentation.

Troubleshooting Workbench

You can use the Troubleshooting Workbench to analyze alerts and changes in your environment
when troubleshooting problems.

On the Troubleshooting Workbench home page in vRealize Operations Cloud, you can find
active troubleshooting sessions and recent searches. The page also includes a search bar.

The active troubleshooting sessions do not persist after you log out of vRealize Operations
Cloud. But the next time that you log in, your earlier active sessions appear as recent searches.



Troubleshooting Workbench home page

You can start the Troubleshooting Workbench with an alert in context from the alert
information page, or you can search for an object and start the Troubleshooting Workbench to
investigate known or unknown issues related to the object.

How Troubleshooting Workbench Works

On the Potential Evidence tab, you look for evidence of a problem within a specific scope and
time range. Extending the time range and scope can reveal more evidence for troubleshooting.

You can select only the object that you are investigating or include several upstream and
downstream relationships by increasing the scope. As you increase the scope, more objects
appear in the inventory tree.



Potential Evidence tab with a range of Last 12 hours selected

By increasing the scope to include additional objects, you can view new evidence. In the
example, significantly more events, property changes and anomalous metrics appear.



Potential Evidence tab where new objects are added

You investigate potential evidence in an object's events, property changes, and anomalous
metrics.

• Events
• Shows events based on change in metrics, for example, events where metrics breach
the usual behavior, and major events that occur in the selected scope.
• Property Changes
• Shows important configuration changes that occurred in the selected scope and
time, including both single and multiple property changes.
• Anomalous Metrics
• Focuses on metrics that show drastic changes in the selected scope and time.
Results are ranked according to the degree of change.

You can select individual metrics directly from the Potential Evidence tab for comparison.

After you select the metrics that you want to compare, you click the Metrics tab to view the
metrics.



In the example, several metrics are selected by clicking the pin in the card.

Correlating Metrics
Correlation is the key to focusing efforts in the right area when investigating problems. You click
the Correlation icon to investigate the potential root causes through pattern matching.

Metric correlation identifies the metrics with similar patterns of behavior in a time range. In
this way, you can access relevant data that helps you to resolve problems faster.
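At its core, metric correlation is a similarity measure over time series. The following sketch ranks candidate metrics by Pearson correlation; the metric names and values are invented, and the product's actual pattern-matching method may differ:

```python
# Sketch of metric correlation: rank candidate metrics by Pearson correlation
# with the metric under investigation. Data and metric names are invented.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

cpu_ready = [5, 7, 12, 30, 28, 9]                 # metric being investigated
candidates = {
    "vm.cpu.usage":  [40, 45, 60, 95, 90, 50],    # spikes at the same time
    "vm.disk.write": [10, 10, 11, 10, 10, 11],    # unrelated background noise
}

ranked = sorted(candidates, reverse=True,
                key=lambda name: pearson(cpu_ready, candidates[name]))
# ranked[0] is the metric whose pattern best matches cpu_ready
```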



Knowledge Check: Troubleshooting Workbench

True or False: Changing the scope or time in the Troubleshooting Workbench changes the
potential evidence identified by the tool.

True
False

Compliance Benchmarks

Compliance benchmarks show score cards that help you proactively detect compliance
problems in vRealize Operations Cloud. The compliance benchmarks are measured against a set
of standard rules, regulatory best practices, or custom alert definitions.

If an object is not compliant with a specified standard, vRealize Operations Cloud generates an
associated alert.

vRealize Operations Cloud displays compliance score cards for VMware SDDC, custom, and
regulatory benchmarks.
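A compliance score card amounts to evaluating objects against a rule set and scoring the results. The following toy sketch uses invented rule names, an invented host record, and an invented scoring formula:

```python
# Toy compliance check: score an object by the share of rules it passes.
# Any failed rule would trigger an alert in the real product. Illustrative only.
rules = {
    "ssh-disabled":   lambda host: not host["ssh_enabled"],
    "ntp-configured": lambda host: bool(host["ntp_servers"]),
}

host = {"name": "esxi-01", "ssh_enabled": True, "ntp_servers": ["10.0.0.1"]}

failed = [name for name, check in rules.items() if not check(host)]
score = 100 * (len(rules) - len(failed)) // len(rules)
# host fails "ssh-disabled", so the score is 50 and an alert would be raised
```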

Knowledge Check: vRealize Operations Cloud Features

Match each description to the appropriate vRealize Operations feature.



Knowledge Check: vRealize Operations Cloud Use Cases

Given the use cases for vRealize Operations Cloud, what benefits does this service provide for
multi-cloud operations? (Select three options)

Manual workload-balancing across on-premises and cloud environments


Multiple management interfaces to view information from each environment individually
Configuration of security and compliance policies across clouds
Actionable recommendations related to future demand
Alerting that provides recommendations for solving problems.

vRealize Network Insight Cloud provides end-to-end network visibility across VMware
NSX, VMware SD-WAN, VMware Cloud, public cloud, and other multi-cloud
deployments.
Visibility with vRealize Network Insight Cloud

vRealize Network Insight Cloud provides visibility into the network flows and security of your
on-premises and cloud applications, and it helps you to administer your NSX-based SDDC.

You can use vRealize Network Insight Cloud to monitor and diagnose problems with your
network resources. For example, you can check your network flows and your virtual machine
and NSX security rules, and plan for optimal micro-segmentation.



The vRealize Network Insight home page provides a quick summary of actions in your entire
environment.

vRealize Network Insight Cloud puts the data it gathers to good use

How vRealize Network Insight Cloud Works


vRealize Network Insight Cloud gathers data from many different source environments and
device types, including VMware vSphere hosts, NSX network virtualization, physical devices,
and public cloud networking constructs.

Overview of Application Discovery and Visibility

Take Inventory
The process starts with vRealize Network Insight Cloud collectors taking inventory of the
various physical components—switches, routers, firewalls, load balancers, and so on—as well
as virtual components, including vCenter, NSX, and AWS inventories.

Construct Meaning
vRealize Network Insight Cloud takes the networking data and constructs meaningful insights
about the networking components of applications, how those components are dependent on
each other, which are shared, and where the different components run.

By turning on network flow collection, vRealize Network Insight helps you to understand the
movement between application components. In this way, network engineers can view traffic
data from an application perspective.



Group Components
Analyzing its traffic flows, vRealize Network Insight Cloud can help determine which workloads
on the network communicate and over which protocols.

You then group these components into applications, and can mark components as shared
between applications.

Defining Security Rules


vRealize Network Insight Cloud analyzes the flow data it collects and uses that data to generate
recommended firewall rules for workloads.

Observing the traffic at a granular level and taking the underlay network and the workload into
account, it translates that information into ready-to-go firewall rules that can be easily
imported into NSX.
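Conceptually, turning observed flows into recommended rules is a de-duplication of flow tuples into allow statements. The sketch below is illustrative only; the flow fields and rule format are invented and are not the NSX import format:

```python
# Toy sketch: collapse observed flows into distinct allow rules.
# Flow tuples: (source group, destination group, port, protocol).
flows = [
    ("web", "app", 8443, "TCP"),
    ("web", "app", 8443, "TCP"),     # repeated flow maps to the same rule
    ("app", "db",  3306, "TCP"),
]

# A set removes duplicates; sorting makes the rule list deterministic.
rules = sorted({f"allow {src} -> {dst} {proto}/{port}"
                for src, dst, port, proto in flows})
```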

Search Functionality
The search functionality is fundamental to vRealize Network Insight Cloud.

Anything that you do in the interface is a search command. The search looks through all traffic
flow data; inventory across vSphere, NSX, VMware Cloud on AWS, native AWS (EC2), and Azure;
and events and metrics across time.

vRealize Network Insight Cloud Use Cases


Cloud, network, and security administrators can use vRealize Network Insight Cloud to view
usage details across all their clouds, both public and private.



Plan application security and migration
• Identify application dependencies when migrating applications to public clouds, other
data centers, or disaster recovery sites.
• Recommend firewall policies and network segmentation for apps and map dependencies
to reduce risk during migrations.

Optimize and troubleshoot virtual and physical networks


• Unify the troubleshooting experience across the virtual and physical infrastructure.
• Monitor and diagnose problems with your network resources.
• Optimize network performance by identifying bottlenecks.
• Audit network and security changes over time.

Manage and scale NSX deployments


• Scale across multiple NSX instances with visualizations for topology and health.
• Boost the uptime by proactively detecting misconfiguration errors.

Network assurance and verification


• Accomplish formal verification by using a unified mathematical model of how the network
functions.
• Get details about your network model for supported data sources in vRealize Network
Insight and search paths using IP addresses.
• Ensure uptime and network resilience through network planning and path
troubleshooting.
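Path search over a network model is, at its core, graph traversal. Below is a minimal breadth-first sketch; the topology and hop names are invented, and the real model captures far more (ACLs, routing tables, NAT):

```python
# Toy network model as an adjacency list; BFS finds a hop-by-hop path between
# two endpoints, the way a path-troubleshooting query might. Illustrative only.
from collections import deque

topology = {
    "10.0.1.5": ["tier1-gw"],
    "tier1-gw": ["tier0-gw"],
    "tier0-gw": ["10.0.2.9"],
    "10.0.2.9": [],
}

def find_path(graph, src, dst):
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None          # no path: the endpoints cannot reach each other

path = find_path(topology, "10.0.1.5", "10.0.2.9")
```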

Getting Started
To onboard with vRealize Network Insight Cloud, you take the following steps:

1. Sign up for VMware Cloud Services and request a vRealize Network Insight Cloud trial.
2. Log in to vRealize Network Insight Cloud.
3. Deploy the collector and connect it to the cloud platform.
4. Add data sources in vRealize Network Insight Cloud.

Onboarding with vRealize Network Insight Cloud


For more information about the onboarding steps, access the relevant VMware documentation.

Knowledge Check: vRealize Network Insight Cloud Use Cases

Which examples illustrate the uses for vRealize Network Insight Cloud? (Select four options)

You want a unified view of networks across a multi-cloud environment.


You must ensure compliance with network security policies.
You want to find network bottlenecks to optimize application performance.
You must perform end-to-end troubleshooting, traffic, and path analytics.
You must provide virtual desktop instances to your company's employees.
You want to view the physical infrastructure of a public cloud provider.



VMware Horizon: Delivering Desktops and Apps to Cloud SDDCs
Thursday, January 12, 2023 9:18 AM

Learner Objectives

After completing this lesson, you should be able to:

• Explain how VMware Horizon delivers virtual desktops and applications


• Recognize VMware Horizon deployment options across private and public clouds
• Identify the benefits of using VMware Horizon in cloud SDDCs

Virtual Desktop Infrastructure

VMware Horizon® provides a virtual desktop infrastructure (VDI) platform for the management
and secure delivery of personalized virtual desktops and published applications to users.

VDI is a technology that hosts and manages desktop environments on a centralized server and deploys
them to users on request.

How VMware Horizon Works

Step 1: Delivering Virtual Desktops and Apps



Suppose that you require several new workstations for a group of seasonal employees.

You use VMware Horizon to create virtual desktops on-demand, based on location and profile,
and you securely deliver managed desktops and applications to the employees.

The desktops and applications are managed in a centralized data center. VMware Horizon
supports both Windows and Linux virtual desktops, and Remote Desktop Session Host (RDSH)
hosted applications.

Step 2: Accessing Desktops and Apps

Users can access published desktops and applications regardless of the client device.

They can access their personalized virtual desktops or remote applications from company
laptops, their home PCs, thin-client devices, Macs, tablets, or smartphones.

VMware Horizon Features

VMware Horizon uses several features to deliver just-in-time desktops and applications:

Instant Clone Technology



Instant clone technology deploys and delivers new personalized desktops instantly to users at
every login:
• New VMs are deployed as pristine clones of an existing optimized parent VM.
• Instant clone desktops retain user customization and persona from session to session and
can be destroyed at logout.

Dynamic Environment Manager

VMware Dynamic Environment Manager™ provides personalization and dynamic policy
configuration across virtual, physical, and cloud-based environments.

User settings and data on an application persist across devices.


Dynamic Environment Manager enhances VMware Horizon by enabling customers to take
advantage of user and application management for Horizon virtual desktops, session-based
desktops, and hosted applications.

App Volumes

VMware App Volumes™ attaches applications to a virtual machine at login.

App Volumes delivers applications concurrently to virtualized desktop environments:

• Applications are packaged into containers:


○ Packages: Application containers of IT-managed applications and application suites.
○ Writable Volumes: User-specific containers used for persisting user changes
between sessions.
• Application containers are delivered to virtual desktops, without modifying the VM.

Other features of VMware Horizon include desktop pool management, Virtual


Printing, application virtualization, storage management, and application entitlement.

VMware Horizon Across On-Premises and Cloud SDDCs

VMware Horizon can be deployed on premises, in cloud SDDCs, or both.



When using both deployments, you can build your own
hybrid cloud by spanning VMware Horizon cloud pod
architecture across on-premises and one or more cloud
SDDC locations.

In this way, you can scale the deployment across


multiple pods and sites.

Example of VMware Horizon cloud pod architecture spanning an on-premises pod and a VMC
on AWS pod.

You can deploy VMware Horizon desktops and


applications on VMware Cloud™ on AWS, which is
VMware SDDC infrastructure-as-a-service (IaaS) on
AWS.

Similarly, Google Cloud VMware® Engine is VMware SDDC


IaaS on Google Cloud, where you can deploy VMware Horizon
desktops and applications.



You can also deploy VMware Horizon desktops and
applications on Azure VMware® Solution.

You can create instant-clone and full-clone desktop


pools on Azure VMware Solution.

Deploying VMware Horizon on Cloud SDDCs


During the installation of VMware Horizon, you select AWS, Google Cloud, or Azure as an
available deployment type when installing the Connection Server component.

After you select the deployment type, VMware Horizon automatically operates in a mode that
is compatible with the AWS, Google Cloud, or Microsoft Azure cloud admin privileges.

For more information about deploying VMware Horizon on AWS, Google Cloud, and Microsoft
Azure, access the VMware Horizon Product Documentation.

Knowledge Check: VMware Horizon on Cloud SDDCs

Which statement accurately describes VMware Horizon deployment options across private and
public clouds? (Select one option)

VMware Horizon can be deployed on premises and in cloud SDDCs but not both.
When installing VMware Horizon, you can select a cloud provider as a deployment type.
You can deploy VMware Horizon on premises only.

Use Cases for Installing VMware Horizon on Cloud SDDCs



Data Center Expansion
You want to create additional desktops for a seasonal group of workers, and you do not have
capacity in your data center.

You can expand on-premises VMware Horizon without a lengthy hardware purchase,
installation, and configuration process.

Application Locality
You want to move published applications that are latency-sensitive to the cloud and need
virtual desktops and Remote Desktop Session Hosts (RDSH) to be co-located with your
published applications.

When you extend the VMware Horizon deployment to the cloud, you can allow end users to
connect to the nearest virtual desktop or RDS host to launch the application.

Disaster Recovery and Business Continuity for On-Premises Deployment


The cost of building an on-premises business continuity and disaster recovery infrastructure for
a VDI environment can be high.

When you use the cloud, you pay for the use of this infrastructure during those times when
the primary infrastructure is down. A unified VMware Horizon architecture across the primary
site on-premises and the disaster recovery and continuity site on a cloud provider makes the
failover process simple.

Temporary Desktop and Application Capacity


You must provide temporary desktops to external developers or contractors.

Quick POC of On-Premises VMware Horizon


You must deploy a proof-of-concept (POC) for a VMware Horizon project, and you do not want
to wait for the hardware purchase, installation, or configuration of vSphere.

Benefits: VMware Horizon on Cloud SDDCs


Given what you have learned so far, which benefits apply when you run VMware Horizon on
cloud SDDCs? (Select three options)

A unified architecture with familiar tools.


Optimized costs with consumption-based billing.
Ability to scale host capacity up or down quickly.
Support for all VMware Horizon features on VMware Cloud.
Configuration of security and compliance policies across SDDCs.



Multi-Cloud Management with CloudHealth
Thursday, January 12, 2023 10:54 AM

Learner Objectives
After completing this lesson, you should be able to:

• Describe the multi-cloud management capabilities of CloudHealth

Problem: Rising Cloud Costs and Risks

A retail organization is starting to


run multiple clouds for its web
applications, data analytics, and
data processing.

Teams find it easy to get services started, and the organization pays as employees consume
the services.

But cloud costs start to add up.

Limited visibility into the cloud environments contributes


to the increasing costs. And the cloud team is taking
more time to refactor and migrate applications to the
cloud.

Security, cost, and compliance risks increase in this


fragmented environment.

It is clear that the organization requires more control


over its cloud costs and resources.

Why Use CloudHealth?


CloudHealth® is a multi-cloud management platform that ingests and aggregates data from
cloud providers, containerized or on-premises environments, and third-party integration tools.

It can help the retail organization address its cloud challenges.

From a single platform, CloudHealth provides information to help achieve the following key
goals in a
multi-cloud environment:

• Control cloud spend


• Reduce resource waste
• Reduce risks
• Avoid regulatory compliance gaps

CloudHealth Capabilities
CloudHealth capabilities can be divided into the areas of financial management, operational
governance, and security and compliance.

Financial Management
CloudHealth includes budget management, cost reporting, and cost forecasting capabilities.

For example, you can perform the following tasks:


• Analyze workloads for projected costs to create budgets
• Visualize and analyze costs, usage, performance, security, and configurations in a
centralized location
• Segment data by project, team, or department to hold teams accountable for their cloud
usage
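Segmenting cost data by team is essentially a group-by over billing records. A minimal sketch follows; the record fields are invented and do not reflect the CloudHealth data model:

```python
# Toy sketch: aggregate cloud spend per team from raw billing records so each
# team can be held accountable for its usage. Illustrative only.
from collections import defaultdict

records = [
    {"team": "web",       "service": "ec2", "cost": 120.0},
    {"team": "analytics", "service": "s3",  "cost": 40.0},
    {"team": "web",       "service": "rds", "cost": 80.0},
]

spend = defaultdict(float)
for r in records:
    spend[r["team"]] += r["cost"]
# spend now holds the total cost per team: web 200.0, analytics 40.0
```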

Operational Governance

With CloudHealth, you can monitor your organization's infrastructure utilization, optimize
workloads, and build automated policies for the proper provisioning and use of cloud
resources.

For example, you can use the platform to perform the following tasks:
• Rightsize cloud infrastructure to eliminate wasted spending
• View recommendations for purchasing and managing commitment-based discounts
• Create custom policies and receive alerts
• Set automated actions when policy conditions are met to ensure continuous governance

You can get alerted when conditions deviate from your desired state and enable automated
actions to execute changes in your environment.
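The alert-then-act loop can be sketched as a simple policy evaluation. The condition, resource fields, and action name below are invented examples, not CloudHealth policy syntax:

```python
# Toy governance policy: find resources that violate a condition and pair
# each violation with the automated action to run. Illustrative only.
policy = {"condition": lambda vol: not vol["attached"],
          "action": "delete-unattached-volume"}

volumes = [
    {"id": "vol-1", "attached": True},
    {"id": "vol-2", "attached": False},   # deviates from the desired state
]

# Each violating resource would raise an alert and queue its remediation.
actions = [(v["id"], policy["action"])
           for v in volumes if policy["condition"](v)]
```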

Security and Compliance

Through CloudHealth, you can access intelligent and real-time security insights.

For example, you can perform the following tasks:

• Get real-time visibility into misconfigurations and threats


• Optimize security and compliance rules to focus on risky violations and business-critical
projects
• Resolve violations by suppressing false positives and automating remediation
• Reduce time that developers spend on fixing security and compliance violations by
building guardrails into the development process.


The overview dashboard displays a summary of multi-cloud security and compliance insights.

Knowledge Check: CloudHealth Capabilities

To address the challenges of increasing costs and risks, the retail organization implements the
CloudHealth platform.

Which examples demonstrate the capabilities of CloudHealth? (Select four options)

With visibility into all its cloud environments, the retail organization learns which
departments, teams, projects, or applications are driving cloud cost and usage.
Accessing a report generated by CloudHealth, the retail organization finds that unused
storage volumes often go unnoticed.
The cloud team tracks cost patterns over time to accurately forecast future budgets and
reduce miscalculations.
The cloud team prioritizes threat events, visualizing all the services, key relationships, and
associated security risks.

The IT administrator automates the delivery of virtual machines, applications, and
personalized IT services across different data centers and hybrid cloud environments.
Developers build and run pipelines, and monitor pipeline activity on the dashboards to
determine if their code succeeded through all stages of the pipeline.



VMware Cloud Use Cases
Friday, January 13, 2023 8:15 AM

Learner Objectives
After completing this lesson, you should be able to:

• Describe how VMware and its hyperscaler partners address IT challenges


• Recognize use cases for VMware hyperscaler partners

Hybrid Cloud Computing

A hybrid cloud includes two or more distinct


public or private clouds that are connected by
standardized or proprietary technology.

A hybrid cloud relies on a set of common tools


that orchestrate workloads between platforms
and support data and application portability.

Hybrid clouds offer flexibility and speed to help


businesses respond to changing needs and
thereby accelerate innovation.

Hybrid clouds align costs to business


requirements by managing upfront expenses,
operational support, and total cost of
ownership.

Benefits of Hybrid Cloud for Modern Applications

As customers deploy modern applications and services, they require consistency in


infrastructure and operations, which a hybrid cloud provides.

In addition to relying on on-premises data centers and cloud infrastructures, modern enterprise
applications are increasingly using edge compute services (co-located with data sources) to
deliver real-time insights and processes.



Managing and Running Applications

Managing and running workloads requires a common approach, which a hybrid cloud offers. An
enterprise has several options for managing and running workloads.

Maintain and Expand


Run select applications and workloads in the public cloud.

Consolidate and Migrate


Migrate more of your private cloud workloads to the public cloud.

Reduce and Eliminate


Run all, or most, of your applications and workloads in the public cloud.

Challenges of Hybrid Cloud Implementation

For the successful implementation of a hybrid cloud, companies must overcome several
challenges.

Operational inconsistencies
The on-premises infrastructure and the public cloud environment are different in terms of
operations and the infrastructure stack.

Different skill set and tools


Employees with experience in a physical data center might not have the architectural
experience for designing and deploying applications in the cloud.

Organizations require different skill sets and tools when moving workloads from on premises to
the cloud. This move requires new training for current employees, or hiring new employees

who have experience with cloud technologies.

Disparate management tools and security controls


Organizations use disparate management tools, security controls, and governance policies
across on-premises and cloud environments.

In a hybrid cloud environment, you require a unified interface or console from which you can
manage your environment and prevent tasks from being overlooked in the workflow.

Inconsistent application SLAs


Applications running on-premises might have different SLAs than applications running in the
cloud.

Incompatible machine formats


The machine format for the on-premises environment is different from the cloud environment.

As a result, workloads cannot move bidirectionally across environments without major


conversion work first.

If companies do not overcome these challenges, the result can be decreased


agility, higher costs, and higher risks.

Knowledge Check: Hybrid Cloud Challenges

Which statements describe challenges that IT organizations might face when moving to a hybrid
cloud environment? (Select three options)

Applications have different SLAs depending on whether they run in the private or public
cloud.
You cannot move workloads between on-premises and public clouds because of
incompatible machine formats.
No disaster recovery options are available in a hybrid cloud environment.
On-premises IT teams do not have the skills to manage and operate a hybrid cloud
environment.
Modern applications are not supported in a hybrid cloud environment.

Enterprise Capabilities

VMware hyperscaler partners (for example, AWS, Azure, and Google) deliver several enterprise
capabilities in the public cloud.

Seamless Migration

Hyperscaler partners provide consistent infrastructure and operations for the on-premises
environment.

As a result, you get fast, cost-effective and low-risk migration of workloads between the on-
premises environment and the cloud environment.

Seamless migration across clouds is achieved using a solution called VMware HCX.

As-a-Service Model
The VMware software-defined datacenter is delivered as a cloud service that runs in the
hyperscaler partner's cloud.

By using the SDDC as-a-service model, you can help lower your costs because you do not need
to purchase the infrastructure to run your workloads.

Operational Consistency
Hyperscaler partners provide the consistency and familiarity of VMware technologies between
on-premises and cloud environments through consistent infrastructure and operations.

And you can use the same tools and skillsets for both environments.

Workload Portability
You can move your workloads from on premises to the cloud and vice-versa.

Whatever you build in your on-premises environment, you can also build in the cloud, and vice-
versa.

And, you can manage your hybrid cloud environment with the hybrid capabilities provided by
the platform.

Modern Application Support


You get support not just for virtual machines but also for Kubernetes and containers.

You can build Kubernetes containers and access native cloud services.

With a single platform, you can create and run these modern composite applications.

Hybrid Cloud Solution


VMware and its hyperscaler partners deliver a jointly engineered hybrid cloud solution.



VMware:
• Leading compute, storage, and network virtualization capabilities
• Support for a broad range of workloads
• Standard for enterprise data centers
• Integrated hybrid cloud service

Hyperscaler Partners:
• On-demand capacity and flexible consumption
• A broad set of cloud services
• Global scale, reach, and availability

VMware Cloud on AWS:

• VMware SDDC running on AWS bare metal


• Sold, delivered, operated, and supported by VMware
• Global AWS footprint
• Direct access to native AWS services

Azure VMware Solution:



• VMware SDDC running on Azure bare metal
• Sold, delivered, operated, and supported by Microsoft
• Global Azure footprint
• Direct access to native Azure services

Google Cloud VMware Engine:

• VMware SDDC running on Google bare metal


• Sold, delivered, operated, and supported by Google
• Global Google Cloud Platform footprint
• Direct access to native Google Cloud Platform services



Cloud Compliance

Hyperscaler partners provide a cloud service that adopts industry best practices. The cloud
service meets a comprehensive set of international and industry-specific security and
compliance standards.

VMware Cloud on AWS


VMware Cloud on AWS meets a comprehensive set of international and industry-specific
security and compliance standards. For compliance information, see the VMware Cloud on AWS
release notes at https://docs.vmware.com/en/VMware-Cloud-on-AWS/0/rn/vmc-on-aws-
relnotes.html.

Azure VMware Solution


The Azure VMware Solution inherits the security and compliance certifications of the Azure
cloud that it runs within.

For compliance information about this solution, see https://docs.microsoft.com/en-


us/azure/compliance.


Google Cloud VMware Engine
For compliance information about this solution,
see https://cloud.google.com/security/compliance.

Common Hybrid Cloud Use Cases

Use cases for hybrid clouds include:

• Disaster recovery
• Data center extension
• Cloud migrations
• Next-generation applications

Disaster Recovery

You can use disaster recovery (DR)
orchestration and automation in a number
of ways:

• Create a net-new DR plan


• Minimize DR costs by replacing the existing DR solution and moving the on-premises
solution to the cloud
• Supplement the existing DR solution
with DR-as-a-service in the cloud.

Key Capabilities
Hyperscaler partners provide key capabilities that you want in a disaster recovery system.

• Streamlined automation of runbooks through disaster recovery and VMware Site Recovery
Manager
• VMware hypervisor-based VM replication
• Elastic and consistent cloud infrastructure for rapid scaling. If you have a loss event,
you can quickly scale up the SDDC to meet the demands of the transitioning workloads.

DR Benefits

The benefits of using the cloud as part of your DR solution include:


Data Center Extension

Hyperscaler partners provide the


capabilities to easily extend into
the cloud.

You might extend your data center for a number of reasons.

Expanding data center footprint


Your company is growing rapidly, and you must extend your infrastructure to accommodate the
growth.

On-demand capacity
You require IT capacity to support seasonal spikes in demand.

Virtual desktops
Your training organization requires virtual desktops for their weekly online classes, so you
expand into the cloud to meet this requirement.

Test and development workloads


You want to keep your production workloads on-premises and move your test workloads to the
cloud.

Key Capabilities

Hyperscaler partners provide key capabilities to support extending your data center into the
cloud.


Consistent infrastructure and operations:

By extending your data center through a hyperscaler partner, you have seamless application
portability because you use the same application format across your on-premises and cloud
infrastructure.

Enterprise-grade infrastructure:

In the hyperscaler partner cloud, your dedicated hardware lives within the hyperscaler
partner infrastructure, which is high-performing and powerful.

Extension Benefits

By extending the data center, you can, in turn, expand your environment seamlessly as
necessary.

You can scale rapidly, while managing your environment in a unified way, using one interface or
console view.

In addition, you can reuse the skills and tools that you already have.

Cloud Migrations


Hyperscaler partners provide rapid,
low-risk, and cost-effective
migrations at scale.

You migrate workloads to the cloud for different reasons.

To run business-critical applications


You might move business-critical applications from on-premises to the cloud because it is more
cost-effective to run those applications in a public cloud environment.

To evacuate the on-premises data center


Your data center is physically located in a co-location facility and your lease is about to expire.
You decide to move the data center to the cloud instead of renewing your lease at the co-
location facility.

To refresh the infrastructure


You are starting an infrastructure refresh cycle, and you decide to use the cloud environment
for your new infrastructure.

Key Capabilities
Hyperscaler partners provide key capabilities to support cloud migration use cases.

Perform large-scale migrations with minimal disruption to your business, while monitoring
costs. You can use tools such as VMware HCX and VMware vSphere vMotion to migrate workloads
that exist on the cloud and bring them into the data center, and vice-versa.

Hyperscaler partners have an enterprise-grade infrastructure, which provides:

• Predictable, high-performance compute with vSphere
• Feature-rich SDDC with NSX and vSAN
• Ability to spin up an SDDC and seamlessly add additional hosts in minutes
• An infrastructure that supports Kubernetes and is container-ready

Benefits of Migration
The benefits of migrating workloads to the cloud include:

Next-Generation Apps

As you consider the digital future, you


want to implement modern, next-
generation applications within your
business.

You might run your next-generation applications in the cloud for a number of reasons:

• To modernize existing applications as a way to enhance their value


• To build new, modern applications
• To build hybrid applications that get their resources across the private cloud, public cloud,
and edge computing

Key Capabilities


Automation of infrastructure operations: You can select from a variety of tools to automate
your infrastructure.

Application transformation with Kubernetes: You can use infrastructure as a service, while
also considering containerization through VMware Tanzu and Kubernetes.

Infrastructure access to cloud services: You can use the hyperscaler partner ecosystem and
still take advantage of your VMware infrastructure.

Benefits of Next Generation Applications


The benefits of using next-generation applications include:


Module Summary
Friday, January 13, 2023 9:35 AM

Review the key concepts covered in this module:

The VMware Cloud operating model aligns your applications, cloud, and investments to
your business strategy. The operating model includes the people, processes, and
technology that are key to executing your business strategy.

The VMware Cloud operating model focuses on delivering three main competencies for
multi-cloud management: service delivery, operations, and governance.

Because hybrid clouds use a consistent software-defined infrastructure stack, you can
manage on-premises data centers and public cloud environments using familiar skill sets
and tools. These tools include the vRealize Cloud Management stack (which includes
vRealize Suite), VMware Horizon, and CloudHealth.

VMware and its hyperscaler partners provide joint solutions for the hybrid cloud. These
solutions include disaster recovery, data center extension, cloud migrations, and next-
generation applications.


SDDC Design Overview
Wednesday, January 18, 2023 8:35 AM

Learner Objectives

After completing this lesson, you should be able to:


• Describe components in VMware SDDC
• Explain high availability and resiliency in the infrastructure
• Explain the purpose of virtual machine storage policies

SDDC Overview

The VMware software-defined data center (SDDC) is a key component of VMware Cloud.

VMware Cloud Foundation


is the unified SDDC
platform that bundles
VMware vSphere, VMware
vSAN, and VMware NSX to
deliver an enterprise-ready
infrastructure for the
private and public cloud.

The SDDC can also be consumed as a service by leveraging VMware services, such as VMware Cloud on
AWS or VMware Cloud on Dell.

You can also spin up an SDDC as a service using a VMware partner, such as Azure VMware Solution,
Google Cloud VMware Engine, or one of the more than 4,500 VMware certified partners worldwide.

SDDC Planning and Design Page 84

The virtualization management layer of the SDDC consists of compute, storage, and network
components:

• vSphere is the foundation technology, which virtualizes the compute resources in the SDDC.
• vSAN manages the virtualization of storage resources.
• NSX manages the virtualization of networking.

vSphere

vSphere provides the core virtualization platform for the SDDC and includes the following key products:

• VMware ESXi
○ Provides the compute platform where you create and run VMs.
• VMware vCenter Server
○ Acts as a central administration point for managing ESXi hosts and VMs that are connected
in a network.
○ vCenter Server exposes functionality such as VMware vSphere vMotion and VMware
vSphere High Availability.

○ vCenter Server also provides common services, such as VMware vCenter Single Sign-On,
vSphere License Service, and VMware Certificate Authority.

vSAN

vSAN is a software-defined storage solution that enables administrators to provide a host cluster with
redundant storage without having to use traditional, external, shared storage. By clustering solid-state
drives (SSDs) or host-attached hard disk drives (HDDs), vSAN creates an aggregated datastore shared by
VMs.

vSAN is an object-based, policy-driven storage environment. The datastore contains all the VM files,
including the VMDK files. For each of the VMDK files, you can create a different VM storage policy,
which defines how data is stored on the disks of the datastore. You configure these VM storage policies
to take advantage of the vSAN features.

NSX


NSX provides consistent networking and security services across multiple endpoints.

These workloads can run on the on-premises data center or on public clouds, such as VMware Cloud on
AWS, Azure VMware Solution, or Google Cloud VMware Engine.

NSX also supports modern applications through integration with vSphere with VMware Tanzu®.

Knowledge Check: SDDC Components

True or False: The SDDC consists of vSphere, vSAN, and NSX, whether the SDDC is located on-premises
or in the public cloud.

True
False

vSphere HA

vSphere HA ensures availability of the VMs in your SDDC. vSphere HA provides uniform, cost-effective
failover protection against hardware and operating system outages within your virtual environment. It
uses multiple ESXi hosts to provide rapid recovery from outages and cost-effective high availability for
applications.

vSphere HA protects against ESXi host failures, guest OS failures, and application failures. It also
protects VMs against network isolation.

Host Failure

When a host fails, vSphere HA restarts


the impacted VMs on other hosts and
immediately replaces the failed host so
that a full set of resources is available in
the SDDC.

Guest OS Failure

When a VM stops sending heartbeats or


the VM process (vmx) fails, vSphere HA
restarts the VM on the same host.

Application Failure

When an application fails, vSphere HA


restarts the impacted VM on the same
host. Application failure detection
requires the installation of VMware
Tools.

Network Isolation

If a VM host becomes isolated on the


vSAN network, vSphere HA shuts down
the VM and restarts it on another host
in the cluster.

Host network isolation occurs when a
host is still running but it can no longer
observe traffic from the vSphere HA
agents on the vSAN network:

• vSphere HA tries to ping the cluster


isolation addresses. An isolation
address is an IP address that is pinged
to determine whether a host is
isolated from the network.
• If pinging fails, the host declares that it
is isolated from the network.

This protection is provided even if the


network becomes partitioned.
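The isolation-declaration logic above can be sketched as a small truth-table helper (an illustrative simplification only; the real checks run inside the vSphere HA agent):

```python
def declares_isolation(sees_ha_agent_traffic: bool, isolation_ping_ok: bool) -> bool:
    """A host declares itself network-isolated only when it can no longer
    observe vSphere HA agent traffic AND its ping of the cluster isolation
    address fails. (Illustrative sketch, not VMware code.)"""
    return (not sees_ha_agent_traffic) and (not isolation_ping_ok)
```

For example, a host that still reaches the isolation address is merely partitioned, not isolated, so vSphere HA does not shut down its VMs.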

vSAN Storage Policies

vSAN storage policies define storage requirements for your virtual machines (VMs). These policies
guarantee the required level of service for your VMs because they determine how storage is allocated
to the VM.

Storage policies are sets of rules that you configure for VMs. Each VM has a storage policy. Each storage
policy reflects a set of capabilities that meet the availability, performance, and storage requirements of
the application or service-level agreement for that VM.

Failures to Tolerate

The storage policy defines the failures to tolerate (FTT). The value for the number of failures to tolerate
defines the number of failures that a storage object can tolerate and the method that is used to
tolerate failures.

Failures to Tolerate (FTT) Fault Tolerance Method Minimum Hosts Required


1 RAID-1 (Mirroring) 3*
1 RAID-5 (Erasure Coding) 4
2 RAID-1 (Mirroring) 5
2 RAID-6 (Erasure Coding) 6
3 RAID-1 (Mirroring) 7
* For VMware Cloud on AWS, 2 hosts if i3.metal and 3 hosts if i3en.metal
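The table rows follow a simple pattern: RAID-1 mirroring needs 2 × FTT + 1 hosts, while RAID-5 and RAID-6 erasure coding need 4 and 6 hosts respectively. A minimal sketch (the function name and error handling are illustrative, not a VMware API; the VMware Cloud on AWS 2-host i3.metal exception from the footnote is not modeled):

```python
def min_hosts_required(ftt: int, method: str) -> int:
    """Minimum hosts for a vSAN storage policy, per the table above."""
    if method == "RAID-1 (Mirroring)":
        return 2 * ftt + 1          # mirroring: 2*FTT+1 hosts
    if method == "RAID-5 (Erasure Coding)" and ftt == 1:
        return 4                    # 3+1 erasure coding
    if method == "RAID-6 (Erasure Coding)" and ftt == 2:
        return 6                    # 4+2 erasure coding
    raise ValueError("unsupported FTT/method combination")
```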

Other Storage Capabilities

Additional values can be configured in the storage policy

Storage Capability        Use Case                       Available Values               Default
IOPS limit for object     Performance                    Valid integers                 0 (No Limit)
Object Space Reservation  Capacity Planning              Thin provisioning; Thick       Thin provisioning
                                                         provisioning; 25%, 50%, or
                                                         75% Reservation
Disable object checksum   Performance                    Off or On                      Off
Force provisioning        Overriding the current policy  Off or On                      Off

• IOPS limit for object


○ If the IOPS of a disk exceeds the limit, the I/O is throttled. If the limit is set to 0, the I/O has
no limit.
• Object Space Reservation
○ Object Space Reservation (%) is the percentage of the capacity for the storage object that is
reserved on VM creation. The remainder of the storage is thin-provisioned.
○ This setting is useful if a predictable amount of storage is always filled by an object, reducing
repeatable disk growth operations for all but new or non-predictable storage use.
• Disable object checksum
○ If the Disable object checksum option is On, then the object will not calculate checksum
information.
• Force provisioning
○ Force provisioning forces provisioning to occur even if the currently available cluster
resources cannot satisfy the current policy.
○ This setting is useful for planned expansion of the vSAN cluster, during which provisioning of
VMs must continue. VMware vSAN automatically tries to bring the object into compliance as
resources become available.

If vSAN fault domains are enabled, vSAN applies the active VM storage policy to the fault domains
instead of to the individual hosts.

vSAN Fault Domains


vSAN fault domains can spread component redundancy across servers in separate computing racks. By
doing so, you can protect the environment from a rack-level failure, such as power and network
connectivity loss.

vSAN requires a minimum of three fault domains. Each fault domain consists of one or more hosts. At
least one additional fault domain is recommended to ease data resynchronization in the event of
unplanned downtime or planned downtime, such as host maintenance or upgrades.

A sufficient number of fault domains should exist to satisfy the failures to tolerate (FTT) value defined in
the VM storage policy.
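Assuming the RAID-1 formula (2 × FTT + 1) applies to fault domains the same way it applies to hosts, a sizing sketch might look like this (illustrative only; the function name is hypothetical):

```python
def fault_domain_plan(ftt: int) -> dict:
    """vSAN needs at least three fault domains; RAID-1 mirroring needs
    2*FTT+1 of them, and one extra is recommended to ease data
    resynchronization during planned or unplanned downtime."""
    required = max(3, 2 * ftt + 1)
    return {"required": required, "recommended": required + 1}
```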

Knowledge Check: VM Storage Policies


Which of the following capabilities can be used to improve a VM's performance? (Select two options)

Disable object checksum


Failures to tolerate
Force provisioning
IOPS limit for object
Object space reservation


VMware Cloud on AWS Architecture
Wednesday, January 18, 2023 9:22 AM

Learner Objectives

After completing this lesson, you should be able to:

• Explain how hosts are configured in VMware Cloud on AWS


• Recognize the benefits of stretched clusters
• Describe the function of AWS availability zones
• Identify the roles for managing a VMware Cloud on AWS SDDC
• Deploy a VMware Cloud on AWS SDDC

Cloud SDDCs on VMware Cloud on AWS

With VMware Cloud™ on AWS, customers can integrate SDDC clusters with Amazon Web
Services, such as Amazon Simple Storage Service (Amazon S3), Amazon Elastic Compute Cloud
(Amazon EC2), and Amazon Relational Database Service (Amazon RDS).

Video Transcript

So, you're probably saying what's an SDDC? So an SDDC is a software-defined data center with
VMware Cloud on AWS. You can integrate these software-defined data centers that are using
the best of VMware technology with the native services that you find on AWS.

Each organization within VMC on AWS supports two SDDCs, and each SDDC can support up to
20 clusters. And those clusters can have between 2 and 16 hosts for a maximum of 160 hosts
per SDDC, or 320 hosts for the entire organization. It's amazing the way that you can scale out
this environment in such a rapid manner.

Now, if you'll look on your screen, you're going to see that we have three logos under each of
the SDDCs, and that's for our ESXi, NSX, and vSAN technologies, because the three of those
together make cloud foundation, which is the foundation of the VMC on AWS stack.

It's running the same software that you use on-premises up in the cloud to make it easy, to
make your workloads mobile from the on-premises environment to the cloud environment.
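The scale limits quoted in the transcript (up to 20 clusters per SDDC, 2 to 16 hosts per cluster, 160 hosts per SDDC) can be captured in a small validation sketch (names and structure are illustrative, not a VMware Cloud API):

```python
MAX_CLUSTERS_PER_SDDC = 20
MIN_HOSTS_PER_CLUSTER = 2
MAX_HOSTS_PER_CLUSTER = 16
MAX_HOSTS_PER_SDDC = 160

def sddc_within_limits(cluster_host_counts: list) -> bool:
    """cluster_host_counts: one host count per cluster in the SDDC."""
    if len(cluster_host_counts) > MAX_CLUSTERS_PER_SDDC:
        return False
    if any(h < MIN_HOSTS_PER_CLUSTER or h > MAX_HOSTS_PER_CLUSTER
           for h in cluster_host_counts):
        return False
    return sum(cluster_host_counts) <= MAX_HOSTS_PER_SDDC
```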

VMware Cloud on AWS Host Instance Types

VMware Cloud on AWS hosts are named for the Amazon EC2 bare metal hardware underlying
them.

Video Transcript

At the time of the recording of this video, we have two different types of nodes that are
available. And these are bare metal instances that AWS provides that we install ESXi and the
foundation stack on for you to then be able to administer. The default type of node is what we
call an i3.metal node. That's the actual name of it from AWS.

So the i3 node is going to be the default, the lower cost option, right? Now, if you need
additional storage needs or IOPS needs, you might use an i3en host. The i3en host has
additional NVMe storage and additional RAM for those high-intensity workloads. So, let's go
ahead and quickly review some of the differences between the i3 and the i3en.

The i3 is going to be for that general purpose, running an Intel Xeon Broadwell processor. From
there, it's going to have 36 cores at 2.3 gigahertz. Now you'll notice if you look over at the i3en,
we're running a little bit of a newer stack. We're running Cascade Lake with 48 cores at 2.5
gigahertz.

Now you'll notice that hyperthreading is enabled here, which actually will get you up to 96
hyperthreading cores, as compared to the 36 non-hyperthreaded cores you're going to get on
the i3. You may say, well, Frazier why is hyperthreading not enabled on the i3 versus the i3en?
The i3 simply has a Spectre mitigation implemented because it is the Broadwell chip set. So, we
can't enable hyperthreading on them at this time.

When we're looking at RAM, Amazon does their RAM in gibibytes (GiB), not gigabytes (GB). So
you need to make sure that you make that calculation. And you're not hearing things when you
hear me say gibi versus giga. We have 512 gibibytes under the i3 instance. If you need more
RAM, you can always use the i3en instance, which is going to have 768 gibibytes.

For both of the i3 and the i3en clusters, we are using vSAN as our storage methodology. That
storage is going to be vSAN storage with local NVMe flash that is underpinning it. If you're using
VMC on AWS for your i3 instances, you're going to have to have compression and
deduplication. It's going to be enabled by default and you can't take it off. Now for the i3en,
deduplication is not available, but compression will be allowed. We enable this because it
allows you to have 150% to 200% additional storage on your host by enabling these different
feature sets.

On the i3 hosts, you get 10.3 tebibytes (TiB) of raw storage capacity. While on the i3en, you're
going to get 45 tebibytes, which has a much larger disk size because of the additional NVMe
drives that are added to the disk groups for the i3en.metal. We'll talk more about that
i3en.metal when we go into the storage section and how you can see the different disk groups
that are created.

You receive 25 gigabits per second of network bandwidth on both the i3 and the i3en. However,
the i3en has a network offload chip within its NIC that allows you to do some data-at-rest and
data-in-motion encryption. That's not available within VMC on AWS on the i3.metal hosts.

Let's go ahead and look at some of the different configurations that are common for these
clusters.

Bare-Metal Host Instance Types

When designing your SDDC to be hosted in VMware Cloud on AWS, you can use two types of
bare-metal hosts for your workloads:
• i3.metal
• i3en.metal

i3.metal Hosts

The i3 host type is the default host type. Each i3 host includes:
• 36 cores
• 512 GiB of RAM
• 10.3 TiB of raw storage capacity


When to Use i3.metal Hosts

You can use i3.metal hosts for most use cases, including general computing workloads,
database, and virtual desktop deployments. These hosts are appropriate for workloads
characterized by high-performance, high throughput, or low latency.

The i3 host favors read-intensive operations, read-intensive operations with occasional high
write bursts, and smaller block sizes.

Many customers choose a host option according to workload capacity requirements. If the i3
host does not meet the requirements, you can use the higher-performance i3en host.

i3en.metal Hosts

Each i3en host includes:


• 96 logical cores
• 768 GiB of RAM
• Approximately 45.84 TiB of raw storage capacity

The i3en host also includes hyperthreading, an upgraded CPU microarchitecture, and increased
CPU clock speed.

When to Use i3en.metal Hosts

The i3en host type is optimized for data-intensive workloads. It has greater network bandwidth,
memory capacity, and storage capacity than the i3 host type.

Single-host SDDCs cannot contain the i3en host type. The i3en host type is currently
available in 17 AWS regions.
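A rough way to express the host-type choice described above (the spec table and selection logic are illustrative simplifications; real sizing should also weigh CPU, IOPS, and cost):

```python
# Per-host specs from this lesson (i3en raw storage is approximate).
HOST_TYPES = {
    "i3.metal":   {"cores": 36, "ram_gib": 512, "storage_tib": 10.3},
    "i3en.metal": {"cores": 96, "ram_gib": 768, "storage_tib": 45.84},
}

def pick_host_type(ram_gib: int, storage_tib: float, single_host_sddc: bool = False):
    """Return the first (lower-cost) host type meeting per-host needs, or None.
    Single-host SDDCs cannot contain the i3en.metal host type."""
    candidates = ["i3.metal"] if single_host_sddc else ["i3.metal", "i3en.metal"]
    for name in candidates:
        spec = HOST_TYPES[name]
        if spec["ram_gib"] >= ram_gib and spec["storage_tib"] >= storage_tib:
            return name
    return None
```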

Compute Resources

VMware Cloud on AWS hosts run VMware ESXi directly on the computer hardware, without an
operating system. Cluster sizes have different compute capacities.

Cluster Size Total Memory CPU Cores Total GHz


3 x i3.metal 1,536 GiB 108 248.4
6 x i3.metal 3,072 GiB 216 496.8
16 x i3.metal 8 TiB 576 1,324.8
3 x i3en.metal 2,304 GiB 144 360.0
6 x i3en.metal 4,608 GiB 288 720.0
16 x i3en.metal 12 TiB 768 1,920.0
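The table values scale linearly with host count, so they can be reproduced from per-host figures (36 cores at 2.3 GHz and 512 GiB for i3.metal; 48 physical cores at 2.5 GHz and 768 GiB for i3en.metal). An illustrative sketch:

```python
# Per-host figures implied by the compute table above.
PER_HOST = {
    "i3.metal":   {"mem_gib": 512, "cores": 36, "ghz_per_core": 2.3},
    "i3en.metal": {"mem_gib": 768, "cores": 48, "ghz_per_core": 2.5},
}

def cluster_capacity(host_type: str, hosts: int) -> dict:
    """Aggregate cluster capacity for a uniform cluster of `hosts` hosts."""
    s = PER_HOST[host_type]
    return {
        "mem_gib": s["mem_gib"] * hosts,
        "cores": s["cores"] * hosts,
        "total_ghz": round(s["ghz_per_core"] * s["cores"] * hosts, 1),
    }
```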

When you use AWS bare-metal hosts without an OS, features such as Intel Virtualization
Technology (VT) are directly available to the ESXi hypervisor.

i3.metal: The i3 hosts might use CPU functionality up to the Broadwell CPU family
instruction set.

i3en.metal: The i3en.metal hosts might use CPU functionality up to the Cascade Lake CPU
family instruction set.

You can use an Enhanced vMotion Compatibility baseline of Broadwell for any cluster in
an on-premises SDDC that might use VMware vSphere vMotion to migrate VMs to a
VMware Cloud on AWS SDDC.

You can use per-VM Enhanced vMotion Compatibility if a different CPU feature set is
required.

For more information about Enhanced vMotion Compatibility, see VMware knowledge base
article 1003212.

Storage Resources

VMware Cloud on AWS hosts use VMware vSAN and can connect to Amazon S3 and Amazon
Elastic File System (Amazon EFS) for additional storage needs.

Cluster Size Total Cache Size Total Capacity Size


3 x i3.metal 10.8 TB 32.1 TB
6 x i3.metal 21.6 TB 64.2 TB
16 x i3.metal 57.6 TB 171.2 TB
3 x i3en.metal 22.5 TB 157.5 TB
6 x i3en.metal 45 TB 315 TB
16 x i3en.metal 120 TB 840 TB

Each i3.metal and i3en.metal host contains NVMe flash drives that provide increased vSAN
performance.

i3.metal:
• Uses two all-flash vSAN disk groups for increased availability
• Uses vSAN deduplication and compression

i3en.metal:
• Uses four all-flash vSAN disk groups in a proprietary configuration

Data encryption is performed at the drive level, and datastore encryption is available
through vSAN with AWS Key Management Services (KMS) integration.

Networking Resources


Using Amazon services, you can create secure, scalable, and highly available connections
between the SDDC and other networks:

• Amazon Elastic Network Adapter (ENA) connects each host to the LAN with a total
available bandwidth of 25 or 100 Gbps.
• A management gateway in the SDDC handles management traffic, and a separate
compute gateway handles workload virtual machine (VM) network traffic.
• Amazon Virtual Private Cloud (VPC) enables optimized connectivity of the SDDC to other
AWS services, regions, and availability zones.
• An Amazon Elastic Network Interface (ENI) is a virtual NIC that is provisioned on the ENA.
The ENI connects the VMware Cloud on AWS SDDC to your Amazon VPC.
• Amazon Direct Connect enables low-latency connectivity of the SDDC to your on-
premises data center.

Knowledge Check: Host Configuration

Which statements accurately describe VMware Cloud on AWS host configurations? (Select
three options)

You use i3.metal hosts for general computing workloads, database, and virtual desktop
deployments.
The i3en.metal host type is optimized for data-intensive workloads.
The i3.metal host includes hyperthreading.
A given cluster in your SDDC can contain a mixture of host types.
VMware Cloud on AWS hosts use vSAN and can connect to the Amazon S3 and Amazon EFS for
additional storage needs.

VMware Cloud on AWS Service Locations

The VMware Cloud service is deployed in AWS data centers in multiple regions.


• US West (Oregon)
• US East (Northern Virginia)
• US West (Northern California) *
• US East (Ohio)
• GovCloud US-West
• Canada (Central)
• South America (Sao Paulo)
• Europe (London)
• Europe (Frankfurt)
• Europe (Ireland)
• Europe (Paris)
• Europe (Stockholm)
• Asia Pacific (Sydney)
• Asia Pacific (Tokyo)
• Asia Pacific (Singapore)
• Asia Pacific (Seoul)
• Asia Pacific (Mumbai)

AWS Regions
A customer selects an AWS region where an SDDC is deployed, and the workloads persist in this
data center.

The VMware Cloud on AWS console data includes SDDC configuration information and data
that VMware collects on the use of VMware Cloud on AWS.

This data persists in the AWS us-west-2 (Oregon) region, but it might be replicated to
other AWS regions to ensure availability of the service.

The location of the service can be global, which might introduce compliance and security
concerns. Compliance and security must be addressed in the design phase and in the customer
contracts.

VMware Cloud on AWS features are not supported in all regions.


For more information about AWS regions that support VMware Cloud, see the chapter on
choosing a region in the VMware Cloud on AWS product documentation.

AWS Availability Zones

An availability zone is one or more discrete data centers with redundant power, networking,
and connectivity in an AWS region.

Each AWS region consists of multiple, isolated, and physically separate availability zones
within a given geographic area.

Each availability zone has independent power, cooling, and physical security and is
connected through redundant, ultra-low-latency networks.

Availability zone naming and numbering is different for every customer so that availability
zones do not become hotspots.

Review this example of how a region is laid out in AWS. All the data centers in London make
up the London region (eu-west-2). Each availability zone is identified by the region's code
and a letter, for example, eu-west-2a.

Stretched Clusters

Stretched clusters offer an availability strategy. Stretched clusters are designed to provide the
SDDC with an extra layer of resiliency in the event of host-level failures within the cluster or
AZ-level failures within the region.

A stretched cluster SDDC is one in which the hosts of the SDDC are evenly split between 2 AZs
within an AWS Region. A standard (non-stretched) SDDC is one in which all hosts are deployed
within a single availability zone.

Implementation Overview
Stretched cluster SDDCs are implemented using a vSAN feature of the same name. Per the
requirements of vSAN, the SDDC provides two data sites and one witness host per cluster.

The data sites are composed of two groups of hosts, which are evenly split between a pair of
AZs. The witness host is implemented behind the scenes, using a custom EC2 instance, and is
deployed to a third AZ that is separate from the data sites. This witness host is not reflected in
the total host count of the SDDC.

Because of the requirement that stretched cluster SDDCs use a total of three AZs, stretched
clusters are only supported in AWS regions that are able to provide at least three AZs.

Video Transcript

In addition to regular clustering, we also have what they call stretch clustering. This is a really
cool opportunity to use some of the resiliency that's built into VMC on AWS and built into the
AWS architecture by splitting your SDDC into two AWS availability zones, which will then allow
you to spread your workloads across the two zones and have dual writes, and have an extremely
high level of protection and SLA against any type of failure, much more so than just having it in
a single region.

Let's talk a little bit more about the two different types of clusters that you're going to run into
in this SDDC. Your first and a default type of cluster is going to just be a cluster. This is going to
be restricted to a single availability zone within VMC on AWS, and you'll have a 99.9%
availability guarantee backed by an SLA.

This is for customers who want to balance risk and cost. Now, if cost is not a consideration and
you have to have a higher level of availability, that's where the stretched clusters really come in
handy. So they're still restricted to a single AWS region. You can't go cross region with a
stretched cluster, but you can go cross-availability zone.

They'll provide a 99.99% availability uptime, SLA guaranteed. And these are great for those
business critical workloads that you need to be able to abstract away that infrastructure
volatility and know that it's there. The decision to make a multi-cloud versus a stretch cluster
deployment is going to be something that's made at that time of deployment.
And it really is figuring out what the problem is you're trying to solve and balancing it with the
cost, because whenever you do a stretch cluster, it's dual write between the two AZs. That
means that you have to double the number of hosts that you're going to bring into your cluster.

So if you needed three hosts, for example, to host all of your workloads, you're actually going to
need six now to be able to accommodate the dual writes that are happening to the two sets of
hosts within your cluster.

And if that's confusing, don't worry. We're about to talk a lot more about stretch clusters, not
only in this module, but also through the rest of this course. So once again, these stretch
clusters are a great way to abstract away that infrastructure volatility. It's built on the intrinsic
vSphere HA that's a part of the VMware stack and it has automated host failure remediation to
keep that uptime at that 99.99%.

This is really cool because it's actually built into the infrastructure layer. So, if you are running
an application on VMC on AWS, then you don't actually have to design for this because it's in
the infrastructure layer.

As long as you deploy into the stretched cluster as you normally would, you get this extra
resiliency with no additional work from the developers. It does this by using synchronous
writes between the availability zones for those mission-critical applications.

So if one of the availability zones goes down, it's treated as a vSphere HA event, and the
VM is restarted in the other availability zone. vSphere vMotion is enabled by default on all
hosts within a VMC on AWS cluster, so moving workloads around when those HA events
happen is not a problem. It also allows you to live migrate workloads in a cluster that spans
two different availability zones.

Knowledge Check: Stretched Clusters

What benefits do stretched clusters provide? (Select three options)

Protect against the loss of a single availability zone
Are restricted to a single AZ in an AWS region
Provide common logical networks with high availability features enabled
Use synchronous replication between AZs for mission-critical applications

Virtual Private Cloud

A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. Amazon VPC is
a service where you can launch AWS resources in a logically isolated virtual network that you
create.

When you create a VPC, you must specify a range of IPv4 addresses in the form of a Classless
Inter-Domain Routing (CIDR) block, for example, 10.0.0.0/16.
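To make the CIDR notation concrete, Python's standard ipaddress module can show what a /16 block contains and how it is typically carved into smaller per-AZ subnets. The /16 and /24 sizes below are the common example values, not a requirement:

```python
import ipaddress

# The example VPC block from above: 10.0.0.0/16.
vpc = ipaddress.ip_network("10.0.0.0/16")
print(vpc.num_addresses)        # 65536 addresses in a /16

# A VPC is usually divided into smaller subnets, e.g. /24s,
# with subnets mapped to availability zones.
subnets = list(vpc.subnets(new_prefix=24))
print(len(subnets))             # 256 possible /24 subnets
print(subnets[0])               # 10.0.0.0/24
```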

Permissions Structure
Each hyperscaler partner has its own approach to permissions for the VMware infrastructure
(ESXi, VMware vCenter Server, and VMware NSX).

In an on-premises environment, the administrator@vsphere.local user has the Administrator
role on the top-level vCenter Server instance. The Administrator role includes all defined
privileges.

In a VMware Cloud on AWS SDDC, the CloudAdmin and CloudGlobalAdmin roles together have
all the privileges that are required for managing the SDDC.

However, these roles do not include all the privileges that the Administrator role includes,
because VMware performs host administration and other tasks for you. These restrictions
include limiting access to the root user on the ESXi host itself.

CloudAdmin:
• Includes the necessary privileges for creating and managing workloads for an SDDC
• Does not allow changing the configuration of certain management components that are
supported and managed by VMware, such as hosts, clusters, and management VMs

CloudGlobalAdmin:
• This role is an internal role that must exist during SDDC deployment but can be removed
by a CloudAdmin after deployment is complete



A new VMware Cloud on AWS SDDC is populated with a single organization user account,
cloudadmin@vmc.local. This user is a member of the vCenter CloudAdminGroup and has the
vCenter role of CloudAdmin.

On the Settings tab, you can view the credentials for the pre-defined user
cloudadmin@vmc.local.

The password is generated when the SDDC is created. If you change the password in vCenter,
the password does not get updated here, under Default vCenter User Account.

If you change the password credentials, you are responsible for the new password.
Contact Technical Support and request a password change if the password is lost.

Knowledge Check: Permissions Structure

True or False: In a vCenter Server instance in the VMware Cloud on AWS environment, the
cloudadmin@vmc.local user has more permissions than the administrator@vsphere.local user
in an on-premises vCenter Server instance.

True
False

Hands-On Lab (HOL-2387-01-ISM: Set Up the SDDC)

Now you have an opportunity to apply your knowledge in a practical activity, where you deploy
a VMware Cloud on AWS SDDC.

You can perform the following tasks:

• Create a virtual private cloud through the AWS console.


• Create one or more subnets, corresponding to AWS availability zones.
• Create an SDDC on VMware Cloud on AWS.
• View your VMware Cloud on AWS SDDC.



Azure VMware Solution Architecture
Wednesday, January 18, 2023 11:23 AM

Learner Objectives

After completing this lesson, you should be able to:

• Recognize benefits of Azure VMware Solution


• Describe components in Azure VMware Solution architecture
• Deploy an Azure VMware Solution private cloud

Azure VMware Solution Deployment Overview

Azure VMware Solution delivers VMware based private clouds in Microsoft Azure.

Private cloud hardware and software deployments are fully integrated and automated in Azure.
You deploy and manage the private cloud through the Azure portal, CLI, or PowerShell.

Planning for the Deployment

During the planning process, you take the following steps to identify and gather information
needed for your deployment:

• Identify the subscription that you plan to use to deploy Azure VMware Solution
• Identify the resource group that you want to use for your deployment
• Identify the region in which you want Azure VMware Solution deployed
• Define the resource name for your Azure VMware Solution private cloud
• Identify the size of the hosts that you want to use when deploying Azure VMware Solution
• Define the number of hosts that you want to deploy to the first cluster for your
deployment
• Request a host quota (capacity) early so that you will be ready to deploy your Azure
VMware Solution private cloud
• Identify the /22 CIDR IP segment for your private cloud management
• Define the IP address network segment for your VM workloads
• Define the virtual network gateway
• Define VMware HCX network segments

Azure VMware Solution Deployment


For details on planning your Azure VMware Solution deployment, access the Microsoft
documentation.

Using Azure VMware Solution



Hello! In this brief video, we are going to explore the Azure VMware Solution (AVS) and talk
about how you can utilize this solution to modernize and update your data center
infrastructure using the best of VMware on Microsoft's public cloud.

Let's get started by looking at a typical deployment.

If you’re considering another hyperscaler public cloud like Amazon Web Services, Google Cloud
Platform, or Oracle Cloud Infrastructure, consider these potential benefits of migrating to Azure
and AVS, particularly if you’re a Windows Server and SQL Server shop.

Let's look at these benefits for each of these three groups on the screen.

For business decision makers, Microsoft has introduced a number of specific cost savings for
Azure VMware Solution.

These include:
• Free security updates for Windows Server 2008 R2 and SQL Server 2008 R2 for 4 years
beyond the end of extended support date for those products.
• Extended security updates typically cost anywhere from 75% all the way up to 125% of
the base software license cost per year. That makes running legacy Microsoft platforms on
other clouds prohibitively expensive if you want to stay secure, as you should. No other
VMware hyperscaler service has free security updates.
Microsoft has announced that Azure and Azure VMware will provide free extended security
updates for SQL Server 2012 R2 and Windows Server 2012 R2 when those products reach their
end of extended support dates in 2022 and 2023.

You can also bring your existing on-premises Windows Server and SQL Server licenses with
Software Assurance to Azure and AVS under the Azure Hybrid Benefit program. This allows
you to save up to 40% on your Microsoft licensing costs.

No other VMware hyperscaler service allows BYOL for Windows and SQL Server licenses
purchased after October 2019.


Microsoft allows deployment of downloadable Office 365 in VDI desktops running with AVS. All
other VMware hyperscaler services are restricted from running downloadable Office 365
applications.

For IT infrastructure and operations teams, the integration between Microsoft tools and
VMware SDDC simplifies initial and day-to-day operations. Specifically, Azure credits are used
to purchase AVS, the Azure Portal is used to manage AVS subscriptions, and a unified Azure
services bill includes AVS.
Azure Resource Manager templates can be used to automate deployment of AVS capacity and
environment configurations, and integrated audit logging, alerting, and metrics management
are displayed in the Azure Portal as well as Azure Monitor.

For application developers, the integration between the Microsoft Azure environment and the
VMware SDDC accelerates delivery of modern applications.

Develop and deploy applications across VMware and Azure environments through Azure Cloud
API.

Developers can modernize components of existing vSphere applications with Azure's market-
leading services, such as Internet of Things (IoT).

The integration of the VMware SDDC and vCenter into the Azure Portal gives developers a
single pane of glass to manage all of their Azure services including AVS.

Integrated identity management across VMware and Azure environments minimizes access
control issues when leveraging Azure services from within the AVS SDDC environment.
Azure VMware Solution combines VMware compute, networking, and storage running on top
of dedicated, bare-metal hosts from Microsoft Azure.

Because vSphere is running on bare metal, customers get the same performance and resilience
that they are accustomed to having on-premises.

The service is jointly engineered with Azure as the operator. This means that Azure delivers the
initial environment and provides periodic updates and fixes, remediates any hypervisor, server,
or network failures, and provides support. It also means that the service is fully integrated with
Azure’s native services.

You are not required to have anything from VMware on-premises. However, if you have
VMware technologies on-premises, you can maximize the value of this offering and easily
migrate workloads from on-premises to the cloud.

Azure VMware Solution delivers VMware based private clouds in Azure. The private cloud
hardware and software deployments are fully integrated and automated in Azure. You deploy
and manage the private cloud through the Azure portal, CLI, or PowerShell.
This diagram shows the private cloud within its own Azure Resource Group and adjacent
connectivity to various native Azure services within another resource group in the same region.

Here you can see that we have our vSphere clusters with vSAN storage, managed by vCenter, all

utilizing NSX-T for network connectivity. NSX-T traffic is routed to an AVS top-of-rack switch,
then to Microsoft Enterprise Edge (MSEE) routers and out to other Azure services, the internet, or even on-premises.

A private cloud includes clusters with the following software specifications:


• Dedicated bare-metal server nodes provisioned with vSphere 6.7 update 3, patch 05,
Enterprise Plus Edition
• vCenter Server 6.7 U3p for managing ESXi, vSAN Enterprise, and your vSphere workloads
• VMware NSX-T 3.1.2 Advanced for vSphere workload VMs and VMware HCX 4.2.2
Advanced, for workload mobility and cloud migration between your on-premises data
center and your Azure VMware Solution SDDC.
There are several things that need to be identified or configured prior to deploying your private
cloud.

You’ll need to identify the subscription within Azure that you plan to use to deploy Azure
VMware Solution. You can either create a new one, or use an existing one.
The subscription must be associated with an Enterprise Agreement or Cloud Solution Provider
plan.

Once this is complete, a support request will need to be created with Microsoft Azure support
to request a host quota. This is when you’ll provide the region for deployment and number of
hosts. We will go over how to make those decisions a little bit later in this lesson.
Next, you’ll identify a resource group. Generally, a new resource group is created specifically for
AVS, but you can use an existing one.

Then, you’ll need to identify the admin who will be able to enable and deploy the private cloud.
This individual should have the contributor role for the subscription.
Lastly, you’ll need to think about the network requirements.

A /22 CIDR network block is required to deploy AVS. This address space is carved up into
smaller subnets and used for vCenter, NSX-T, vMotion, and HCX. This block should not overlap
with any existing network segment you have on-premises or in Azure.

You’ll need a /24 CIDR block in an Azure VNet for your jump box or other services.
You'll also need to scope out an additional /24 CIDR block for an NSX-T network segment for your
workload VMs.

Optionally, you will need to define network segments for HCX if you’re planning to leverage this
technology for migrations. However, this can be done after deployment.
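Because the /22 management block must not overlap any existing on-premises or Azure range, it is worth checking your address plan before deployment. Python's ipaddress module makes this a one-liner per pair; the specific ranges below are made-up example values, not recommendations:

```python
import ipaddress

# Hypothetical address plan (example values only).
avs_management = ipaddress.ip_network("10.100.0.0/22")  # AVS private cloud management
azure_vnet     = ipaddress.ip_network("10.100.4.0/24")  # jump box / services VNet
workload_seg   = ipaddress.ip_network("10.100.5.0/24")  # NSX-T workload segment
on_premises    = ipaddress.ip_network("10.0.0.0/16")    # existing on-premises range

# The /22 management block must not overlap any other range.
for other in (azure_vnet, workload_seg, on_premises):
    assert not avs_management.overlaps(other), f"overlap with {other}"
print("no overlaps found")
```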

You’ll need to determine whether you are using VPN or ExpressRoute, and configure
appropriately – most customers will be using ExpressRoute to be able to have the fastest and
highest performance connection possible between their on-premises infrastructure and their
SDDC in Azure.

Finally, you’ll configure any firewall rules to access on-premises resources.

On this screen, you can see the logical nesting of components within Azure.
As with other resources, private clouds are deployed and managed from within an Azure
subscription. The number of private clouds within a subscription is scalable. However, initially

there's a soft limit of one private cloud per subscription. Within the subscription, the region
where the private cloud will live is defined, and, within the region, we create a resource group
for the private cloud. This is where our vSAN clusters and ESXi hosts will live.

As we said earlier, a private cloud contains the vCenter Server for management, ESXi hosts,
vSAN, NSX-T and HCX. Each additional private cloud that is deployed will have separate
management components.

For each private cloud created, there's one vSAN cluster by default. You can add, delete, and
scale clusters. The minimum number of hosts per cluster and the initial deployment is three.

Up to 4 private clouds can be created, with up to 12 vSphere clusters per cloud. There’s a
maximum of 16 hosts per cluster, with up to 96 hosts per private cloud.

If multiple clusters are deployed within the same private cloud, the management components
will only live on the first cluster. All additional clusters will be fully available for workload VMs.
vSphere HA and DRS are enabled by default.

Bare-Metal Host Instance Specifications

Azure VMware Solution clusters are based on hyperconverged, bare-metal infrastructure.

The hosts come from an isolated pool where they pass all health checks and where all data is
securely deleted. These hosts are available for purchase with hourly (on-demand) billing or with
one-year and three-year reserved instances.

Only one host type is available - AV36

AV36 Host Specifications:

• Dual socket, 18 core, Intel Xeon Gold 6140 CPUs at 2.3GHz with hyperthreading enabled
• 576 GB of RAM
• 2 x 1.6 TB NVMe drives for vSAN cache
• 8 x 1.92 TB SSDs for vSAN capacity
• 2 x dual port 25 GbE NICs

Two NICs are provisioned for ESXi system traffic and two for workload traffic

Compute Resources

Azure VMware Solution hosts run ESXi directly on the computer hardware, without an OS.
Cluster sizes have different compute capacities.

Private Cloud Size    Total Memory    CPU Cores    Raw Storage
3 x AV36              1.5 TB          108          46 TB
8 x AV36              4 TB            288          122 TB
16 x AV36             8 TB            576          245 TB
32 x AV36             16 TB           1,152        491 TB
64 x AV36             32 TB           2,304        983 TB
96 x AV36             49 TB           3,456        1.4 PB
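The CPU and raw-storage columns follow directly from the per-host AV36 specification given earlier (36 cores, 8 x 1.92 TB capacity SSDs). A small sketch of that arithmetic:

```python
# Per-host AV36 figures from the specification above.
CORES_PER_HOST = 36               # dual socket, 18 cores each
CAPACITY_TB_PER_HOST = 8 * 1.92   # eight 1.92 TB vSAN capacity SSDs

def cluster_capacity(hosts: int) -> tuple[int, float]:
    """Return (total cores, raw vSAN capacity in TB) for a cluster."""
    return hosts * CORES_PER_HOST, hosts * CAPACITY_TB_PER_HOST

for hosts in (3, 8, 16):
    cores, raw_tb = cluster_capacity(hosts)
    print(f"{hosts} x AV36: {cores} cores, {raw_tb:.2f} TB raw")
# 3 x AV36: 108 cores, 46.08 TB raw
# 8 x AV36: 288 cores, 122.88 TB raw
# 16 x AV36: 576 cores, 245.76 TB raw
```

The table above rounds the storage figures down (46.08 TB is listed as 46 TB, and so on).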

By using Azure bare-metal hosts without an OS, features such as Intel Virtualization Technology
(VT) are directly available to the ESXi hypervisor.

Host Processor

Azure VMware Solution AV36 hosts use dual Intel Xeon Gold 6140 CPUs running at 2.3 GHz with
hyperthreading enabled.

You can use an Enhanced vMotion Compatibility baseline of Skylake for any cluster in
an on-premises SDDC that might use vSphere vMotion to migrate VMs to an Azure
VMware Solution SDDC.

You can use per-VM Enhanced vMotion Compatibility if a different CPU feature set is
required.

For more information about Enhanced vMotion Compatibility, see VMware knowledge base
article 1003212.

Each AV36 host contains NVMe flash drives that provide increased vSAN performance.

Storage Resources

Azure VMware Solution uses vSAN and can connect to Azure Blob Storage, Azure Disk Pools, or
NetApp Files datastores for additional storage.



Workload Storage

VMs deployed in AVS can access native Azure storage services such as storage accounts, and
table and blob storage.

The connection of workloads to Azure storage services does not traverse the internet. The
Azure backbone provides high-speed, low-latency, private, and secure connectivity.

You can use SLA-based Azure storage services in your private cloud workloads.

Disk Pools and NetApp Files

Azure disk pools are currently in preview. Azure storage disk pools offer a persistent block
storage datastore option through iSCSI for your vSphere clusters, using ultra disks or premium
SSDs.

NetApp Files datastores are currently in private preview, so you can create NFS datastores for
your vSphere clusters.

Storage Architecture

All disk groups use an NVMe cache tier of 1.6 TB with the raw, per host, SSD-based capacity of
about 15.2 TB.

The size of the raw capacity tier of a cluster is the per-host capacity times the number of hosts.
For example, a three-host cluster provides 46.08 TB of raw capacity in the vSAN capacity tier.

Each host provides approximately 7.6 TB of usable capacity to the vSAN pool.
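One way to read those numbers: with a vSAN storage policy that mirrors every object (RAID-1, failures to tolerate = 1), usable capacity is roughly half of raw, which is consistent with ~15.2 TB raw yielding ~7.6 TB usable per host. The factor below is an assumption for illustration, not a published formula:

```python
RAW_TB_PER_HOST = 15.2   # approximate raw SSD capacity per AV36 host
MIRROR_FACTOR = 0.5      # assumed RAID-1 / FTT=1 write amplification

def usable_tb(hosts: int, efficiency: float = MIRROR_FACTOR) -> float:
    """Rough usable vSAN capacity, ignoring slack space and dedupe."""
    return hosts * RAW_TB_PER_HOST * efficiency

print(f"{usable_tb(1):.1f} TB")   # 7.6 TB -- matches the per-host figure above
print(f"{usable_tb(3):.1f} TB")   # 22.8 TB for a minimum three-host cluster
```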

Data encryption is performed at the drive level, and datastore encryption is available
through vSAN with Azure Key Vault (AKV) integration.

Networking Resources

You can create secure, scalable, and highly available connections between the SDDC and other

networks using the following Microsoft Azure services:

• Microsoft Enterprise Edge (MSEE) routers provide north-south network connectivity
between Azure native services, on-premises, and the Internet.
• Dedicated Microsoft Enterprise Edges (D-MSEE) provide dedicated connectivity so that
the Azure VMware Solution environment can connect to native Azure and on-premises
through MSEE.
• ExpressRoute is an Azure service that provides private, secure, high-speed, low-latency
connections between on-premises networks and the Microsoft Cloud.
• ExpressRoute Gateway is a virtual network gateway that exchanges IP routes between
networks and routes network traffic appropriately.
• Global Reach peers ExpressRoute circuits together to avoid hopping over an Azure Virtual
Network (VNet).

The bare metal hosts that are used for Azure VMware Solution are different from the server
fleet that hosts other Azure IaaS services and are in a dedicated zone within the Microsoft data
center. Consider how the services work together.

1. When an Azure VMware Solution (AVS) private cloud is provisioned, an ExpressRoute
connection is created between the D-MSEE and the MSEE routers. In this way, the AVS
environment can communicate with Azure public services and, optionally, the Internet.
2. An ExpressRoute gateway is configured in an existing customer VNet to allow private
customer Azure resources to communicate with AVS resources. A common pattern is to create
a VNet with a jump box VM and an ExpressRoute gateway that is connected to the AVS
ExpressRoute.
3. You use Azure Bastion to connect to the jump box VM to access the AVS vCenter instance.

Most enterprise customers have an existing ExpressRoute circuit between an on-premises
data center and an Azure region.



Clusters in Azure VMware Solution

Azure VMware Solution hosts run ESXi directly on the computer hardware, without an OS.

Azure VMware Solution cluster characteristics:

• Each private cloud has one vSAN cluster by default.

You can add, delete, and scale clusters. The minimum number of hosts per cluster, and in
the initial deployment, is three.

• You can create up to 4 private clouds, with up to 12 vSphere clusters per cloud. The
maximum is 16 hosts per cluster or 96 hosts per private cloud.



• vSphere HA and vSphere DRS are enabled by default.

For information about cluster maximum configurations for Azure VMware Solution, see the
Microsoft documentation site.
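Those limits (3 to 16 hosts per cluster, up to 12 clusters and 96 hosts per private cloud) can be captured in a small validation helper. The limits are taken from the text above; the function itself is just an illustration, not part of any Azure tooling:

```python
# AVS private cloud limits as described above.
MIN_HOSTS_PER_CLUSTER = 3
MAX_HOSTS_PER_CLUSTER = 16
MAX_CLUSTERS = 12
MAX_HOSTS_PER_CLOUD = 96

def validate_private_cloud(cluster_sizes: list[int]) -> list[str]:
    """Return a list of limit violations (an empty list means valid)."""
    problems = []
    if len(cluster_sizes) > MAX_CLUSTERS:
        problems.append(f"too many clusters: {len(cluster_sizes)}")
    for i, hosts in enumerate(cluster_sizes):
        if not MIN_HOSTS_PER_CLUSTER <= hosts <= MAX_HOSTS_PER_CLUSTER:
            problems.append(f"cluster {i}: {hosts} hosts out of range")
    if sum(cluster_sizes) > MAX_HOSTS_PER_CLOUD:
        problems.append(f"total hosts {sum(cluster_sizes)} exceeds {MAX_HOSTS_PER_CLOUD}")
    return problems

print(validate_private_cloud([3, 16]))   # [] -- valid
print(validate_private_cloud([2]))       # ['cluster 0: 2 hosts out of range']
```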

Azure VMware Solution Service Locations

The Azure VMware Solution service is deployed in Azure data centers in multiple regions.

East US (Virginia) West Europe (Netherlands)


East US 2 (Virginia) UK West (Cardiff)
West US (California) UK South (London)
Central US (Iowa) France Central (Paris)
North Central US (Illinois) Southeast Asia (Singapore)
South Central US (Texas) Germany West Central (Frankfurt)
Canada Central (Toronto) Japan West (Osaka)
Canada East (Quebec City) Japan East (Tokyo)
Brazil South (São Paulo) Australia East (Sydney)
North Europe (Ireland) Australia Southeast (Melbourne)

For location definitions and the latest availability information, see the Microsoft documentation
site.

Azure Availability Zones

An availability zone is one or more discrete data centers with redundant power, networking,
and connectivity in an Azure region.

Each Azure region has multiple availability zones.

Review this example of how a region is laid out in Azure.

All the data centers are part of one region. Each availability zone is interconnected with diverse
fiber and can be used to provide additional reliability for your workloads.

Availability Zone Key Points


Each Azure region consists of multiple, isolated, and physically separate availability zones within
a given geographic area:

• Each availability zone has independent power, cooling, and physical security and is
connected through redundant, ultra-low-latency networks.

• Availability zone naming and numbering are different for every customer so that availability
zones do not become hotspots.

Permissions Structure

Each hyperscaler partner has its own approach to permissions for the VMware infrastructure
(ESXi, vCenter, and NSX).

In an on-premises environment, the administrator@vsphere.local user has the Administrator
role on the top-level vCenter Server instance. The Administrator role includes all defined
privileges.

In an Azure VMware Solution SDDC, the CloudAdmin and CloudGlobalAdmin roles together
have all the privileges required for managing the SDDC.

However, they do not include all the privileges that the Administrator role includes, because
Microsoft performs host administration and other tasks for you. These restrictions include
limiting access to the root user on the ESXi host itself.

CloudAdmin:
• Includes the necessary privileges for creating and managing workloads for an SDDC
• Does not allow changing the configuration of certain management components that are
supported and managed by VMware, such as hosts, clusters, and management VMs

CloudGlobalAdmin:
• Associated with global privileges
• Allows you to create and manage content library objects and perform other global tasks

Hands-On Lab: Azure VMware Solution (HOL-2294-91-ISM)


In this activity, you apply the concepts that you learned in this lesson to prepare and deploy an
Azure VMware Solution private cloud.

You can perform the following tasks:

1. Request that an AVS host quota is applied to a subscription.


2. Verify that the Microsoft.AVS resource provider is registered in your subscription.
3. Create an Azure resource group for the AVS private cloud and related objects.
4. Deploy the AVS private cloud.
5. Connect the AVS private cloud to an Azure virtual network.
6. Access vCenter from the connected Azure VNet.
7. Connect an on-premises data center to the AVS private cloud.



Google Cloud VMware Engine Architecture
Wednesday, January 18, 2023 12:17 PM

Learner Objectives

After completing this lesson, you should be able to:

• Recognize host configurations for Google Cloud VMware Engine


• Deploy a Google Cloud VMware Engine private cloud.

Cloud SDDCs on Google Cloud VMware Engine


With Google Cloud VMware Engine, customers can integrate SDDC clusters with Google Cloud
native services.

This video provides an overview of Google Cloud VMware Engine, including a demonstration of
how to set up a Google Cloud VMware environment.

Video Transcript

Cloud migration is top of mind for many organizations today. While moving to the cloud can be
full of challenges, the cloud offers many advantages around increased agility, new and
innovative services, and on-demand pricing that traditional data centers don't offer. So let's talk
about one way to make migration easier: Hosting your applications in a native VMware
environment, right in Google Cloud.

Google Cloud VMware Engine is built to address the biggest issues that prevent most workloads
from moving to the cloud: Lack of resources and the cost of rearchitecting apps. With a Google
Cloud VMware Engine, you can migrate your apps with no changes to your processes because
you run your applications on native VMware VMs in a dedicated and private SDDC.

This means that you can use the same tools, processes, and policies while still getting the
advantages of being in the cloud, all on top of deeper integration with other Google Cloud
services. And it doesn't take long to spin up an environment. You can quickly lower your total
cost of ownership and spend more time planning how to rearchitect down the road.

Let's walk through a demo of how to set up Google Cloud VMware environment with just a few
clicks. After clicking the navigation menu, we'll scroll down to the COMPUTE section and
click VMware Engine, which takes us to an overview page. The overview provides you with

additional details about the service and allows you to perform common operations, like
launching the vSphere Client, creating a private cloud, and adding and managing users.

We’ll create a private cloud by clicking the Create Private Cloud button, then entering a name
for our new cloud. We'll stick with US East for the location and keep the node count at the
minimum number of three. We'll input the CIDR range for our management appliances, and
then click Review and Create.

While our private cloud is being created, let's talk about pricing, which is per node and includes
all the storage, compute and licensing to run your VMware environment. You'll pay monthly by
default, but you can also sign up for one- or three-year plans to reduce costs.



After our private cloud is created, we can access it from the Resources screen, which is where
you can view and manage all your private clouds. We'll click our new private cloud to view the
configuration details. This is where you can view a summary of your cloud details, including
which versions of vSphere, NSX-T, and HCX you're currently using.

The links in the upper-right of the screen allow you to launch the vCenter client, which is the
standard enterprise vCenter client VMware users know today and expand your cloud by adding
nodes.

There are also additional options at the bottom of the screen that let you remove nodes, delete
your cloud, and elevate your vSphere privileges, an important feature that gives you admin
access in vCenter so you can make the configuration changes necessary to run certain third-
party software.

Let's scroll back up to the top of the screen and launch our vSphere Client. After logging in, you
can see that we're working with the same vSphere interface that's so familiar to admins. This
native access to VMware provides you a standard way to control your applications, while still
getting all the benefits of running on Google Cloud.



Now let's return to the management console page and take a look at the networking details
and configuration options. This is where you can view your firewall tables and your subnets,
create a public IP with just a single click, set up and manage VPN gateways, configure and
manage your DNS profiles, and create and manage private connections to Google services.

The Activity interface provides important details for your security and operations teams,
including environment alerts, details about past events, and any currently running tasks and
their status. Your team can also audit logs of any activities performed by users.


The Account screen provides a summary of your entire VMware Engine environment, including
all your private clouds, and lets you subscribe to email alerts and add distribution lists. The
Account screen is also where you can manage any users that have access to the environment.

One really nice feature of the VMware Engine is the native integration into Google services. All
the billing related to the service is integrated seamlessly into the online account management
system, which allows you to see all the usage of your VMware Engine dedicated nodes right in

line with your other Google Cloud services.

With this quick tour, we've shown you how to quickly generate a private native VMware
environment on Google Cloud and use all the same tools and processes you're already familiar
with. The environment is fully supported by VMware and gives you the ability to create hybrid
apps that integrate seamlessly with Google Cloud services.

Google Cloud VMware Engine is available for you to try out now. So check out the
documentation to learn more details and spin up your own private VMware environment.

Google Cloud VMware Engine Architecture

Google Cloud VMware Engine brings VMware enterprise class SDDC software to the Google
Cloud Platform.

Customers can run production applications across vSphere-based private, public, and hybrid
cloud environments, with optimized access to Google Cloud Platform services.

Google Cloud VMware Engine integrates with VMware compute, storage, and network
virtualization products (vSphere, vSAN, and NSX), vCenter management, and robust disaster
protection.

It optimizes these tools to run on dedicated, elastic, Google Compute Engine bare-metal
infrastructure that is fully integrated with the Google Cloud Platform.

Google Cloud VMware Engine Key Features



• VMware private cloud running on Google Cloud Platform bare metal.


• Sold, operated, and supported by Google and its partners.
• On-demand capacity and flexible consumption.
• Full operational consistency with on-premises private cloud.
• Seamless workload portability and hybrid operations.
• Global Google Cloud Platform footprint, reach, and availability.
• Direct access to native Google Cloud Platform services.

Bare-Metal Host Instance

When designing your SDDC to be hosted in Google Cloud VMware Engine, you use a
ve1-standard-72 host to run your workloads.

ve1-standard-72 Hosts

The ve1-standard-72 host type is the default host type. Each host includes:

• 36 cores/72 threads at 2.6 GHz


• 768 GB of RAM
• 19.2 TB of raw storage capacity

Networking on these hosts includes 100 Gb connectivity, with 100 Gb dedicated for cluster
networking (vSAN and east-west traffic).

These hosts are equipped with four Mellanox ConnectX-4 Lx Dual Port 25 GbE network
interface cards, which allow for a fully redundant network design with 100 Gbps network
throughput.

Compute Resources
Google Cloud VMware Engine hosts run ESXi directly on the computer hardware, without an
OS. Cluster sizes have different compute capacities.

By using Google Cloud bare-metal hosts without an OS, features such as Intel Virtualization
Technology (VT) are directly available to the ESXi hypervisor.

You can use an Enhanced vMotion Compatibility baseline of Cascade Lake for any
cluster in an on-premises SDDC that might use vSphere vMotion to migrate VMs to a
Google Cloud VMware Engine SDDC.

You can use per-VM Enhanced vMotion Compatibility if a different CPU feature set is
required.

For more information about Enhanced vMotion Compatibility, see VMware knowledge base
article 1003212.

Storage Resources

Google Cloud VMware Engine hosts use vSAN and can connect to Google Cloud Storage,
Google Cloud Filestore, or third-party storage providers for your additional storage needs.

Each ve1-standard-72 host contains NVMe flash drives that provide increased vSAN
performance.

Cluster Size             Total Cache Size (TB)   Total Capacity Size (TB)
3 x ve1-standard-72      9.6                     57.6
6 x ve1-standard-72      19.2                    115.2
16 x ve1-standard-72     51.2                    307.2
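Cluster storage totals scale linearly with host count. A quick sketch of that arithmetic, assuming the per-host figures stated above (19.2 TB raw capacity per ve1-standard-72 host, and 3.2 TB of cache per host inferred from the 9.6 TB three-node figure, which is not stated explicitly in the lesson):

```python
# Cluster storage totals, assuming linear scaling from the per-host figures:
# 19.2 TB raw capacity per ve1-standard-72 host, and 3.2 TB of cache per
# host (inferred from the 9.6 TB cache listed for a 3-node cluster).
RAW_TB_PER_HOST = 19.2
CACHE_TB_PER_HOST = 3.2

def cluster_storage(hosts: int) -> dict:
    """Return the total vSAN cache and raw capacity for a cluster size."""
    return {
        "hosts": hosts,
        "cache_tb": round(hosts * CACHE_TB_PER_HOST, 1),
        "capacity_tb": round(hosts * RAW_TB_PER_HOST, 1),
    }

for size in (3, 6, 16):
    print(cluster_storage(size))
```

Note that these are raw totals; usable capacity after vSAN overhead and storage policies is lower.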

vSAN Encryption with Google Cloud VMware Engine



Encryption of vSAN data at rest requires a key management system (KMS). By default, key
management for vSAN data encryption in Google Cloud VMware Engine uses Cloud Key
Management Service for newly created private clouds, at no additional cost.

You can instead choose to deploy an external KMS for encryption of vSAN data at rest from
one of the supported vendors.

Networking Requirements

To establish connectivity between Google Cloud VMware Engine private clouds and other
networks, you use networking services such as Cloud VPN and Cloud Interconnect.

Cloud Interconnect
Cloud Interconnect provides connectivity between your on-premises network and Google Cloud
through a high bandwidth, low latency connection.

This service comes in two versions, Dedicated Interconnect and Partner Interconnect:

• Dedicated Interconnect: This version uses a direct circuit (private line) provisioned by a
telco to provide connectivity at 10 or 100 Gbps throughput.
• Partner Interconnect: This version provides similar connectivity through a service
provider at speeds between 50 Mbps and 10 Gbps.

Networking for vSphere / vSAN Management Components

Google Cloud VMware Engine deploys management components of a private cloud in the
vSphere / vSAN subnets CIDR range that you provide during private cloud creation. IP addresses
in this range are reserved for private cloud infrastructure, and cannot be used for workload
VMs. The CIDR range prefix must be between /24 and /21.

The size of your vSphere / vSAN subnets CIDR range affects the maximum size of your private
cloud. This table shows the maximum number of nodes you can have, based on the size of the
vSphere / vSAN subnets CIDR range.

Specified vSphere / vSAN subnets CIDR prefix   Maximum number of nodes
/24                                            26
/23                                            58
/22                                            118
/21                                            220

When selecting your CIDR range prefix, consider the node limits on resources in a private cloud.
For example, CIDR range prefixes of /24 and /23 do not support the maximum number of nodes
available to a private cloud.
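The relationship between the prefix length and the address space it provides can be illustrated with Python's standard ipaddress module. The node limits come from the table above; the 10.100.0.0 base address is only a placeholder for illustration:

```python
import ipaddress

# Maximum node counts per vSphere/vSAN subnets CIDR prefix, from the table above.
MAX_NODES = {24: 26, 23: 58, 22: 118, 21: 220}

def plan_cidr(prefix: int) -> dict:
    """Summarize a candidate vSphere/vSAN subnets CIDR prefix."""
    if prefix not in MAX_NODES:
        raise ValueError("The CIDR range prefix must be between /24 and /21")
    # 10.100.0.0 is a placeholder base address for illustration only.
    network = ipaddress.ip_network(f"10.100.0.0/{prefix}")
    return {
        "prefix": f"/{prefix}",
        "total_addresses": network.num_addresses,
        "max_nodes": MAX_NODES[prefix],
    }

print(plan_cidr(24))
print(plan_cidr(21))
```

A wider prefix gives more addresses for private cloud infrastructure, which is why /21 supports more nodes than /24.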

Clusters in Google Cloud VMware Engine

Cluster size can be dynamically modified through the Google Cloud VMware Engine console on
demand as necessary.

Node Considerations

• You can specify the number of hosts to add to or remove from a cluster.
• Private cloud initial setup happens in ~30 minutes.
• Additional hosts can be added in ~15 minutes.
• A three-node cluster is the minimum for production.
• You can have up to 32 hosts per cluster.
• You can have up to 64 hosts per private cloud.

Google Cloud VMware Engine Service Locations


The VMware Cloud service is deployed in Google Cloud data centers in multiple regions.



Americas EMEA APAC
Los Angeles Frankfurt Sydney
Virginia London Singapore
Central (Iowa) Netherlands Tokyo
Montreal Switzerland * Mumbai
Sao Paulo Italy *
Toronto
* Coming Soon

You can choose the Google Cloud region where an SDDC is deployed, and the workloads persist
in that data center.

The location of the service can be global, which might introduce compliance and security
concerns. Compliance and security must be addressed in the design phase and in the customer
contracts.

Google Cloud VMware Engine features are not supported in all regions.

For more information about Google Cloud regions that support VMware Cloud, see "Available
Google Cloud Regions" in the Google Cloud VMware Engine Operations Guide.

Google Cloud Global Infrastructure

An availability zone is one or more discrete data centers with redundant power, networking,
and connectivity in a Google Cloud region.

Each Google Cloud region has multiple availability zones.


Google Cloud Infrastructure
Google also designs custom chips, including Titan and Cloud TPUs. Titan is a secure, low-power
microcontroller designed with Google hardware security requirements and scenarios in mind.
These chips allow Google to securely identify and authenticate legitimate Google devices at the
hardware level. Cloud TPUs were designed to accelerate machine learning workloads with
TensorFlow.

Availability Zone Key Points

• Each Google Cloud region consists of multiple, isolated, and physically separate availability
zones within a given geographic area.
• Each availability zone has independent power, cooling, and physical security and is
connected through redundant, ultra-low-latency networks.
• Availability zone naming and numbering are different for every customer so that availability
zones do not become hotspots.

Permission Structure

Each hyperscaler partner has its own approach to permissions for the VMware
infrastructure (ESXi, vCenter, and NSX-T Data Center).

In an on-premises environment, the administrator@vsphere.local user has the Administrator
role on the top-level vCenter Server instance. The Administrator role includes all defined
privileges.

The default Google Cloud VMware Engine privileges give users access so they can perform
normal vSphere operations. Some administrative functions require a higher level of access to
vCenter.

For these actions, Google Cloud VMware Engine allows administrators to escalate vCenter
Single Sign-On privileges for a limited time.

Activities that require escalated privileges include:

• Identity source configuration


• Management of users

• Installation of first-party and third-party vSphere solutions (backup and DR solutions)
• Configuration of service accounts

For more information about permissions, access the Google Cloud VMware Engine
documentation.

If you change the password credentials, you are responsible for the new password.
Contact Technical Support and request a password change if the password is lost.

When using a third-party KMS solution, you are responsible for providing the required
licenses for the KMS.

Knowledge Check: Host Configuration

You are designing your SDDC to be hosted in Google Cloud VMware Engine. Which statement
about host configuration is accurate? (Select one option)

The AV36 host is the default host type for running workloads
Google Cloud VMware Engine hosts run ESXi on an operating system
Google Cloud VMware Engine hosts use vSAN and can connect to Google Cloud Storage for
additional storage
The default Google Cloud VMware Engine privileges give you access to all administrative
functions

Hands-On Lab: Google Cloud VMware Engine (HOL-2379-01-ISM)


How do you implement a Google Cloud VMware Engine solution? In this practical lab activity,
you can find out.

You can perform the following tasks:


1. Create a Google Cloud VMware Engine private cloud.
2. View the Google Cloud Platform and the Google Cloud VMware Engine networking
configuration.
3. Migrate a VM using HCX.



Management Resource Requirements
Thursday, January 19, 2023 8:52 AM

Learner Objectives

After completing this lesson, you should be able to:

• Identify the requirements for the management components in the SDDC

Management Components

Management virtual machines run inside the SDDC.

Example VMs

VMware vCenter® Server Appliance™, VMware NSX® Controller™, VMware NSX® Edge™

To run in the SDDC, the management VMs must meet several resource requirements.

vCenter Server Appliance:

• One instance
• Virtual CPUs: 8
• Memory: 28 GB
• Provisioned Storage: 500 GB
• Consumed Storage: 665 GB

NSX Controller Appliance:

• Three instances
• Virtual CPUs: 4 each / 12 total
• Memory: 16 GB each / 48 GB total
• Provisioned Storage: 120 GB each / 360 GB total
• Consumed Storage: 160 GB each / 480 GB total

NSX Edge Appliance:

• Two instances
• Virtual CPUs: 4 each / 8 total
• Memory: 8 GB each / 16 GB total
• Provisioned Storage: 120 GB each / 240 GB total
• Consumed Storage: 160 GB each / 320 GB total

Total Amount of Resources for All Management VMs

• Six instances
• Virtual CPUs: 28 total
• Memory: 92 GB
• Provisioned Storage: 1,100 GB
• Consumed Storage: 1,465 GB
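The totals above are straightforward arithmetic over the per-appliance requirements listed in this lesson. A quick sketch that recomputes them:

```python
# Per-appliance requirements from this lesson:
# (instances, vCPUs each, memory GB each, provisioned GB each, consumed GB each)
MGMT_VMS = {
    "vCenter Server Appliance": (1, 8, 28, 500, 665),
    "NSX Controller Appliance": (3, 4, 16, 120, 160),
    "NSX Edge Appliance": (2, 4, 8, 120, 160),
}

def management_totals() -> dict:
    totals = {"instances": 0, "vcpus": 0, "memory_gb": 0,
              "provisioned_gb": 0, "consumed_gb": 0}
    for count, vcpu, mem, prov, cons in MGMT_VMS.values():
        totals["instances"] += count
        totals["vcpus"] += count * vcpu
        totals["memory_gb"] += count * mem
        totals["provisioned_gb"] += count * prov
        totals["consumed_gb"] += count * cons
    return totals

print(management_totals())
```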

A restricted access model prevents users from adjusting the resources on these
management VMs.

Example: i3 Host Cluster

The resource requirements for management VMs in a VMware Cloud on AWS i3 host are as
follows:

• The management VMs (vCenter Server Appliance and NSX VMs) consume cluster
resources.
• A resource pool is created for the management VMs.
• The resource pool includes CPU and memory reservations to guarantee a minimum
amount of resources.

Scaling the i3 Host Cluster

As you scale out the i3 host cluster, the amount of resources required for the management
VMs does not scale linearly.

In a 3-node cluster, 7.6% of CPU resources and 1.5% of memory resources are reserved for
management functions.

In a 16-node cluster, only 1.5% of CPU resources and 0.17% of memory resources are
reserved for management functions.
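Because the management reservation stays roughly fixed while total cluster capacity grows with each added host, the reserved share shrinks as the cluster scales out. A rough sketch of that relationship, starting from the 7.6% CPU figure quoted for a 3-node cluster (the exact reservations are internal to the service, so this is only an approximation):

```python
# The reserved share falls roughly in proportion to cluster size because the
# management reservation stays fixed while total capacity grows. The 7.6%
# starting point is the 3-node CPU figure quoted above; actual reservations
# are internal to the service, so this is an approximation.
CPU_SHARE_AT_3_NODES = 7.6  # percent

def approx_mgmt_cpu_share(nodes: int) -> float:
    return round(CPU_SHARE_AT_3_NODES * 3 / nodes, 2)

for n in (3, 8, 16):
    print(n, approx_mgmt_cpu_share(n))
```

At 16 nodes this approximation lands near the 1.5% figure quoted above.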

Knowledge Check: Resource Requirements

What types of resource requirements apply to management VMs? (Select four options)

Virtual CPU
Network Bandwidth
Memory
Guest OS Memory
Provisioned Storage
Consumed Storage

Knowledge Check: i3 Host Clusters

True or False: In a VMware Cloud on AWS i3 host cluster, a 3-node cluster reserves less CPU
and memory resources relative to total capacity for management functions than a 16-node
cluster.

True
False



Elastic DRS and Cloud Scale-Out
Thursday, January 19, 2023 9:12 AM

Learner Objectives
After completing this lesson, you should be able to:

• Identify key features of Elastic DRS


• Recognize available policies for Elastic DRS
• Recognize how to enable and configure Elastic DRS

This lesson focuses on Elastic DRS for VMware Cloud™ on AWS and its scale-out capabilities.

For more information about the scale-out capabilities of other hyperscaler partners, you can
access the following resources:

Azure VMware Solution


Tutorial: Scale Clusters in a Private Cloud

Google Cloud VMware Engine


Managing Autoscale Policies

About Elastic DRS

With Elastic DRS, you can set policies to automatically scale your cloud SDDC by adding or
removing hosts in response to demand.

Elastic DRS replaces VMware vSphere® Distributed Power Management™ in a VMware Cloud
on AWS SDDC.

To access Elastic DRS settings within the VMware Cloud on AWS cloud console, you
select Actions > EDIT EDRS SETTINGS on the SDDC pane.



Elastic DRS Policies

With Elastic DRS, you can scale a cloud SDDC automatically.

Automatically Scaling Clusters and SDDCs

The benefits of using Elastic DRS in a VMware Cloud SDDC are numerous:

• Automatic scaling based on utilization
• Enabled at the cluster level
• A monitoring interval of 5 minutes
• Scale up when any resource crosses a predefined threshold
• Scale down when all resources consistently remain below thresholds
• Multiple policies that can meet specific needs



The Elastic DRS Baseline policy monitors the SDDC to ensure that the underlying
infrastructure is operational.

You can add rules by selecting an additional Elastic DRS policy, but this policy is always
running and cannot be disabled.

This policy adds hosts when the following events occur:

• Storage utilization reaches 80% in a vSAN cluster.
• An availability zone fails.

When scaling in, the Optimize for Best Performance policy removes hosts gradually to avoid
performance slowdowns as demand spikes.

When scaling in, the Optimize for Lowest Cost policy removes hosts quickly to maintain
baseline performance while keeping host counts to a practical minimum.



Based on cluster CPU and memory utilization, the Optimize for Rapid Scale-Out policy adds
multiple hosts at a time. (Hosts must be removed manually when no longer needed.)

Based on cluster storage utilization, one host is added at a time when storage use becomes
critical.

Optimize for Rapid Scale-Out allows you to scale out 10 hosts in parallel across clusters per
SDDC, with a maximum of 6 hosts per cluster at a time.

Knowledge Check: Elastic DRS Policies

How do Elastic DRS policies help you to scale clusters and SDDCs?



Enabling Elastic DRS:

Step 1: Enable Elastic DRS in the VMware Cloud UI

To configure Elastic DRS, you select a policy, configure the minimum and maximum cluster
sizes, and click SAVE.

Scale-out is performed when utilization for any resource remains consistently above built-in
thresholds.

Scale-in is performed when utilization for all resources remains consistently below the built-in
thresholds.

These thresholds are predefined and cannot be altered by the user.
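The scale-out/scale-in rule described above (any resource high versus all resources low) can be sketched as a small decision function. The threshold values here are hypothetical placeholders; the real thresholds are built into the service and are not user-visible:

```python
# Hypothetical threshold values for illustration only; the real thresholds
# are predefined in the service and cannot be altered by the user.
SCALE_OUT = {"cpu": 0.90, "memory": 0.80, "storage": 0.70}
SCALE_IN = {"cpu": 0.50, "memory": 0.50, "storage": 0.20}

def edrs_recommendation(utilization: dict) -> str:
    """Mirror the rule described above: scale out when ANY resource is above
    its high threshold, scale in when ALL resources are below the low ones."""
    if any(utilization[r] > SCALE_OUT[r] for r in SCALE_OUT):
        return "scale-out"
    if all(utilization[r] < SCALE_IN[r] for r in SCALE_IN):
        return "scale-in"
    return "no-action"

print(edrs_recommendation({"cpu": 0.95, "memory": 0.40, "storage": 0.30}))
```

Note the asymmetry: a single hot resource triggers scale-out, but scale-in waits until every resource is consistently low.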

Step 2: Verify that Elastic DRS is enabled



On the Summary tab, you can identify clusters where Elastic DRS is enabled. The
configured policy name appears in the Capacity and Usage pane.

In the example, EDRS Baseline Policy applies to the cluster.

Limitations of Elastic DRS


Elastic DRS has limitations:

• It is not supported on a single-node SDDC.


• If a user-initiated add or remove host operation is in progress, the current
recommendation by the Elastic DRS algorithm is ignored.
• After the user-initiated operation completes, the algorithm might recommend a scale-in
or scale-out operation based on the changes in the resource utilization and current policy
profile.
• If a user-initiated add or remove host operation is started while a recommendation is
being applied, the operation fails with an error indicating a concurrent update exception.

Elastic DRS Notifications

Several types of notifications are available for scaling recommendations that are generated by
Elastic DRS.

Automated Notifications - Automated notifications are sent by email to organization members.

Pop-Up Notifications - A pop-up notification appears in the VMware Cloud console.

Event Notifications - Event notifications are stored in the Activity Log.

vCenter Server Logs - More details about Elastic DRS events are tracked in vCenter Server log
files.

Knowledge Check: Scaling In

True or False: Elastic DRS scales in whenever any of the resources drops below a configured
threshold.

True
False

(Elastic DRS scales in only when all resources are consistently below the configured thresholds
for the policy and only when the performance or cost policies are in use)



SDDC Design Considerations
Thursday, January 19, 2023 9:47 AM

Learner Objectives

After completing this lesson, you should be able to:

• Use sizing tools to assess the cost of running applications and VMs on VMware Cloud
providers
• Recognize the shared responsibility models of each of the major hyperscalers
• Identify services for creating on-premises to SDDC connections

Sizing, Responsibilities, and Connections

When designing a cloud SDDC, you must consider the number of required resources, the
division of responsibilities, and how to connect your on-premises data center with the cloud
SDDC.

How do you estimate the resources that you require?

To integrate an SDDC solution into your existing data center, you must first determine what
resources are required. Then you can reserve and order services from your hyperscaler partner.
Sizing tools can help to automate this process.

How are responsibilities divided between the SDDC and the partner?

A shared responsibility model defines distinct roles and responsibilities, for example, customer,
VMware, and cloud provider.

How do you make connections from on-premises to the cloud SDDC?

When you design your hybrid cloud infrastructure, you must consider how to connect your
existing compute infrastructure and storage to the cloud SDDC.

Explore design considerations in the context of each hyperscaler partner.

VMware Cloud on AWS: SDDC Design Considerations

Sizing Tools

VMware Cloud Sizer Tool



VMware Cloud Sizer is a complimentary VMware Cloud service that estimates the resources
that are required to run various workloads in VMware Cloud.

In addition, the VMware Cloud Services portal includes an integrated user interface for the
sizer, making the sizing process easy to navigate.

VMware Cloud Sizer is responsible for estimating the resource use for any VMware Cloud
deployment. The VMware Cloud Sizer currently supports VMware Cloud on AWS.

You can access the sizer tool on the VMware Cloud on AWS Sizer website.

Design Criteria for Sizing

The default values are based on workload or application profiles obtained from vSAN
assessment, large proofs of concept, and telemetry. They can be changed to match your
environment.

                          General   Oracle   Microsoft    VDI (Full   VDI (Instant
                          Purpose   DB       SQL Server   Clone)      Clone)
vCPU/pCore                4         3        3            -           -
vCPUs/VM                  2         4        8            2           2
vRAM/VM (GiB)             4         64       32           4           4
Utilized Storage/VM (GiB) 200       1,000    1,000        50          50

Sizing Tool Modes

The sizer tool supports the following sizing modes:

• Quick Sizer
• Import Mode
• Advanced Sizer

Quick Sizer

Using Quick Sizer, you can perform sizing with minimal inputs. Typically, you use the Quick Sizer
for the initial sizing.

With Quick Sizer, you get a simple, high-level input of workload type and number of VMs,
followed by the specific compute and storage resources, which can be either provided as per
VM averages or as total number of resources for the environment in scope.
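A first-pass host-count estimate in the spirit of Quick Sizer can be sketched from per-VM averages. The defaults below follow the General Purpose profile from the table above (2 vCPUs/VM, 4 GiB vRAM/VM, a 4:1 vCPU-to-pCore ratio); the host capacity figures (36 physical cores, 512 GiB RAM for an i3-class host) are assumptions for illustration, and the real sizer also models storage, vSAN policies, management overhead, and headroom:

```python
import math

# Assumed i3-class host capacity, for illustration only.
HOST_CORES = 36
HOST_RAM_GIB = 512

def estimate_hosts(n_vms: int, vcpus_per_vm: int = 2,
                   vram_gib_per_vm: int = 4, vcpu_per_pcore: int = 4) -> int:
    """Rough host count from CPU and memory demand; takes the larger of the two."""
    cores_needed = n_vms * vcpus_per_vm / vcpu_per_pcore
    ram_needed = n_vms * vram_gib_per_vm
    hosts = max(math.ceil(cores_needed / HOST_CORES),
                math.ceil(ram_needed / HOST_RAM_GIB))
    return max(hosts, 3)  # a three-node cluster is the production minimum

print(estimate_hosts(500))
```

For 500 general-purpose VMs, CPU is the constraining resource here; the estimate never drops below the three-node production minimum.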

You can also choose whether you want a stretched cluster deployment. This deployment
model is where two or more hosts are part of the same logical cluster but are located in
separate geographical locations.

After clicking GET RECOMMENDATION, you can select the host type that you want to use in
your SDDC. The recommendations adjust based on the host hardware that you choose.
Import Mode

With Import mode, you can perform sizing on data that is imported from on-premises through
Live Optics or RVTools.

Import option for the VMware Cloud on AWS Sizer tool



Import RVTools file configuration options

When you choose a file to upload, you should not make changes to the file that disrupt the
sizing. For example, you can remove entire rows of VMs but not rename or add columns with
custom data.

You do not need to pre-filter the document because basic filters can be applied through the
sizing tool. For example, you can size powered-on VMs for only the used memory and the
storage, as opposed to the provisioned storage.

For more information about the RVTools software, see the RVTools website.

For more information about the Live Optics tool, see the Live Optics website.

Advanced Sizer

With the Advanced Sizer option, you can perform sizing on multiple workload profiles with
more granular configuration inputs.


On the Basic tab, manual sizing options are available. These options are similar to the Quick
Sizer options, except that the IOPS per VM option can be changed.



The Additional tab provides several adjustable options for the specific workload profile, such as CPU and
memory utilization.

For SDDC-level settings, you click GLOBAL SETTINGS.

Sizer Output

The sizer tool provides several recommendations. You can generate a report PDF for
distribution to stakeholders and decision makers.



VMware Cloud on AWS Shared Responsibility Model

VMware Cloud on AWS implements a shared responsibility model that defines distinct roles and
responsibilities: Customer, VMware, and Amazon Web Services.

Customer: Security in the Cloud

Customers are responsible for the deployment and ongoing configuration of their
SDDC, virtual machines, and data.

In addition to determining the network, firewall, and VPN configuration, customers are
responsible for managing virtual machines (including guest security and encryption) and
using VMware Cloud on AWS user roles and permissions with vCenter roles and
permissions to apply the appropriate controls for users.

VMware: Security of the Cloud

VMware is responsible for protecting the software and systems that make up the VMware
Cloud on AWS service.

This software infrastructure is composed of the compute, storage, and networking software
comprising the SDDC, and the service consoles that are used to provision VMware Cloud on
AWS.

Amazon Web Services: Security of the Infrastructure

AWS is responsible for the physical facilities, physical security, infrastructure, and hardware
underlying the entire service.

On-Premises Connections with VMware Cloud on AWS

AWS Direct Connect

AWS Direct Connect creates a dedicated network connection from an on-premises data center
to an AWS region.



The blue line (B) represents a private AWS Direct Connect connection, which can be used for AWS
resources. However, in this case, the connection is used to securely connect to the VMware Cloud on
AWS SDDC. The green line (G) represents a public AWS Direct Connect connection, used for private and,
potentially, faster access to AWS resources.

Rather than using only a VPN tunnel over the public Internet, AWS Direct Connect uses a
dedicated leased connection to connect the on-premises data center to an AWS Direct Connect
location.

With AWS Direct Connect, network traffic is isolated and bandwidth is, potentially, increased
between the on-premises data center and the AWS resources.



In the example, an on-premises data center in Kobe connects through a dedicated line to an
AWS Direct Connect location in Osaka. Then, the AWS Direct Connect location in Osaka
connects through a dedicated line to the AWS region in Tokyo.

An example AWS Direct Connect gateway service to multiple AWS regions in the United
States might include these connections:

• An on-premises data center in Palo Alto connects through a dedicated line to an AWS
Direct Connect location in Portland.
• The AWS Direct Connect location in Portland connects through a dedicated line to the
AWS region in Oregon.
• The AWS region in Oregon is connected through an AWS Direct Connect gateway to the
AWS region in northern Virginia.

The example is a representation of the technology. A company based in Palo Alto is not
likely to run a dedicated line to Portland. Each AWS region includes multiple availability
zones, and each availability zone is potentially made up of multiple data centers.

For more information, see "Direct Connect Gateways" on the Amazon website.

Establishing Direct Connect Access



You can establish an AWS Direct Connect connection in different ways:

• Using an AWS Direct Connect partner
• Using a private connection from your on-premises data center to an AWS Direct Connect
location
• Connecting at the AWS Direct Connect location with a co-located SDDC

A full list of AWS Direct Connect partners is available on the AWS website.

A full list of AWS Direct Connect locations is available on the AWS website.

Knowledge Check: SDDC Design Considerations with VMware Cloud on AWS

1. True or False: VMware Cloud on AWS uses a shared responsibility model that defines
distinct roles and responsibilities for the customer, VMware, and AWS.

True
False

2. You are designing your SDDC in partnership with VMware Cloud on AWS. Which
statement accurately describes tools for helping you determine resources? (Select one
option)

Quick Sizer provides a high-level input of workload type and number of VMs.
The default values for workload resources cannot be changed when you use the
sizing tool.
Using Manual mode, you can perform sizing on data that is imported from on-
premises through Live Optics or RVTools

3. How does AWS Direct Connect create a connection from on-premises to the cloud SDDC?

Through a dedicated leased connection.


Through a VPN tunnel over the public internet.
By increasing bandwidth.

Azure VMware Solution: SDDC Design Considerations

Sizing with Azure VMware Solution

Capacity planning, or sizing, with Azure VMware Solution involves discovering, grouping,
assessing, and reporting.



Azure Migrate is the preferred capacity planning solution to scope your SDDC in the cloud.

Use cases for capacity planning with Azure VMware Solution include:

• Assessing an existing VMware IT landscape: Typical on-premises VMware environments
grow organically over time. To determine how big your on-premises VMware environment
is, you can complete an objective assessment to remove any guesswork in the
decision-making process.

• Identifying relationships between application components: You might want to consider
Azure VMware Solution for only some workloads. Performing capacity planning for only a
subset of workloads helps you to factor in all dependencies.

• Identifying compatibility between on-premises VMware and the Azure VMware
Solution environment: A workload might have a special software or configuration
requirement when running on the on-premises VMware environment. If so, you should
explore the possibility of meeting that requirement in Azure VMware Solution, to help you
make appropriate decisions ahead of time.

• Determining monthly and yearly costs: You want to determine the costs that are incurred
on a monthly and yearly basis. A capacity planning exercise can help provide customers
with potential costs.

Capacity Planning for Azure VMware Solution

To prepare for an Azure VMware Solution deployment, you must consider how the overall
capacity affects business and technical decisions.

Explore how to use the Microsoft Azure Migrate tool to determine the required resources for
transitioning to a cloud SDDC model:

Step 1: Discovery



You can use Azure Migrate in two modes:

• Azure Migrate generates an OVA (open virtualization appliance) template.


• A CSV file with a predefined format is used to upload on-premises inventory data.

OVA Mode
This template can be used to bootstrap an Azure Migrate VM in an on-premises VMware site.
After the Azure Migrate instance is configured, it sends on-premises inventory data to Azure.

CSV Mode
The CSV file expects four mandatory fields: VM/Server Name, Number of Cores, Memory, and
Eligible OS Name.

Other remaining optional fields (such as Number of disks, Disk IOPS, Throughput, and so on) can
be added to improve the accuracy of sizing.

Output from VMware utilities, such as RVTools, can be used to create a CSV file.
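A minimal CSV in the expected shape can be produced with the standard library. The column headers follow the four mandatory fields named above; the sample VM names and values are purely illustrative, and the memory units should match what your Azure Migrate project expects:

```python
import csv
import io

# The four mandatory columns described above; optional columns (Number of
# disks, Disk IOPS, Throughput, and so on) can be appended to improve
# sizing accuracy. Sample rows are illustrative only.
FIELDS = ["VM/Server Name", "Number of Cores", "Memory", "Eligible OS Name"]

inventory = [
    {"VM/Server Name": "hr-app-01", "Number of Cores": 4,
     "Memory": 16, "Eligible OS Name": "Windows Server 2019"},
    {"VM/Server Name": "ecom-db-01", "Number of Cores": 8,
     "Memory": 64, "Eligible OS Name": "Red Hat Enterprise Linux 8"},
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(inventory)
print(buffer.getvalue())
```

In practice you would write to a file and upload it to Azure Migrate, or generate the rows directly from an RVTools export.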

Step 2: Grouping

After you gather inventory details, you group VMs.

Grouping helps you to organize and manage a large number of VMs. You can group in different
ways, for example:

• Workload: HR, finance, eCommerce application


• Environment: Production, nonproduction
• Location: US, EU
• Criticality: Mission-critical, small-scale

Azure Migrate provides dependency analysis in VMware environments.

Information obtained through dependency analysis can also be used for grouping related VMs.

Step 3: Assessment

After grouping the VMs, you assess them.

Sizing Parameters
You configure the assessment with parameters that are useful in determining right sizing and
capacity. These parameters can cover target Azure VMware Solution site details, such as the
location, node type, and so on.
For Azure VMware Solution VMs, you must include parameters such as FTT and RAID settings
and CPU oversubscription.

Assessment Criteria
You can assess the VMs from two perspectives:

• Performance: You assess on-premises VMware VMs, using their performance profiles.


You can select performance history, going back one month, to capture a performance
profile.

An assessment can be further fine-tuned by selecting a specific percentile (such as 50th,
90th, 99th, and so on) for the assessment.

You can provide an additional capacity margin by using a comfort factor, which increases
the capacity by multiplying it by the comfort factor.

• As on-premises: In this case, you use the existing VM specifications, such as CPU and
memory.

Additional capacity can be added, as appropriate.
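The percentile-plus-comfort-factor rule can be sketched numerically. The 95th percentile and the 1.3 comfort factor below are arbitrary example choices, and the percentile selection is a simple nearest-rank approximation:

```python
# Performance-based sizing as described above: take a percentile of the
# observed utilization history, then multiply by a comfort factor to add
# capacity margin. Percentile and comfort factor are example values.
def assessed_capacity(samples: list[float], percentile: float = 0.95,
                      comfort_factor: float = 1.3) -> float:
    ordered = sorted(samples)
    # Simple nearest-rank percentile, clamped to the last sample.
    index = min(int(percentile * len(ordered)), len(ordered) - 1)
    return ordered[index] * comfort_factor

cpu_ghz_history = [1.0, 1.2, 0.8, 2.0, 1.5, 1.1, 0.9, 1.7, 1.3, 1.6]
print(round(assessed_capacity(cpu_ghz_history), 2))
```

Choosing a higher percentile or comfort factor trades cost for safety margin, which is exactly the tuning decision the assessment exposes.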

Step 4: Reporting

After an assessment is completed, reporting provides the final results.

The results include cost and readiness. A summary provides the number of assessed VMware
VMs, the average estimated cost per VM, and the total estimated costs for all VMs.

Reporting shows Azure VMware Solution readiness, providing a clear breakdown of the VM
numbers, across multiple readiness statuses (Ready, Not Ready, Ready with conditions, and so
on).

You get a list of VMs that might require remediation before migration, including reasons for
remediation.

Reporting also provides a number of Azure VMware Solution nodes that are required to run
assessed VMs. You also can access a projected utilization for CPU, memory, and storage in
Azure VMware Solution.

Azure VMware Solution Shared Responsibility Model

The management of Azure VMware Solution is a shared responsibility between the customer
and Microsoft.



• Deployment, configuration, and management of virtual machines are the responsibility of
the customer. This responsibility includes updating VMware Tools and ensuring virtual
machine compatibility.

• Post-deployment, the customer is responsible for the customized configurations of
vCenter Server and NSX. For example, the customer must configure policies, profiles, and
so on.

• Deployment and configuration of virtual infrastructure components are the responsibility
of Microsoft, including life cycle operations.

• Deployment, configuration, life cycle, and management of physical infrastructure are also
the responsibility of Microsoft.

For more information about the Azure VMware Solution shared responsibility model, see Cloud
Infrastructure Services on the VMware TechZone website.

On-Premises Connections with Azure VMware Solution

Azure ExpressRoute

For connectivity to an Azure SDDC from your on-premises data center, you can use an any-to-
any (IP VPN) network, a point-to-point Ethernet network, or a virtual cross-connection through
a connectivity provider at a co-location facility.

ExpressRoute connections are not made through a public network. As a result, they provide
more reliability, quicker speeds, predictable latencies, and higher security than traditional
Internet-based connections.

Many enterprise customers have an existing ExpressRoute circuit between an on-premises data
center and an Azure region.

Using ExpressRoute Global Reach, you can peer the ExpressRoute circuit with the private
ExpressRoute circuit that supports Azure VMware Solution to allow for connectivity between
on-premises resources and Azure VMware Solution.

On-premises to Azure Region connectivity is also available using a VPN connection.

For more information about setting up an ExpressRoute circuit, see the ExpressRoute
documentation on the Microsoft website.

Knowledge Check: SDDC Design Considerations with Azure VMware Solution
1. Which statement most accurately describes capacity planning with Azure VMware
Solution? (Select one option)

It involves four phases: discovering, grouping, assessing, and reporting.
It focuses on grouping VMs and running a sizer tool on each group.
It assesses requirements using existing VM specifications only.

2. Which responsibilities does each party have in Azure VMware Solution?

3. Which Azure VMware Solution service can you use to create on-premises to SDDC
connections? (Select one option)

ExpressRoute
Direct Connect
Network Segments

Google Cloud VMware Engine: SDDC Design Considerations

Shared Responsibility Model

Google Cloud VMware Engine delivers VMware as a service with all the components you need
to securely run VMware natively in a private and dedicated private cloud.

Google Responsibilities

By running in the cloud, you (the customer) benefit from Google managing the environment
with automated monitoring, patching, and regular upgrades, freeing you from managing the
infrastructure yourself. You can redeploy your staff to work on other more value-added
projects.

Customer Responsibilities

You are responsible for the deployment and ongoing configuration of the SDDC, virtual
machines, and data.

In addition to determining the network, firewall, and VPN configuration, you manage virtual
machines (including guest security and encryption) and use Google Cloud Platform IAM roles
and permissions with vCenter roles and permissions to apply the appropriate controls for users.

On-Premises Connections with Google Cloud VMware Engine

Google Cloud Interconnect

Google Cloud VMware Engine supports options for connecting from on premises to the cloud to
support different customer use cases.

When connecting from on-premises to the Google Cloud VMware Engine private cloud, you can
use Cloud Interconnect or Cloud VPN.

Cloud Interconnect
You use Cloud Interconnect when you require high speed, low latency connectivity into the
Google Cloud Platform (GCP) for access to Google Cloud VMware Engine and other native GCP
services.


Cloud VPN
You might consider Cloud VPN in situations where you do not require additional resiliency or if
you require a lower cost option for hybrid connectivity into GCP. For less error-prone
configuration, configure BGP for Cloud VPN connectivity.

You can use the GCP Org and associated virtual private cloud (VPC). The CIDR range for the
Google Cloud VMware Engine management network is configured in this VPC, and connectivity
to this environment is established.
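
Because the management CIDR must coexist with your other VPC and on-premises ranges without overlap, you can sanity-check a proposed range during planning with Python's standard ipaddress module (the example ranges are hypothetical):

```python
import ipaddress

# Hypothetical address plan; substitute your real ranges.
management_cidr = ipaddress.ip_network("192.168.40.0/22")
existing = [
    ipaddress.ip_network("10.0.0.0/16"),    # on-premises data center
    ipaddress.ip_network("172.16.0.0/20"),  # GCP VPC subnet
]

# Any overlap here would cause routing conflicts after connectivity is set up.
conflicts = [net for net in existing if management_cidr.overlaps(net)]
print("overlap" if conflicts else "no overlap")  # no overlap
```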

Google Cloud VMware Engine Connectivity Options

You have several options for connectivity between your on-premises infrastructure and your
cloud SDDCs in Google Cloud VMware Engine:

• Cloud Interconnect - High bandwidth, low latency 10 Gb and 100 Gb options
• Partner Interconnect - Partner-managed 50 Mbps to 10 Gb bandwidth options
• Cloud VPN - Secure layer 3 connection over the Internet
• Layer 2 VPN - Migration use cases, using NSX standalone edge or VMware HCX
• Point-to-site VPN - Secure admin access to vCenter
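
As a rough planning heuristic, the option list above can be expressed as a decision function; the thresholds are illustrative, and a real choice also weighs cost, resiliency, and provider availability:

```python
def pick_connectivity(bandwidth_mbps, low_latency_required, layer2_needed):
    """Map rough requirements onto the options listed above (heuristic only)."""
    if layer2_needed:
        return "Layer 2 VPN"          # migration use cases (NSX edge / HCX)
    if low_latency_required and bandwidth_mbps >= 10_000:
        return "Cloud Interconnect"   # 10 Gb / 100 Gb dedicated links
    if bandwidth_mbps >= 50:
        return "Partner Interconnect" # 50 Mbps - 10 Gb via a partner
    return "Cloud VPN"                # secure layer 3 over the Internet

print(pick_connectivity(10_000, True, False))  # Cloud Interconnect
```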

For information about connectivity options with Google Cloud VMware Engine, see the guides
on the Google Cloud website.

Knowledge Check: SDDC Design Considerations with Google Cloud VMware Engine

1. Which responsibilities does each party have in Google Cloud VMware Engine?

2. Which option should you select if you require high speed, low latency connectivity into
the Google Cloud Platform for access to Google Cloud VMware Engine and other native
GCP services? (Select one option)

Cloud Interconnect
Cloud VPN
ExpressRoute Global Reach



Additional VMware Cloud Partners
Friday, January 20, 2023 9:11 AM

Learner Objectives

After completing this lesson, you should be able to:

• Describe VMware cloud partnerships with IBM, Alibaba, and Oracle.

Other Partner Solutions

In addition to VMware Cloud on AWS, Azure VMware Solution, and Google Cloud VMware
Engine, other partner solutions are available. This lesson provides brief descriptions of each
solution.

Oracle Cloud VMware Solution

Oracle Cloud is an ideal platform for VMware because it provides security, predictability,
and control. You can rapidly move VMware estates to the cloud without changes to best
practices.

VMware environments operate in a securely isolated customer tenancy with predictable
performance and costs. Customers manage cloud infrastructure and VMware consoles for
complete administrative control.



Oracle Video Transcript

Raghu Raghuram: Larry, thank you for joining us and it's great to see you.

Larry Ellison: It's great to be here.

Raghu Raghuram: The VMware and Oracle partnership has been thriving over the last couple of
years, and it's great to see the giant solutions we have done over in the marketplace to our
customers. Before we dive into the partnership, it'd be great to hear from you - what trends are
you seeing, and what's driving your thinking and Oracle's thinking?

Larry Ellison: Well, several things. One is we're gratified by the fact that the latest report that
surveys IaaS and PaaS services for the second year in a row, Oracle is out there trying to join the
big three. So we were kind of the fourth player in all of this, but we're the most improved
player over the last couple of years. And by the way, the VMware partnership has helped
enormously. We have a number of customers and a lot of nodes running. Our partnership with
Zoom and others has allowed us to make the investments necessary. Now, we think Oracle is
now a fourth major player among the cloud infrastructure providers.

So that's a big deal. Multi-cloud I think is a big deal, because there's really two separate cloud
businesses. There's the application business and then there's the infrastructure business. And
as customers pick applications from cloud application companies and big infrastructure from
cloud infrastructure companies, they need to interconnect these clouds. They're not going to be
using one cloud. And in fact, they need to interconnect their on-premise workloads with their
cloud workloads. They need to interconnect their infrastructure cloud provider with their
application cloud provider. So this whole idea of multi-cloud is going to be very important,
important going forward. It used to be people thought, well, I'm just going to move everything
to Amazon. And I think, I think that's not going to be the case.

Amazon's very good at some things. Actually, I think Oracle and VMware are very good at some
things. And they're going to want to pick the best technology available at the best price
available. And that's going to mean having multiple clouds in their future. So multi-cloud
extremely important. Hybrid cloud interconnecting on prem to public clouds, and application
clouds interconnected to infrastructure clouds. I think all of that's going to be a huge trend as
the center of gravity of computing moves from on-premise to the cloud.


Raghu Raghuram: Yep. Couldn't agree more. And in all of our conversations with customers, we
are seeing exactly the same thing, customers wanting to use a variety of cloud for different
reasons, because they're all good at lots of different things, and connecting their on-premise to
the various cloud solutions.

Coming to our partnership, you mentioned the Oracle Cloud VMware solution, which was
activated a couple of years ago, and we have started to see very good interest from customers.
Can you talk a little bit more about what you're hearing from customers that are using the
solution along with the rest of OCI?

Larry Ellison: Yeah, I think one of the things we tried to do is make it very easy to lift up an
existing VMware estate, and move it to the cloud without redoing your network architecture.

So as you know well, Raghu, I don't want to dive too deep into the underlying technology. But
we have an L2 network implementation. So you don't have to change all your IP addresses.
You don't have to do all this stuff. You can lift up an existing VMware configuration and move
it largely unchanged into an Oracle public cloud very easily and very, very quickly.

And the interesting thing is the network addresses you have allow you to isolate. When you lift
and shift, so, quick lift and shift is part of it, with our L2 implementation. The other thing that's
interesting about that is because we control, the network addresses are all virtualized, of
course. We can isolate the VMware estate from other customers, or even other estates in the
same company.

So we give you a level of security because of that network architecture, that once it's moved,
other people, neighbors can't address your storage systems, can't address your compute
systems. So we really provide that level of isolation to guarantee security, and that's a very big
deal in a world of ransomware. So, quick lifting and shifting, security built in all because of our
unique approach of an L2 implementation. And in a world where ransomware is getting more
and more common, I think this becomes more and more important, and more important to our
customers and, therefore, a very important offering for VMware and Oracle to deliver to those
customers.

Raghu Raghuram: Yeah, and we've seen some good customer events like in Maxim’s and Ruma
logistics, which is the biggest operator, railway operator in Brazil, and many, many others as
well. This is super exciting.

Larry Ellison: Yeah, exactly. That's in Hong Kong, retailers in Hong Kong, railways in Brazil
moving, that have proved that they can move these estates over, save money, and get better
security.

Raghu Raghuram: Yup. That accelerates their whole journey to the cloud and their whole
journey to the modernization of their application portfolio as well, because they can then
connect it to all of your assets that you've got in the databases, on the applications and
everything else that's in the Oracle cloud.

So that's a great start to OCVS and our teams are working great together. What do you see
going forward for the solution? And what customers can look forward to?


Larry Ellison: Well, I think, again, some of the unique things that we offer together are an
environment where security is always on. You know, the approach Oracle takes to security is it
is not an optional feature that you buy. We don't have a long list of parts that you order this
security and that security. Everyone gets security, there's no uplift. You have to have security.
You have to have that level of isolation to protect your data.
It's not that you choose to encrypt, or you choose not to encrypt. No, encryption is always on -
encryption at rest, encryption on the net, it's always on. We don't give you the option. The
other thing I think is very important, that people are going to be looking forward to, that I think
is critical for the future of cloud computing is autonomy, autonomous systems.

The only way, the only way to guarantee that your data is not going to be stolen is to ask your
people who are doing implementations over there at AWS, not to make any mistakes, not to
misconfigure something. If human beings, if the infrastructure that you're running on is
manually configured and a human being makes an error, your data is at risk.

So, everyone thinks of autonomous systems, whether it's an autonomous system from Tesla,
that's going to drive you from the restaurant at home in the evening, as it’s a convenience.
Well, it's more than a convenience. The autonomous system is much less likely to have an
accident and crash your Tesla when you're coming home from dinner.

The autonomous database at Oracle and the autonomous Linux systems that make up our
infrastructure - we never miss patches because it's the computer that does the patching. The
computer does the patching immediately when the patch is available, and it does the patching
while we're live.

Just like, I could point out, when you're moving a VM workload from on-premise into the cloud,
you can move that workload while you're running. I mean, it's amazing. Same thing for security,
a patch that becomes available, you don't look for a patch window to take your systems down
and patch it. That patch window is, you know, if you wait two days to patch, that's two days of
vulnerability, we can't afford that.

So we have to patch while we're on, be able to do these things while the systems are running,
and it's got to be the computer, our robots, our AI, our machine learning that has automated all
this stuff and does it autonomously. So your data is safe, it's not going to be stolen. And that's
what the ransomware guys do, right? They take the data, they encrypt it, and then they offer to
sell you the key. And that's going to get worse before it gets better, but not for our customers.
We'll protect our customers.

Raghu Raghuram: Yeah, that's a very similar philosophy to what we have at VMware as well.
We call it intrinsic security, to build it in. And that's great. So, these are very exciting sorts of
developments to go forward, and look forward to the collaboration between the two teams. It's
been great to work with your teams, bring the solution forward. Thank you so much for your
time, Larry.

Larry Ellison: Raghu, thank you very much. We're looking towards growing our business and
making a lot of customers very, very happy. Thank you for taking your time.

Raghu Raghuram: Yup. Thank you.


Learn more about Oracle Cloud VMware Solution on the Oracle website.

IBM Cloud for VMware Solutions

With IBM Cloud® for VMware Solutions™, you can migrate VMware workloads to the IBM
Cloud while using existing tools, technologies, and skills from your on-premises environment.

The integration and automation with Red Hat OpenShift helps accelerate innovation with
services like AI, analytics, and more.

IBM Video Transcript

Hi, I'm Simon Kofkin-Hansen from IBM Cloud, and today we're here to talk about IBM Cloud for
VMware solutions. This is the most secure enterprise-grade cloud for VMware at scale. So let's
break down what does this mean? The security leadership, enterprise grade, and the VMware
expertise at scale.
Starting with the security leadership, IBM provides the highest form of encryption for data at
rest and data in motion with FIPS 140-2 Level 4-based encryption, ensuring that your data,
where it resides and while it's moving around within your organization, uses the highest
form of encryption available out in the market today.


We also provide security role-based access control, allowing different parts of your organization
to interact with the data, maintaining the compliance, the security, and the visibility to the
different parts of the business, and ensuring the wrong people don't get access to the wrong
types of data.
Furthermore, we can comply with data sovereignty regulations, ensure we provide geo-fencing
for your workloads, ensuring data doesn't necessarily cross the relevant borders. As we find this
becomes more and more easily achievable in this cloud and this virtual-based world, we want
to ensure that this sovereignty remains intact, also providing compliance and regulatory control
through config management and managing the configuration drift.

Through our wide variety of partners, we've brought all these different security solutions
together, all based on the real-time advice and guidance provided from our highly regulated
and security-conscious clients.
So let's explore enterprise grade, and what does that mean? So, what we've done is we've
utilized and codified IBM's experience of managing, for over a decade, these 850,000
VMware workloads for all our enterprise clients across the world. And these consist of banking,
government and things in the financial sector, insurance, retail, and all the other industry
sectors. And the ability to take all that experience, codify it, and create an automated way of
deploying these solutions out there.

The automation that we have provided not only brings a rapid provisioning and a rapid uptime
to provide these solutions and these platforms, but it has ancillary benefits downstream by
making these solutions much easier to support and much easier for all the third parties that we
have out there to integrate their solutions and integrate their products onto this overall
platform.

The other thing with enterprise grade is we have the largest footprint of the solution available,
with the solution available in over 35 data centers globally. We also have the flexibility and this
is what the final factor of enterprise grade: the flexibility of options and the flexibility of choice
with the myriad of options that you can choose.

And what I mean by that is we have a number of different storage options. As we found out by
direct lessons learned from the enterprise is most enterprises are not going to choose just one
or two different storage options. So taking that lessons learned, you have a myriad of choice,
with software-defined block storage, with endurance-based storage, and with object storage for
long-term data archiving and retrieval.
All these lessons learned have been brought together along with the partners and our broad
partner ecosystem to bring together what we believe is a truly enterprise grade and enterprise
ready solution, which has been tested, validated, and verified by many of the enterprises out
there today.
VMware expertise at global scale, let's dive into that for a second or two. We're the largest
manager of workloads out there with 850,000 VMs under management. We've migrated over a
hundred thousand workloads from on-premise into the cloud, ensuring and helping our
clients out there with their data transformation. We have over decades worth of experience in
managing these workloads, providing these solutions across various industry verticals.

So with these three things, it all makes it out to be why we're unique in the market with our
VMware solutions.


Thank you for watching this video today, and please feel free to leave any comments down
below. If you like this content, please, like and subscribe to future videos around this and many
other subjects on IBM Cloud.

Learn more about IBM Cloud for VMware Solutions on the IBM Cloud website.

Alibaba Cloud VMware Solution

Alibaba VMware Cloud Service customers can use familiar VMware Cloud technologies to
easily extend VMware-based on-premises enterprise workloads to Alibaba Cloud.

You do not need to rearchitect your environment but can manage your workflow in the
familiar VMware environment without network and security disruption.

Alibaba Video Transcript

Rosa Wang: Hello everyone. My name is Rosa Wang. I am a global alliance manager of Alibaba
Cloud.
Enterprises adopt multi-cloud and the hybrid cloud rapidly to address increasing demands from
our customers. We were looking for market leaders to partner with. VMware is the well-
recognized market leader in this domain. So we are very excited about this partnership. This
year on April 29 (2020), Alibaba Cloud and VMware jointly announced the general availability of
Alibaba VMware Cloud Service (formerly Alibaba Cloud VMware® Solution), which is currently
available in all the regions in mainland China and Hong Kong. Alibaba Cloud VMware Solution
allow customers to easily migrate or extend their current on-premise VMware workload to
Alibaba Cloud.

Today, I'll cover three major use cases for this solution, which include disaster recovery, data
center extension, and cloud migration. Before I get into that, I would like to introduce CT Dong,
who is the SDDC solution architect leader at VMware, to talk about some technical details of
this solution.

CT Dong: Hi, my name is CT Dong and I'm the cloud architect lead from VMware, China region.
So, today we will introduce to you the Alibaba Cloud VMware Solution. So, VMware Solution is
to allow any application running on any cloud on any devices. To achieve that, VMware built
the leading service called VMware Cloud Foundation, which is composed of the very famous
VMware products like vSphere, software-defined network, NSX, and software-defined storage,
vSAN, and the cloud management, vRealize.

The VCF is not only built for enterprise private cloud on customer-owned data center, but also
it's tied to every major public cloud globally, such as VMC on AWS, Microsoft Azure, Google
GCP, and of course, Alibaba Cloud in China. And even the same architecture can be the
foundation for edge computing.

So, you can see VMware Cloud Foundation enables consistent cloud infrastructure everywhere.
Today, hybrid cloud becomes the preferred enterprise cloud adoption strategy. The reason the
research shows the percentage of organizations committed to or interested in hybrid cloud
strategy keeps growing, and the amount of the mix of workloads moving from on-premise to
cloud: 56% are doing lift and shift and 44% are doing refactoring. So, if you are doing simple
math, you can see lift and shift of enterprise workloads is a multi-billion opportunity and it's
happening now. So, VMware together with our public cloud partners can address these market
requirements very well.

So why do the top public clouds build a joint solution with VMware? There are a couple of
compelling reasons. First, VMware is private cloud leader with more than 15 million workloads
running on VMware vSphere. Second, the joint hybrid cloud solution allows customer to do the
live migration without cost, complexity, or risk caused by refactoring. And last but not least,
migrating the legacy applications to public cloud gives the customer the opportunity to
integrate the public cloud services, such as AI and the machine learning central, with better
connection and a lower cost.

This is the general architecture we built for VMware hybrid cloud. On customer on premise, we
have full stack of VMware Cloud Foundation being built. And on public cloud, with joint
engineering effort, we integrate the full stack VMware Cloud Foundation software on top of
public cloud infrastructure.

Let’s drill down a little bit on the details of Alibaba Cloud VMware Solution. The part on left side
shows the architecture on Alibaba Cloud. We preload the VMware software stack onto the
innovative Alibaba bare-metal servers, X-Dragon servers, and the underlay, the VMware NSX-T
connect to Alibaba VPC network, which allow the customer to either export the service for
outside work or connecting internal applications with the Alibaba native cloud services. This
solution gives customers much more flexibility by managing the VMware stack themselves.
For example, if customer wants containers to run new applications, they can deploy VMware
Tanzu Kubernetes Grid with vSphere 7. And also they can take advantage of VMware cloud
management, vRealize Suite to automate their workflows or manage resources on both on
premise or on cloud.

The part on right side shows the architecture of customer data center. It is also full stack
VMware solution, managed by either the customer themselves or by the management service
provider for the customer. In between the customer data center and the public cloud, we can
use Alibaba direct connect lines, or software-defined WAN network, or VPNs to make the
connections.

And by leveraging VMware Site Recovery Manager, SRM, or Hybrid Connect Extender, HCX,
products, and technologies, customer can do disaster recovery or backup in a cost-effective
way. The good news is that Alibaba Cloud launched the services in all regions in mainland
China and Hong Kong. Customers can choose the region close to their business, test it, use it
and append it. And Alibaba Cloud provides the first-line support and VMware provides the
second-line support to make sure we maintain a high-level service level agreement.

A quick summary: Alibaba Cloud VMware Solution brings a lot of benefits to customers. One,
Alibaba Cloud VMware Solution is a true leading solution for hybrid cloud. It is joint engineering
by the two leaders of cloud provider. Two, the service is available now and easy to access, so
that it saves customers time to market. Three, because of the consistent architecture, it is easy to
migrate the workloads to and from the public cloud. And the IT team can extend their skills to
manage the infrastructure, and avoid a deep learning curve on public cloud. Four, by
introducing Alibaba Cloud VMware Solution, customers can change the cost model from CapEx
to OpEx, start from small-scale to large-scale based on the business needs. So everything is
ready. Checkout the service now.

Rosa Wang: Thank you, CT, for the great explanation. In this slide, I will give you an overview of
Alibaba Cloud VMware Solution. It has four features I want to highlight. First, joint
development. VMware and the Alibaba Cloud engineering team work together to develop the
VMware SDDC version that runs on Alibaba Cloud, the bare-metal service and the VPC. The key
components include VMware vSphere and NSX. Later on, we will also add vSAN support.

It’s a bundle, which means the customer doesn't need to purchase a VMware license
separately. Both VMware SDDC software and Alibaba Cloud infrastructure, such as bare-metal
service, are available in a bundle together for ease of management and purchase.

Seamless integration: The customer can use the existing Alibaba Cloud to easily integrate
Alibaba Cloud VMware Solution with Alibaba compute, storage, network and other cloud native
services.
Last but not the least, the same user experience: The customer can manage that VMware
workload on Alibaba Cloud through VMware vSphere Client, which connects to the vCenter on
Alibaba Cloud, which is exactly the same tool they use on-premise.

So the key advantages of this solution include: Convert a capital expense to operating expense,
which means it has reduced the cost by eliminating the upfront hardware cost. Alibaba Cloud
VMware Solution supports a subscription model, which can be based on monthly or yearly. Fast
migration to the cloud, which means it can save time and resources to migrate the virtual
machines through the same tools of VMware. Also, leveraging the same VMware environment
in a local data center and the public cloud gives you a new hybrid cloud infrastructure.

The first use case scenario I want to cover today is disaster recovery. In this scenario, your
production system runs on VMware environment in your local data center, and the DR site will
be deployed on Alibaba Cloud. So, you can replicate your existing VMware images and protect
your VMware workload from disaster, and recover easily from cloud backup.

In this case, you gain bi-directional workload portability between on-premise and VMware
Alibaba Cloud. So, customer will leverage the VMware Site Recovery Manager, SRM, feature to
replicate virtual machine images to Alibaba Cloud. When they say this is bi-directional, this
means the customer can replicate the image, not only from local data center to public cloud,
but also from Alibaba cloud to local data center, or from another cloud to Alibaba cloud, or vice
versa. And also, there are enough choices for the DR site, through Alibaba Cloud availability
zones and regions. Currently in mainland China and Hong Kong, there are nine regions
available. The data in the Alibaba Cloud backup can be saved to Alibaba Cloud object storage
services.

The second scenario is for data center extension. This case is for customers who want to
continue to maintain their existing local data center VMware environment, but also want to
leverage the elasticity and flexibility of public cloud. So, customers can extend their existing on
premise workload to the cloud to allow easily to scale up and down by leveraging cloud elastic,
compute capacity, while maintaining the same user experience of the VMware environment of
the local data center.

The last scenario I want to talk about today is migration. So, this really includes two scenarios. It
is for customers who have enterprise application workloads running on premise, who want to
move to the public cloud. We can call this lift and shift. So they can easily move traditional
applications to Alibaba Cloud without re-architecting the environment. It also includes net new
application development, such that net new application can leverage a cloud native services on
Alibaba Cloud with the flexible and the hybrid architecture. Enterprise workloads, such as
traditional business applications, ERP, CRM, SRM, or service automation in the VMware
environment can be easily moved to Alibaba Cloud without architecture change, which means it
can save you time and money.

Innovative application development includes e-commerce, net new omni-channel marketing,
artificial intelligence, biz middle-office or data mid-office, which build on top of cloud native
services on Alibaba Cloud.
Alibaba Cloud VMware Solution use case scenarios definitely are not limited to these three use
cases I mentioned today. Because Alibaba Cloud VMware Solution provides the customer
simplest interoperability among public cloud, private cloud and local data center, allowing you
to maximize your VMware environment investment and unlock unlimited potential.

We really appreciate the partnership with VMware. This is only the beginning of the
partnership journey. So we look forward to working with VMware closely to create more
success in the future.

SDDC Planning and Design Page 169

Thank you.

Learn more about Alibaba Cloud VMware Solution on the Alibaba Cloud website.

Knowledge Check: VMware Cloud Partnerships

True or False: In the VMware cloud partnerships with Alibaba, IBM, and Oracle, you must learn
to use new management tools because VMware tools are not integrated.

True
False

SDDC Planning and Design Page 170


On-Premises Cloud Solutions
Friday, January 20, 2023 9:31 AM

Learner Objectives

After completing this lesson, you should be able to:

• Recognize use cases for on-premises cloud infrastructure.


• Identify service features of VMware Cloud on Dell EMC.
• Identify service features of VMware Cloud on AWS Outposts.
• Recognize requirements for on-premises cloud infrastructure.

Bringing the Cloud On Premises

With on-premises cloud infrastructure as a service (IaaS), you can host an environment similar
to a public cloud, on premises.

But an on-premises cloud infrastructure seems to contradict a key principle of cloud services:
that they are provided off-site, in the cloud.

Use Cases for On-Premises Cloud Infrastructure

Organizations have different reasons for wanting to keep their workloads in their own data
centers:

• Meeting low-latency requirements


• Protecting highly-sensitive data
• Complying with data sovereignty requirements
• Meeting compliance and regulatory requirements
• Processing data-intensive workloads close to where the data is generated
• Modernizing a data center without having to move applications off site

On-Premises Cloud Solutions

VMware Cloud on Dell EMC and VMware Cloud on AWS Outposts are on-premises products
that you can use to extend the cloud model to the data center.

SDDC Planning and Design Page 171


VMware Cloud on Dell EMC is a VMware-managed service that brings VMware enterprise-class
SDDC software on Dell hardware to your on-premises environment.

The service accommodates both traditional data center use cases and edge computing.

With this hybrid cloud option, you can continue operating your data centers without the
traditional capital-funded infrastructure refresh spend and the ongoing maintenance that is
typically required for physical data center infrastructure.

The fully managed on-premises infrastructure as a service offers a cloud-like monthly billing
model.

How VMware Cloud on Dell EMC Works

SDDC Planning and Design Page 172


VMware Cloud on Dell EMC includes a physical Dell VxRail hyperconverged infrastructure.

This fully managed infrastructure includes all the software of the SDDC (compute, storage, and
networking), which is powered by VMware Cloud.

Service Features

VMware Cloud on Dell EMC provides infrastructure, VMware SDDC software, services such as
shipping, installation, and life cycle management, and support for security updates and
software patching, proactive monitoring, and break-fix service.

Hardware

Each deployment includes a 42u rack, with two VMware SD-WAN appliances. VMware uses
these appliances to manage the solution remotely.

An out-of-band management switch connects the Dell VxRail iDRAC ports for remote
monitoring and maintenance of the rack hardware.

The redundant top-of-rack switches provide network connectivity for business applications.

The rack includes three physical hosts, which is the minimum number of hosts that this
solution can be ordered with. Capacity can expand to up to 26 hosts per rack.

A standby host is used by VMware Site Reliability Engineers (SREs) when they need to do
maintenance on the host infrastructure or replace any failed host hardware.

Software

VMware Cloud on Dell EMC includes the SDDC stack:

• ESXi running on Dell EMC VxRail
• vCenter Server Appliance
• NSX Data Center for vSphere to power networking for the service
• vSAN to aggregate host-based storage into a shared datastore
• VMware HCX to enable application mobility and infrastructure hybridity
• VMware Tanzu Services for modern application development

SDDC Planning and Design Page 173

Services

The services include shipping, both delivery and return.

For installation, a Dell technician comes onsite to install the power and networking that is
required to activate the system.

For lifecycle management, the service includes all patching and upgrades for all hardware and
software components.

Support

Support is provided for all software components.

If a problem occurs with a host, a four-hour mission-critical onsite fix is required.

Global support centers provide full monitoring (24/365) and support.

VMware handles software deployment and maintenance.

For more information about the features of VMware Cloud on Dell EMC, access the Service
Description.

Knowledge Check: VMware Cloud on Dell EMC


Which statement most accurately describes the service features of VMware Cloud on Dell EMC?
(Select one option)

Dell technicians perform all software maintenance, as well as hardware fixes.


An SDDC includes a minimum of one rack with three hosts. You can add hosts to the rack,
up to the maximum supported by the rack.
VMware Site Recovery is included as part of the initial service offering.
When an onsite response is required to fix a problem related to a host, a Dell technician
must arrive on site within 24 hours.

SDDC Planning and Design Page 174


VMware Cloud on AWS Outposts is an on-premises as-a-service solution that runs VMware
enterprise-class SDDC software on AWS Nitro System-based EC2 bare metal instances.

The service is optimized for VMware workloads with low latency, data residency, or local data
processing requirements.

How AWS Outposts Work

AWS delivers and installs the outpost at your on-premises location and monitors, patches, and
updates it. AWS handles all maintenance and replacement of the hardware.
VMware provides continuous life cycle management of the VMware SDDC and serves as your
first line of support.

VMware Cloud on AWS and VMware Cloud on AWS Outposts share the same infrastructure,
architecture, and operations.

The VMware SDDC runs on AWS Outposts bare metal delivered as-a-service on-premises.
SDDC Planning and Design Page 175

You run applications and workloads on premises using familiar AWS services, tools, and APIs.

You can run some AWS services locally and connect to a broad range of services available in the
local AWS Region.

Knowledge Check: VMware Cloud on AWS Outposts

Which statement most accurately describes service features of VMware Cloud on AWS
Outposts? (Select one option)

The SDDC software is managed by AWS


VMware enterprise-class SDDC software runs on AWS Nitro System-based EC2 bare metal
instances
You run applications and workloads on premises using new outpost tools and services
VMware provides break/fix support for mission-critical problems with the hardware

On-Premises Cloud Requirements

SDDC Planning and Design Page 176


When you order an SDDC to host your outpost workloads, you must select a location where you
want to install it, select a rack, specify the number of hosts for your SDDC, and set up the
management and overlay networks.

The following checklists can help as you prepare to order your first SDDC.

Physical and Environmental Requirements

Plan adequate space for the rack based on its dimensions: Verify that you have
accommodations for network cabling and power accessibility and enough space for
service and maintenance.
Verify that you have sufficient space and weight capacity onsite to maneuver the rack into
its designated position in the data center.
Ensure that the rack is not exposed to direct sunlight and that the site maintains the
specified temperature and humidity levels.
Plan for electrical power sources that meet the requirements of the rack.

VMware is not responsible for any delay in installation or any failure of the
SDDC Planning and Design Page 177
hardware or the SDDC if the customer does not maintain the specified
environmental conditions at the installation site.

Networking Considerations

Verify that an existing network can handle multiple subnets and a router with Internet
connectivity can be connected to the rack.
During the ordering process, specify the IP addressing information for configuring the
management subnets.
Ensure that you provide the underlying networking details for the uplink network to
establish a connection between the SDDC and your network. An uplink connection is
required to migrate your workloads between the rack and your network.
Configure the number of uplink connections based on your requirements.
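The IP addressing items in the checklist above can be sanity-checked before you order. The Python sketch below is illustrative only (the subnet sizes and host counts are assumptions, not VMware requirements): it verifies that proposed management subnets do not overlap and that each one has enough usable addresses for the planned host count.

```python
import ipaddress

def validate_subnet_plan(cidrs, hosts_required):
    """Check that proposed management subnets do not overlap and that
    each one has enough usable addresses for the planned host count."""
    nets = [ipaddress.ip_network(c) for c in cidrs]
    # Pairwise overlap check: overlapping subnets break routing.
    for i, a in enumerate(nets):
        for b in nets[i + 1:]:
            if a.overlaps(b):
                return False, f"{a} overlaps {b}"
    # num_addresses - 2 excludes the network and broadcast addresses.
    for net in nets:
        if net.num_addresses - 2 < hosts_required:
            return False, f"{net} is too small for {hosts_required} hosts"
    return True, "plan OK"
```

For example, `validate_subnet_plan(["10.0.1.0/24", "10.0.2.0/24"], 26)` passes, while a plan that reuses overlapping address space or a subnet too small for the host count fails.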

Accessing Specifications
• For a list of detailed specifications, access the data sheet for VMware Cloud on Dell EMC
• For a list of detailed specifications, access AWS Outposts rack hardware specs

Knowledge Check: On-Premises Cloud Requirements

If necessary, use the specifications information provided in the VMware Cloud on AWS
Outposts website and the VMware Cloud on Dell EMC datasheet to answer the following
questions.

1. Before you deploy VMware Cloud on AWS Outposts or VMware Cloud on Dell EMC at your
data center, you must plan and allocate a dedicated physical space for the hardware.

Which environmental requirements must you consider? (Select two options)

The operating temperature is within the required range with no direct sunlight on
the equipment.
Cabling and power sockets meet requirements in terms of location and number.
Power source locations are on the floor, rather than the ceiling, to avoid fire
hazards.
The power source location is close enough to the hardware so that you do not
require extension cords.

2. Which requirements must you meet for physically moving the outpost hardware into your
data center? (Select two options)

Weight capacity to move the hardware to its location in the data center.
Space clearance to move the hardware to its designated location in the data center.
Trained movers that you hire to manually lift the hardware into place.

SDDC Planning and Design Page 178


Assembly tools to put the rack together after it is moved into place.

3. Which networking requirements must you consider when deploying the outpost
hardware? (Select two options)

Number of uplinks that you require


Correct IP addressing for subnets
Use of encryption keys
Number of standby hosts

SDDC Planning and Design Page 179


Module Summary
Friday, January 20, 2023 10:17 AM

Review the key concepts covered in this module:

The VMware software-defined data center includes three foundational technologies:


vSphere, NSX-T Data Center, and vSAN.

Hyperscaler cloud partners implement vSphere HA to provide high resiliency against the
potential failure of hosts in their data centers.

VMware Cloud on AWS is a VMware first-party solution. You can integrate SDDC clusters
with Amazon Web Services, such as Amazon Simple Storage Service, Amazon Elastic
Compute Cloud, and Amazon Relational Database Service.

Azure VMware Solution is a Microsoft service, verified by VMware, that runs on Azure
infrastructure. With this solution, you can move VMware workloads from your data center
to Azure and integrate your VMware environment with Azure.

Google Cloud VMware Engine brings VMware enterprise class SDDC software to the
Google Cloud Platform. You can run production applications across vSphere private,
public, and hybrid cloud environments, with optimized access to Google Cloud Platform
services.

With Elastic DRS, you can set policies to automatically scale your cloud SDDC by adding or
removing hosts in response to demand. Elastic DRS replaces VMware vSphere DPM in a
VMware Cloud on AWS SDDC.

When designing your SDDC, you must consider where your other cloud-native services live
and how to make the network connections to those services and to your on-premises
infrastructure.

SDDC Planning and Design Page 180


Network Virtualization Overview
Friday, January 20, 2023 11:44 AM

Learner Objectives

After completing this lesson, you should be able to:

• Distinguish between a vSphere standard switch and a vSphere distributed switch


• Distinguish between functions of the management, control, and data planes

At VMware, network virtualization has evolved over time.

This lesson reviews the basic concepts of the VMware vSphere virtual switches and VMware
NSX networking planes.

Virtual Switches in vSphere

Virtual switches connect VMs to the physical network:

Networking Page 181


• They provide connectivity between VMs on the same VMware ESXi host or on different
ESXi hosts.
• They support VMkernel services, such as VMware vSphere vMotion migration, iSCSI, NFS,
and access to the management network.

To connect VMs and ESXi hosts to the network, a virtual switch uses specific types of
connections, or ports: virtual machine, VMkernel, and uplink.

VM Ports
Virtual machine ports connect virtual machines to the virtual network.

VMkernel Ports
The ESXi hypervisor (VMkernel) uses VMkernel ports for managing infrastructure traffic.
VMkernel ports are used for traffic such as IP storage, vSphere vMotion migration, VMware
vSphere Fault Tolerance, VMware vSAN, VMware vSphere Replication, and the ESXi
management network.

Uplink Ports
Uplink ports connect the virtual network to the physical network.

Each uplink port is associated with a physical network adapter on the ESXi host.

Networking Page 182


Each uplink port is associated with a physical network adapter on the ESXi host.

vSphere supports two types of virtual switches: standard and distributed

vSphere Standard Switches

A vSphere standard switch is a virtual switch that provides virtual networking for an ESXi host
and its virtual machines.

For example, this standard switch has the following components:

• Two VM port groups called Accounting and Sales
• Two VMkernel ports, one for ESXi management traffic and the other for iSCSI storage traffic
• Four uplink ports, each connected to a physical NIC on the ESXi host

Example of a standard switch, which includes two VM port groups called Accounting and Sales

A standard switch is associated with a single ESXi host.

For example, if you have three hosts that require network connectivity, you must create a
standard switch on each host.
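That per-host duplication is the key operational difference between the two switch types. The Python model below is a conceptual sketch (not a vSphere API): with standard switches, the same port group must be created once per host, while a distributed switch defines it once for every member host.

```python
class StandardSwitch:
    """Per-host switch: each ESXi host owns its own configuration."""
    def __init__(self, host):
        self.host = host
        self.port_groups = set()

class DistributedSwitch:
    """vCenter-owned switch: one configuration shared by every member host."""
    def __init__(self):
        self.hosts = set()
        self.port_groups = set()

    def add_host(self, host):
        self.hosts.add(host)

    def add_port_group(self, name):
        # A single change is visible on all member hosts at once.
        self.port_groups.add(name)

# With standard switches, the same port group is configured once per host:
std_switches = [StandardSwitch(h) for h in ("esxi-01", "esxi-02", "esxi-03")]
for sw in std_switches:
    sw.port_groups.add("Production")

# With a distributed switch, one change covers all member hosts:
vds = DistributedSwitch()
for h in ("esxi-01", "esxi-02", "esxi-03"):
    vds.add_host(h)
vds.add_port_group("Production")
```

This is also why VMs keep a consistent network configuration during migration: the distributed port group definition does not depend on which host the VM lands on.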

vSphere Distributed Switches

VMware vSphere® Distributed Switch™ is a virtual switch that provides virtual networking for
all ESXi hosts in a data center.

Networking Page 183


A standard switch is owned and managed by a single ESXi host. A distributed switch is owned and
managed by VMware vCenter Server®.

A vSphere distributed switch provides the following benefits:

• Virtual machines maintain a consistent network configuration as they migrate between


hosts in the data center.
• Administrators have a central point of control for creating, administering, and monitoring
virtual networks.

vSphere Distributed Switch Architecture

The distributed switch architecture consists of the control plane and the I/O plane.

Control Plane

The control plane resides in vCenter Server, where it configures distributed switches,
distributed port groups, distributed ports, uplinks, NIC teaming, and so on.

The control plane also coordinates the migration of the ports and is responsible for the switch
configuration.

I/O Plane

The I/O plane is implemented as a hidden virtual switch in the VMkernel of each ESXi host.

This plane manages the I/O hardware on the host and is responsible for forwarding packets.
vCenter Server oversees the creation of these hidden virtual switches.

Networking Page 184

Knowledge Check: vSphere Distributed Switches

Which statement accurately describes vSphere distributed switches? (Select one option)

A distributed switch is a virtual switch that is configured for a single ESXi host.
A standard switch is different from a distributed switch in that standard switches contain
VMkernel ports.
A distributed switch is managed by vCenter Server for all ESXi hosts associated with the
distributed switch.
Each ESXi host can have only one distributed switch configured at any time.

Networking Planes

Networks use the data forwarding process to carry user traffic from one device to another
device.

Networks include three layers or planes: management, control, and data. These planes
coordinate with each other to identify the best possible path between devices.

Management, Control, and Data Planes

The main elements of NSX architecture are the management, control, and data planes. This
architectural separation lets you scale your environment without impacting workloads.

Although not part of NSX, an additional plane, called the consumption plane, provides
integration into a cloud management platform.

Networking Page 185


Management Plane

In the management plane, the following functions are performed:

• Users manage, configure, and monitor the network devices, such as a switch or router.
• The network device usually provides a CLI or GUI for configuring the network and the
device. The CLI or GUI operates in the management plane.

In NSX, the management plane is designed with advanced clustering technology, which allows
the platform to process large-scale concurrent API requests. NSX Manager provides the REST
API and a web-based UI interface entry point for all user configurations.
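Because NSX Manager exposes a REST entry point, management-plane configuration can be scripted. As a small Python sketch, the helper below only builds a request URL; the `/policy/api/v1/infra/segments` path follows the NSX-T Policy API pattern, but treat it as an assumption and verify it against the API reference for your NSX version.

```python
def segment_url(manager_host, segment_id=None):
    """Build an NSX Policy API URL for listing or fetching segments.

    The /policy/api/v1/infra/segments path follows the NSX-T Policy API
    convention; confirm it against the API reference for your NSX version.
    """
    base = f"https://{manager_host}/policy/api/v1/infra/segments"
    return base if segment_id is None else f"{base}/{segment_id}"
```

For example, `segment_url("nsx-mgr.corp.local", "web-segment")` yields `https://nsx-mgr.corp.local/policy/api/v1/infra/segments/web-segment`, which you could then call with an authenticated HTTP client.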

Control Plane

The control plane is the brains of a network:

• It calculates and determines the best path for a packet to navigate from one device to
another device. Routing protocols, such as BGP, OSPF, and RIP, primarily operate in this layer.
• After determining the best path, the control plane propagates this information to the data
plane.

The control plane is responsible for computing and distributing the runtime virtual networking
and security state of the NSX environment.

In NSX, the management plane and control plane are converged. Each manager node in NSX is

Networking Page 186


an appliance with converged functions, including management, control, and policy.

Data Plane

The data plane, also called the forwarding plane, performs the following functions:

• Forwards the user traffic between the networking devices, such as switches or routers
• Carries the user traffic from one device to another device, which is the fundamental
function of a network

The control and management planes help the data plane to perform effective data forwarding.

In NSX, the data plane includes transport nodes. Transport nodes, such as ESXi hosts and NSX
Edge nodes, are responsible for the distributed forwarding of network traffic.

The data plane includes a virtual distributed switch managed by NSX-T (N-VDS), which
decouples the data plane from vCenter Server and normalizes the networking connectivity. The
ESXi hosts managed by vCenter Server can also be configured to use the vSphere Distributed
Switch (VDS) during the transport node preparation.

Cloud Consumption Plane

Although the consumption plane is not part of NSX-T Data Center, this plane provides
integration into cloud management platforms through the REST API, including VMware cloud
management platforms such as vRealize Automation:

• The consumption of NSX-T Data Center can be driven directly through the NSX UI.
• Typically, end users tie network virtualization to their cloud management plane for
deploying applications.

Knowledge Check: Networking Planes

What functions do networking planes perform?

Networking Page 187


Networking Page 188
Networking in the SDDC
Friday, January 20, 2023 1:50 PM

Learner Objective
After completing this lesson, you should be able to:

• Recognize key components of the NSX architecture in a cloud SDDC deployment

NSX provides consistent networking and security for cloud SDDCs and the on-premises
SDDC.

NSX provides scalable, easy-to-consume networking, and multiple connectivity options.

The network architecture of cloud SDDCs and on-premises SDDCs is similar.

Logical Switching: Segments

In NSX, segments connect VMs and containers regardless of their physical location. A segment,
also known as a logical switch, reproduces switching functionality in an NSX virtual
environment.

VMs communicate with each other when connected to the same segment. For example, you
can connect all web server VMs to the same segment so they can communicate with each
other and exchange information.

Logical Switching Components

Networking Page 189


Virtual Machines
You can connect virtual machines to a segment regardless of their physical location and the
type of hypervisor they are running on.

Containers
NSX segments provide connectivity for containerized applications.

Segment Profiles
Segment profiles include layer 2 networking configuration details. Segment profiles can be
applied at a port level or at a segment level.

You can configure multiple types of segment profiles such as IP Discovery, Spoof Guard,
Segment Security, and MAC Discovery.

Segment
The NSX-T Data Center logical switches are called segments:

• Segments separate networks and provide layer 2 connectivity to their attached VMs and
containers.

• VMs and containers can communicate with each other if they are connected to the same
segment.

• Each segment has a virtual network identifier (VNI), which is similar to a VLAN ID.
However, unlike VLANs, VNIs scale beyond the limits of VLAN IDs.
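The scale difference comes from the size of the ID field: VLAN IDs are a 12-bit field (IDs 1-4094 usable), while GENEVE-style VNIs are a 24-bit field (about 16.7 million values). A minimal Python sketch of the two ranges (the validation functions are illustrative, not part of any NSX API):

```python
# VLAN IDs use a 12-bit field; IDs 0 and 4095 are reserved.
VLAN_MIN, VLAN_MAX = 1, 4094

# GENEVE/VXLAN-style VNIs use a 24-bit field.
VNI_MAX = 2**24 - 1  # 16,777,215

def is_valid_vlan(vid):
    """True if vid fits in the usable 12-bit VLAN ID range."""
    return VLAN_MIN <= vid <= VLAN_MAX

def is_valid_vni(vni):
    """True if vni fits in the 24-bit VNI field."""
    return 0 <= vni <= VNI_MAX
```

So an identifier such as 5000 is impossible as a VLAN ID but trivially valid as a VNI, which is why overlay segments are not constrained by the 4094-network ceiling of VLAN-based designs.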

Virtual Distributed Switch


The virtual distributed switch managed by NSX (N-VDS) is a module deployed in all transport
nodes during NSX preparation, and it provides layer 2 functionality.

In vSphere 7 environments, ESXi hosts can use both N-VDS and VDS for layer 2 forwarding.

Networking Page 190



Transport Node
A transport node, such as an ESXi host, is responsible for forwarding the data plane traffic that
originates from VMs, containers, or applications running on bare-metal servers.

Uplinks
Uplinks are logical interfaces on the N-VDS/VDS.

Uplinks are used to connect the host physical NICs to provide external connectivity.

Logical Switching End-to-End Communication


NSX uses a tunneling encapsulation protocol called Generic Network Virtualization
Encapsulation (GENEVE) to encapsulate the virtual network traffic and carry it over the
physical network.

Each transport node is configured with a tunnel endpoint (TEP) that is used to encapsulate and
decapsulate the GENEVE traffic as it leaves or enters the host.

Tunnels are set up between TEPs.

VM frames are encapsulated with GENEVE tunnel headers and sent across the tunnel.

The GENEVE protocol provides L2 over L3 encapsulation of data plane packets.
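The encapsulation flow can be illustrated with a toy Python model. Real GENEVE is a binary header format carried over UDP destination port 6081; the dict fields and TEP addresses below are only stand-ins for those headers.

```python
GENEVE_UDP_PORT = 6081  # IANA-assigned destination port for GENEVE

def encapsulate(inner_frame, src_tep, dst_tep, vni):
    """Wrap a VM frame in an illustrative GENEVE-style outer header.
    The sending TEP adds outer IP/UDP headers plus the segment's VNI."""
    return {
        "outer_ip": {"src": src_tep, "dst": dst_tep},
        "outer_udp": {"dst_port": GENEVE_UDP_PORT},
        "geneve": {"vni": vni},
        "payload": inner_frame,
    }

def decapsulate(packet):
    """The receiving TEP strips the outer headers and recovers the VM frame."""
    return packet["geneve"]["vni"], packet["payload"]
```

The physical network only ever sees the outer IP headers between TEPs; the VM frame and its segment (identified by the VNI) travel unchanged inside the payload.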

VMs can communicate with each other if they are connected to the same segment.

But VMs might also need to communicate with VMs on different segments and with the
Internet.

Logical Routing: Gateways


NSX uses logical routers, called gateways, to connect different networks or segments, and to
provide Internet access to your applications.

Networking Page 191



NSX provides two types of gateways: Tier-1 and Tier-0

Tier-1 Gateways
Tier-1 (T1) gateways are typically used to connect VMs and containers that are attached to
different networks or segments.

The internal communication across segments is called east-west traffic.

Tier-1 gateway example

Tier-1 gateways have the following characteristics:

• Provide segment interconnection and separation

• Offer gateway services to the internal networks or segments

• Are implemented as a distributed solution across all participating transport nodes

• Do not use any dynamic routing protocols

• Connect to a Tier-0 gateway for external connectivity

Tier-0 Gateways
Tier-0 (or T0) gateways connect the virtual and physical networks to provide external

Networking Page 192


connectivity to all the containers and VMs that run in the data center.

Communication between the cloud SDDC and external networks, such as on-premises data
centers, the Internet, or public cloud services, is called north-south traffic.

Tier-0 gateway example

Tier-0 gateways have the following characteristics:

• Offer gateway services between NSX and the external networks.

• Require the deployment of one or more VMware NSX® Edge™ nodes to centrally
configure and manage the routing capabilities.

• Support static and dynamic routing protocols (BGP) toward the physical network.

• Support equal-cost multipath (ECMP) routing to load balance traffic and provide fault
tolerance.

Knowledge Check: Logical Routing

Which use cases apply to NSX logical routing? (Select two options)

You must provide external connectivity to VMs and containers.


You require intrinsic security for VMs connected to different segments.
You want to provide layer 2 connectivity between VMs and microservices.
Your organization must provide connectivity between VMs and containers that are
connected to different segments.

Networking Page 193


The T0 and T1 gateways are located on appliances called VMware NSX Edge nodes.

NSX Edge Nodes


NSX Edge nodes provide routing services and connectivity to external networks. NSX Edge
node appliances are created during an SDDC deployment.

Two NSX Edge nodes are created for high availability. NSX Edge nodes run in active-passive
mode, and the failover is handled by the NSX Edge nodes themselves.

Two NSX Edge node appliances are created during an SDDC deployment. Although not pictured here, each NSX
Edge node is connected to a different management segment to make the edge services highly available.

Features that are typically used when setting up network connectivity are DHCP and NAT.

What is DHCP?

With DHCP (Dynamic Host Configuration Protocol), clients can automatically obtain network
configuration settings such as IP addresses, subnet masks, default gateways, and DNS
configuration from a DHCP server.

DHCP makes it easier to manage IP addresses because IP addresses are assigned automatically
rather than manually. DHCP ensures that each client is assigned a unique IP address.

In the VMware Cloud SDDC, you can configure a DHCP server or a DHCP relay.

DHCP Server

A DHCP server handles DHCP requests from VMs that are attached to segments. The VM
becomes the DHCP client.

Networking Page 194



DHCP Relay

A DHCP relay forwards DHCP requests from VMs to external DHCP servers.
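The server behavior described above — each client gets a unique address, and a renewing client keeps its address — can be sketched with a toy Python pool. This is illustrative only and is not how the NSX DHCP service is implemented; the subnet and MAC addresses are made up.

```python
import ipaddress

class DhcpPool:
    """Toy DHCP server: hands out unique addresses from a subnet pool
    and returns the same address to a client that asks again."""

    def __init__(self, cidr, first_offset=10):
        hosts = list(ipaddress.ip_network(cidr).hosts())
        # Reserve the first few addresses (gateway, servers, and so on).
        self.free = [str(ip) for ip in hosts[first_offset:]]
        self.leases = {}  # client MAC -> leased IP

    def request(self, mac):
        if mac not in self.leases:   # renewals keep their existing address
            self.leases[mac] = self.free.pop(0)
        return self.leases[mac]

    def release(self, mac):
        self.free.append(self.leases.pop(mac))
```

Two different clients always receive distinct addresses, and asking again with the same MAC returns the original lease, which is the uniqueness guarantee the lesson describes.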

What is NAT?
NAT (network address translation) is a mechanism that maps private IP addresses to public IP
addresses.

A public IP address is unique, but IPv4 public addresses are limited because of the number of
devices on the Internet.

A private IP address is not globally unique and cannot be used to access the Internet directly.

NAT performs one-to-one mapping or one-to-many mapping, which allows computers that use
private IP addresses to access the Internet. The private IP addresses, which are used
internally, are not revealed.

Networking Page 195


NSX supports source NAT and destination NAT rules.

Source NAT

SNAT translates source IP packets from a private IP address to a known public IP address.

SNAT is used for traffic originating in the private network and reaching the Internet.

SNAT is automatically applied to all workloads in the SDDC to enable Internet access.

Destination NAT

DNAT translates the destination public IP address to a private IP.

DNAT is used for traffic originating on the Internet and reaching the private network.
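The two translations can be summarized in a few lines of Python. The packets are plain dicts and all addresses are illustrative; a real NAT device also tracks connection state, which this sketch omits.

```python
def snat(packet, public_ip):
    """Source NAT: outbound traffic leaves with the shared public address,
    hiding the internal private address."""
    translated = dict(packet)
    translated["src"] = public_ip
    return translated

def dnat(packet, port_map):
    """Destination NAT: inbound traffic to the public address is steered to
    a private address chosen by destination port (one-to-many mapping)."""
    translated = dict(packet)
    translated["dst"] = port_map[packet["dst_port"]]
    return translated
```

Outbound, a workload at 192.168.1.10 appears to the Internet as the public address; inbound, traffic to the public address on port 443 is forwarded to the private web server behind it.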

Networking Page 196


VMware Cloud on AWS Networking

The VMware Cloud on AWS SDDC includes management and compute segments.

Management Segments

Management segments handle traffic from your infrastructure, or management systems, such
as the vCenter Server appliance, NSX Manager appliance, Edge Node appliances, and ESXi
hypervisors.

In the VMware Cloud SDDC, management nodes are connected to management segments.

Management nodes include vCenter Server, NSX Manager, ESXi hypervisors, and NSX Edge
node appliances. Add-on services deploy other management appliances to the management
segments.

Networking Page 197


Compute Segments

Compute segments handle traffic from your workload systems. Workload VMs and containers
can be connected to one or more network segments.

In this example, the app servers are connected to App-Segment, the web servers are
connected to Web-Segment, and the database servers are connected to DB-Segment.

In a VMware Cloud on AWS SDDC, management segments are created and managed
by VMware. Also, a default compute segment is created by VMware. You can create
additional compute segments if necessary.

In a VMware Cloud on AWS SDDC, VMs, containers, appliances, nodes, and servers are split
between two types of Tier-1 gateways: management and compute.

Management (T1) Gateway

The management gateway (MGW) handles management, or infrastructure, traffic.

Management traffic includes vSphere management, VM provisioning, vSphere vMotion


migrations, vSAN, vSphere Replication, and logging.

Compute (T1) Gateway

The compute gateway (CGW) handles network traffic from workload VMs and containers.

Networking Page 198


For example, the compute gateway allows the web servers on one segment to connect to the
app servers on a different segment.

The Tier-0 gateway provides external connectivity to all the containers and VMs that run in
the VMware Cloud on AWS SDDC

In a VMware Cloud on AWS SDDC, VMware is responsible for deploying, managing,


and configuring the T0 gateway, the T1 gateways (management and compute), and
the edge node appliances.

Compute (Tier-1) Gateways


Every VMware Cloud on AWS SDDC is created with a standardized topology consisting of a
management gateway (MGW) and a compute gateway (CGW) for routing network traffic

Networking Page 199


management gateway (MGW) and a compute gateway (CGW) for routing network traffic
inside the SDDC. Customers create logical segments on the CGW to connect workloads to the
NSX overlay network in the SDDC.

Multiple Compute Gateways

You can create additional compute gateways in your SDDC. Use cases for multiple compute
gateways include the following:

• Disaster recovery testing

• Running applications with overlapping network addresses

Types of Compute Gateways

You can create additional CGWs as Routed, NATted, or Isolated CGWs.

Routed CGW

A routed CGW is connected to the NSX overlay network. Workload VMs behind a routed CGW
can communicate with other CGW workloads (including the workloads on the default CGW).

You can configure route aggregation to enable routed CGW workloads to communicate over
VMware Transit Connect/AWS Direct Connect (Intranet endpoint) or Connected VPC (Services
endpoint).

Only the explicitly configured addresses in route aggregation prefix lists are advertised
externally, giving you fine-grained control over reachability to workloads on additional CGWs.

NATted CGW

A NATted CGW requires NAT to be configured to ensure connectivity to the SDDC NSX overlay
network.

As with routed CGWs, workloads on NATted CGWs can communicate externally when using
route aggregation. Addresses behind the NATted CGW are not advertised, so overlapping CIDRs
can be created in the SDDC.

This capability is useful when supporting tenants or applications with overlapping IP addresses.
You can avoid renumbering (re-IP'ing) your applications when you migrate them to the cloud,
saving a significant amount of time, effort, and risk.

Isolated CGW

An isolated CGW is designed to be disconnected from the rest of the SDDC.

The isolated CGW serves as a local router without connectivity to the rest of the SDDC
networks or to the external environment. Workload VMs on isolated CGW subnets can
communicate among themselves but not to VMs on other CGWs.

The isolated CGW configuration is often used to simplify certain advanced use cases such as
disaster recovery (DR) testing.

By combining routed, NATted, and isolated CGWs, you can enable applications with
overlapping addresses and address multitenancy use cases in the SDDC.

Knowledge Check: SDDC Gateway

Which types of gateways can you find in the VMware Cloud on AWS SDDC? (Select two
options)

Control
Compute
Standard
Management
Distributed



Configuring Networking in VMware Cloud on AWS
Monday, January 23, 2023 7:40 AM

Learner Objectives

After completing this lesson, you should be able to:

• Create and manage network segments


• Create and manage DHCP profiles
• Configure network address translation

This lesson focuses on configuring networking on VMware Cloud on AWS.

For more information about configuring networking for other hyperscaler partners, you can
access the following resources:

Azure VMware Solution


Under how-to guides, see the "Configure Networking" and "Configure Internet Connectivity"
sections.

Google Cloud VMware Engine


Search the Google Cloud VMware Engine documentation for the desired networking topic.

Network and Security Configuration

You use the VMware Cloud console to configure and manage your NSX network configuration.

On the Networking & Security tab, you perform all networking configurations, with the
exception of connecting VMs to network segments.



The overview pane summarizes the network topology of the VMware Cloud on AWS SDDC called
VMware Cloud PCM.

Configuring Compute Segments

Compute segments provide network access to your workload VMs. Compute segments are also
referred to as logical networks.

To add a segment, you give the segment a name, specify the segment type, and enter the
subnet. The subnet must be specified as an IPv4 CIDR block.

Classless Inter-Domain Routing (CIDR) block is a method for allocating IP addresses and IP
routing.
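A CIDR block can be explored with Python's standard ipaddress module. A minimal sketch; the segment address below is an illustrative example, not a value from the SDDC:

```python
import ipaddress

# Example segment subnet in CIDR notation (illustrative address).
subnet = ipaddress.ip_network("192.168.1.0/24")

print(subnet.network_address)  # first address of the block
print(subnet.netmask)          # 255.255.255.0 for a /24
print(subnet.num_addresses)    # 256 addresses in a /24

# Check whether a workload VM address falls inside the segment.
vm = ipaddress.ip_address("192.168.1.2")
print(vm in subnet)            # True
```

The same module is useful when planning segments, because it rejects malformed CIDR blocks (for example, a /24 with nonzero host bits) with a `ValueError`.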

A VMware Cloud on AWS SDDC starts with a single default compute segment called sddc-cgw-
network-1.

Compute Segment Types

The VMware Cloud on AWS SDDC supports the following types of compute segments: routed,
extended, and disconnected.

Routed segment
• A routed segment is the default type. It has connectivity to other segments in the SDDC
and, through the SDDC firewall, to external networks.

Extended Segment
• An extended segment requires a layer 2 virtual private network (VPN), which provides a
secure communications tunnel between an on-premises network and one in your cloud
SDDC.
• An extended segment extends an existing L2 VPN tunnel, providing a single IP address
space that spans the SDDC and an on-premises network. An L2 VPN connection can be
used to migrate running VMs between SDDCs.

Disconnected Segment
• A disconnected segment has no uplinks associated with it and provides an isolated
network accessible only to VMs connected to it.
• This segment type can be useful for testing a disaster recovery solution. You can create
disconnected segments and use a VM-based router to provide internal connectivity
between the isolated networks. You can then verify that workloads and applications
connected to these isolated networks function as expected.
• Disconnected segments are created when needed by VMware HCX®. You can also create
them and convert them to other segment types.

Configuring DHCP

To configure DHCP, you must first create a DHCP profile. The DHCP profile identifies whether
you are using a DHCP server or DHCP relay.

After creating the profile, you assign it to either a segment or a Tier-1 gateway.

Step 1. Create a DHCP Profile



On the Networking & Security tab, click DHCP under System and click ADD DHCP PROFILE.

Step 2. Create a DHCP Server Profile

To create a DHCP server profile:


1. Enter a unique name to identify the DHCP server profile.
2. Select DHCP Server from the Profile Type drop-down menu.
3. Enter the IP address of the DHCP server in a CIDR format.
If no server IP address is specified, 100.96.0.1/30 is autoassigned to the DHCP server.
4. Edit the lease time in seconds. The default value is 86400.
5. Click SAVE.
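For reference, the profile created in these steps can also be expressed as a request body for the NSX Policy API. The sketch below only builds the body; the endpoint path in the comment and the exact field names are assumptions based on the NSX Policy API and should be checked against the documentation for your SDDC version:

```python
def dhcp_server_profile(name, server_address="100.96.0.1/30", lease_time=86400):
    """Build a DHCP server profile body (field names assumed from the NSX Policy API)."""
    return {
        "resource_type": "DhcpServerConfig",
        "display_name": name,
        "server_address": server_address,  # autoassigned default when not specified in the UI
        "lease_time": lease_time,          # seconds; the UI default is 86400
    }

profile = dhcp_server_profile("cgw-dhcp-server")
# A client would PATCH this body to a path similar to (illustrative, verify for your SDDC):
#   /policy/api/v1/infra/dhcp-server-configs/cgw-dhcp-server
print(profile)
```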

Step 3. Create a DHCP Relay Profile

To create a DHCP relay profile:


1. Enter a unique name to identify the DHCP relay profile.
2. Select DHCP Relay from the Profile Type drop-down menu.
3. Enter the IP address of the remote DHCP server. Both DHCPv4 and DHCPv6 servers are
supported. You can enter multiple IP addresses.
4. Click SAVE.

Step 4: Assign a Profile to a Segment



To assign a DHCP profile to a segment:

1. On the Networking & Security tab, click Segments under Network.


2. Click the vertical ellipsis icon next to the segment and click Edit.

Step 5: Choose a DHCP Profile

To configure DHCP settings for the segment:


1. Click EDIT DHCP CONFIG.
The Set DHCP Config window appears.
2. Select the DHCP type.
In this example, DHCP Relay is selected.
3. Select a DHCP relay profile.
4. Click APPLY.

Step 6: Assign a Profile to the Tier-1 Gateway



In the VMware Cloud SDDC, the Tier-1 gateway is the compute gateway.

To assign a DHCP profile to a Tier-1 gateway:


1. On the Networking & Security tab, click Tier-1 Gateways under Network.
2. Click the vertical ellipsis icon next to the compute gateway and click Edit DHCP
Configuration.

Step 7: Choose a DHCP Profile

To configure DHCP settings for the compute gateway:


1. Click Local | 1 Servers.
The Set DHCP Configuration window appears.
2. Select the DHCP type.
In this example, DHCP Server is selected.
3. Select a DHCP server profile.
4. Click SAVE.

Knowledge Check: Configuring DHCP

Which task do you perform before configuring the DHCP server on the compute gateway?
(Select one option)

Create a DHCP relay profile.


Configure DHCP server on a compute segment.
Create a DHCP server profile.
Configure the DHCP relay on the compute gateway.

Configuring SNAT

Source NAT (SNAT) is automatically configured when deploying a VMware Cloud on AWS SDDC.

The public IP address used by SNAT appears in the Overview pane under Default Compute
Gateway.

For outbound requests, by default, the workloads of the compute network use a dedicated NAT IP
address, shown as the Source NAT Public IP in the Overview pane.

Configuring DNAT
In the VMware Cloud console, you can create DNAT rules to forward traffic from external,
public IP addresses to internal, private IP addresses.

Creating a DNAT Rule



To create a DNAT rule for a VM on the compute network, you must ensure that the VM has a
public IP address. In this example, you request a public IP address from AWS.

In the DNAT rule, you must specify the public IP address for the VM and the internal IP address
of the VM. The public IP address is exposed to external networks and the internal IP address is
private to the compute network.

Step 1: Request a Public IP Address from AWS

You can request a public IP address from AWS to assign to a workload VM:

On the Networking & Security tab, click Public IPs under System and click REQUEST NEW IP.

Step 2: Add a Note about the Public IP Address



To add a note for your reference:
1. Enter an applicable note about the IP address. For example, enter the name of the
workload VM associated with the public IP address.
2. Click SAVE.

VMware Cloud on AWS provisions the IP address from AWS. Public IP addresses might incur
additional charges.

Step 3: View the Public IP Address



You view the public IP address that you requested. You use this address in your destination NAT
rule.

As a best practice, release the public IP addresses that are not in use.

Step 4: Create a DNAT Rule



On the Networking & Security tab, click NAT under Network and click ADD NAT RULE.

Step 5: Enter NAT Rule Information

In this example, you create a DNAT rule to direct HTTP traffic from the public IP address to the
VM whose internal IP address is 192.168.1.2. The name of this VM is Photo-App.
To create the NAT rule:
1. Enter the NAT rule name.
2. Enter the public IP address of the VM.
This is the public IP address that you requested earlier.
3. From the Service drop-down menu, select HTTP.
The public port automatically populates when you select the service.
Selecting a specific service such as HTTP, instead of All Traffic, creates an inbound (DNAT)
rule that applies only to traffic using that protocol and port.
4. Enter the internal IP address of the VM.
5. Click SAVE.
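The DNAT rule above maps roughly to a policy NAT rule body. This is a hedged sketch: the field names are assumptions based on the NSX Policy API, and the public IP shown is illustrative, not the address you would request from AWS:

```python
def dnat_rule(name, public_ip, internal_ip, service_path=None):
    """Build a DNAT rule body (field names assumed from the NSX Policy API)."""
    rule = {
        "resource_type": "PolicyNatRule",
        "display_name": name,
        "action": "DNAT",
        "destination_network": public_ip,   # address exposed to external networks
        "translated_network": internal_ip,  # private address on the compute network
    }
    if service_path:
        # Restrict the rule to one protocol/port, e.g. the predefined HTTP service,
        # instead of matching all traffic.
        rule["service"] = service_path
    return rule

rule = dnat_rule("Photo-App-DNAT", "54.214.29.42",  # illustrative public IP
                 "192.168.1.2", service_path="/infra/services/HTTP")
print(rule)
```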

Knowledge Check: NAT Rules



The screenshot shows a NAT rule for a VM called Web-Svr-01. Which statement accurately
describes the NAT rule? (Select one option)

The rule is an outbound (SNAT) rule.


The Web-Svr-01 public IP address 54.214.29.42 is translated to its internal IP address on
the compute network.
The IP address 192.168.100.5 is exposed to users on the Internet.
Both HTTP and HTTPS traffic is sent to port 443 through the Web-Svr-01 public IP address
54.214.29.42.



Networking Security
Monday, January 23, 2023 8:41 AM

Learner Objectives

After completing this lesson, you should be able to:

• Explain network firewalling and segmentation


• Recognize firewall options in a cloud SDDC

How do you maintain network security across private and public clouds?

Using NSX, you can set up gateway and distributed firewalls to protect your data center from
both external and internal threats.

Gateway Firewalls

Gateway firewalls prevent users from accessing your servers through the Internet.

A gateway firewall is used at the perimeter of the data center to protect traffic to and from
physical environments.

This traffic is called north-south traffic.


Gateway firewalls have the following characteristics:

• They use stateful firewall rules:

A stateful firewall monitors the state of active connections and uses this information to
determine which packets to allow through the firewall. Stateful firewall rules allow or
deny traffic based on the source, destination, and protocol or port combination of the
packet.

• They are independent of the distributed firewall in terms of policy and enforcement.

Distributed Firewalls

A distributed firewall protects traffic between virtual machines and containers in the data
center.

This traffic is called east-west or lateral traffic.

With a distributed firewall, you can define and enforce network security policies for every
individual workload in the environment.

The distributed firewall resides outside the VM guest OS.

It performs the following functions:

• Uses stateful firewall rules


• Controls the I/O path to and from the virtual NIC
• Monitors the state of active connections and uses this information to determine which
packets traverse the VM virtual NIC

Knowledge Check: Gateway and Distributed Firewalls

Which statements accurately describe gateway firewalls and distributed firewalls? (Select two
options)

Only gateway firewalls use stateful rules.


A gateway firewall protects north-south traffic.
A distributed firewall controls the I/O path to and from a VM's virtual NIC.
Gateway firewalls and distributed firewalls can share the same set of rules and policies.



Micro-Segmentation

By applying micro-segmentation, security administrators build security controls for each
individual workload based on its application requirements.

Micro-segmentation denies attackers the opportunity to pivot laterally in the internal network,
even after the gateway firewall is breached.

NSX micro-segmentation uses existing network infrastructure and prevents the lateral spread of
threats across an environment.

What Do You Think?

Which statements do you think accurately describe how micro-segmentation works in this
example? (Select three options)



The gateway firewall is responsible for east-west traffic.
Each VM and each service has a distributed firewall, and can be its own perimeter.
Security controls can only be created by department (Finance, HR, Engineering).
You can control communication to shared IT services (AD, NTP, DHCP, and so on).
In the distributed firewall for each VM, centralized rules specific to the requirements of
each VM are applied.

Micro-segmentation performs several functions:

• Logically divides a data center into distinct security segments, down to the individual
workload level
• Defines distinct security controls for, and delivers services to, each unique segment
• Attaches the centrally controlled and operationally distributed firewalls directly to each
VM

Micro-segmentation supports a zero-trust architecture for IT security.

The zero-trust model trusts nothing and verifies everything. It establishes a security perimeter
around each VM or container workload using a dynamically defined policy.

Micro-segmentation has several uses:

• Protect critical applications: Use security controls to protect each business-critical


application, and control communication to shared IT services, such as AD, NTP, DNS, and
so on.
• Secure virtual desktop infrastructure: Assign security controls to logical groups of your
virtual desktops and mobile devices.
• Create DMZs anywhere: Because you can assign security controls to an individual
workload, a DMZ can be defined for each different application.

NSX provides several advanced network security features.


One of these features is NSX Distributed IDS/IPS.

NSX Distributed IDS/IPS

VMware NSX® Distributed IDS/IPS™ (intrusion detection system/intrusion prevention system)
is an advanced threat detection engine for detecting lateral threat movement on east-west
network traffic across multi-cloud environments.

Distributed IDS uses network introspection to identify malicious intrusion attempts:

• Protects east-west traffic
• Detects layer 4 attacks
• Uses external signatures to identify malicious traffic

Distributed IDS is implemented across multiple ESXi hosts.

Distributed IDS enables security administrators to perform the following tasks:

• Identify security vulnerabilities in the workloads.


• Quarantine and increase the security based on the detected vulnerabilities.

Knowledge Check: Network Security Features

Which function does each security feature perform?



VMware Cloud on AWS Network Security
In a VMware Cloud on AWS SDDC, two types of gateway firewalls are available: management
and compute.

These firewalls examine all traffic into and out of the SDDC.

Management Gateway Firewall:



The management (tier-1) gateway firewall allows or denies network traffic to management
appliances and hosts.

Compute Gateway Firewall:

The compute (tier-1) gateway firewall allows or denies network traffic to the workload VMs.



Configuring Network Security in VMware Cloud on AWS
Monday, January 23, 2023 9:23 AM

Learner Objectives
After completing this lesson, you should be able to:

• Configure gateway firewall rules


• Configure custom services
• Configure distributed firewall rules

This lesson focuses on configuring network security on VMware Cloud on AWS.

For more information about configuring network security for other hyperscaler partners, you
can access the following resources:

Azure VMware Solution


Start by viewing the "Security recommendations for Azure VMware Solution" section in the
Azure VMware Solution documentation.

Google Cloud VMware Engine


For information about firewall rules, see the "Firewall tables" section in the Google Cloud
VMware Engine documentation.

Configuring Gateway Firewall Rules


The gateway firewall is stateful and protects all north-south traffic.

In the VMware Cloud on AWS SDDC, you configure firewall rules on the Tier-1 gateways:
Management and Compute.

Management Gateway Firewall:



Maintaining the safety and security of your SDDC management infrastructure is critical.

By default, the management gateway firewall blocks traffic to all management network
destinations from all sources. The rule called Default Deny All drops all network traffic.

You must add rules to allow secure traffic from trusted sources. For example, you should create
a rule that allows VMware vSphere® Client™ users to access VMware vCenter Server®.

The rule called vCenter Inbound is an example of such a rule. The vCenter Inbound rule allows
HTTPS traffic from MgmtGroup to vCenter Server. MgmtGroup is a group of IP addresses from
which you plan on using vSphere Client.

Compute Gateway Firewall:



By default, the compute gateway blocks traffic to all uplinks. The rule called Default Uplink
Rule drops all network traffic.

Add compute gateway firewall rules to allow traffic as needed. These rules specify actions to
take on network traffic from a specified source to a specified destination.

Demonstration: Creating Gateway Firewall Rules

Firewall rules are sets of instructions that determine whether the network traffic should be
blocked or allowed based on specific criteria.

All firewall rules can send logs to VMware vRealize® Log Insight Cloud™, if logging is enabled.

In the demonstration, a firewall rule is created for the compute gateway in a VMware Cloud on
AWS SDDC. This rule enables access to the Photo-App application. The rule allows HTTP traffic
from any source to the public IP address of the Photo-App VM.



1. In the VMware Cloud console browser tab, click Gateway Firewall under Security.
2. Select the Compute Gateway tab, if not already selected.
3. Create a firewall rule to allow HTTP traffic from any source to the public IP address of the
Photo-App-01 application.
a. Click ADD RULE.
b. Enter Photo-App-Public as the Name.
c. Leave Any as the value for Sources.
d. In the Destinations text box, click the edit icon.
e. Select the Photo-App check box and click APPLY.
f. In the Services text box, click the edit icon.
g. Select the HTTP check box.
h. Click APPLY.
4. Click PUBLISH to save the modifications to the firewall rule.
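As a rough equivalent of the rule built in this demonstration, the sketch below assembles a gateway firewall rule body. Field names, group paths, and the `scope` value are assumptions based on the NSX Policy API, not verified output from a live SDDC:

```python
def gateway_firewall_rule(name, sources, destinations, services, action="ALLOW"):
    """Build a compute gateway firewall rule body (fields assumed from the NSX Policy API)."""
    return {
        "resource_type": "Rule",
        "display_name": name,
        "source_groups": sources,            # ["ANY"] matches any source
        "destination_groups": destinations,  # e.g. a group holding the Photo-App public IP
        "services": services,                # e.g. the predefined HTTP service
        "action": action,
        "scope": ["/infra/labels/cgw-all"],  # applied-to uplinks; value is an assumption
    }

rule = gateway_firewall_rule(
    "Photo-App-Public",
    sources=["ANY"],
    destinations=["/infra/domains/cgw/groups/Photo-App"],  # illustrative group path
    services=["/infra/services/HTTP"],
)
print(rule)
```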

Custom Services
Firewall rules often apply to traffic from a network service. Many services are defined by
default.

A new SDDC includes inventory entries for most common network service types, but you can
add custom services if necessary.

For example, if you want to create a firewall rule for the AWS EFS (Elastic File System)
service, you must first add the AWS EFS service to the inventory.

Demonstration: Creating a Custom Service

In this demonstration, you create a custom service to use with VMware Cloud on AWS firewall
rules. This service is for Amazon EFS, using port 2049.

You create a custom service to use with VMware Cloud on AWS firewall rules.
1. In the VMware Cloud console browser tab, navigate to the SDDC summary page.
2. Click the Networking & Security tab.
3. Under Inventory, click Services.
4. Create a custom service for Amazon EFS connectivity using port 2049.
a. Click ADD SERVICE.
b. Enter AWS-EFS for the Name of the service.
c. Click Set Service Entries.
d. On the Port-Protocol tab, click ADD SERVICE ENTRY.
e. Enter EFS for the Service Entry Name.
f. In the Service Type drop-down menu, select TCP.
g. Leave the Source Ports text box empty.
h. In the Destination Ports text box, enter 2049.
i. Click APPLY.
j. Click SAVE.
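The AWS-EFS service defined above corresponds to a Service object with a single L4 port-set entry. The body below is a sketch; the field names are assumptions based on the NSX Policy API and should be verified before use:

```python
def l4_service(name, protocol, destination_ports):
    """Build a custom L4 service body (fields assumed from the NSX Policy API)."""
    return {
        "resource_type": "Service",
        "display_name": name,
        "service_entries": [{
            "resource_type": "L4PortSetServiceEntry",
            "display_name": name,
            "l4_protocol": protocol,                             # "TCP" or "UDP"
            "destination_ports": [str(p) for p in destination_ports],
            # Source ports are left unrestricted, matching the steps above.
        }],
    }

efs = l4_service("AWS-EFS", "TCP", [2049])
print(efs)
```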

Knowledge Check: Gateway Firewalls and Custom Services

As an administrator, you want to be able to access your vCenter Server instance using the
vSphere Client. Which option must you create to allow this access? (Select one option)

A custom service
A compute gateway firewall rule
A management gateway firewall rule

Configuring Distributed Firewall Rules

The distributed firewall is stateful and protects all east-west traffic.

Distributed firewall rules are grouped into policies, and policies are organized into categories.
Each category can contain one or more policies. Each policy can contain one or more rules.

On the Networking & Security tab, you can view, add, edit, and remove policies and their rules
in the Distributed Firewall pane.

• The All Rules tab is a read-only view of the policies and their rules.

• The Category Specific Rules tab (shown here) lets you view, add, and remove policies.

• Five categories are available. To add a policy to a category, you must first select a category
in this row.

In this example, the Application category is selected. This category contains seven rules,
indicated by the number in parentheses.

• The Application category contains three policies.

The number of rules in each policy is identified by the number in parentheses. For
example, 3-TIER POLICY contains three rules.

A policy can also apply to DFW, which means that the policy applies to all workloads. Or
the policy can apply to a specific group of VMs or containers.

Distributed Firewall Rule Categories

Categories are a convenient way to organize security policies. They are an organizational tool
only.

Each category has an intended use.

Category Evaluation Order Description
Ethernet First Contains all layer 2 policies. The policy rules apply to all layer 2
SDDC network traffic.
Emergency Second Contains temporary firewall policies that are applied in
emergency situations, such as blocking an attack on a web
server.
Infrastructure Third Contains policies that are specific to infrastructure components
such as vCenter Server, ESXi hosts, and so on. This category also
contains policies for defining access to shared services such as
Active Directory, DNS, NTP, DHCP, and backup services.
Environment Fourth Contains policies between security groups such as production
groups, development groups, or groups for specific business
purposes. For example, the production group cannot
communicate with the testing group. Or, the testing group
cannot communicate with the development group.
Application Last This category contains granular application policy rules, such as
rules between applications or application tiers, or rules between
microservices.

Firewall rules are enforced in the categories, from left to right (Ethernet > Emergency >
Infrastructure > Environment > Application), and top to bottom in each category.
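The enforcement order can be modeled as a first-match lookup: categories are walked in their fixed order, and rules within a category are checked top to bottom. This is a behavioral sketch only, not NSX code, and the implicit default action is assumed here to be drop:

```python
CATEGORY_ORDER = ["Ethernet", "Emergency", "Infrastructure", "Environment", "Application"]

def first_match(rules_by_category, packet):
    """Return the action of the first rule that matches, walking categories in order."""
    for category in CATEGORY_ORDER:
        for rule in rules_by_category.get(category, []):
            if rule["match"](packet):
                return rule["action"]
    return "DROP"  # assumed implicit default for this sketch

rules = {
    # Emergency rule blocking a compromised VM by source address.
    "Emergency": [{"match": lambda p: p["src"] == "10.0.0.66", "action": "DROP"}],
    # Application rule allowing HTTP.
    "Application": [{"match": lambda p: p["port"] == 80, "action": "ALLOW"}],
}

print(first_match(rules, {"src": "10.0.0.66", "port": 80}))  # DROP (Emergency wins)
print(first_match(rules, {"src": "10.0.0.5", "port": 80}))   # ALLOW
```

The first call shows why the Emergency category exists: even though an Application rule would allow the HTTP traffic, the earlier category blocks the compromised source first.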



Knowledge Check: Distributed Firewall Rules

One of your application VMs is compromised, and you want to temporarily block all traffic to
and from this VM so you can resolve the issue.

Into which category should you place this rule? (Select one option)

Ethernet
Emergency
Infrastructure
Environment
Application



Connecting Cloud SDDCs
Monday, January 23, 2023 9:52 AM

Learner Objectives

After completing this lesson, you should be able to:

• Recognize the options for connecting on-premises data centers and cloud SDDCs

In a multi-cloud environment, you want to use the most appropriate and secure options for
connecting your cloud environments, whether you're connecting an on-premises
environment to a public cloud, or you're connecting between public clouds.

You can connect cloud SDDCs in different ways and enable workloads to communicate in a
secure manner.

Methods for Connecting Cloud SDDCs

For workloads to communicate with each other, you must choose an appropriate connection to
use between your on-premises data center and your cloud SDDC, or between cloud SDDCs
(cloud to cloud).

Several types of connections are available:

• Public Internet connection


• Private route-based and policy-based IPsec virtual private network (VPN)
• Private L2 VPN
• High bandwidth, low latency connection

When do you typically use each connection type?

• Public Internet Connection - for public applications that share data publicly
• Private IPsec VPN - to secure connection between cloud SDDCs and on-premises data
centers
• Private L2 VPN - To migrate running VMs between SDDCs in different geographical
locations
• High Bandwidth, Low Latency Connection - VMware and its hyperscaler partners provide
connectivity solutions for high bandwidth, highly available, secure, low latency
connections

Public Internet Connection
Applications in a cloud SDDC can communicate with other on-premises applications through a
standard public Internet connection.

SDDCs can connect across the Internet through an Internet gateway.

Example Public Connection

This example shows a public Internet connection between an on-premises data center and a
VMware Cloud on AWS SDDC. The connection is over the Internet and through the Internet
gateway provided by AWS.

Public Internet connection between on-premises data center and a VMware Cloud on AWS SDDC

Creating a Public Internet Connection

You can create a public Internet connection to a cloud SDDC by performing the following steps:

1. Request a public IP address.


2. Create a NAT rule that links the public IP address to the application.
3. Create both inbound and outbound firewall rules to allow desired traffic to pass from the
network segment to the Internet, and vice-versa.

You must perform these steps, and others, if necessary, for the on-premises SDDC.

Private IPsec VPN


What Is IPsec VPN?



A virtual private network (VPN) creates a secure connection to another network over the
Internet. IP Security (IPsec) is a framework of protocols that are used together to set up
encrypted connections between devices.
So, IPsec VPN creates a secure communication channel over a public network, such as the
Internet, to interconnect your cloud SDDC with remote SDDCs. After the IPsec connection is
established, you can securely transmit encrypted data between the data centers over the
Internet.

Example Use Case for IPsec VPN

If you have a VMware Cloud on AWS SDDC, consider using IPsec VPN when you require
connectivity to an SDDC and AWS Direct Connect is not available in the desired region, but
the region has reliable Internet connectivity.
Performance requirements should be no greater than 3 to 4 Gbps peak total in both directions,
with some tolerance for latency.

IPsec VPN Types

IPsec VPNs can be route-based and policy-based. Either type of VPN provides a secure
connection to your SDDC over the Internet.

• Route-based
○ A route-based VPN creates an IPsec tunnel interface and routes traffic through it as
dictated by the SDDC routing table.

A route-based VPN provides resilient, secure access to multiple subnets. When you
use a route-based VPN, new routes are added automatically when new networks are
created.

Routes are learned dynamically over a special interface called virtual tunnel
interface (VTI) using Border Gateway Protocol (BGP). BGP is a dynamic routing
protocol used to exchange routes.

• Policy-based
○ A policy-based VPN creates an IPsec tunnel and a policy that specifies how traffic
uses it.

A policy-based VPN can be an appropriate choice when you have only a few
networks on either end of the VPN, or if your on-premises network hardware does
not support BGP.

Policy-based VPNs do not require a BGP configuration.

When you use a policy-based VPN, you must update the routing tables on both ends
of the network when new routes are added.

Example Policy-Based IPsec VPN

In this example, a policy-based IPsec VPN is created between the Tier-0 gateway in a VMware
Cloud on AWS SDDC and the VyOS gateway appliance in the on-premises data center.

Policy-based IPsec VPN between the Tier-0 gateway in a VMware Cloud on AWS SDDC and the VyOS
gateway appliance on premises

Demonstration: Configuring a Policy-Based IPsec VPN

In this demonstration, a policy-based IPsec VPN is created to allow a VMware Cloud on AWS
SDDC (called demo01) to securely connect over the Internet to the on-premises data center.
The connection is established from the T0 gateway in the demo01 SDDC to the VyOS gateway in
the on-premises SDDC.

The following options are configured in this demonstration:

• VPN Name: On-Prem-VPN


• Local IP Address: Public IP1 (44.229.180.55)
This address is the VPN Public IP address of the T0 gateway in the demo01 SDDC. You can
view VPN Public IP on the Overview pane in the Networking & Security tab.
• Remote Public IP: 192.168.101.3
This public IP address is for the VyOS gateway in the on-premises data center.
• Remote Networks: 172.20.10.0/24 and 172.20.11.0/24
These networks are in the on-premises data center that the VPN can connect to.
• Local Networks: sddc-cgw-network-1 and Infrastructure Subnet
These networks are in the demo01 SDDC that this VPN can connect to.
• Preshared Key: VMware1!
This key must be identical for both ends of the VPN tunnel.
• Remote private IP: 172.20.0.254
If your on-premises gateway is behind a NAT device, this address is the on-premises
gateway private IP address.
• IKE Type: IKE V1
This protocol is used to set up a secure communications channel between the two SDDCs.

1. In the VMware Cloud console browser tab, navigate to the SDDC summary page.
2. On the Networking & Security tab, click VPN under Network.
3. Select the Policy Based tab.
4. Create a policy-based VPN.
a. Click ADD VPN.
b. Enter On-Prem-VPN for the VPN Name.
c. In the Local IP Address drop-down menu, select Public IP1.
d. In the Remote Public IP text box, enter the on-premises public IP address that you
recorded to your text file earlier.
e. In the Remote Networks text box, enter 172.20.10.0/24 and click Add Item(s).
f. In the Remote Networks text box, enter 172.20.11.0/24 and click Add Item(s).
g. For Local Networks, select sddc-cgw-network-1 and select Infrastructure Subnet.
h. Enter VMware1! in the Preshared Key text box.
i. Enter 172.20.0.254 in the Remote Private IP text box.
j. In the IKE Type drop-down menu, select IKE V1.
k. Click SAVE.
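Before saving a policy-based VPN such as this one, it helps to confirm that the local and remote prefixes in the policy do not overlap, because overlapping selectors make the policy ambiguous. A quick check with Python's standard ipaddress module (the remote prefixes are the ones from the demonstration; the local prefixes are illustrative placeholders, since the demo selects named segments rather than CIDRs):

```python
import ipaddress

remote = ["172.20.10.0/24", "172.20.11.0/24"]   # remote networks from the demo
local = ["192.168.1.0/24", "10.2.0.0/16"]       # illustrative SDDC-side prefixes

def overlapping_pairs(local_nets, remote_nets):
    """Return (local, remote) pairs whose prefixes overlap."""
    pairs = []
    for l in local_nets:
        for r in remote_nets:
            if ipaddress.ip_network(l).overlaps(ipaddress.ip_network(r)):
                pairs.append((l, r))
    return pairs

print(overlapping_pairs(local, remote))  # an empty list means the selectors are unambiguous
```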

Private L2 VPN
You use a private layer 2 (L2) VPN to extend an on-premises network to your cloud SDDC. This
extended network is a single subnet with a single broadcast domain.

You can use L2 VPNs to migrate VMs to and from your cloud SDDC, for disaster recovery, or
for dynamic access to cloud computing resources (often called cloud bursting).

VM migrations across an L2 VPN support VLAN tagging and GENEVE frame encapsulation
when migrating from one cloud SDDC to another SDDC.

The L2 VPN tunnel extends layer 2 networks across geographic sites. VMs can move across
sites (using vSphere vMotion) and keep the same IP addresses using an L2 VPN.

Example L2 VPN

In this example, an L2 VPN is created between the Tier-0 gateway in a VMware Cloud on AWS
SDDC and the autonomous NSX Edge appliance in the on-premises data center.

An autonomous NSX Edge appliance is simple to deploy and provides a high-performance VPN.

You do not need NSX on premises to use an L2 VPN. You can download the autonomous NSX
Edge appliance and configure it as the client-side component of your L2 VPN.

You can extend up to 25 of your on-premises networks with L2 VPN.



An L2 VPN between the Tier-0 gateway in the VMware Cloud on AWS SDDC and the autonomous NSX
Edge appliance on-premises

Knowledge Check: Cloud Networking Options

You want to migrate a VM (using vSphere vMotion) across SDDCs and allow this VM to keep the
same IP address. Which connection type should you use? (Select one option)

Private L2 VPN
Private route-based IPsec VPN
Private policy-based IPsec VPN
Public Internet connection

High Bandwidth, Low Latency Connectivity

VMware and its hyperscaler partners provide connectivity solutions that are highly available,
secure, high bandwidth, and low latency:

• VMware Cloud on AWS provides AWS Direct Connect.


• Azure VMware Solution provides ExpressRoute.
• Google Cloud VMware Engine provides Cloud Interconnect.

AWS Direct Connect


For information on AWS Direct Connect, see the Connectivity Solutions for VMware Cloud on
AWS lesson.

Azure ExpressRoute
For information on ExpressRoute, see the Azure VMware Solution networking and
interconnectivity concepts section in the Azure VMware Solution documentation.

Google Cloud Interconnect


For information on Cloud Interconnect, see the Private cloud networking for Google Cloud
VMware Engine section in the Google Cloud VMware Engine documentation.



Connectivity Solutions for VMware Cloud on AWS
Monday, January 23, 2023 10:15 AM

Learner Objectives

After completing this lesson, you should be able to:

• Describe how AWS Direct Connect is used to connect SDDCs together


• Describe how VMware Transit Connect is used to connect SDDCs together

When connecting VMware Cloud on AWS SDDCs, you can use the following solutions,
depending on your goals:

AWS Direct Connect connection with private VIF


To obtain faster speed and lower latency connections for cold migrations, live migrations, and
ESXi management traffic.

VMware Transit Connect


To enable high-bandwidth, low-latency connections between VMware Cloud on AWS SDDCs,
virtual private clouds (VPC) and on-premises data centers.

AWS Direct Connect


AWS Direct Connect (DX) creates a dedicated network connection from an on-premises data
center to an AWS region.

Rather than using only a VPN tunnel over the public Internet, DX uses a dedicated leased
connection (private line) to connect the on-premises data center to an AWS DX location.

Ports are available with a speed of 1 Gbps and 10 Gbps, and you can order multiple ports.

AWS DX charges per port hour (charges vary per port speed) and per gigabyte of data
transferred, both in and out. Charges vary between locations. Pricing does not include the cost
of the dedicated network connection.

For more information, access the Amazon website.
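As a back-of-the-envelope illustration of the per-port-hour plus per-gigabyte billing model described above, here is a sketch with made-up rates; actual AWS DX pricing varies by port speed and location:

```python
# Back-of-the-envelope AWS DX cost model: per port-hour plus per-GB
# data transfer. The rates below are placeholders, NOT AWS prices;
# actual charges vary by port speed and location.
port_hour_rate = 2.25    # USD per port-hour (placeholder)
data_out_rate = 0.02     # USD per GB transferred out (placeholder)

hours_per_month = 730    # roughly 8760 hours / 12 months
data_out_gb = 5000       # GB transferred in a month (example volume)

monthly_cost = port_hour_rate * hours_per_month + data_out_rate * data_out_gb
print(round(monthly_cost, 2))  # -> 1742.5
```

Remember that the cost of the dedicated network connection itself is on top of these charges, as noted above.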

Examples of AWS Direct Connect

With AWS DX, network traffic is isolated and bandwidth is, potentially, increased between the
on-premises data center and AWS resources.

Examples


Japan
An AWS DX service in Japan includes the following connections:

• An on-premises data center in Kobe connects through a dedicated line to an AWS DX


location in Osaka.
• The AWS DX location in Osaka connects through a dedicated line to the AWS region in
Tokyo.

USA
An AWS DX gateway is used to logically extend an AWS DX connection from one AWS region to
another without creating an extra private connection to AWS.

For example, an AWS DX gateway service to multiple AWS regions in the United States includes
these connections:

• An on-premises data center in Palo Alto connects through a dedicated line to an AWS DX
location in Portland.
• The AWS DX location in Portland connects through a dedicated line to the AWS region in
Oregon.
• The AWS region in Oregon is connected through an AWS DX gateway to the AWS region in
northern Virginia.



Establishing an AWS DX Connection

You can establish an AWS DX connection in different ways:

• Using an AWS DX partner


• Using a private connection from your on-premises data center to an AWS DX location
• Connecting at the AWS DX location with a colocated SDDC

For more information about partners, access the AWS Direct Connect Delivery Partners
webpage.

For more information about locations, access the AWS Direct Connect Locations webpage.

AWS Direct Connect: Connections and Routing

With AWS Direct Connect, you must identify the type of connection to use. You can use either
dedicated ports or hosted connections.

Dedicated ports
Dedicated ports provide the highest port speed that is available. These ports are assigned and
dedicated to a single customer.

You receive a letter of authorization about the port.

You can use multiple virtual interfaces to load-balance your traffic across the aggregated links.

The possible values for port speed are 1 Gbps, 10 Gbps, and 100 Gbps.

Hosted Connections
Hosted connections are provided by an AWS DX partner and have defined bandwidth and
VLANs.

You get a single virtual interface rather than multiple virtual interfaces.


The possible values are 50 Mbps, 100 Mbps, 200 Mbps, 300 Mbps, 400 Mbps, 500 Mbps, 1
Gbps, 2 Gbps, 5 Gbps, and 10 Gbps. Only those AWS DX partners who have met specific
requirements may create a 1 Gbps, 2 Gbps, 5 Gbps or 10 Gbps hosted connection.

This type of connection is simpler and easier to use, especially if you don't require a 1 Gbps
connection or cannot assume the full cost of a dedicated port.

Routing Protocol

AWS DX uses BGP routing.

BGP runs on TCP port 179. It has neighbors that exchange routing information over a peering
session.

BGP is the only protocol that AWS supports for exchanging routes. Static routing is not
allowed.

Example BGP Routing

In this example, two routers use BGP to exchange routing information.

Router A and Router B advertise routes to each other so that Autonomous Systems 64512 and
65001 can communicate with each other.

An autonomous system is a collection of networks, or more precisely, the routers joining these
networks, that are under the same administrative authority and that share a common routing
strategy.

All route-based VPNs in the SDDC default to Autonomous System Number (ASN) 65000, so you
must change the local ASN. The local ASN must be different from the remote ASN.
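The ASN rules above lend themselves to a quick pre-flight check. The sketch below assumes both ends use 16-bit private ASNs (64512-65534 per RFC 6996), which matches the examples in this lesson:

```python
SDDC_VPN_DEFAULT_ASN = 65000  # default for route-based VPNs in the SDDC

def check_asns(local_asn, remote_asn):
    """Return a list of problems with a proposed BGP ASN pair."""
    problems = []
    if local_asn == remote_asn:
        problems.append("local ASN must differ from the remote ASN")
    for label, asn in (("local", local_asn), ("remote", remote_asn)):
        # 64512-65534 is the 16-bit private ASN range (RFC 6996);
        # this sketch assumes private ASNs on both ends.
        if not 64512 <= asn <= 65534:
            problems.append(f"{label} ASN {asn} is outside the private range")
    return problems

print(check_asns(65000, 65000))  # default on both ends: one must change
print(check_asns(64512, 65001))  # the pair from the example: -> []
```

Running the check before configuring the peering session catches the "both ends left at 65000" mistake described above.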

Knowledge Check: AWS DX Connection Types

You want to connect from your on-premises data center to your VMware Cloud on AWS SDDC.
You want to use the highest port speeds available and you want to load-balance traffic over
multiple virtual interfaces.

What type of connection should you use? (Select one option)

Dedicated ports
Hosted Connections

How does connectivity between SDDCs work with AWS DX connections?

AWS Direct Connect Topology Between SDDCs

In the Japan data centers example, the on-premises data center in Kobe connects through a
dedicated, private line to an AWS DX location in Osaka.

Osaka connects through a dedicated line to the AWS region in Tokyo.

Both private and public AWS Direct Connect connections are used to connect between data
centers:
• The blue line (B) represents a private AWS Direct Connect connection, which can be used
for AWS resources. However, in this case, the connection is used to securely connect to
the VMware Cloud on AWS SDDC.
• The green line (G) represents a public AWS Direct Connect connection, used for private
and, potentially, faster access to AWS resources.

Do you know the difference between these private and public connections, which are also
known as private and public VIFs?



Private VIFs and Public VIFs
To establish an AWS DX connection, you must create a virtual interface (VIF). A VIF can be
private or public.

Private and public VIFs establish private dedicated connections to the AWS backbone.

Private VIF

• Connects to existing AWS VPCs
• Uses the private IP address space and terminates at the customer VPC level
• Provides reliable connectivity with dedicated network performance to connect directly to
the customer VPC
• AWS only advertises the entire customer VPC CIDR through BGP

Public VIF

• Uses the public IP address space and terminates at the AWS region level
• Provides reliable connectivity with dedicated network performance to connect to AWS
public endpoints, for example, S3 and DynamoDB
• Customers receive Amazon global IP routes through BGP, and they can access publicly
routable Amazon services

Example AWS Direct Connect Private VIF

Private VIF Connection


A private VIF connects to existing AWS VPCs.

The private VIF connects the on-premises data center through an AWS DX connection into the
private VPCs where the SDDCs are located.

Private VIF BGP Peering Session


The logical view of the private VIF shows the BGP peering session between the on-premises
data center and the SDDC in the private VPC.



AWS Direct Connect VIF Configuration
High-level steps you take to create an AWS DX private VIF

Step 1: Create a Private VIF



In the AWS console, select Private in the Create a Virtual Interface window.

Configure the following settings:

• Name of the interface


• Interface owner: Your account or another account (hosted VIF)
• VLAN
• Whether peer IPs should be autogenerated
• BGP AS number
• Whether the BGP Key should be autogenerated
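The Step 1 form fields map onto a request payload roughly like the following sketch. The key names are modeled on the AWS SDK (boto3) Direct Connect API, but treat the exact schema as an assumption and check the SDK reference before use:

```python
# Sketch of a private VIF request assembled from the Step 1 form fields.
# Key names are modeled on the AWS Direct Connect API as exposed by
# boto3's create_private_virtual_interface; treat the exact schema as
# an assumption.
new_private_vif = {
    "virtualInterfaceName": "sddc-private-vif",  # name of the interface
    "vlan": 100,                                 # VLAN for the VIF
    "asn": 65010,                                # your BGP AS number
    # Leaving the peer IPs and BGP key out corresponds to letting AWS
    # autogenerate them, as in the console form:
    # "amazonAddress": ..., "customerAddress": ..., "authKey": ...,
}

# With boto3 this would be submitted roughly as (not executed here):
# client.create_private_virtual_interface(
#     connectionId="dxcon-xxxxxxxx",
#     newPrivateVirtualInterface=new_private_vif,
# )
print(sorted(new_private_vif))  # -> ['asn', 'virtualInterfaceName', 'vlan']
```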

Step 2: Accept the Configured Connection

After you request a link from the AWS partner (Equinix, in this example), you can view the
connection in the AWS Direct Connect console window.

You must accept the configured connection.

Step 3: Wait for Approval of the Connection

After you accept the configured connection, the status changes to pending until the connection
is approved and initialized.

When the approval process is complete, the status changes from pending to available.


Step 4: Verify that VIF Configured Successfully

Open the VMware Cloud SDDC console. In the Networking & Security tab, click Direct
Connect under System.

Verify that the State is Attached and the BGP Status is Up.

For further details on the functionality of AWS Direct Connect, see the user guide.

Knowledge Check: Virtual Interfaces

Which statements accurately describe VIFs? (Select two options)

With a dedicated port connection, you can use multiple VIFs to load-balance traffic.
A private VIF uses a public IP address space and terminates at the customer VPC level.
With a hosted connection, you can use multiple VIFs for your 10G connections.
Both private and public VIFs establish private dedicated connections to the AWS
backbone.

VMware Transit Connect

VMware Transit Connect is a VMware managed connectivity solution between the VMware
Cloud on AWS SDDCs. With VMware Transit Connect, customers can build high-speed, resilient
connections between their VMware Cloud on AWS SDDCs and other resources.

VMware Transit Connect is implemented using the following constructs:

• SDDC Groups: An SDDC group helps you to logically organize SDDCs together to simplify
management. With SDDC groups, you can define a collection of SDDCs, virtual private
clouds (VPCs), or on-premises connectivity that need to interconnect.
• VMware Managed Transit Gateway (VTGW): VTGW is a managed service that provides
high bandwidth and low latency connectivity between SDDCs in an SDDC group within a
single AWS region. VTGW enables connectivity between SDDC groups and multiple AWS
native VPCs, as well as on-premises environments connected through an AWS Direct
Connect Gateway.

VMware Transit Connect service provides three primary connectivity models:

SDDC to SDDC
You can use the SDDC-to-SDDC model to create highly available SDDC-to-SDDC connectivity
across different AZs.

This topology shows three SDDCs in the same AWS region. Two of the SDDCs are members of
an SDDC group and can communicate through the high-speed VPC attachment created with the
VTGW.

SDDC to VPC
You can use the SDDC-to-VPC model to allow SDDC workloads to access AWS native services
across different native AWS VPCs.

This model supercharges the hybrid connectivity by reducing the reliance on VPNs to tie these
environments together.

This topology shows three SDDCs in the same AWS region. Two of the SDDCs are members of
an SDDC group and can communicate through the high-speed VPC attachment created with the
VTGW. These SDDCs can also communicate with native AWS VPCs in the region.

SDDC to On-Premises
You can use the SDDC to on-premises model to migrate or balance workloads from on-premises
to any of the SDDCs in the SDDC group.

This topology shows SDDC to on-premises connectivity. With VMware Transit Connect, a transit
VIF is used and can only be terminated between an AWS Direct Connect Gateway and a VTGW.

Direct Connect Gateways are not region-based but are a global construct, so you do not have
the same considerations for regional co-location that SDDCs and VPCs require.



Knowledge Check: VMware Transit Connect

Match each VMware Transit Connect connectivity model to its use case.



Network Monitoring Tools
Monday, January 23, 2023 12:20 PM

Learner Objectives

After completing this lesson, you should be able to:

• Recognize the tools provided by NSX to monitor networking in the SDDC

How do you monitor and troubleshoot SDDC networking and security across multi-cloud
environments?

NSX provides monitoring tools that you can access from the VMware Cloud console or from the NSX
Manager UI:

• IPFIX
• Port mirroring
• Traceflow

IPFIX: Capturing Network Flows

IPFIX (Internet Protocol Flow Information Export) is a standard for the format and export of
network flow information for troubleshooting, auditing, or collecting analytics information.

You monitor network traffic on a logical network or segment. You can monitor the amount of
network traffic generated between two VMs. All flows from the VMs connected to that
segment are captured and sent to the IPFIX collector.

The IPFIX collector receives and stores the flow of packets from the VMs. The collector can be
located on a compute segment or in the on-premises data center.

You define the network segments to monitor and the IPFIX collector to use in the IPFIX profile.

In this example, IPFIX is accessed from the VMware Cloud console. The IPFIX profile is configured to use
an IPFIX collector identified as Collector_1


Configuring IPFIX in a VMware Cloud on AWS SDDC

You can configure IPFIX from the Networking & Security tab in the VMware Cloud console.

To configure IPFIX, you must add an IPFIX collector. You also create an IPFIX profile. The profile
identifies the objects to collect packets from. For example, you might want to collect packets
from VMs on a particular segment.

Step 1: Add the IPFIX Collector

On the Networking & Security tab, click IPFIX under Tools, select the Collectors tab, and
click ADD COLLECTOR.

Step 2: Enter IPFIX Collector Information



Provide information for the required fields:

1. Enter a name for the collector.


2. Enter the collector IP address and port.
3. Click SAVE.

You can add up to four IPFIX collectors.

Step 3: Create an IPFIX Profile

In the IPFIX pane, click the Switch IPFIX Profiles tab and click ADD SWITCH IPFIX PROFILE.



Step 4: Enter IPFIX Profile Information

Enter information in the required fields:

1. Provide a name for the profile.


2. Set the packet sampling probability.
This setting is the percentage of packets that are sampled (approximately). The default
value of 0.1% has a low impact on performance.
3. Select an IPFIX collector configuration from the drop-down menu.
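To get a feel for the packet sampling probability setting in step 2, here is a quick sketch with a made-up traffic volume:

```python
# Expected export volume for a given IPFIX sampling probability.
# 0.1% is the default mentioned above; the packet count is invented.
sampling_probability = 0.001   # 0.1% of packets sampled
packets_observed = 2_000_000   # packets crossing the monitored segment

expected_sampled = int(packets_observed * sampling_probability)
print(expected_sampled)  # -> 2000
```

Raising the probability gives the collector a more complete picture of the flows at the cost of more processing overhead on the transport nodes.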

Step 5: Select Segments to Monitor



Apply the profile to one or more segments:

1. Click Set.
2. In the Applied To window, select the Segment category.
The categories are Segment, Port, or Groups.
3. Select one or more segments that you want to collect packets from.
The IPFIX profile is applied to the selected objects.
4. Click APPLY.
5. Click SAVE.

Step 6: View Network Flow

View the network packet flow from the user interface of the IPFIX collector that you configured.

Port Mirroring
Using port mirroring, you can replicate and redirect all the traffic from a source.

Mirrored traffic is sent encapsulated in a Generic Routing Encapsulation (GRE) tunnel to a
collector so that all the original packet information is preserved while it traverses the
network to a remote destination.

Port mirroring is used in the following scenarios:

• Troubleshooting: Analyze the traffic to detect intrusion and debug and diagnose errors on
a network.
• Compliance and monitoring: Forward all the monitored traffic to a network appliance for
analysis and remediation.

Port mirroring includes a source group where the data is monitored and a destination group
to which the collected data is copied.

In this example session, the source group is the compute segment that the web servers are
connected to. The destination group contains one or more VMs running the Wireshark
software.

Wireshark is used to mirror the web servers on the compute segment being monitored.

Configuring Port Mirroring in a VMware Cloud on AWS SDDC

You can configure port mirroring on the Networking & Security tab in the VMware Cloud
console.

To configure port mirroring, you create a port mirroring session. During the session, you
configure the direction of traffic being monitored, the source being monitored, and the
destination where the traffic is mirrored.

Step 1: Create a Port Mirroring Session



On the Networking & Security tab, click Port Mirroring under Tools and click ADD SESSION.

Step 2: Enter Name and Direction

Provide information about your session:

1. Enter a name for your session.
2. Choose the direction:
○ Ingress: Outbound network traffic from the VM to the segment
○ Egress: Inbound network traffic from the segment to the VM
○ Bi Directional: (default) Ingress and egress traffic

Step 3: Select the Source

Under Source, click Set and select the port mirroring source.

Sources can be segments, segment ports, groups of VMs, or groups of virtual NICs.

Source group membership requires that VMs are grouped according to workload, such as a web
group or application group.

Step 4: Select the Destination



Under Destination, click Set and select the port mirroring destination.

Destinations are groups of up to three IP addresses. You can use existing inventory groups or
create new ones.

Destination group membership requires that VMs are grouped according to IP addresses.

Click SAVE.

Traceflow
Traceflow observes a marked packet as it traverses the overlay network, and monitors the
packet until it reaches its destination.

You use Traceflow to inspect the path of a packet. With Traceflow, you can identify the path (or
paths) a packet takes to reach its destination or, conversely, where a packet is dropped along
the way.

Each entity reports the packet handling on input and output, so you can determine whether
issues occur when receiving a packet or when forwarding the packet.

Configuring Traceflow in a VMware Cloud on AWS SDDC

You configure Traceflow on the Plan & Troubleshoot tab in the NSX Manager UI.

If you have a VMware Cloud on AWS SDDC, you access the NSX Manager UI from within the
VMware Cloud console.


To configure Traceflow, you specify the IP address type, the traffic type, the protocol, the
source, and the destination.

Step 1: Open NSX Manager

From the VMware Cloud console, click OPEN NSX MANAGER.

Step 2: Select Traceflow

In the NSX-T user interface, click the Plan & Troubleshoot tab and click Traceflow.

Step 3: Configure Traceflow Information



Configure information to perform the trace:

1. Select an IPv4 or IPv6 address type.

2. Select the traffic type: Unicast, Multicast, or Broadcast.


Multicast and broadcast are not supported in a VMware Cloud environment.
3. Select the protocol.

4. Select the source and destination information according to traffic type.

For example, for unicast traffic, you can select VMs as the source and destination.

5. Click TRACE.
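The trace settings above amount to a small request description. The following is a sketch in Python; the field names are illustrative only, not the actual NSX Traceflow API schema:

```python
# Illustrative Traceflow request built from the settings above.
# Field names are for this sketch only, not the NSX API schema.
traceflow_request = {
    "address_type": "IPv4",      # IPv4 or IPv6
    "traffic_type": "Unicast",   # only unicast is usable in VMware Cloud
    "protocol": "TCP",
    "source": {"type": "VM", "name": "web-01"},       # placeholder VMs
    "destination": {"type": "VM", "name": "app-01"},
}

# Guard against the traffic types called out above as unsupported.
unsupported = {"Multicast", "Broadcast"}
assert traceflow_request["traffic_type"] not in unsupported, \
    "Multicast and broadcast are not supported in a VMware Cloud environment"
print(traceflow_request["traffic_type"])  # -> Unicast
```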

Step 4: View Traceflow Results



The output includes a graphical map of the topology and a table listing the observed packets.

Summary

You can use Traceflow for visibility and self-serve troubleshooting. With Traceflow, you can
inspect the path of a packet from source to destination in the SDDC.

For hands-on experience, see the lab VMware Cloud on AWS - Advanced Networking
(HOL-2387-05-ISM) at https://labs.hol.vmware.com and complete the following:

1. Microsegment Workload Traffic


You use the NSX distributed firewall feature in VMware Cloud on AWS to microsegment
workload traffic. The distributed firewall helps you to secure east-west traffic between VMs.

2. Configure Distributed IDS/IPS


You explore the NSX Distributed IDS/IPS feature, which is an intrusion detection and prevention
system for east-west network traffic. You define IDS/IPS profiles, generate malicious traffic, and
create a policy to prevent SQL injection.

3. Configure FQDN Filtering Firewalls


You define a distributed firewall policy to block user access to certain social media sites. To
create this policy, you add FQDNs to use in context profiles, create a context profile, add a
security group, and create a distributed firewall rule. You also test this rule.



Module Summary
Monday, January 23, 2023 12:59 PM

Review the key concepts covered in this module:

VMware network virtualization can be achieved using vSphere standard switches, vSphere
distributed switches, and NSX distributed switches.

In the VMware Cloud SDDC, logical switching is achieved using management and compute
segments. Logical routing is achieved using a T0 gateway and management and compute
T1 gateways. Logical routing functionality is implemented in NSX Edge nodes.

Management and compute gateway firewalls are used to protect north-south traffic.
Distributed firewalls, which support micro-segmentation, protect east-west traffic.

VMware Cloud SDDCs can communicate with remote SDDCs using public Internet
connections, private IPsec VPNs, and private L2 VPNs. Also, a hyperscaler partner offers
high-performance connections. For example, VMware Cloud on AWS offers AWS Direct
Connect for high-speed, low-latency connections.

The VMware Cloud console and NSX Manager interfaces provide tools such as IPFIX, port
mirroring, and Traceflow to monitor, analyze, and troubleshoot networking in the SDDC.

Additional Resources

For information about configuring networking and security in VMware Cloud on AWS,
see VMware Cloud on AWS Networking and
Security at https://docs.vmware.com/en/VMware-Cloud-on-AWS/services/vmc-on-aws-
networking-security.pdf.

For information about networking and security using NSX-T Data Center, see NSX-T Data
Center Administration Guide at https://docs.vmware.com/en/VMware-NSX-T-Data-
Center/index.html.

Networking with Other Hyperscaler Partners

VMware Cloud on Dell


The VMware Cloud on Dell EMC User Guide includes a section on Networking and Security

Azure VMware Solution


The Azure VMware Solution documentation page has a variety of resources, including
network design considerations, networking and interconnectivity concepts, and a tutorial
on configuring networking.

Google Cloud VMware Engine

The Google Cloud VMware Engine documentation includes sections on private cloud
networking and connecting to a private cloud.



VMware Cloud on AWS Onboarding and Setup
Tuesday, January 24, 2023 7:55 AM

Learner Objectives
After completing this lesson, you should be able to:

• Describe how VMware Cloud on AWS organizations work


• Create an organization owner account
• Invite a new user and configure user permissions
• Remove a user from the VMware Cloud organization
• Describe multifactor authentication with VMware Cloud on AWS

Options for Purchasing


You can purchase VMware Cloud on AWS from the following sources:

• VMware
• Amazon Web Services
• Managed service provider

When you purchase VMware Cloud on AWS through one of the available sources, the
purchase source becomes the seller of record.

The seller of record is responsible for billing the resources that are purchased.

Purchasing Through VMware

When you purchase VMware Cloud on AWS through VMware, VMware is the seller of record:

• Billing is done by VMware.
• VMware terms of service, payment methods, currencies, regions, discounts, and pricing
apply.
• You deploy and manage your VMware Cloud on AWS through the VMware Cloud console.


Onboarding Page 263


Purchasing Through Amazon Web Services

When you purchase VMware Cloud on AWS through AWS, AWS is the seller of record:

• Billing is done by AWS.
• AWS terms of service, payment methods, currencies, regions, discounts, and pricing apply.
• You deploy and manage your VMware Cloud on AWS infrastructure through the VMware Cloud
console.

Purchasing Through a Managed Service Provider

When you purchase through a managed service provider (MSP), the MSP handles billing,
support, deployment, and management of the VMware Cloud on AWS infrastructure.

Your MSP can provide more information about purchasing.

Consumption Model and Payment Methods

VMware Cloud on AWS provides flexible, consumption-based billing and payment options to
meet your needs.

Determining the Target Regions for Deployment

When selecting sites for deployment of VMware Cloud on AWS, you must evaluate several
factors.



Locality and data sovereignty: Do requirements apply for keeping workloads within certain
legal jurisdictions?

Latency: How far away, in terms of network latency, is the target site from the main user base?

Bandwidth: How much bandwidth is available for the target site? Are high-speed private lines
such as AWS Direct Connect available?

Geography: Do any geographic requirements need to be considered? For example, must the
target site be physically separate from other sites for fault tolerance?

Economics: Are pricing differences between regions a factor?

Reviewing the Onboarding Checklist


You can use the onboarding checklist in the VMware Cloud on AWS product documentation to
identify steps and resources for creating your first VMware Cloud on AWS software-defined
data center (SDDC).

AWS Regions

AWS runs data centers in many geographical locations, or regions, around the world.

Map of AWS Regions

• A region in AWS is a physical location where AWS data centers reside.


• AWS calls each group of logical data centers an availability zone (AZ).

• Each AWS region consists of multiple, isolated, and physically separate AZs within a
geographic area.
• Each AZ has independent power, cooling, and physical security, compliance, and data
protection.

AWS regions are continuously updated. For more information about AWS regions, access Global
Infrastructure on the Amazon Web Services website.

Knowledge Check: AWS Regions

True or False: All AWS regions operate independently.

True
False

Planning IP Addresses for AWS VPCs and VMware SDDCs


Before deploying your VMware Cloud on AWS SDDC, you must plan two separate blocks of IP
address spaces:

• SDDC management CIDR block


• VPC CIDR block
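Before deployment, you can verify that the two planned blocks do not collide. A minimal sketch using Python's standard ipaddress module, with placeholder CIDRs:

```python
import ipaddress

# Placeholder address plans; substitute your own blocks.
sddc_management_cidr = ipaddress.ip_network("10.2.0.0/16")
vpc_cidr = ipaddress.ip_network("172.31.0.0/16")

# The SDDC management block and the VPC block must not overlap,
# otherwise routing between the SDDC and the connected VPC breaks.
overlap = sddc_management_cidr.overlaps(vpc_cidr)
print(overlap)  # -> False
```

The same overlaps() check is worth running against any on-premises networks you plan to connect as well.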

Creating a VPC Using the Amazon Management Console



To create a VPC using the management console:

1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/
2. In the navigation pane, select Your VPCs > Create VPC.
3. Specify the following VPC details as required:
   a. Name tag
   b. IPv4 CIDR Block
   c. Tenancy
   d. (Optional) Add or remove a tag
4. Select Create VPC.

Simulation: Creating an Amazon VPC

In this interactive simulation, you create an Amazon virtual private cloud (VPC) that can be used
to deploy a VMware Cloud on AWS SDDC.

VMware Cloud Organizations


In VMware Cloud on AWS, an organization corresponds to a group or line of business that
subscribes to VMware Cloud on AWS services:

• Each organization includes one or more owners who can access all resources and services
of the organization and can invite additional users to the account.

• Organization members can create, manage, and access the organization's SDDCs. However,
they cannot invite new users.

• The VMware Cloud organizations that you create, or are a member of, have no relationship
to AWS organizations.

Creating an Organization Owner Account

Organization owners can invite additional owners and users to the account, manage access, or
remove users. They control access to VMware Cloud services, such as VMware Cloud on AWS.

To create an organization owner account:

1. Click the activation link that is sent to you by email after product purchase.
2. On the login page, enter the email address
associated with your My VMware account and
click Next.
3. Enter the password associated with your My
VMware account and click Log In.
4. Accept the service terms and conditions and
click Next.
5. Log in with your My VMware credentials.
6. If you are not redirected to the SDDC console,
log in at https://vmc.vmware.com

Adding and Removing Users

To use your SDDC within your organization, you must first assign users to the SDDC so that they
can provision and maintain workloads on the system.

Step 1: Inviting New Users



You must be an organization owner to invite additional users to your organization.

To invite additional users:

1. Log in to the SDDC console at https://vmc.vmware.com.


2. Click the Services icon in the upper-right corner.
3. Click Identity & Access Management.

Step 2: Inviting New Users

The Active Users view shows a list of all users currently in the organization.

To invite additional users to the organization, you click ADD USERS.

Step 3: Inviting New Users



To add email addresses and assign roles:

1. Enter an email address for each user, separated by a comma, space, or a new line.

2. Select the role to assign:


• Organization Owner
• Organization Member

3. (Optional) For an organization member, select the Support User check box if the user has
support duties.

By default, all organization owners are support users. The setting for organization owners
cannot be changed.

4. (Optional) Expand ADD SERVICE ACCESS and assign a role.

Step 4: Removing Users



You remove users on the Identity & Access Management page. You select the user and click
REMOVE USERS.

Step 5: Removing Users

In the dialog box, you click REMOVE to confirm the request.

Knowledge Check: Adding or Removing Users


True or False: Only organization owners can invite and add users to the organization.

True
False

About Multifactor Authentication

Multifactor authentication (MFA) is a security
enhancement that requires you to present two
pieces of evidence (your credentials) when you log
in:

• Something that you know, such as your password
• Something that you have, such as an application that generates a one-time passcode

You can secure your cloud account with MFA:

• Download an authentication application to your mobile device. This step creates a virtual
MFA device.
• The application generates a six-digit authentication code that is compatible with the time-
based, one-time password standard.

To log in to cloud services, you use the code generated by the application, with your VMware
ID and password.

Activating MFA is globally valid for VMware Cloud and My VMware for that email address.
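The six-digit codes follow the time-based one-time password standard (RFC 6238). The following stdlib sketch shows what an authenticator app computes from the shared secret and the clock; it is an illustration of the standard, not VMware's implementation:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Minimal RFC 6238 TOTP sketch: derive the current code from a
    Base32 shared secret and the Unix time (defaults to now)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, "sha1").digest()
    # Dynamic truncation per RFC 4226, then reduce to the digit count.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in Base32) at T=59:
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59))  # -> 287082
```

Because the code depends only on the shared secret and the current 30-second window, the server can verify it without any network round trip to the device.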

Enabling Multifactor Authentication

To enable MFA, you activate MFA on your VMware Cloud account. Recovery codes are
generated for you in case you cannot access your virtual MFA device. You can disable MFA at
any time, and you can also regenerate the recovery codes.

Step 1: Log In



To begin configuring your VMware Cloud account with MFA:

1. Log in to VMware Cloud services with your user name and password.
2. Click User and select My Account.

Step 2: Enable MFA



To enable MFA on your VMware Cloud account:

1. Click the Security tab.


2. Click ACTIVATE MFA DEVICE.

Step 3: Activate MFA



To activate MFA on your VMware Cloud account:

1. Enter the password for the user name.


2. Use the selected authentication application to scan the QR code shown or manually enter
a secret key.
3. Wait for the application to generate two consecutive passcodes.
4. Enter each passcode in turn.
5. Click ACTIVATE.

NOTE: After you click ACTIVATE, a list of 10 recovery codes appears.

Step 4: Recover Access



If you cannot access your virtual MFA device, use a recovery code to regain access.

The following conditions apply:

• You can use each recovery code only once.


• You cannot continue until you copy, download, or print the list of codes.

Maintain MFA

On the Security page, you configure MFA settings as required:

• To temporarily disable or enable MFA, click the toggle.

NOTE: Reenabling MFA does not require reconfiguring the device.

• To change or remove a virtual MFA device, click DEACTIVATE MFA DEVICE.

• To regenerate the recovery codes, click REGENERATE RECOVERY CODES.

Each setting is password-protected with the user password.

Knowledge Check: Multifactor Authentication

During the normal MFA-enabled login process, what information must you provide to log in to
your VMware Cloud account? (Select two options)

Account password
QR code
One-time password
Recovery code
PIN number



Azure VMware Solution Onboarding and Setup
Tuesday, January 24, 2023 9:10 AM

Learner Objectives:

After completing this lesson, you should be able to:

• Describe core Azure and Azure VMware Solution concepts


• Determine what planning considerations must be made in an AVS deployment
• Complete the AVS deployment process
• Configure ExpressRoute to connect on-premises SDDCs to Azure VMware Solution SDDCs.

AVS Deployment Deep Dive Series - Module 1: Planning and Design Considerations

Request Host Quota


Before you can deploy an Azure VMware Solution private cloud, you must request that host quota
be assigned to your Azure account. It can take up to five days for the hosts to be allocated within
the quota, so keep this in mind when planning the deployment.

If you plan to scale your cluster for future growth or disaster recovery use cases, consider
requesting the additional hosts in your initial quota request. You are not billed for these hosts
unless they are allocated to your account, and this will save time if you need to scale out quickly.

To request your host quota, open a support ticket by following these steps:

1. In the Azure portal, expand the upper left blade and select Help + Support
2. Click Create a support request
3. On the Basics tab, supply the following values:

• Summary: "Need capacity"


• Issue Type: Technical
• Subscription: The subscription you intend to deploy AVS into
• Service: All services
• Service Type: Azure VMware Solution
• Resource: General question
• Problem type: Customer Management Issues
• Problem subtype: Customer Request for Additional Host Quota/Capacity

4. Click Next: Solutions >> and then Next: Details >>


5. On the Details tab, provide the following information in the Description text box:

• Whether this deployment will be for a POC or Production


• The region you intend to deploy into
• The number of hosts required

6. Select whether you want to share diagnostic information, provide your preferred contact
method and contact info, then click Next: Review + create >>
7. Review the information, then click Create

Register the Microsoft.AVS Resource Provider


To use Azure VMware Solution, you must first register the resource provider with your
subscription.

1. Sign in to the Azure portal


2. On the Azure portal menu, select All services.
3. In the All services box, enter subscription, and then select Subscriptions.
4. Select the subscription from the subscription list to view.
5. Select Resource providers and enter Microsoft.AVS into the search
6. If the resource provider is not registered, select Register

Identify Network Requirements



At provisioning, an ExpressRoute circuit is created connecting the AVS private cloud to the
Microsoft Dedicated Enterprise Edge routers, allowing the AVS private cloud to connect to the
Azure backbone and access Azure services.

The AVS private cloud can be connected to an existing Azure VNet by way of an ExpressRoute
Gateway. The preferred method for connecting an AVS private cloud to an on-premises
datacenter is via ExpressRoute Global Reach. If an ExpressRoute circuit between the on-
premises datacenter and Azure is not available, a Site-to-Site VPN connection can be used.

AVS requires a /22 CIDR network that does not overlap with any existing network segments
that are deployed on-premises or in Azure. This network block is automatically carved up into
supporting subnets for management, provisioning, vMotion, and related purposes. Permitted
ranges for this address block are the RFC 1918 private address spaces (10.0.0.0/8,
172.16.0.0/12, and 192.168.0.0/16), with the exception of 172.16.0.0/16.

As an example, if the block 10.2.0.0/22 were provided, the following subnets would be created:

Purpose Subnet Example


Private cloud management /26 10.2.0.0/26
HCX Management Migrations /26 10.2.0.64/26
Global Reach Reserved /26 10.2.0.128/26
NSX-T DNS Service /32 10.2.0.192/32
Reserved /32 10.2.0.193/32
Reserved /32 10.2.0.194/32
Reserved /32 10.2.0.195/32
Reserved /30 10.2.0.196/30
Reserved /29 10.2.0.200/29
Reserved /28 10.2.0.208/28
ExpressRoute Peering /27 10.2.0.224/27
ESXi Management /25 10.2.1.0/25
vMotion Network /25 10.2.1.128/25
Replication Network /25 10.2.2.0/25
vSAN /25 10.2.2.128/25
HCX Uplink /26 10.2.3.0/26
Reserved /26 10.2.3.64/26
Reserved /26 10.2.3.128/26
Reserved /26 10.2.3.192/26
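The carve-up above can be sanity-checked with Python's `ipaddress` module. The subnet layout itself is fixed by AVS; this sketch only shows how the example addresses line up and how to confirm the /22 is non-overlapping:

```python
import ipaddress

block = ipaddress.ip_network("10.2.0.0/22")

# The first three /26 subnets match the table rows for private cloud
# management, HCX management migrations, and Global Reach.
first_26s = list(block.subnets(new_prefix=26))[:3]
for net in first_26s:
    print(net)  # 10.2.0.0/26, 10.2.0.64/26, 10.2.0.128/26

# The /22 must not overlap any existing network; checked here against
# a hypothetical on-premises range.
onprem = ipaddress.ip_network("10.1.0.0/16")
print(block.overlaps(onprem))  # False
```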

The AVS private cloud requires an Azure VNet. You can connect AVS to an existing Azure VNet
or create a new one. A non-overlapping IP range must be defined for the VNet, and a subnet
named GatewaySubnet must be created. The GatewaySubnet subnet should be a /27 network
or larger.
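A quick way to validate a planned GatewaySubnet against these two requirements, using example values that you would substitute with your own:

```python
import ipaddress

# Example planning values only -- substitute your planned VNet and subnet.
vnet = ipaddress.ip_network("10.3.0.0/16")
gateway_subnet = ipaddress.ip_network("10.3.255.0/27")

# The subnet must sit inside the VNet address range...
assert gateway_subnet.subnet_of(vnet)
# ...and be a /27 or larger (a larger subnet has a smaller prefix length).
assert gateway_subnet.prefixlen <= 27
print("GatewaySubnet plan OK")
```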

Two additional subnets should be defined as well. These will be used for a Jumpbox VM and for
the Azure Bastion Service for connectivity to the Jumpbox VM. The Bastion subnet must be
named AzureBastionSubnet.

AVS Deployment Deep Dive Series - Module 2: AVS Initial Deployment and Connectivity Demo

Deployment

Topics in this section address the deployment of the AVS private cloud, connecting the AVS
private cloud to an Azure VNet, and connecting the AVS private cloud to an on-premises data
center.

Deploy the Private Cloud

After host quota has been allocated, you can create your first AVS Private cloud by following
these steps:

1. Log into the Azure portal


2. Navigate to and open your Resource Group
3. Click Create
4. Type azure vmware solution into the search bar and select the Azure VMware Solution
item.
5. Click Create

6. The Create a private cloud wizard opens. The Prerequisites tab reminds us of the need to
have host quota assigned and a /22 network available. Click Next: Basics >
7. Subscription and resource group will be pre-populated with the appropriate values.
Provide values for the remaining fields:
• Resource name: A name for the AVS Private cloud object
• Location: The region in which host quota was assigned
• Size of host: The AVS node type. At the time of writing, AV36 is the only host type
available.
• Number of hosts: Select the number of hosts for the initial cluster
• Address block for private cloud: The /22 network to be assigned
8. Click Review + create
9. Review the settings specified and click Create. The deployment process may take up to
five hours to complete.

Configure Azure vNet and connect to AVS ExpressRoute

By default, there will be no connectivity between the AVS Private cloud and other Azure
resources deployed in your subscription. You can connect a new or existing Azure VNet to the
AVS Private cloud when the AVS deployment is complete. This VNet must have a subnet named
GatewaySubnet defined.

A Virtual Network Gateway will be created in this VNet and connected to the AVS ExpressRoute
connection, allowing communication between resources attached to this VNet and AVS VMs.
To create a new VNet, follow these steps:

1. Log into the Azure portal


2. Navigate to and open your AVS private cloud object
3. Click Manage > Connectivity
4. On the Azure vNet connect tab, click Create new under the Virtual network drop down.
5. Provide a VNet name, VNet address range, subnet names, and subnet address ranges. An
entry for GatewaySubnet will be pre-populated. Add additional rows for
AzureBastionSubnet and the Jumpbox VM subnet. Refer to table 3 for example values.
6. Click OK
7. Click Save. This operation will take several minutes to complete.

Peer on-premises networks with ExpressRoute Global Reach

ExpressRoute Global Reach allows you to connect your on-premises environment to your Azure
VMware Solution private cloud. ExpressRoute Global Reach peers the private cloud
ExpressRoute circuit with an existing ExpressRoute circuit connecting your on-premises and
Azure environments.

To complete this step, an existing, functioning ExpressRoute circuit must exist connecting the
on-premises environment to Azure. This will be referred to as “on-prem ExpressRoute.”
Additionally, all gateways must support 4-byte Autonomous System Numbers (ASNs).



Create an ExpressRoute authentication key for the on-prem ExpressRoute.

1. From the Azure Portal, navigate to the ExpressRoute circuits page and select the on-prem
ExpressRoute
2. Under Settings, select Authorizations
3. Enter a name for the new Authorization and click Save. The Authorization will begin
provisioning and should complete within a few minutes.
4. Copy the on-prem ExpressRoute Resource ID and the Authorization key. These will be
used to complete the peering.

Peer the AVS private cloud to on-prem ExpressRoute

1. From the Azure Portal, navigate to the Private cloud object and click Manage >
Connectivity > ExpressRoute Global Reach > Add
2. Enter the on-prem Resource ID and Authorization key created in the previous step, then
click Create. These operations will take a few minutes to complete.

Verify connectivity between on-premises networks and AVS networks

1. From the Azure Portal, navigate to the ExpressRoute circuits page, and select the on-prem
ExpressRoute
2. Under Settings, select Peerings
3. Click the Azure private row, then click View route table in the top menu
4. Examine the route table and confirm the AVS management networks and any NSX-T
segments are listed.
5. From your on-premises edge router, confirm routes exist to the AVS management
networks and any NSX-T segments.
6. From an on-premises device, attempt to access the AVS-hosted vCenter management
console.
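Step 6's manual access check can also be scripted. This sketch only probes whether a TCP session can be opened; the vCenter address shown is hypothetical and should be replaced with the one from your private cloud:

```python
import socket

def tcp_reachable(host, port=443, timeout=3.0):
    """Return True if a TCP session can be opened to host:port --
    a scripted stand-in for manually browsing to the vCenter console."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical vCenter address inside the AVS management /22:
print("vCenter reachable:", tcp_reachable("10.2.0.2", timeout=1.0))
```

A successful TCP connection confirms routing and firewall rules over Global Reach; a login via the browser still verifies the service itself.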

To see all the AVS Deployment Deep Dive Series:


Azure VMware Solution Deployment Deep Dive Series



Google Cloud VMware Engine Onboarding and Setup
Tuesday, January 24, 2023 9:10 AM

Learner Objectives
After completing this lesson, you should be able to:

• Describe core concepts of Google Cloud VMware Engine deployments


• Determine what prerequisites must be met in order to deploy an SDDC on Google Cloud
VMware Engine
• Configure a network connection between on-premises SDDCs and Google Cloud VMware
Engine SDDCs

Google Cloud VMware Engine Private Cloud Creation

Prerequisites for Creating a Google Cloud VMware Engine SDDC


You will need to designate several unique IP ranges to be used for SDDC infrastructure and
workloads, ensure the proper firewall ports are allowed to manage your SDDC, and prepare
your Google Cloud Platform environment before deploying an SDDC.

All of these prerequisites are detailed in the Google Cloud VMware Engine documentation.

Here is an overview of the required steps:



• Plan the IP ranges you will use with Google Cloud VMware Engine. These are all RFC 1918
private addresses. You will need ranges for each of the following:
○ vSphere and vSAN (/21 – /24 accepted). Depending on the size of the range you
choose, it will be divided into additional subnets for management, vMotion, vSAN,
and NSX.
○ HCX (/27 or higher)
○ Edge Services, required for client VPN and internet access (/26)
○ Client subnet, assigned to clients connecting via VPN Gateway (/24)
○ Workload subnets, which will be configured in NSX-T after your SDDC is deployed.
• Ensure your local firewall is configured for communication with vCenter and workload
VMs.
• Enable the VMware Engine API in your Google Cloud Project.
• Enable the VMware Engine node quota.

Once these are completed, you are ready to create your SDDC!
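Before submitting these ranges, it is worth verifying that each one is RFC 1918 private and that none of them overlap. A sketch with hypothetical planning values (substitute your own ranges):

```python
import ipaddress
from itertools import combinations

# Hypothetical planning values -- substitute your own ranges.
ranges = {
    "vsphere_vsan": "192.168.0.0/21",    # /21-/24 accepted
    "hcx": "192.168.8.0/27",             # /27 or higher
    "edge_services": "192.168.9.0/26",   # /26
    "client_subnet": "192.168.10.0/24",  # /24
}
nets = {name: ipaddress.ip_network(cidr) for name, cidr in ranges.items()}

# Every range must be RFC 1918 private...
assert all(net.is_private for net in nets.values())
# ...and no two ranges may overlap.
for (a, na), (b, nb) in combinations(nets.items(), 2):
    assert not na.overlaps(nb), f"{a} overlaps {b}"
print("ranges OK")
```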

Google Cloud VMware Engine Portal Overview

Creating a Google Cloud VMware Engine SDDC


Now that you have learned how to set up the prerequisites in order to enable your cloud
environment, you are able to create a Cloud SDDC in Google Cloud VMware Engine.

In this example you will learn how to create an SDDC in Google Cloud VMware Engine.



Step 1: Navigate to Google Cloud VMware Engine

To create a Google Cloud VMware Engine SDDC,


browse to Compute > VMware Engine in the Google
Cloud Platform Console. This will bring you to the
Google Cloud VMware Engine homepage.

Step 2: Create a Private Cloud



Click Create a Private Cloud to get started.

Step 3: Customizing Your Private Cloud



Specify your cloud name, location, node count, and predetermined network ranges. If you
cannot choose your desired region, ensure you have requested VMware Engine nodes quota for
that region.

Your quota will also determine how many nodes you can request. The minimum node count for
a production SDDC is three nodes.

After clicking Review and Create, you will be shown a confirmation page. Review your choices
and click Create.

Step 4: Provisioning Your Private Cloud



You will be taken to a summary page for your new cluster once provisioning begins. Note that
the state is Provisioning in the screenshot above. Provisioning takes between 30 minutes and 2
hours to complete, and in practice often finishes in just over 30 minutes.

You can click the Activity tab to view recent events, tasks, and alerts. Drilling into those will
provide specifics on any activity in your SDDC, including the provisioning process.

Google Cloud VMware Engine VPC Network Peering



Initial VPN Access
There are several ways to access your Google Cloud VMware Engine environment, including
Cloud Interconnect and Cloud VPN.

Initial VPN Set-Up

To establish initial connectivity to Google Cloud VMware Engine, a VPN gateway can be used.
This is an OpenVPN-based client VPN that will allow you to connect to your SDDC’s vCenter and
perform any initial configuration that you desire.

Step 1: Configuring Edge Services

Before the VPN gateway can be deployed, you will need to configure the “Edge Services” range
for the region where your SDDC is deployed. To do this, browse to Network > Regional settings
in the Google Cloud VMware Engine portal, and click Add Region.

Step 2: Setup VPN Gateway



Choose the region where your SDDC is deployed and enable Internet Access and Public IP
Service. Supply the Edge Services range you earmarked during planning and click Submit.
Enabling these services will take 10-15 minutes.

Once complete, they will show as Enabled on the Regional Settings page. Enabling these
settings will allow Public IPs to be allocated to your SDDC, which is a requirement for deploying
a VPN Gateway.

To begin the deployment, browse to Network > VPN Gateways and click Create New VPN
Gateway.

Supply the name for the VPN gateway and the client subnet reserved during planning and click
Next.

Step 3: Configure VPN Users



Choose specific users to grant VPN access, or enable Automatically add all users, and click
Next.

Step 4: Map Networks to VPN Gateway

Next, specify which networks to make accessible over VPN. I opted to add all subnets
automatically.


Click Next, and a summary screen will be displayed. Verify your choice and click Submit to
create the VPN Gateway.

Step 5: Verify VPN Creation

You will be returned to the VPN Gateways page, and the new VPN gateway will have a status
of Creating. Once the status shows as Operational, click on the new VPN gateway.

Step 6: Download OpenVPN Configuration

Click Download my VPN configuration to download a ZIP file containing pre-configured


OpenVPN profiles for the VPN gateway.

Profiles for connecting via UDP/1194 and TCP/443 are available. Choose whichever is your
preference and import it into OpenVPN, then connect.

In the Google Cloud VMware Engine portal, browse to Resources and click on your SDDC.

For complete GCVE Deployment videos:


Google Cloud VMware Engine



Accessing vCenter Server in the Cloud SDDC
Wednesday, January 25, 2023 8:58 AM

Learner Objectives

After completing this lesson, you should be able to:

• Configure access to the SDDC vCenter Server instance


• Connect to the SDDC vCenter Server instance

Connecting to the SDDC vCenter Server

You can connect to a VMware vCenter Server instance from a cloud SDDC. The example shows you how to
connect from a VMware Cloud on AWS SDDC.

Step 1: Log in to SDDC Console

To connect to the SDDC vCenter Server instance:


1. Log into the SDDC console at https://vmc.vmware.com
2. Click the OPEN VCENTER option.

Step 2: Configure Access Rules

Workload Management Page 295


If a VPN or management gateway firewall rule is not configured, you must create a rule to access vCenter Server.

Creating Gateway Firewall Rules


The Configuring Security in the VMware Cloud Console lesson provides details about creating gateway firewall
rules in a VMware Cloud on AWS SDDC.

Step 3: Connect to vCenter Server

After networking is configured, a dialog box with the default vCenter Server credentials appears. Use these
credentials and log in to the vCenter Server instance.

Demonstration: Connecting to the SDDC vCenter Server Instance



Transcript

You log in to the vSphere Client to view your new SDDC.

1. In the VMware Cloud console browser tab, click OPEN VCENTER in the top-right corner.
2. Click SHOW CREDENTIALS.
3. Click the Copy password to clipboard icon.
4. Click OPEN VCENTER.
5. Enter [email protected] in the User name text box.
6. In the Password text box, paste the password that you copied.
7. Click LOGIN.
8. If the following alarms or warnings appear, click Reset to Green for each one:
• Key Management Server Health Status alarm
• Skyline Health has detected issues in your vSphere environment
• Certificate Status alarm

Knowledge Check: Accessing the SDDC vCenter Server Instance

Which steps do you take to connect to the SDDC cloud instance?

Introduction to Virtual Machines
Wednesday, January 25, 2023 9:32 AM

Learner Objectives

After completing this lesson, you should be able to:

• Recognize the functions of virtual machine components


• Describe the function of a guest operating system
• Recognize the file types in virtual machine file structure

The way that you interact with a VM is similar to how you interact
with a physical machine.

You power on the VM. The OS loads. And you use a keyboard or a
mouse to interact with the OS and its applications.

VM Architecture

VMs use the same types of components as physical machines. Can you identify the layers in a VM?

VM Components

VMs provide the same functionality as physical machines because they use the same types of components.



Applications perform tasks using computer resources such as CPU, memory, and storage.

For example, a web server application in need of storage space requests the OS for the
required space. If this web server is running on a VM, the guest OS presents the application
with the storage space.

When a client requests access to websites, this web server responds to the client requests
without ever knowing that the OS is running on a VM.

The OS that is installed on a VM is called a guest OS. Similar to the OS on the physical
machine, the guest OS interacts with the VM hardware and allocates resources to the
applications on demand.

Multiple operating systems can run on a single server. For example, if two VMs are running
on a server, each guest OS can access only a subset of resources.

The guest OS presents those resources to the applications that it runs.

A driver is a software component that links a computer's hardware with the OS so that they
can communicate with each other.

For example, the OS comes with drivers for basic operations such as controlling the
keyboard. VMware VMs include VMware Tools, a bundle of drivers that help the guest OS
interact efficiently with the guest hardware.

The virtualization software abstracts the physical hardware and presents it as virtualized
resources to the VM.

The guest OS uses the virtualized hardware devices of the VM but is unaware that those
devices are virtual.

Examples of VM hardware devices include:

• CPU and memory devices


• Network adapters
• Disks and controllers
• Parallel and serial ports

Knowledge Check: VM Components


Which component of a virtual machine links the computer hardware and software? (Select one option)

Driver
Hardware
Application
Guest operating system

Guest OS

When you build a VM, you must find a guest OS that the VM supports.

The OS controls your business applications, so you must find an OS that works with these applications.

Stack of VM components

Installing the Guest OS

After you create the VM, you install a guest OS that meets your requirements.

Installing a Guest OS

Installing a guest OS in a VM is similar to installing an OS in a physical device. You can install the guest OS in multiple ways.
Access the link to learn more about installing a guest OS.

Knowledge Check: Guest OS

How many guest operating systems can run on a single physical server? (Select one option)

Only 1
2 to 5
Less than 10
Multiple guest operating systems

VM Encapsulation
In its most basic form, a VM is a set of files.

When you create a VM, VMware ESXi places the VM files encapsulated in one folder and stores
the folder in a datastore.

Multiple ESXi hosts can access this datastore. Any host accessing the
datastore can find the VM files, power on the VM, and run it.

Encapsulation makes VMs easier to manage.

For example, if you must reboot a host, you can move the VM to
another host that can access the same datastore.

Exploring VM Files

When you create a VM, ESXi creates a folder that is named after the VM. The files inside the folder share the name of the
VM, followed by an extension.

For example, when you create a VM called VM1, the folder in which it is placed is also called VM1. One of the files inside
that folder is called VM1.vmx.



Configuration File

Every VM has a file that describes the configuration of the VM.

The VM configuration file has the extension .vmx, for example, VM1.vmx

Swap Files

A swap file extends the VM's RAM when the RAM is fully used.

The swap files use the .vswp extension, for example, VM1.vswp or vmx-VM1.vswp.

BIOS File

A VM has a file that stores the BIOS settings even when the VM is turned off.

BIOS settings use the .nvram extension, for example, VM1.nvram.

Log Files

A VM uses a log file to record the activity of the VM. A VM keeps other log files to archive old log entries.

The log files take the .log extension:

• vmware.log
• vmware-1.log
• vmware-2.log

Template Configuration File

If a VM is converted to a template, a VM template configuration file replaces the VM configuration file (.vmx).

The template configuration file takes the .vmtx extension, for example, VM1.vmtx.

Disk Descriptor File & Disk Data File


A VM has two files for each virtual disk. The virtual disk files use the .vmdk extension.

Each virtual disk has a data file and a descriptor file:


• VM1-flat.vmdk stores the data that the VM writes to the disk.
• VM1.vmdk describes the structure of the data file.

Suspend State File

When you suspend a VM, a suspend state file records the state of the VM. When you resume the VM, the VM uses the file
to continue where it left off.

The suspend state file takes the extension .vmss, for example, VM1.vmss.

Knowledge Check: Purpose of VM Files


What is the purpose of each file in the VM file structure?



Creating Virtual Machines in Cloud SDDCs
Wednesday, January 25, 2023 10:18 AM

Learner Objectives

After completing this lesson, you should be able to:

• Create and manage virtual machines using different methods


• Use a vSphere content library to create a virtual machine in your SDDC
• Use the vSphere Client to upload content files to your SDDC
• Use the VMware Cloud Content Onboarding Assistant to transfer files to a VMware Cloud
on AWS SDDC

In a cloud SDDC, you can provision and transfer a large number of VMs in multiple ways. VM
provisioning must be optimized so that the cloud environment uses the available resources
effectively and functions productively.

Provisioning VMs

In a cloud SDDC, you can provision new VMs in different ways:

• Using the New Virtual Machine wizard


• Cloning a VM
• Deploying a VM from a template
• Using the Content Library

Provisioning Restrictions

Restrictions can apply. Check your hyperscaler partner documentation for details.

For example, the following restrictions apply to the placement of VMs in the VMware Cloud on
AWS SDDC:



• VMs cannot reside in the Management VMs folder or in the Discovered virtual machine
folder.
• VMs cannot use Mgmt-ResourcePool
• VMs cannot reside on vsanDatastore

How do you choose a VM creation method that meets your requirements?

• Create a VM from scratch


○ Create VMs with the Virtual Machine Wizard
• Create a copy of an existing VM
○ Clone a VM
• Create multiple VMs with the same configuration
○ Use VM Templates

Creating VMs with the New Virtual Machine Wizard

In the VMware vSphere Client, you can use the New Virtual Machine wizard to create a VM
from scratch.

Using the New Virtual Machine Wizard


Learn to configure a VM from scratch by accessing the VMware vSphere product
documentation.

Cloning an Existing VM

Cloning a VM creates a VM that is a copy of the original.

This method is useful when you want to make modifications to the production VM without affecting user access.

Cloning VMs
The Virtual Machine Management lesson discusses cloning and its use cases in more detail. If
you wish, click the link to go to this lesson now.

Deploying VMs from VM Templates



A VM template is a primary copy from which you can deploy multiple
VMs with the same configuration.

Typically, you use this method when you want to create multiple VMs
with the same configuration.

VM Templates
The Virtual Machine Management lesson discusses templates and their use cases in more
detail. If you wish, click the link to go to this lesson now.

Guest OS Customization

When cloning a VM or deploying it from a template, you must provide vCenter Server with information that establishes the guest operating system's unique identity, such as IP address, administrator password, computer name, and license settings.

Why should you customize your guest OS?

VMs with identical settings can conflict and create connection problems. To avoid these
conflicts, you customize the guest OS to make a VM unique.

For example, if two systems use the same IP address, a conflict arises and both systems are
unable to connect to the network.

A customization specification contains the information necessary to ensure that each guest
operating system instance is unique. Customization specifications are stored in the vCenter
Server database.
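The kind of per-clone uniqueness a customization specification provides can be sketched as follows. The naming scheme and IP pool here are hypothetical, chosen only to show that each clone gets a distinct identity:

```python
import ipaddress

# Hypothetical address pool and naming scheme for cloned VMs.
pool = ipaddress.ip_network("192.168.50.0/28").hosts()

def identity(base_name, index):
    """Build the unique values a customization spec would supply."""
    return {
        "computer_name": f"{base_name}-{index:02d}",
        "ip_address": str(next(pool)),
    }

clones = [identity("web", i) for i in range(1, 4)]
names = [c["computer_name"] for c in clones]
ips = [c["ip_address"] for c in clones]
# No duplicate names or IPs -- duplicates cause the conflicts described above.
assert len(set(names)) == len(names) and len(set(ips)) == len(ips)
print(clones)
```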

Demonstration: Customization Specifications

In vCenter Server, you can create a customization specification for either a Windows or Linux
guest OS.



Use a customization specification in these cases:

• Cloning a VM
• Deploying a VM from a template

Click the below picture to play a demonstration video to learn about how to create a
customization specification

Transcript

You create a new guest customization specification. When deploying virtual machines from a
template, we want to make sure that certain properties inside the template are unique. For
example, we want to make sure that the IP address assigned to the virtual machine is unique
and not duplicated across the network. To create a specification:

1. Go to Menu, and then select Policies and Profiles.


2. We can see the VM Customization Specifications. One customization already exists. We will
create a new one. We need to provide a Name. This specification will be used for Windows,
so we will call it win10.
3. Make sure that we have Windows selected.
4. Populate the information regarding the organization.
5. We can customize the system or guest hostname for the virtual machine. In this example,
we will provide the virtual machine name. If we wish, we can provide a license key.
6. We can specify a Password for the administrator account.
7. We can select if the administrator can login automatically.
8. We will select the time zone for our virtual machine.
9. On deployment, the guest customization can run scripts. If we have certain commands
that we want to run, we can place them in here.
10. We can specify whether we want to apply a static or DHCP-assigned IP address.
11. We can decide whether we want our virtual machines to be deployed to a workgroup or
specify a domain. If we are deploying to a domain, we must provide credentials that can
be used to authenticate against a domain to add a virtual machine.
12. Finally, we can review our settings. When we are satisfied everything is correct,
click Finish.

Knowledge Check: Guest OS Customization


When should you use a guest OS customization specification? (Select one option)

Cloning a VM
Creating a Template
Creating a VM from scratch
Both while cloning a VM and creating a template

Uploading ISO Images and Templates

You can upload ISO images, VMTX templates, and OVA or OVF templates to the SDDC in
different ways:

• Upload ISO images and OVA/OVF templates directly to a datastore in the SDDC.
For example, in a VMware Cloud on AWS SDDC, you upload files to the datastore called
WorkloadDatastore.
• Import the ISO images and OVF/OVA templates, from a local filesystem or web server
URL, to a content library.
• For a VMware Cloud on AWS SDDC, use the Content Onboarding Assistant to import
VMTX templates.

Deploying VMs from ISO Images and Templates

After uploading the ISO images, VMTX templates, and OVA/OVF templates, you can use them to
deploy VMs to the cloud SDDC. For example, you can use images or templates from the
following locations:

• An ISO image or template in a datastore


• An ISO image or template in a content library
• An OVF/OVA template on a local file system or from a URL

Your organization has multiple instances of vCenter Server, each with its own set of VM
templates and ISO images.

You want to share and manage these templates and images across the cloud environment.

How might you do this?

Content Libraries

Content libraries are container objects that store and manage VM templates,
vApp templates, and other file types. Using the content library, you can deploy
and share the stored items within a vCenter Server instance and between
vCenter Server instances.

Content Library Uses


vSphere content libraries have several functions:

• Provide storage, versioning, and synchronization of files across sites and vCenter Server
instances.
• Manage templates, vApps, OVF files, ISO images, and scripts.
• Provide powerful publish and subscribe features to replicate content.

Content libraries are stored in vSphere datastores.

Uploading Templates to a Content Library

For your SDDC, you can create a content library that subscribes to the content library in your
on-premises data center. You publish the on-premises content library to import library items
into your SDDC.

To synchronize your on-premises and SDDC content libraries, follow these steps:

1. Add your templates, ISO images, and scripts to the on-premises content library. All .vmtx
templates are converted to OVF templates.

2. Publish your on-premises content library.

3. In your SDDC, create a content library that subscribes to the one you published in Step 2.
Content is synchronized from your on-premises data center to your cloud SDDC.
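The publish-and-subscribe flow above can be sketched as a toy model. This is illustrative Python only, not VMware code; the class names are invented for the example:

```python
# Illustrative sketch (not VMware code): a published library exposes its
# items; a subscribed library pulls anything it does not yet have.

class PublishedLibrary:
    def __init__(self):
        self.items = {}          # item name -> content

    def add(self, name, content):
        # Step 1: add templates, ISO images, and scripts to the library.
        self.items[name] = content

class SubscribedLibrary:
    def __init__(self, publisher):
        # Step 3: the SDDC library subscribes to the published library.
        self.publisher = publisher
        self.items = {}

    def synchronize(self):
        # Pull every item that is missing or out of date.
        for name, content in self.publisher.items.items():
            if self.items.get(name) != content:
                self.items[name] = content

onprem = PublishedLibrary()
onprem.add("win10-template.ovf", b"...")
onprem.add("ubuntu.iso", b"...")

sddc = SubscribedLibrary(onprem)
sddc.synchronize()
assert sddc.items.keys() == onprem.items.keys()
```

Adding a new item on the publisher side and synchronizing again pulls only the new content, which is the behavior the three steps above describe.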

Upload Items to Content Library


Learn to upload items to your content library by accessing the VMware vSphere product
documentation.

Demonstration: Deploying VMs from a Content Library

You can deploy VMs and vApps from the VM or OVF templates that are stored in a content
library.

In this demonstration, you launch the vSphere Client from the VMware Cloud on AWS console.
You log in as [email protected] and deploy a VM from an OVF template in the content
library.

Transcript

You create a virtual machine (VM) from a content library.

1. In the SDDC vSphere Client browser tab, select Menu, and then Content Libraries.
2. On the Content Libraries page, click VMC-CL-01.
3. Select the Templates tab and click OVF & OVA Templates.
4. Deploy a new virtual machine from the Lychee-ubuntu template. Right-click the Lychee-
ubuntu template and click New VM from This Template. The New Virtual Machine from
Content Library wizard opens.
5. On the Select a name and folder page, enter Photo-App-01 for the Virtual machine name.
6. Expand the location tree and select the Workloads folder.
7. Click NEXT.
8. On the Select a compute resource page, expand the compute resource tree and
select Compute-ResourcePool.
9. Click NEXT.
10. On the Review details page, click NEXT.
11. On the Select storage page, select WorkloadDatastore and click NEXT.
12. On the Select networks page, select sddc-cgw-network-1 from the Destination
Network drop-down menu and click NEXT.
13. On the Ready to complete page, click FINISH.
14. Wait for the Deploy OVF template task to finish.
15. Power on the newly created Photo-App-01 VM. Select Menu, and then Host and Clusters.
16. In the left pane, expand Compute-ResourcePool and locate the new VM called Photo-
App-01.
17. Right-click the Photo-App-01 VM and select Power, and then Power On.

The VM powers on and acquires an IP address using DHCP from the 192.168.xxx.0/24 range.

You can create VMs from a content library in other ways. Explore the VMware vSphere product
documentation to learn more.

Deploy a VM from a VM Template in a Content Library


Create a New vApp from a Template in a Content Library

Content Libraries Versioning

Content libraries support in-place updates of VM templates with a rich version history.

With versioning, you can quickly check out a VM from a VM template in a content library,
update it, and check it back into the content library as a new version.

The previous version of the template is retained so you can revert to this version if necessary.

The timeline view on the Versioning tab presents a version history. The history includes the
user that initiated an operation and the time of the operation.

Knowledge Check: Content Libraries


Why use a content library? (Select one option)

To standardize templates and ISO images across vCenter Server instances


To store virtual machines that are powered on and in production
To back up virtual machines in the vCenter Server inventory
To share datastores across the ESXi hosts in the cluster

You have a variety of .vmtx templates, ISO images, scripts, and other content that you want to
use in your VMware Cloud on AWS SDDC.

How do you transfer them to the SDDC?

Content Onboarding Assistant

The VMware Cloud Content Onboarding Assistant automates the transfer of .vmtx templates,
ISO images, scripts, and other files to a VMware Cloud on AWS SDDC.

Accessing the Content Onboarding Assistant

The VMware Cloud Content Onboarding Assistant is a Java CLI that you must download.

You can download the VMware Cloud Content Onboarding Assistant from the Customer
Connect Downloads page.

Before you download the tool, a VPN connection must be established between the
on-premises environment and the VMware Cloud on AWS SDDC.

Transferring Content

You can easily transfer content using the Content Onboarding Assistant. The process works as
follows:

1. Check the connectivity between the client and on-premises vCenter Server instance and
VMware Cloud on AWS.
2. Scan the vCenter Server inventory for VMTX templates.
3. Scan the given datastores and folders for any files.
4. Create a published content library in the on-premises vCenter Server instance.
5. Copy the selected vCenter Server VMTX templates.
6. Import the content from a given folder into the content library.
7. Create a subscribed content library in the VMware Cloud on AWS SDDC.
8. Synchronize all content from Step 6.

Knowledge Check: Transferring Content to Your SDDC

True or False: The VMware Cloud Content Onboarding Assistant is built into the VMware Cloud
on AWS client.

True
False



Using the vSphere Client to Upload Files or Folders
In a VMware Cloud on AWS SDDC, the CloudAdmin role uploads content to WorkloadDatastore:

• You cannot upload content to vsanDatastore, which is managed by VMware.


• You can upload files and folders, and create folders.

To upload content:

1. In the vSphere Client, click the Storage icon.


2. Click WorkloadDatastore.
3. Click Files and upload the required items.

Unsupported VM Configurations in VMware Cloud on AWS

VMware Cloud on AWS does not support the following VM configurations:

• Bus sharing configurations


• DirectPath I/O
• Flash Read Cache
• ISOs mounted using the client device when a CD/DVD drive is
used
• Apple macOS and OS X
• Multi-writer and changed block tracking
• NVIDIA GRID vGPU
• Parallel ports
• Raw device mapping (RDM)
• USB device passthrough

If you upload unsupported templates from an on-premises content library to an SDDC, the VMs
that are created from the template do not power on in the SDDC.

VM Configurations with Limited Support in VMware Cloud on AWS

These VM configurations have limited support and, as a result, are incompatible with VM
migrations that use VMware vSphere vMotion in VMware Cloud on AWS:

• Remote devices attached (CDs, floppy disks, etc.)


• Serial ports with network output
• Mounted paravirtual SCSI (PVSCSI) disks



Knowledge Check: Unsupported VM Configurations
Which VM configurations are unsupported in VMware Cloud on AWS? (Select two options)

NVIDIA GRID vGPU


VMXNET3
Parallel ports
Serial ports



Virtual Machine Management
Wednesday, January 25, 2023 10:29 AM

Learner Objectives:
After completing this lesson, you should be able to:

• Distinguish between use cases for VM snapshots, clones, and templates


• Create and manage snapshots, templates, and clones
• Create tags and custom attributes
• Recognize methods for securing virtual machines

Distinguish Between Snapshots, Templates, and Clones


Snapshots, templates, and clones are similar but not the same.

Snapshots preserve the state and data of a VM at a specific point in time.

Example: If problems occur during the patching or upgrading process, you can stop the process and
revert to the previous state.

Cloning is a quick and simple way to create a VM that shares properties with an existing one.

Example: You must diagnose a problem with a production VM. You find a potential fix for the problem,
but you do not want to install the fix on the production VM because users need to access it. You decide
to clone the VM and use the clone to test the fix. In this way, users can still access the production VM
during the cloning and testing processes.

A VM template is the original copy of a VM from which you can create ready-to-use VMs. The template is
useful for creating many VMs of the same kind.

Example: You require four VMs. The steps for creating these four VMs are repetitive and time-
consuming and can introduce errors. A more efficient method is to create a base template containing
the essential VM configuration. You can also customize the VMs created from a template based on
need.



Taking a Snapshot
You can take a snapshot of a VM in any of the following states:

• Powered on
• Powered off
• Suspended

A snapshot captures:

• VM settings
• VM memory content (when the VM is powered on)
• VM disk state (whether the VM is powered on, powered off, or suspended)

Steps for Creating a Snapshot

1. In the vSphere Client, select the required VM in the left pane.


2. Select the Snapshots tab and click TAKE SNAPSHOT.
Alternatively, you can right-click the VM and select Snapshots > Take snapshot.
3. In the Take snapshot wizard, enter a snapshot Name and Description (optional) in the text boxes.
Additionally, you can select the Include virtual machine's memory and Quiesce guest file system
(requires VMware Tools) checkboxes, if necessary.
4. Click CREATE.

A snapshot does not include independent virtual disks (persistent and nonpersistent).

VM Snapshot Files

A snapshot consists of a set of files:

-Snapshot#.vmsn: Configuration state

The configuration state file has a .vmsn extension and holds the active state of the VM at the point that the
snapshot was taken, including virtual hardware, power state, and hardware version.

A new .vmsn file is created for every snapshot that is created on a VM and is deleted when the snapshot is
deleted. The # symbol stands for the next number in the sequence, starting with 1.

The size of this file varies, based on the options selected when the snapshot is created. For example, including
the memory state of the VM in the snapshot increases the size of the .vmsn file.

-Snapshot#.vmem: Memory state

The memory state file has a .vmem extension and is created if the option to include memory state is selected
during the creation of the snapshot.
It contains the entire contents of the VM's memory at the time that the snapshot was taken.

-00000#.vmdk: Disk descriptor

The disk descriptor file is a small text file that contains information about the snapshot. The # symbol
indicates the next number in the sequence, starting with 1.

-00000#-delta.vmdk: Snapshot delta

The snapshot delta file contains the changes to the virtual disk data since the snapshot was taken.

When you take a snapshot of a VM, the state of each virtual disk is preserved.

The VM stops writing to its -flat.vmdk file. Writes are redirected to the
-######-delta.vmdk. The ###### symbols indicate the next number in the sequence.

You can exclude one or more virtual disks from a snapshot by designating them as independent disks.
Configuring a virtual disk as independent is typically done when the virtual disk is created, but this option can
be changed whenever the VM is powered off.

.vmsd: List file

The snapshot list file is created at the time that the VM is created. It maintains snapshot information for a VM
so that it can create a snapshot list in the vSphere Client.
This information includes the name of the snapshot .vmsn file and the name of the virtual disk file.
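Under the naming conventions described above, the file set for a given snapshot number can be sketched as follows. This is an illustrative Python model; actual file-name prefixes depend on the VM and disk names:

```python
# Illustrative sketch (not VMware code): the set of files the text describes
# for snapshot number n of a VM, with the # placeholder filled in.

def snapshot_files(vm_name, n, include_memory=False):
    files = [
        f"{vm_name}-Snapshot{n}.vmsn",     # configuration state
        f"{vm_name}-{n:06d}.vmdk",         # disk descriptor
        f"{vm_name}-{n:06d}-delta.vmdk",   # changes since the snapshot
        f"{vm_name}.vmsd",                 # snapshot list (one per VM)
    ]
    if include_memory:
        # Only present if memory state was included when taking the snapshot.
        files.insert(1, f"{vm_name}-Snapshot{n}.vmem")
    return files

files = snapshot_files("Win10-01", 1, include_memory=True)
assert "Win10-01-Snapshot1.vmem" in files
assert "Win10-01-000001-delta.vmdk" in files
```

Note how the .vmem entry appears only when the memory option is selected, matching the description of the memory state file above.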

Snapshots and Independent Disks

An independent disk does not participate in virtual machine snapshots. That is, the disk state is independent
of the snapshot state. Creating, consolidating, or reverting to snapshots does not affect the disk.

In general, virtual disks are created using one of the following modes: Independent persistent, independent
nonpersistent, and dependent.

Independent Persistent - changes are persistently (permanently) written to the disk.



Independent Nonpersistent - disk writes are appended to a redo log. The redo log is erased when you power
off the virtual machine or revert to a snapshot, causing any changes made to the disk to be discarded.

When a virtual machine reads from an independent nonpersistent mode disk, the redo log is checked first. If
the relevant blocks are listed, the virtual machine reads the information. Otherwise, the read goes to the base
disk for the virtual machine.

Dependent - the default disk mode. When you take a snapshot of a virtual machine, dependent disks are
included in the snapshot. When you revert to the previous snapshot, all data reverts to the point at which the
snapshot was taken.
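The redo-log behavior of an independent nonpersistent disk can be sketched as a toy model. This is illustrative Python, not VMware code:

```python
# Illustrative sketch (not VMware code) of the read path described above for
# an independent nonpersistent disk: the redo log is checked first, then the
# base disk; powering off discards the redo log.

class NonpersistentDisk:
    def __init__(self, base_blocks):
        self.base = dict(base_blocks)   # block number -> data
        self.redo_log = {}              # writes are appended here, not to base

    def write(self, block, data):
        self.redo_log[block] = data

    def read(self, block):
        if block in self.redo_log:      # the redo log is checked first
            return self.redo_log[block]
        return self.base[block]         # otherwise the read goes to the base

    def power_off(self):
        self.redo_log.clear()           # all changes are discarded

disk = NonpersistentDisk({0: "original"})
disk.write(0, "changed")
assert disk.read(0) == "changed"    # redo log satisfies the read
disk.power_off()
assert disk.read(0) == "original"   # the base disk was never modified
```

The final assertion captures why the mode is called nonpersistent: the base disk is untouched, so a power cycle returns the disk to its original contents.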

Managing Snapshots

A VM provides several operations for working with snapshots and snapshot chains. You can manage
snapshots, revert to any snapshot in the chain, and remove snapshots.

Opening Snapshot Manager

To open the Snapshot Manager in the vSphere Client, select the required VM and navigate to the Snapshots
tab.

Alternatively, you can right-click the VM and select Snapshots > Manage Snapshots.

Viewing Snapshots for the Active VM



You can view all the available snapshots from the Snapshots page.

Editing a VM Snapshot

To edit a snapshot:
1. On the VM Snapshots page, click EDIT.
2. In the Edit snapshot dialog box, make changes.
3. Click EDIT.

Reverting to a VM Snapshot



To revert a snapshot:
1. On the VM Snapshot page, click REVERT.
The Revert to selected snapshot dialog box appears with a warning.
2. Read the warning message and click REVERT.

Deleting Snapshots

Deleting a snapshot removes the snapshot from the Snapshot Manager. The snapshot files are consolidated
and written to the parent snapshot disk and merge with the VM base disk.

Deleting a snapshot does not change the VM or other snapshots.

To delete a snapshot:
1. In the vSphere Client, select the required VM in the left pane.
2. Click the Snapshots tab.

3. Select the snapshot that you want to delete and click DELETE.
4. In the Delete snapshot wizard, click DELETE.
You can also delete all the available VM snapshots by clicking DELETE ALL.

Consolidating Snapshots

Snapshot consolidation is a method for committing a chain of delta disks to the base disks when the Snapshot
Manager shows that no snapshots exist, but the delta disk files remain on the datastore.

Snapshot consolidation is useful when snapshot disks fail to compress after a Revert, Delete, or Delete
all operation. This failure to compress might happen, for example, if you delete a snapshot but its associated
disk does not commit back to the base disk.

The presence of redundant delta disks can adversely affect the virtual machine performance. You can
combine such disks without violating a data dependency.
After snapshot consolidation, redundant disks are removed, which improves the virtual machine performance
and saves storage space.

Snapshot consolidation resolves problems that can occur with snapshots:

• The snapshot descriptor file is committed correctly, and the Snapshot Manager window shows that all
the snapshots are deleted.

• The snapshot files (-delta.vmdk) are still part of the VM.

• Delta disk files continue to expand until the datastore on which the VM is located runs out of space.
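Conceptually, consolidation commits the chain of delta disks back into the base disk. A toy model, in illustrative Python rather than VMware code:

```python
# Illustrative sketch (not VMware code): consolidation commits a chain of
# delta disks back into the base disk, oldest first, so later writes win.

def consolidate(base, delta_chain):
    """Merge redundant delta disks (dicts of block -> data) into the base."""
    merged = dict(base)
    for delta in delta_chain:        # apply deltas oldest-to-newest
        merged.update(delta)
    return merged, []                # delta files are removed after the commit

base = {0: "a", 1: "b"}
deltas = [{1: "b1"}, {0: "a2", 1: "b2"}]
merged, remaining = consolidate(base, deltas)
assert merged == {0: "a2", 1: "b2"}
assert remaining == []               # no redundant delta disks remain
```

Removing the redundant deltas is what frees storage space and removes the extra level of indirection that degrades VM performance.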

Snapshot Consolidation
For more information about how to consolidate a VM snapshot in the vSphere Client, access VMware
knowledge base article 2032907.

Snapshot Recommendations

Follow these recommendations to get the best performance when using snapshots:

• Use snapshots as a temporary measure only.

The presence of snapshots can have a significant impact on guest application performance, especially in
a VMFS environment, for I/O intensive workloads. The guest applications fully recover performance
after snapshots are deleted.

• Keep snapshot chain length short when possible, to minimize the guest application performance impact.

Performance degradation is higher as the snapshot chain length increases.

• If you need to increase the size of a virtual disk that has snapshots associated with it, you must delete
the snapshots first before you can increase the virtual disk's size.



Snapshots and Backups

Snapshots and backups are thought to be the same. However, they are different and have different purposes.

Snapshot:
• Saves the state of the VM with the VM files.
• Depends on the availability of the VM files. To revert to a snapshot, the VM files must be accessible and
show no errors.
• Use cases: checkpoints in upgrade, patching, testing, and development processes.

Backup:
• Saves a copy of the VM files in a remote site.
• Is autonomous. If a problem affects the VM files, you can restore from the backup because it is stored
separately.
• Use cases: VM or data safeguard in disaster and recovery plans.

Knowledge Check: Taking a VM Snapshot


In which VM power state can you take a snapshot that includes memory? (Select one option)

Powered on
Powered off
Suspended
Quiesced file system

Cloning VMs

Cloning a VM creates a VM that is an exact copy of the original.

A VM can be cloned in powered-on and powered-off states.

Cloning Example

Powered off - An exact copy of the VM is created.

Powered on - An exact copy of the VM is not possible because the services and applications running in the
VM are not paused during the cloning process.

Steps for Cloning an Existing VM



1. In the vSphere Client, select the required VM in the left pane.
2. Click ACTIONS.
3. From the drop-down menu, select Clone > Clone to Virtual Machine.
Alternatively, you can right-click the VM and select Clone > Clone to Virtual Machine.
The Clone Existing Virtual Machine wizard opens.
4. On the Select a name and folder page, enter a name in the Virtual machine name text box and select a
location for the virtual machine.

Folders provide a way to store VMs and templates for different groups in an organization. You can set
permissions on them. If you prefer a flatter hierarchy, you can put all VMs and templates in a data
center and organize them in a different way.
5. Click NEXT.
6. On the Select a compute resource page, select the host, cluster, resource pool, or vApp where the VM
will run and click NEXT.
7. On the Select storage page, select the datastore or datastore cluster for storing the template
configuration files and all virtual disks.

For information on available storage options, see the chapter called Clone an Existing Virtual Machine in
the VMware vSphere documentation at https://docs.vmware.com/en/VMware-
vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-1E185A80-0B97-4B46-
A32B-3EF8F309BEED.html.
8. Click NEXT.
9. On the Select clone options page, select additional customization options for the new virtual machine
and click NEXT.

You can choose to customize the guest OS or the VM hardware. You can also choose to power on the
VM after its creation.
10. On the Ready to complete page, review the virtual machine settings and click FINISH.



Knowledge Check: Cloning a VM

Which statement accurately describes VM cloning? (Select one option)

An exact copy of a VM is created when the VM is cloned in the powered-on state.


Cloning is the best option to safeguard data during disaster and recovery.
An exact copy of a VM is created when the VM is powered off during the cloning process.
The guest OS cannot be customized during cloning.

VM Template Components
A template is a VM, with all its components.

Template Examples

• Template for each guest OS (such as Windows or Linux) that is used by the company

• Template for a type of system that is often deployed in your environment, such as a web
server, application server, or database server



Creating Templates

You can create templates using the following methods:

• Convert a VM to a template.
• Clone a VM to a template.
• Clone an existing template.

Steps for Converting a VM to a Template

1. In the vSphere Client, select the required VM in the left pane.


2. Click ACTIONS.
3. From the drop-down menu, select Template > Convert to Template.
Alternatively, you can right-click the VM and select Template > Convert to Template.
4. In the Confirm Convert pop-up box, click YES.
vCenter Server marks the VM as a template, and it appears on the Templates tab.

Steps for Cloning a VM to a Template

1. In the vSphere Client, select the required VM in the left pane.


2. Click ACTIONS.
3. From the drop-down menu, select Clone > Clone to Template.
Alternatively, you can right-click the VM and select Clone > Clone to Template.
4. On the Select name and folder page, enter a name in the VM template name text box and select a
location for the virtual machine.
5. Click NEXT.

6. On the Select a compute resource page, select the host, cluster, resource pool, or vApp and click NEXT.
7. On the Select storage page, select the datastore or datastore cluster for storing the template
configuration files and all virtual disks.
8. Click NEXT.
9. On the Ready to complete page, review the virtual machine settings and click FINISH.

Steps for Cloning an Existing Template

1. In the vSphere Client, navigate to the Templates tab.


2. Select the required template.
3. Click ACTIONS.
4. From the drop-down menu, select Clone to Template.
Alternatively, you can right-click the template and select Clone.
5. On the Select name and folder page, enter a VM template name and select a location for the virtual
machine.
6. Click NEXT.
7. On the Select a compute resource page, select the host, cluster, resource pool, or vApp and click NEXT.
8. On the Select storage page, select the datastore or datastore cluster in which to store the template
configuration files and all virtual disks.
9. Click NEXT.
10. On the Ready to complete page, review the VM settings and click FINISH.

Deploying Virtual Machines from Templates

The Deploy from Template wizard guides you through the steps in the VM deployment process.



Steps for Deploying VMs from a Template

1. In the vSphere Client, navigate to the Templates tab.


2. Select the required template.
3. Click ACTIONS.
4. From the drop-down menu, select New VM from This Template.
Alternatively, you can right-click the template and select New VM from This Template.
5. On the Select name and folder page, enter a name in the Virtual machine name text box, and select a
location for the virtual machine.
6. Click NEXT.
7. On the Select a compute resource page, select the host, cluster, resource pool, or vApp and click NEXT.
8. On the Select storage page, select the datastore or datastore cluster in which to store the template
configuration files and all virtual disks.
9. Click NEXT.
10. On the Select clone options page, select additional customization options for the new virtual machine
and click NEXT.

You can choose to customize the guest OS or the virtual machine hardware. You can also choose to
power on the virtual machine after its creation.
11. On the Ready to complete page, review the virtual machine settings and click FINISH.

Knowledge Check: Creating a VM Template


You create a VM called Win10-06. You want to use this VM to create several VMs with the same configuration
as Win10-06. But you also want to continue to use the Win10-06 VM. Which method is best for creating the
VMs? (Select one option)

Clone the VM to a template.


Convert the Win10-06 VM to a template.
Use the New Virtual Machine wizard.
Clone a template of the VM to a template.

Assigning Custom Tags to Inventory Objects

A tag is a label that you can apply to objects in the vSphere inventory.

When you create a tag, you assign that tag to a category. Using categories, you can group related tags
together.

When you define a category, you can specify the object types for its tags, and whether more than one tag in
the category can be applied to an object.

You can assign tags to objects, search for tagged objects, and view objects that have the same tag.
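The category rules described above can be sketched as a toy model. This is illustrative Python, not VMware code; the class and function names are invented for the example:

```python
# Illustrative sketch (not VMware code): a category defines which object
# types its tags apply to and whether an object may carry one tag or many
# tags from that category.

class Category:
    def __init__(self, name, object_types, multiple=False):
        self.name = name
        self.object_types = set(object_types)
        self.multiple = multiple            # one tag per object, or many?

def assign_tag(obj, tag, category):
    if obj["type"] not in category.object_types:
        raise ValueError("tag category does not apply to this object type")
    existing = [t for t in obj["tags"] if t[0] == category.name]
    if existing and not category.multiple:
        raise ValueError("only one tag from this category is allowed")
    obj["tags"].append((category.name, tag))

env = Category("Environment", {"VM", "Host"}, multiple=False)
vm = {"type": "VM", "tags": []}
assign_tag(vm, "Production", env)
try:
    assign_tag(vm, "Test", env)     # a second tag from a single-cardinality
except ValueError:                  # category is rejected
    pass
assert vm["tags"] == [("Environment", "Production")]
```

A category created with multiple=True would accept both tags, which mirrors the choice you make when defining a category in the vSphere Client.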

Assigning Tags to Objects



To assign tags to objects, you select an inventory object in the vSphere Client interface and take the following
steps:

1. Select the Actions menu.

2. Select Tags & Custom Attributes > Assign Tag.


A list of previously created tags appears.

3. Select the tag from the list and click OK.

Searching for Tagged Objects



You can use the search box to browse and select objects in the vSphere Client inventory.

You can search by typing key words such as the tag name or the display name of the object that you want to
find.

Viewing Objects that Share a Tag

After searching for keywords such as the tag name, you can view related objects, regardless of type.

In the example, objects for the RD - Tag include a template, two hosts, and one datastore.

Knowledge Check: Assigning Tags


Why do you use tags? (Select two options)

To create identical VMs


To categorize and group related objects
To search for and find an object
To create multiple identical VMs

Workloads are increasingly becoming distributed as our environments continue to get broader and more
complex. Many cloud-based applications are business critical but vulnerable to compromise if any part of
the workload (app, data or OS) malfunctions.

This means that securing each part of the workload is a critical part of securing your business.

Two ways to secure your VMs are covered here: vTPM and Windows VBS.

Virtual Trusted Platform Module


A virtual Trusted Platform Module (vTPM) is a software-based representation of a physical Trusted Platform
Module 2.0 chip.

The vTPM acts as any other virtual device. You can add a vTPM to a virtual machine in the same way you add
virtual CPUs, memory, disk controllers, or network controllers.

When using this feature, you do not require a hardware Trusted Platform Module chip.

By default, no storage policy is associated with a virtual machine that is enabled with a vTPM.

You can choose to add encryption explicitly for the virtual machine and its disks, but the virtual
machine files must already be encrypted.

Requirements for vTPM

Virtual Machine Requirements:


• EFI firmware
• Hardware version 14 or later


Component Requirements:
• vCenter Server 6.7 and later for Windows virtual machines, and vCenter 7.0 Update 2 and later for Linux
virtual machines
• Virtual machine encryption (to encrypt the virtual machine home files)
• Key provider configured for vCenter Server

Guest OS Support
• Linux
• Windows Server 2008 and later
• Windows 7 and later

Windows Guest Operating Systems: Virtualization-Based Security


Microsoft Virtualization-Based Security (VBS), a feature of Windows 10 and Windows Server 2016 operating
systems, uses hardware and software virtualization to enhance system security by creating an isolated,
hypervisor-restricted, specialized subsystem.

Starting with vSphere 6.7, you can enable Microsoft virtualization-based security (VBS) on supported
Windows guest operating systems.

With Microsoft VBS, you can use the following Windows security features to harden your system and isolate
key system and user secrets so they are not compromised.

Credential Guard - aims to isolate and harden key system and user secrets against compromise.

Device Guard - provides a set of features designed to work together to prevent and eliminate malware from
running on a Windows system.

Configurable Code Integrity - ensures that only trusted code runs from the boot loader onward.

Securing Virtual Machines


For more information on methods for securing virtual machines, including best practices, see vSphere
Security in the VMware vSphere documentation.



Resource Management in VMware Cloud on
AWS SDDC
Friday, January 27, 2023 9:14 AM

Learner Objectives
After completing this lesson, you should be able to:

• Recognize vSphere DRS placement policies


• Recognize configuration settings for vSphere HA
• Identify resource pool use cases

This lesson focuses on resource management in a VMware Cloud on AWS SDDC.

Azure VMware Solution


For information on DRS policies, see the Create a placement policy in Azure VMware Solution section in the
Azure VMware Solution documentation.

Google Cloud VMware Engine


Search the Google Cloud VMware Engine documentation for the desired management topic.

Your organization has VMs spread across multiple vCenter Server instances.

You want to manage your workloads and provide failure protection and rapid recovery from
outages.

How might you achieve these goals?

You can automate and manage the demand and supply of your workloads using vSphere features such as
VMware vSphere® Distributed Resource Scheduler™, VMware vSphere® High Availability, and resource pools.

vSphere DRS Function

To ensure that VMs in a cluster get the required resources, vSphere DRS performs the following key functions:

• Aggregates computing capacity across a collection of servers into logical resource pools

• Allocates available resources among VMs based on predefined rules that reflect business needs and
changing priorities



vSphere DRS attempts to improve resource use across the cluster by using vSphere vMotion to perform automatic
migrations of VMs.

vSphere DRS Cluster Prerequisites

VMware Cloud on AWS clusters are preconfigured with vSphere vMotion migration networks. vSphere DRS
works best when VMs meet the following vSphere vMotion migration requirements:

• The hosts in the cluster must be part of a vSphere vMotion migration network. If they are not, vSphere DRS
can still make initial placement recommendations.
• VMware Cloud on AWS clusters are preconfigured with vSphere vMotion-enabled vSAN, and all hosts can
use the same datastores.

vSphere DRS Policies in VMware Cloud on AWS

vSphere DRS policies in VMware Cloud on AWS provide rules that offer various benefits.

VM-Host Affinity
A VM-Host affinity policy describes a relationship between a category of VMs and a category of hosts.

Use cases:
• When host-based licensing requires that VMs running certain applications be placed on hosts that are
licensed to run those applications
• When VMs with workload-specific configurations require placement on hosts that have certain
characteristics

VM-Host Anti-Affinity
A VM-Host anti-affinity policy describes a relationship between a category of VMs and a category of hosts.

Use case:
• Avoids resource contention by not running general purpose workloads on hosts that run resource-
intensive applications.

VM-VM Affinity
A VM-VM affinity policy describes a relationship between members of a category of VMs.



Use case:
• Useful when two or more VMs sharing a tag category can benefit from locality of data or where
placement on the same host can simplify auditing.

VM-VM Anti-Affinity
A VM-VM anti-affinity policy describes a relationship between members of a category of VMs.

Use case:
• When you want to place VMs running critical workloads on separate hosts so that the failure of one
host does not affect other VMs in the category.

Disable DRS vMotion


A Disable DRS vMotion policy applied to a VM prevents DRS from migrating the VM to a different host
unless the current host fails or is put into maintenance mode.

Use case:
• For a VM running an application that creates resources on the local host and expects those resources to
remain local.
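The placement rules above can be sketched as a simple filter. The following Python model is illustrative only (it is not a VMware API); the tag names, hosts, and policies are invented:

```python
# Illustrative sketch: how tag-based compute policies constrain where
# DRS may place a VM. Not a VMware API; all names are invented.

def allowed_hosts(vm_tags, hosts, policies):
    """Return hosts on which a VM may be placed under the given policies.

    hosts    -- dict of host name -> set of host tags
    policies -- list of (kind, vm_tag, host_tag) tuples, where kind is
                'affinity' (VM must run on tagged hosts) or
                'anti-affinity' (VM must avoid tagged hosts)
    """
    result = []
    for name, host_tags in hosts.items():
        ok = True
        for kind, vm_tag, host_tag in policies:
            if vm_tag not in vm_tags:
                continue  # policy does not apply to this VM
            if kind == "affinity" and host_tag not in host_tags:
                ok = False  # VM-Host affinity: must land on a tagged host
            if kind == "anti-affinity" and host_tag in host_tags:
                ok = False  # VM-Host anti-affinity: must avoid tagged hosts
        if ok:
            result.append(name)
    return result

hosts = {
    "esx-01": {"licensed-oracle"},
    "esx-02": set(),
}
policies = [("affinity", "oracle-vm", "licensed-oracle")]
print(allowed_hosts({"oracle-vm"}, hosts, policies))  # ['esx-01']
print(allowed_hosts({"web-vm"}, hosts, policies))     # ['esx-01', 'esx-02']
```

In the real service, these relationships are expressed by tagging VMs and hosts and creating compute policies in the vSphere Client; vSphere DRS then enforces them during placement.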

Implementing vSphere DRS Policies


In an on-premises environment, you can configure vSphere DRS policies at the cluster level.

In VMware Cloud on AWS, vSphere DRS is enabled by default. It is managed by VMware, and you cannot change
the configuration.

In a VMware Cloud on AWS SDDC, you use compute policies to control the vSphere DRS behavior.

Disable vSphere DRS vMotion Policy


The Disable DRS vMotion policy is useful for a VM running an application that creates resources on the local host
and expects those resources to remain local.

The policy takes effect after a tagged VM is powered on and keeps the VM on its current host as long as the host
remains available.

Creating a Disable vSphere DRS vMotion Policy


A Disable vSphere DRS vMotion policy applied to a VM prevents vSphere DRS from migrating a VM to a different
host unless the current host fails or is put into maintenance mode.

For more information about creating or deleting a Disable DRS vMotion policy, access the VMware Cloud on
AWS product documentation.

If vSphere DRS moves a virtual machine to another host for load-balancing or to meet reservation
requirements, the resources created by the application are left behind.

Performance can be degraded when the locality of reference is compromised, so this VM should not be
unnecessarily moved by vSphere DRS.



Knowledge Check: Configuring DRS Policies

You determine the resource requirements for the VMs in your clusters. Which affinity and anti-affinity policies
match your requirements?

vSphere HA in VMware Cloud on AWS


vSphere HA uses multiple ESXi hosts, configured as a cluster, to provide rapid recovery from outages and support
high availability for applications running in virtual machines.

vSphere HA is enabled by default in VMware Cloud on AWS and cannot be disabled or modified. Fault tolerance
is unavailable.

Proactive high availability is turned off because VMware immediately replaces failed hosts.

You can configure clusters to tolerate one host failure by using a percentage-based admission control policy.
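As a rough sketch of what a percentage-based policy means for capacity planning, the following Python snippet sets aside the percentages shown in this lesson (33% CPU and 33% memory). The host sizes are invented for illustration:

```python
# Hedged sketch: usable workload capacity after a percentage-based
# admission-control policy reserves failover headroom. Percentages follow
# the VMware Cloud on AWS defaults in this lesson; host sizes are made up.

def usable_capacity(hosts_ghz, hosts_gb, cpu_pct=33, mem_pct=33):
    """Return (CPU GHz, memory GB) left for workloads after the
    admission-control percentages are set aside for failover."""
    total_ghz = sum(hosts_ghz)
    total_gb = sum(hosts_gb)
    return (total_ghz * (100 - cpu_pct) / 100,
            total_gb * (100 - mem_pct) / 100)

# A hypothetical 3-host cluster: each host has 80 GHz CPU and 512 GB memory.
cpu, mem = usable_capacity([80, 80, 80], [512, 512, 512])
print(cpu, mem)  # 160.8 GHz and 1029.12 GB remain available to workloads
```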



Overview of vSphere Availability configuration for Cluster-1

vSphere HA Settings in VMware Cloud on AWS

Several vSphere HA configuration settings are static in the VMware Cloud on AWS SDDC and cannot be disabled.

• Host failure: Restart VMs
• Proactive HA: Disabled
• Host Isolation: Power off and restart VMs
• Datastore with Permanent Device Loss: Power off and restart VMs
• Datastore with All Paths Down: Power off and restart VMs
• Guest not heartbeating: Reset VMs
• Admission Control: 33% CPU and 33% Memory
• Datastore for heartbeating: Not configured

Knowledge Check: vSphere HA Scenario


You configure vSphere HA on your clusters. What action does vSphere HA take when a guest operating system
fails? (Select one option)

Powers off the VM


Resets the VM on the same host
Migrates the VM to another host
Deletes the VM


Resource Pools
A resource pool is a logical abstraction of hierarchically managed CPU and memory resources. With a resource
pool, you can divide and allocate CPU and memory resources to VMs and other resource pools in a vSphere DRS
cluster.

Use Cases for Resource Pools

Using resource pools can provide several benefits.

Flexible hierarchical organization


Add, remove, or reorganize resource pools or change resource allocations if necessary.

Isolation between pools, sharing in pools


Top-level administrators can make a pool of resources available to a department-level administrator.

Access control and delegation


VM creation and management are performed within the boundaries of the resources to which the resource
pool is entitled. Delegation is usually done with permission settings.

Separation of resources from hardware


If you use vSphere DRS clusters, the resources of all hosts are always assigned to the cluster.

Management of VMs running a multitier service


Group VMs for a multitier service in a resource pool
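The hierarchical aspect of resource pools can be sketched with a small recursive walk. This is an illustrative model only; the pool names and reservation values are invented:

```python
# Sketch of the hierarchical resource-pool model: a pool's capacity must
# cover the reservations of its child pools. Names and numbers invented.

def total_reservation(pool):
    """Recursively sum CPU reservations (MHz) for a pool and its children."""
    return pool.get("reservation", 0) + sum(
        total_reservation(child) for child in pool.get("children", []))

compute_rp = {
    "name": "Compute-ResourcePool",
    "children": [
        {"name": "web-tier", "reservation": 4000},
        {"name": "db-tier", "reservation": 8000,
         "children": [{"name": "db-replica", "reservation": 2000}]},
    ],
}
print(total_reservation(compute_rp))  # 14000 MHz reserved by child pools
```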

Resource Pools in VMware Cloud on AWS

A VMware Cloud on AWS environment has two predefined resource pools.

COMPUTE-RESOURCEPOOL

By default, all workload virtual machines are created in the top-level (root) Compute-ResourcePool. It is initially
created in Cluster-1.
Each additional cluster that you create starts with its own top-level Compute-ResourcePool.

You can perform the following actions with this resource pool:
• Create new VMs and child resource pools.
• Rename the resource pools to better match company policy.
• Create child resource pools of any Compute-ResourcePool to give you more control over the allocation of
compute resources.
• Monitor the resource pool, its VMs, and its child resource pools, and examine resource pool use.
• Set tags and attributes.
• Change resource allocation settings on child resource pools.

MGMT-RESOURCEPOOL



VMware manages the Mgmt-ResourcePool.

Mgmt-ResourcePool works in the following ways:


• It is always created in Cluster-1.
• It never consumes from other clusters.
• Resources in this pool are reserved for management VMs.

Knowledge Check: Resource Pools


When do you use a resource pool? (Select two options)

For grouping VMs for a multitier service


To change resource allocations if necessary
To provide failover protection against hardware and operating system outages in your virtualized IT
environment
For allocating resources among VMs using predefined rules that reflect business needs and changing
priorities.



Guest OS Optimization
Friday, January 27, 2023 10:08 AM

Learner Objectives:

After completing this lesson, you should be able to:

• Recognize guest OS performance requirements for virtual CPU, memory, storage, and networking
• Optimize the guest OS configuration

Guest OS General Considerations


To achieve optimal performance for VMs, you should follow several guidelines:

• Install the latest version of VMware Tools in the guest OS


• Deactivate screen savers and Windows animations in the VMs
• Schedule backups and virus scanning programs in VMs to run at off-peak hours
• For accurate timekeeping, consider configuring your guest OS to use NTP, Windows Time Service, the
VMware Tools time-synchronization option, or another timekeeping utility that is suitable for your OS.



What Do You Think?

Guest OS optimization requirements can be broadly categorized into virtual CPU, memory, storage, and
network.

Do you know the optimization methods that correspond to each resource category?

• vCPU: Virtual NUMA
• Memory: Large memory pages
• Network: VMXNET3
• Storage: VMware Paravirtual Adapter

CPU Considerations

Side-Channel Vulnerability Mitigation

A class of security vulnerabilities called side-channel vulnerabilities


affects many modern CPUs. Vulnerabilities include Spectre, Meltdown,
Foreshadow, L1TF, and others.

Mitigation for some side-channel vulnerabilities occurs in the guest OS.


These mitigations address serious security vulnerabilities, but they can
also have a significant impact on performance, especially on systems that
are CPU resource-constrained.

For more information about vulnerabilities as they relate to VMware products, see the OS-specific
mitigations sections in the following VMware knowledge base articles.

VMware Response to Speculative Execution security issues, CVE-2017-5753, CVE-2017-5715, CVE-2017-5754,


and CVE-2018-3693 (aka Spectre and Meltdown)

VMware Response to Speculative Execution security issues, CVE-2018-3639 and CVE-2018-3640

VMware Overview of ‘L1 Terminal Fault’ (L1TF) Speculative-Execution vulnerabilities in Intel processors:
CVE-2018-3646, CVE-2018-3620, and CVE-2018-3615

Virtual NUMA

Virtual NUMA (vNUMA) exposes NUMA topology to the guest OS so that NUMA-aware guest operating
systems and applications can make the most efficient use of the underlying hardware in the NUMA
architecture.

vNUMA Topology

Consider the following guidelines for NUMA-aware guest operating systems:

• For the best performance, size your VMs to stay within a physical NUMA node.

• When a VM needs to be larger than a single physical NUMA node, size it so that it can be split evenly
across as few physical NUMA nodes as possible.

• Use caution when creating a VM that has a vCPU count that exceeds the physical processor core count
on a host.

• Changing the corespersocket value does not influence vNUMA or the configuration of the vNUMA
topology.

For more information, access "Virtual Machine vCPU and vNUMA Rightsizing" on the VMware
Performance blog.

• By default, vNUMA is activated only for VMs with more than eight vCPUs, but you can also activate the
feature for VMs with eight or fewer vCPUs.

To activate vNUMA for VMs with eight or fewer CPUs, you can use the vSphere Client to set
numa.vcpu.min to the minimum VM size (in vCPUs) for which you want vNUMA activated.

• With the CPU Hot Add feature, you can add vCPUs to a running VM. Activating this feature, however,
deactivates vNUMA for that VM, resulting in the guest OS seeing a single vNUMA node.

• Without vNUMA support, the guest OS has no knowledge of the CPU and memory virtual topology of
the host. Consequently, the guest OS can make suboptimal scheduling decisions, leading to reduced
performance for applications running in large VMs. So activate CPU Hot Add only if you expect to use it.

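The sizing guidance above (stay within a node, otherwise split evenly across as few nodes as possible) can be sketched as a small helper. This is not a vSphere function; it only illustrates the arithmetic:

```python
# Illustrative helper (not part of vSphere): split a VM's vCPUs evenly
# across as few physical NUMA nodes as possible, per the guidance above.
import math

def vnuma_layout(vcpus, cores_per_node):
    """Return (node_count, vcpus_per_node) for an even split across the
    fewest physical NUMA nodes that can hold the VM."""
    nodes = max(1, math.ceil(vcpus / cores_per_node))
    while vcpus % nodes != 0:  # grow node count until the split is even
        nodes += 1
    return nodes, vcpus // nodes

print(vnuma_layout(8, 16))   # (1, 8)  fits inside one physical node
print(vnuma_layout(20, 16))  # (2, 10) split evenly across two nodes
```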

What is NUMA?
NUMA systems are advanced server platforms with more than one system bus. For more information about
NUMA, access the vSphere product documentation.

Knowledge Check: vCPU Considerations


You want to optimize the vCPU for the guest OS in your cloud SDDC.

Which guideline accurately describes one way to optimize vCPU? (Select one option)

Create clusters that are composed entirely of hosts with matching NUMA architecture.
Activate the CPU Hot Add feature to increase performance of applications running in large VMs.
Create VMs that have a vCPU count that exceeds the physical processor core count on a host.

Memory Considerations

The memory resource settings for a VM determine how much of the host memory is allocated to the VM.
VMware Cloud on AWS can make large memory pages available to the guest OS.

Large Memory Pages

If an OS or application can benefit from large pages on a native system, that operating system or application
can potentially achieve a similar performance improvement on a virtual machine backed with 2 MB machine
memory pages.

Consult the documentation for your operating system and application to determine how to configure large
memory pages.

Storage Considerations
The virtual storage adapter that is presented to the guest OS can influence storage performance. The
device driver, its settings, and other factors in the guest OS can also affect performance.

LSI Logic Parallel, LSI Logic SAS, or VMware Paravirtual

For most guest operating systems, the default virtual storage adapter in VMware Cloud on AWS is either LSI
Logic Parallel or LSI Logic SAS, depending on the guest operating system and the virtual hardware version.



However, VMware Cloud on AWS also includes a paravirtualized SCSI storage adapter, PVSCSI (also called
VMware Paravirtual). The PVSCSI adapter offers a significant reduction in CPU utilization as well as
potentially increased throughput compared to the default virtual storage adapters, and is thus the best
choice for environments with very I/O-intensive guest applications.

In order to use PVSCSI, your VM must be using virtual hardware version 7 or later.

BusLogic Parallel Virtual SCSI Adapter

If you choose to use the BusLogic Parallel virtual SCSI adapter, and are using a Windows guest operating
system, you should use the custom BusLogic driver included in the VMware Tools package.

Non-Volatile Memory Express Virtual Storage Adapter

The Non-Volatile Memory Express (NVMe) virtual storage adapter (virtual NVMe, or vNVMe) allows recent
guest operating systems that include a native NVMe driver to use that driver to access storage through
VMware Cloud on AWS.

Compared to virtual SATA devices, the vNVMe virtual storage adapter accesses local PCIe SSD devices with
much lower CPU cost per I/O and significantly higher IOPS.

Queue Depth

The depth of the queue of outstanding commands in the guest OS SCSI driver can significantly impact disk
performance. A queue depth that is too small, for example, limits the disk bandwidth that can be pushed
through the virtual machine. See the driver-specific documentation for more information on how to adjust
these settings.

Large I/O Requests

In some cases, large I/O requests that are issued by applications in a VM can be split by the guest storage
driver.

Changing the guest OS registry settings to issue large block size I/O requests can eliminate this splitting and
enhance performance.

For more information about large I/O requests, access VMware knowledge base article 9645697 at
https://kb.vmware.com/s/article/9645697.

Disk Partitions

You should ensure that disk partitions in the guest OS are aligned.

For more information about tools and recommendations for disk partitions, access the OS vendor
documentation.

4K-Aligned I/Os

VMware Cloud on AWS uses drives with 4 KB sector size (that is, 4 KB native, or 4Kn) but presents storage to
the guest with 512 B sector size (512 native). You can obtain the best storage performance if your workload
issues mostly 4K-aligned I/Os.
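A minimal sketch of what 4K alignment means for an I/O request (illustrative only):

```python
# Small sketch: checking that an I/O request is 4K-aligned, as recommended
# above for drives with a 4 KB native sector size.
SECTOR = 4096

def is_4k_aligned(offset_bytes, length_bytes):
    """True if both the starting offset and the length fall on 4 KiB
    boundaries, so the request maps cleanly onto 4Kn sectors."""
    return offset_bytes % SECTOR == 0 and length_bytes % SECTOR == 0

print(is_4k_aligned(8192, 65536))  # True: both are multiples of 4096
print(is_4k_aligned(512, 4096))    # False: the 512-byte offset is unaligned
```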

For more information about device sector formats, access the VMware vSphere product documentation at
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-5E7B4EBC-2147-42F9-9CCD-B63315EE1C52.html
and knowledge base article 2091600 at https://kb.vmware.com/s/article/2091600.

Knowledge Check: Storage Considerations


True or False: Large I/O requests issued by applications in a VM can be split by the guest storage driver.

True
False

Network Considerations

The guest OS network considerations describe the various types of virtual network adapters, how to select
one, and how to obtain the best performance from it.

Types of Virtual Network Adapters



Virtual network adapters include three emulated types, three
paravirtualized types, and a hybrid adapter.

Virtual Network Adapter

VLANCE
The VLANCE virtual network adapter is an emulated adapter. It emulates an AMD 79C970 PCnet32 NIC.
Drivers for this NIC are found in most 32-bit operating systems.

E1000
The E1000 virtual network adapter is an emulated adapter. It emulates an Intel 82545EM NIC. Drivers
for this NIC are found in many recent operating systems.

E1000E
The E1000E virtual network adapter is an emulated adapter. It emulates an Intel 82574 NIC. Drivers for
this NIC are found in a smaller set of recent operating systems.

VMXNET2
The VMXNET2 virtual network adapter (also called Enhanced VMXNET) is a paravirtualized adapter. It is based
on the VMXNET adapter but adds a number of performance features.

VMXNET3

The VMXNET3 virtual network adapter (also called VMXNET Generation 3) has all the features of the
VMXNET2 adapter, along with several new ones.

Flexible
The Flexible virtual network adapter is a hybrid virtual network adapter. It starts out emulating a
VLANCE adapter, but can function as a VMXNET adapter if VMware Tools is installed and the guest OS
supports VMXNET.

Additional Resources for Adapters

Virtual Network Adapters


For more information about choosing a network adapter for your virtual machine, access VMware
knowledge base article 1001805.

Virtual Machine Network Configuration


For more information about network adapters, access the VMware vSphere documentation.

Selecting a Virtual Network Adapter

When selecting a virtual network adapter, consider the following points:

For the best performance, use the VMXNET3 paravirtualized network adapter for the operating
systems in which it is supported.

For guest operating systems in which VMXNET3 is not supported, use the E1000E virtual network
adapter.

If the E1000E is not an option, use the flexible device type.
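The selection order above can be sketched as a simple fallback chain. This is an illustrative helper, not a vSphere API; the guest-OS support sets are hypothetical inputs:

```python
# Hedged sketch of the adapter selection order described above: prefer
# VMXNET3, fall back to E1000E, then to the Flexible device.

def pick_adapter(guest_supports):
    """guest_supports -- set of adapter names the guest OS has drivers for."""
    for adapter in ("VMXNET3", "E1000E"):
        if adapter in guest_supports:
            return adapter
    return "Flexible"  # last resort: starts as VLANCE, can upgrade to VMXNET

print(pick_adapter({"VMXNET3", "E1000E"}))  # VMXNET3
print(pick_adapter({"E1000E"}))             # E1000E
print(pick_adapter(set()))                  # Flexible
```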

Virtual Network Adapter Features and Configuration

Consider the following guidelines for using various network adapter features and configuring adapters for
the best performance.

Networking VMs on the Same Host


When networking two virtual machines on the same host, connect them to the same virtual switch.

When connected this way, their network speeds are not limited by the wire speed of any physical
network card. Instead, they transfer network packets as fast as the host resources allow.

Jumbo Frames
Jumbo frames are recommended as a way to increase network throughput and reduce CPU load. They
do this by allowing data to be transmitted using larger, and therefore fewer, packets.

Jumbo frames are supported on the E1000, E1000E, VMXNET2, and VMXNET3 devices.
They are activated by default on the underlying network for all same-data-center traffic and connected
VPC traffic.



TCP Segmentation Offload
TCP segmentation offload (TSO) is activated by default in the VMkernel.

It is supported in virtual machines only when they use an E1000, E1000E, VMXNET2, or VMXNET3
device.

TSO can improve performance even if the underlying hardware does not support TSO.

Large Receive Offload


Large receive offload (LRO) is activated by default in the VMkernel.

It is supported in virtual machines only when they use the VMXNET2 or VMXNET3 device.

LRO is supported by various operating systems.

Knowledge Check: Guest OS Optimization


How do you optimize memory, storage, network, and vCPU in the guest OS?



vSphere Permissions for VMware Cloud on AWS
Monday, January 30, 2023 8:49 AM

Learner Objectives
After completing this lesson, you should be able to:

• Recognize best practices for using permissions in a VMware Cloud on AWS SDDC
• Identify the roles available in VMware Cloud on AWS
• Describe the privileges of the CloudAdmin user role.
• Add roles and users to the vCenter Server instance in VMware Cloud on AWS

This lesson focuses on vSphere permissions as they relate to VMware Cloud on AWS.

For more information on how vSphere permissions are used by other hyperscaler partners, you
can access the following resources:

Azure VMware Solution


See the Azure VMware Solution identity concepts section in the Azure VMware Solution
documentation.

Google Cloud VMware Engine


See the Private cloud vSphere permission model section in the Google Cloud VMware Engine
documentation.

In a cloud SDDC, how do you limit user or group access to specific tasks on vCenter Server
objects?

vSphere Permissions Model


Permissions give one user or group a set of privileges, that is, a role for a selected object. The
permission model for vCenter Server systems relies on assigning permissions to objects in the
vSphere object hierarchy.

The model includes several components:

• Privilege
• Role
• User or group
• Object or resource



• A privilege is an action in vSphere, for example, add a VM or assign a network.

• Privileges are grouped into roles. Roles allow users to perform administrative tasks in vSphere.

• Roles are further grouped into categories to make configuration simple.

• You cannot modify the following default on-premises vSphere roles:
  ○ Administrator
  ○ No access
  ○ Read-only

• A user or group is entitled to perform the actions.

• Objects are entities on which actions are applied. Objects include data centers, folders, resource pools,
clusters, hosts, datastores, networks, and VMs.
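The components above can be sketched as a toy model: privileges grouped into roles, and a permission pairing a principal with a role on an object. This is illustrative only, not the vSphere API; the role, user, and object names are invented:

```python
# Toy model (not the vSphere API) of the permission components: privileges
# grouped into roles, and a permission = (principal, role) on an object.

ROLES = {
    "VM Operator": {"VirtualMachine.PowerOn", "VirtualMachine.PowerOff"},
    "No Access": set(),
}

# permissions: object -> (principal, role) assigned on that object
permissions = {"vm-web-01": ("dev-team", "VM Operator")}

def can(principal, privilege, obj):
    """True if a permission on obj grants the principal a role that
    contains the privilege."""
    entry = permissions.get(obj)
    if entry is None:
        return False
    who, role = entry
    return who == principal and privilege in ROLES[role]

print(can("dev-team", "VirtualMachine.PowerOn", "vm-web-01"))  # True
print(can("dev-team", "Permissions.Modify", "vm-web-01"))      # False
```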

Global Permissions

Global permissions are applied to a global root object that spans solution inventory hierarchies.
Using global permissions, you assign the following permissions and privileges:

• Apply permissions to a global root object that spans solutions.

For example, if you want to use solutions such as vCenter Server and Content Library, you
must have global permissions.

• Give a user or group privileges for all objects in all object hierarchies.

You decide on the role for each user or group. The role determines the set of privileges that
the user or group has for all objects in the hierarchy.


vSphere Permissions Hierarchy Diagram

Global permissions are not replicated if your environment includes an on-premises


vCenter Server instance and a vCenter Server instance in the cloud.

Global permissions do not apply to objects that VMware manages for you, such as SDDC
hosts and datastores.

Permissions Best Practices

VMware Cloud on AWS best practices for using permissions mirror the best practices for vCenter
Server:

• Assign a role to a group rather than individual users wherever applicable.


• Grant permissions only on the objects where they are required and assign privileges only to
users or groups that must have them.
• Check that a group does not contain the CloudAdmin user or other users with administrative
privileges when assigning a restrictive role to a group.
• Use folders to group objects.
• Enable propagation, if possible, when you assign permissions to an object.
• Use the No Access role to mask specific areas of the hierarchy.
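Two of these practices, enabling propagation and masking with No Access, can be sketched as a walk up an invented inventory tree (illustrative only, not the vSphere permission engine):

```python
# Sketch of propagation and masking: a role assigned higher in the tree
# propagates down, and a No Access assignment on a child masks it. The
# hierarchy and role names are invented for illustration.

parent = {"Datacenter": None, "FolderA": "Datacenter",
          "SecretFolder": "FolderA", "vm-01": "SecretFolder"}

# object -> role assigned directly (with propagation enabled)
assigned = {"Datacenter": "Read-only", "SecretFolder": "No Access"}

def effective_role(obj):
    """Walk up from obj; the nearest explicitly assigned role wins, so a
    No Access assignment masks anything inherited from above."""
    while obj is not None:
        if obj in assigned:
            return assigned[obj]
        obj = parent[obj]
    return None

print(effective_role("FolderA"))  # Read-only, propagated from Datacenter
print(effective_role("vm-01"))    # No Access masks the inherited role
```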

VMware Cloud on AWS Roles


For more information about roles used in VMware Cloud on AWS, access vSphere Administration
in VMware Cloud on AWS in the product documentation.

Knowledge Check: Best Practices

You are assigning permissions in a VMware Cloud on AWS SDDC. Which tasks align with best
practices for assigning permissions? (Select two options)

Replicate global permissions from your on-premises vCenter Server and the vCenter Server
in your SDDC.
Assign roles to groups of users.
When assigning a restrictive role to a group, verify that the group does not contain the
CloudAdmin user.

Assign privileges to users who might use them.

CloudAdmin User
The vCenter Server instance in a VMware Cloud on AWS SDDC includes two predefined roles that are not
present in your on-premises vCenter Server instance: CloudAdmin and CloudGlobalAdmin.
In VMware Cloud on AWS, the CloudAdmin role has several key characteristics:

• The [email protected] user includes both the CloudAdmin and the CloudGlobalAdmin
roles.

• The CloudAdmin and CloudGlobalAdmin roles are predefined in the vCenter Single Sign-On
domain and cannot be edited.

• The [email protected] user is created automatically with a randomly generated


password.

• When you change the password for your SDDC from the vSphere Client, the new password is
not synchronized with the password that appears on the default vCenter Server credentials
page.

• If you change the credentials, you are responsible for recording the new password. Contact
Technical Support and request a password change if the password is lost.

Using VMware Cloud on AWS Roles

You use the CloudAdmin and CloudGlobalAdmin roles to manage the SDDC.



Adding Roles to vCenter Server

Custom roles can be created in VMware Cloud on AWS. The creation process is the same as for
on-premises vSphere.

To add roles, select Menu > Administration > Access Control > Roles.


On the object whose permissions you want to modify, you must have a role that includes
the Permissions.Modify privilege.

Adding Users to vCenter Server

You cannot create new users and groups in the vmc.local or localos domains.

Adding new users requires that the vCenter Server instance in the VMware Cloud on AWS
environment connects to an existing identity source.

To add users, select Menu > Single Sign On > Users and Groups and select ADD USER.



The ADD USER option is not accessible for new users.

Knowledge Check: Adding Roles and Users to vCenter Server


You are adding roles and users to the vCenter Server instance in your VMware Cloud on AWS
SDDC. Which statement accurately describes how you can perform this task? (Select one option)

To modify permissions of an object, you must have a role that includes the
Permissions.Modify privilege on that object.

You cannot create custom roles in VMware Cloud on AWS

You must create new users and groups in the vmc.local or localos domains of your vCenter.



Module Summary
Monday, January 30, 2023 9:23 AM

Review the key concepts covered in this module:


• You can connect to a vCenter Server instance in the cloud SDDC by logging in to the SDDC
console, configuring a firewall rule, and connecting to vCenter Server.

• The way that you interact with a VM is similar to how you interact with a physical machine.
Every VM provides the same functionality as a physical machine because they use the same
types of components.

• In a VMware Cloud on AWS SDDC, you can provision VMs in multiple ways:
○ Using the New Virtual Machine wizard
○ Cloning a VM
○ Deploying a VM from a template
○ Using the content library

• Virtual machines are not static objects. They can move from host to host to maintain
availability and performance. You can automate and manage the demand and supply of your
workloads using vSphere features such as vSphere DRS, vSphere HA, and resource pools.

• You can optimize the performance of your guest OS by configuring vCPU, memory, storage,
and network settings according to best practices.

• The task of adding users and roles in VMware Cloud on AWS is similar to on-premises
vSphere. The vCenter Server instance in your SDDC includes two predefined roles that are
not present in your on-premises vCenter Server instance: CloudAdmin and
CloudGlobalAdmin.

Additional Resources

• For information about guest OS performance optimization, access Performance Best Practices for
VMware Cloud on AWS at https://docs.vmware.com/en/VMware-Cloud-on-AWS/index.html.

• For information about configuring and managing your VMware Cloud on AWS SDDC, access
Managing the VMware Cloud on AWS Data Center at https://docs.vmware.com/en/VMware-Cloud-on-AWS/services/com.vmware.vsphere.vmc-aws-manage-data-center-vms.doc/GUID-560F64CA-0C0C-43D2-ABA9-42BD50F84457.html.

• For information about user roles and permissions, access Understanding Authorization in vSphere at
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-74F53189-EF41-4AC1-A78E-D25621855800.html.



Workload Management with Other Hyperscaler Partners

Azure VMware Solution


The Azure VMware documentation provides information and links to content related to workload
management with this solution.

Google Cloud VMware Engine


The Google Cloud VMware Engine documentation contains information and links to content
related to workload management with this solution.



Kubernetes Essentials
Monday, January 30, 2023 9:34 AM

Learner Objectives

After completing this lesson, you should be able to:

• Describe the purpose of using Kubernetes


• Identify the layers in a Kubernetes environment
• Explain the functions of Kubernetes components

Container Benefits

Container technology changes how businesses deploy


and use applications in the data center and the public
cloud.

A container encapsulates an application, or part of an


application, and its required environment.

What makes this technology so useful?

What Do You Think?

• Portable: Containers can be deployed across environments with little or no modification. A container
combines an application with everything that it needs to run, which means that you can run applications
on different environments.

• Modular: Containers break down the monolithic architecture by separating and isolating application
features and functions into container-enabled services, or microservices.

• Scalable: Container deployments can be automatically scaled up or down as workload requirements
change.

• Lightweight: Containers require fewer resources and less hardware than, for example, virtual machines.
So you can start containers quickly.

Because containers are lightweight, portable, and scalable, they offer benefits in
terms of developing and deploying applications. And organizations are using
more and more containers to modernize their applications.

Container Challenges

But, as you might expect, the more containers, the more complex it becomes to manage them.
Challenges include:

• Managing the life cycle of many containers


• Restarting failed containers
• Scaling containers to meet capacity
• Networking and load balancing

Kubernetes offers a solution for these challenges.

Orchestration Solution

Kubernetes (K8s) is an open-source platform that addresses container challenges by orchestrating containerized applications.

It manages, schedules, and automates resource use, failure handling, application availability,
configuration, and scalability.

Kubernetes provides an application programming interface (API) where you can define
container infrastructure using a declarative method.
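
For example, a minimal Deployment manifest declares a desired state (here, three replicas of a web server), and Kubernetes continuously works to make the cluster match it. All names and the image tag in this sketch are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # illustrative name
spec:
  replicas: 3                # desired state: three identical pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25    # illustrative image tag
```

You apply the manifest with kubectl apply -f deployment.yaml; if a pod fails, Kubernetes replaces it to restore the declared state.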

Modern Applications Page 362


Kubernetes from a Developer Perspective

This video is taken from a larger presentation that appears on the VMware Tanzu web site in the content library at https://tanzu.vmware.com/content/videos/build-manage-secure-a-multi-cloud-container-infrastructure-with-vmware-tanzu.

Transcript

So, think about why Kubernetes is becoming very popular or becoming the default choice. From
an application development perspective, if I have a multi-cloud strategy, or even if, let's say, I'm
a developer and I know my main function is to write code, to get behind the logic of developing
what I need to do for my business.

And in order to maintain these modern applications that are containerized, I either have to go
through different cloud providers, learn about their different APIs, and learn about the different
methods to manage the applications. Or, I can learn Kubernetes, and Kubernetes will in turn
figure out what is needed and work with all the different cloud providers.

So today it's easier from an application development perspective to write application code, give
the requirements that application needs in order to be stood up, in order to be lifecycle
managed, to Kubernetes through a simple file. And Kubernetes will deploy those applications,
create the back-end services needed to support that application, create microservices so that
those applications can talk with each other or they can talk to the outside world.


Now the way Kubernetes does this is that it talks to the back-end infrastructure or the cloud
provider that your Kubernetes cluster is running on. And then once you deploy an app, once
you tell Kubernetes to deploy an application, it is going to go ahead and work with the
southbound APIs for that particular cloud provider and deploy the necessary building blocks
needed for that application to be supported.

So, for example, if you said, here is a containerized application and it needs a storage volume,
what Kubernetes is going to do is go ahead, create that container. Let's say, if you are running
this on AWS, it's going to go ahead and create an elastic block storage volume, or an object
storage for that matter. If you're running in vSphere, for example, what Kubernetes is going to
do is go ahead and create a VMDK disk or a volume drive. Right. And, you know, not just create
that, using an API, talking to that infrastructure or cloud provider, but also go ahead and attach
those volumes to the right containers.

And so this is all happening behind the scene. From an application development perspective, I
don't have to individually learn all the vSphere APIs. I don't have to individually learn all the
AWS APIs in order to do so.

And that's what gives Kubernetes that power. It's kind of that singular infrastructure API. If you
learn that, then we don't really have to dive into a lot of these different cloud-centric APIs, and
you can really focus on what you're doing, which is writing application code.

Knowledge Check: Examples of Kubernetes Benefits

A small service company wants to develop its own mobile application, cost-effectively run the
application in its data center, and provide innovative services.

The new application will run in Docker containers on VMs, and Kubernetes will orchestrate the
containerized application.

Which examples illustrate benefits of using Kubernetes in this way? (Select three options)

The scheduling policies in Kubernetes dynamically match demand.

You can manually fix faults and failures to maintain troubleshooting knowledge on the
team.

Kubernetes services maintain only one version of the same application for consistency
across environments.

The DevOps team can easily port containers from the test environment to production,
accelerating the development and deployment of new features.



Kubernetes Full Stack
A Kubernetes environment has several layers. Understanding these layers can help you to build
Kubernetes environments.

Applications
At the application layer, users connect to the applications.

The infrastructure below this layer supports the running of the applications.
Containers

Containers consist of several application components, including images, CSS files, and service components.

The containers are run according to the Kubernetes layer.


Kubernetes

You control containers at this level by configuring Kubernetes and defining nodes,
pods, and the containers within them.

The Kubernetes control plane takes your configuration commands and relays those
instructions to the compute machines.

Kubernetes then orchestrates the containers.


Cluster API

The cluster API is required for the Kubernetes layer.

Cluster API orchestrates the creation, updating, and management of Kubernetes.


Virtualization

You can run Kubernetes on VMware vSphere, VMware NSX, and VMware vSAN, or
other virtualization software.

Among the considerations for this layer is how to manage hardware contention,
failure, and changes.
Hardware


The hardware is the physical infrastructure on which the stack runs.

You must consider which hardware components can adequately manage the whole
stack.

Knowledge Check: Kubernetes Layers


Which layer orchestrates the creation, updating, and management of Kubernetes? (Select one
option)

Applications
Containers
Kubernetes
Cluster API
Virtualization
Hardware

Kubernetes Namespaces
Namespaces are a way to organize clusters into virtual subclusters. They can be helpful when
different teams or projects share a Kubernetes cluster.

Any resource that exists within Kubernetes exists either in the default namespace or in a namespace that is created by the cluster operator.

Why use Kubernetes Namespaces?

• Provide teams or projects with their own virtual clusters without fear of impacting each
other’s work.

• Enhance role-based access controls (RBAC) by limiting users and processes to certain
namespaces.

• Enable the dividing of a cluster’s resources between multiple teams and users through
resource quotas.

• Provide an easy method of separating development, testing, and deployment of containerized applications, enabling the entire life cycle to run on the same cluster.
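
As a sketch of how namespaces and quotas look in practice, the following manifest creates a namespace for one team and caps its resource consumption. The names and limits are illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                # illustrative team namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"         # total CPU that pods in this namespace can request
    requests.memory: 8Gi
    pods: "20"                # maximum number of pods in the namespace
```

Commands can then be scoped to the namespace, for example kubectl get pods -n team-a.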

Supervisor Cluster
The Kubernetes cluster that is created when activating the Tanzu Kubernetes Grid service is
called a Supervisor Cluster.

A Supervisor Cluster is composed of the control plane and the compute machines, or worker
nodes. Each node runs pods, which are made up of containers.

Supervisor Cluster Components

The control plane is responsible for maintaining the desired state of the cluster, for example,
the applications or workloads that should be running and the images that they should use.

Compute or node machines run the applications and workloads.

Control Plane Components:

• kubectl: Command-line interface (CLI) to the Kubernetes API
• etcd: A key-value datastore where Kubernetes cluster data is stored, for example, cluster configuration and current state.
• API Server: Entry point into the Kubernetes platform
• Controller Manager: Runs controllers that watch the API for changes and respond with appropriate actions
• Scheduler: Balances pods across nodes

Node Components:

• kubelet: Communicates with the control plane to ensure that containers are running in a pod and executes actions that the control plane requests.
• kube-proxy: Configures networking rules to route traffic to containers
• Image registry: Registry server that stores and distributes the container images that Kubernetes relies on.
• Container runtime engine: Runs pods when requested by kubelet
• Pod: A single instance of an application. Each pod is made up of a container or a series of tightly coupled containers, along with options that govern how the containers are run.

Knowledge Check: Kubernetes Control Plane


You want to create a pod using the data in a YAML file.

Which function does each Kubernetes control plane component perform to create the pod?

Knowledge Check: Kubernetes Node Components


Pods are scheduled and orchestrated to run on nodes. Which node component starts the pods
and assigns resources from node to container?



Correct Answer: kubelet

The kubelet component starts the pod and assigns resources from the node to the container.

How?
The kubelet component receives pod specifications from the API server. It uses the specs to
ensure that pods and their containers are running as expected.

It also reports to the control plane on pod health and status.
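
A pod specification of the kind that the kubelet receives from the API server might look like the following sketch (the pod name, image, and resource values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod             # illustrative pod name
spec:
  containers:
  - name: app
    image: nginx:1.25        # illustrative image
    resources:
      requests:
        cpu: 100m            # resources the kubelet assigns from node to container
        memory: 128Mi
```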



Kubernetes and VMware Tanzu
Monday, January 30, 2023 10:19 AM

Learner Objectives
After completing this lesson, you should be able to:

• Describe the functions of VMware Tanzu products in Kubernetes life cycle management
• Recognize use cases for VMware Tanzu editions

VMware Tanzu and Kubernetes Life Cycle

Kubernetes is the key technology in the VMware Tanzu portfolio.

Introduction

VMware Tanzu products and services help to build, run, and manage modern applications by
automating the delivery of containerized applications and managing them in production with
Kubernetes.

Step 1: Build



To build applications, you can use several products:

• Spring is a framework for writing high-performing and easily testable Java code.
• VMware Tanzu® Application ServiceTM provides a development and deployment platform
across clouds.
• VMware Tanzu® Build ServiceTM automates container creation, management, and
governance.
• VMware Application CatalogTM provides a customizable selection of open-source software
that is maintained and tested continuously for use in production environments.

Step 2: Run

To run applications, you can use the following products:

• VMware Tanzu® Kubernetes GridTM runs Kubernetes-orchestrated containers across multiple cloud infrastructures. It automates the life cycle management of multiple Tanzu Kubernetes clusters.

• VMware Tanzu® Kubernetes GridTM Integrated Edition is a Kubernetes-based container solution with advanced networking, a private container registry, and life cycle management.

• VMware vSphere® with VMware Tanzu® provides a Kubernetes experience that is tightly
integrated with vSphere. vSphere runs Kubernetes workloads natively on the hypervisor
layer.

vSphere with Tanzu also contains multiple services that provide access to infrastructure
through a Kubernetes API.



Tanzu Kubernetes Grid can be run in your private cloud, in the public cloud, or in a highly
distributed edge environment.

Step 3: Manage

To manage applications, you can use the following products:

• VMware Tanzu® Mission ControlTM is a centralized management platform for consistently operating, managing, and securing Kubernetes infrastructure and modern applications across teams and clouds. It provides a global view of all of the Kubernetes clusters. You can use the resource hierarchy to manage and enforce consistent policies across Kubernetes clusters.

• VMware Tanzu® ObservabilityTM by Wavefront provides insights into the performance of modern applications through analytics data.

• VMware Tanzu® Service MeshTM Advanced edition provides consistent control and security for microservices, end users, and data across all your clusters and clouds.

Knowledge Check: VMware Tanzu Portfolio

Build: Tanzu Build Service, Tanzu Application Service, Tanzu Application Catalog

Run: vSphere with Tanzu, Tanzu Kubernetes Grid

Manage: Tanzu Observability, Tanzu Service Mesh, Tanzu Mission Control

VMware Tanzu Editions


VMware Tanzu editions package capabilities of the VMware Tanzu portfolio into clearly defined
solutions.

VMware Tanzu Basic

VMware Tanzu Basic provides a straightforward implementation of Kubernetes in vSphere.

With this edition, you can provision clusters directly from vCenter Server and run VMs and
containers side-by-side.

VMware Tanzu Standard

VMware Tanzu Standard is for organizations that want to operate Kubernetes and container
solutions across multiple clouds.

Whereas VMware Tanzu Basic is intrinsically tied to vSphere, VMware Tanzu Standard can
extend a Kubernetes distribution across on-premises and public clouds.

With VMware Tanzu Standard, you can operate one Kubernetes distribution anywhere and
manage it across all your Kubernetes clusters.

VMware Tanzu Advanced

VMware Tanzu Advanced simplifies and secures the container life cycle so that teams can
deliver modern applications at scale on-premises and in the public cloud.


It adds a comprehensive global control plane with observability and a service mesh, contains
advanced load balancing, and provides developers with frameworks, data services, an image
catalog, and automated build function.

Knowledge Check: Selecting an Edition


Which edition might your team use?



Kubernetes Clusters
Monday, January 30, 2023 10:41 AM

Learner Objectives
After completing this lesson, you should be able to:

• Recognize the tools for building Kubernetes clusters


• Identify steps in deploying a Kubernetes Cluster
• Use Kubernetes commands

Tanzu Kubernetes Cluster

A Tanzu Kubernetes cluster is a full distribution of the open-source Kubernetes container orchestration platform that is built, signed, and supported by VMware.

You can provision and operate Tanzu Kubernetes clusters on the Supervisor Cluster by using the Tanzu Kubernetes Grid Service.

Tools for Building Kubernetes Clusters

You can use different tools to build a Kubernetes cluster.

Each tool has different goals. For example, in an enterprise Kubernetes deployment, you
typically use kubeadm and cluster API. And in a development environment, you typically use
minikube and kind.

You use kubectl for both development and enterprise goals.

kubectl
With kubectl, you run commands against Kubernetes clusters. For example, you can use
kubectl to fetch all the Pods running in a cluster.
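
For example, the following commands (run against a live cluster with kubectl configured) inspect pods and nodes; the pod name is illustrative:

```shell
# List all pods in every namespace
kubectl get pods --all-namespaces

# List the nodes in the cluster and their status
kubectl get nodes

# Show detailed information about one pod
kubectl describe pod demo-pod
```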



kubeadm
The kubeadm tool performs the actions necessary to get a minimum viable Kubernetes
cluster up and running quickly.

It is used for bootstrapping, not for provisioning machines or installing add-ons.

Cluster API
Cluster API is a declarative API specification that builds on top of kubeadm to add optional
support for managing Kubernetes cluster infrastructure and life cycle.

You use this tool for cluster provisioning, configuration, and management.

minikube
With minikube, you can run Kubernetes locally. This tool runs a single-node Kubernetes
cluster on your personal computer so that you can try out Kubernetes, or use it for daily
development work.

kind
You use kind for running local Kubernetes clusters using Docker container nodes.

This tool was developed for testing Kubernetes itself, but it can be used for local
development.

Cluster API uses Kubernetes-style APIs and patterns to automate cluster lifecycle
management for platform operators. In this way, deployment is consistent and
repeatable across a wide variety of infrastructure environments.

The supporting infrastructure, such as VMs, networks, load balancers, and virtual private clouds
(VPCs), as well as the Kubernetes cluster configuration, are defined in the same way that
application developers deploy and manage their workloads.

Cluster API works like this:

1. Cluster API controllers, which run on a Kubernetes cluster, receive Cluster API definitions
that specify the desired state of a new cluster.

2. Cluster API requests that a cloud provider create the cluster according to these
definitions.


Cluster CRDs
A custom resource definition (CRD) is a built-in resource that you use to extend the
Cluster API.

Each CRD represents a customization of a Kubernetes installation.

Example CRDs
Cluster: Describes a cluster

MachineDeployment: Provides declarative updates for Machines and MachineSets

MachineSets: Maintains a stable set of Machines running at any given time.

Machine: Defines an infrastructure component that hosts a Kubernetes node, for example, a VM.

MachineHealthCheck: Defines the conditions when a machine should be considered unhealthy.
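
As a sketch of how these CRDs are used, a Cluster object for the vSphere provider (CAPV) might look like the following. The API versions and field names follow the upstream Cluster API project; all resource names are illustrative:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: workload-01                      # illustrative cluster name
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]     # pod network for the new cluster
  controlPlaneRef:                       # resource that manages control plane machines
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: workload-01-control-plane
  infrastructureRef:                     # provider-specific cluster resource (CAPV)
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VSphereCluster
    name: workload-01
```

Cluster API controllers on the management cluster watch for this definition and drive the infrastructure provider to realize it.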

Management Cluster
A management cluster is a Kubernetes cluster that manages the lifecycle of workload
clusters.

It is also where one or more infrastructure providers run and where resources such as
machines are stored.

Infrastructure Providers
Cluster API infrastructure providers include cloud providers such as vSphere (CAPV), AWS (CAPA), and Azure (CAPZ).

They provide resources for running machines, for example, networking, load balancers, and firewall rules.

Workload Clusters
A workload cluster is a Kubernetes cluster whose lifecycle is managed by a management
cluster.

Deploying a Tanzu Kubernetes Cluster on VMware Cloud with Tanzu services

Deploy Tanzu Kubernetes Cluster (TKC) on VMware Cloud on AWS with Tanzu services

Video Transcript

In this video, we are going to deploy a Tanzu Kubernetes cluster, also known as the TKC,
into our vSphere dev namespace.

We will do so by logging in to our supervisor control plane address. Using the kubectl
vSphere plug-in, we will log in to our supervisor control plane address using the --server
parameter and then the user name, which in this case is [email protected].

Once logged in, we are now going to switch the Kubernetes context into our vSphere dev
namespace. Next, we use the kubectl get tkr, or Tanzu Kubernetes releases command to
show the available Kubernetes versions that are available for us to provision. In this
example, we can see that we have three versions that are supported: 1.20.2, 1.20.7, and
1.21.2.
Before we can provision our TKC, we must first create a YAML manifest that describes our
desired configuration. Here you can include the name of the TKC, the vSphere namespace
to deploy the cluster to, the version of Kubernetes, and then the topology configuration,
which controls the values for our control plane and worker nodes. This includes the t-shirt
sizes, the number of nodes, and also the VM storage policy to associate this TKC to.
To create our TKC, we go ahead and specify our kubectl apply command and provide our
YAML manifest. Here we can see that the TKC has been created.

To view the progress of our TKC, we can go ahead and do a kubectl get TKC, and we can
see the current status and whether or not the cluster is currently ready.

If we now switch to our vSphere UI to see what's happening from an infrastructure point
of view, we can see the TKC request has been received by our supervisor cluster, and it is
now retrieving the desired Kubernetes cluster from the OVAs from our vSphere content
library.

It is now cloning the individual VMs to construct the desired Kubernetes cluster, which is
going to be three control plane VMs and three worker nodes. This can take a few minutes
depending on the size of your Kubernetes cluster and also the desired configuration that
you have specified.

Let's now switch back to the console. If we run a kubectl get TKC, we can see that our
Kubernetes cluster is now fully realized with three control plane nodes and three worker
nodes. And the status is now ready.

To start using this Kubernetes cluster, we need to log in to the TKC. We go ahead and use
our kubectl vsphere command. But now we pass in two additional parameters, which is
the Tanzu Kubernetes cluster name, which in this case, is william-tkc-01, and also the
Tanzu Kubernetes cluster namespace, which is dev.

Once logged in, again we're going to switch the Kubernetes context to go into our Tanzu
Kubernetes cluster.

Using kubectl get nodes, we can confirm we are now switching to the context of our TKC.
And as we can see, there are three control plane VMs and three worker nodes. At this
point, we are now ready to start deploying an application.
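
The YAML manifest described in the video might look like the following sketch. The schema follows the run.tanzu.vmware.com/v1alpha1 TanzuKubernetesCluster API; the VM class and storage policy names are illustrative:

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: william-tkc-01                 # TKC name used in the video
  namespace: dev                       # vSphere namespace to deploy into
spec:
  distribution:
    version: v1.21.2                   # one of the versions listed by 'kubectl get tkr'
  topology:
    controlPlane:
      count: 3                         # three control plane VMs
      class: best-effort-small         # "t-shirt size" (illustrative)
      storageClass: vsan-default-storage-policy   # illustrative VM storage policy
    workers:
      count: 3                         # three worker nodes
      class: best-effort-small
      storageClass: vsan-default-storage-policy
```

Applying this manifest with kubectl apply -f tkc.yaml triggers the provisioning shown in the video, and kubectl get tkc reports the cluster status.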

Knowledge Check: Deploying a Tanzu Kubernetes Cluster


Which method can you use to deploy a Tanzu Kubernetes Cluster? (Select one option)

Deploy the Tanzu Kubernetes Cluster from the Namespace tab on vCenter Server

Create a YAML file that specifies the options for deploying the cluster and run the
appropriate kubectl apply -f command.

Set the kubectl context to a Tanzu Kubernetes cluster manually by using the kubectl config
use-context command.


Running Kubernetes Commands


After you deploy the Kubernetes cluster, you use the kubectl command-line tool to manage the
cluster. The kubectl CLI is available for Linux, macOS, and Windows operating systems.

For a description of commonly used commands, see Command line tool (kubectl) on the Kubernetes website at https://kubernetes.io/docs/reference/kubectl/.
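
For cluster troubleshooting, a typical first pass uses commands such as the following (run against a live cluster; the names in angle brackets are placeholders):

```shell
# Check control plane reachability and component endpoints
kubectl cluster-info

# Review node status, then recent cluster events in time order
kubectl get nodes -o wide
kubectl get events --sort-by=.metadata.creationTimestamp

# Drill into a failing pod and read its container logs
kubectl describe pod <pod-name> -n <namespace>
kubectl logs <pod-name> -n <namespace>
```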

Collecting a Support Bundle for Tanzu Kubernetes Clusters

To troubleshoot Tanzu Kubernetes cluster errors, you can run a utility to collect a diagnostic log
bundle.

VMware provides the TKC Support Bundler utility that you can use to collect Tanzu Kubernetes
cluster log files and troubleshoot problems.

To obtain and use the utility, access knowledge base article 80949.

Knowledge Check: Troubleshooting a Cluster


You are troubleshooting a cluster to determine why it is not working as expected. Which
commands do you run?



Tanzu Kubernetes Grid
Tuesday, January 31, 2023 9:34 AM

Learner Objectives
After completing this lesson, you should be able to:

• Explain Tanzu Kubernetes Grid concepts


• Recognize the functions of Tanzu Kubernetes Grid components
• Identify steps in the Tanzu Kubernetes Grid deployment workflow

Managing Tanzu Kubernetes Clusters


Tanzu Kubernetes Grid automates the life cycle management of multiple Tanzu Kubernetes
clusters.

You can deploy and run containerized workloads across software-defined data centers (SDDCs)
and public cloud environments, including vSphere, Microsoft Azure, and Amazon EC2.

A Tanzu Kubernetes Grid instance is a full deployment of Tanzu Kubernetes Grid.

A Tanzu Kubernetes Grid instance includes a management cluster, deployed Tanzu Kubernetes clusters, and the shared and in-cluster services that you configure.



Shared Services are also known as Tanzu Kubernetes Grid Extensions.

Initializing Tanzu Kubernetes Grid

A bootstrap machine initializes a Tanzu Kubernetes Grid instance by bootstrapping a management cluster on the cloud infrastructure of choice.

After bootstrapping the management cluster, the machine manages the Tanzu Kubernetes Grid instance.

A bootstrap machine is typically a VM on which you download and run the Tanzu CLI. The
machine includes the Tanzu CLI and installer interface, and Tanzu Kubernetes cluster plans.

Bootstrap Machine Components

Tanzu CLI
After a management cluster is created, the Tanzu CLI communicates with it to create,
scale, upgrade, and delete Tanzu Kubernetes clusters.

The installer interface is launched from the Tanzu CLI and is a graphical wizard that guides
you through the configuration of a management cluster.
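
As a sketch of the Tanzu CLI workflow (command names follow the standalone Tanzu CLI; the cluster name and plan are illustrative):

```shell
# Launch the graphical installer interface to configure a management cluster
tanzu management-cluster create --ui

# Create a workload cluster from the default "dev" cluster plan
tanzu cluster create my-cluster --plan dev

# List, scale, and delete workload clusters
tanzu cluster list
tanzu cluster scale my-cluster --worker-machine-count 5
tanzu cluster delete my-cluster
```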


Cluster Plans

A cluster plan describes the configuration with which to deploy a Tanzu Kubernetes
cluster.
It provides a set of configurable values, for example, the number of control plane machines and worker machines, virtual CPUs, memory, and other parameters.

You can customize default cluster plans and build new cluster plans.

Tanzu Kubernetes Grid Components

Consider the components of a Tanzu Kubernetes Grid instance in more detail.

Step 1: Management Cluster

The management cluster is a Kubernetes cluster that is the primary management and
operational center for the Tanzu Kubernetes Grid instance.

It runs cluster API to create the Tanzu Kubernetes clusters. And it is where you configure
the shared and in-cluster services that the clusters use.

NOTE: In vSphere with Tanzu, the supervisor cluster performs the role of the management
cluster.

Step 2: Tanzu Kubernetes Cluster



You can deploy Tanzu Kubernetes clusters from the management cluster by using the
Tanzu CLI.

Your application workloads run in the Tanzu Kubernetes clusters. Tanzu Kubernetes Grid
automatically deploys clusters to the platform on which you deployed the management
cluster.

You can manage the entire life cycle of Tanzu Kubernetes clusters by using the Tanzu CLI.

NOTE: The terms Tanzu Kubernetes cluster and workload cluster are used
interchangeably.

Step 3: Shared Services

Shared and in-cluster services are services that run in a Tanzu Kubernetes Grid instance,
providing authentication, ingress, logging, and service discovery.

Shared Services run on the management cluster or a dedicated shared-services cluster and are used by multiple Tanzu Kubernetes clusters.

In-cluster services are deployed to specific Tanzu Kubernetes clusters.

Bootstrapping Many Instances


A single bootstrap machine can bootstrap many instances of Tanzu Kubernetes Grid
across different environments, IaaS providers, and failure domains.

Knowledge Check: Tanzu Kubernetes Grid Components


Which function does each component perform?



Tanzu Kubernetes Grid Extensions (Shared Services)
Tanzu Kubernetes Grid includes binaries for tools that provide in-cluster and shared services.
All the provided binaries and container images are built and signed by VMware.

Tanzu Kubernetes Grid also includes signed and supported versions of open-source applications
to provide the container registry, networking, monitoring, authentication, ingress control,
logging, and service discovery that a production Kubernetes environment requires.

By default, Tanzu Kubernetes clusters implement Antrea for pod-to-pod networking.

Knowledge Check: Deploying Tanzu Kubernetes Grid


Given what you know about Tanzu Kubernetes Grid, which general steps do you take to deploy
an instance of the grid?



Multi-Cloud Operations with VMware Tanzu
Tuesday, January 31, 2023 9:55 AM

Learner Objectives
After completing this lesson, you should be able to:

• Choose appropriate VMware Tanzu solutions to address challenges in a multi-cloud environment.

Kubernetes can streamline container orchestration to avoid the complexities of interdependent system architectures.

But the operations team must still manage a Kubernetes runtime consistently across multiple
data centers and clouds.

Finding Solutions

You're a member of a cloud operations team that wants to operate a scalable Kubernetes environment across multiple clouds.

You must help make decisions about which tools and solutions to use for tasks and challenges that arise as your team works toward this goal.

Deploying Distributed Applications


Your first task is to deploy a distributed application across public and private clouds in different
locations.

Which solution is best for deploying the distributed application?

Tanzu Kubernetes Grid

Tanzu Service Mesh

Tanzu Observability

Why Tanzu Kubernetes Grid?

The cloud operations administrator asks you to explain the benefits of this solution to the team.

What do you say to the team?

It simplifies installation.
Tanzu Kubernetes Grid includes the tools and open-source technologies for deploying and
consistently operating a scalable Kubernetes environment across VMware private cloud,
public cloud, edge, or multiple clouds.

It reduces risk through automated lifecycle management of clusters


Using declarative, multi-cluster lifecycle management, a CLI tool, and upgrades and
patching, Tanzu Kubernetes Grid helps to manage large, multi-cluster Kubernetes
deployments and automate manual tasks to reduce risk.

It provides integrated platform services


Tanzu Kubernetes Grid streamlines the deployment of local and in-cluster services and
thereby simplifies the configuration of container image registry policies, monitoring,
logging, ingress, networking, and storage.

It aligns with open-source technologies


You can run containerized applications on key open-source technologies such as Cluster
API, Fluentbit, and Contour. The benefits are portability and support and innovation of the
global Kubernetes community.

It uses existing data center tooling and workflows


With Tanzu Kubernetes Grid Service integrated with vSphere, you use existing data center
tooling and workflows to give developers on-demand access to conformant Kubernetes
clusters in the private cloud and to manage the cluster lifecycle through automated, API-driven workflows.

How do we maintain consistent operations?

The operations team must monitor multiple endpoints to manage, scale, and maintain
resiliency and availability. But operational and remediation policies differ across clouds. And
security, auditing, and compliance are inconsistent.

Which solutions can help address these issues? (select two solutions)

Tanzu Mission Control


Tanzu Application Service
Tanzu Service Mesh
Spring framework

Tanzu Mission Control


Tanzu Mission Control is a centralized management platform. You can view all Kubernetes
clusters across your organization, running across many different environments.

An Overview of VMware Tanzu Mission Control

Video Transcript

Hi, I'm Corey Dinkens, a technical marketing manager with VMware. In this short video,
I'm going to give an overview of VMware Tanzu Mission Control.

Tanzu Mission Control is a multi-cloud control plane for consistently and efficiently
managing Kubernetes clusters. As a Kubernetes operator, I can see all the Kubernetes
clusters across my organization running across many different environments. We can even
create Tanzu Kubernetes Grid clusters directly from Tanzu Mission Control by registering a
management cluster on AWS, vSphere, and Azure.

We can also attach any Kubernetes clusters running anywhere for not only visibility but
also control of that cluster. You can see here how I have attached a variety of cluster
types, such as AKS, GKE, EKS, OpenShift, and Tanzu Kubernetes Grid on vSphere.

Here we see a few clusters that have an upgrade available. With Tanzu Mission Control,
you can easily upgrade your clusters with a click of a button in the UI. Tanzu Mission
Control can also be driven using REST API endpoints or the command line. Other life cycle
management tasks you can perform are scaling up nodes, scaling down nodes, and also
removing or adding node pools.

As you can have tens, hundreds, or even thousands of clusters, you need a way to easily
group them. Cluster groups allow you to organize your Kubernetes clusters into logical
groupings so you can apply a common set of policies to those clusters.
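The grouping model just described can be sketched in a few lines of Python. This is an illustrative sketch only; the class, cluster, and policy names are invented and do not reflect the actual Tanzu Mission Control API.

```python
# Sketch of cluster groups: a policy attached to a group applies to
# every cluster in that group. All names here are illustrative.

class ClusterGroup:
    def __init__(self, name):
        self.name = name
        self.policies = []   # policies shared by all member clusters
        self.clusters = []   # member cluster names

    def attach_policy(self, policy):
        self.policies.append(policy)

    def add_cluster(self, cluster_name):
        self.clusters.append(cluster_name)

    def effective_policies(self, cluster_name):
        # Every cluster in the group inherits the group's policies.
        return list(self.policies) if cluster_name in self.clusters else []

prod = ClusterGroup("prod")
prod.add_cluster("eks-east")
prod.add_cluster("tkg-vsphere")
prod.attach_policy("restrict-privileged-pods")

print(prod.effective_policies("eks-east"))
# ['restrict-privileged-pods']
```

The point of the sketch is the inheritance direction: you attach a policy once at the group, and every member cluster picks it up, which is what makes fleet-wide policy management practical.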

An example would be to align with business units or different environments such as dev,
test, or prod. With Tanzu Mission Control catalog, you can deploy Carvel packages to your
cluster with a click of a button, select your package from the catalog, and click INSTALL
PACKAGE. Public and private Carvel repositories are supported. Tanzu Mission Control has
access, image, network, security, and quota policies, and even custom policies that you
define yourself. The underlying policy engine for Tanzu Mission Control is Open Policy
Agent Gatekeeper (OPA Gatekeeper). Tanzu Mission Control provides centralized
declarative policy management for your organization. This allows fine-grained policy control
across your Kubernetes fleet, eliminating significant amounts of operational toil.

We can apply policies to nearly any organizational construct within Tanzu Mission Control,
such as an organization, a cluster group, a cluster, and a workspace. Policy insights
provides a centralized, holistic view of the current state of policy events in your
organization. You can view fleet-wide policy-related information, including sync issues and
violations.

As an operator, I'm responsible for the health of clusters across my organization. I can
view the baseline health of clusters, which is necessary information for operators. We can
also do this for workloads. This view shows all workloads across all of my clusters, and I
can quickly see their status at a glance.

Tanzu Mission Control integrates with industry-leading monitoring tools, such as VMware
Tanzu® Observability™ by Wavefront, a SaaS monitoring platform. You can easily open
Tanzu Observability from the current cluster you are viewing.

Tanzu Observability allows you to collect data from many services and sources across your
entire application stack. The included out-of-the-box dashboards are easily customized.
You can run preconfigured cluster inspections using Sonobuoy, an open-source
community standard. The conformance inspection validates the binaries running on your
cluster and ensures that your cluster is properly installed, configured, and working.

Modern Applications Page 390

The CIS benchmark inspection evaluates your cluster against the CIS benchmark for
Kubernetes published by the Center for Internet Security. Operators need to provide data
protection for the Kubernetes applications and the clusters that they run on.
Tanzu Mission Control data protection leverages the open-source project Velero under
the hood and enables operators to centrally manage data protection on their clusters
across multiple environments, easily backing up and restoring their Kubernetes clusters
and namespaces.

That completes this demonstration. Thank you for watching. For more information about
Tanzu Mission Control, please see Tanzu.vmware.com.
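The namespace-scoped backup and restore flow described in the transcript can be sketched as follows. This is a teaching sketch of the idea only, not Velero's actual implementation or API; here a "cluster" is just a dictionary of namespaces to resources, and the names are invented.

```python
# Sketch of namespace-scoped backup and restore, in the spirit of
# Velero (which Tanzu Mission Control data protection uses under the
# hood). Real Velero snapshots cluster resources and volumes.

def backup(cluster, namespaces):
    # Copy only the requested namespaces into a backup archive.
    return {ns: list(cluster[ns]) for ns in namespaces if ns in cluster}

def restore(cluster, archive):
    # Recreate backed-up namespaces, leaving other namespaces untouched.
    for ns, resources in archive.items():
        cluster[ns] = list(resources)
    return cluster

cluster = {"web": ["deploy/app", "svc/app"], "batch": ["job/nightly"]}
archive = backup(cluster, ["web"])
cluster["web"] = []                 # simulate accidental deletion
restore(cluster, archive)
print(cluster["web"])               # ['deploy/app', 'svc/app']
```

The useful property to notice is scope: backing up and restoring the `web` namespace never touches `batch`, which is what makes per-namespace data protection safe to run against a shared cluster.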

The operations team wants to integrate a monitoring tool with Tanzu Mission Control. Which
VMware Tanzu monitoring tool can you integrate? (Select one option)

Tanzu Observability
Tanzu Application Service
Tanzu Kubernetes Grid

Tanzu Service Mesh


Tanzu Service Mesh provides consistent connectivity and security for modern applications
across all Kubernetes clusters and clouds.

How does Tanzu Service Mesh work exactly?

Modern Applications Page 391
Some team members are curious
about how Tanzu Service Mesh
works. They're not as familiar with
this product as with Tanzu Mission
Control.

VMware Tanzu Service Mesh - Connectivity and Security for Modern Applications

Video Transcript

Applications are being transformed from monolithic architectures to microservices
architectures, running in single and multiple clouds. At the same time, there's a shift-left
culture happening, where security teams work closely with operations and developers to
bake security controls into the build stages of the CI/CD pipeline, a practice known as
DevSecOps. There is complexity involved with all three of these transformations.

With VMware Tanzu Service Mesh, application owners can connect, secure, and observe
distributed applications across end-users, microservices, APIs, and data. With Tanzu
Service Mesh, you can abstract the infrastructure layer from the application layer to
provide strong isolation using global namespace.

By onboarding applications to a global namespace, developers, operations, and security


gain consistent policy controls and operational visibility across single and multi-cloud
environments. Tanzu Service Mesh provides full-stack application connectivity services,
enabling application mobility, high availability, and automated application rollouts and
upgrades.

Modern Applications Page 392


Tanzu Service Mesh controls north-south traffic from end-users at the application edge
through mesh ingress and egress and east-west traffic between application workloads,
APIs, and data. Tanzu Service Mesh provides solutions that make API calls and services
more reliable.

Tanzu Service Mesh can automatically scale application instances up and down or
cloudburst to a standby cluster to meet the performance objectives for SLO compliance.

Tanzu Service Mesh provides dynamic behavior-based security to protect microservices,


APIs, and data, including end-to-end encryption, attribute-based access control, and API
threat detection and protection.

Tanzu Service Mesh provides operations teams with rich troubleshooting tools, including
multi-cloud topology maps and traffic flows and performance and health metrics. While
security teams gain insights from API baselining and drift detection, including API
parameter validation and security analytics that address behavioral anomalies,
unsanctioned usage, API threat detection, and PII detection.

Get advanced end-to-end application connectivity and security for modern distributed
applications with VMware Tanzu Service Mesh.

Learn more at tanzu.vmware.com/service-mesh.

Tanzu Service Mesh can be installed in Tanzu Kubernetes Grid clusters and third-party
Kubernetes-conformant clusters. And it can be used with clusters managed by Tanzu
Mission Control or clusters managed by other Kubernetes platforms and managed
services.

Global Namespaces

A key feature of Tanzu Service Mesh is the global namespace.

A global namespace abstracts an application from the underlying Kubernetes cluster


namespaces and networking.

How do global namespaces support cross-cluster and cross-cloud use cases?

Modern Applications Page 393


Global namespace example

With global namespaces, you can transcend infrastructure limitations and boundaries, and
securely stretch applications across clusters and clouds.

You get consistent traffic routing, application resiliency, and security policies for your
applications across cloud silos, regardless of where the applications are running.
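The idea can be sketched as a simple lookup: callers address a stable global service name, and the mesh resolves it to instances in whichever clusters currently host them. The service name, cluster names, endpoints, and function below are invented for illustration and are not the Tanzu Service Mesh API.

```python
# Sketch of the global namespace idea: one logical service name that
# resolves to instances running in different clusters and clouds.
# All names and addresses here are illustrative.

global_ns = {
    "checkout.acme.gns": [
        {"cluster": "tkg-on-vmc-aws", "endpoint": "10.10.1.5"},
        {"cluster": "tkg-on-dell-edge", "endpoint": "10.20.3.8"},
    ],
}

def resolve(service):
    # Callers use the stable global name; the mesh picks reachable
    # instances regardless of which cluster or cloud hosts them.
    instances = global_ns.get(service, [])
    return [i["endpoint"] for i in instances]

print(resolve("checkout.acme.gns"))  # ['10.10.1.5', '10.20.3.8']
```

Because the caller only ever sees `checkout.acme.gns`, the application can be stretched or moved across clusters without changing how its consumers address it.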

Solution Architecture

Your team's solution includes Tanzu Kubernetes Grid,


Tanzu Mission Control, and Tanzu Service Mesh.

The following diagram brings everything together.

Modern Applications Page 394


Tanzu Mission Control
Tanzu Mission Control is the central point from which you manage the Tanzu Kubernetes
Grid (TKG) clusters.

Tanzu Service Mesh


You use Tanzu Service Mesh to create a global namespace (NS) and for monitoring,
automation, policy management, and secure communications across the multi-cloud
infrastructure.

Tanzu Kubernetes Grid


Tanzu Kubernetes Grid is deployed independently in two distinct multi-cloud locations
that include a VMware Cloud on AWS SDDC and a VMware Cloud on Dell EMC Edge
location.

Modern Applications Page 395


Hands-On Practice: Deploying Tanzu Kubernetes Clusters
Tuesday, January 31, 2023 10:31 AM

Learner Objectives
After completing this lesson, you should be able to:

• Deploy a Tanzu Kubernetes cluster

In a series of interactive simulations, you perform tasks to deploy a Tanzu Kubernetes cluster
using Tanzu Mission Control:

1. Activate VMware Managed Kubernetes


2. Create Namespaces for Workload Clusters
3. Create the Tanzu Kubernetes Cluster
4. Log in to the Tanzu Kubernetes Cluster

https://labs.hol.vmware.com

VMware Cloud on AWS with Tanzu Services (HOL-2387-03-ISM)

Modern Applications Page 396


Module Summary
Tuesday, January 31, 2023 10:34 AM

Review the key concepts covered in this module:


• Kubernetes manages and automates resource use, failure handling, availability,
configuration, and scalability of containerized applications.

• Kubernetes control plane components manage your cluster, its state data, and its
configuration. The control plane interacts with individual cluster nodes using the kubelet,
an agent deployed on each node.

• VMware Tanzu products and services help to build, run, and manage modern applications
by automating the delivery of containerized applications and managing them in
production with Kubernetes.

• To create Kubernetes clusters, you can use the kubectl command line, Cluster API, and
kubeadm.

• Tanzu Kubernetes Grid automates the life cycle management of multiple Tanzu
Kubernetes clusters.

• The management cluster is a Kubernetes cluster that is the primary management and
operational center for the Tanzu Kubernetes Grid instance. Application workloads run in
Tanzu Kubernetes clusters.

• Tanzu Service Mesh provides consistent control, connectivity, and security for
microservices, end users, and data in multi-cluster and multi-cloud environments.

Additional Resources

• For more information about Kubernetes concepts, components, and commands, see the
Kubernetes website at https://kubernetes.io/docs/home/.

• For more information about VMware Tanzu products and solutions, see the VMware
Tanzu documentation at https://docs.vmware.com/en/VMware-Tanzu/index.html.

• For more information about multi-cloud solutions with Kubernetes and VMware Tanzu,
see "Make Your Move to Multi-Cloud Kubernetes with VMware Tanzu" on the YouTube
website at https://www.youtube.com/watch?v=aRfOxKqPm5o&t=339s.

• For information about Tanzu Kubernetes Grid and Tanzu Mission Control, see "Multi-
cluster and Multi-cloud Demo with TKG and TSM | VMware Tanzu" on the YouTube
website at https://www.youtube.com/watch?v=AJuaiZTn3OA.

Modern Applications Page 397


Network Virtualization Overview
Wednesday, February 1, 2023 9:07 AM

Review Network Virtualization section

Workload Mobility Page 398


Hybrid Linked Mode in VMware Cloud on AWS SDDC
Tuesday, January 31, 2023 2:19 PM

Learner Objectives
After completing this lesson, you should be able to:

• Explain uses for Hybrid Linked Mode in VMware Cloud on AWS SDDCs
• Identify login authentication options for VMware Cloud on AWS SDDCs
• Set up Hybrid Linked Mode using the VMware Cloud Gateway Appliance

Your organization wants to view


and manage applications across
both an on-premises environment
and a VMware Cloud on AWS
SDDC, from a single point.

The organization also has the


following requirements:

• For VMware Cloud on AWS to trust the on-premises users (one-way trust)

• To be able to link and unlink the environments as necessary

• To retain the separation between on-premises and VMware Cloud on AWS permissions

• To migrate workloads both to and from on-premises and VMware Cloud on AWS

What solution is flexible enough to meet all these requirements?

Workload Mobility Page 399

The answer is Hybrid Linked
Mode.

You can link a VMware Cloud on


AWS instance of VMware vCenter
Server with an on-premises
VMware vCenter Single Sign-On
domain.

In this way, you can jointly


Hybrid Linked Mode manage both VMware Cloud on
AWS and on-premises single sign-
on domains from one view.

Hybrid Linked Mode is a version of Enhanced Linked Mode that is built for VMware Cloud on
AWS.

Enhanced Linked Mode

This mode provides a


consistent operating
experience for managing and
consuming resources from
multiple vCenter Server
systems.

Using Enhanced Linked Mode, you can perform the following tasks:

• Connect multiple vCenter Server systems by using one or more VMware Platform Services
Controller appliances.

• View and search across all linked vCenter Server systems.

• Replicate roles, permissions, licenses, policies, and tags.

Hybrid Linked Mode

With Hybrid Linked Mode, you link your cloud vCenter Server system to a domain that has
multiple vCenter Server instances. These on-premises instances are themselves linked using
Enhanced Linked Mode.

All those instances are linked to a VMware Cloud on AWS SDDC.

Workload Mobility Page 401


How Does Hybrid Linked Mode Work?

With Hybrid Linked Mode, you can use a single VMware vSphere Client interface for both on-
premises and cloud deployment.

The vCenter Server instances are managed in separate vCenter Single Sign-on domains.

Hybrid Linked Mode creates a unidirectional trust between vSphere SSO domains. This trust cannot
be bidirectional.
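The one-way trust can be illustrated with a small sketch: the cloud SSO domain trusts on-premises identities, but not the reverse. The domain names and function below are invented for illustration only.

```python
# Sketch of Hybrid Linked Mode's unidirectional trust. The domain
# names are illustrative placeholders, not real configuration values.

# (cloud domain, on-premises domain): the cloud trusts on-prem users.
trusts = {("cloud.sso", "onprem.sso"): True}

def can_authenticate(user_domain, target_domain):
    # A user can log in to a target domain if it is their own domain,
    # or if the target explicitly trusts the user's domain.
    if user_domain == target_domain:
        return True
    return trusts.get((target_domain, user_domain), False)

print(can_authenticate("onprem.sso", "cloud.sso"))  # True: one-way trust
print(can_authenticate("cloud.sso", "onprem.sso"))  # False: not bidirectional
```

The asymmetry of the `trusts` lookup is the whole point: on-premises users can administer the cloud SDDC, while cloud identities gain no rights in the on-premises domain.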

Uses for Hybrid Linked Mode

Using Hybrid Linked Mode, you can perform the


following tasks:

• Use a single vSphere Client interface to view


and manage the inventories of both your on-
premises and VMware Cloud on AWS data
centers.

• Migrate workloads between your on-premises


data center and VMware Cloud on AWS.
vSphere Client interface
• Share tags and tag categories across vCenter
Server instances.

Knowledge Check: Hybrid Linked Mode Uses

Workload Mobility Page 403


Which statements accurately describe the uses of Hybrid Linked Mode? (Select two options)

Migrate workloads between your on-premises data center and VMware Cloud on AWS

Share tags and tag categories across vCenter Server Instances

Migrate workloads among environments to provide load balancing

Restart VMs on other vSphere hosts upon failure

Hybrid Linked Mode Configuration Options


You can configure Hybrid Linked Mode in different ways:

• From the vCenter Cloud Gateway Appliance


• From the VMware Cloud on AWS SDDC vSphere Client

Prerequisites for Both Methods

Before you configure Hybrid Linked Mode, you must meet several prerequisites. The following
prerequisites are common to both vCenter Cloud Gateway Appliance and VMware Cloud on
AWS SDDC:

• Verify that your on-premises data center and the VMware Cloud on AWS SDDC are
synchronized to an NTP service or other authoritative time sources.

• Configure an IPsec VPN connection between your on-premises data center and VMware
Cloud on AWS.

• Verify that the maximum latency between VMware Cloud on AWS and an on-premises
data center is 100 milliseconds round trip.

• Determine the on-premises users that you want to grant Cloud Administrator permissions
and add the users to a group within your identity source. Verify that this group can access
your on-premises environment.

• Ensure that you have credentials for a user who has a minimum of read-only access to the
base distinguished name (DN) for users and groups in your on-premises environment.

• Confirm that an on-premises DNS server is configured for your management gateway so
that it can resolve the FQDN for the identity source.

• Confirm that you have the credentials for your on-premises vSphere SSO domain.
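As one example, the 100-millisecond round-trip latency prerequisite can be checked against measured samples with a small helper. This sketch only evaluates samples you have already collected (for example, with ping or a monitoring probe); the function name is invented.

```python
# Sketch of verifying the Hybrid Linked Mode latency prerequisite:
# maximum round-trip latency between VMware Cloud on AWS and the
# on-premises data center must not exceed 100 milliseconds.

def meets_latency_requirement(rtt_samples_ms, limit_ms=100.0):
    # Judge by the worst observed round trip: every sample must stay
    # within the documented limit. An empty sample set fails, since
    # we cannot confirm the prerequisite without measurements.
    return bool(rtt_samples_ms) and max(rtt_samples_ms) <= limit_ms

print(meets_latency_requirement([42.1, 55.0, 61.3]))   # True
print(meets_latency_requirement([42.1, 120.4]))        # False
```

Using the maximum (rather than the average) is the conservative reading of the prerequisite: a single slow round trip is enough to indicate the link may not be suitable.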

Hybrid Linked Mode Prerequisites


Workload Mobility Page 405
You can access a detailed overview of the complete set of prerequisites for vCenter Cloud
Gateway Appliance and VMware Cloud on AWS SDDC in the product documentation.

Demonstration: Configuring Hybrid Linked Mode Using the vCenter


Cloud Gateway Appliance
To configure Hybrid Linked Mode using the vCenter Cloud Gateway Appliance:

1. Install the vCenter Cloud Gateway Appliance


2. Link your on-premises data center to VMware Cloud on AWS using the installed vCenter
Cloud Gateway Appliance.

Active Directory (AD) groups get mapped from your on-premises environment to the
cloud.

Deploying the vCenter Cloud Gateway and Configuring Hybrid Linked Mode in VMware Cloud
on AWS

Video Transcript

Welcome to the VMware Cloud on AWS quick start series. Wouldn't it be nice if you could
manage your on-premises and cloud inventories in a single pane of glass? Well, you're in
luck. You can maintain operational efficiency with the vCenter Cloud Gateway appliance.

Workload Mobility Page 407

I'm Jeremiah Megie with VMware. And in this video, I'll walk you through deploying this
appliance on-premises and configuring Hybrid Linked Mode so you can manage both
environments with ease.
It's important that we maintain consistent operations and simplified administration
between both on-premises and cloud environments. Hybrid Linked Mode enables
customers to obtain a single logical view and hybrid management of both on-premises
and VMware Cloud on AWS resources.

This is accomplished by deploying a virtual appliance on-premises called the vCenter


Cloud Gateway. Like Enhanced Linked Mode that you may be running on premises, this
appliance allows us to share some data between vCenters, but also allows us to maintain
some level of administrative separation, such as roles and permissions.

The vCenter Cloud Gateway receives automatic updates based on the version of the
connected SDDC. So there's never a need to manually patch or upgrade the appliance. If
you have multiple vCenters in the same SSO domain, you'll be able to view and manage all
of them in the same inventory, along with the cloud vCenter. Configuring Hybrid Linked
Mode also affords you the ability to perform migrations between environments directly
with the UI.

Deploying the appliance is very simple. From the Cloud Console, we can navigate to Tools
for the DOWNLOAD link. This redirects us to our My VMware download page, where we
can save the image locally and then run the installer.

There are two stages: Deploying the appliance and configuring Hybrid Linked Mode.

Click START and navigate through the wizard. Provide the on-premises vCenter FQDN and
credentials where you wish to deploy the appliance. Then select the data center, folder,
and cluster. Provide a VM name and root password, select the datastore and then
proceed to the network settings. Select the network or port group that the appliance
should be connected to. Then specify the FQDN, IP address, subnet, gateway, and DNS.
Specify your NTP servers as time sync is especially important. Provide your PSC
information, which may be the same as the vCenter information, depending on your
configuration. Finally, join the appliance to Active Directory by providing a domain name
and credentials. The appliance will be fully deployed and configured in about 10 to 15
minutes on average, but this varies based on your environment specifics.

Once the deployment is complete, we can start the configuration and we only need to
supply a small amount of information. Provide the cloud vCenter FQDN and the password
to the cloud admin account. Next, select the domain from the Identity source drop down
menu, then search for the Active Directory groups that you wish to provide administrative
access to. The linking process only takes a few minutes.

At this point, we can launch the vSphere Client by pointing our web browser at the
vCenter Cloud Gateway appliance, and then logging in with our Active Directory
credentials. As long as our user is in the group that we provided access to during the
configuration, we will be able to see all the vCenters in the same on-premises SSO
domains, as well as our cloud vCenter. Notice we can quickly get access to help
documentation and chat support from the UI.

Workload Mobility Page 409


As you can see from this walkthrough, deploying the appliance and establishing the link
between the vCenters is incredibly simple and provides enormous flexibility, allowing
customers to extend their data center, creating a truly hybrid cloud.

Be sure to visit VMware Cloud Tech Zone for the latest VMware Cloud on AWS resources.

Authentication Options

After Hybrid Linked Mode is configured, you can log in to the vSphere Client from the VMware
Cloud on AWS console or from the vCenter Cloud Gateway Appliance.

From the VMware Cloud on AWS Console

From the VMware Cloud on AWS console (also referred to as the VMware Cloud console), open
the vSphere Client and log in as the default Cloud Administrator account (or another user with
Cloud Administrator permissions).

From the vCenter Cloud Gateway Appliance

From the vCenter Cloud Gateway Appliance, launch the vSphere Client and log in with Active
Directory credentials.

The user account should be in the AD group that you provided access to during the Hybrid
Linked Mode configuration. In this way, you can view all vCenter Server instances in the on-
premises SSO domains and in your cloud vCenter Server instance.

You can configure Hybrid Linked Mode from your SDDC if your on-premises LDAP service is
provided by a native Active Directory (Integrated Windows Authentication) domain or an
OpenLDAP directory service.
This step is optional when configuring Hybrid Linked Mode from the Cloud Gateway Appliance,
but adding an identity source does allow you to configure users or groups with a lesser level of
access than Cloud Administrator.

Configuration Options Summary

You can configure Hybrid Linked Mode from the vCenter Cloud Gateway Appliance or from the
VMware Cloud on AWS SDDC vSphere Client.

Compare the two methods.

Configuring Hybrid Linked Mode from the Cloud Gateway Appliance:

• Centralized administration is available through the vSphere Client that is hosted by the
vCenter Cloud Gateway Appliance on-premises.
• Accessing the vSphere Client in the VMware Cloud on AWS SDDC does not reveal your
on-premises inventory.
• Minimal overhead is required in the on-premises data center.
• On-premises vSphere version must be vSphere 6.5d or later.

Configuring Hybrid Linked Mode from the vSphere Client:

• Centralized administration is available through the vSphere Client that is hosted on
VMware Cloud on AWS.
• Accessing the on-premises-hosted vSphere Client does not show your VMware Cloud on
AWS inventory.
• Identity Management connection requests must traverse a VPN connection (either across
the Internet or AWS Direct Connect) when using an on-premises domain controller.
• On-premises vSphere versions must be 6.5d or later.

Workload Mobility Page 411

When you configure Hybrid Linked Mode from VMware Cloud on AWS, the Identity Management
connection requests can increase network traffic charges and application latency.

Consequently, the vCenter Cloud Gateway Appliance option is more efficient.

Knowledge Check: Hybrid Linked Mode Configuration


You can use two methods to configure Hybrid Linked Mode. Do you know the difference
between the two methods?

Workload Mobility Page 413


Configure Networking Security in VMware Cloud on AWS
Wednesday, February 1, 2023 9:08 AM

Review Configure Network Security in VMware Cloud on AWS

Workload Mobility Page 415


Migration Solutions
Tuesday, January 31, 2023 3:29 PM

Learner Objectives
After completing this lesson, you should be able to:

• Configure different types of migrations

• Perform different types of migrations

Why Migrate Data?


The most common use cases for data migration are cloud migration, data center extension, and
disaster recovery.

Cloud migration can be application specific or data center


wide. A cloud migration might be triggered by an
infrastructure refresh.

Data center extension use cases include:

• Footprint expansion
• On-demand capacity
• Testing and development

Workload Mobility Page 416


You might migrate data in the following situations:

• Implementing a new disaster recovery solution


• Replacing existing disaster recovery solution
• Complementing existing disaster recovery solution

Your organization plans to use cloud migration to move a limited set of mobile applications to
public cloud architecture for hosting and DevOps management.

What challenges does cloud migration pose?

Cloud Migration Challenges

Cloud migration presents several challenges that can have negative outcomes.

You can choose from different migration solutions that help to minimize the negative
outcomes of cloud migration, achieve zero downtime, provide live migration of workloads
from one server to another, and ultimately ensure business continuity.

Workload Mobility Page 417


Are you familiar with different migration methods, for example, VMware HCX®, hot and cold
migrations, Enhanced vMotion™ Compatibility, and content library migrations?

Each method can be used to achieve different goals. Which method do you think can help
achieve the following goals?

• VMware HCX: Simplify app migration, workload balancing, and business continuity.
• Hot and Cold Migration: Move powered-on or powered-off VMs between on-premises and
cloud environments.
• Advanced Cross vCenter vMotion: Migrate workloads between vCenter Server instances.
• Content Library: Share OVF templates, ISO images, and scripts across vCenter Server
instances.

Migration Solutions

The following migration solutions are available for use within an SDDC or between SDDCs:

• VMware HCX
• Live Migration
• Cold Migration
• Content Library
• Advanced Cross vCenter vMotion
• Enhanced vMotion Compatibility
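As a rough illustration of how these options map to common situations, the following sketch encodes a simplified decision rule. The rules are a teaching simplification, not official VMware guidance, and the function name is invented for this example.

```python
# Illustrative helper for picking a migration approach from the
# options listed above. A simplified teaching rule, not official
# VMware sizing or migration guidance.

def suggest_migration(powered_on, vm_count, cross_sso_domain):
    if powered_on and vm_count > 1:
        # HCX bulk migration (or replication-assisted vMotion) moves
        # many live VMs in one planned event.
        return "VMware HCX bulk migration"
    if powered_on and cross_sso_domain:
        return "Advanced Cross vCenter vMotion"
    if powered_on:
        return "live migration (vSphere vMotion)"
    return "cold migration"

print(suggest_migration(powered_on=True, vm_count=10, cross_sso_domain=False))
# VMware HCX bulk migration
```

In practice the choice also depends on licensing, network connectivity, and downtime tolerance; the sketch only captures the power-state and scale dimensions discussed in this lesson.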

VMware HCX
VMware HCX is an application mobility platform that helps simplify application migration,
workload rebalancing, and business continuity across data centers and clouds.

VMware HCX is available in cloud SDDCs, such as VMware Cloud on AWS, Azure VMware
Solutions, and Google Cloud VMware Engine.

Workload Mobility Page 418


VMware HCX supports migration from vSphere 6.0+ to VMware Cloud on AWS without introducing
application risk or requiring complex migration assessments.

Key Capabilities

VMware HCX includes the following features:

• vSphere 6.0+ to any current vSphere version on a cloud or modern data center
• Built-in WAN optimized links for migration across the Internet or WAN
• Built-in scheduler to determine replication transfer time
• Bidirectional migration
• Support for VMware vSphere Distributed Switch and Cisco Nexus 1000v switch
• Internet support or AWS Direct Connect support (if using VMware Cloud on AWS) for bulk
migration and VMware vSphere vMotion

Demonstration: Using VMware HCX to Migrate Workloads

In this demonstration, HCX is deployed in a VMware Cloud on AWS SDDC. Then, virtual
machines are migrated from the on-premises data center to the VMware Cloud on AWS SDDC.

Migrating to a VMware Cloud on AWS SDDC Using VMware HCX

Workload Mobility Page 419


Video Transcript

VMware HCX is available for you when you start using VMware Cloud on AWS. Let's see
how you can migrate your workloads to the VMware Cloud on AWS SDDC using VMware
HCX.

To deploy VMware HCX, go to Add Ons, OPEN HCX, click DEPLOY HCX, and CONFIRM to
start the deployment. HCX Cloud Manager appliance will be deployed and configured in
the SDDC. Network and compute profiles will also be created during this process. Once it's
finished, open HCX and log into the HCX UI using the same vCenter credentials.

Let's explore what the deployment has done, starting with site pairing. There are no site
pairings at the moment. There isn't any service mesh configured either. But if you go to
network profiles, you will see some network profiles that have been created.

Let me take a moment here to talk about HCX connectivity requirements.

VMware HCX creates a VPN tunnel between the on-premises site and the VMware Cloud
on AWS SDDC. HCX can either use the public Internet or a dedicated connection like AWS
Direct Connect. If you have Direct Connect, the first network profile here called
directConnectNetwork1 is the one that HCX will use. If you want HCX to use the Internet,
then the second network profile here called externalNetwork will be used. The last profile
provides network details for the HCX appliances.

Now, looking at compute profiles, there is one created by default. It shows you the HCX
services available with VMware Cloud on AWS, such as HCX Interconnect, Network
Extension and Bulk Migration.
So, VMware HCX is ready on your VMware Cloud on AWS, but you need to deploy an HCX
Connector appliance in the on-premises environment. I already deployed an HCX
connector on premises. So, let's go to the on-premises vSphere Client.

Workload Mobility Page 420

In the vSphere Client, click Menu and select HCX. Here I am now in the on-premises HCX
UI. I am here because HCX configuration and migration need to be initiated from the
source site, which is your on-premises site.

First thing you have to do is configure the site pairing. Go to Site Pairing. Click ADD A SITE
PAIRING. Here you provide details of HCX deployed in VMware Cloud on AWS, Click
CONNECT.

Now that we have a site pairing, let's configure the service mesh. Select the VMware
Cloud on AWS as the destination site. Next, select compute profiles for each site. For
VMware Cloud on AWS site, you can select the compute profile that has been created by
default. Here you can select the HCX services to activate. The availability of these services
depends on licensing. Depending on the services selected, appropriate appliances will be
deployed automatically by VMware HCX in both sites.

Let's click CONTINUE. This is an optional step. You can choose a specific uplink network
profile for the HCX appliances. But I'm going to leave them as is. This is another optional
advanced configuration that I will leave as is, and same thing here. So, I'll click CONTINUE.
Here, you can review the topology. This diagram displays all the HCX appliances that will
be deployed at each site. Next, let’s name the service mesh. Click FINISH.

It will take a little bit of time for the service mesh between the on-premises site and the
VMware Cloud on AWS SDDC to be created. You can also track the progress by going to
tasks. Once it's done, we can go to the VMware Cloud on AWS SDDC. You will see some
HCX appliances that have been deployed automatically.

Now let's go back to the on-premises HCX UI and use HCX Network Extension to migrate
virtual machines from the on-premises environment to the VMware Cloud on AWS SDDC.
By using HCX Network Extension, you can migrate workloads without changing the
machine IP addresses.

Let's go to Network Extension. Click EXTEND NETWORKS. I'm going to extend the network
named VLAN-10-Apps. Click NEXT. Here, I'm going to enable HCX Mobility Optimized
Networking. This allows virtual machines in the VMware Cloud on AWS SDDC to use the
NSX-T router in that SDDC as a default gateway, instead of having to use the on-premises
router as a default gateway. This optimized routing prevents network traffic hairpinning
between the sites. Next, let's provide the gateway IP address and click
SUBMIT. Refresh the page and you can now see the progress.

Once the network extension has been completed, we can migrate virtual machines on
that network to the VMware Cloud on AWS SDDC. Go to Migration and click MIGRATE.
Select the destination site. We'll add a couple of SQL servers to migrate. We can name this
mobility group SQL-servers-group-01. Mobility groups allow you to implement migration
events that you've planned.

Select the destination compute container. Select the destination storage. Here, I'm going
to choose vMotion for these virtual machines. Select the destination folder, and you're
ready to migrate. You can validate or just click GO to start the migration. HCX will still
validate before the actual migration starts. You can expand the mobility group for more
details. You can also expand each virtual machine to view detailed migration events. I
will fast forward through the migration progress.

Workload Mobility Page 421

One thing to note here is that vMotion migrates one virtual machine at a time. If you want
to migrate multiple virtual machines at once, you can use bulk migration or replication-
assisted vMotion.

Now that the migration is complete, let's go to the VMware Cloud on AWS SDDC. Here,
you can see the two SQL servers that have migrated from the on-premises environment.

VMware HCX supports bulk migrations, live migrations, and cold migrations on vSphere 6.5 and
later.

Knowledge Check: VMware HCX Capabilities


Which of the following are key capabilities of VMware HCX? (Select two options)

Unidirectional migration
Built-in WAN optimized links for migration across the Internet or WAN
Built-in scheduler to determine replication transfer time
Available only in the latest version of vSphere
Support for vSphere standard switches only

Live Migrations
You can move powered-on VMs between your on-premises environment and your cloud SDDC.
This type of migration is also known as a live, or hot, migration.

vSphere vMotion and VMware vSphere® Storage vMotion® are the underlying technologies in a
live migration:

• vSphere vMotion migrates a powered-on VM from one host to another. With vSphere
vMotion, the entire state of the VM is moved from one host to another, but the data
storage remains in the same datastore.

• vSphere Storage vMotion migrates a powered-on VM from one datastore to another.


During this type of migration, the VM does not change the host that it runs on.

Live Migrations with VMware Cloud on AWS

If you are using VMware
Cloud on AWS, you can use
Hybrid Linked Mode to
perform live migrations
between your on-premises
data center and your VMware
Cloud on AWS SDDC.

Checklist for Live Migrations with VMware Cloud on AWS

If you meet several prerequisites, you can perform live migrations to your VMware Cloud on
AWS SDDC. The main requirement is that you enable Hybrid Linked Mode and establish an L2
VPN between your on-premises environment and your cloud SDDC.

To perform vSphere vMotion migrations using Hybrid Linked Mode, you verify the following
settings:

• Virtual machine is powered on

• The source to destination settings must have a minimum bandwidth of 250 Mbps and
maximum latency of 100 milliseconds RTT.

• IPsec VPN is configured for the management gateway.

• L2 VPN or AWS Direct Connect is configured between the on-premises environment and
the cloud SDDC.

• vSphere Distributed Switch 6.x or a standard switch is used in the on-premises
environment.

• Firewall rules are set for on-premises and cloud SDDCs.

• Virtual machine hardware version 9 or later is required.

• Enhanced vMotion Compatibility is considered. This feature prevents vSphere vMotion
migrations from failing because of incompatible CPUs or vGPUs. Enhanced vMotion
Compatibility defines a common baseline for CPU feature sets and GPU feature sets.

• Source and destination management network IP address families must match. You cannot
migrate a virtual machine from a host that is registered to vCenter Server with an IPv4
address to a host that is registered with an IPv6 address.
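As a rough illustration, the checklist above can be expressed as a validation helper. This is
only a sketch: the dictionary keys and the `check_live_migration_prereqs` function are
illustrative names, not part of any VMware API; the thresholds come from the list above.

```python
# Sketch: validate the live-migration prerequisites listed above.
# Field names are illustrative, not part of any VMware API.

def check_live_migration_prereqs(env: dict) -> list:
    """Return a list of unmet prerequisites for a live migration."""
    problems = []
    if not env.get("vm_powered_on"):
        problems.append("virtual machine must be powered on")
    if env.get("bandwidth_mbps", 0) < 250:
        problems.append("source-to-destination bandwidth must be at least 250 Mbps")
    if env.get("latency_ms_rtt", float("inf")) > 100:
        problems.append("round-trip latency must not exceed 100 ms RTT")
    if not env.get("mgmt_ipsec_vpn"):
        problems.append("IPsec VPN must be configured for the management gateway")
    if not (env.get("l2_vpn") or env.get("direct_connect")):
        problems.append("L2 VPN or AWS Direct Connect must be configured")
    if env.get("vm_hardware_version", 0) < 9:
        problems.append("VM hardware version 9 or later is required")
    if env.get("source_ip_family") != env.get("dest_ip_family"):
        problems.append("management network IP address families must match")
    return problems

ok_env = {
    "vm_powered_on": True, "bandwidth_mbps": 400, "latency_ms_rtt": 40,
    "mgmt_ipsec_vpn": True, "l2_vpn": True, "vm_hardware_version": 13,
    "source_ip_family": "IPv4", "dest_ip_family": "IPv4",
}
print(check_live_migration_prereqs(ok_env))  # []
```

An empty list means every prerequisite in the checklist is satisfied; otherwise the returned
messages identify what to fix before attempting the migration.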

Performing a Live Migration with VMware Cloud on AWS

To migrate a powered-on VM from an on-premises environment to a VMware Cloud on AWS
SDDC, you must choose the correct migration type in the Migrate wizard.

The Change both compute resource and storage option migrates the VM from a host and
datastore in the on-premises environment to a host and datastore in the VMware Cloud on
AWS SDDC.

When this migration type is selected, vSphere vMotion and vSphere Storage vMotion are used
to migrate the powered-on VM from source to destination.

Demonstration: Performing a Live Migration with VMware Cloud on AWS

Video Transcript

In the vSphere Client, you use vSphere vMotion to migrate a VM from the on-premises
environment to the VMware Cloud environment.

1. Navigate to the Hosts and Clusters view in the vSphere Client.


2. In the Menu drop-down menu, select Host and Clusters.
3. In the left pane, expand the vSphere inventory tree.
4. Migrate the web-1a virtual machine from on-premises to VMware Cloud.
5. Right-click web-1a and select Migrate.
6. Select Change both compute resource and storage and click NEXT.
7. Expand the VMware Cloud vCenter Server inventory and select Compute-
ResourcePool.
8. Click NEXT.
9. Select WorkloadDatastore and click NEXT.
10. Select the Workloads folder and click NEXT.
11. In the Destination Network drop-down menu, select VLAN10_SDDC.
12. Click NEXT.
13. Select Schedule vMotion with high priority (recommended) and click NEXT.
14. On the Ready to complete page, click FINISH.
15. Monitor the Recent Tasks pane and wait for the Relocate virtual machine task to
finish.
16. Access the three-tier application.
17. Open a browser tab to the web-1a front-end at http://web-01.vclass.local/cgi-bin/app.py.
18. The application loads.
19. Close the browser tab to the web-1a front-end.

If errors occur during migration, the virtual machine reverts to its original state and location.

Cold Migration
Using cold migration, you can move powered-off VMs between your on-premises environment
and your cloud SDDC.

Cold migration is best used for non-production workloads, for example, development or test
workloads, where business continuity is least impacted by downtime.

Cold Migrations with VMware Cloud on AWS


If you are using VMware Cloud on AWS, you can use Hybrid Linked Mode to perform cold
migrations between your on-premises data center and your VMware Cloud on AWS SDDC.

Checklist for Cold Migrations with VMware Cloud on AWS


To perform cold migration, Hybrid Linked Mode must be running between your on-premises
environment and the VMware Cloud on AWS SDDC.

Before performing a cold migration, you verify the following settings:

• Virtual machine is powered off

• IPsec VPN is configured for the management gateway

• On-premises vSphere 6.5.0d or later or vSphere 6.0 U3 or later is used

• Firewall rules are set for on-premises and cloud SDDCs

• vSphere Distributed Switch 6.x or a standard switch is used

Live and Cold Migrations


For more information on live (hot) and cold migrations, access the chapter on migrating VMs in
the VMware vSphere product documentation.

Advanced Cross vCenter vMotion


Advanced Cross vCenter vMotion helps to migrate virtual workloads between vCenter Server
instances, without the requirement for Enhanced Linked Mode (ELM) or Hybrid Linked Mode
(HLM). You can migrate VMs between vCenter Server instances that are in different single sign-
on (SSO) domains.

A common scenario for this feature is to migrate workloads from an on-premises data center to
a VMware Cloud on AWS SDDC.

To perform an Advanced Cross vCenter vMotion, use the Migrate wizard in the vSphere Client.

Step 1: Select Migration Type

From the Migrate wizard in the vSphere Client, select Cross vCenter Server export for the
migration type.

Step 2: Select Target vCenter Server

Configure the target vCenter Server. Either a new vCenter Server is connected, or a saved
connection is selected.

Saved vCenter Server entries do not persist. They are retained only for the current user session,
which is convenient when you must run multiple migration operations.

Step 3: Select Other Wizard Options

When you select a compute resource, a list of target vCenter Server data centers, clusters, and
hosts appears.

The other wizard options are similar to the compute resource and storage steps.

During the storage step, you can select the correct destination storage.

You might also need to change the VM network to match the target configuration.

The compatibility checks are processed with each step to ensure a successful migration.

Step 4: Start the Migration

After the appropriate resources are selected and compatibility checks are run, you can start the
migration.

Monitor the migration progress in the Recent Tasks pane.

Step 5: Monitor the Migration

Because the live migration occurs over multiple vCenter Server instances, you can view the
migration in process in the current environment, as well as view the receive operation in the
target vCenter Server instance.

Enhanced vMotion Compatibility


Enhanced vMotion Compatibility is not a method for migrating virtual machines. You can use
the Enhanced vMotion Compatibility feature to help ensure vSphere vMotion compatibility for
the hosts in a cluster.

Enhanced vMotion Compatibility ensures that all hosts in a cluster present the same CPU
feature set to VMs, even if the actual CPUs on the hosts differ. Using Enhanced vMotion
Compatibility prevents migrations with vSphere vMotion from failing because of incompatible
CPUs.

The Enhanced vMotion Compatibility feature works differently at the host cluster and VM
levels.

Cluster Level
When you migrate a VM out of the Enhanced vMotion Compatibility cluster, a power cycle
resets the VM EVC mode.

If a VM is in an Enhanced vMotion Compatibility cluster, and per-VM Enhanced vMotion
Compatibility is enabled, the EVC mode of the VM cannot exceed the mode in the
Enhanced vMotion Compatibility cluster in which the VM runs.

The baseline feature set that you configure for the VM cannot have more CPU features
than the baseline feature set applied to the hosts in the Enhanced vMotion Compatibility
cluster.

VM Level
You can change the per-VM EVC mode only when the VM is powered off.

When you configure Enhanced vMotion Compatibility at the VM level, the per-VM EVC
mode overrides cluster-based Enhanced vMotion Compatibility.

If you do not configure per-VM Enhanced vMotion Compatibility, when you power on the
VM, it inherits the EVC mode of its parent Enhanced vMotion Compatibility cluster or
host. The EVC mode becomes an attribute of the VM.

Enhanced vMotion Compatibility Example

If you configure a cluster with the Intel Merom generation EVC mode, you should not
configure a VM with any other Intel baseline feature set.

Because all other sets have more CPU features than the Intel Merom generation feature
set, this configuration results in the VM failing to power on.
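The Merom example can be sketched as a simple ordering check. The generation list below is an
illustrative subset of Intel EVC baselines, ordered oldest to newest; treat both the list and
the `vm_can_power_on` helper as assumptions for illustration, not an authoritative catalog.

```python
# Sketch: per-VM EVC mode must not exceed the cluster EVC mode.
# The generation ordering below is an illustrative subset, oldest first.
INTEL_EVC_BASELINES = ["Merom", "Penryn", "Nehalem", "Westmere",
                       "Sandy Bridge", "Ivy Bridge", "Haswell", "Broadwell"]

def vm_can_power_on(cluster_mode: str, vm_mode: str) -> bool:
    """A VM powers on only if its baseline exposes no more CPU features
    than the cluster baseline (i.e., it is not a newer generation)."""
    return INTEL_EVC_BASELINES.index(vm_mode) <= INTEL_EVC_BASELINES.index(cluster_mode)

print(vm_can_power_on("Merom", "Merom"))   # True
print(vm_can_power_on("Merom", "Penryn"))  # False: Penryn exposes more features
```

This mirrors the rule above: a per-VM baseline newer than the cluster baseline fails the check,
which corresponds to the VM failing to power on.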

Cluster Enhanced vMotion Compatibility is not enabled in VMware Cloud on AWS.

Knowledge Check: Enhanced vMotion Compatibility


Why do you use Enhanced vMotion Compatibility with vSphere vMotion to migrate VMs?
(Select one option)

Prevent failure because of incompatible CPUs
Ensure disaster recovery
Automate workload balancing
Perform hardware maintenance without scheduled downtime

Enhanced vMotion Compatibility with VMware HCX

When VMware HCX is used, VM mobility works as follows:

• VMware HCX orchestrates per-VM Enhanced vMotion Compatibility for live migrations.
• VM mobility is possible across all supported Intel chipset generations.
• VM mobility is possible regardless of power cycles.

Content Library
When you first access your cloud SDDC, you spin up new workloads. To perform this task, you
must access the VM templates, ISO images, OVFs, and scripts that you use in your on-premises
data center.

You can onboard or share these objects with your new SDDC. The fastest and easiest way to
onboard content into the cloud SDDC is by using a content library.

A content library organizes and automatically shares your corporate OVF templates, ISO
images, and scripts across vCenter Server instances, including the instance running in your
new SDDC.

Creating a Content Library

To create a content library across your cloud, you take the following steps:

Content Library Uses

With a content library, you can perform the following actions:

• Deploy VMs from OVF templates

• Clone VMs to OVF templates

• Synchronize OVF templates

• Apply guest OS customization

• Support a native Virtual Machine Template (VMTX):


○ Deploy from VMTX
○ Clone to VMTX

• Mount ISO files from the content library

• Store other files, such as scripts

Knowledge Check: Content Library


What is the content library used for? (Select one option)

Standardize templates and ISO images across vCenter Server instances
Migrate VMs in a powered-on state
Set firewall rules for on-premises and cloud SDDCs
Ensure workload balancing

Single Management View


You have a single management view across your cloud SDDC and your on-premises data center:

• Supports both embedded and external deployments on-premises


• Maintains separate permissions between a cloud SDDC and on-premises data center
• Allows you to enable and disable linking

SDDC view in the vSphere Client

Migration Solutions for VMware Cloud on AWS


VMware Cloud on AWS supports the following migration solutions:

• VMware HCX
• Live migration
• Cold migration
• Content Library
• Advanced Cross vCenter vMotion
• Enhanced vMotion Compatibility

Automated Migration Operations

Additional configurations and tools are available to support VM migration operations in
VMware Cloud on AWS that are not dependent on Hybrid Linked Mode:

• PowerCLI Move-VM cmdlet:


○ Works across vCenter Server instances
○ Supports multi-NIC VMs
• Using a REST client with VMware Cloud on AWS
• vSphere SDK support
• Integration with compatible application platforms:
○ CloudFormation
○ Terraform
• Developer Center
○ VMware Cloud console API Explorer
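For example, the VMware Cloud on AWS REST API can be driven from any HTTP client. The sketch
below only builds the request and does not send it; the endpoint path and `csp-auth-token`
header follow the public VMC API, but verify both in the API Explorer before relying on them,
and note that the org ID and token shown are placeholders.

```python
# Sketch: build (but do not send) a VMware Cloud on AWS API request
# for listing the SDDCs in an organization. Verify paths in API Explorer.
VMC_BASE = "https://vmc.vmware.com/vmc/api"

def list_sddcs_request(org_id: str, access_token: str) -> dict:
    """Return the method, URL, and headers for GET /orgs/{org}/sddcs."""
    return {
        "method": "GET",
        "url": f"{VMC_BASE}/orgs/{org_id}/sddcs",
        "headers": {"csp-auth-token": access_token},
    }

req = list_sddcs_request("example-org-id", "example-token")
print(req["url"])  # https://vmc.vmware.com/vmc/api/orgs/example-org-id/sddcs
```

A real client would pass the returned pieces to an HTTP library and exchange a CSP API token
for the access token first; keeping request construction separate makes that logic easy to test.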

Knowledge Check: Migration Methods

You want to migrate powered-on VMs from your on-premises environment to a VMware Cloud
on AWS SDDC without affecting your business continuity. Which methods do you use? (Select
two options)

Hot migration
Cold migration
VMware HCX
Content Library
Advanced Cross vCenter vMotion

VMware HCX Overview
Wednesday, February 1, 2023 9:10 AM

Learner Objectives
After completing this lesson, you should be able to:

• Identify the benefits and use cases of VMware HCX


• Describe VMware HCX migration types

To build a hybrid and multi-cloud strategy, you must consider the best solution for achieving the following
goals:

• Operating across multi-cloud environments


• Migrating applications
• Extending your data center into the cloud

VMware HCX can help to achieve these goals.

VMware HCX supports a hybrid and multi-cloud strategy by providing a bridge between the
on-premises private VMware data center and your cloud SDDC, such as VMware Cloud on AWS,
Azure VMware Solution, or Google Cloud VMware Engine.

Example of using HCX between an on-premises data center and a VMware Cloud on AWS SDDC

Benefits of Using VMware HCX



Offers Bulk Migrations


You can perform zero-downtime live migrations and schedule large-scale warm migrations.

Accelerates Cloud Adoption


Applications can be migrated from on-premises data centers to cloud SDDC.

Employs WAN Connectivity


Intelligent and highly performant WAN connectivity addresses the day 0 and day 1 problem of
connectivity, which can be time-consuming.

Provides Multisite Interconnection


Connections are through a WAN-optimized, secured, load-balanced, and traffic-engineered network
extension.

Knowledge Check: VMware HCX Benefits


Which statement about VMware HCX multisite interconnection is true? (Select one option)

Connections are secure through a WAN-optimized, secured, load-balanced, and traffic-engineered
network extension.
Applications can be migrated from on-premises data centers to the cloud SDDC.
Perform zero-downtime live migrations and schedule large-scale warm migrations.

VMware HCX Use Cases


You can use VMware HCX to extend or migrate data centers in a range of scenarios.

Example of use cases in a VMware Cloud on AWS Environment

Hybrid Applications

Using VMware HCX in a production environment helps improve application performance.

Consider an example of a customer using VMware HCX and VMware Cloud on AWS for its production
environment.

On day 1, the customer moves its production booking system, application, and web tiers to VMware Cloud on
AWS. The customer experiences a performance improvement.

The core application runs on older hardware that uses an older vSphere version. But by moving to VMware
Cloud on AWS, the customer runs on newer hardware and a newer vSphere version.

The customer performs the move on day 1, creating its network bridge, stretching the web and application
networks, and migrating a live application.

Burst Capacity

You can use VMware HCX for burst capacity.

Consider an example of a customer using VMware HCX and VMware Cloud on AWS in its environment.

For example, a media company has regular development cycles in its existing on-premises VMware data center.
It runs out of capacity and does not want another purchasing cycle.

By spinning up an instance of VMware Cloud on AWS on-demand, the company can use VMware HCX to
consume excess capacity when required and remove it when it is not needed.

Bulk Migration

You can schedule and migrate several vSphere VMs in and across data centers without requiring a reboot.

Consider an example of an organization using VMware HCX and VMware Cloud on AWS in their environment.

An organization wants to migrate workloads from a legacy vSphere environment and other platforms to
VMware Cloud on AWS. It wants to drive large-scale migration and accelerate transformation (in months or

Workload Mobility Page 438


VMware Cloud on AWS. It wants to drive large-scale migration and accelerate transformation (in months or
weeks).

VMware HCX is an application mobility platform that is designed for simplifying application migration. After the
organization establishes hybridity between on-premises and the cloud, they can efficiently move workloads
without downtime.

Knowledge Check: VMware HCX Use Cases


A retail organization must extend the capacity of a production data center to the cloud so that it can meet
seasonal demand. Can the organization use VMware HCX in this situation, and why?

Yes, the core function of VMware HCX is to migrate workloads transparently between environments.
No, VMware HCX focuses on migrating workloads permanently to one or more SDDCs on premises.
Yes, the main function of VMware HCX is to expand capacity to cloud environments.

VMware HCX Infrastructure Highlights


VMware HCX abstracts on-premises and cloud resources. It creates infrastructure hybridity and presents the
resources to applications as one resource:

• The infrastructure hybridity provides a high throughput, low latency, layer 2 network extension, which is
WAN-optimized and load-balanced and provides traffic engineering with intelligent routing and fairness
for large migrations.

• The hybrid cloud is secured with Suite B encryption. This cloud can extend to multiple sites
and multiple clouds running different vSphere versions.

• VMs can securely and seamlessly migrate bi-directionally and in bulk. VMware HCX supports live vSphere
vMotion migration and warm bulk migration, with low downtime.

Example of VMware HCX infrastructure between an on-premises data center and a VMware Cloud on AWS SDDC

VMware HCX Migration Types

Bulk Migration

Bulk migration, or replication-based migration, uses the VMware vSphere Replication protocols to move the
virtual machines to a destination site.

Bulk migration has the following features and benefits:

• The source VM is online during replication.

• You can use this migration type for migrating large numbers of VMs in parallel.

• It is backward-compatible with VMware ESXi 6.0.

• It uses hypervisor replication (not vSphere Replication Appliance).

• Up to 100 concurrent migrations are possible.

• When the replica is ready, you can choose the switchover mode:
○ Immediate switchover as soon as the replica is ready at the destination
○ Scheduled switchover during a predetermined maintenance

How does bulk migration work?


• You can schedule a time for the switchover.

• VMs across multiple stretched VLANs begin replicating to the cloud SDDC (with
deduplication, compression, and WAN optimization).

• The VM runs at the source site until the failover begins. The service interruption with
bulk migration is equivalent to a reboot.

• The source VMs power down, remove themselves from inventory, perform a final
synchronization, and reboot on the cloud SDDC (batches of 15 at a time).

Bulk migration between an on-premises data center and a VMware Cloud on AWS SDDC; the
on-premises environment can be vSphere 6.0+, and the cloud can be the latest SDDC.
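The switchover behavior described above, with source VMs rebooting on the cloud SDDC in
batches of 15, can be illustrated with a simple batching helper (the function name is
illustrative, not an HCX API):

```python
# Sketch: group VMs queued for bulk-migration switchover into
# batches of 15, as described above.
def switchover_batches(vm_names: list, batch_size: int = 15) -> list:
    """Split the migration set into ordered switchover batches."""
    return [vm_names[i:i + batch_size] for i in range(0, len(vm_names), batch_size)]

vms = [f"vm-{n:02d}" for n in range(1, 41)]   # 40 VMs queued for migration
batches = switchover_batches(vms)
print([len(b) for b in batches])  # [15, 15, 10]
```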

Knowledge Check: Bulk Migrations


An organization wants to perform bulk migration of VMs at a specific interval. Which approach can help achieve
this goal? (Select one option)

Set a scheduled time to perform the migration


Because you cannot set a scheduled interval, wait for the required timeframe and perform live migrations
of the VMs
Move VMs one at a time at the required interval to avoid downtime

VMware HCX vMotion Migration


When you perform a VMware HCX migration using vSphere vMotion, you can migrate VMs from on-premises to
your cloud SDDC with zero downtime.

VMware HCX migration using vSphere vMotion provides the following features and benefits:

• Bidirectional migration with no vendor lock-in

• Compatible with vSphere 6.0+ (no upgrade required)

• Works across trust domain boundaries with multitenancy feature

• Bidirectional migration of virtual machines with zero downtime

• Software-based stretch layer 2 networks


• Migrates workloads into the cloud SDDC without impact to the application owner

• Provides disaster avoidance by quickly migrating VMs to the target site

• Incorporates SD-WAN technologies, including WAN acceleration, traffic management, and intelligent
routing

How does the vMotion migration work?

• VMware HCX vMotion can transfer a live VM from a VMware HCX-activated vCenter
Server to a VMware HCX-activated destination site (or from the VMware HCX
destination site towards the local site).

• The vMotion migration option is designed for moving a single VM at a time.

• The vMotion transfer captures the VM active memory, its execution state, its IP
address, and its MAC address.

• Migration duration depends on the connectivity, including both the bandwidth
available and the latency between the two sites.

vMotion (live) migration between an on-premises data center and a VMware Cloud on AWS SDDC

Approximately 150 Mbps of bandwidth is required to perform a migration with vSphere
vMotion.

You can move a running application to the cloud, on a stretched network, with no changes to the VM, and
maintain the existing security context.
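As a back-of-the-envelope illustration, migration duration scales with the amount of data to
move and the bandwidth available. This is only a sketch under idealized assumptions: real
durations also depend on latency, the VM's memory churn rate, and WAN optimization.

```python
# Sketch: rough transfer-time estimate for a single vMotion migration.
# Idealized: ignores latency, memory churn, and WAN optimization.
def estimate_transfer_seconds(data_gb: float, bandwidth_mbps: float) -> float:
    """Time to move data_gb gigabytes over a bandwidth_mbps link."""
    return data_gb * 8 * 1000 / bandwidth_mbps  # GB -> megabits -> seconds

# A VM with 16 GB of memory/state over a 250 Mbps link:
print(round(estimate_transfer_seconds(16, 250)))  # 512 seconds
```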

VMware HCX Cold Migration


Cold migration uses the same network path as VMware HCX vMotion to transfer a powered-off VM. During a
cold migration, the VM IP address and the MAC address are preserved. Cold migrations must satisfy the vSphere
vMotion requirements.

How does cold migration work?

The cold migration method uses the VMware Network File Copy (NFC) protocol. It is automatically selected
when the source virtual machine is powered off.

VMware vSphere vMotion and Cold Migration Requirements

Requirements for VMware HCX using vSphere vMotion and cold migration are as follows:

• HCX Interconnect tunnels must be up or active.

• HCX vMotion requires 100 Mbps or higher throughput capability.

• The VM hardware version must be version 9 or higher.

• The underlying architecture, regardless of OS, must be Intel x86.

• VMs with raw device mappings in virtual compatibility mode (RDM-V) can be migrated.

• VM restrictions for HCX vMotion must be honored.

Knowledge Check: Migration Types


Which migration method uses NFC protocol to migrate VMs? (Select one option)

Cold Migration
vMotion Migration
Bulk Migration

Replication Assisted vMotion


VMware HCX Replication Assisted vMotion is a transformative solution for VM mobility.

VMware HCX Replication Assisted vMotion combines advantages from VMware HCX bulk migration (parallel
operations, resiliency, and scheduling) with VMware HCX vMotion (zero-downtime VM state migration). It
simplifies the planning, execution, and operationalization of large-scale mobility to public or private clouds.

VMware HCX Replication Assisted vMotion provides the following benefits:

Large-Scale Live Mobility


Administrators can submit large sets of VMs for a live migration.

Switchover Window
Administrators can specify a switchover window.

Continuous Replication
After a set of VMs is selected for migration, VMware HCX Replication Assisted vMotion does the initial
syncing and continues to replicate the delta changes until the switchover window is reached.

Concurrency

Multiple VMs are replicated simultaneously. When the replication phase reaches the switchover window,
a delta vMotion cycle is initiated to do a quick, live switchover. Live switchover happens serially.

Resiliency
VMware HCX Replication Assisted vMotion migrations are resilient to latency and varied network and
service conditions during the initial sync and continuous replication sync.

Switchover
Large chunks of data synchronization by way of replication mean smaller delta vMotion cycles, which, in
turn, means that large numbers of VMs switch over in a maintenance window.

HCX Replication Assisted vMotion Requirements

Requirements for HCX Replication Assisted vMotion are as follows:

• VMware HCX Interconnect tunnels must be up/active.

• VMware HCX vMotion requires 100 Mbps or higher throughput capability.

• The VM hardware version must be Version 9 or higher.

• The underlying architecture, regardless of OS, must be x86.

• The Hybrid Interconnect, Bulk Migration, vMotion, and Replication Assisted vMotion services must be
activated and in a healthy state in the relevant service mesh.

• The resources to create, power on, and use the VM must be available in the destination environment.

• VMs must reside in a Service Cluster (defined in the Compute Profile).

Replication Assisted vMotion uses vSphere Replication whose potential throughput can vary
depending on the bandwidth available for migrations, latency, available CPU/MEM/IOPS, and disk read
speed.

How does Replication Assisted vMotion work?

• Replication begins with a full synchronization (replication) of the VM disks to the destination site.

• Migrated VMs enter a continuous synchronization cycle until a switchover is triggered.

• You can have the switchover process start immediately following the initial sync or delay the switchover
until a specific time using the scheduled migration option. If the switchover is scheduled, the
synchronization cycle continues until the switchover begins.

• The final delta synchronization begins when the switchover phase starts. During this phase, vMotion is
engaged for migrating the disk delta data and virtual machine state.

• As the final step in the switchover, the source VM is removed, and the migrated VM is connected to the
network and powered on.

• Replication Assisted vMotion creates two folders at the destination site. One folder contains the virtual
machine infrastructure definition, and the other contains the VM disk information. This is normal
behavior for Replication Assisted vMotion migrations and has no impact on the functionality of the VM at
the destination site.
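The Replication Assisted vMotion flow above can be summarized as an ordered sequence of
phases. The model below is illustrative only; the phase names and the `rav_phases` function
are not HCX terminology or APIs.

```python
# Sketch: the Replication Assisted vMotion phases described above.
def rav_phases(scheduled_switchover: bool) -> list:
    """Return the ordered phases a RAV migration passes through."""
    phases = ["initial full sync"]
    if scheduled_switchover:
        # Delta changes keep replicating until the switchover window opens.
        phases.append("continuous delta sync until switchover window")
    phases += ["final delta sync via vMotion (disk delta + VM state)",
               "remove source VM, power on migrated VM"]
    return phases

for phase in rav_phases(scheduled_switchover=True):
    print(phase)
```

Moving most data in the replication phases is what keeps the final vMotion delta small, which
is why large numbers of VMs can switch over within one maintenance window.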

VMware HCX OS Assisted Migration


Using VMware HCX OS Assisted Migration, you can migrate VMs from a non-vSphere environment to the cloud.
A Sentinel Gateway (SGW) appliance and a Sentinel Data Receiver (SDR) appliance can be used for
non-vSphere to vSphere virtual machine migrations.

How does OS-Assisted Migration work?

• The HCX OS Assisted Migration service uses the Sentinel software that is installed on Linux- or Windows-
based guest VMs to assist with communication and replication from their environment to a VMware
vSphere SDDC.

• Sentinel gathers the system configuration from the guest virtual machine and assists with the data
replication. The source system information is used by various HCX OS Assisted Migration service
processes.

• Sentinel also helps with the data replication by reading data that is written to the source disks and
passing that data to the SDR appliance at the destination site.

• Guest virtual machines connect and register with an HCX Sentinel Gateway (SGW) appliance at the source
site. The SGW then establishes a forwarding connection with an HCX Sentinel Data Receiver (SDR)
appliance at the destination vSphere site. You specify the network connections between the guest virtual
machines and SGW in the compute profile.

• You must install the HCX Sentinel software on each guest VM requiring migration to initiate the guest VM
discovery and data replication. After Sentinel is installed, a secure connection is established between the
guest virtual machine and the HCX SGW. HCX builds an inventory of candidates for migration as the
Sentinel software is installed on the guest virtual machines.

• Using the established connection between the SGW and SDR, replication connections are made between
the Sentinel software on the guest virtual machines and the SDR, with one connection each for control
operations and data replication.

Knowledge Check: Migration Type


1. Which migration method uses vSphere replication and vMotion in a single VM migration option? (Select
one option)

Replication Assisted vMotion


OS Assisted Migration

vMotion Migration
Cold Migration

2. An organization has VMs in a non-vSphere environment. They want to migrate the VMs to the cloud.
Which type of migration is the best solution in this scenario? (Select one option)

OS Assisted Migration
Replication Assisted vMotion
Cold Migration
vMotion Migration

Summary of Migration Types


The table summarizes the migration types:
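The decision points among these migration types can also be sketched as a simple chooser. This
is illustrative logic distilled from the descriptions above, not an HCX API; consult the HCX
documentation for edge cases such as concurrency limits and OS support.

```python
# Sketch: pick an HCX migration type from the descriptions above.
# Illustrative decision logic only; consult the HCX docs for edge cases.
def choose_migration_type(powered_on: bool, source_is_vsphere: bool,
                          vm_count: int, zero_downtime: bool) -> str:
    if not source_is_vsphere:
        return "OS Assisted Migration"     # Sentinel-based, non-vSphere sources
    if not powered_on:
        return "Cold Migration"            # powered-off VMs over the vMotion path
    if vm_count == 1 and zero_downtime:
        return "HCX vMotion"               # single live VM, zero downtime
    if zero_downtime:
        return "Replication Assisted vMotion"  # many live VMs, zero downtime
    return "Bulk Migration"                # parallel, reboot-equivalent interruption

print(choose_migration_type(True, True, 50, True))   # Replication Assisted vMotion
print(choose_migration_type(True, True, 100, False)) # Bulk Migration
```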

Demonstration: Using VMware HCX to Migrate VMs to VMware Cloud on AWS SDDC

Video Transcript

You use VMware HCX to migrate a VM from the on-premises environment to a VMware Cloud on AWS
environment.

1. Under Services, click Migration.


2. Click the Management tab and click MIGRATE.
3. Select the app-1a check box and click ADD.
4. Under Transfer and Placement, click Mandatory: Compute Container.
5. Select Compute-ResourcePool and click SELECT.
6. Under Transfer and Placement, click Specify Destination Folder.
7. Select Workloads and click SELECT.
8. Under Transfer and Placement, click Mandatory: Storage.
9. Select WorkloadDatastore and click SELECT.
10. Under Transfer and Placement, click the Migration Profile drop-down menu and select vMotion.
11. Click VALIDATE.
12. Click GO.

The migration task starts. It takes approximately 10 minutes to complete.
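The wizard selections above can be thought of as a declarative specification. The sketch below mirrors the UI labels as dictionary keys; it is illustrative only and does not correspond to an actual HCX API payload:

```python
# Illustrative only: wizard selections expressed as a declarative spec.
VALID_PROFILES = {"vMotion", "Bulk Migration", "Replication Assisted vMotion", "Cold Migration"}

def build_migration_spec(vm, compute_container, folder, datastore, profile):
    if profile not in VALID_PROFILES:
        raise ValueError(f"unknown migration profile: {profile}")
    return {
        "vm": vm,
        "compute_container": compute_container,   # Mandatory: Compute Container
        "destination_folder": folder,             # Specify Destination Folder
        "storage": datastore,                     # Mandatory: Storage
        "migration_profile": profile,             # Migration Profile drop-down
    }

# The selections from the demonstration above:
spec = build_migration_spec(
    vm="app-1a",
    compute_container="Compute-ResourcePool",
    folder="Workloads",
    datastore="WorkloadDatastore",
    profile="vMotion",
)
```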

Knowledge Check: VMware HCX Migration Types


When do you use each migration method?

VMware HCX Components
Wednesday, February 1, 2023 10:19 AM

Learner Objectives
After completing this lesson, you should be able to:

• Identify VMware HCX Components

VMware HCX is delivered as SaaS.

VMware HCX comprises a virtual management component at both the source and destination sites, and up to
five types of VMware HCX Interconnect service appliances depending on the HCX license.

VMware HCX services are configured and activated at the source site and then deployed as virtual appliances
at the source site, with a peer appliance at the destination site.

VMware HCX Components: HCX Manager, HCX Network Extension, HCX WAN Optimization, HCX-IX Interconnect

VMware HCX Components


VMware HCX has several components or solutions that support application migration, rebalancing of
workloads, and optimizing disaster recovery.



HCX Manager

• Provides the framework for the deployment of the VMware HCX service appliances
• Integrates with vCenter and uses existing SSO for authentication
• Supports actions against HCX Manager from the VMware HCX user interface or context menus
HCX-IX Interconnect

• Provides migration and cross-cloud vSphere vMotion capabilities over the Internet or private lines
• Provides suite-B encryption, traffic engineering, and VM mobility
HCX Network Extension

• Provides high performance L2 extension capability


• Capable of 4-6 Gbps of throughput per VLAN
• Can be scaled out to accommodate the extension of additional VLANs
HCX WAN Optimization

• Improves performance characteristics of Internet or private lines


• Uses data deduplication and line conditioning for performance that is closer to a LAN environment
• Provides an on-ramp to the cloud with no need to wait for Direct Connect (DX) or MPLS circuits

VMware HCX Component Requirements

Virtual hardware requirements for VMware HCX appliances apply for both the source and destination
environments.

HCX Appliance Virtual CPU Memory Disk Space / IOPS
HCX Manager 4 12 GB 60 GB
HCX Interconnect (HCX-IX) 8 3 GB 2 GB
HCX Network Extension (HCX-NE) 8 3 GB 2 GB
HCX WAN Optimization (HCX-WAN-OPT) 8 14 GB 100 GB / 5000 IOPS

When VMware HCX is used to extend networks in deployments using VMware NSX at the destination,
additional network extension (HCX-NE) appliances are required when extending more than 8 networks.

You should never use VMware HCX to extend the vSphere management network or other VMkernel networks
(for example, vMotion, vSAN, replication) to the remote site.
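The sizing table and the eight-networks-per-appliance rule above can be expressed as a small planning helper. This is a sketch based only on the figures in this lesson; `site_totals` and `ne_appliances_needed` are illustrative names, not VMware tooling:

```python
import math

# Sizing figures from the table above (per appliance, per site).
APPLIANCE_REQUIREMENTS = {
    "HCX Manager":  {"vcpu": 4, "memory_gb": 12, "disk_gb": 60},
    "HCX-IX":       {"vcpu": 8, "memory_gb": 3,  "disk_gb": 2},
    "HCX-NE":       {"vcpu": 8, "memory_gb": 3,  "disk_gb": 2},
    "HCX-WAN-OPT":  {"vcpu": 8, "memory_gb": 14, "disk_gb": 100},
}

NETWORKS_PER_NE = 8  # extending more than 8 networks requires extra HCX-NE appliances

def ne_appliances_needed(extended_networks: int) -> int:
    """HCX-NE appliance count for a given number of extended networks."""
    if extended_networks <= 0:
        return 0
    return math.ceil(extended_networks / NETWORKS_PER_NE)

def site_totals(appliances):
    """Sum per-site vCPU, memory, and disk for a list of appliance names."""
    return {
        key: sum(APPLIANCE_REQUIREMENTS[name][key] for name in appliances)
        for key in ("vcpu", "memory_gb", "disk_gb")
    }
```

For example, extending 9 networks needs 2 HCX-NE appliances, and a full service mesh with one of each appliance reserves 28 vCPUs per site.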

VMware HCX Manager


VMware HCX has two component services: HCX Cloud Manager and HCX Connector. These components work
together to provide VMware HCX services.

In cloud-to-cloud environments, you deploy HCX Cloud Manager at both the source and destination sites. In
legacy vSphere to cloud (private or public) deployments, you install HCX Connector at your on-premises or
legacy site and HCX Cloud Manager at the destination cloud site.

Both the source and destination sites have a management interface for HCX administration and HCX actions.
This management interface is called HCX Manager.

HCX Connector on the source site or HCX Cloud Manager on the destination site are often referred to
as simply HCX Manager.

HCX Connector and HCX Cloud Manager must have connectivity of the following types:

• With the peer manager for site pairing


• To connect.hcx.vmware.com (activation) and hybridity-depot.vmware.com (updates)

VMware HCX Connector Characteristics

HCX Connector is the central launch point for VMware HCX mobility services. HCX Connector has the following
characteristics:

• It is an OVA that must be deployed on the source site from where workloads are migrated
• It provides the job framework for multisite mobility operations
• Its GUI can be used to deploy other components, migrate virtual machines, and protect VMs

Knowledge Check: HCX Manager

True or False: HCX Connector and HCX Cloud Manager names are interchangeable.

True
False

HCX-IX Interconnect
The HCX-IX appliance provides VM mobility using vSphere Replication, vSphere vMotion, and NFC protocols.

The HCX-IX service appliance provides VM replication and vSphere vMotion based migration capabilities over
the Internet and private lines to the destination site, with strong encryption, traffic engineering, and virtual
machine mobility.

NFC (Network File Copy) is a proprietary VMware protocol that is used to transfer virtual disk data between
hosts, vCenter Server, and ESXi clients.

Knowledge Check: HCX-IX Interconnect


How does HCX-IX Interconnect provide VM mobility? (Select one option)

Using vSphere Replication, vSphere vMotion, and NFC protocols


Using vSphere Replication and vSphere vMotion
Using vSphere Replication and NFC protocols
Using vSphere vMotion and NFC protocols

WAN and Compression Optimization Capabilities


In VMware HCX, the WAN and compression optimization capabilities reduce bandwidth use and ensure the
best use of available network capacity to expedite data transfer to and from a cloud provider or site.

WAN Optimization Service

In VMware HCX, the WAN Optimization service performs the following functions:

• Data deduplication of redundant data (similar OS) reduces the amount of data sent across the WAN. It
eliminates redundant traffic patterns.

• WAN conditioning reduces the effects of latency.

• Use of forward error correction avoids packet loss scenarios.

The following steps show how the WAN Optimization service moves workloads faster and with less network
traffic than traditional methods.

Send VMDK
ESXi passes the source VMDK through military-grade encryption to the WAN Optimization appliance.

Deduplication and Compression
Data is deduplicated and compressed, and the transport is streamlined, resulting in a greater than 40%
improvement.

The WAN Optimization appliance communicates with the HCX Interconnect appliance, which sends the
VMDK, over either IPsec VPN or AWS Direct Connect, to the VMware Cloud on AWS SDDC.

Decompression
In the VMware Cloud on AWS SDDC, the source VMDK is decompressed and decrypted by the WAN
Optimization appliance.

The VMDK then passes through the hybrid cloud gateway and on to the ESXi host.
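The reason deduplication and compression help so much is that many VMs share largely identical guest OS blocks. The toy sketch below uses zlib purely to illustrate the effect; real HCX WAN Optimization operates at the transport level with its own deduplication engine:

```python
import zlib

# Toy illustration: zlib stands in for HCX's transport-level deduplication
# and compression. Many VMs share near-identical guest OS blocks, which is
# why redundant data shrinks dramatically on the wire.
def bytes_on_wire(payload: bytes) -> int:
    return len(zlib.compress(payload))

common_block = b"GUEST-OS-BLOCK" * 64   # a block shared by many VMs
payload = common_block * 10             # ten VMs' worth of redundant data

sent = bytes_on_wire(payload)
savings = 1 - sent / len(payload)       # fraction of traffic avoided
```

On this deliberately repetitive payload the savings far exceed the lesson's "greater than 40%" figure; real-world savings depend on how similar the workloads are.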

VMware HCX Network Extension

VMware HCX creates a network extension from the on-premises data center to the cloud SDDC.

Network extension from an on-premises data center to a VMware Cloud on AWS SDDC

You perform the following tasks when connecting from on-premises to a cloud environment:

• Build a layer 3 VPN and new networks on the cloud side.

• Test and certify the new networks with the help of security teams.

• Verify user access on the new networks.

• Evaluate operational tools and processes.

VMware HCX Network Extension Benefits

VMware HCX Network Extension has the following benefits:

• VMware HCX uses virtualization and abstraction so that you can use components on both the on-
premises site and the cloud to set up a secure bridge. Typically, you set up the bridge over the Internet
while waiting for the Direct Connect circuit to arrive.

• NSX network virtualization is part of the cloud SDDC but is not necessary for the on-premises side. The
VMware HCX virtual appliance provides everything that you require for the on-premises site.

• You can stretch layer 2 networks to the cloud. You do not need to create networks. You can bypass the
recertification process because no changes are made to the on-premises network and security.

• VMs can move to the cloud (and back again) without refactoring or IP address changes.



Knowledge Check: Network Extension
1. VMware HCX uses these two services to help you use components on both the on-premises and cloud to
set up a secure bridge. What are the two services?

• Virtualization
• Abstraction

2. How does VMware HCX Network Extension work? (Select one option)

You use components on both the on-premises site and the cloud to set up a secure bridge.
You require NSX network virtualization for the on-premises side.
You create layer 2 networks for the cloud side.
You must change IP addressing to allow VMs to move to the cloud (and back again).



VMware HCX Deployment
Thursday, February 2, 2023 8:16 AM

Learner Objectives
After completing this lesson, you should be able to:

• Deploy and configure VMware HCX appliances in a VMware Cloud on AWS SDDC
• Create site pairing
• Configure the service mesh
• Configure a network extension

Deploying VMware HCX Topology


The VMware HCX topology is one-to-one: a source on-premises vSphere environment and a destination
environment that is running on VMware Cloud on AWS.

VMware HCX Topology

Preparing to Install VMware HCX on VMware Cloud on AWS

When preparing to install VMware HCX on VMware Cloud on AWS, you perform the following
general steps.

1. Deploy an SDDC
2. Configure firewall access to the SDDC vCenter instance
3. Enable VMware HCX on the VMware Cloud on AWS SDDC
4. Enter the VMware Cloud on AWS SDDC [email protected] service account
credentials

Preparing to Install VMware HCX On Premises

1. On the management network, you identify three IP addresses for the following
components:
• HCX Manager
• HCX Interconnect
• HCX Network Extension
2. You identify one IP address on the vSphere vMotion network
3. You use a distributed virtual switch for the L2 extension (if using vSphere vMotion).
4. You require two VLANs:
• One VLAN for management network (cannot be stretched)
• One or more VLANs for workloads to be migrated
5. The required ports must be open for WAN connectivity.
6. The HCX Manager outbound firewall requirements are TCP port 443 to
connect.hcx.vmware.com and hybridity-depot.vmware.com.
7. The HCX Interconnect and HCX Network Extension outbound firewall requirements are
UDP port 500 and UDP port 4500.
8. After deployment, you log in to the VMware Cloud on AWS console to find the procured
public IPs for HCX Interconnect and HCX Network Extension appliances in the SDDC.

If firewall rules for IPsec traffic require the specific destination IP, the firewall rules must
be created after the deployment of VMware HCX on VMware Cloud on AWS.
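The outbound firewall requirements in steps 6 and 7 can be captured as data for rule auditing. The structure below is an illustrative sketch, not an HCX interface:

```python
# Outbound firewall requirements from the preparation steps above.
OUTBOUND_RULES = {
    "HCX Manager": [
        ("tcp", 443, "connect.hcx.vmware.com"),
        ("tcp", 443, "hybridity-depot.vmware.com"),
    ],
    "HCX Interconnect": [("udp", 500, "any"), ("udp", 4500, "any")],
    "HCX Network Extension": [("udp", 500, "any"), ("udp", 4500, "any")],
}

def required_ports(component: str) -> set:
    """(protocol, port) pairs that must be allowed outbound for a component."""
    return {(proto, port) for proto, port, _dest in OUTBOUND_RULES[component]}
```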

HCX Installation Workflow


When installing HCX, you use the same general workflow for the supported installation
scenarios.



Deploying VMware HCX to the VMware Cloud on AWS SDDC

Step 1: Prepare for the Deployment


Prepare the deployment configurations using Checklist B in Getting Started with VMware
HCX. Access the VMware HCX documentation at https://docs.vmware.com/en/VMware-
HCX/4.3/hcx-getting-started/GUID-EC721FC4-6120-4F05-8F07-7D61AEEF76B7.html.

Step 2: Open HCX

1. Log in to VMware Cloud on AWS and select View Details on the target SDDC.
2. Navigate to the Add Ons tab.
3. On the VMware HCX tile, click OPEN HCX. A new browser tab opens to
https://connect.hcx.vmware.com/.

Step 3: Deploy HCX

Click DEPLOY HCX to deploy VMware HCX to your SDDC.

During deployment, the VMware HCX Cloud components are deployed to the VMware
Cloud on AWS SDDC and the SDDC becomes an eligible VMware HCX target site.

Step 4: Add a Management Gateway Firewall Rule


Configure a firewall rule on the management gateway to allow the HCX Cloud Manager
(use the pre-defined HCX group as the destination) to receive inbound TCP 443
connections.

Deploying VMware HCX to the On-Premises Data Center

Step 1: Download the HCX Installer

You download the HCX installer on the source site, which is the on-premises data center:

1. Log in to your VMware HCX cloud console.


2. Under Administration, click System Updates.
3. Click REQUEST DOWNLOAD LINK.
4. Click the VMWARE HCX download link.

The HCX installer is the HCX Connector OVA file, which is used to deploy HCX Manager on
the source site.

Step 2: Deploy HCX Manager Using the HCX Connector OVA

To deploy the VMware HCX Connector OVA file:

1. Log in to the on-premises vCenter Server instance.


2. In the left pane of the vSphere Client, right-click the vCenter Server object, and from
the drop-down menu, click Deploy OVF Template.
3. Click UPLOAD FILES to import the OVA into the on-premises vCenter Server instance.
4. Proceed through the pages of the Deploy OVF Template wizard and record the
password entered for the admin and root user accounts on the Customize template
page of the wizard.
5. After the OVA template is successfully deployed, log in to HCX Manager at
https://<fqdn of appliance>:9443.

Step 3: Activate HCX Instance

When you are logged in to the HCX Manager, you are automatically prompted to activate
the VMware HCX instance:

1. Verify that the HCX server URL is https://connect.hcx.vmware.com.


2. Paste the activation key in the license key field.
3. Select ACTIVATE.

Step 4: Enter Location and System Details



1. Enter the city name for the location of the on-premises data center where HCX
Manager is deployed.
2. Click CONTINUE.
3. Review the HCX System Name and click CONTINUE.
4. In the next window, click Yes and CONTINUE.

Step 5: Connect to vCenter



1. In the Connect your vCenter window, enter the on-premises vCenter Server details
and click CONTINUE.
2. In the Configure SSO/PSC window (not shown here), enter FQDN https://<fqdn of
vCenter> for the vCenter Server instance.
3. Test the SSO URL in another tab to verify that it is correct.
4. Click RESTART to finish the configuration.

Step 6: Assign Roles



1. After the service restarts, select the Configuration tab in HCX Manager.
2. Select vSphere Role Mapping.
The default System Administrator role mapping is vsphere.local\Administrators for
both system administrators and enterprise administrators.
3. If your vSphere environment has a different SSO domain than the default
vsphere.local, update the role mapping with the correct role mapping details.

Creating a Site Pairing and Configuring a Service Mesh

Pairing source and destination sites is a requirement for creating a service mesh.

A site pair establishes the connection that is required for management, authentication, and
orchestration of VMware HCX services across a source and destination environment.
The multi-site service mesh activates the configuration, deployment, and serviceability of the
HCX Interconnect virtual appliance pairs.

A service mesh can be added to a connected site pair with a valid compute profile that is
created on both sites. Adding a service mesh initiates the deployment of VMware HCX
Interconnect virtual appliances on both sites.

Step 1: Add a Site Pair

Pair the on-premises site with the VMware Cloud on AWS SDDC.

A VMware HCX site pair establishes the connection needed for management,
authentication, and orchestration of VMware HCX services across a source and
destination environment.

Step 2: Create Network Profiles



A network profile is an abstraction of a distributed port group, standard port group, or
NSX logical switch, and the Layer 3 properties of that network.

A network profile is a subcomponent of a complete compute profile.

Create one or more network profiles. Network profiles are used for management, uplinks,
vSphere Replication, and vSphere vMotion traffic that is associated with a compute
profile.

A network profile specifies the port group, network range, gateway, and DNS settings for
a network that can be consumed by a compute profile.
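A network profile, as described above, is essentially a bundle of Layer 3 settings. A sketch of that shape follows; the field names are illustrative, not the HCX data model:

```python
from dataclasses import dataclass, field
from ipaddress import IPv4Network

# Sketch of the settings a network profile carries, per the description above.
@dataclass
class NetworkProfile:
    name: str
    port_group: str            # distributed/standard port group or NSX segment
    network: IPv4Network       # the IP range consumable by a compute profile
    gateway: str
    dns_servers: list = field(default_factory=list)

mgmt = NetworkProfile(
    name="management",
    port_group="DPG-Management",               # placeholder port group name
    network=IPv4Network("192.168.110.0/24"),   # placeholder range
    gateway="192.168.110.1",
    dns_servers=["192.168.110.10"],
)
```

In practice you create one such profile each for management, uplink, vSphere Replication, and vSphere vMotion traffic.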

Step 3: Create Compute Profiles

A compute profile defines the structure and operational details for the appliances that are
deployed for VMware HCX.



The compute profile also defines the source and destination infrastructure, resource pool, and
datastore placement and networks that are used by the VMware HCX appliances.

A compute profile contains the compute, storage, and network settings that VMware HCX uses
on this site to deploy the HCX Interconnect-dedicated virtual appliances when a service mesh is
added.

Create a compute profile in the Multi-Site Service Mesh interface in both the source and the
destination VMware HCX environments using the planned configuration options for each site,
respectively.

Step 4: Create a Service Mesh

A VMware HCX service mesh is the effective VMware HCX services configuration for a source
and destination site. A service mesh can be added to a connected site pair that has a valid
compute profile created on both of the sites.

Adding a service mesh initiates the deployment of VMware HCX Interconnect virtual appliances
on both of the sites. An interconnect service mesh is always created at the source site.

The service mesh defines the compute profiles, both local and remote, for deployment.

Knowledge Check: Service Mesh


1. Which components are used to create a service mesh? (Select two options)

Storage profile
Network profile
Compute profile
Host profile

2. Why do you configure site pairing? (Select one option)

To establish a connection for the management, authentication, and orchestration of
VMware HCX services across two sites: source and destination
To create layer 2 networks at the destination VMware HCX site and bridge the
remote network to the source network over a multi-gigabit-capable link
To create a secure optimized transport fabric between two sites managed by
VMware HCX
To migrate VMs from on-premises to your VMware Cloud SDDC with zero downtime

Extending Networks with VMware HCX


With VMware HCX Network Extension (HCX-NE), you can extend the virtual machine networks
to a VMware HCX remote site. Virtual machines that are migrated or created on the extended
segment at the remote site behave as if on the same L2 segment as virtual machines in the
source environment.

Extending Networks

You can use Network Extension to create layer 2 networks at the destination VMware HCX site
and bridge the remote network to the source network over a multi-gigabit-capable link. The
new stretched network is automatically bridged with the network at the source HCX data
center.

Step 1: Create a Network Extension

In the vSphere Client, select Services > Network Extension and click CREATE A
NETWORK EXTENSION.

Step 2: Select Source Networks



In the Extend Networks window, select the source networks for the extension to the
remote site and click NEXT.

VMware HCX with AWS Direct Connect

Prerequisites

Prerequisites for using VMware HCX with AWS Direct Connect are as follows:

• The AWS Direct Connect with a private virtual interface (VIF) is only supported on the
VMware Cloud SDDC that is backed by NSX networking.

• The SDDC must be configured to use the AWS Direct Connect private VIF.

• A private subnet that can be reached from on-premises over AWS Direct Connect with
private VIF is reserved for VMware HCX component deployments.

Configuring VMware HCX over AWS Direct Connect

To configure VMware HCX over AWS Direct Connect with a private VIF, you take the following
steps:

1. Deploy the HCX Manager in the SDDC and on-premises.


2. Pair the on-premises HCX Manager with the SDDC HCX Manager.
3. Contact VMware Cloud on AWS support:
• Inform them that you are configuring VMware HCX over AWS Direct Connect with a
private VIF.
• Provide support with the IP address range for the SDDC VMware HCX appliances.
4. Configure the host file on the on-premises HCX Manager so that the FQDN resolution of
hcx-sddc.xx-xx-xx-xx.vmwarevmc.com resolves to the private IP address of the SDDC HCX
Manager.
5. After support finishes the configuration, continue with the deployment of the HCX
Interconnect, WAN-OPT, and Network Extension services.
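Step 4's hosts-file override can be sketched as a small helper that formats the entry and sanity-checks that the address is private, as the Direct Connect private VIF requires. The IP address below is a placeholder, and the FQDN is the documentation's own placeholder pattern:

```python
from ipaddress import ip_address

# Sketch of the step 4 hosts-file override for Direct Connect private VIF.
def hosts_entry(private_ip: str, fqdn: str) -> str:
    if not ip_address(private_ip).is_private:
        raise ValueError("Direct Connect private VIF requires a private IP")
    return f"{private_ip}\t{fqdn}"

line = hosts_entry("10.10.20.5", "hcx-sddc.xx-xx-xx-xx.vmwarevmc.com")
```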



For more information about configuring VMware HCX for AWS Direct Connect private VIFs,
access the VMware HCX product documentation.

Knowledge Check: VMware HCX Installation


What is the process for installing VMware HCX for VMware Cloud on AWS?



Hands-On Practice: Deploying and Configuring VMware HCX
Thursday, February 2, 2023 8:54 AM

Learner Objectives
After completing this lesson, you should be able to:

• Deploy VMware HCX to the VMware Cloud on AWS SDDC and to the on-premises data
center
• Create a site pairing and service mesh between the VMware Cloud on AWS SDDC and the
on-premises data center.

Utilize HOL-2387-02-ISM in the VMware HOL

1. Deploy and Configure VMware HCX to the VMware Cloud on AWS SDDC
2. Deploy and configure VMware HCX to the on-premises data center
3. Create a site pairing and service mesh between the VMware Cloud on AWS SDDC and the
on-premises data center



Module Summary
Thursday, February 2, 2023 8:57 AM

Review the key concepts covered in this module:


• Hybrid Linked Mode is a version of Enhanced Linked Mode that is built for VMware Cloud
on AWS. With Hybrid Linked Mode, you can use a single vSphere Client interface to view
and manage the inventories of both your on-premises and VMware Cloud on AWS data
centers.

• Hybrid Linked Mode can be configured from the vCenter Cloud Gateway Appliance and
the VMware Cloud on AWS SDDC vSphere Client.

• The vCenter Cloud Gateway Appliance can be installed so that Hybrid Linked Mode can be
configured from an on-premises SDDC.

• VMware HCX is a flexible SaaS tool that is well-suited for bulk workload migrations.

• VMware Cloud on AWS has multiple migration solutions with specific requirements that
help to minimize the negative outcomes of cloud migration challenges.

• VMware HCX does not require NSX to create a network extension from the on-premises
data center to VMware Cloud on AWS.

• In VMware Cloud on AWS, you have restricted permissions on objects that VMware
manages.

Additional Resources

• For more information about VMware HCX, access the VMware HCX in VMware Cloud on AWS
chapter of the VMware HCX product documentation
at https://docs.vmware.com/en/VMware-HCX/4.3/hcx-user-
guide/GUID-90467C70-6D3B-411C-B056-16023ED2B839.html.

• For more information about migration solutions, access the VMware Cloud Migration
website at https://vmc.vmware.com/solutions/migration/overview.



Backup and Disaster Recovery Options
Thursday, February 2, 2023 9:03 AM

Learner Objectives
After completing this lesson, you should be able to:

• Describe the backup methods for VMs


• Compare disaster recovery methods

VM backup and disaster recovery (DR) methods are important parts of business continuity and
DR plans. Each method fulfills different objectives.

Disaster Recovery (DR)


Automate data transfer in the event of a disaster

Backup
Store copies of VM data in multiple environments as a recovery option

Backup ensures that your data is safe and recoverable.

Disaster recovery keeps your workloads available if an outage occurs.

Backing Up Virtual Machines


In the SDDC, you are responsible for performing regular backups of your workload VMs.

Backing Up VMs using Third-Party Backup Solutions

To protect workload data, you can use your preferred third-party backup tool.

Disaster Recovery Page 473


You can also use a backup solution based on VMware vSphere® Storage APIs -
Data Protection.

With vSphere Storage APIs - Data Protection, backup products can perform
centralized, efficient, off-host, LAN-free backups of VMs.

A backup product that uses vSphere Storage API - Data Protection can back up
VMs from a central backup system (physical or virtual system). The backup
does not require backup agents or any backup processing inside the guest
operating system.

Backing Up VMs in VMware Cloud on AWS

The backup solution must be considered closely when integrating with a VMware Cloud on
AWS environment. The solution might need to be redesigned, upgraded, or replaced.

Consider the following factors:

• Many backup vendor software providers, such as Veeam and Veritas, offer support for VMware Cloud
on AWS.

• Depending on the backup vendor-supported architectures, additional backup infrastructure and use of
AWS S3 storage might be required.

• Backup solutions that require VIBs or root access to the host servers are not supported by VMware
Cloud services.

• Process and technical runbooks must address gaps in managing a new backup approach.

Backing Up VMs Using a Native Cloud Backup Service

You can back up your workload VMs using the tools and services available from your cloud
service provider.



For example, if you are using VMware Cloud on AWS, you can back up your VMs using AWS
Backup, a native AWS backup service. AWS Backup lets you centralize and automate data
protection of VMs running in an on-premises data center and VMware Cloud on AWS.

With AWS Backup, you can configure backup policies from a central backup console, making it
easy to ensure that your application data is backed up and protected.

AWS Backup provides automated backup schedules, retention management, and life cycle
management. You can enforce your backup policies, encrypt your backups, protect your
backups from manual deletion, and report on backup activity from a centralized console.

While you are responsible for backing up your workload VMs, VMware is responsible
for backing up and restoring the management infrastructure, which includes VMware
vCenter Server®, VMware NSX® Controller instances, and VMware NSX®
Edge appliances.

Knowledge Check: Backup Process


True or False: In VMware Cloud on AWS, VMware provides backup and restore services for the
management infrastructure, which includes vCenter Server, NSX Controller instances, and NSX
Edge appliances. But the backup of workload VM data is not supported by this service.

True
False

Why is disaster recovery planning critical to business continuity?

Disasters with the potential to severely damage and disrupt organizations are becoming more frequent:

• Cyberattacks

• Natural disasters such as hurricanes, floods, wildfires, and earthquakes

• Equipment failures and power outages

Disaster recovery planning helps organizations regain access and functionality to their IT infrastructure
after such events.

VMware provides several disaster recovery as a service (DRaaS) solutions to help with disaster recovery.

About DRaaS

DRaaS is a cloud computing service model that organizations can use to back up their data and IT
infrastructure in a cloud computing environment, such as VMware Cloud on AWS.

DRaaS provides DR orchestration, through a SaaS solution, to regain access and functionality to IT
infrastructure after a disaster.

It addresses the most common challenges of DR: that it's complex, costly, and unreliable.

In addition, you don't have to manage and maintain your own DR site.

Video Transcript

As a modern business that continues to transform and become more digital, the need for
protection in the event of an outage grows. Technology and network failures, power
failures, natural disasters, and of course the growing threat of cyberattacks, including
ransomware, are putting you at risk if you don't have a disaster recovery solution in place
to protect you.

It's also important to remember that even after you moved to cloud, you still need DR.
Ask yourself: When was the last time you conducted a DR end-to-end test? Even if you
have DR in place today, odds are it requires operational time to manage and monitor,
costly capital investment in infrastructure, business interruption for testing and

Disaster Recovery Page 476


costly capital investment in infrastructure, business interruption for testing and
validation, and more regular inventory to avoid gaps.

Disaster Recovery as a Service is a great approach to eliminate the burden of managing a


DR infrastructure while creating the ability to scale and recover quickly from an incident.
DRaaS is a secure and flexible off premises, cloud-based disaster recovery solution that's
designed specifically for your VMware environment and works directly from your vSphere
console or centralized cloud interface. It provides fast, efficient, and secure disaster
recovery from on-premises to cloud, as well as cloud to cloud within our data center
regions.

With Disaster Recovery as a Service, you get safety and protection from disaster with
offsite storage, easy test and validation, and simplified prescriptive, or changeable
recovery options; Lower TCO that leverages existing VMware investments and skills,
reducing capital expenditure and retraining your operations; Speed and simplicity with
quick on-ramp and no new skills required.

To learn more about how DRaaS could be right for you, please visit our website.

DR Solutions

You can select a DR solution that best aligns with the criticality, recovery time objective (RTO),
and recovery point objective (RPO) requirements for your applications and workloads, and with
your organizational policies.

VMware Site Recovery Manager is a powerful solution for organizations who want to utilize a secondary
data center as their DR site.

VMware Cloud Disaster Recovery offers on-demand DRaaS to protect a broad set of IT services in a
cost-efficient manner, with fast recovery capabilities.

VMware Site Recovery delivers hot DRaaS for mission-critical IT services that require very low RPO and
RTO and all the benefits of vSphere Replication and Site Recovery Manager.

VMware DR Portfolio

The VMware portfolio of DR solutions can help you to achieve balance in your DR strategy. You
can implement elements of DRaaS or customer-managed models (on premises).

Comparing DR Solutions

This table presents the similarities and differences in the VMware DR solutions.



VMware Site Recovery
Thursday, February 2, 2023 9:29 AM

Learner Objectives
After completing this lesson, you should be able to:

• Explain how Site Recovery works


• Describe the function of Site Recovery Manager
• Deploy and configure Site Recovery on VMware Cloud on AWS

Site Recovery Manager Introduction


An organization has DR sites spread across on-premises and cloud data centers, including a
VMware Cloud on AWS SDDC.

The clients in the organization have diverse requirements:

• Some clients want to remove their DR data centers and move their DR sites to VMware Cloud on
AWS while still keeping their primary sites on premises.

• Other clients have an effective DR solution on premises and do not want to expand their existing DR
site capacity. But they want to explore using VMware Cloud on AWS, starting with a low-risk use
case. They will continue protecting some apps on their on-premises DR site but complement this site
with some DR protection in VMware Cloud on AWS.

How might the organization meet different customer requirements?

Site Recovery

Site Recovery is an optional, separately priced add-on for VMware Cloud on AWS.

A VMware Cloud on AWS SDDC can use the add-on to become the disaster recovery site for an
on-premises or cloud data center.

With Site Recovery, the organization can achieve its requirements.

Site Recovery supports fan-out topologies and is storage-agnostic, so clients do not have
to completely change their existing on-premises solution, such as storage-level
replication. Clients can gradually phase out on-premises DR data centers rather than
removing them all at once.

How Does Site Recovery Work?

Site Recovery provides disaster recovery as a service (DRaaS) for a data center failure by
replicating VMs between an on-premises and a cloud data center, such as VMware Cloud on
AWS.

Site Recovery sets up protected and recovery sites.

Site Recovery uses the host-based replication feature of VMware vSphere Replication and
the orchestration of Site Recovery Manager.

Several concepts help to explain how Site Recovery works.



Protected Site
The protected site provides business-critical data center services.

Recovery Site
The recovery site is where the protected VMs are recovered if a failover occurs.

Failover
When disasters occur, Site Recovery fails over workloads to the recovery site:

• Site Recovery performs a disaster recovery failover or a planned migration, and fails
back recovered VMs to the original site.

• With Site Recovery failover, minimal downtime occurs. Business operations can
continue with minimal to possibly no disruption.

Failback
After failover, and when the original site is available, you might want to move your
workloads back as soon as possible:

• Site Recovery provides one-click failback to simplify and automate this action.

• All workloads are migrated back to the original site by following the runbooks from
the original failover.

RTO (Recovery Time Objective)

The RTO is the targeted amount of time within which a business process must be restored
after a disaster or disruption to avoid unacceptable business consequences.

RPO (Recovery Point Objective)
The RPO is the maximum age of files recovered from backup storage for normal operations
to resume if a system goes offline as a result of a hardware, application, or
communications failure.

Knowledge Check: Site Recovery


Which statement most accurately describes how Site Recovery works? (Select one option)

A cloud SDDC can use the Site Recovery add-on as a disaster recovery site in an on-
premises or cloud data center.

A cloud SDDC cannot use the Site Recovery add-on as a disaster recovery site for an on-
premises data center.

Site Recovery is prebuilt with VMware Cloud on AWS, which helps VMware Cloud on AWS
become the disaster recovery site for an on-premises or cloud data center.

Site Recovery works with Site Recovery Manager and vSphere Replication 8.1 and later to
automate the recovery, testing, re-protecting, and failback of virtual machines.

Site Recovery Manager

Site Recovery Manager is a business continuity and disaster recovery solution that helps
to plan, test, and run the recovery of VMs between a primary site and a recovery site.

A preview of the Site Recovery Manager Appliance Management Interface Summary page.

The Site Recovery license key is part of the subscription to the service.

When you pair the Site Recovery Manager on-premises instance with the Site
Recovery Manager instance on VMware Cloud on AWS, Site Recovery uses the cloud
license.

vSphere Replication

vSphere Replication is a hypervisor-based, asynchronous replication solution.

Site Recovery uses vSphere Replication to move VM data between sites.

Data is replicated between sites with Site Recovery.

vSphere Replication works in the following ways:

• Uses any storage supported by vSphere. Storage arrays, similar or otherwise, are not
required at either site.

• Supports RPOs from 5 minutes to 24 hours.

vSphere Replication supports network traffic compression and file-system quiescing for
both Windows and Linux.
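As a sketch of that supported range, the following checks a proposed RPO value against the 5-minute-to-24-hour window described above (the function name and validation approach are illustrative, not part of any VMware API):

```python
from datetime import timedelta

# Supported RPO range for vSphere Replication, per the text above
RPO_MIN = timedelta(minutes=5)
RPO_MAX = timedelta(hours=24)

def validate_rpo(rpo: timedelta) -> timedelta:
    """Reject RPO values outside the range vSphere Replication supports.

    Illustrative helper only; the product UI enforces the real limits.
    """
    if not RPO_MIN <= rpo <= RPO_MAX:
        raise ValueError(f"RPO {rpo} outside supported range {RPO_MIN}-{RPO_MAX}")
    return rpo

validate_rpo(timedelta(minutes=15))   # accepted
# validate_rpo(timedelta(minutes=1))  # would raise ValueError
```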

Failover Topologies

Failover is the process of recovering an affected VM by failing over to its replica in the disaster
recovery site.

Site Recovery can be used in several failover topologies, depending on customer requirements,
constraints, and objectives.

Active-Passive

An active-passive failover topology includes a production site that runs applications and
services, and a secondary or recovery site that is idle until needed for recovery.

How does this active-passive topology work?

This common topology provides dedicated recovery resources. You pay for a site, compute
capacity, and storage that are not used most of the time.
Active-Active

In an active-active failover topology, Site Recovery can be used where low-priority workloads,
such as test and development, run at the recovery site and are powered off as part of the
recovery plan.

How does this active-active topology work?

Recovery site resources are used regularly, rather than being held in reserve, while
still providing sufficient capacity for critical systems if a disaster occurs.
Bidirectional

In a bidirectional failover topology, Site Recovery supports the protection of VMs in both
directions.

How does this bidirectional topology work?

This topology is used in situations where production applications are operating at both sites,
for example, VMs at site A are protected at site B, and vice versa.

Knowledge Check: Site Recovery Manager


What function does Site Recovery Manager perform? (Select one option)

Orchestrates recovery planning, testing, and running.


Provides replication technology to move VM data between sites.
Phases out on-premises DR data centers.

VM Failover: Network Customization


The most commonly modified VM recovery property is IP address customization. Many
organizations have different IP address ranges at protected and recovery sites.

When a VM is failed over, Site Recovery can automatically change the network
configuration (IP address, default gateway, and so on) of the virtual NICs.

For automation purposes, multiple network ports must be accessible for Site Recovery to
work.

Site pairing connects Site Recovery and vSphere Replication at each site.

For example, you can create an IP address customization rule that maps one range of IP
addresses to another. In this example, an administrator maps 10.10.10.0/24 to
192.168.100.0/24.
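The effect of such a rule can be sketched in a few lines with Python's standard ipaddress module. The map_ip helper below is purely illustrative (it is not how Site Recovery implements customization rules internally); it preserves the host portion of the address while swapping the network portion:

```python
import ipaddress

def map_ip(ip: str, src_subnet: str, dst_subnet: str) -> str:
    """Translate an address from the protected-site range to the
    recovery-site range by keeping the host offset within the subnet."""
    src = ipaddress.ip_network(src_subnet)
    dst = ipaddress.ip_network(dst_subnet)
    addr = ipaddress.ip_address(ip)
    if addr not in src:
        raise ValueError(f"{ip} is not in {src_subnet}")
    host_offset = int(addr) - int(src.network_address)
    return str(dst.network_address + host_offset)

# The rule from the example: 10.10.10.0/24 -> 192.168.100.0/24
print(map_ip("10.10.10.25", "10.10.10.0/24", "192.168.100.0/24"))  # 192.168.100.25
```

A VM with address 10.10.10.25 at the protected site would come up as 192.168.100.25 at the recovery site.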

Network Ports
For information about the list of network ports for VMware Site Recovery, access the Site
Recovery Installation and Configuration documentation.

Deploying Site Recovery


You want to deploy Site Recovery in your VMware Cloud on AWS SDDC. To start, you must
consider compatibility.

VMware Cloud on AWS Compatibility

Site Recovery deploys Site Recovery Manager version 8.x in your VMware Cloud on AWS SDDC.
Other compatibility considerations include:

• Site Recovery is compatible with the following vCenter Server and ESXi versions:
○ On-premises vCenter Server versions 6.0 U3 and later, including version 7.0
○ On-premises ESXi version 6.0 U3 and later, including version 7.0

• Site Recovery Manager version 8.3 is compatible with version 8.2 installed on-premises.

• Site Recovery Manager version 8.2 is the latest version to support vSphere version 6.0 U3.

• Site Recovery Manager version 8.3 supports only vSphere version 6.5 and later.



Site Recovery Manager and vSphere Replication do not support coexistence with earlier versions. Before
installing Site Recovery Manager and vSphere Replication, you must uninstall the earlier versions.
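The version rules above can be encoded as a small lookup table for a pre-deployment sanity check. The table below is a simplified, illustrative reading of these notes; always confirm against the official VMware Product Interoperability Matrix:

```python
# Simplified, illustrative encoding of the compatibility notes above.
# Maps a Site Recovery Manager release to the minimum vSphere version it supports.
SRM_MIN_VSPHERE = {
    "8.2": (6, 0),   # SRM 8.2 is the latest version to support vSphere 6.0 U3
    "8.3": (6, 5),   # SRM 8.3 supports only vSphere 6.5 and later
}

def srm_supports(srm_version: str, vsphere_version: tuple) -> bool:
    """Return True if the on-premises vSphere version meets the minimum
    required by this Site Recovery Manager release (sketch only)."""
    return vsphere_version >= SRM_MIN_VSPHERE[srm_version]

print(srm_supports("8.3", (6, 0)))  # False: SRM 8.3 dropped vSphere 6.0 support
print(srm_supports("8.3", (7, 0)))  # True
```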

Compatibility Requirements
For more information on compatibility requirements, access the Site Recovery Release Notes.

Site Recovery Integration with On-Premises Components

Site Recovery works with Site Recovery Manager and vSphere Replication version 8.1 and later.

Site Recovery Manager 8.x supports the following editions of vSphere:

Current Product Version


For information about current product version interoperability, access the VMware Product
Interoperability Matrix compatibility guide.

Service Delivery

Site Recovery is an add-on for VMware Cloud on AWS. As such, it has the following
characteristics:

• VMware delivers, sells, supports, and maintains the Site Recovery add-on.

• Site Recovery does not require special agents.

• The Site Recovery service places individual VMs and applications in application
consistency groups.

Site Recovery Manager version 8.1.2 is the first version to support VMs that are
attached to NSX-T Data Center installations.

Prerequisites for Site Recovery

You must ensure that the following prerequisites are met:

• Do not use non-default plug-in identifiers because they are not supported.



• Verify version compatibility between on-premises installations and VMware Cloud on
AWS.

• Verify that network communication succeeds between on-premises vSphere and VMware
Cloud on AWS.

Deploying VMware Site Recovery

A popular scenario for setting up VMware Site Recovery in your environment is to deploy an on-
premises protected site and a VMware Cloud on AWS SDDC recovery site.

The Site Recovery deployment process for this scenario is outlined as follows:

Through a series of demonstration videos, you can explore how these steps are performed.

Demonstration: Installing and Configuring Site Recovery

This video describes the process of enabling the Site Recovery service, installing the on-
premises components, configuring the VMware Cloud on AWS firewall, pairing the sites, and
configuring mappings.

Installation and Configuration of VMware Site Recovery for VMware Cloud on AWS



Video Transcript

In this demo, I'm going to walk you through the installation and configuration of VMware
Site Recovery for VMware Cloud on AWS. That process is going to include activating the
service, downloading the components that we need for on-premises, configuring the
firewall, installing vSphere Replication, installing Site Recovery Manager, and then
configuring vSphere Replication. So let's get started.

Start with activating the service. This is going to, in the background, install and configure
Site Recovery Manager and vSphere Replication within the VMware Cloud on AWS
environment. That's including IP addresses and all related configuration, certificates,
things like that. While that process is going on, that's a fully automated process, we can
download the components that we're going to need for our on-prem installation. That
would be vSphere Replication and Site Recovery Manager.

After we get those components downloaded, we can configure the firewall. The firewall
configuration consists of four rules that need to be entered. Those rules are documented
both in the documentation for VMware Site Recovery, as well as in the inline help for
VMware Site Recovery.

Once those rules are created, we would move on to installing vSphere Replication. This is
done with an OVF and is pretty much a standard OVF installation. So it consists of
selecting the OVF, selecting where we're going to install that, accepting license
agreement, putting what datastore we're going to install it in, what network we're going
to connect it to, NTP servers, passwords, networking properties, like gateways, domain
names, DNS servers, the IP address for the appliance and the mask.

While that process completes, we'll move on to installing Site Recovery Manager. Site
Recovery Manager is installed on a Windows server and is again a very basic installation,

mostly consists of clicking Next, typing out a little bit of information as needed, selecting a
database. I would highly encourage you to use the embedded database that's supported
and included with Site Recovery Manager. And all in all, that installation should take no
more than five to 10 minutes to get running and go through.

After that's completed, we can finish configuring vSphere Replication. And that's going to
consist of changing a couple of things, and putting in a password, and then saving and
restarting services. Once that's complete and our services are running, we'll be able to
access the VMware Site Recovery console through vCenter. And now we're ready to pair
up our sites.

So, to pair up our sites, we're going to select the vCenter, enter in the PSC name for our
VMware Cloud on AWS instance, username and password, select that vCenter, select the
services that we want to pair, and we're done.

Got the sites paired up, log in, we can see details about the Site Recovery Manager pairing
as well as the vSphere Replication pairing. And now we just need to configure our
mappings.

So first off, we'll do network mappings. Go ahead and select the single network that we
want to map to our VMC on AWS instance and add that. Then we'll configure it in
reverse. Configure our test network and we're good to go. Move on to our folders. We're
going to use the automatic mapping here and just select the top level, and it's going to
match up anywhere where it finds the same name. Go ahead and configure those in
reverse. And we're ready to complete that. Lastly, for mappings, we'll configure our
resource mappings. Go ahead and set that up. Select our cluster at our on-prem location
and our compute resource pool within VMC on AWS, configure it in reverse, configure
our placeholder datastore.

At this point, we are ready to start protecting VMs.

This concludes our demonstration of installing and configuring VMware Site Recovery.

Demonstration: Replicating and Protecting VMs

The video demonstrates the process of replicating and protecting VMs in VMware Cloud on
AWS.

It describes how to use the HTML5 UI to replicate and protect VMs, including steps such as
selecting replication settings, adding VMs to a new or existing protection group, and adding a
protection group to a new or existing recovery plan.

Replicating and Protecting VMs with VMware Site Recovery for VMware Cloud on AWS | vSAN



Video Transcript

In this demonstration, I'm going to walk you through replicating and protecting virtual
machines using VMware Site Recovery for VMware Cloud on AWS. Let's get started.

We start at our on-premises location, select our payroll application, which consists of 10
VMs. Right-click on those VMs. Select Site Recovery actions and configure replication.
That's going to take us to our VMware Site Recovery window, where we can confirm those
10 VMs that we selected, confirm that we want those replicated to our VMC environment.
Select our datastore, in this case we want the WorkloadDatastore, change our replication
settings as needed, things like RPO, guest quiescing.
Next, we can either add our VMs to an existing protection group or create a new
protection group for them. In this case, we're going to create a new protection group. We
also have the same option when it comes to recovery plans, either adding them to an
existing recovery plan or creating a new recovery plan. We'll create a new recovery plan
and then we'll navigate to the VMware Site Recovery window, where we can monitor the
status of replication.

And once they're complete, we can take a look at our protection groups, confirm that that
payroll protection group was created and contains all 10 of those VMs, and that we have a
recovery plan that contains all 10 of those VMs because it contains that protection group.

This concludes our demonstration of protecting VMs with VMware Site Recovery.

Demonstration: Failing Over

This video shows an example of using Site Recovery to fail over from a customer site to VMware
Cloud on AWS. The video also describes how to run a re-protect, recovery plan test, and
failback using planned migration to return workloads to the on-premises data center after the
disaster has passed.


VMware Site Recovery To Failover From Customer Site To VMware Cloud on AWS | vSAN

Video Transcript

In this demo, I'm going to walk you through using VMware Site Recovery to fail over from
a customer site to VMware Cloud on AWS, and then reprotect and fail back.

This picture shows our current situation. Our VMs are protected at our customer site
replicating over to VMware Cloud on AWS. Here we are in our on-premises environment,
our customer environment. These are the 10 VMs that we're protecting that are part of
our payroll application, and we can see that all 10 of them are running.

If we look in our VMware Cloud environment, we see that all 10 of those VMs are
protected. That icon that is highlighted there indicates that that is protected by VMware
Site Recovery. And if we look at our VMware Site Recovery panel, we can see that our
current configuration is paired, connected. Everything's ready to go. We have a protection
group specifically for our payroll application. And we can see that all 10 of our payroll VMs
are protected. And then we also have a recovery plan that contains that protection group.

Our environment has now just experienced an outage. We can no longer connect to our
on-prem environment, and we need to get that payroll application failed over to VMware
Cloud on AWS as quickly as possible. And just to confirm that our situation, if we connect
into the VMware Site Recovery panel within VMware Cloud on AWS, we can see that our
VMware Cloud on AWS environment is connected, but our on-premises site is in an
unknown and disconnected state. And we're generating errors that are showing that
we've lost that connectivity to our on-prem environment.

So, just giving you an idea visually of what that looks like. Our customer site has failed. We

now need to fail over those VMs into VMware Cloud on AWS, move those VMs over and
get them powered on. To do that, we are going to go into our recovery plans, specifically
for our payroll application, that's the most important one. That's the one that we need to
get up and running as quickly as possible in this situation. Check that box, confirming that
we understand this is going to change things and go through with that process.

One of the things you might have noticed is that the option for running a planned
migration was grayed out. And the reason for that is because our sites are disconnected.
We don't have connectivity between our two sites. What that means is that our recovery
plan is going to run in a slightly different way.
And you'll see that in the steps that it's running through, skipping our pre-synchronizing
storage, and then it’s actually generating an error because it's not able to shut down VMs
at our protected site, and it's not able to prepare those protected site VMs for migration
also. Those two things do not impact the recovery of those VMs, but they do create an
issue for when we want to fail back.

We're running through that recovery process right now. We're getting those VMs
powered on. Just a second here, you'll see that we've succeeded with that. If we look up
at the top, we can see that our recovery has completed. However, there are some errors
and warnings. Part of that is that it wasn't able to shut down those VMs because the sites
aren't connected.

We can now see that within our VMware Cloud on AWS environment, those 10 payroll
VMs are running. Our payroll application is now up. Payroll is able to run our company
payroll, don't have an issue there. We've now restored connectivity to our original
production site.

At our original production site, we can see that our original VMs are still powered on. Now
what we need to do is we need to run our recovery plan another time. We need to do this
in order to clean up any of the issues that were left over from having to run the recovery
plan while the sites were disconnected.
The thing that that's going to take care of is things like shutting down those VMs that
were at that protected site, and preparing them for migration. That will put us in a
position of being ready to reprotect our VMs and get them ready for testing or failback.

The next thing we're going to do is reprotect, and all that this is doing is reversing
replication and protection. So, getting things configured so that we are able to fail back to
our original site or run a test of that failback to our original site. You can see that reflected
in these steps here.
We're just going to configure storage in the reverse direction, and then configure
protection in the reverse direction. Now that we've done that, we see that we have,
again, our VMs running in our VMware cloud environment. We now see that our VMs in
our customer environment are ready, are protected as well. They're showing those
placeholder icons.

The next thing we're going to do is run a test of our recovery plan. A test of the recovery
plan is going to allow us to verify that our recovery plan is going to work the way that we
expected it to, if we need to use it. It's going to do that non-disruptively. When we run a
test of the recovery plan, this is completely non-disruptive to both storage and

networking. Nothing changes as far as our replication RPOs, we're not impacting
replication, we're not impacting networking traffic. The way that we do that is through
isolating the network and taking a snapshot of the storage.

You can see that we've run through that process. Now you can see that the VMs at our
customer site have that placeholder icon and are powered on. You can see that they're
connected now to our test network instead of to our production network, which is how
we're keeping them network-isolated.
Now that we've completed our test successfully, the next thing that we'll want to do is
clean that test up. The reason that we would do that is that would just allow us to run an
additional test if we wanted to or run a fail over as needed.

That process is now completed. Now the next step for our failback would be to actually
run that planned migration and move our VMs from VMware Cloud on AWS back to our
customer site.
We'll go ahead and run that workflow. You'll see that the option that we're going to select
is that planned migration option. We're not in a disaster recovery situation now. The
difference between a planned migration and disaster recovery is that in planned
migration, if we hit any errors along the way, the plan would stop and give us the
opportunity to fix those before it moved on to the next step. Compare that with disaster
recovery, where it's going to just keep running, because the idea is to get you up. It's a
disaster, so you want to get up and running as quickly as possible.

Our recovery completed successfully, our VMs are back running again at our customer
environment. You can see there now just normal VMs, no special icons anymore. These
are our production VMs.
Now the thing that we need to do is again run that reprotect so that our VMs are
protected in VMware Cloud on AWS. If we have another failure at some point in the
future, again, we are protected. We've already seen this workflow before. We'll run
through this just really fast. Now that workflow has completed.

Now we're ready at this point to run another test. Any time we make a major change in
our environment, it's a good idea to run a test to verify that things are acting and
behaving the way that you would expect them to. We can see that our placeholder VMs
are in our VMware Cloud environment and our regular VMs are in our customer
environment.

Everything is as it should be prior to running that test. We'll go ahead and kick that test
off. That option that you see there for replicating recent changes just allows you to run
that test in two different ways. We can run it in a way where we simulate a disaster,
which would be not replicating those changes. Or, we could run it in a way where we're
simulating a planned migration, which would be where we would replicate those changes.
It just gives you a couple of different options for how you want to run that test.
The test completed successfully, and we can see our test VMs are running in VMware
Cloud on AWS, and we can see that they're using the SRM-generated port groups within
VMware Cloud on AWS to keep that network traffic isolated.

After we test and verify that our application works as we expect it to, now we can run a
cleanup. And all that that's going to do is it's just going to power off those VMs, delete

those snapshots, basically get everything set back up so that we can run another test or
run a failover as needed.
This concludes our demonstration of using VMware Site Recovery to fail over and fail back
from VMware Cloud on AWS.

Multiple Site Recovery Instances

You can deploy multiple instances of Site Recovery in a VMware Cloud on AWS SDDC.

All instances of Site Recovery are activated and deactivated independently.

• A Site Recovery Manager appliance is deployed for each instance.

• The vSphere Replication appliance is deployed only once.

Multiple instances support fan-in, fan-out, and other multisite topologies.

VMware Cloud on AWS interface with multiple Site Recovery instances

With multiple Site Recovery instances, you can perform the following actions:

• Connect a single VMware Cloud on AWS SDDC to multiple on-premises sites and to other
VMware Cloud on AWS SDDCs for disaster recovery purposes.

• Pair up to 10 remote sites with a single SDDC.

• Recover VMs from multiple protected sites to the same VMware Cloud on AWS SDDC, or
recover different sets of VMs from a single VMware Cloud on AWS SDDC to multiple
recovery sites.

• Use custom extension IDs.


You use the default extension ID with only one instance of Site Recovery.

You can apply other complex multisite topologies, but you must establish network
connectivity between the remote sites and the shared VMware Cloud on AWS SDDC.
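The pairing limit mentioned above can be illustrated with a small pre-check. The function and site names below are hypothetical; the service itself enforces the real limit:

```python
MAX_PAIRED_SITES = 10  # up to 10 remote sites can be paired with a single SDDC

def can_pair(existing_pairings: list, new_site: str) -> bool:
    """Check the pairing limit before adding another remote site.

    Illustrative sketch only; not a VMware Cloud on AWS API.
    """
    return new_site not in existing_pairings and len(existing_pairings) < MAX_PAIRED_SITES

sites = [f"onprem-site-{i}" for i in range(10)]   # SDDC already paired with 10 sites
print(can_pair(sites, "onprem-site-11"))          # False: at the limit
print(can_pair(["onprem-site-1"], "onprem-site-2"))  # True
```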

Multisite Topologies
For more information about multisite topologies, access the Site Recovery documentation.

Knowledge Check: Deploying Site Recovery



Which steps do you take to deploy Site Recovery for VMware Cloud on AWS?



VMware Cloud Disaster Recovery
Thursday, February 2, 2023 10:34 AM

Learner Objectives
After completing this lesson, you should be able to:

• Recognize benefits of VMware Cloud Disaster Recovery


• Configure VMware Cloud Disaster Recovery in a VMware Cloud on AWS SDDC

An organization faces several challenges related to disaster recovery (DR):

• After a recent fire, one of its two data centers is destroyed.

• The organization requires an immediate disaster recovery solution for its on-premises
infrastructure.

• The organization requires a long-term, affordable solution for migrating live VMs to a
cloud environment if a disaster occurs.

What type of DR solution is appropriate for this organization?

To meet its requirements, the organization can use VMware Cloud Disaster Recovery.

VMware Cloud Disaster Recovery

VMware Cloud Disaster Recovery is an easily accessed, on-demand disaster recovery
solution, which is delivered as a SaaS solution, with cloud economics.

VMware Cloud Disaster Recovery Introduction



Video Transcript

The problem with traditional disaster recovery is that it's expensive, complex and
unreliable because data center failover touches many different components from
applications and servers to networking and storage. It ends up being a very complex and
manual process. VMware Cloud Disaster Recovery is transforming DR.

For all VMware workloads, with its on-demand-DR delivered as an easy-to-use SaaS
solution with cloud economics, we've converted a complex DR process into an easy-to-use
SaaS product. And you only pay for compute resources when you test or when disaster
strikes, which is exactly how cloud-based DR should be: Elastic, pay as needed.

VMware Cloud Disaster Recovery provides simple disaster recovery, combined with cloud
economics, in the event of ransomware attacks, power outages, or natural disasters.
VMware Cloud Disaster Recovery keeps VMs in their native format, eliminating brittle VM
conversions that slow down recovery and make failback a nightmare.

So how does it work? Through a simple UI, you set protection policies and DR runbooks.
Replicas can be created every few hours, multiple times per day, whatever frequency
makes sense for your business. These replicas are then encrypted and stored in their
native VM format in the cloud. Compliance checks automatically run every 30 minutes to
ensure your DR plan works when you need it. Non-disruptive recovery tests can be run as
frequently as desired to reduce risk.

When disaster strikes, just click a button to fail over to the cloud. VMware Cloud DR
automatically provisions SDDC on VMware Cloud on AWS. The stored replicas, which
could be hours, days, or weeks old, are instantly powered on via an NFS datastore
mounted by ESXi hosts in that SDDC, resulting in fast recovery. And there's no learning
curve for your IT team. During the disaster, they can use the same vCenter tools to
manage their cloud DR site. The last thing you want to do in the middle of a disaster is

learn an entirely new set of cloud management tools.

Once the disaster is over, failback is simple too. With a click of a button, deduplicated
changed data is compressed, encrypted, and automatically sent back to the production data
center, which minimizes egress charges. This combination of on-demand compute and
efficient cloud storage delivers low total cost of ownership.

You get everything you need for on-demand cloud DR in a single SaaS solution, with top-
of-the-line support from VMware.

What benefits does VMware Cloud Disaster Recovery provide?

Consistent and Familiar Operations
You can manage both production and DR sites with vCenter Server because you retain access
to familiar VMware vSphere abstractions.

SaaS-Based Management
VMware Cloud Disaster Recovery provides simplified DR maintenance operations and software
life cycle management.

Built-In Audit Reports
VMware Cloud Disaster Recovery provides audit-ready automated reports that meet
compliance objectives.

Continuous DR Health Checks
VMware Cloud Disaster Recovery provides automated checks every 30 minutes.



Use Case: Ransomware Recovery

To help fight ransomware attacks, you can use VMware Cloud Disaster Recovery to create
secure remote backups of critical data through regularly scheduled application consistent
snapshots of VMs and files.

If a ransomware attack occurs, you can go back in time to a moment before the attack
happened and recover snapshots from months or years ago. You can use these snapshots to
rebuild your VMs and computing environment in a recovery SDDC deployed on VMware Cloud
on AWS.

VMware Cloud Disaster Recovery was designed for its systems and repository to be
operationally isolated (known as operational air-gapping) and for instantiating isolated recovery
environments.
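The idea of recovering to a known-clean point can be sketched as picking the newest snapshot that predates the attack. The snapshot catalog and helper below are hypothetical, for illustration only:

```python
from datetime import datetime

# Hypothetical catalog of replicated snapshot timestamps
snapshots = [
    datetime(2023, 1, 30, 2, 0),
    datetime(2023, 1, 31, 2, 0),
    datetime(2023, 2, 1, 2, 0),
    datetime(2023, 2, 1, 14, 0),
]

def snapshot_before(attack_time: datetime) -> datetime:
    """Pick the most recent snapshot taken before the attack, i.e. the
    newest known-clean recovery point (sketch only)."""
    clean = [s for s in snapshots if s < attack_time]
    if not clean:
        raise LookupError("no snapshot predates the attack")
    return max(clean)

attack = datetime(2023, 2, 1, 9, 30)
print(snapshot_before(attack))  # 2023-02-01 02:00:00
```

Here the ransomware strikes at 09:30 on February 1, so the 02:00 snapshot from that morning is the newest clean recovery point; the later 14:00 snapshot would be contaminated.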

Ransomware Recovery Guide


This comprehensive guide discusses the recovery measures you need to fight ransomware using
VMware Cloud Disaster Recovery.

Operational Air Gapping in the Fight Against Ransomware


This blog describes how the VMware Cloud Disaster Recovery design and components deliver
operational air-gapping, which is a key element of a successful ransomware recovery
implementation.

VMware Cloud Disaster Recovery Overview

Deploying and using VMware Cloud Disaster Recovery involves installing cloud service
components to connect the production site to the recovery site. In the example, a
production site is connected to a recovery target site (VMware Cloud on AWS) through
cloud-based services.

You can manage both production and DR sites with vCenter Server because you retain
access to familiar vSphere abstractions. The DR site can access the vSphere tools at the
production site.

VMware Cloud Disaster Recovery Components

VMware Cloud Disaster Recovery consists of several components.

Production Site

The production site to be protected can be any of your current vSphere clusters.

You can use any of your VMFS, vSAN, or vSphere Virtual Volumes datastores in the production
site.

Connectors and Replication



During the steady-state, day-to-day operations of VMware Cloud Disaster Recovery, the DRaaS
Connector provides regular snapshot protection and replication to the cloud backup that is
located on the scale-out cloud file system (SCFS).

Cloud-Based Services

VMware Cloud services include the SaaS orchestrator and the SCFS.

The connectors are configured by a SaaS-based management console.

With these components, you can configure protection for the on-premises infrastructure.

Recovery Site

The recovery site is created immediately before a recovery is performed. It does not need
to be provisioned to support replication in the steady state. This site is also called the
failover site.

You can use any of your VMFS, vSAN, or vSphere Virtual Volumes datastores in the
recovery site.



Knowledge Check: VMware Cloud Disaster Recovery
When using VMware Cloud Disaster Recovery, how do you manage both the production and DR
site? (Select one option)

Using the vCenter Server


With the SDDC clusters at the target site
With the DRaaS Connector

Disaster Recovery Deployment Types


VMware Cloud Disaster Recovery has two deployment types for the recovery site.

On-Demand Deployment

With an on-demand deployment, the recurring costs of a cloud DR site are eliminated in
their entirety until a failover occurs and cloud resources are provisioned.

For example, a low DR cost, steady-state replication occurs with no active VMware Cloud
on AWS hosts.

Pilot Light Deployment

With a pilot light deployment, you can deploy a smaller subset of SDDC hosts ahead of
time to recover critical applications with lower RTO requirements than in a purely
on-demand approach.

For example, a steady-state replication occurs with DR costs for only three VMware Cloud
on AWS hosts in a pilot light SDDC cluster.

Disaster Recovery Approaches

When a disaster strikes on the production site, the DR plan failover starts, whether the plan is
for an on-demand or a pilot light deployment.



On-Demand Approach:

1. The SaaS orchestrator initiates the SDDC cluster build, and the recovery backup is
selected.
2. The DR VMs start on the live mount (NFS).
3. The DR plan completes by migrating the VMs into the DR SDDC cluster using vSphere
vMotion.
4. Additional clusters are created, if necessary, for capacity expansion or performance
improvement.
5. The disaster is mitigated, and only the changes (delta-based) are failed back to the
production site.

Pilot Light Approach:

1. The SaaS orchestrator selects the recovery backup.
2. The DR VMs start on the live mount (NFS).
3. The DR plan completes by migrating the VMs into the DR SDDC cluster using vSphere
vMotion.
4. Additional clusters or hosts are created, if necessary, for capacity expansion or
performance improvement.
5. The disaster is mitigated, and only the changes are failed back to the production site.
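Conceptually, both approaches execute an ordered runbook, and each step is recorded for the automatically generated report. The sketch below illustrates that idea only; the step names, callables, and report shape are invented for illustration and are not the actual SaaS orchestrator API:

```python
def run_dr_plan(steps):
    """Execute ordered DR plan steps, recording a runbook report.

    Each step is a (name, action) pair; actions are callables that
    return True on success. Execution stops at the first failure,
    mirroring how an orchestrated plan halts for remediation.
    """
    report = []
    for name, action in steps:
        ok = action()
        report.append((name, "success" if ok else "failed"))
        if not ok:
            break
    return report

# Hypothetical on-demand failover plan: build the SDDC first,
# then recover VMs from the selected backup.
plan = [
    ("Build SDDC cluster", lambda: True),
    ("Select recovery backup", lambda: True),
    ("Start DR VMs on NFS live mount", lambda: True),
    ("Storage vMotion VMs into SDDC cluster", lambda: True),
]
report = run_dr_plan(plan)
```

Recording every step, success or failure, is what makes the exported runbook reports usable for compliance audits.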

Cost Savings

Both on-demand and pilot light deployments help to save costs.

On-Demand Deployment

Because it uses an on-demand strategy, VMware Cloud Disaster Recovery reduces the
operating costs of DR:

• Backups are sent to the SCFS and, after some processing, are stored in a cost-effective
compressed form.
• The bulk of the DR infrastructure is programmatically deployed following a DR event.
• The costs of the cloud SDDC are incurred only when running a DR plan.
• Administrators can add clusters and hosts only when needed.

Pilot Light Deployment

A pilot light deployment assists organizations in reducing the total cost of cloud
infrastructure:

• A scaled-down version of a fully functional environment always runs in warm standby so
that core applications are readily available when a disaster event is triggered.
• Administrators can add SDDC hosts to fail over the remaining applications.
• The SDDC is expanded by adding hosts at a fraction of the cost of the ahead-of-time
deployment.
• Administrators can add clusters only when needed.
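The cost difference between the two deployment types can be pictured with a back-of-the-envelope model. The hourly rate below is purely hypothetical; actual VMware Cloud on AWS host pricing varies by host type, region, and commitment term:

```python
def steady_state_monthly_cost(active_hosts, host_rate_per_hour, hours=730):
    """Recurring DR-site compute cost while no disaster is active.

    host_rate_per_hour is a hypothetical on-demand host price used
    only for illustration; real pricing differs.
    """
    return active_hosts * host_rate_per_hour * hours

RATE = 8.0  # hypothetical $/host-hour, for illustration only

# On-demand deployment: zero SDDC hosts until a failover occurs.
on_demand_cost = steady_state_monthly_cost(0, RATE)

# Pilot light deployment: a minimal 3-host SDDC always running.
pilot_light_cost = steady_state_monthly_cost(3, RATE)
```

The trade-off the numbers expose is the one described above: pilot light pays a steady carrying cost for a lower RTO on critical applications, while on-demand carries no steady-state compute cost at all.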

Knowledge Check: Deployment Types


Can you distinguish between pilot light and on-demand deployments? Match each statement
to the appropriate deployment type (On-Demand or Pilot Light):

• The bulk of the DR infrastructure is programmatically deployed after a DR event.
• A low DR cost, steady-state replication occurs with no active hosts.
• A scaled-down version of the environment always runs in warm standby.
• A steady-state replication occurs with DR costs for only three hosts.

High-Level Architecture
In a VMware Cloud Disaster Recovery solution for VMware on AWS, components work together
to deliver disaster recovery.

VMware Cloud Disaster Recovery Workflow

When you set up VMware Cloud Disaster Recovery, you follow a workflow: activating
VMware Cloud Disaster Recovery, accessing the SaaS orchestrator, deploying the DRaaS
Connector, configuring the SDDC, and creating a DR plan.

DRaaS Components
Consider the function of the DRaaS components in more detail.

DRaaS Connector



The DRaaS Connector is a stateless software appliance that helps to replicate VM
snapshot deltas from protected vSphere sites to cloud backup sites, and back, driven by
policies that you set in protection groups.

When you deploy the DRaaS Connector as an OVA in vCenter Server, DR protection is
enabled for customer-managed or VMware Cloud on AWS SDDCs.

The DRaaS Connector can be redeployed at any time with no loss of backup data.
Software upgrades for the connector are automatic. Each connector provides additional
replication bandwidth for the site.

Orchestrator

VMware Cloud Disaster Recovery is delivered as SaaS and provides SDDC orchestration
and management using the DRaaS console.

You use the DR orchestrator to create the following components:


• Protected sites
• Protection groups: Data backups and schedules, and RPO management
• Protected vCenter Server instances and resource mappings
• Failover and failback workflow steps: Network mapping and changes, and scripts

Scale-Out Cloud File System



The Scale-Out Cloud File System (SCFS) holds the protected VM data and provides the live
mount capability for faster recovery times.

SCFS is integrated with and managed by the VMware Cloud control panel. Other features
include:
• Uses AWS S3 storage for deep retention
• Uses the Datrium log-structured file system
• Provides VMware Cloud with DR and ransomware recovery with deep cloud data
protection

Highlights of Datrium log-structured file system:


• Retains every VM backup ever taken.
• Can have up to 2,000 backups per VM.
• Every backup is a full synthetic backup.
• All data is encrypted, compressed, and deduplicated.
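A toy model can illustrate how a log-structured file system presents every recovery point as a full synthetic backup without re-copying unchanged data. Here a VM disk is reduced to a dict of block number to bytes; the real SCFS internals are, of course, far more involved:

```python
def synthesize_full(base, deltas):
    """Build a point-in-time full image from a base backup plus
    ordered incremental deltas of changed blocks."""
    image = dict(base)  # copy, so earlier recovery points stay intact
    for delta in deltas:
        image.update(delta)
    return image

# Illustrative block maps: a base full backup and two daily deltas.
base = {0: b"boot", 1: b"data-v1", 2: b"logs-v1"}
deltas = [
    {1: b"data-v2"},              # day 1: block 1 changed
    {2: b"logs-v2", 3: b"new"},   # day 2: block 2 changed, block 3 added
]
full_day2 = synthesize_full(base, deltas)
```

Because only changed blocks are stored per recovery point, retaining up to 2,000 backups per VM stays cheap, yet any of them can be materialized as a complete image.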

DRaaS Connector Functions

The DRaaS Connector performs the following tasks in protection and recovery sites.

Backup



The DRaaS Connector creates snapshots of your VMs and files, and sends them to the SCFS
using this workflow:

1. The DRaaS Connector uses VMware vSphere® Storage APIs - Data Protection to create
snapshots of the virtual machine disk (VMDK) file.
2. The DRaaS Connector uses changed block tracking (CBT) to query only changed blocks.
3. Snapshots are compressed and encrypted before being sent to the cloud backup
repository on AWS S3.
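Steps 2 and 3 can be sketched as follows: only the blocks that CBT reports as changed are read, and the payload is compressed before upload. This is an illustrative sketch with invented block sizes; the encryption step the connector also performs is omitted for brevity:

```python
import zlib

def build_upload_payload(disk_blocks, changed_block_ids):
    """Read only the CBT-reported changed blocks and compress them
    for transfer to the cloud backup repository."""
    delta = {i: disk_blocks[i] for i in changed_block_ids}
    raw = b"".join(delta[i] for i in sorted(delta))
    return delta, zlib.compress(raw)

# Hypothetical 512-byte blocks of a small virtual disk.
disk = {0: b"A" * 512, 1: b"B" * 512, 2: b"C" * 512}
changed = [1]  # CBT reports only block 1 changed since the last snapshot
delta, payload = build_upload_payload(disk, changed)
```

Skipping unchanged blocks is what keeps steady-state replication traffic proportional to the change rate rather than to total disk size.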

Replication



DRaaS deployed in the cloud creates snapshots of your VMs and files, and replicates them
to the SCFS using this workflow:

1. Take a VMware snapshot.
2. Ask vSphere for changes since the last snapshot.
3. Copy changed blocks from the VMware snapshot using the Virtual Disk Development
Kit (VDDK).
4. Compress and encrypt the data.
5. Transfer the data to the SCFS.
6. Delete the VMware snapshot.

Note: vSphere remembers changes even without a live snapshot.
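The ordering of this cycle matters: the VMware snapshot must outlive the block copy and is deleted only after the transfer completes, while CBT tracking persists without a live snapshot. A minimal sketch with a mock vSphere API follows; the method names are invented stand-ins, not the real vSphere SDK or VDDK calls:

```python
class MockVSphere:
    """Stand-in for the snapshot/CBT calls the connector makes;
    records the order of operations for inspection."""
    def __init__(self):
        self.ops = []

    def create_snapshot(self):
        self.ops.append("create_snapshot")
        return "snap-1"

    def changed_blocks_since(self, last_snapshot):
        self.ops.append("query_cbt")
        return {7: b"changed"}

    def delete_snapshot(self, snap):
        self.ops.append("delete_snapshot")

def replication_cycle(vsphere, transfer):
    """One steady-state cycle: snapshot, query CBT, transfer the
    changed blocks, then delete the snapshot."""
    snap = vsphere.create_snapshot()
    delta = vsphere.changed_blocks_since("snap-0")
    transfer(delta)  # compress/encrypt/send to the SCFS
    vsphere.delete_snapshot(snap)

sent = []
vc = MockVSphere()
replication_cycle(vc, sent.append)
```

Running the cycle leaves no snapshot behind, which is why steady-state replication does not accumulate snapshot overhead on the protected VMs.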

Failover

You run a failover operation after a disaster or cybercrime event when the source site is
no longer available. The failover operation is orchestrated on the destination site, based
on previously replicated snapshots.

When failing over to a VMware Cloud on AWS SDDC, VMs that belong to the protection
groups defined in your DR plan are recovered to the vCenter Server instance in your
recovery SDDC.

When a plan finishes executing, you must explicitly commit a failover or roll back and
acknowledge a failback for the plan to transition to the Ready state.

You can fail over your VMs using fully on-demand or pilot light modes:

• DRaaS provisions and scales dedicated tenant SDDCs.
• You can pick which recovery point to restart.
• VM guests restart through the runbook plans.
• vSphere Storage vMotion to vSAN runs in the background after VMs are operational
(optional).
• VMware Cloud API manages the SDDC and vCenter Server.
• At the conclusion of DR, VMware Cloud Disaster Recovery brings change blocks from the
SDDC back to the SCFS, creates a new full backup, and sends change blocks to the data
center.

Recovery and Failback

Failback from an SDDC brings back only data that has changed since the failover.

A failback from the recovery SDDC consists of the following general stages:
1. Undo stage: VMs on the failback target are restored to the state that matches the
snapshots used at recovery time:

• VMs are powered off on the recovery SDDC.


• Automatic snapshots of VMs are taken to the SCFS following the power-off.

2. Catchup stage: VM changes incurred while running in the SDDC following failover are
applied to the VMs on the failback target:

• Differences between the VM state at the time of recovery and failback are
applied to the SCFS snapshot.
• VM backups for the on-premises system are retrieved from the SCFS using a
general forever incremental protocol.
• VMs are recovered to a protected vSphere site.

When VMs are recovered, they are automatically deleted from the recovery SDDC.
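The undo and catchup stages can be modeled as a restore followed by a block-level diff, so that only data changed while running in the SDDC travels back. A simplified sketch, with VM disks reduced to dicts of block number to bytes (illustrative only):

```python
def failback(recovery_snapshot, sddc_vm):
    """Two-stage failback: undo the on-prem VM to the snapshot used
    at recovery time, then apply only the blocks changed while the
    VM ran in the SDDC (the catchup delta)."""
    # Undo stage: restore on-prem state to the recovery-time snapshot.
    state = dict(recovery_snapshot)
    # Catchup stage: diff SDDC state against that snapshot and apply.
    delta = {block: data for block, data in sddc_vm.items()
             if recovery_snapshot.get(block) != data}
    state.update(delta)
    return state, delta

recovery_snap = {0: b"os", 1: b"db-v1"}
sddc = {0: b"os", 1: b"db-v2", 2: b"new-log"}  # changes accrued during DR
final, delta = failback(recovery_snap, sddc)
```

Because both sides share the recovery-time snapshot as a baseline, the catchup delta excludes unchanged blocks, which is why failback moves only the changed data and keeps egress small.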

DRaaS for VMware Cloud on AWS

DRaaS provides the following services across AWS regions:

• DR capability to protect a VMware Cloud SDDC in another AWS region.
• Backup to AWS S3 and failover to another region.
• On-demand or pilot light mode.
• Amazon Virtual Private Cloud (VPC) peering avoids Internet and egress costs during
backup and failback.

Scale-Out Cloud File System

DEMO - VMware Cloud Disaster Recovery provides lower RTO from the SCFS

Video Transcript

In this demo, we will explore one of the architectural advantages of using VMware Cloud
Disaster Recovery to minimize recovery times when failing over to your VMware Cloud on
AWS DR site.
VMware Cloud Disaster Recovery provides fast and reliable recovery using the unique
capabilities of the scale-out cloud file system to store the recovery points for your
production VMware workloads.

Each recovery point created by VMware Cloud Disaster Recovery is represented in the
scale-out cloud file system inventory as a complete set of VMs at the point in time
specified by the protection group scheduling policies. The incremental changes received
from the protected site DRaaS connector are synthesized into a full image of the VMs and
stored as an immutable recovery point.

When it comes time to use these recovery points for a disaster event, there is no need to
wait for lengthy restores, image reconstruction, or data migration. They are ready to use
directly from the scale-out cloud file system. As part of the VMware Cloud Disaster
Recovery setup, the scale-out cloud file system is presented to the recovery SDDC as an
NFS-mounted datastore.

Outside of DR testing or actual disaster events, this special datastore appears empty.
When a DR plan is run, the selected recovery point is chosen, with the latest copy being
the default. Then, an instant clone of that recovery point is made available in the
mounted datastore to use for recovery. The original recovery point is left unchanged.
Depending on the capacity, performance and availability SLAs needed for DR operations,
the running VMs for that DR plan can be left on the scale-out cloud file system datastore.

The VMs in that datastore view can now be configured by VMware Cloud Disaster
Recovery into the SDDC inventory and quickly powered on. Or, they can be set to migrate
into the SDDC vSAN WorkloadDatastore with no downtime. Note that the VM migration is
performed in the background under VMware Cloud Disaster Recovery orchestration
control. No further user interaction is needed. The VMs that were failed over to the SDDC
as part of this DR plan are already up and running and ready for service.

Leveraging the recovery point inventory in the scale-out cloud file system and the
mounted datastore architecture of the SDDC allows VMware Cloud Disaster Recovery to
quickly bring the desired version of the VMs into inventory and lower recovery times for
your DR solution.

Protection Groups

Protection groups are a key configuration component of VMware Cloud Disaster Recovery.

Protection groups contain one or more VMs in your vSphere environment. They are added to a
DR plan so that you can orchestrate recovery to a new site using selected snapshots of your
VMs.

Snapshots are created according to the schedule that you define.



Typically, a protection group consists of the VMs that support a service or an
application, such as an email or accounting system.

For example, an application might consist of a two-server database cluster, three
application servers, and four web servers. In most cases, it is not beneficial to fail
over part of this application, so all VMs are included in a single protection group.

In the example, three protection groups are Web App, Email, and SharePoint.

A protection group consists of the following components:

• Protected site, either customer-managed or VMware Cloud on AWS SDDC


• Members, consisting of VMs
• Cloud file system, where snapshots replicate to
• Policies for snapshots, which include snapshot frequency schedule and retention
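A protection group can be pictured as a small policy document combining these components. The field names below are illustrative only (the real orchestrator uses its own UI and API fields), together with a helper showing how a retention window prunes old snapshots:

```python
from datetime import datetime, timedelta

# Hypothetical protection-group definition mirroring the components
# above: a protected site, member VMs, a target cloud file system,
# and a snapshot schedule with retention.
protection_group = {
    "site": "Data Center Site 1",
    "members": ["db-01", "db-02", "app-01", "web-01"],
    "cloud_file_system": "scfs-01",
    "schedule": {"frequency": "daily", "at": "00:00", "retention_days": 7},
}

def prune(snapshots, now, retention_days):
    """Keep only snapshots within the retention window."""
    cutoff = now - timedelta(days=retention_days)
    return [snap for snap in snapshots if snap >= cutoff]

now = datetime(2023, 1, 13)
snaps = [now - timedelta(days=d) for d in range(10)]  # 10 daily snapshots
kept = prune(snaps, now, protection_group["schedule"]["retention_days"])
```

This mirrors the default schedule seen in the setup demo (daily at midnight, retained for one week): older snapshots age out while the window of recent recovery points stays constant.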

Knowledge Check: VMware Cloud Disaster Recovery Components


Match each VMware Cloud Disaster Recovery component with the appropriate description.



Deploying VMware Cloud Disaster Recovery
A VMware Cloud Disaster Recovery protected site includes vCenter Server instances, protection
groups, and DR plans. A protected site can be an on-premises data center or a VMware Cloud
SDDC.

A protected site includes the vCenter Server instance which contains the VMs you want to
protect. A vCenter Server instance can only belong to one site, and protection groups and DR
plans can only be associated with one vCenter Server instance.

The deployment process for VMware Cloud Disaster Recovery can be outlined as follows:

1. Set up the protected sites.
2. Test the DR plans.
3. Execute the DR plan failover.
4. Execute the DR plan failback.

Through a series of demonstration videos, you can explore how these steps are performed.

Demonstration: Setting Up Protected Sites



This video describes the process of setting up protected sites in an on-premises environment.

DEMO - VMware Cloud Disaster Recovery - Setup Protected Sites

Video Transcript

In this step of setting up VMware Cloud Disaster Recovery, we're going to set up a
protected site. This is step number two in our quick setup. We start off in the UI
dashboard and we have our setup steps here. So let's go ahead and click on setup here.

We're going to take a look around. We can build an on-prem protected site. We can also
protect VMware Cloud on AWS SDDCs in another region. But for this setup, we're going to
use an on-prem vCenter. We'll give it a name here, Data Center Site 1 as an example, and
click Set up. This is going to create the logical entity for our protected site, and we see
that in our menu here, and it takes us to the Protected sites page.

We have a few steps to do. We're going to deploy the connector, register the vCenter, and
we're going to create a test protection group. So let's go about these tasks. We'll click
Deploy, and let's go ahead and copy the URL to our clipboard, and we're going to use that
to basically paste into our vCenter.

I've created a folder here for DRaaS connectors. What I want to do is deploy a new OVF
appliance here. I'll go ahead and paste in the connector URL from the other screen. We'll
come in here and give it a name. I'm going to call this DRC-1, DRaaS connector 1. We'll
choose the vSAN cluster. We'll go ahead and ignore the certificate for now. Put that on
our vsanDatastore. Now here, we're going to choose a network that can talk to the other
ESX hosts, the vCenter, and has the proper firewall routing allowed out to the public
Internet to get to our VMware Cloud components in AWS. The Management distributed
virtual port group (MGMT-DVPG) is the one I'm using for that. And we're done.

We're going to go ahead and click Finish. The OVA will deploy. This will take a few
minutes, depending on your system. And once that's done, then what we'll do is we'll go
ahead in here and power this on. So let's go ahead and power on the virtual machine. It'll
take a few minutes to initialize and come up to Ready.

We're going to log in and configure it in just a moment. So let's go ahead and open a
screen here on the console. I'm going to log in as the administrator with the initial default
password, it was on that other screen we were looking at. We'll go ahead and do a static
IP assignment. So let's choose option A. I've got an IP address already configured, so let's
go ahead and enter that in, a subnet mask and a gateway. I've got a couple of DNS servers
that I want to use. It's essentially testing the network.

The FQDN of the orchestrator, again, was back on that deployment screen that we had.
We could copy it from there. I happened to know what it is, so we're going to type that in
here. It's essentially going to validate. Back on that screen was a passcode. Let's go take a
quick look at what I was talking about here.
In this deploy window, we have the credentials, we have the FQDN. We also have the
passcode here that we can use. So we'll go ahead and take that, and paste that in here.
And that changes every five minutes. We're going to give it a label, this matches the VM
name, DRC-1. It's basically connecting, setting things up. We're good. We can go ahead
and exit out of this window, and go back to our vCenter and finish up what we're doing
here. We've got the connector set here. If you remember the password, the default
password was set. If we wanted to get to the other one, it's stored in here. We're not
going to worry about that right now.

Let's go ahead and register the vCenter. So for this, I'm going to go back to my vCenter.
I'm going to pick the root here. I'm going to go ahead and copy the IP address. Come back
into VMware Cloud Disaster Recovery UI. Let's go ahead and register the vCenter. Paste in
the default user here. This is an administrative-level user, and I have the password for
that. And we'll go ahead and register this.
This is going to create the DRaaS connector and the vCenter relationship for this protected
site. If I need to get to the vCenter, this is how I would also remove it. If I wanted to
register a different one, I can manage my connectors, I can manage my vCenter
registrations all from this part here.
So let's go ahead and create a protection group and we'll call it TEST. We're going to build
it on Data Center Site 1, the one we just created. We're going to associate it with the
vCenter that we just registered. And I'm going to use just a simple naming pattern here,
TEST*, and we'll see what virtual machines we have that have those. It looks like there's
10 virtual machines in my environment that matched that pattern.

We can go now and here's a simple schedule. We're going to get into protection groups
later. But for this one, the default is daily at midnight, with a retention of one week. You
could add more schedules here or adjust this. That's going to be a topic for another task.
So let's just finish this.
I want to test out to make sure everything's working. We now have our DRaaS connector
installed, our vCenter registered, our TEST protection group. Let's go ahead and take a
quick manual snapshot, and we're going to delete this, so keeping it for a month isn't
going to be hard. It's going to basically go perform a full backup of those virtual machines,
copy that to the scale-out cloud file system in VMware Cloud Disaster Recovery. If we look
at this, there's the point-in-time snapshot from this manual backup of those 10 virtual
machines

We can go back to our test here. We'll go ahead and select this and go ahead and clean
this up. We don't need to keep this around. I just wanted to make sure that the protection
group within the protected site worked. And with that, we are basically done setting up a
protective site.
We deployed the OVA DRaaS connector. We registered the associated vCenter that had
the virtual machines we wanted to protect. We created a sample TEST protection group
that's going to run every night at midnight, validated with a manual snapshot that
everything worked, and cleaned up.
And we're finished with this task.

Knowledge Check: Setting Up Protected Sites


Which steps do you take when configuring a protected site for an on-premises vCenter Server
environment? (Select two options)

Deploy the DRaaS Connector.


Register the target SDDC.
Create the SCFS.
Create a protection group.

Demonstration: Testing DR Plans

This video describes how you can test DR plans without disrupting running workloads.

DEMO - VMware Cloud Disaster Recovery Test



Video Transcript

In this demonstration of VMware Cloud Disaster Recovery, we will show how we can
easily test our DR plans to VMware Cloud on AWS with no disruption to running
workloads in the production VMware site. This testing will provide higher confidence in
the plans and construction we have set up for actual DR needs.

We started out with two vCenters. The one on the left is our production on-prem site
running some example virtual machine workloads. And the one on the right is a newly
provisioned, empty SDDC in VMC on AWS. This cloud-based SDDC could be set up just in
time for testing or impending disaster recovery needs, or always running in a minimal
pilot light configuration for continuous access, and then scale when needed for DR.

To test the DR plan, we will connect to the SaaS orchestration component of VMware
Cloud Disaster Recovery. From the dashboard, we will navigate to the DR plans and select
the sample application plan from our list of recovery plans. Continuous checks for plan
compliance help ensure reliability. Note: This plan is ready and could be used for an actual
failover if desired.

In this case, we click Test plan to perform a quick non-disruptive test of the DR plan. The
latest recovery point is automatically selected when running a plan. It is possible to select
a different snapshot from the deep range of recovery points for use cases such as
ransomware. In this case, we will use the latest recovery point.

During plan testing, we also have the option to leave the VM workloads on the live-mount
NFS datastore that's holding the snapshots. This will save some testing time and allow us
to free up the SDDC sooner, if desired. Note that during a DR plan test, changes to VMs
while in the failover site location are not captured. So there is no need to fully migrate
them into the target SDDC. This option is not available during an actual failover.



We click Next a few times and enter TEST PLAN, then Run test to start the failover testing.
VMware Cloud Disaster Recovery orchestrates the recovery of multiple virtual machines
based on plan specifications. In this case, we recover a database server, then a web
server, a file server and a virtual desktop into VMware Cloud on AWS.

Switching between views, we can see that they are quickly up and running on the
software-defined data center powered by vSphere, NSX, and vSAN. There is no need to
convert the virtual machine format or perform lengthy restore operations. This particular
DR plan has an optional step, number five, that prompts for user input, providing even
further control of the plan execution and testing scenarios. Once all of the failover actions
have been completed, we are presented with the option to clean up the test.

Let's first take a quick look at the two vCenters again. The production site vCenter on the
left is still operating as when we started. And the SDDC on the right is now running the
workload specified by the plan. Note: The plan took advantage of test network isolation
settings to make sure that the test VMs on the right do not interfere with production VMs
on the left. To finish the testing, we return to the SaaS Orchestrator UI and click through
the cleanup confirmations.

Switching back to watching the two vCenters, we see the cloud-based SDDC getting
cleared back to an empty status, ready for other testing, or even decommissioning if
desired. Once the test failover has been cleaned up, we acknowledge the plan testing and
we're done. VMware Cloud Disaster Recovery automatically generates detailed reports
whenever a plan is run. These reports can be exported for compliance, audits and
regulatory requirements.

In this demo, we saw how easy it is to non-disruptively test the failover of a sample
workload using VMware Cloud Disaster Recovery. Testing DR plans regularly increases the
confidence that they will work as planned when needed. Continuous health checks help
ensure a test or recovery can be performed at any time. Workflows are orchestrated to
bring up workloads in the desired order. Reports are generated automatically for
compliance.

VMware Cloud Disaster Recovery is a cost-effective DR-as-a-service that minimizes
downtime by providing quick and reliable failover to VMware Cloud on AWS.

Knowledge Check: Running Tests of DR Plans


Which component of VMware Cloud Disaster Recovery do you connect to when running a test
of DR plans? (Select one option)

SaaS orchestrator
SCFS
SDDC
DRaaS Connector



Demonstration: Executing the DR Plan Failover

The video describes how to recover VM workloads using the failover operations.

DEMO - VMware Cloud Disaster Recovery Failover

Video Transcript

In this demonstration of VMware Cloud Disaster Recovery, we will show how we can
quickly and reliably recover VM workloads to VMware Cloud on AWS.

We select the sample application plan from our list of recovery plans. Continuous checks
for plan compliance help ensure reliability. We leverage cloud economics by provisioning
and scaling up the target DR site only when needed.

In this case, we click Failover to perform a recovery to a recently provisioned
software-defined data center. The latest recovery point is automatically selected when starting a
defined data center. The latest recovery point is automatically selected when starting a
failover. Leveraging the capabilities of the scale-out cloud file system, it is possible to have
a deep range of recovery points for use cases, such as ransomware.

We click next a few times and enter FAILOVER, then finish to start the recovery. VMware
Cloud Disaster Recovery orchestrates the recovery of multiple virtual machines based on
plan specifications. In this case, we recover a database server, then a web server, a file
server, and a virtual desktop in the VMware Cloud on AWS, only consuming cloud
compute resources in the event of an actual failover.

The VMs are quickly up and running from the live-mount datastore on the just-in-time
SDDC, powered by vSphere NSX and vSAN. There is no need to convert virtual machine
format or perform lengthy restore operations. The VMs will migrate fully to the vSAN
datastore in the background. No other action is needed.

To finish the plan, click Commit and enter COMMIT FAILOVER to continue running these
workloads in VMware Cloud on AWS, until the disaster is resolved and the workloads can
be migrated back on-prem and cloud resource consumption reduced.

Whenever a plan is run or tested, VMware Cloud Disaster Recovery automatically generates
detailed runbook reports with the execution details that can be exported for compliance,
audits, and regulatory requirements.

In this example, we saw how easy it is to recover VM workloads using VMware Cloud
Disaster Recovery. Continuous health checks help increase reliability that a test or
recovery can be performed at any time. Cloud costs are reduced, as workflows are
orchestrated to bring up workloads only when needed for DR. Reports are generated
automatically for compliance.

VMware Cloud Disaster Recovery is a cost-effective DR-as-a-service that minimizes
downtime by providing quick, reliable failover to VMware Cloud on AWS.

Knowledge Check: Executing the DR Plan Failover


VMware Cloud Disaster Recovery orchestrates the recovery of multiple VMs according to which
component? (Select one option)

Protected site
DR site
DR plan
DRaaS Connector

Demonstration: Executing the DR Plan Failback

The video describes how to fail back workload VMs from a DR site to the original on-premises
site.

DEMO - VMware Cloud Disaster Recovery Failback



Video Transcript

In this demonstration of VMware Cloud DR, we will show the simple process of failing a
workload set of VMs back from their DR site running in a VMC SDDC to the original on-
prem vCenter site.

We start off with our split screen view with the on-prem vCenter data center on the left
that experienced the DR event, and the VMC cloud-based SDDC on the right currently
running the prescribed VM workload after executing that DR failover operation. But in this
example, we do not care much about the state of the on-prem vCenter VM workload, the
highlighted VMs on the left, as the VMware Cloud DR failback plan execution will address
their state as part of the orchestration.

To begin the failback, let's navigate to the VMware Cloud DR orchestrator dashboard and
then to DR plans view. Note that this interface is running in the cloud independent of
either site being managed. Here in the DR plans inventory, we already have our sample
application failback plan constructed and ready to execute.

Note that as a failback plan, it is not testable in the same manner as failover plans. We see
the compliance checks are all passed for this DR plan, so it is okay to proceed with the
operation. A quick preview of the DR plan steps looks a bit different than the original
failover steps defined to transition from on-prem to cloud operations. We will cover these
in a bit more detail shortly.

We enter PLANNED FAILOVER and then hit the Start failover button to begin executing the
plan. Let's review the steps in more detail as they execute. The first action is to prepare
the original site for an optimal recovery. This starts with powering off the original VMs if
they were left running. Then, the original site is recovered through snapshot and CBT
management back to the same point in time that was used for the original DR failover
action. We see this in the first several steps of power off and restore.

Once the original site is ready for failback, the plan orchestration then powers off the VMs
in the cloud DR site SDDC. The orchestration process then takes a snapshot of the virtual
machines at the DR site to capture the changes that have accrued while operating in the
DR mode in VMC.

The next few steps then determine the required changes that need to be transferred back to
the original on-prem site for failback. The on-prem VMs will be customized, actually
returned to their original vCenter configuration based on the DR plan details. Then, each
step in the DR plan is executed, much like the steps were run in the original failover plan.

The changes are applied from the latest snapshot taken in the VMC SDDC. This DR plan
has an optional user input step, allowing us to confirm the entire failback proceeded as
desired before concluding the DR plan execution. Once the user input is acknowledged,
the plan will perform its final step of deleting the powered off VMs from the VMC SDDC.

We navigate to the SDDC for a quick check of the site that shows the VMs being removed.
Note that once the SDDC is cleared, it could be deleted until needed again for testing or
failover. We then complete the failback by committing the plan execution, and we are
done.

In this demo, we have successfully failed a workload back from the VMC SDDC to the
original on-prem vCenter environment with VMware Cloud DR.

Knowledge Check: Executing the DR Plan Failback


Before you run a failback plan, which step must you take? (Select one option)

Verify that the plan passes compliance checks.


Test the plan.
Disconnect from the target site.
Verify that the failback steps are the same as the failover steps.

Compliance Management

VMware Cloud Disaster Recovery offers several health check features, including continuous
compliance checks.

Continuous compliance checks verify the integrity of DR plans so that plans are ready to run.
For example, compliance checks ensure that the specified protection groups are active on the
protected site and are being replicated successfully to the target site.
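The compliance-check idea above can be sketched as a simple readiness rule: a DR plan is considered ready to run only if every check passes. The check names and data structure below are invented for illustration; they are not the product's actual implementation.

```python
# Illustrative sketch of continuous compliance checking: a DR plan is ready
# only when every individual check passes. Check names are hypothetical.
from dataclasses import dataclass


@dataclass
class ComplianceCheck:
    name: str
    passed: bool


def plan_is_ready(checks: list[ComplianceCheck]) -> bool:
    """A DR plan is ready only if every compliance check passes."""
    return all(check.passed for check in checks)


checks = [
    ComplianceCheck("protection groups active on protected site", True),
    ComplianceCheck("replication to target site healthy", True),
    ComplianceCheck("recovery SDDC reachable", False),
]
print(plan_is_ready(checks))  # False: one failing check blocks readiness
```

A single failing check is enough to mark the plan non-compliant, which mirrors the all-green requirement shown in the product's DR plans view.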



DEMO - VMware Cloud Disaster Recovery Health

Video Transcript

Welcome to another VMware Cloud Disaster Recovery product demonstration. Today, we will
focus on health checks, reports and status checking. Building and maintaining a DR
solution that is continuously checked, well-documented and running as expected helps
improve the organization's DR readiness and raises the level of confidence that the system
will work as planned should a disaster arise.

In this demonstration, we will explore the various built-in health checks, reports and
status monitoring capabilities of VMware Cloud Disaster Recovery that enable a greater
degree of visibility into the overall health and readiness of the solution. We will explore
how to review the status of VMware Cloud Disaster Recovery components and
operations, monitor the VM protection policies, overall readiness of the DR plans, and
monitor events and alarms in the system. We'll look at enabling email alerts for various
conditions and how to produce runbooks, health check reports, and track configuration
changes.

Let's start at the top level in the SaaS orchestrator dashboard. This management interface
runs in the cloud, independent of the configured protected sites and recovery sites. This
gives us a system-wide view of current components, sites, and operational status. The
global summary provides a synopsis of the key components of VMware Cloud Disaster
Recovery. This includes overall system health; cloud backup storage consumption (the
scale-out cloud filesystem); protected sites and recovery SDDCs enabled for DR
configurations; VM protection coverage based on the protection groups defined for those
protected sites; and DR plans in the inventory. Green check marks indicate good
operational health. If there were a problem in an area, we could easily navigate into the
associated detailed view.


On the right-hand side, we see a list of currently running tasks as well as recently finished
tasks, and any recent alarms. These lists track the recent activity in the VMware Cloud
Disaster Recovery system. From the dashboard, we can navigate to other functional areas
for operations, administration, and as we'll see here, health checks, reports and status.

Let's look closer at protection groups. From the protection groups list view, we can see
the current status of each protection policy that has been defined and which protected
site it is associated with. For the example here, they are all of type on-prem site and
replicating their change datasets to the scale-out cloud filesystem called Cloud Backup. If
there were any issues, we could navigate into the affected protection group and review
the history or details of any of the individual snapshots. A healthy OK status in the Health
column indicates that the policies are running on schedule as defined.

Let's now navigate over to the DR plans view. This is where most of the details we are
interested in will be found. In this view, we start with an overall DR plan list, which
displays the current state of each of the defined DR plans. The plan status shows which
plans are enabled and ready to failover or test. The plans can be in a number of different
states, depending on the current conditions. For details on the other states, please
consult the product documentation.

The other key indicator in this view is the Compliance status in the right-hand column of
the main window. DR health checks run against active plans every 30 minutes and check
the operational readiness of the plans in several key areas that we will explore in just a
bit. One other operational characteristic shown in this view is the protected site, usually
the on-prem data center location, or an SDDC from another region, and recovery site,
usually the target SDDC. If the SDDC does not exist, this field will be empty and the full DR
health check will be incomplete.

Next, we will explore an individual DR plan. Let's pick the APPS plan. The DR health check
status is really obvious in this view. If we click the Show button near the green check
mark, we'll get a more detailed view of the health checks being performed. The checks
cover four main areas: the protected site, the recovery site, the orchestration steps, and
VMware Cloud Disaster Recovery component integration status. Each of the checks
actually has several lower-level checks conducted.

It's possible to download the health check report as a PDF and share with others or file as
desired. The report has a timestamp and summary information as well, for easy tracking.
In this view, on the plan details menu bar is a Reports tab. There are two types of plan
reports available here, run and configuration history.

These run reports are generated any time this plan is executed for either testing or
failover operations. This provides a useful audit trail history of the plan. Similar to the
health check report, this automatically generated report can also be downloaded as PDF
and shared or filed as desired. Run reports contain critical timing and task tracking
information to provide insight into the plan details as well as the overall plan execution
results.

This is essentially your run book, documentation of plan testing, or actual failover
operations. These run reports are also generated for plans used to failback from the cloud
to on-prem locations. The DR plan run report covers these basic areas each time the plan
is executed: the plan scope, a failover action summary with times and results, failover
mappings from site to site, workflow steps, failover execution timings with step-by-step
details, and an error log. The other type of report in this view is the configuration
report. Each time a plan is changed and saved, a new version of the plan configuration
report is created with a timestamp. This provides better tracking for basic change control
purposes.

The last area to look at in the DR plans view is the plan itself. Let's edit the plan and
review the Alerts settings. It's here that you can build email notification triggers for the
plans in your environment. Simply configure one or more recipients in the orchestrator
setup to receive email alerts. Then for each plan, choose what triggers you want included.
These can be for regular health check operations, or for plan execution changes. When
something changes in the plans, your team will be alerted automatically.

One last health and status area within the orchestrator UI worth exploring is the
monitoring view. In this view, we can review all events and alarms presented by the
system. With some basic filtering, grouping and level selection, it is possible to narrow
down the view and focus on specific operations or areas where attention might be
needed.

Let's review what we have covered in this demonstration. From the SaaS orchestrator UI,
we can easily see and review the status of VMware Cloud Disaster Recovery components
and operations, track the on-prem application VM protection policies, the protection
groups, monitor the overall readiness status of the DR plans defined, produce detailed
health check reports, execution runbooks and plan configuration reports that are ready to
download and share, set up email alert mechanisms for your administrative team, and
review events and alarms for all parts of the VMware Cloud Disaster Recovery setup.

This level of visibility, detail, and tracking makes VMware Cloud Disaster Recovery easier to
manage and provides a higher degree of confidence that when disaster arises, you will be
prepared for recovery to the cloud.



Module Summary
Thursday, February 2, 2023 1:39 PM

Review the key concepts covered in this module:


• Backup and disaster recovery practices help to support business continuity. Backup is the
process of creating copies of your data across multiple environments to ensure that the
data is safe and recoverable. DR automates data transfer to keep your workloads available
if an outage occurs.

• VMware Cloud on AWS offers two DRaaS solutions: Site Recovery and VMware Cloud
Disaster Recovery. You select a DRaaS solution that best aligns with your requirements.

• Site Recovery is a separately purchased add-on for VMware Cloud on AWS. It provides
DRaaS for data center failures by replicating VMs between an on-premises data center
and a VMware Cloud on AWS data center.

• You can deploy multiple instances of Site Recovery in a VMware Cloud on AWS SDDC.

• VMware Cloud Disaster Recovery is an on-demand SaaS disaster recovery solution.

• All VMware Cloud Disaster Recovery components, including cloud storage, are deployed
and managed by VMware in an AWS account dedicated to each tenant.

Additional Resources

• For information about DRaaS for VMware Cloud on AWS, access the resources on the
VMware Tech Zone website at https://vmc.techzone.vmware.com/vmc-aws-draas.

• For information about building and maintaining a DR solution using VMware Cloud Disaster
Recovery, watch the demonstration videos at
https://www.youtube.com/playlist?list=PLNOz1mVhDkG6ZsnZPI_bol5o1ii1onPTv.



Account Management in VMware Cloud on AWS
Thursday, February 2, 2023 1:43 PM

Learner Objectives
After completing this lesson, you should be able to:

• Identify VMware Cloud on AWS accounts for the onboarding process


• Describe VMware Cloud on AWS service roles
• Describe how to add an identity source to the SDDC LDAP domain
• Recognize the setup of the enterprise federation

This lesson focuses on account management for VMware Cloud on AWS.

For information about account management for other hyperscaler partners, you can access the following
resources:

Azure VMware Solution


See the Azure VMware Solution identity concepts section in the Azure VMware Solution documentation.

Google Cloud VMware Engine


See the VMware Engine IAM roles and permissions section in the Google Cloud VMware Engine
documentation.

VMware Cloud on AWS Accounts and Roles


VMware Cloud on AWS accounts are based on an organization, which corresponds to a group or line of
business subscribed to VMware Cloud services.

Each organization has one or more organization owners, who have access to all the resources and services of
the organization and can invite additional users to the account.

By default, these additional users are organization members, who can use and manage cloud services
belonging to the organization but cannot invite new users.

Accounts for the Onboarding Process

A VMware Customer Connect account is required to authenticate an Administrator during the initial Cloud
Services Portal onboarding process.

Organization Owner Account


If you have a VMware Customer Connect account, you can use it to create an Organization Owner
account after you receive the invitation email.

Maintenance and Troubleshooting Page 532

If you do not have a VMware Customer Connect account, you are prompted to create one during
Organization Owner Account creation.

Inviting a New User


As an organization owner, you can invite additional users to your organization.

After an organization owner invites you to an organization in VMware Cloud, you can accept the
invitation to create your account and gain access to the service.

Organization members cannot invite users to an organization.

Knowledge Check: Accounts for the Onboarding Process


Which account is needed to create an Organization Owner Account in VMware Cloud on AWS? (Select one
option)

VMware Customer Connect Account


VMware Customer Service Account
VMware Cloud Account

VMware Cloud on AWS Service Roles



Service roles define the privileges of organization members when they access the VMware
Cloud services that the organization uses.

The following VMware Cloud on AWS service roles can be assigned.

Administrator
This role has full cloud administrator rights to all service features in VMware Cloud on AWS.

Administrator (Delete Restricted)


This role has full cloud administrator rights to all service features in the VMware Cloud on AWS
console; however, this role cannot delete SDDCs or clusters.

NSX Cloud Auditor


This role can view NSX service settings and events but cannot change the service.

NSX Cloud Admin


This role can do all the tasks related to the NSX service.

NSX Manager UI

With the organization role of NSX Cloud Admin or NSX Cloud Auditor, you can use either the VMware NSX
Manager web interface or the VMware Cloud console Networking & Security tab to manage your SDDC
networks.

The NSX Manager interface is accessible at a public IP address reachable by any browser that can connect to
the Internet. You click OPEN NSX MANAGER on the SDDC Summary tab to open the public NSX Manager
interface.

Knowledge Check: VMware Cloud on AWS Service Roles


Match the VMware Cloud on AWS service roles to the correct permissions.



Connecting a Cloud SDDC to Active Directory
To link your cloud SDDC to your on-premises vCenter Server, you must add an identity source to the SDDC
LDAP domain:

• You configure Hybrid Linked Mode from your SDDC by adding your on-premises LDAP domain as an
identity source for the SDDC vCenter Server.

• You can configure Hybrid Linked Mode from your SDDC if your on-premises LDAP service is provided by
a native Active Directory (Integrated Windows Authentication) domain or an OpenLDAP directory
service.

Adding an identity source is optional when configuring Hybrid Linked Mode from the Cloud Gateway
Appliance, but doing so allows you to configure users or groups with a lesser level of access
than the Cloud Administrator.

For more information about using OpenLDAP as the identity source, access VMware knowledge base article
2064977.

How do you add an identity source to the SDDC LDAP Domain?

Step 1: Log in to the vSphere Client for your SDDC


To add an identity source, you must be logged in as cloudadmin@vmc.local or another member of the
Cloud Administrators group.



Step 2: Configure single sign-on to add an identity provider
Follow the steps in Add or Edit a vCenter Single Sign-On Identity Source in the VMware vSphere
Documentation.

Step 3: Configure the identity source settings.


For more information about configuration parameters, access Active Directory over LDAP and
OpenLDAP Server Identity Source Settings in the VMware vSphere product documentation.

Enterprise Federation

VMware Cloud services users with a federated domain use their corporate credentials to log in to the
VMware Cloud services console across organizations.

Setting up enterprise federation for your corporate domain is a self-service process that involves multiple
steps, users, and roles.

How Does it Work?

1. As an organization owner, you start the self-service federation workflow on behalf of your organization
and invite an Enterprise Administrator to complete the setup.

2. The Enterprise Administrator must determine the type of federation setup that is most suitable for
your enterprise.

If your corporate domain is not federated, your access to VMware Cloud Services is authenticated
through your VMware ID account.

If you are new to VMware Cloud services, visit my.vmware.com to create a VMware ID.

Self-Service Federation Setup

To start the self-service federation setup, you must first receive an email invitation with a link
to the special federation organization.

The organization owner who sent you the invitation has identified you as an Enterprise
Administrator and granted you the permissions to initiate and configure the federation setup
for your enterprise domain.

Setting Up Enterprise Federation

For more information about enterprise federation, access the VMware Cloud services product
documentation.

Which Federation Setup Best Suits Your Enterprise?



You initiate the self-service federation workflow by selecting the type of federation setup
that is most suitable for your enterprise.

Dynamic (Connectorless) Authentication Setup

When enterprise federation for your enterprise domain is set up to use your third-party identity provider,
users accessing VMware Cloud services from the federated domain are redirected to the login screen of the
identity provider for your enterprise.

Users authenticate directly with your identity provider through SAML JIT dynamic provisioning.

Step 1: User identification is through email, [email protected], or UPN.

Step 2: User is redirected to identity provider (IdP) login page.

Step 3: User authenticates with corporate credentials.

Step 4: Dynamic user and group provisioning occurs.
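The SAML JIT flow above can be illustrated with a short sketch of how a service provider might pull the user identifier and group names out of a (greatly simplified) SAML assertion for dynamic provisioning. The assertion fragment, attribute name, and values below are hypothetical examples, not a real VMware payload.

```python
# Illustrative sketch only: reading user and group attributes from a
# simplified SAML assertion, as a service provider would during JIT
# provisioning. The assertion fragment below is hypothetical.
import xml.etree.ElementTree as ET

SAML_NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

ASSERTION = """<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Subject>
    <saml:NameID>jdoe@example.com</saml:NameID>
  </saml:Subject>
  <saml:AttributeStatement>
    <saml:Attribute Name="groups">
      <saml:AttributeValue>vmc-admins</saml:AttributeValue>
      <saml:AttributeValue>dr-operators</saml:AttributeValue>
    </saml:Attribute>
  </saml:AttributeStatement>
</saml:Assertion>"""


def extract_jit_identity(assertion_xml: str) -> dict:
    """Return the user identifier and group names carried in the assertion."""
    root = ET.fromstring(assertion_xml)
    name_id = root.find(".//saml:Subject/saml:NameID", SAML_NS).text
    groups = [
        value.text
        for attr in root.findall(".//saml:Attribute", SAML_NS)
        if attr.get("Name") == "groups"
        for value in attr.findall("saml:AttributeValue", SAML_NS)
    ]
    return {"user": name_id, "groups": groups}


print(extract_jit_identity(ASSERTION))
```

In dynamic provisioning, attributes like these are what allow the cloud service to create the user and map group memberships on first login, without any pre-synced directory.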

Connector-Based Authentication Setup

An on-premises instance of VMware Workspace ONE Access connector syncs users and groups from your
Active Directory to a dedicated instance of a Workspace ONE Access tenant.

Only synced groups and users can log in to VMware Cloud services with their corporate credentials.

User authentication can be set up to use either a SAML 2.0 based identity provider or the Workspace ONE
Access connector authentication methods.



Step 1: User identification with Email, [email protected], or UPN.

Step 2: User redirected to identity Provider login page.

Step 3: User authenticates with corporate credentials.

Step A: Users and groups sync at configured intervals.

Comparing Federation Setup Options

Dynamic (Connectorless) authentication and setup:
• Authentication method: SAML 2.0 Identity Provider
• User and group provisioning: SAML JIT dynamic user and group provisioning

Connector-Based authentication and setup:
• Authentication method: SAML 2.0 Identity Provider or Workspace ONE Access connector
authentication methods
• User and group provisioning: manual pre-provisioning by syncing users and groups from
the customer's Active Directory



Maintenance and Support for VMware Cloud on AWS
Friday, February 3, 2023 8:13 AM

Learner Objectives
After completing this lesson, you should be able to:

• Recognize management and operational responsibilities in VMware Cloud on AWS


• Recognize update and upgrade responsibilities of various components for VMware Cloud
on AWS
• Describe elements of the service management process
• Describe how to check for and subscribe to maintenance and outage notifications
• Identify support resources that are available for VMware Cloud on AWS

This lesson focuses on maintenance and support as it relates to VMware Cloud on AWS.

For more information about maintenance and support for other hyperscaler partners, you can
access the following resources:

Azure VMware Solution


See the Azure VMware Solution private cloud and cluster concepts section in the Azure
VMware Solution documentation. Scroll down to the section on host maintenance and lifecycle
management.

Google Cloud VMware Engine


See the Private cloud maintenance and updates section in the Google Cloud VMware Engine
documentation.

VMware Cloud on AWS is sold and managed as a service from VMware. In this way, VMware
has ownership of many management and operational responsibilities.

Cloud Responsibilities in VMware Cloud on AWS



Customer Responsibilities

Customers are responsible for managing and operating their workloads.

Workloads consist of the following components:

• Virtual Machines
• VMware Tools
• Guest operating systems
• Third-party products
• Applications
VMware Responsibilities

VMware has management and operational responsibilities for the


infrastructure, the VMware Cloud console, and the cloud SDDC software
components:

• VMware vCenter Server


• VMware NSX
• VMware vSAN
• VMware ESXi hosts
Amazon Responsibilities

Amazon is mainly responsible for the hardware used in the cloud SDDC.

Responsibilities of Each Party

Customer Responsibilities

Administrators must perform workload-related tasks, which include:



• Deploying and configuring workload VMs
• Updating workload VMs
• Encrypting and securing VMs
• Backing up and restoring workload VMs
• Updating and patching guest operating systems, applications, and third-party
products
• Updating and patching VMware Tools installed on workload VMs
• Monitoring workload VMs and applications
• Migrating VMs within an SDDC, or between the on-premises and cloud
SDDCs

Administrators are responsible for infrastructure tasks, which include:

• Keeping VM templates and content library files updated


• Maintaining network connectivity
• Monitoring SDDC resource utilization
• Monitoring user access and third-party product activity
• Monitoring resource utilization and charges of integrated AWS and third-
party products
• Managing firewall rules for management and compute network gateways
• Configuring AWS Virtual Private Cloud

VMware Responsibilities

VMware performs tasks for managing, maintaining, and monitoring the SDDC,
which include:

• Managing the hosts and the SDDC


• Managing SDDC networking
• Updating SDDC components
• Configuring SDDC components
• Providing VMware Tools patches through vSphere
• Backing up and restoring VMware appliances and infrastructure (vCenter
Server, NSX components, etc.)
• Patching VMware Cloud on AWS components (vSphere, ESXi drivers, vSAN,
NSX, SDDC console)
• Monitoring host and infrastructure appliances (SDDC components)

All infrastructure maintenance is performed by VMware and AWS. VMware


monitors for component failure in the SDDC and takes direct action when
necessary or engages AWS support or engineering.

Amazon Responsibilities

Amazon performs hardware maintenance tasks, such as:

• Patching BIOS and firmware


• Refreshing hardware
• Replacing failed hardware components

Administrators patch VMware Tools, but VMware provides an up-to-date repository for the latest
VMware Tools version as part of regular VMware component patching.

VMware provides problem, event, and incident management services, as well as capacity
management services, and SDDC upgrades for the VMware Cloud on AWS platform.

Service Management Process

Incident Management

VMware services include incident detection, severity classification, recording, escalation, and
return to service for the VMware Cloud on AWS platform.

For problem, event, and incident management for your workload virtual machines that are
deployed in the cloud SDDC, you can follow the same processes for existing virtual machines.

Capacity Management

VMware performs capacity management for cloud SDDCs:

• Capacity data is accessible from the SDDC console.
• Any additional computing resources required for the VMware Cloud on AWS environment
can be purchased on-demand.
• When Elastic DRS is enabled and configured to automatically scale ESXi hosts, the
customer must be consulted to determine the effect on service cost.
• Capacity reporting should function as it does for existing vSphere services. You can view
CPU, memory, and storage data in the SDDC's Summary tab.

SDDC Upgrades with VMware Cloud on AWS

VMware regularly performs updates on SDDCs. These updates ensure continuous delivery of
new features and bug fixes, and maintain consistent software versions across all SDDCs.

Updates to the SDDC software are mandatory and must be done in a timely manner. VMware
works to provide proper advance notification.

When an SDDC update is upcoming, VMware sends a notification email to inform you of the
upcoming update. Typically, the email is sent 7 days before a regular update and 1 to 2 days
before an emergency update.

Delays to upgrades can result in your SDDC running an unsupported software version.

SDDC updates are performed in three phases:

Phase 1: Control Plane Updates

These updates are made to vCenter Server and VMware NSX® Edge. A backup of the
management appliances is taken during this phase.

You cannot access VMware NSX® Manager and vCenter Server during this phase. Your
workloads and other resources function as usual, subject to a few constraints.

Phase 2: Host Updates

These updates are for ESXi hosts and host networking software
in the SDDC. An additional host is temporarily added to your SDDC to provide enough
capacity for the update. You are not billed for these host additions.

VMware vSphere vMotion and VMware vSphere Distributed Resource Scheduler activities
facilitate the update. During this time, your workloads and other resources function as
usual, subject to a few constraints.

Phase 3: Updates to NSX Appliances

These updates are for VMware NSX appliances. A backup of the management appliances
is taken during this phase. You do not have access to NSX Manager and vCenter Server
during this phase.

Your workloads and other resources function as usual subject to a few constraints.

You receive notifications by email when each phase of the update process starts, completes, is
rescheduled, or is canceled. You do not need to respond to these notifications.

To ensure receipt of these notifications, you add the following address to your email safe sender
list: [email protected].

Knowledge Check: Service Management Process


Which task does VMware perform as part of the incident management service? (Select one
option)

Sends notification 7 days before a regular update and 1 to 2 days before an emergency
update.
Performs incident detection, severity classification, recording, escalation, and return to
service for the VMware Cloud on AWS platform.
Manages problems, events, and incidents for workload virtual machines
Manages capacity of the cloud SDDCs

To ensure proper, continuous operation of your workloads, review the latest news and status
of your VMware Cloud on AWS environment.

Staying Up-to-Date with VMware Cloud on AWS


You should regularly view the release notes and release announcements for VMware Cloud on
AWS.

VMware Cloud service offerings release new versions and updates at an increased pace
compared to other VMware products.

VMware Cloud on AWS Release Notes


For up-to-date information on VMware Cloud on AWS releases, access the product
documentation.

Monitoring the Health of VMware Cloud Services


VMware hosts an independent website at https://status.vmware-services.io that shows the
current status of VMware Cloud services, which include the following products:

• VMware Cloud on AWS


• VMware Cloud on Dell
• VMware vRealize® Network Insight Cloud
• VMware vRealize® Log Insight Cloud
• VMware HCX
• VMware Tanzu

The website also posts scheduled maintenance windows and a history of past incidents.
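As a rough illustration of how such a status page could be consumed programmatically, the sketch below summarizes a JSON payload of the kind many public status sites expose. The payload shape, field names, and component statuses here are assumptions for illustration; the course does not document a status API for this site.

```python
# Hypothetical sketch: summarizing a status-page JSON payload. The payload
# shape and component names below are illustrative assumptions, not a
# documented VMware API.
def summarize_status(payload: dict) -> list[str]:
    """Return human-readable lines for any component not fully operational."""
    issues = []
    for component in payload.get("components", []):
        if component.get("status") != "operational":
            issues.append(f"{component['name']}: {component['status']}")
    return issues


sample = {
    "components": [
        {"name": "VMware Cloud on AWS", "status": "operational"},
        {"name": "VMware HCX", "status": "degraded_performance"},
    ]
}
print(summarize_status(sample))  # ['VMware HCX: degraded_performance']
```

A small filter like this is how teams typically turn a status feed into an alert: only components that deviate from "operational" are surfaced.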

VMware periodically sends notifications to keep you informed about upcoming maintenance
and other events that impact the VMware Cloud on AWS service.
The notification channels that are available include email, VMware Cloud console, and the
Activity Log UI.


Email Notifications

If subscribed, you can receive email status notifications from [email protected].

Scheduled Maintenance

Scheduled maintenance windows are communicated in advance, and follow-up emails are
sent before, during, and on completion of maintenance.

Administrators receive emails with specific details about certain disruptive patches.

Availability Issue



If an issue is detected with VMware Cloud on AWS, an incident report is distributed and
regular status updates are sent, during the incident and when it is resolved.

Activity Log

The Activity Log pane in the VMware Cloud console contains a history of significant actions in
your organization, such as SDDC deployments and removals, as well as notifications sent by
VMware for events such as SDDC upgrades and maintenance.



Knowledge Check: Service Status

Where can you find the current status of the VMware HCX and VMware Tanzu cloud services?
(Select one option)

Activity log in the VMware Cloud console


VMware Cloud Services status web page
Release notes
From an email notification

Scheduled Maintenance
In the VMware Cloud console, you can view scheduled maintenance windows on the
Maintenance tab of the SDDC.

Before contacting VMware Support, you can use a variety of resources in the VMware Cloud
Console to find information that might help resolve your issues.



Self-Support Resources

You can access self-support resources through the VMware Cloud console.

VMware Cloud Console Help Topics


Click the Question icon at the top of the window to open the Support panel.

From the Support panel, you can search VMware content to find answers to questions,
chat with VMware Support, and create a support request.

Connectivity Validator
The Connectivity Validator provides network connectivity tests to verify that network
access is available for Hybrid Linked Mode.



The Connectivity Validator tests Hybrid Linked Mode connectivity when Hybrid Linked
Mode is configured from the VMware Cloud on AWS vCenter Server Appliance, rather
than with the vCenter Cloud Gateway Appliance, which is installed on-premises.

The Connectivity Validator can also be used to check VMware Site Recovery
connectivity. You can check that all required network connectivity from your VMware
Cloud on AWS SDDC to the remote site is in place.

You can use the tests both during the initial setup of Site Recovery, and to troubleshoot
connectivity issues during day-to-day management.

The status of each test is displayed as it runs. When a test has finished, you can expand
the test to see details of the test results.

Helpful Links
You can access resources for self-support outside the VMware Cloud on AWS SDDC
console:

• VMware documentation for VMware Cloud Services, VMware Cloud on AWS,
vSphere, vSAN, and NSX at https://docs.vmware.com/
• Frequently Asked Questions at https://cloud.vmware.com/vmc-aws/faq
• VMware Cloud on AWS Dev Center at https://code.vmware.com/vmc-aws
• VMware Cloud on AWS downloads at
https://my.vmware.com/web/vmware/details?downloadGroup=VMC_GA&productId=726

VMware Technical Support

VMware technical support provides several features at no additional cost when you use
VMware Cloud on AWS:

• Global, 24-hour support, 365 days a year, for severity 1 issues


• Unlimited support requests
• Online access to documentation and other technical resources
• SaaS updates
• Chat or callback phone support
• Six technical contacts per contract

Severity Categories and Target Response Times

Severity Target Response Time


1 (Critical) 30 minutes or less (24 x 7)
2 (Major) 4 business hours*
3 (Minor) 8 business hours
4 (Cosmetic) 12 business hours
* Business hours vary by location

Cloud Services Production Support


VMware technical support is available under the Cloud Services Production Support offering.

Engaging with VMware Support

You can contact VMware technical support for VMware Cloud on AWS directly through the
VMware Cloud console.

VMware engages AWS on your behalf for VMware Cloud on AWS support issues as necessary.

Chat with Support


Click the Question icon to open the Support panel. From there, you can chat with VMware
Support.



SDDC Support Information
When contacting VMware Support, you must provide information that can help the
support team resolve your issue.

In the VMware Cloud console, click the Support tab to view the information that VMware
Support needs from you.



Knowledge Check: Support Resources
You are having connectivity issues with your Hybrid Linked Mode environment.

What is the first thing you should do? (Select one option)

Use the Support panel to chat with support.


Open a support ticket from the VMware Cloud console.
Run the Connectivity Validator to verify that network access is available.
Go to FAQs at https://cloud.vmware.com/vmc-aws/faq



Troubleshooting Cloud SDDC Operations
Friday, February 3, 2023 9:17 AM

Learner Objectives
After completing this lesson you should be able to:

• Identify best practices for avoiding common issues with cloud SDDC operations
• Troubleshoot common problems that can occur in cloud SDDC operations

This lesson focuses on preventive actions and common issues in several areas of SDDC
operations.

Unless otherwise specified, the troubleshooting best practices described in this lesson apply
to all clouds.

Throughout the lesson, you'll explore answers to the following questions:

• What steps can you take to avoid problems?


• If problems do occur, how do you diagnose and resolve them?

Identifying Connectivity Issues

During the deployment and operations of a cloud SDDC, you might encounter connectivity
problems related to the management subnet, VPN, Hybrid Linked Mode, or a private line.

Management Subnet

When you deploy a cloud SDDC, you must consider IP address management just as with a
traditional data center.

During the SDDC deployment process, you specify an IP range for the management network of
the SDDC. The choice of address space is important because it cannot be changed without
making the SDDC inoperable and having to rebuild it.

Avoiding Management Subnet Issues

Suppose you are deploying a VMware Cloud on AWS SDDC and must assign a management
subnet. Which configuration do you think will cause problems? (Select one option)

You select a range of IP addresses that do not overlap with the AWS subnet that you
connect to.
To deploy a single-host SDDC, you specify a management network address that overlaps
with the IP address range 192.168.1.0/24.
You select a /23 CIDR block because the SDDC will not increase in capacity.

Guidelines for Management Subnet

When setting up the address space for the management subnet for a VMware Cloud on AWS
SDDC, use the following job aid.

Connectivity Between Sites

For connectivity over the public Internet between the data center and a cloud data center,
you can create IPsec VPN tunnels, which support the most common encryption methods.

For VMware Cloud on AWS SDDCs, you can also use the AWS Direct Connect service to establish
a private virtual interface from the on-premises network directly to the SDDC. This service
provides a dedicated private, high-bandwidth network connection.

VPN Connectivity

Sometimes, issues with VPN connectivity can arise, but you can take steps to help prevent
such problems.

For example, when you set up a connection to a VMware Cloud on AWS SDDC, verify
that the following conditions are met:
• The IP addresses between the VMware Cloud on AWS SDDC and the on-premises SDDC do
not conflict.
• The VMware Cloud on AWS SDDC can communicate with the on-premises DNS server, as
necessary.

VPN Connectivity: L2 VPN

Configuration mismatches are often the cause of VPN connectivity issues.

Do you think that an L2 VPN connection problem might be caused by a configuration error?

Your L2 VPN tunnel status is down. What action do you take?


You verify that UDP port 500, UDP port 4500, and IP protocol 50 are configured as open in
your on-premises firewall.

VPN Connectivity: IPsec VPN

Several possible configuration problems can cause an IPsec VPN tunnel to fail.

Status: Down
Channel Status: Negotiating
Tunnel Status: Down (error message: IPsec negotiation not started)
Possible problems:
• The remote peer is not configured
• The pre-shared key (PSK) is not correct
• A firewall is blocking IPsec traffic

Status: Down
Channel Status: Down (error message: No proposal chosen)
Tunnel Status: Down (error message: IKE SA down)
Possible problem: The Internet Key Exchange (IKE) version or phase 1 cryptography is not
correct.

Status: In Progress
Channel Status: Up
Tunnel Status: Down (error message: No proposal chosen)
Possible problem: The phase 2 cryptography is not correct.

VPN Connectivity Best Practices

To help prevent issues with VPN connectivity, verify that the following elements are configured
correctly:

• Remote peer
• Pre-shared key
• Firewall rules
• IKE version or phase 1 cryptography
• Phase 2 cryptography

If you make changes to a VPN, disable and re-enable the tunnel to ensure that configuration
changes are applied.
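The status combinations above can be encoded as a simple lookup so that a monitoring script maps an observed channel/tunnel state pair to its likely causes. The mapping mirrors the table in this lesson; the function and dictionary names are illustrative only, not part of any VMware tooling.

```python
# Sketch: mapping observed IPsec channel/tunnel states to likely causes,
# per the status table above. Names here are illustrative.

LIKELY_CAUSES = {
    ("negotiating", "ipsec negotiation not started"): [
        "remote peer is not configured",
        "pre-shared key (PSK) is not correct",
        "firewall is blocking IPsec traffic",
    ],
    ("no proposal chosen", "ike sa down"): [
        "IKE version or phase 1 cryptography is not correct",
    ],
    ("up", "no proposal chosen"): [
        "phase 2 cryptography is not correct",
    ],
}

def diagnose(channel_status: str, tunnel_error: str) -> list:
    """Return likely causes for an observed channel/tunnel state pair."""
    key = (channel_status.strip().lower(), tunnel_error.strip().lower())
    return LIKELY_CAUSES.get(key, ["unrecognized state; check vendor documentation"])
```

For example, `diagnose("Up", "No proposal chosen")` points directly at phase 2 cryptography.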

Can You Resolve a VPN Connectivity Problem?

You connect your VMware Cloud on AWS SDDC to your on-premises SDDC over a policy-based
VPN.

You can ping IP addresses in the on-premises network from VMs in the SDDC network, but
workload VMs cannot reach your on-premises DNS servers.

Solution: VPN Connectivity Problem

The following solution fixes the policy-based VPN connection problem.

1. If you can configure your on-premises connection over a route-based VPN or Direct
Connect, you can skip the rest of these steps.

2. If you must use a policy-based VPN as your on-premises connection, configure the SDDC
side of the VPN tunnel to allow DNS requests over the VPN.

3. Configure the on-premises side of the tunnel to connect to local_gateway_ip/32, in
addition to the local gateway IP address. This step allows DNS requests to be routed over
the VPN.

If your SDDC includes both a policy-based VPN and another connection such as a route-based
VPN, DX, or VTGW, connectivity over the policy-based VPN fails if any of those other
connections advertises the default route (0.0.0.0/0) to the SDDC.

Connectivity with AWS Direct Connect

AWS Direct Connect provides direct connectivity into an AWS region through private leased

lines. With Direct Connect, you define virtual interfaces (VIF) to connect to public or private
resources within that region.

When configuring a private virtual interface (VIF) for AWS Direct Connect, verify that
you perform the following steps:

• Select Another AWS Account as the virtual interface owner.

This AWS account should be the AWS account ID of your VMware Cloud on AWS SDDC.

• Enter the router peer IP addresses manually.

Leaving the router peer IP addresses to be auto-generated causes the connection to fail.

Troubleshooting AWS Direct Connect


For help with diagnosing and fixing issues with an AWS Direct Connect connection, access the
AWS documentation.

Knowledge Check: VPN Connectivity Issues


Which best practices should you follow to help prevent VPN connectivity issues? (Select three
options)

Create a firewall rule to block IPsec traffic.


Verify that the IKE version or phase 1 cryptography are correct.
Disable the remote peer.
Disable and re-enable the VPN tunnel.
Verify that the IP addresses between the cloud SDDC and the on-premises SDDC do not
conflict.

Hybrid Linked Mode with VMware Cloud on AWS


In a VMware Cloud on AWS SDDC, you use Hybrid Linked Mode to link a cloud vCenter Server
instance with an on-premises vCenter Single Sign-On domain.

You can configure Hybrid Linked Mode in the VMware Cloud console or with the Cloud Gateway
Appliance. Careful configuration can help you to avoid failures later on.

VMware Cloud Console Configuration


When you configure Hybrid Linked Mode between sites, taking the following steps can
help you to avoid connectivity failures:

• Change the management gateway to resolve to an on-premises DNS server.


• Change the cloud vCenter Server instance FQDN to resolve to a private IP address.

• Verify that the round-trip latency between sites does not exceed 100 milliseconds.
• Verify that the on-premises vCenter Server instance uses version 6.5 patch d or later.
• Retain solution users that are created automatically for Hybrid Linked Mode.
• Use the Connectivity Validator to verify connectivity before you configure Hybrid
Linked Mode from the cloud vCenter Server Appliance.
• Verify that the appropriate group is assigned Cloud Administrator permissions and is
correctly configured.
• Verify that the NTP time skew between sites is no more than 10 minutes.
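The latency and NTP limits in this checklist lend themselves to a small pre-flight check. The sketch below only applies the thresholds (at most 100 ms round-trip latency, at most 10 minutes of clock skew); measuring the actual values is left to tools such as ping and your NTP client, and the function name is hypothetical.

```python
# Sketch: evaluating measured values against the Hybrid Linked Mode
# limits listed above. This applies thresholds only; it does not
# perform the measurements itself.

MAX_RTT_MS = 100        # maximum tolerable round-trip latency, in milliseconds
MAX_NTP_SKEW_S = 600    # 10 minutes of allowed clock skew, in seconds

def hlm_preflight(rtt_ms: float, ntp_skew_s: float) -> list:
    """Return a list of threshold violations for a Hybrid Linked Mode setup."""
    violations = []
    if rtt_ms > MAX_RTT_MS:
        violations.append(f"round-trip latency {rtt_ms} ms exceeds {MAX_RTT_MS} ms")
    if abs(ntp_skew_s) > MAX_NTP_SKEW_S:
        violations.append(f"NTP skew {ntp_skew_s} s exceeds {MAX_NTP_SKEW_S} s")
    return violations
```

An empty list means both measured values are within the documented limits.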

Cloud Gateway Appliance Configuration


You can also verify network connectivity when configuring Hybrid Linked Mode from the
Cloud Gateway Appliance.

Proper network configuration results in these commands succeeding in an SSH session of
the Cloud Gateway Appliance.

Using Connectivity Validator in VMware Cloud on AWS

In the VMware Cloud console, you can access the Connectivity Validator to verify that all
required network connectivity is in place for Hybrid Linked Mode.

You provide the required inputs to the Connectivity Validator, and it runs the required
tests.

The example shows that a traceroute test to an FQDN fails. In the test results, hops to the
destination are listed without accompanying IP addresses.

Example of traceroute test failure

Using Connectivity Validator


For more information about Connectivity Validator and how to resolve issues such as a failed
traceroute test, access the Managing the VMware Cloud on AWS Data Center documentation.

Knowledge Check: Connectivity Issues with Hybrid Linked Mode


You use the Connectivity Validator to verify the connectivity for Hybrid Linked Mode.

After running the tool, you get a connection failure and must verify your configuration.

Which configuration can cause a Hybrid Linked Mode connection failure? (Select one option)

The management gateway resolves to an on-premises DNS server.


The on-premises vCenter Server instance is configured with version 6.7.
The cloud vCenter Server instance FQDN resolves to a public IP address.

Troubleshooting Security Issues

You can help to maintain the safety and security of your cloud SDDC
management infrastructure by configuring firewall rules and security roles
correctly.

Troubleshooting Problems with Firewall Rules

By default, the management gateway blocks traffic to all management network destinations
from all sources. You add management gateway firewall rules to allow secure traffic from
trusted sources.

An inbound firewall rule is configured with a source address of Any. Do you think that this rule
will cause problems? (Select the best option)



Yes, the rule exposes the cloud vCenter Server instance to security risks.
No, this rule increases traffic flow into vCenter Server and makes it accessible to
customers.
Yes, this rule means that the cloud vCenter Server instance cannot connect to AWS
resources.

How do you fix the firewall rule with Any as the source address? (Select the best action)

Modify the firewall rule so that the source address is a specific management group that
requires vCenter Server access.
Change the firewall rule to outbound and the source to a defined management inventory
group.
Modify one of the predefined management inventory groups in the SDDC infrastructure
and add it as the source.

Firewall Rules
For more information about creating secure firewall rules, access VMware knowledge base
article 84154.
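One way to catch the risky configuration discussed above is to audit rule definitions for an Any source before they are applied. The rule structure below is a simplified illustration for the sketch, not the NSX policy API schema.

```python
# Sketch: flagging inbound management gateway rules whose source is Any,
# the risky pattern described above. The dict shape is illustrative.

def find_risky_rules(rules: list) -> list:
    """Return the names of inbound rules whose source is Any."""
    return [
        r["name"]
        for r in rules
        if r.get("direction") == "inbound" and str(r.get("source", "")).lower() == "any"
    ]
```

Running this over an exported rule list highlights which rules should be narrowed to a specific management inventory group.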

Troubleshooting Access Issues


Roles define the access privileges of organization members. Sometimes, issues related to user
role can arise. Consider the following example.

In a VMware Cloud on AWS SDDC, a user with the NSX Cloud Auditor role tries to change the
NSX configuration but is unable to perform this task.

What do you think is causing the problem?


Cause: Role Privileges

The NSX Cloud Auditor role can view NSX service settings and events but cannot make any
changes to the service.

How do you best troubleshoot the issue? (Select one option)

Assign the user the Administrator role so that the user has full cloud administrator rights.
Verify that the user should have permission to change the NSX configuration.
Delete the role from the user's privileges because the user does not have access rights.

The NSX CloudAdmin role can perform all tasks related to the deployment and administration
of the NSX service. Before assigning this role to the user, you must verify that the user
requires the role to perform their job.

The way that permissions work in the VMware Cloud on AWS SDDC is similar to how permissions
work in vCenter Server.

It is a best practice to limit access to the VMware Cloud services and SDDC console. Only
users who are responsible for the entire SDDC or NSX components (VPN, firewall, and so on)
should have access.

Knowledge Check: Security Troubleshooting

Which practice can help you to avoid problems in your cloud SDDC? (Select one option)



Configure firewalls to allow the correct access.
Assign the highest permissions for most users.
Configure Any as the source for inbound connections.
Send a ping request to the cloud SDDC

You should assign permissions with the least privileges necessary for the user to perform
their role.

Configuring Any as the source for inbound connections poses security risks.

Sending a ping helps you to test whether a connection is working, but does not help you to
avoid security problems.

VM Troubleshooting

VM operations in cloud SDDCs are similar to on-premises VM operations.


You can troubleshoot workload issues in a cloud SDDC using methods similar to
those you use for on-premises workloads.

Common problems can include:

• Snapshot failures
• Power-on operation failures
• Performance problems
• Connection problems
• VMware Tools installation failure

Troubleshooting VM Performance Issues

Performance issues can have different causes: CPU constraints, memory overcommitment,
storage latency, or network latency.

To help prevent performance issues, follow these general best practices for cloud SDDC hosts:

• Plan your deployment by allocating enough resources for all the virtual machines you run,
as well as those needed by SDDC itself.

• Allocate to each VM only as much virtual hardware as that VM requires. Provisioning a


virtual machine with more resources than it requires can reduce the performance of that
virtual machine as well as other VMs sharing the same host.

• Deactivate unused or unnecessary virtual hardware devices because they can impact
performance.

• If you plan to move a VM from the cloud SDDC to an on-premises ESXi host (in a hybrid
cloud scenario, for example), verify that the VM's virtual hardware version is
supported by the ESXi hosts on which you intend to use it.

To view what is supported by each version of ESXi, access VMware knowledge base
article 2007240.

Resolving VM Performance Issues

Two VMs in your SDDC must run without contention because they contain key business
applications. You adjust the resources of the VMs to guarantee a fixed amount of memory.

The VMs are powered off during the adjustments. When you try to power on the VMs again,
one VM fails to start.

How do you troubleshoot this issue? (Select one option)

Modify the resource settings


Migrate the VMs
Move the applications to different VMs

Troubleshooting VMs
For help with troubleshooting workload problems, access vSphere Virtual Machine
Administration documentation.

For help with specific issues, search the VMware knowledge base archive.

VM Configurations for VMware Cloud on AWS

Because it is a service, VMware Cloud on AWS imposes some constraints on VM operations,


especially those that require you to have physical access to the host hardware or root access to
the host operating system.

Some VM configurations that you use in your on-premises data center are not supported in the
SDDC. Others are supported with limitations.



Which configuration do you think is not supported in the VMware Cloud on AWS SDDC? (Select
one option)

Creating a VM that includes a virtual hardware device that requires a physical change to
the host.
Creating an encrypted VM from an unencrypted VM or VM template.
Deploying a VM from a template in a content library and customizing the guest OS after
the deployment task is complete.

To determine which configuration limitations apply to SDDCs, access the VMware Cloud on
AWS documentation.

Avoiding VM Migration Failures

You can migrate your workload VMs from your on-premises hosts to those in your cloud SDDC
and back again, as well as across hosts in your SDDC.

The method that you choose is based on your tolerance for workload VM downtime, the
number of VMs that you must move, and your on-premises networking configuration.

The following lessons describe different migration methods:

Lesson: Hybrid Linked Mode

Lesson: Migration Solutions

Lesson: VMware HCX

To help avoid problems with migrations of your workloads and applications, follow these
general guidelines:

• Configure Hybrid Linked Mode and verify that the vSphere Client is accessible.
• Establish and configure network connectivity.
• Confirm the compatibility of the VM for migration.
• Verify that the available bandwidth is sufficient for the desired migration type.
• Verify that port 8000 is open for vSphere vMotion migrations.
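A quick way to verify the last item in this list is a TCP reachability probe against port 8000 on the destination host. The sketch below is a generic port check, not a VMware tool; the host you point it at is a placeholder you must supply.

```python
# Sketch: probing whether a TCP port (e.g., 8000 for vSphere vMotion)
# accepts connections from this machine. A False result can mean a
# firewall block, a routing problem, or no listener on the port.
import socket

def tcp_port_open(host: str, port: int, timeout_s: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        return False
```

For example, `tcp_port_open("esxi-host.example.com", 8000)` (a placeholder host name) returning False suggests the vMotion port is not reachable from the source network.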



After configuring Hybrid Linked Mode, you migrate a VM from on-premises to VMware Cloud
on AWS. The migration fails with the following error: "Permission to perform this operation is
denied."

How do you troubleshoot this issue? (Select one option)

Verify whether the VM DRS or vSphere HA overrides is preventing the hybrid migration
with vSphere vMotion.
Use VMware HCX to retry the migration of the VM to the VMware Cloud on AWS SDDC.
Check whether you need to select one of the higher-performing elastic DRS policies for
the VM.

Accessing Checklists for VMware Cloud on AWS Migrations


The following checklists can help you prepare for and prevent issues with migrations in a
VMware Cloud on AWS SDDC.

Hybrid Migration with VMware HCX Checklist

Hybrid Migration with vMotion Checklist

Hybrid Cloud Migration Checklist

Troubleshooting Storage Problems


To troubleshoot storage problems, you must understand how storage works in your cloud
SDDCs.

For example, in VMware Cloud on AWS, two vSAN datastores are provided for
each SDDC cluster:

• WorkloadDatastore, managed by the Cloud Administrator


• vsanDatastore, managed by VMware

These datastores are logical entities that share a common capacity pool. Each
datastore reports the total available free space in the cluster as its capacity.
Capacity consumed in either datastore updates the Free value for both.
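Because both datastores draw on one capacity pool, a toy model makes the reporting behavior concrete. The class and numbers below are illustrative only, not an SDDC API.

```python
# Sketch: both SDDC datastores report Free from one shared pool, so
# consuming space in either reduces the Free value reported by both.

class SharedCapacityPool:
    """Toy model of the shared vSAN capacity pool behind both datastores."""

    def __init__(self, total_gb: float):
        self.total_gb = total_gb
        self.used_gb = 0.0

    def consume(self, gb: float) -> None:
        self.used_gb += gb

    @property
    def free_gb(self) -> float:
        # Both vsanDatastore and WorkloadDatastore report this value as Free.
        return self.total_gb - self.used_gb

pool = SharedCapacityPool(total_gb=10_000)
pool.consume(500)  # e.g., a workload VM grows on WorkloadDatastore
# Both datastores now report 9,500 GB free.
```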

vsanDatastore
The vsanDatastore provides storage for the management VMs in your SDDC, such as
vCenter Server, NSX controllers, and so on.

The management and troubleshooting of the vSAN storage in your SDDC is handled by
VMware.

For this reason, you can't edit the vSAN cluster settings or monitor the vSAN cluster. You
also do not have permission to browse this datastore, upload files to it, or delete files
from it.

WorkloadDatastore
WorkloadDatastore provides storage for your workload VMs, templates, ISO images, and
any other files you choose to upload to your SDDC.

You have full permission to browse this datastore, create folders, upload files, delete files,
and perform all other operations needed to consume this storage.

Avoiding Storage Performance Degradation

Intermittent or unexpected storage performance degradation can occur. The following best
practices help to maintain storage performance and to avoid issues:

• Make changes to a production VM, especially if it runs a database, during a maintenance


window or outside of production hours.

• Use thin-provisioned disks for all workload VM VMDKs. These disks do not cause
performance impacts and help to maintain storage utilization efficiency.

• Use a RAID-1 storage policy to provide the best storage throughput at the lowest latency.

• Disable the CPU Hot Plug feature.

• Do not run any production VMs with a snapshot chain in place. Consolidate or delete all
snapshots when available.

Troubleshooting Storage Performance


For more best practices related to storage performance, access VMware knowledge base article
84472.

Storage Policy Configuration Issues

The datastores in your SDDC are assigned the default VM storage policy. You can define
additional storage policies and assign them to either datastore.


vSAN storage policies define storage requirements for your virtual machines. These policies
guarantee the required level of service for your VMs because they determine how storage is
allocated to the VM.

Sometimes, configuration issues occur. Consider the following problem, its symptoms, and
resolution.

Problem: You cannot change the storage policy applied to any data, except for a VM.
Data refers to objects in the datastore other than VMs, such as ISO image files, custom
folders, scripts, and so on.


Symptoms: The symptoms of the problem are as follows:

• You cannot manually change the data's storage policy.


• You cannot check which storage policy is applied to the data.
• You cannot delete the ESXi host from the SDDC because the RAID
configuration and FTT of the data is not compatible with the
number of minimum hosts required after you delete the host.

Resolution: The VMC Workload Storage Policy - Cluster-1 storage policy is applied
by default to the data in WorkloadDatastore when it is created.

You can change a VM's storage policy in the vSphere Client, but the
data's storage policy cannot be changed this way.

You also cannot check the current storage policy contents that are
applied to the data if the contents of the original default storage policy
are changed after you create the data.

If you need to change the storage policy that is applied to the data, you
must remove the data and recreate it in WorkloadDatastore with a new
storage policy.

Storage Policy Guidelines

To help avoid issues with storage policies, follow best practices. For VMware Cloud on AWS
SDDCs, consider the following best practices:

Avoid using a VM storage policy with FTT=0 (no Data Redundancy).


This policy can cause data loss if a host failure occurs or if the VM becomes unresponsive.



Check your clusters after storage reconfiguration and remove the additional host, if
necessary.
When you make a change to a cluster that triggers a managed storage policy
reconfiguration, the reconfiguration temporarily requires additional storage. If the cluster
is close to 79% storage capacity, a host might be added to the cluster.

For clusters with six or more hosts, you cannot remove a host if the cluster storage
utilization is greater than 40% of the total storage capacity.

For all other types of clusters, do not remove a host if the cluster storage utilization is
greater than 40% of the total storage capacity.

Do not edit the managed storage policies that VMware Cloud on AWS creates for your
clusters.
If you rename a policy, it is no longer managed by VMware Cloud on AWS. If you edit the
settings of the managed storage policy, your changes are overwritten at the next storage
policy reconfiguration.

When you deploy a VM from a template, select Datastore Default for the VM storage policy.
The VM is deployed with the current cluster managed storage policy.

Recognize the effects of virtual machine storage policies on consumption and the SLA.
VM storage policies affect the consumption of storage capacity in the vSAN cluster and
whether they meet the requirements defined in the Service Level Agreement for VMware
Cloud on AWS (the SLA).

When migrating VMs between clusters in the same SDDC, change the VM storage policy to
the destination cluster's managed policy.
The default option of Keep existing VM Storage Policies is only appropriate if using a
custom policy; otherwise, select the policy assigned to the destination cluster.
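The capacity effect of the FTT guidance above is easy to quantify for RAID-1 mirroring: an object tolerating n failures stores n + 1 copies of its data, which is why FTT=0 keeps only a single copy and risks data loss. The sketch below is simplified; it ignores witness components and metadata overhead.

```python
# Sketch: raw vSAN capacity consumed by a RAID-1 (mirroring) object for
# a given Failures To Tolerate (FTT) value. Simplified illustration:
# real consumption also includes witness components and overhead.

def raid1_raw_capacity_gb(vm_size_gb: float, ftt: int) -> float:
    """Raw capacity consumed by a RAID-1 object with the given FTT."""
    if ftt < 0:
        raise ValueError("FTT cannot be negative")
    return vm_size_gb * (ftt + 1)
```

So a 100 GB disk at FTT=1 consumes 200 GB of raw capacity, while FTT=0 consumes 100 GB but leaves no redundant copy.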

Knowledge Check: Troubleshooting Storage Issues


Which action might cause issues with your storage policies on your VMware Cloud on AWS
SDDC? (Select one option)

Rename managed storage policy in VMware Cloud on AWS.


Configure a RAID-1 storage policy.
For a VM that is deployed from a template, select Datastore Default for the VM Storage
Policy.
Disable the hot plug feature.



Module Summary
Friday, February 3, 2023 10:40 AM

Review the key concepts covered in this module:


• VMware Cloud accounts are based on an Organization, which corresponds to a group or
line of business subscribed to VMware Cloud services.

• The Organization Owner role can invite additional users (who become organization
members) to the Organization. Service roles define the privileges of organization
members when they access the VMware Cloud services that the organization uses.

• You can select dynamic (connectorless) or connector-based setups for an enterprise


federation.

With the dynamic setup, users authenticate directly with the identity provider through
SAML JIT dynamic provisioning.

With the connector-based setup, user authentication can be set up to use either a SAML
2.0 based identity provider or the Workspace ONE Access connector authentication
methods.

• In a VMware Cloud on AWS environment, the customer is responsible for managing


workload VMs, VMware is responsible for managing the SDDC components, and Amazon
is responsible for managing the hardware.

• The service management process includes incident management, capacity management


and SDDC updates for the cloud SDDCs.

• For monitoring and maintaining your cloud SDDC, you can stay informed through release
notes, the VMware Cloud Services Status page, email notifications for scheduled
maintenance, and the Support panel, Activity Log, and Connectivity Validator in the
VMware Cloud console.

• To avoid problems in your cloud SDDC environment, verify that components such as
security, networking, and storage are configured correctly.

• Performance issues can be caused by CPU constraints, memory overcommitment, storage


latency, or network latency. Ensure that VMs have sufficient resources for your workloads
to perform acceptably.

• You can troubleshoot workload issues in a cloud SDDC using methods similar to those you
use for on-premises workloads.



VMC on AWS: Configuring the Management Subnet Address
Friday, February 3, 2023 9:28 AM

The following guidelines can help you to determine an appropriate address space.

✓ Choose a range of IP addresses that does not overlap with the AWS subnet that you
connect to.
✓ You should provision an IP range that is unique in your organization.
If you plan to connect your SDDC to an on-premises data center, the IP address range
of the subnet must be unique within your enterprise network infrastructure. It cannot
overlap the IP address range of any of your on-premises networks.
✓ If you deploy a single-host SDDC, the IP address range 192.168.1.0/24 is reserved for
the default compute network of the SDDC. If you specify a management network
address range that overlaps this address, single-host SDDC creation fails.
✓ If you deploy a multi-host SDDC, no compute gateway logical network is created
during deployment, so you must create one after the SDDC is deployed.
✓ CIDR blocks of size 16, 20, or 23 are supported, but they must be in one of the private
address space blocks that are defined by RFC 1918 (10.0.0.0/8, 172.16.0.0/12, or
192.168.0.0/16).
✓ The range must be large enough to facilitate all hosts that you deploy on day 1 but
must also account for future growth.
✓ The management CIDR block cannot be changed after the SDDC is deployed, so a /23
block is appropriate only for SDDCs that will not require much growth in capacity.
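Several items in this checklist can be automated before deployment. The sketch below validates a proposed management CIDR against the size, RFC 1918, single-host, and overlap rules above; it is an illustration of the guidelines, not an official VMware validator.

```python
# Sketch: pre-flight checks for a management subnet CIDR, following the
# checklist above. Treat this as an illustration of the rules only.
import ipaddress

RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
SINGLE_HOST_RESERVED = ipaddress.ip_network("192.168.1.0/24")

def check_management_cidr(cidr: str, on_prem_cidrs=(), single_host=False) -> list:
    """Return a list of problems found with the proposed management CIDR."""
    net = ipaddress.ip_network(cidr)
    problems = []
    if net.prefixlen not in (16, 20, 23):
        problems.append("CIDR block must be /16, /20, or /23")
    if not any(net.subnet_of(block) for block in RFC1918):
        problems.append("CIDR must fall within an RFC 1918 private range")
    if single_host and net.overlaps(SINGLE_HOST_RESERVED):
        problems.append("overlaps 192.168.1.0/24, reserved for single-host SDDC compute")
    for other in on_prem_cidrs:
        if net.overlaps(ipaddress.ip_network(other)):
            problems.append(f"overlaps on-premises network {other}")
    return problems
```

For example, `check_management_cidr("10.2.0.0/16", on_prem_cidrs=["10.10.0.0/16"])` returns an empty list, indicating no conflicts with the rules modeled here.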

For a complete list of IPv4 addresses reserved by VMware Cloud on AWS, access Reserved
Network Addresses in the VMware Cloud on AWS Networking and Security guide at
https://docs.vmware.com/en/VMware-Cloud-on-AWS/index.html.

Additional Resources Page 570
