VCP-MC Full Course
Exam Delivery
This is a proctored exam delivered through Pearson VUE. For more information, visit the Pearson VUE website.
Certification Information
For details and a complete list of requirements and recommendations for attainment, please reference the VMware Education
Services – Certification website.
Products/Technologies
This exam validates breadth of knowledge of VMware Cloud across different hyperscalers including VMware Cloud on AWS, VMware Cloud on Dell EMC, VMware Cloud on AWS Outposts, Google Cloud VMware Engine, and Azure VMware Solution.
Exam Sections
VMware exam blueprint sections are now standardized to the seven sections below, some of which may NOT be included in the
final exam blueprint depending on the exam objectives.
Section 1 – Architecture and Technologies
Section 2 – VMware Products and Solutions
Section 3 – Planning and Designing
Section 4 – Installing, Configuring, and Setup
Section 5 – Performance-tuning and Optimization
Section 6 – Troubleshooting and Repairing
Section 7 – Administrative and Operational Tasks
If a section does not have testable objectives in this version of the exam, it will be noted below, accordingly. The objective
numbering may be referenced in your score report at the end of your testing event for further preparation should a retake of
the exam be necessary.
Objective 4.1 – Deploy and configure VMware HCX appliances
Objective 4.2 – Configure connectivity between clouds (VPN, AWS Direct Connect, VMware Managed Transit Gateway)
Objective 4.3 – Set up Hybrid Linked Mode using the VMware Cloud Gateway Appliance
Objective 4.4 – Deploy and configure cloud business continuity and disaster recovery (BC/DR) solutions
Objective 4.5 – Assess the requirements for cloud onboarding within a VMware single- or multi-cloud environment
Objective 4.6 – Assess the required account access and privileges for an SDDC deployment within a VMware single- or
multi-cloud environment
Objective 4.7 – Understand the concept of different types of segments (compute and management)
Objective 4.8 – Understand hyperscaler networking considerations
Objective 4.9 – Understand the concept of dynamic SDDC scale-out
Objective 4.10 – Complete cluster operations
Recommended Training
Designing, Configuring, and Managing the VMware Cloud
References*
In addition to the recommended course modules listed above, item writers used the following references for information when
writing exam questions. It is recommended that you study the reference content as you prepare to take the exam, in addition
to any recommended training.
Link Topic
https://blogs.vmware.com/ Introduction to the VMware Cloud Operating Model
https://www.vmware.com/topics/glossary.html Kubernetes Namespace
Additional reference topics:
• VMware Tanzu Service Mesh Product Documentation
• VMware Tanzu Product Documentation
• vSphere with Tanzu Configuration and Management Documentation
• VMware Cloud on AWS Product Documentation
• VMware Cloud Disaster Recovery Product Documentation
• VMware Site Recovery Product Documentation
• VMware Cloud on AWS Operating Principles
• VMware NSX-T Data Center Product Documentation
*The content in this exam covers breadth of knowledge of VMware Cloud across
different hyperscalers including VMware Cloud on AWS, VMware Cloud on Dell
EMC, VMware Cloud on AWS Outposts, Google Cloud VMware Engine, and Azure
VMware Solution.
Sample Questions
Sample questions presented here are examples of the types of questions candidates may encounter and should not be used as
a resource for exam preparation.
Sample Question 1
When creating a hybrid cloud solution using Google Cloud VMware Engine, which inter-connectivity option would a cloud
administrator choose to provide the most secure layer 3 connection with the greatest possible throughput for application
connectivity?
A. Partner Interconnect
B. Partner VPN
C. Dedicated Interconnect
D. Cloud VPN
Answer: C
Sample Question 2
An administrator will be implementing a third-party, cloud-based backup solution to provide backup services to virtual machines running in VMware Cloud on AWS.
Where should the administrator deploy the backup solution?
A. Deploy the solution inside the VMware Cloud on AWS environment to take advantage of the existing capacity of the
service.
B. Deploy the solution into the customer-owned virtual private cloud (VPC) that is connected to the SDDC. This allows use
of a high-speed, low latency ENI connection for data backup and recovery.
C. Deploy the solution on-premises. This affords the greatest degree of recoverability in the event that VMware Cloud on
AWS becomes unavailable.
D. Deploy the solution into a virtual private cloud (VPC) located in another AWS availability zone (AZ). This provides
increased resiliency in the event of a localized AZ failure that may impact the VMware Cloud on AWS environment.
Answer: B
Sample Question 3
A cloud administrator is managing an Azure VMware Solution environment. Currently, the environment consists of a single
cluster. Due to increased demand, the cloud administrator is tasked with adding an additional six hosts to the environment.
The newly provisioned hosts must be able to provide access to existing VMware NSX networks.
What should the administrator do to achieve this goal?
B. Provision a new private cloud.
C. Create a new Azure VMware Solution tenant.
D. Contact VMware support to request a cluster expansion.
Answer: A
Sample Question 4
Which three strategies are key when transitioning to a cloud operating model? (Choose three.)
A. Continuity
B. Endpoint
C. Application
D. Financial
E. Migration
F. Cloud
Answers: C, D, F
Sample Question 5
A cloud administrator needs to deploy a three-tiered application that must comply with the following security policies:
• The web layer should be accessible only from testing networks
• The application layer should be accessible only by the web services
• The database layer should be accessible only by the application services
Based on the given scenario, which three VMware NSX components would be necessary at a minimum to provide a compliant
architecture for the application to be deployed on VMware Cloud? (Choose three.)
A. Tier-1 gateway
B. Segments
C. VPN services
D. Endpoint protection rules
E. Security group
F. Distributed firewall rules
Answers: A, B, F
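For illustration only (the sample questions themselves are not a study resource), a compliant design like the one above combines groups, segments behind a Tier-1 gateway, and distributed firewall rules. The sketch below shows roughly how one such rule could be expressed against the NSX Policy API; the manager address, credentials, tag value, and object names are hypothetical, and the endpoint paths follow the general NSX-T Policy API pattern, so verify them against the NSX documentation.

    # Hedged sketch: group the web tier by tag, then allow only web -> app
    # traffic with a distributed firewall rule. Names and values are invented.
    import requests

    NSX = "https://nsx.example.com"   # hypothetical NSX Manager
    AUTH = ("admin", "password")      # use real credentials and TLS verification in practice

    # Group the web tier VMs by an assumed "web" tag
    requests.put(
        f"{NSX}/policy/api/v1/infra/domains/default/groups/web-group",
        auth=AUTH,
        json={"expression": [{
            "resource_type": "Condition", "member_type": "VirtualMachine",
            "key": "Tag", "operator": "EQUALS", "value": "|web"}]},
    )

    # Distributed firewall policy: only web services may reach the app tier
    requests.put(
        f"{NSX}/policy/api/v1/infra/domains/default/security-policies/app-policy",
        auth=AUTH,
        json={"category": "Application", "rules": [{
            "id": "web-to-app", "action": "ALLOW",
            "source_groups": ["/infra/domains/default/groups/web-group"],
            "destination_groups": ["/infra/domains/default/groups/app-group"],
            "services": ["ANY"], "scope": ["ANY"]}]},
    )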
Sample Question 6
What are the two authentication options supported when using Hybrid Linked mode with the vCenter Cloud Gateway
Appliance? (Choose two.)
A. Security Assertion Markup Language (SAML)
B. Open Authorization (OAuth) 2.0
C. Integrated Windows Authentication (IWA)
D. Windows NT LAN Manager (NTLM)
E. Lightweight Directory Access Protocol (LDAP)
Answers: C, E
Sample Question 7
A cloud administrator is tasked with ensuring a dedicated, secure, high-speed, and low-latency connection exists between an on-premises environment and Azure VMware Solution.
Which connectivity option should the administrator use?
A. ExpressRoute gateway
B. Dedicated Microsoft Enterprise Edge
C. Global Reach
D. ExpressRoute
Answer: D
Sample Question 8
A cloud administrator would like to limit bandwidth from a particular virtual machine that is connected to a network segment
using a Quality of Service (QoS) segment profile.
Answer: A
Sample Question 9
A cloud administrator is experiencing an issue with VMware vMotion failing between two of its hosts.
Which VMware solution could the administrator use to gather further information about the failure?
A. VMware vRealize Lifecycle Manager
Answer: D
Sample Question 10
A company is using AWS Direct Connect to access VMware Cloud on AWS. The autonomous system number (ASN) configured
on AWS and the software-defined data center (SDDC) is 65225. The connection is unsuccessful.
Answer: B
Certification Alignment
Carla Gavalakis
Christopher Lewis
Chris Vallee
Cosmin Trif
Emad Younis
Frances Wong
James Potts
Jamie Maillart
Kim Delgado
Mateusz Konopnicki
Paul Irwin
Ranjna Aggarwal
Scott Bowe
Tiago Baeta Neves
VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com © 2022 VMware, Inc. All rights reserved. The product or
workshop materials is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at
http://www.vmware.com/download/patents.html . VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks
and names mentioned herein may be trademarks of their respective companies.
VMware warrants that it will perform these workshop services in a reasonable manner using generally accepted industry standards and practices. THE EXPRESS WARRANTY SET FORTH IS IN LIEU OF ALL OTHER WARRANTIES, EXPRESS, IMPLIED, STATUTORY OR OTHERWISE INCLUDING IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE WITH RESPECT TO THE SERVICES AND DELIVERABLES PROVIDED BY VMWARE, OR AS TO THE RESULTS WHICH MAY BE OBTAINED THEREFROM. VMWARE WILL NOT BE LIABLE FOR ANY THIRD-PARTY SERVICES OR PRODUCTS IDENTIFIED OR REFERRED TO
CUSTOMER. All materials provided in this workshop are copyrighted by VMware ("Workshop Materials"). VMware grants the customer of this workshop a license to use and make reasonable copies of any
Workshop Materials strictly for the purpose of facilitating such company's internal understanding, utilization and operation of its licensed VMware product(s). Except as set forth expressly in the sentence above,
there is no transfer of any intellectual property rights or any other license granted under the terms of this workshop. If you are located in the United States, the VMware contracting entity for the service will be
VMware, Inc., and if outside of the United States, the VMware contracting entity will be VMware International Limited.
REV. 12/2022
Introduction
VMware Cloud helps organizations to navigate the shift to the cloud. It consists of infrastructure and
management components, and their supporting technologies.
In this course, you learn about how VMware Cloud solutions help to manage and support multi-cloud
environments.
Understanding Multi-Cloud
Learner Objectives
Why Multi-Cloud?
This lesson starts with a solution: VMware Cloud™.
It explores the solution's origins by addressing several questions: Why cloud? What is multi-cloud? What is the
VMware multi-cloud vision? What are the benefits of cloud? What are the challenges it presents? And finally,
how does VMware Cloud address those challenges?
"Multi-cloud is all about the power of and. In fact, we hear that clearly from our customers. Most of them
today are by far multi-cloud already. Seventy-five percent have two or more clouds, and almost half have
three or more clouds. When we talk to customers and ask them, what are your goals with multi-cloud? The
biggest thing that came back was all about getting access to best-of-breed cloud services.
When we think about cloud services, you think about things like databases, data warehouses, messaging,
streaming, analytics. And every public cloud has at least one of those, if not many of those. And so what
customers do is that they compare and contrast. They look across to see which of those cloud services best
meets their application needs.
So how can you get all the benefits of these best-of-breed cloud services without the complexity?
Well, it's all about having the right architecture. It's about standardizing in certain places so that you can have choice and flexibility in other places.
In particular, standardizing at the DevSecOps level and at the infrastructure level is crucial here to this
architecture.
VMware Tanzu is laser-focused on the DevSecOps space, while VMware Cloud is focused on the infrastructure space, delivering a set of consistent cloud infrastructure services available across all clouds.
So, first question, what is VMware Cloud? You hear us talk about it, but what really is it at its heart? Well, at its
heart, it's fundamentally two different components. The first of which is this infrastructure building block that
we call VMware Cloud Foundation. The beauty of VMware Cloud Foundation is that you can place it anywhere
you want, in a public cloud, at a colo, in your data center, at the edge. It's very flexible and yet also powerful
and consistent. It gives you that consistent infrastructure layer.
Second is our multi-cloud management layer with vRealize. The goal here is to pull together all these different infrastructure pieces, infrastructure building blocks, and give you a single view into them all, a single way of managing them.
So what are the benefits of VMware Cloud? First of all, it's the fastest path to cloud: consistent infrastructure from on-premises to the cloud, leveraging tools like HCX, with unprecedented levels of speed.
We're also focused on application modernization, getting you those best-of-breed cloud services that your app
teams are looking for in the easiest and yet most secure way possible.
Speaking of security, we're really focused on that across clouds: governance, operations, standardizing these things, because that is so important to your business. And finally, it's got great economic value, dramatically reducing total cost of ownership, much lower than you see in other places, and great ROI as well."
Cloud Evolution
How did we get to a point where the cloud is the model for doing business?
Cloud and container technologies have changed the way that IT and businesses operate. Consider the
following interlinking shifts in the evolution of the cloud.
The cloud delivers a range of advanced services, including integrated Kubernetes, artificial intelligence, machine learning, Internet of Things, and more.
Kubernetes and containers accelerated the adoption of a microservices architecture and of DevOps
methodologies, making Linux and open source more mainstream and strategic to the enterprise.
Public cloud providers now bring their stacks on premises, and data center and server vendors offer cloudlike
managed services with OpEx financial models.
To unlock the potential of cloud and applications, businesses must transform in three main ways, each of
which supports digital business and application modernization at different levels:
• Embrace microservices and APIs and improve the developer experience, speeding innovation and the
delivery of business services.
• Accelerate the path to new and modernized applications and deliver critical business services to
production quickly, securely, and continuously.
• Redefine IT with cloud capabilities, modern architectures, and a cloud operating model that spans from
the data center to any cloud and edge for all applications.
As organizations develop and deliver modern and traditional applications in the cloud, they redefine the
nature of IT.
Organizations that use cloud capabilities and technologies to transform their business can reap several
benefits:
• Increased agility: Agility encompasses scalability, customizability, and access to the cloud service from anywhere and on any device. In addition, you can have the same level of security regardless of the scale of your business or services.
• Cost reductions: You save on capital expenses (hardware) and can use a flexible payment structure
where you pay only for the resources that you use.
• Increased innovation and developer productivity: With cloud computing, organizations do not need to
worry about managing IT infrastructures and can focus on application development and other priorities
using the most up-to-date technology.
Given what you know about cloud, which examples illustrate its benefits? (Select all options that apply)
Cloud Challenges
Even with all its benefits, the cloud also presents challenges as IT struggles to balance the needs of new
and existing applications.
Pressure to provide reliability, availability, security, and governance can be compounded by a growing portfolio of application architectures, infrastructure and cloud vendors, tools, and processes.
You can address this challenge by applying network virtualization technology to public clouds:
• Connect public clouds and services securely.
• Deploy secure network architectures that span multiple clouds with VMware NSX®.
The solution is a common operating environment that gives you visibility and tools to view and manage
resources, workloads, and operations across clouds.
In this way, you can avoid cloud vendor lock-in, monitor operations, and manage to specific service-level
agreements (SLAs).
Although organizations might not outright choose a multi-cloud architecture, they often find themselves using
a multi-cloud approach to increase innovation.
A multi-cloud environment is where apps are built and deployed quickly, and new capabilities are continuously
added.
You can deploy apps anywhere across a distributed cloud and move apps freely to the best cloud.
Do you recall the solution that this lesson started with, namely, VMware Cloud? It was developed to help
address cloud challenges.
VMware Cloud delivers multi-cloud services that span the data center, edge, and any cloud. It has two main
parts:
• Infrastructure: You redefine IT with cloud capabilities, modern architectures, and consistent, global
operations in the data center, cloud, and edge.
• Management: The goal for the management layer is to bring together the architecture components in a unified way.
Which statement best describes the VMware multi-cloud vision? (Select one option)
Provide cloud services through the infrastructure using existing tools and outsource the management of the
infrastructure
Deliver infrastructure across all clouds, the data center, and the edge, and manage and secure the infrastructure with a common set of tools.
Modernize applications in the cloud of your choice, using the cloud-native services of that cloud provider,
including their management services.
Learner Objectives
Everyone is impacted by the digital transformation. It is happening in every company and in every industry.
The goal of digital transformation is to become a digital business. The transformation process changes how
companies deliver services and products to their consumers.
Banks were among the first businesses to go on the digital transformation journey. Most banks now have a web or mobile application that customers can use to access their bank accounts.
Customers choose a bank based on their experience of interacting with it. So banks must continuously innovate and transform themselves to keep up with changing customer demands.
What drives a business toward digital transformation? Why are companies embarking on this journey?
• Improve Customer Experience - Nowadays, people expect to consume services digitally, through a web
app or mobile app.
Source: VMware Market Insights Study, March 2021, based on research of 1,200 organizations globally
Why are companies transforming into digital businesses? (Select three options)
IT Challenges
Traditionally, services come from the data center and are managed by customers themselves. But more
services are now being consumed from multiple public clouds and service providers.
A multi-cloud strategy presents challenges for IT, which is still responsible for delivering IT services.
How can IT control multiple services from multiple cloud providers? And how can IT manage multiple
cloud environments so that consumers get the cloud resources and services that they require?
Enterprise IT organizations should be able to provide IT services from different cloud providers and still remain in
control.
Multiple clouds bring complexity. And complexity can hinder organizations from getting the benefits of multi-cloud.
• Decreased Agility - Due to bureaucratic processes and complexity, agility might decrease, and IT might be perceived as too slow, as the bottleneck. It can take 7.4 years to refactor and migrate 100 apps to the cloud. (Hybrid Cloud Trends Survey, The Enterprise Strategy Group, March 2019)
• Higher Costs - Costs typically increase due to limited visibility and IT not being able to transform towards
cloud. Along the way, efficiency is reduced. It can cost 1 million USD to move 1,000 workloads from one
cloud to another. (VMware white paper: Six Ways Application Requirements Drive Your Infrastructure
Decisions, Sept 2019)
• Higher Risk - Risk is introduced if a siloed approach is taken when consuming multiple clouds. The
environment becomes complex and disjointed, resulting in an increase in risk for the business. 90% of
organizations reported skills shortages in cloud-related tasks. (2019 Trends in Cloud Transformation, 451
Research, Nov 2018)
The cloud operating model provides a framework for adopting the cloud.
A key part of implementing the cloud operating model is aligning your applications, clouds, and investments
with your business strategy.
• Application Strategy
○ Your application strategy defines what applications you need to support the business.
○ The strategy might involve building new, modern applications. Or it might involve using the existing applications, and possibly modernizing them as well.
• Cloud Strategy
○ Your cloud strategy should define the cloud resources needed to align to the business outcomes
and application requirements.
• Financial Strategy
○ Your financial strategy is used to manage your investments.
○ You must ensure that your costs are under control, while providing the right resources at the right
cost for your applications.
○ Cost governance and compliance become important in a multi-cloud environment.
A business consumes applications, and applications can run on any cloud. In a multi-cloud environment,
existing and new applications must run on their clouds of choice.
As you move toward a multi-cloud environment, you must determine your application strategy.
• Retain
○ Retain applications that already exist and ensure that they are optimally supported based on key
requirements such as security, privacy, performance, and data gravity.
• Rehost
○ Rapidly relocate (migrate) applications, without recoding or refactoring, to any cloud, based on the
organization's goals. You match the needs of each app to the best cloud environment.
• Replatform
○ Leverage Kubernetes for new and existing applications to improve application deployment speed
while evolving to a more flexible, more reliable architecture. For example, you can move
applications from virtual machines in your data center to containers in a public cloud.
• Refactor
○ Use modern application design, microservices, and cloud-native principles to refactor (restructure)
existing applications or build new applications.
• Retire or Replace
○ Decommission an existing application, or replace the application with software as a service (SaaS).
The cloud operating model brings together application and cloud strategies so that both new and existing
applications are managed and operated in a multi-cloud environment.
To move to a cloud operating model, you must transform the people, processes, and technology.
The cloud operating model encompasses the people, processes, and technology that are required for
implementing the business, application, and cloud strategies.
• People
○ Moving from the traditional IT model to a cloud operating model affects the people in the
organization.
○ The IT team must rethink how to manage and provide services to the business.
○ People must align to the organization's objectives and business-level KPIs (key performance indicators).
• Process
○ The IT team must rethink their current processes and adopt a model for cloud operations and
management that focuses on delivering services.
○ IT must automate the delivery of services, from development to consumption.
• Technology
○ The IT team must have the right technology from an infrastructure and cloud management
perspective, to align with application and business requirements.
With the VMware cloud operating model, you can build a cloud on VMware technology, or unify an existing
multi-cloud infrastructure.
Embracing Multi-Cloud
How can you create a cloud operating model for managing multiple clouds?
Video Transcript
The way we do this is by looking at what is required for both existing and modern applications to run on this hybrid stack. Your applications could be built on top of VMs, containers, Kubernetes, or native public cloud services.
These applications could be hosted inside an edge, private, public, or hybrid cloud infrastructure. And no matter where your workloads are hosted, or what your workloads look like, you must have a consistent management experience across all of your infrastructure.
Customers who consume public cloud services from AWS, Azure, Google, and other public cloud providers have their compute and hardware hosted in the providers' data centers. This experience is something that cloud consumers have become accustomed to: your infrastructure is easily accessible when needed.
In order to get that same experience within the data center we need to standardize and modernize the
infrastructure across your multi-cloud landscape.
VMware has a long history of doing this through our software-defined data center, or SDDC, approach to data center design. Now, VMware also delivers SDDC as a service.
While it is powerful, SDDC as a service won’t provide a full picture across all of your compute and
infrastructure needs. For that, we need to have a cloud management platform that will transform your
infrastructure from disparate clouds into a true VMware cloud.
First, let's look at the SDDC itself. A variation of VMware Cloud Foundation™ is used to automatically deploy instances of VMware vSphere®, VMware vSAN™, and VMware NSX®. These three components are the foundation of the SDDC stack.
Instead of using VMware Cloud Foundation on your own infrastructure, the SDDC can also be consumed as a service by leveraging VMware services, such as VMware Cloud™ on AWS or VMware Cloud™ on Dell. You can also spin up an SDDC as a service using one of our partners, such as Azure VMware Solution, Google Cloud VMware Engine, or one of the more than 4,500 VMware certified partners worldwide.
In addition to this more traditional stack, we utilize VMware vRealize® Cloud Management™ together with our VMware Tanzu® application modernization technology to add cloud capabilities to our platform.
All of these components work together to create VMware Cloud. VMware Cloud gives you a unified cloud
experience with the public clouds. It allows you to provide services to the consumers of the cloud. Your
developers and lines of business will see increased flexibility and scalability as a result.
With so many different options for compute both on premises and in multiple clouds, you must have the
ability to manage them all together. vRealize and VMware Tanzu have multi-cloud capabilities. And we enrich
that with CloudHealth® for cost management and cloud security.
All of these products and solutions snap together like a puzzle, working in tandem to make the VMware cloud
operating model.
Which technologies support the main components of a multi-cloud environment? Match each component to
the solutions that support it.
The VMware cloud operating model provides benefits to both the consumer (application developers and lines
of business) and the cloud service provider.
True or False: The cloud operating model is a framework for implementing a cloud strategy.
True
False
Which benefits does the VMware cloud operating model provide? (Select two options)
Helps organizations to transition to cloud operations through the use of public clouds only
Defines the people, processes, and technology that are required to deliver a cloud strategy
Adopts a siloed approach from traditional IT organizations by managing each cloud separately
Delivers services, intelligent operations, and governance for multi-cloud management
Learner Objectives
Consider how cross-cloud solutions can provide control and consistency as you develop and
manage applications across different clouds.
• App Platform
○ With a flexible application platform, developers can build and deploy applications in
different types of clouds in a consistent way.
• Cloud Infrastructure
○ The cloud infrastructure helps you to operate and run enterprise applications.
• Cloud Management
○ Cloud management tools help you monitor and manage the performance and cost of
applications across different clouds.
• Security & Networking
○ Security and networking span entire multi-cloud operations so that you can connect and secure applications across clouds.
Which examples demonstrate the benefits of cross-cloud solutions? (Select two options)
Learner Objectives:
After completing this lesson, you should be able to:
A retail organization is pursuing a multi-cloud strategy. It wants to use different cloud providers
for data storage and for its online store applications.
Cloud management solutions can help organizations manage their cloud infrastructure.
vRealize® Cloud Management™ provides a core set of products and services for managing cloud environments.
vRealize Suite
vRealize Cloud Management includes the VMware vRealize® Suite products, which provide a
comprehensive management stack for IT services on vSphere and other hypervisors, physical
infrastructure, and multiple public clouds.
• vRealize Operations
○ Automates IT management, providing full-stack visibility across various
infrastructures.
• vRealize Automation
○ Automates multiple clouds with secure, self-service provisioning.
• vRealize Network Insight
○ Helps you build a secure network infrastructure across cloud environments.
• vRealize Log Insight Cloud
○ This service collects and analyzes logs from cloud, virtual, and physical infrastructures in a central location. You can view log data and actionable insights and query data quickly, using several features.
Features:
• VMware-Authored Insights
○ Insights provide useful information for troubleshooting and auditing events in your
multi-cloud environment. Insights provide information about what is happening in
your SDDC, including critical information about VMware ESXi.
• Query Facility
○ A query facility supports troubleshooting for novice and experienced administrators.
• Alerts
○ You can access built-in alerts or create custom alerts.
• Notifications
○ You can get notifications in different ways, including Syslog forwarding and email.
• Authentication Support
○ Support for local or federated authentication is available, depending on your
security environment.
Events and logs can be forwarded between on-premises vRealize Log Insight, Syslog, and other
logging tools, and vRealize Log Insight Cloud.
A cloud proxy receives log and event information from monitored sources and sends this
information to vRealize Log Insight Cloud, where it can be queried and analyzed.
vRealize Log Insight Cloud includes the cloud proxy as an OVA file, which you can
download and install as a VM. The cloud proxy is also available as an Amazon Machine
Image (AMI) for deployment in Amazon Elastic Compute Cloud (Amazon EC2).
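As a minimal, hedged illustration of the collection path, an application VM can forward its logs toward the cloud proxy over standard syslog; the proxy address and UDP port below are assumptions, so substitute the ingestion settings configured on your proxy.

    # Minimal sketch: send application logs toward a vRealize Log Insight Cloud
    # proxy over syslog. The proxy IP and port are placeholders.
    import logging
    import logging.handlers

    handler = logging.handlers.SysLogHandler(address=("10.0.0.50", 514))  # hypothetical proxy
    logger = logging.getLogger("order-service")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)

    logger.info("order-service started, region=us-west-2")  # example event forwarded for analysis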
You can sign up for the vRealize Log Insight Cloud service and set up your organization, billing, and related account details.
Which description most accurately explains the function of vRealize Log Insight Cloud in a multi-cloud environment? (Select one option)
Collects and analyzes log data from your entire environment so you can view and resolve
problems from one place.
Collects log data on premises and exports the information to the cloud environments for cloud
providers to interpret.
Analyzes data that is collected by cloud providers and uses the data to determine financial costs
of each provider.
Administrators, developers, and business users can access a common service catalog to request
IT services, including infrastructure, applications, and desktops.
vRealize Automation Cloud includes the following services: VMware Cloud Assembly™, VMware Service Broker™, and VMware Code Stream™.
SaltStack Config
VMware vRealize® Automation SaltStack® Config is tightly integrated with vRealize Automation and is one of its key product features. SaltStack Config is available for both the on-premises and cloud versions of vRealize Automation.
SaltStack Config provisions, configures, and deploys software to your virtual machines at any scale.
You can also use SaltStack Config to define and enforce optimal, compliant software states
across your entire environment.
If you have an active vRealize Automation Cloud license, you are eligible for a SaltStack Config cloud integration. You can request a SaltStack Config cloud integration using the VMware Cloud Services console.
You can purchase an enhanced license that includes SaltStack SecOps, which includes two
libraries of content: Compliance and Vulnerability.
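To make the idea of a desired software state concrete, here is a minimal sketch of a Salt state written with Salt's Python (#!py) renderer; the package name is an assumption, and a production state would come from your own compliance baseline.

    #!py
    # Hypothetical Salt state: declares that nginx must be installed and running.
    # SaltStack Config can apply and re-enforce states like this across minions.
    def run():
        return {
            "install_nginx": {
                "pkg.installed": [{"name": "nginx"}],  # assumed package
            },
            "run_nginx": {
                "service.running": [
                    {"name": "nginx"},
                    {"require": [{"pkg": "install_nginx"}]},  # order after install
                ],
            },
        }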
Cloud administrators and cloud template developers use Cloud Assembly in different ways.
Cloud administrators:
• Set up projects, add groups or users, and enable access to resources in cloud accounts or regions.
• Configure governance in the form of projects to control accessibility of resources and deployment location.
Cloud template developers:
• Deploy templates to the supporting cloud vendors based on project membership.
• You create CI/CD pipelines that automate your entire DevOps life cycle, using existing
development tools.
• Code Stream runs your software through each stage of the pipeline until it is ready to be
released.
• You can integrate the pipeline with one or more DevOps tools, which provide data for the
pipeline to run.
For example, you can publish your Code Stream pipeline to Service Broker as a catalog item that
can be requested and deployed on cloud accounts or regions.
Or you can deploy a Cloud Assembly cloud template and use the parameter values the cloud
template exposes.
With the workflow engine, you can create and run workflows that automate orchestration processes.
The workflow engine runs on objects of different technologies that vRealize Orchestrator accesses through a series of plug-ins:
• vRealize Orchestrator provides a standard set of plug-ins, including a plug-in for vCenter
Server, with which you can orchestrate tasks in the different environments that the plug-
ins expose.
• vRealize Orchestrator also presents an open architecture for plugging in external third-
party applications to the orchestration platform.
Your organization is using vRealize Automation Cloud to support its multi-cloud strategy. Which
automation services do you use for each multi-cloud task?
To connect vRealize Automation Cloud to a cloud SDDC, you must define resource
infrastructure and cloud template settings for deployment to the cloud SDDC environment.
For example, to connect VMware Cloud on AWS and vRealize Automation Cloud, you perform
the following general procedure.
Note: The procedure requires that the VMware Cloud on AWS SDDC is configured with basic
networking and other parameters.
In this procedure, you configure a VMC on AWS workflow in vRealize Automation Cloud:
• Deploy a new cloud proxy to your VMC on AWS SDDC in vCenter.
• Create a VMC on AWS cloud account that accesses the proxy.
• Configure infrastructure that supports cloud template deployment to resources in
your VMC on AWS environment.
In this procedure, you add an isolated network for your VMC on AWS deployment in vRealize Automation Cloud.
You can configure network isolation for a VMC on AWS deployment by using either of the following procedures:
In this step, you drag a network machine component onto a vRealize Automation Cloud cloud template canvas and add settings for an isolated network deployment to your target VMC on AWS environment.
For more information about prerequisites and procedures, access Tutorial: Configure VMware
Cloud on AWS for vRealize Automation Cloud in the VMware vRealize Automation Cloud
documentation.
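The same cloud account step can also be driven through the vRealize Automation Cloud IaaS REST API. The sketch below is a hedged approximation: the endpoint path, payload fields, and token handling are assumptions based on the API's general shape, so confirm them against the tutorial and API reference above.

    # Sketch only: create a VMC on AWS cloud account in vRealize Automation Cloud.
    import requests

    API = "https://api.mgmt.cloud.vmware.com"
    TOKEN = "<access token>"  # obtained from a CSP API token in practice

    resp = requests.post(
        f"{API}/iaas/api/cloud-accounts-vmc",          # assumed endpoint
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={                                         # illustrative field names
            "name": "my-vmc-account",
            "apiKey": "<VMC API token>",
            "sddcId": "<target SDDC>",
            "username": "cloudadmin@vmc.local",
            "password": "<password>",
            "dcId": "<cloud proxy ID>",                # the proxy deployed in step 1
        },
    )
    resp.raise_for_status()
    print(resp.json()["id"])  # new cloud account ID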
You want to connect vRealize Automation Cloud and your VMware Cloud on AWS SDDC. Which
steps do you take? (Select two options)
With its unified operations platform, vRealize Operations Cloud supports several use cases:
• Intelligent remediation
With vRealize Operations, you can access data across your environment, from one
place.
You can predict, prevent, and troubleshoot faster using actionable insights that correlate metrics and logs.
You can set compliance on your objects to meet defined standards, and vRealize
Operations determines the compliance of your objects with those standards.
Workload Optimization
Using the Workload Optimization feature, you can move virtual compute resources and their
file systems dynamically across datastore clusters in a data center.
• Automate a significant portion of your data center compute and storage optimization
efforts.
vRealize Operations Cloud monitors virtual objects and collects and analyzes related data,
which is presented in graphical form on the Workload Optimization page.
You use the information on this page to determine whether an action is required. If an action is
required, you can select the appropriate optimization function to help resolve the issue.
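vRealize Operations drives these decisions from its own analytics, but conceptually the placement choice resembles the toy check below; the free-space threshold and cluster names are invented for illustration.

    # Conceptual sketch (not vRealize Operations logic): recommend the datastore
    # cluster with the most free capacity as a move target.
    def pick_target(datastore_clusters, min_free_pct=30):
        candidates = [d for d in datastore_clusters if d["free_pct"] > min_free_pct]
        if not candidates:
            return None  # no cluster has comfortable headroom; recommend no move
        return max(candidates, key=lambda d: d["free_pct"])

    print(pick_target([
        {"name": "dsc-01", "free_pct": 12},
        {"name": "dsc-02", "free_pct": 55},
    ]))  # -> {'name': 'dsc-02', 'free_pct': 55}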
Which statements do you think describe possible actions for optimizing workloads? (Select
three options)
Managing Costs
With its capacity and cost management features, vRealize Operations Cloud can predict future
demand and provide actionable recommendations to help in managing costs.
Cost Overview
vRealize Operations Cloud supports costing for private clouds, public clouds, and VMware Cloud
infrastructure.
You can track expenses for a single virtual machine, and identify how these expenses contribute to the overall cost associated with your private cloud accounts and VMware Cloud infrastructure accounts.
On the Cost Overview home page in vRealize Operations Cloud, you can find details about the
costs associated with your VMware Cloud infrastructure accounts, public cloud accounts, and
your private cloud accounts.
You can view the Total Cost of Ownership, Potential Savings, and Realized Savings for your
VMware Cloud infrastructure cloud accounts and vSphere private cloud accounts, and Total
Cost of Ownership for your private cloud accounts.
How can you use vRealize Operations Cloud to analyze and manage costs? (Select three options)
Troubleshooting Workbench
You can use the Troubleshooting Workbench to analyze alerts and changes in your environment
when troubleshooting problems.
On the Troubleshooting Workbench home page in vRealize Operations Cloud, you can find
active troubleshooting sessions and recent searches. The page also includes a search bar.
The active troubleshooting sessions do not persist after you log out of vRealize Operations
Cloud. But the next time that you log in, your earlier active sessions appear as recent searches.
You can start the Troubleshooting Workbench with an alert in context from the alert
information page, or you can search for an object and start the Troubleshooting Workbench to
investigate known or unknown issues related to the object.
On the Potential Evidence tab, you look for evidence of a problem within a specific scope and
time range. Extending the time range and scope can reveal more evidence for troubleshooting.
You can select only the object that you are investigating or include several upstream and
downstream relationships by increasing the scope. As you increase the scope, more objects
appear in the inventory tree.
By increasing the scope to include additional objects, you can view new evidence. In the
example, significantly more events, property changes and anomalous metrics appear.
You investigate potential evidence in an object's events, property changes, and anomalous
metrics.
• Events
○ Shows events based on change in metrics, for example, events where metrics breach the usual behavior, and major events that occur in the selected scope.
• Property Changes
○ Shows important configuration changes that occurred in the selected scope and time, including both single and multiple property changes.
• Anomalous Metrics
○ Focuses on metrics that show drastic changes in the selected scope and time. Results are ranked according to the degree of change.
You can select individual metrics directly from the Potential Evidence tab for comparison.
After you select the metrics that you want to compare, you click the Metrics tab to view the
metrics.
Correlating Metrics
Correlation is the key to focusing efforts in the right area when investigating problems. You click
the Correlation icon to investigate the potential root causes through pattern matching.
Metric correlation identifies the metrics with similar patterns of behavior in a time range. In
this way, you can access relevant data that helps you to resolve problems faster.
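To make the idea concrete, similarity between two metric series can be measured with a correlation coefficient, as in this standalone sketch; the product's exact algorithm is not described here, and the sample values are invented.

    # Toy illustration of metric correlation: series that rise and fall together
    # have a coefficient near 1.0 and are worth investigating together.
    import numpy as np

    cpu_usage    = [20, 35, 50, 80, 95, 60, 30]
    disk_latency = [5, 9, 14, 25, 31, 18, 8]

    r = np.corrcoef(cpu_usage, disk_latency)[0, 1]
    print(f"correlation: {r:.2f}")  # near 1.0 -> similar pattern of behavior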
True or False: Changing the scope or time in the Troubleshooting Workbench changes the
potential evidence identified by the tool.
True
False
Compliance Benchmarks
Compliance benchmarks show score cards that help you proactively detect compliance
problems in vRealize Operations Cloud. The compliance benchmarks are measured against a set
of standard rules, regulatory best practices, or custom alert definitions.
If an object is not compliant with a specified standard, vRealize Operations Cloud generates an
associated alert.
vRealize Operations Cloud displays compliance score cards for VMware SDDC, custom, and
regulatory benchmarks.
Given the use cases for vRealize Operations Cloud, what benefits does this service provide for
multi-cloud operations? (Select three options)
vRealize Network Insight Cloud provides end-to-end network visibility across VMware
NSX, VMware SD-WAN, VMware Cloud, public cloud, and other multi-cloud
deployments.
Visibility with vRealize Network Insight Cloud
vRealize Network Insight Cloud provides visibility into the network flows and security of your on-premises and cloud applications, and it helps you to administer your NSX-based SDDC.
You can use vRealize Network Insight Cloud to monitor and diagnose problems with your
network resources. For example, you can check your network flows and your virtual machine
and NSX security rules, and plan for optimal micro-segmentation.
vRealize Network Insight Cloud puts the data it gathers to good use.
Take Inventory
The process starts with vRealize Network Insight Cloud collectors taking inventory of the
various physical components—switches, routers, firewalls, load balancers, and so on—as well
as virtual components, including vCenter, NSX, and AWS inventories.
Construct Meaning
vRealize Network Insight Cloud takes the networking data and constructs meaningful insights
about the networking components of applications, how those components are dependent on
each other, which are shared, and where the different components run.
By turning on network flow collection, vRealize Network Insight helps you to understand the
movement between application components. In this way, network engineers can view traffic
data from an application perspective.
You then group these components into applications, and can mark components as shared
between applications.
Observing the traffic at a granular level and taking the underlay network and the workload into
account, it translates that information into ready-to-go firewall rules that can be easily
imported into NSX.
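Conceptually, the translation from observed flows into rules looks like the sketch below; the flow fields and rule text are simplified assumptions for illustration, not vRealize Network Insight's export format.

    # Simplified sketch: collapse observed flows into de-duplicated allow-rules
    # per source tier, destination tier, and port, as a micro-segmentation plan does.
    flows = [
        {"src": "web", "dst": "app", "port": 8443},
        {"src": "web", "dst": "app", "port": 8443},  # repeat flows collapse into one rule
        {"src": "app", "dst": "db",  "port": 5432},
    ]

    rules = {(f["src"], f["dst"], f["port"]) for f in flows}
    for src, dst, port in sorted(rules):
        print(f"ALLOW {src} -> {dst} on TCP/{port}")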
Search Functionality
The search functionality is fundamental to vRealize Network Insight Cloud.
Anything that you do in the interface is a search command. The search looks through all traffic flow data, as well as inventory across vSphere, NSX, VMware Cloud on AWS, native AWS (EC2), and Azure, and events, as well as metrics across time.
Getting Started
To onboard with vRealize Network Insight Cloud, you take the following steps:
1. Sign up for VMware Cloud Services and request a vRealize Network Insight Cloud trial.
2. Log in to vRealize Network Insight Cloud.
3. Deploy the collector and connect it to the cloud platform.
4. Add data sources in vRealize Network Insight Cloud.
Which examples illustrate the uses for vRealize Network Insight Cloud? (Select four options)
Learner Objectives
VMware Horizon® provides a virtual desktop infrastructure (VDI) platform for the management
and secure delivery of personalized virtual desktops and published applications to users.
VDI is a technology that hosts and manages desktop environments on a centralized server and deploys
them to users on request.
You use VMware Horizon to create virtual desktops on-demand, based on location and profile,
and you securely deliver managed desktops and applications to the employees.
The desktops and applications are managed in a centralized data center. VMware Horizon supports both Windows and Linux virtual desktops, and Remote Desktop Session Host (RDSH) hosted applications.
Users can access published desktops and applications regardless of the client device.
They can access their personalized virtual desktops or remote applications from company
laptops, their home PCs, thin-client devices, Macs, tablets, or smartphones.
VMware Horizon uses several features to deliver just-in-time desktops and applications:
VMware Dynamic Environment Manager™ provides the personalization and dynamic policy configurations across virtual, physical, and cloud-based environments.
Dynamic Environment Manager enhances VMware Horizon by enabling customers to take advantage of user and application management for Horizon virtual desktops, session-based desktops, and hosted applications.
App Volumes
After you select the deployment type, VMware Horizon automatically operates in a mode that
is compatible with the AWS, Google Cloud, or Microsoft Azure cloud admin privileges.
For more information about deploying VMware Horizon on AWS, Google Cloud, and Microsoft
Azure, access the VMware Horizon Product Documentation.
Which statement accurately describes VMware Horizon deployment options across private and
public clouds? (Select one option)
VMware Horizon can be deployed on premises and in cloud SDDCs but not both.
When installing VMware Horizon, you can select a cloud provider as a deployment type.
You can deploy VMware Horizon on premises only.
You can expand on-premises VMware Horizon without a lengthy hardware purchase,
installation, and configuration process.
Application Locality
You want to move published applications that are latency-sensitive to the cloud and need
virtual desktops and Remote Desktop Session Hosts (RDSH) to be co-located with your
published applications.
When you extend the VMware Horizon deployment to the cloud, you can allow end users to
connect to the nearest virtual desktop or RDS host to launch the application.
When you use the cloud, you pay for the use of this infrastructure during those times when the primary infrastructure is down. A unified VMware Horizon architecture across the primary
site on-premises and the disaster recovery and continuity site on a cloud provider makes the
failover process simple.
Learner Objectives
After completing this lesson, you should be able to:
From a single platform, CloudHealth provides information to help achieve the following key
goals in a
multi-cloud environment:
CloudHealth Capabilities
CloudHealth capabilities can be divided into the areas of financial management, operational
governance, and security and compliance.
Financial Management
CloudHealth includes budget management, cost reporting, and cost forecasting capabilities.
Operational Governance
For example, you can use the platform to perform the following tasks:
• Rightsize cloud infrastructure to eliminate wasted spending
• View recommendations for purchasing and managing commitment-based discounts
• Create custom policies and receive alerts
• Set automated actions when policy conditions are met to ensure continuous governance
You can get alerted when conditions deviate from your desired state and enable automated
actions to execute changes in your environment.
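A governance policy of this kind reduces to "evaluate a condition, then alert or act." The sketch below shows that shape with invented data; it is not CloudHealth's policy engine or API.

    # Illustrative policy check (not CloudHealth itself): flag unattached
    # storage volumes older than 30 days as candidates for automated cleanup.
    from datetime import date, timedelta

    volumes = [
        {"id": "vol-1", "attached": False, "created": date(2022, 1, 10)},
        {"id": "vol-2", "attached": True,  "created": date(2022, 6, 1)},
    ]

    cutoff = date.today() - timedelta(days=30)
    for v in volumes:
        if not v["attached"] and v["created"] < cutoff:
            print(f"policy violation: {v['id']} is unattached and stale")  # alert or trigger an action here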
Through CloudHealth, you can access intelligent and real-time security insights.
The overview dashboard displays a summary of multi-cloud security and compliance insights.
To address the challenges of increasing costs and risks, the retail organization implements the
CloudHealth platform.
With visibility into all its cloud environments, the retail organization learns which
departments, teams, projects, or applications are driving cloud cost and usage.
Accessing a report generated by CloudHealth, the retail organization finds that unused
storage volumes often go unnoticed.
The cloud team tracks cost patterns over time to accurately forecast future budgets and
reduce miscalculations.
The cloud team prioritizes threat events, visualizing all the services, key relationships, and
associated security risks.
Learner Objectives
After completing this lesson, you should be able to:
In addition to relying on on-premises data centers and cloud infrastructures, modern enterprise
applications are increasingly using edge compute services (co-located with data sources) to
deliver real-time insights and processes.
Managing and running workloads requires a common approach, which a hybrid cloud offers. An
enterprise has several options for managing and running workloads.
For the successful implementation of a hybrid cloud, companies must overcome several
challenges.
Operational inconsistencies
The on-premises infrastructure and the public cloud environment are different in terms of
operations and the infrastructure stack.
Organizations require different skill sets and tools when moving workloads from on premises to the cloud. This move requires new training for current employees, or hiring new employees.
In a hybrid cloud environment, you require a unified interface or console from which you can
manage your environment and prevent tasks from being overlooked in the workflow.
Which statements describe challenges that IT organizations might face when moving to a hybrid cloud environment? (Select three options)
Applications have different SLAs depending on whether they run in the private or public
cloud.
You cannot move workloads between on-premises and public clouds because of
incompatible machine formats.
No disaster recovery options are available in a hybrid cloud environment.
On-premises IT teams do not have the skills to manage and operate a hybrid cloud
environment.
Modern applications are not supported in a hybrid cloud environment.
Enterprise Capabilities
VMware hyperscaler partners (for example, AWS, Azure, and Google) deliver several enterprise
capabilities in the public cloud.
Seamless Migration
As a result, you get fast, cost-effective and low-risk migration of workloads between the on-
premises environment and the cloud environment.
Seamless migration across clouds is achieved using a solution called VMware HCX.
As-a-Service Model
The VMware software-defined datacenter is delivered as a cloud service that runs in the
hyperscaler partner's cloud.
By using the SDDC as-a-service model, you can help lower your costs because you do not need
to purchase the infrastructure to run your workloads.
Operational Consistency
Hyperscaler partners provide the consistency and familiarity of VMware technologies between
on-premises and cloud environments through consistent infrastructure and operations.
And you can use the same tools and skillsets for both environments.
Workload Portability
You can move your workloads from on premises to the cloud and vice-versa.
Whatever you build in your on-premises environment, you can also build in the cloud, and vice-
versa.
And, you can manage your hybrid cloud environment with the hybrid capabilities provided by
the platform.
You can build Kubernetes containers and access native cloud services.
With a single platform, you can create and run these modern composite applications.
Hyperscaler partners provide a cloud service that adopts industry best practices. The cloud
service meets a comprehensive set of international and industry-specific security and
compliance standards.
• Disaster recovery
• Data center extension
• Cloud migrations
• Next-generation applications
Disaster Recovery
Key Capabilities
Hyperscaler partners provide key capabilities that you want in a disaster recovery system.
DR Benefits
On-demand capacity
You require IT capacity to support seasonal spikes in demand.
Virtual desktops
Your training organization requires virtual desktops for their weekly online classes, so you
expand into the cloud to meet this requirement.
Key Capabilities
Hyperscaler partners provide key capabilities to support extending your data center into the
cloud.
By extending your data center through a hyperscaler partner, you have seamless application portability because you use the same application format across your on-premises and cloud infrastructure.
In the hyperscaler partner cloud, your dedicated hardware lives within the hyperscaler partner infrastructure, which is high-performing and powerful.
Extension Benefits
By extending the data center, you can, in turn, expand your environment seamlessly as
necessary.
You can scale rapidly, while managing your environment in a unified way, using one interface or
console view.
In addition, you can reuse the skills and tools that you already have.
Cloud Migrations
Key Capabilities
Hyperscaler partners provide key capabilities to support cloud migration use cases.
You can use tools such as VMware HCX and VMware vSphere vMotion to migrate workloads that exist on the cloud and bring them into the data center, and vice versa.
• Predictable, high-performance compute with vSphere
• Feature-rich SDDC with NSX and vSAN
• Ability to spin up an SDDC and seamlessly add additional hosts in minutes
• An infrastructure that supports
Benefits of Migration
The benefits of migrating workloads to the cloud include:
Next-Generation Apps
You might run your next-generation applications in the cloud for a number of reasons:
Key Capabilities
• You can select from a variety of tools to automate your infrastructure.
• You can use infrastructure as a service, while also considering containerization through VMware Tanzu and Kubernetes.
• You can use the hyperscaler partner ecosystem and still take advantage of your VMware infrastructure.
The VMware Cloud operating model aligns your applications, cloud, and investments to
your business strategy. The operating model includes the people, processes, and
technology that are key to executing your business strategy.
The VMware cloud operating model focuses on delivering three main competencies for
multi-cloud management: service delivery, operations, and governance.
Because hybrid clouds use a consistent software-defined infrastructure stack, you can
manage on-premises data centers and public cloud environments using familiar skill sets
and tools. These tools include the vRealize Cloud Management stack (which includes
vRealize Suite), VMware Horizon, and CloudHealth.
VMware and its hyperscaler partners provide joint solutions for the hybrid cloud. These
solutions include disaster recovery, data center extension, cloud migrations, and next-
generation applications.
Learner Objectives
SDDC Overview
The SDDC can also be consumed as a service by leveraging VMware services, such as VMware Cloud on
AWS or VMware Cloud on Dell.
You can also spin up an SDDC as a service using a VMware partner, such as Azure VMware Solution,
Google Cloud VMware Engine, or one of the more than 4,500 VMware certified partners worldwide.
The virtualization stack of the SDDC consists of vSphere, vSAN, and NSX.
vSphere
vSphere provides the core virtualization platform for the SDDC and includes the following key products:
• VMware ESXi
○ Provides the compute platform where you create and run VMs.
• VMware vCenter Server
○ Acts as a central administration point for managing ESXi hosts and VMs that are connected
in a network.
○ vCenter Server exposes functionality such as VMware vSphere vMotion and VMware
vSphere High Availability.
vSAN
vSAN is a software-defined storage solution that enables administrators to provide a host cluster with redundant storage without having to use traditional, external, shared storage. By clustering solid-state drives (SSDs) or host-attached hard disk drives (HDDs), vSAN creates an aggregated datastore shared by VMs.
vSAN is an object-based, policy-driven storage environment. The datastore contains all the VM files,
including the VMDK files. For each of the VMDK files, you can create a different VM storage policy,
which defines how data is stored on the disks of the datastore. You configure these VM storage policies
to take advantage of the vSAN features.
NSX
NSX provides network virtualization and security for SDDC workloads. These workloads can run in the on-premises data center or on public clouds, such as VMware Cloud on AWS, Azure VMware Solution, or Google Cloud VMware Engine.
NSX also supports modern applications through integration with vSphere with VMware Tanzu®.
True or False: The SDDC consists of vSphere, vSAN, and NSX, whether the SDDC is located on-premises
or in the public cloud.
True
False
vSphere HA
vSphere HA ensures availability of the VMs in your SDDC. vSphere HA provides uniform, cost-effective
failover protection against hardware and operating system outages within your virtual environment. It
uses multiple ESXi hosts to provide rapid recovery from outages and cost-effective high availability for
applications.
vSphere HA protects against ESXi host failures, guest OS failures, and application failures. It also
protects VMs against network isolation.
Host Failure
Guest OS Failure
Application Failure
Network Isolation
vSAN storage policies define storage requirements for your virtual machines (VMs). These policies
guarantee the required level of service for your VMs because they determine how storage is allocated
to the VM.
Storage policies are sets of rules that you configure for VMs. Each VM has a storage policy. Each storage
policy reflects a set of capabilities that meet the availability, performance, and storage requirements of
the application or service-level agreement for that VM.
Failures to Tolerate
The storage policy defines the failures to tolerate (FTT). The value for the number of failures to tolerate
defines the number of failures that a storage object can tolerate and the method that is used to
tolerate failures.
If vSAN fault domains are enabled, vSAN applies the active VM storage policy to the fault domains
instead of to the individual hosts.
vSAN requires a minimum of three fault domains. Each fault domain consists of one or more hosts. At
least one additional fault domain is recommended to ease data resynchronization in the event of
unplanned downtime or planned downtime, such as host maintenance or upgrades.
A sufficient number of fault domains should exist to satisfy the failures to tolerate (FTT) value defined in
the VM storage policy.
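To see how FTT translates into cluster size, consider a minimal Python sketch based on standard vSAN guidance (RAID-1 mirroring requires 2n+1 hosts or fault domains for FTT=n; RAID-5 erasure coding requires 4; RAID-6 requires 6). The helper function is hypothetical and only illustrates the arithmetic; it is not part of any VMware tool.

def min_hosts_required(ftt: int, method: str = "RAID-1") -> int:
    """Estimate minimum hosts or fault domains for a vSAN storage policy.

    Standard vSAN guidance: RAID-1 mirroring needs 2 * FTT + 1 hosts;
    RAID-5 erasure coding (FTT=1) needs 4; RAID-6 (FTT=2) needs 6.
    """
    if method == "RAID-1":
        return 2 * ftt + 1
    if method == "RAID-5" and ftt == 1:
        return 4
    if method == "RAID-6" and ftt == 2:
        return 6
    raise ValueError(f"Unsupported combination: FTT={ftt}, method={method}")

print(min_hosts_required(1))            # 3 hosts for FTT=1 with mirroring
print(min_hosts_required(2))            # 5 hosts for FTT=2 with mirroring
print(min_hosts_required(2, "RAID-6"))  # 6 hosts for FTT=2 with RAID-6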
Learner Objectives
With VMware Cloud™ on AWS, customers can integrate SDDC clusters with Amazon Web
Services, such as Amazon Simple Storage Service (Amazon S3), Amazon Elastic Compute Cloud
(Amazon EC2), and Amazon Relational Database Service (Amazon RDS).
Video Transcript
So, you're probably saying what's an SDDC? So an SDDC is a software-defined data center with
VMware Cloud on AWS. You can integrate these software-defined data centers that are using
the best of VMware technology with the native services that you find on AWS.
Each organization within VMC on AWS supports two SDDCs, and each SDDC can support up to
20 clusters. And those clusters can have between 2 and 16 hosts, for a maximum of 160 hosts.
Now, if you'll look on your screen, you're going to see that we have three logos under each of
the SDDCs, and that's for our ESXi, NSX, and vSAN technologies, because the three of those
together make Cloud Foundation, which is the foundation of the VMC on AWS stack.
It's running the same software that you use on-premises up in the cloud to make it easy, to
make your workloads mobile from the on-premises environment to the cloud environment.
VMware Cloud on AWS hosts are named for the Amazon EC2 bare metal hardware underlying
them.
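The limits quoted in the transcript (up to 20 clusters per SDDC, 2 to 16 hosts per cluster, and 160 hosts per SDDC) can be expressed as a quick sanity check. A minimal sketch, assuming those quoted limits; the function is illustrative, not a VMware API, and the limits may change over time.

# Limits as quoted in the transcript above; verify against current VMware docs.
MAX_CLUSTERS_PER_SDDC = 20
MIN_HOSTS_PER_CLUSTER = 2
MAX_HOSTS_PER_CLUSTER = 16
MAX_HOSTS_PER_SDDC = 160

def validate_sddc(cluster_host_counts: list[int]) -> None:
    """Raise ValueError if a proposed SDDC layout exceeds the quoted limits."""
    if len(cluster_host_counts) > MAX_CLUSTERS_PER_SDDC:
        raise ValueError("too many clusters for one SDDC")
    for hosts in cluster_host_counts:
        if not MIN_HOSTS_PER_CLUSTER <= hosts <= MAX_HOSTS_PER_CLUSTER:
            raise ValueError(f"cluster size {hosts} is out of the 2-16 range")
    if sum(cluster_host_counts) > MAX_HOSTS_PER_SDDC:
        raise ValueError("total host count exceeds the per-SDDC maximum")

validate_sddc([3, 16, 8])  # a valid three-cluster layout; raises on violations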
Video Transcript
At the time of the recording of this video, we have two different types of nodes that are
available. And these are bare metal instances that AWS provides that we install ESXi and the
foundation stack on for you to then be able to administer. The default type of node is what we
call an i3.metal node. That's the actual name of it from AWS.
So the i3 node is going to be the default, the lower cost option, right? Now, if you have
additional storage or IOPS needs, you might use an i3en host. The i3en host has
additional NVMe storage and additional RAM for those high-intensity workloads. So, let's go
ahead and quickly review some of the differences between the i3 and the i3en.
The i3 is going to be for that general purpose, running an Intel Xeon Broadwell processor. From
there, it's going to have 36 cores at 2.3 gigahertz. Now you'll notice if you look over at the i3en,
we're running a little bit of a newer stack. We're running Cascade Lake with 48 cores at 2.5 gigahertz.
Now you'll notice that hyperthreading is enabled here, which actually will get you up to 96
hyperthreading cores, as compared to the 36 non-hyperthreaded cores you're going to get on
the i3. You may say, well, Frazier, why is hyperthreading not enabled on the i3 versus the i3en?
The i3 simply has a Spectre mitigation implemented because it is the Broadwell chipset. So, we
can't enable hyperthreading on them at this time.
When we're looking at RAM, Amazon does their RAM in gibibytes (GiB), not gigabytes (GB). So
you need to make sure that you make that calculation. And you're not hearing things when you
hear me say gibi versus giga. We have 512 gibibytes under the i3 instance. If you need more
RAM, you can always use the i3en instance, which is going to have 768 gibibytes.
For both of the i3 and the i3en clusters, we are using vSAN as our storage methodology. That
storage is going to be vSAN storage with local NVMe flash that is underpinning it. If you're using
VMC on AWS for your i3 instances, you're going to have to have compression and
deduplication. It's going to be enabled by default and you can't take it off. Now for the i3en,
deduplication is not available, but compression will be allowed. We enable this because it
allows you to have 150% to 200% additional storage on your host by enabling these different
feature sets.
On the i3 hosts, you get 10.3 tebibytes (TiB) of raw storage capacity. While on the i3en, you're
going to get 45 tebibytes, which has a much larger disk size because of the additional NVMe
drives that are added to the disk groups for the i3en.metal. We'll talk more about that
i3en.metal when we go into the storage section and how you can see the different disk groups
that are created.
You receive 25 gigabits per second of network bandwidth on both the i3 and the i3en. However,
the i3en has a network offload chip within its NIC that allows you to do some data-at-rest and
data-in-motion encryption. That's not available within VMC on AWS on the i3.metal hosts.
Let's go ahead and look at some of the different configurations that are common for these
clusters.
When designing your SDDC to be hosted in VMware Cloud on AWS, you can use two types of
bare-metal hosts for your workloads:
• i3.metal
• i3en.metal
i3.metal Hosts
The i3 host type is the default host type. Each i3 host includes:
• 36 cores
• 512 GiB of RAM
• 10.3 TiB of raw storage capacity
You can use i3.metal hosts for most use cases, including general computing workloads,
database, and virtual desktop deployments. These hosts are appropriate for workloads
characterized by high performance, high throughput, or low latency.
The i3 host favors read-intensive operations, read-intensive operations with occasional high
write bursts, and smaller block sizes.
Many customers choose a host option according to workload capacity requirements. If the i3
host does not meet the requirements, you can use the higher-performance i3en host.
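As a rough illustration of that selection logic, the following Python sketch compares per-host requirements against the figures quoted in this lesson (i3: 36 cores, 512 GiB RAM, 10.3 TiB raw storage; i3en: 48 cores, 768 GiB RAM, 45 TiB raw storage). The helper and its spec table are simplified assumptions; real sizing should use the VMware Cloud Sizer discussed later in this course.

# Per-host figures quoted in this lesson; actual offerings may change.
HOST_TYPES = {
    "i3.metal":   {"cores": 36, "ram_gib": 512, "raw_storage_tib": 10.3},
    "i3en.metal": {"cores": 48, "ram_gib": 768, "raw_storage_tib": 45.0},
}

def pick_host_type(cores: int, ram_gib: int, storage_tib: float) -> str:
    """Return the first (lower-cost) host type meeting per-host requirements."""
    for name in ("i3.metal", "i3en.metal"):  # ordered lower-cost first
        spec = HOST_TYPES[name]
        if (cores <= spec["cores"] and ram_gib <= spec["ram_gib"]
                and storage_tib <= spec["raw_storage_tib"]):
            return name
    raise ValueError("no single host type satisfies these requirements")

print(pick_host_type(cores=32, ram_gib=600, storage_tib=20))  # i3en.metal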
i3en.metal Hosts
The i3en host also includes hyperthreading, an upgraded CPU microarchitecture, and increased
CPU clock speed.
The i3en host type is optimized for data-intensive workloads. It has greater network bandwidth,
memory capacity, and storage capacity than the i3 host type.
Single-host SDDCs cannot contain the i3en host type. The i3en host type is currently
available in 17 AWS regions.
Compute Resources
VMware Cloud on AWS hosts run VMware ESXi directly on the computer hardware, without an
operating system. Cluster sizes have different compute capacities.
When you use AWS bare-metal hosts without an OS, features such as Intel Virtualization
Technology (VT) are directly available to the ESXi hypervisor.
• i3.metal: The i3 hosts might use CPU functionality up to the Broadwell CPU family instruction set.
• i3en.metal: The i3en.metal hosts might use CPU functionality up to the Cascade Lake CPU family instruction set.
You can use an Enhanced vMotion Compatibility baseline of Broadwell for any cluster in
an on-premises SDDC that might use VMware vSphere vMotion to migrate VMs to a
VMware Cloud on AWS SDDC.
You can use per-VM Enhanced vMotion Compatibility if a different CPU feature set is
required.
For more information about Enhanced vMotion Compatibility, see VMware knowledge base
article 1003212.
Storage Resources
VMware Cloud on AWS hosts use VMware vSAN and can connect to Amazon S3 and Amazon
Elastic File System (Amazon EFS) for additional storage needs.
Each i3.metal and i3en.metal host contains NVMe flash drives that provide increased vSAN
performance.
i3.metal:
• Uses two all-flash vSAN disk groups for increased availability
• Uses vSAN deduplication and compression
i3en.metal:
• Uses four all-flash vSAN disk groups in a proprietary configuration
Data encryption is performed at the drive level, and datastore encryption is available
through vSAN with AWS Key Management Services (KMS) integration.
Networking Resources
• Amazon Elastic Network Adapter (ENA) connects each host to the LAN with a total
available bandwidth of 25 or 100 Gbps.
• A management gateway in the SDDC handles management traffic, and a separate
compute gateway handles workload virtual machine (VM) network traffic.
• Amazon Virtual Private Cloud (VPC) enables optimized connectivity of the SDDC to other
AWS services, regions, and availability zones.
• An Amazon Elastic Network Interface (ENI) is a virtual NIC that is provisioned on the ENA.
The ENI connects the VMware Cloud on AWS SDDC to your Amazon VPC.
• AWS Direct Connect enables low-latency connectivity of the SDDC to your on-
premises data center.
Which statements accurately describe VMware Cloud on AWS host configurations? (Select
three options)
You use i3.metal hosts for general computing workloads, database, and virtual desktop
deployments.
The i3en.metal host type is optimized for data-intensive workloads.
The i3.metal host includes hyperthreading.
A given cluster in your SDDC can contain a mixture of host types.
VMware Cloud on AWS hosts use vSAN and can connect to Amazon S3 and Amazon EFS for
additional storage needs.
The VMware Cloud service is deployed in AWS data centers in multiple regions.
AWS Regions
A customer selects an AWS region where an SDDC is deployed, and the workloads persist in this
data center.
The VMware Cloud on AWS console data includes SDDC configuration information and data
that VMware collects on the use of VMware Cloud on AWS.
This data persists in the AWS us-west-2 (Oregon) region, but it might be replicated to
other AWS regions to ensure availability of the service.
The location of the service can be global, which might introduce compliance and security
concerns. Compliance and security must be addressed in the design phase and in the customer
contracts.
Stretched Clusters
Stretched clusters offer an availability strategy. Stretched clusters are designed to provide the
SDDC with an extra layer of resiliency in the event of host-level failures within the cluster or
of AZ-level failures within the region.
A stretched cluster SDDC is one in which the hosts of the SDDC are evenly split between two availability zones (AZs)
within an AWS Region. A standard (non-stretched) SDDC is one in which all hosts are deployed
within a single availability zone.
Implementation Overview
Stretched cluster SDDCs are implemented using a vSAN feature of the same name. Per the
requirements of vSAN, the SDDC provides two data sites and one witness host per cluster.
The data sites are composed of two groups of hosts, which are evenly split between a pair of
AZs. The witness host is implemented behind the scenes, using a custom EC2 instance, and is
deployed to a third AZ that is separate from the data sites. This witness host is not reflected in
the total host count of the SDDC.
Because of the requirement that stretched cluster SDDCs use a total of three AZs, stretched
clusters are only supported in AWS regions that are able to provide at least three AZs.
Video Transcript
In addition to regular clustering, we also have what they call stretch clustering. This is a really
cool opportunity to use some of the resiliency that's built into VMC on AWS and built into the
AWS architecture by splitting your SDDC into two AWS availability zones, which will then allow
you to spread your workloads across the two zones and have dual writes, and have an extremely
high level of protection and SLA against any type of failure, much more so than just having it in
a single region.
Let's talk a little bit more about the two different types of clusters that you're going to run into
in this SDDC. Your first and default type of cluster is going to just be a cluster. This is going to
be restricted to a single availability zone within VMC on AWS, and you'll have a 99.9%
availability guarantee backed by an SLA.
This is for customers who want to balance risk and cost. Now, if cost is not a consideration and
you have to have a higher level of availability, that's where the stretched clusters really come in
handy. So they're still restricted to a single AWS region. You can't go cross region with a
stretched cluster, but you can go cross-availability zone.
They'll provide a 99.99% availability uptime, SLA guaranteed. And these are great for those
business critical workloads that you need to be able to abstract away that infrastructure
volatility and know that it's there. The decision to make a multi-cloud versus a stretch cluster
deployment is going to be something that's made at that time of deployment.
So if you needed three hosts, for example, to host all of your workloads, you're actually going to
need six now to be able to accommodate the dual writes that are happening to the two sets of
hosts within your cluster.
And if that's confusing, don't worry. We're about to talk a lot more about stretch clusters, not
only in this module, but also through the rest of this course. So once again, these stretch
clusters are a great way to abstract away that infrastructure volatility. It's built on the intrinsic
vSphere HA that's a part of the VMware stack and it has automated host failure remediation to
keep that uptime at that 99.99%.
This is really cool because it's actually built into the infrastructure layer. So, if you are running
an application on VMC on AWS, then you don't actually have to design for this because it's in
the infrastructure layer.
As long as you're deploying, as you would normally deploy into this stretch cluster, then it is
happening with this extra resiliency added with no additional work on the developers. It does
this by using synchronous writes between the availability zones for those mission-
critical applications.
So if one of the availability zones goes down, it's going to be treated as a vSphere HA event.
And then that VM will be restarted in the other availability zone. vSphere vMotion is enabled by
default on all hosts within a vSphere VMC on AWS cluster. So, movement of workloads is not a
problem whenever those HA events happen. It also allows you to live migrate workloads in a
cluster that spans two different availability zones.
A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. Amazon VPC is
a service where you can launch AWS resources in a logically isolated virtual network that you
create.
Permissions Structure
Each hyperscaler partner has its own approach to permissions for the VMware infrastructure
(ESXi, VMware vCenter Server, and VMware NSX).
VMware Cloud on AWS provides the CloudAdmin and CloudGlobalAdmin roles. However, these
roles do not include all the privileges that the Administrator role includes because VMware
performs host administration and other tasks for you.
CloudAdmin:
• Includes the necessary privileges for creating and managing workloads for an SDDC
• Does not allow changing the configuration of certain management components that are supported and managed by VMware, such as hosts, clusters, and management VMs
CloudGlobalAdmin:
• This role is an internal role that must exist during SDDC deployment but can be removed by a CloudAdmin after deployment is complete
If you change the password credentials, you are responsible for the new password.
Contact Technical Support and request a password change if the password is lost.
True or False: In a vCenter Server instance in the VMware Cloud on AWS environment, the
[email protected] user has more permissions than the [email protected] user
in an on-premises vCenter Server instance.
True
False
Now you have an opportunity to apply your knowledge in a practical activity, where you deploy
a VMware Cloud on AWS SDDC.
Learner Objectives
Azure VMware Solution delivers VMware-based private clouds in Microsoft Azure.
Private cloud hardware and software deployments are fully integrated and automated in Azure.
You deploy and manage the private cloud through the Azure portal, CLI, or PowerShell.
During the planning process, you take the following steps to identify and gather information
needed for your deployment:
• Identify the subscription that you plan to use to deploy Azure VMware Solution
• Identify the resource group that you want to use for your deployment
• Identify the region in which you want Azure VMware Solution deployed
• Define the resource name for your Azure VMware Solution private cloud
• Identify the size of the hosts that you want to use when deploying Azure VMware Solution
• Define the number of hosts that you want to deploy to the first cluster for your
deployment
• Request a host quota (capacity) early so that you will be ready to deploy your Azure
VMware Solution private cloud
• Identify the /22 CIDR IP segment for your private cloud management
• Define the IP address network segment for your VM workloads
• Define the virtual network gateway
• Define VMware HCX network segments
If you’re considering another hyperscaler public cloud like Amazon Web Services, Google Cloud
Platform, or Oracle Cloud Infrastructure, consider these potential benefits of migrating to Azure
and AVS, particularly if you’re a Windows Server and SQL Server shop.
Let's look at these benefits for each of these three groups on the screen.
For business decision makers, Microsoft has introduced a number of specific cost savings for
Azure VMware Solution.
These include:
• Free security updates for Windows Server 2008 R2 and SQL Server 2008 R2 for 4 years
beyond the end of extended support date for those products.
• Extended security updates typically cost anywhere from 75% all the way up to 125% of
the base software license cost per year. That makes running legacy Microsoft platforms on
other clouds prohibitively expensive if you want to stay secure, as you should. No other
VMware hyperscaler service has free security updates.
Microsoft has announced that Azure and Azure VMware will provide free extended security
updates for SQL Server 2012 and Windows Server 2012 R2 when those products reach their
end of extended support dates in 2022 and 2023.
You can also bring your existing on-premises Windows Server and SQL Server licenses with
software assurance to Azure and AVS under the Azure Hybrid Benefit program. This allows
you to save up to 40% on your Microsoft licensing costs.
No other VMware hyperscaler service allows BYOL for Windows and SQL Server licenses
purchased after October 2019.
Microsoft allows deployment of downloadable Office 365 in VDI desktops running with AVS. All
other VMware hyperscaler services are restricted from running downloadable Office 365
applications.
For IT infrastructure and operations teams, the integration between Microsoft tools and
VMware SDDC simplifies initial and day-to-day operations. Specifically, Azure credits are used
to purchase AVS, the Azure Portal is used to manage AVS subscriptions, and a unified Azure
services bill includes AVS.
Azure Resource Manager templates can be used to automate deployment of AVS capacity and
environment configurations, and integrated audit logging, alerting, and metrics management
are displayed in the Azure Portal as well as Azure Monitor.
For application developers, the integration between the Microsoft Azure environment and the
VMware SDDC accelerates delivery of modern applications.
Develop and deploy applications across VMware and Azure environments through Azure Cloud
API.
Developers can modernize components of existing vSphere applications with Azure's market-
leading services, such as Internet of Things (IoT).
The integration of the VMware SDDC and vCenter into the Azure Portal gives developers a
single pane of glass to manage all of their Azure services including AVS.
Integrated identity management across VMware and Azure environments minimizes access
control issues when leveraging Azure services from within the AVS SDDC environment.
Azure VMware Solution combines VMware compute, networking, and storage running on top
of dedicated, bare-metal hosts from Microsoft Azure.
Because vSphere is running on bare metal, customers get the same performance and resilience
that they are accustomed to having on-premises.
The service is jointly engineered with Azure as the operator. This means that Azure delivers the
initial environment and provides periodic updates and fixes, remediates any hypervisor, server,
or network failures, and provides support. It also means that the service is fully integrated with
Azure’s native services.
You are not required to have anything from VMware on-premises. However, if you have
VMware technologies on-premises, you can maximize the value of this offering and easily
migrate workloads from on-premises to the cloud.
Azure VMware Solution delivers VMware-based private clouds in Azure. The private cloud
hardware and software deployments are fully integrated and automated in Azure. You deploy
and manage the private cloud through the Azure portal, CLI, or PowerShell.
This diagram shows the private cloud within its own Azure Resource Group and adjacent
connectivity to various native Azure services within another resource group in the same region.
Here you can see that we have our vSphere clusters with vSAN storage, managed by vCenter, all running on dedicated bare-metal hosts in Azure.
You’ll need to identify the subscription within Azure that you plan to use to deploy Azure
VMware Solution. You can either create a new one, or use an existing one.
The subscription must be associated with an Enterprise Agreement or Cloud Solution Provider
plan.
Once this is complete, a support request will need to be created with Microsoft Azure support
to request a host quota. This is when you’ll provide the region for deployment and number of
hosts. We will go over how to make those decisions a little bit later in this lesson.
Next, you’ll identify a resource group. Generally, a new resource group is created specifically for
AVS, but you can use an existing one.
Then, you’ll need to identify the admin who will be able to enable and deploy the private cloud.
This individual should have the contributor role for the subscription.
Lastly, you’ll need to think about the network requirements.
A /22 CIDR network block is required to deploy AVS. This address space is carved up into
smaller subnets and used for vCenter, NSX-T, vMotion, and HCX. This block should not overlap
with any existing network segment you have on-premises or in Azure.
You’ll need a /24 CIDR block for Azure VNet for your jump box or other services.
You'll also need to scope out an additional /24 CIDR block for an NSX-T network segment for your
workload VMs.
Optionally, you will need to define network segments for HCX if you’re planning to leverage this
technology for migrations. However, this can be done after deployment.
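To make the addressing requirements concrete, here is a minimal sketch using Python's standard ipaddress module to carve an example /22 management block into /24 subnets. The address range and the mapping of subnets to components are illustrative assumptions, not prescribed AVS values.

import ipaddress

# Illustrative /22 management block; choose a range that does not overlap
# your on-premises networks or Azure VNets.
mgmt_block = ipaddress.ip_network("10.10.0.0/22")

# Carving the /22 into /24s yields four subnets that can serve components
# such as vCenter, NSX-T, vMotion, and HCX.
for subnet in mgmt_block.subnets(new_prefix=24):
    print(subnet)
# 10.10.0.0/24, 10.10.1.0/24, 10.10.2.0/24, 10.10.3.0/24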
You’ll need to determine whether you are using VPN or ExpressRoute, and configure
appropriately – most customers will be using ExpressRoute to be able to have the fastest and
highest performance connection possible between their on-premises infrastructure and their
SDDC in Azure.
On this screen, you can see the logical nesting of components within Azure.
As with other resources, private clouds are deployed and managed from within an Azure
subscription. The number of private clouds within a subscription is scalable.
As we said earlier, a private cloud contains the vCenter Server for management, ESXi hosts,
vSAN, NSX-T and HCX. Each additional private cloud that is deployed will have separate
management components.
For each private cloud created, there's one vSAN cluster by default. You can add, delete, and
scale clusters. The minimum number of hosts per cluster and the initial deployment is three.
Up to 4 private clouds can be created, with up to 12 vSphere clusters per cloud. There’s a
maximum of 16 hosts per cluster, with up to 96 hosts per private cloud.
If multiple clusters are deployed within the same private cloud, the management components
will only live on the first cluster. All additional clusters will be fully available for workload VMs.
vSphere HA and DRS are enabled by default.
The hosts come from an isolated pool where they pass all health checks and where all data is
securely deleted. These hosts are available for purchase by hourly (on-demand) billing or by
one-year and three-year reserved instances. Each host includes:
• Dual socket, 18 core, Intel Xeon Gold 6140 CPUs at 2.3 GHz with hyperthreading enabled
• 576 GB of RAM
• 2 x 1.6 TB NVMe drives for vSAN cache
• 8 x 1.92 TB SSDs for vSAN capacity
• 2 x dual port 25 GbE NICs
Two NICs are provisioned for ESXi system traffic and two for workload traffic.
Compute Resources
Azure VMware Solution hosts run ESXi directly on the computer hardware, without an OS.
Cluster sizes have different compute capacities.
By using Azure bare-metal hosts without an OS, features such as Intel Virtualization Technology
(VT) are directly available to the ESXi hypervisor.
Host Processor
Azure VMware Solution AV36 hosts use dual Intel Xeon Gold 6140 CPUs running at 2.3 GHz with
hyperthreading enabled.
You can use an Enhanced vMotion Compatibility baseline of Skylake for any cluster in
an on-premises SDDC that might use vSphere vMotion to migrate VMs to an Azure
VMware Solution SDDC.
You can use per-VM Enhanced vMotion Compatibility if a different CPU feature set is
required.
For more information about Enhanced vMotion Compatibility, see VMware knowledge base
article 1003212.
Each AV36 host contains NVMe flash drives that provide increased vSAN performance.
Storage Resources
Azure VMware Solution uses vSAN and can connect to Azure Blob Storage, Azure Disk Pools, or
Azure NetApp Files datastores for additional storage.
Storage Architecture
All disk groups use an NVMe cache tier of 1.6 TB with a raw, per-host, SSD-based capacity of
15.36 TB.
The size of the raw capacity tier of a cluster is the per-host capacity times the number of hosts.
For example, a three-host cluster provides 46.08 TB of raw capacity in the vSAN capacity tier.
Each host provides approximately 7.6 TB usable capacity to the vSAN pool.
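The capacity arithmetic above is easy to reproduce. A minimal Python sketch, assuming the per-host drive configuration quoted earlier in this lesson (8 x 1.92 TB capacity SSDs per host):

CAPACITY_SSDS_PER_HOST = 8
SSD_SIZE_TB = 1.92

def raw_capacity_tb(hosts: int) -> float:
    """Raw vSAN capacity tier: per-host capacity times the number of hosts."""
    return CAPACITY_SSDS_PER_HOST * SSD_SIZE_TB * hosts

print(round(raw_capacity_tb(3), 2))   # 46.08 TB for a three-host cluster
print(round(raw_capacity_tb(16), 2))  # 245.76 TB for a full 16-host cluster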
Data encryption is performed at the drive level, and datastore encryption is available
through vSAN with Azure Key Vault (AKV) integration.
Networking Resources
You can create secure, scalable, and highly available connections between the SDDC and other networks.
The bare metal hosts that are used for Azure VMware Solution are different from the server
fleet that hosts other Azure IaaS services and are in a dedicated zone within the Microsoft data
center. Consider how the services work together.
Azure VMware Solution hosts run ESXi directly on the computer hardware, without an OS.
You can add, delete, and scale clusters. The minimum number of hosts per cluster, and in
the initial deployment, is three.
• You can create up to 4 private clouds, with up to 12 vSphere clusters per cloud. The
maximum is 16 hosts per cluster or 96 hosts per private cloud.
For information about cluster maximum configurations for Azure VMware Solution, see the
Microsoft documentation site.
The Azure VMware Solution service is deployed in Azure data centers in multiple regions.
For location definitions and the latest availability information, see the Microsoft documentation
site.
• Each availability zone has independent power, cooling, and physical security and is
connected through redundant, ultra-low-latency networks.
• Availability zone naming and number is different for every customer so that availability
zones do not become hotspots.
Permissions Structure
Each hyperscaler partner has its own approach to permissions for the VMware infrastructure
(ESXi, vCenter, and NSX).
Azure VMware Solution provides the CloudAdmin and CloudGlobalAdmin roles.
Learner Objectives
This video provides an overview of Google Cloud VMware Engine, including a demonstration of
how to set up a Google Cloud VMware environment.
Video Transcript
Cloud migration is top of mind for many organizations today. While moving to the cloud can be
full of challenges, the cloud offers many advantages around increased agility, new and
innovative services, and on-demand pricing that traditional data centers don't offer. So let's talk
about one way to make migration easier: Hosting your applications in a native VMware
environment, right in Google Cloud.
Google Cloud VMware Engine is built to address the biggest issues that prevent most workloads
from moving to the cloud: Lack of resources and the cost of rearchitecting apps. With Google
Cloud VMware Engine, you can migrate your apps with no changes to your processes because
you run your applications on native VMware VMs in a dedicated and private SDDC.
This means that you can use the same tools, processes, and policies while still getting the
advantages of being in the cloud, all on top of deeper integration with other Google Cloud
services. And it doesn't take long to spin up an environment. You can quickly lower your total
cost of ownership and spend more time planning how to rearchitect down the road.
Let's walk through a demo of how to set up Google Cloud VMware environment with just a few
clicks. After clicking the navigation menu, we'll scroll down to the COMPUTE section and
click VMware Engine, which takes us to an overview page. The overview provides you with information about the service.
We’ll create a private cloud by clicking the Create Private Cloud button, then entering a name
for our new cloud. We'll stick with US East for the location and keep the node count at the
minimum number of three. We'll input the CIDR range for our management appliances, and
then click Review and Create.
While our private cloud is being created, let's talk about pricing, which is per node and includes
all the storage, compute and licensing to run your VMware environment. You'll pay monthly by
default, but you can also sign up for one- or three-year plans to reduce costs.
The links in the upper-right of the screen allow you to launch the vCenter client, which is the
standard enterprise vCenter client VMware users know today and expand your cloud by adding
nodes.
There are also additional options at the bottom of the screen that let you remove nodes, delete
your cloud, and elevate your vSphere privileges, an important feature that gives you admin
access in vCenter so you can make the configuration changes necessary to run certain third-
party software.
Let's scroll back up to the top of the screen and launch our vSphere Client. After logging in, you
can see that we're working with the same vSphere interface that's so familiar to admins. This
native access to VMware provides you a standard way to control your applications, while still
getting all the benefits of running on Google Cloud.
The Activity interface provides important details for your security and operations teams,
including environment alerts, details about past events, and any currently running tasks and
their status. Your team can also audit logs of any activities performed by users.
The Account screen provides a summary of your entire VMware Engine environment, including
all your private clouds, and lets you subscribe to email alerts and add distribution lists. The
Account screen is also where you can manage any users that have access to the environment.
One really nice feature of the VMware Engine is the native integration into Google services. All
the billing related to the service is integrated seamlessly into the online account management
system, which allows you to see all the usage of your VMware Engine dedicated nodes right in the Google Cloud console.
With this quick tour, we've shown you how to quickly generate a private native VMware
environment on Google Cloud and use all the same tools and processes you're already familiar
with. The environment is fully supported by VMware and gives you the ability to create hybrid
apps that integrate seamlessly with Google Cloud services.
Google Cloud VMware Engine is available for you to try out now. So check out the
documentation to learn more details and spin up your own private VMware environment.
Google Cloud VMware Engine brings VMware enterprise class SDDC software to the Google
Cloud Platform.
Customers can run production applications across vSphere-based private, public, and hybrid
cloud environments, with optimized access to Google Cloud Platform services.
Google Cloud VMware Engine integrates with VMware compute, storage, and network
virtualization products (vSphere, vSAN, and NSX), vCenter management, and robust disaster
protection.
It optimizes these tools to run on dedicated, elastic, Google Compute Engine bare-metal
infrastructure that is fully integrated with the Google Cloud Platform.
When designing your SDDC to be hosted in Google Cloud VMware Engine, you use a ve1-
standard-72 host to run your workloads.
ve1-standard-72 Hosts
The ve1-standard-72 host type is the default host type.
Compute Resources
Google Cloud VMware Engine hosts run ESXi directly on the computer hardware, without an
OS. Cluster sizes have different compute capacities.
By using Google Cloud bare-metal hosts without an OS, features such as Intel Virtualization
Technology (VT) are directly available to the ESXi hypervisor.
You can use an Enhanced vMotion Compatibility baseline of Cascade Lake for any cluster in
an on-premises SDDC that might use vSphere vMotion to migrate VMs to a Google Cloud
VMware Engine SDDC.
You can use per-VM Enhanced vMotion Compatibility if a different CPU feature set is
required.
For more information about Enhanced vMotion Compatibility, see VMware knowledge base
article 1003212.
Storage Resources
Google Cloud VMware Engine hosts use vSAN and can connect to Google Cloud Storage,
Google Cloud Filestore, or third-party storage providers for your additional storage needs.
Each ve1-standard-72 host contains NVMe flash drives that provide increased vSAN
performance.
Cluster Size Total Cache Size (TB) Total Capacity Size (TB)
3 x ve1-standard-72 9.6 57.6
6 x ve1-standard-72 21.6 64.2
16 x ve1-standard-72 57.6 171.2
Networking Requirements
To establish connectivity between Google Cloud VMware Engine private clouds and other
networks, you use networking services such as Cloud VPN and Cloud Interconnect.
Cloud Interconnect
Cloud Interconnect provides connectivity between your on-premises network and Google Cloud
through a high bandwidth, low latency connection.
This service comes in two versions, Dedicated Interconnect and Partner Interconnect:
• Dedicated Interconnect: This version uses a direct circuit (private line) provisioned by a
telco to provide connectivity at 10 or 100 Gbps throughput.
• Partner Interconnect: This version provides similar connectivity through a service
provider at speeds between 50 Mbps to 10 Gbps.
Google Cloud VMware Engine deploys management components of a private cloud in the
vSphere / vSAN subnets CIDR range that you provide during private cloud creation. IP addresses
in this range are reserved for private cloud infrastructure, and cannot be used for workload
VMs. The CIDR range prefix must be between /24 and /21.
The size of your vSphere / vSAN subnets CIDR range affects the maximum size of your private
cloud: the larger the range, the more nodes your private cloud can contain.
When selecting your CIDR range prefix, consider the node limits on resources in a private cloud.
For example, CIDR range prefixes of /24 and /23 do not support the maximum number of nodes
available to a private cloud.
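As a rough way to see why the prefix size matters, the following Python sketch uses the standard ipaddress module to compare the address capacity of each allowed prefix. The mapping from address counts to supported node counts is specific to Google Cloud VMware Engine and is not reproduced here; the example network is an illustrative placeholder.

import ipaddress

# Address capacity grows as the prefix shortens; the management range is
# reserved for private cloud infrastructure, so a /24 leaves far less room
# than a /21.
for prefix in (24, 23, 22, 21):
    net = ipaddress.ip_network(f"192.168.0.0/{prefix}")
    print(f"/{prefix}: {net.num_addresses} addresses")
# /24: 256, /23: 512, /22: 1024, /21: 2048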
Cluster size can be dynamically modified through the Google Cloud VMware Engine console on
demand as necessary.
Node Considerations
• You can specify the number of hosts to add to or remove from a cluster.
• Private cloud initial setup happens in ~30 minutes.
• Additional hosts can be added in ~15 minutes.
• A three-node cluster is the minimum for production.
• You can have up to 32 hosts per cluster.
• You can have up to 64 hosts per private cloud.
You can choose the Google Cloud region where an SDDC is deployed, and the workloads persist
in that data center.
The location of the service can be global, which might introduce compliance and security
concerns. Compliance and security must be addressed in the design phase and in the customer
contracts.
For more information about Google Cloud regions that support VMware Cloud, see "Available
Google Cloud Regions" in the Google Cloud VMware Engine Operations Guide.
An availability zone is one or more discrete data centers with redundant power, networking,
and connectivity in a Google Cloud region.
• Each Google Cloud region consists of multiple, isolated, and physically separate availability
zones within a given geographic area.
• Each availability zone has independent power, cooling, and physical security and is
connected through redundant, ultra-low-latency networks.
• Availability zone naming and number is different for every customer so that availability
zones do not become hotspots.
Permissions Structure
Each hyperscaler partner has its own approach to permissions for the VMware
infrastructure (ESXi, vCenter, and NSX-T Data Center).
For more information about permissions, access the Google Cloud VMware Engine
documentation.
If you change the password credentials, you are responsible for the new password.
Contact Technical Support and request a password change if the password is lost.
When using a third-party KMS solution, you are responsible for providing the required
licenses for the KMS.
You are designing your SDDC to be hosted in Google Cloud VMware Engine. Which statement
about host configuration is accurate? (Select one option)
The AV36 host is the default host type for running workloads
Google Cloud VMware Engine hosts run ESXi on an operating system
Google Cloud VMware Engine hosts use vSAN and can connect to Google Cloud Storage for
additional storage
The default Google Cloud VMware Engine privileges give you access to all administrative
functions
Learner Objectives
Management Components
Example VMs
To run in the SDDC, the management VMs must meet several resource requirements.
• Six instances
• Virtual CPUs: 28 total
• Memory: 92 GB
• Provisioned Storage: 1,100 GB
• Consumed Storage: 1,465 GB
A restricted access model prevents users from adjusting the resources on these
management VMs.
The resource requirements for management VMs in a VMware Cloud on AWS i3 host are as
follows:
• The management VMs (vCenter Server Appliance and NSX VMs) consume cluster
resources.
• A resource pool is created for the management VMs.
• The resource pool includes CPU and memory reservations to guarantee a minimum
amount of resources.
As you scale out the i3 host cluster, the amount of resources required for the management VMs stays the same.
What types of resource requirements apply to management VMs? (Select four options)
Virtual CPU
Network Bandwidth
Memory
Guest OS Memory
Provisioned Storage
Consumed Storage
True or False: In a VMware Cloud on AWS i3 host cluster, a 3-node cluster reserves less CPU
and memory resources relative to total capacity for management functions than a 16-node
cluster.
True
False
Learner Objectives
After completing this lesson, you should be able to:
This lesson focuses on Elastic DRS for VMware Cloud™ on AWS and its
scale-out capabilities.
For more information about the scale-out capabilities of other hyperscaler partners, you can
access the following resources:
With Elastic DRS, you can set policies to automatically scale your cloud SDDC by adding or
removing hosts in response to demand.
Elastic DRS replaces VMware vSphere® Distributed Power Management™ in a VMware Cloud
on AWS SDDC.
To access Elastic DRS settings within the VMware Cloud on AWS cloud console, you
select Actions > EDIT EDRS SETTINGS on the SDDC pane.
The benefits of using Elastic DRS in a VMware Cloud SDDC are numerous:
How do Elastic DRS policies help you to scale clusters and SDDCs?
To configure Elastic DRS, you select a policy, configure the minimum and maximum cluster
sizes, and click SAVE.
Scale-out is performed when utilization for any resource remains consistently above built-in
thresholds.
Scale-in is performed when utilization for all resources remains consistently below the built-in
thresholds.
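The asymmetry between scale-out (any resource over its threshold) and scale-in (all resources under their thresholds) can be captured in a small decision sketch. The threshold values and resource names below are illustrative assumptions, not the actual built-in Elastic DRS thresholds.

# Illustrative thresholds expressed as fractions of capacity; the real
# Elastic DRS thresholds are built in and policy-dependent.
SCALE_OUT_ABOVE = {"cpu": 0.90, "memory": 0.80, "storage": 0.75}
SCALE_IN_BELOW = {"cpu": 0.50, "memory": 0.50, "storage": 0.40}

def edrs_recommendation(utilization: dict[str, float]) -> str:
    """Recommend scaling for sustained per-resource utilization levels."""
    if any(utilization[r] > SCALE_OUT_ABOVE[r] for r in utilization):
        return "scale-out"  # any single resource over its threshold
    if all(utilization[r] < SCALE_IN_BELOW[r] for r in utilization):
        return "scale-in"   # every resource under its threshold
    return "no-change"

print(edrs_recommendation({"cpu": 0.95, "memory": 0.60, "storage": 0.50}))  # scale-out
print(edrs_recommendation({"cpu": 0.30, "memory": 0.35, "storage": 0.20}))  # scale-in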
Several types of notifications are available for scaling recommendations that are generated by
Elastic DRS.
vCenter Server Logs - More details about Elastic DRS events are tracked in vCenter Server log
files.
True or False: Elastic DRS scales in whenever any of the resources drops below a configured
threshold.
True
False
(Elastic DRS scales in only when all resources are consistently below the configured thresholds
for the policy and only when the performance or cost policies are in use)
Learner Objectives
• Use sizing tools to assess the cost of running applications and VMs on VMware Cloud
providers
• Recognize the shared responsibility models of each of the major hyperscalers
• Identify services for creating on-premises to SDDC connections
When designing a cloud SDDC, you must consider the number of required resources, the
division of responsibilities, and how to connect your on-premises data center with the cloud
SDDC.
To integrate an SDDC solution into your existing data center, you must first determine what
resources are required. Then you can reserve and order services from your hyperscaler partner.
Sizing tools can help to automate this process.
How are responsibilities divided between the SDDC and the partner?
A shared responsibility model defines distinct roles and responsibilities, for example, customer,
VMware, and cloud provider.
When you design your hybrid cloud infrastructure, you must consider how to connect your
existing compute infrastructure and storage in the cloud SDDC.
Sizing Tools
VMware Cloud Sizer is a complimentary VMware Cloud service that estimates the resources
that are required to run various workloads in VMware Cloud.
In addition, the VMware Cloud Services portal includes an integrated user interface for the
sizer, making the sizing process easy to navigate.
VMware Cloud Sizer is responsible for estimating the resource use for any VMware Cloud
deployment. The VMware Cloud Sizer currently supports VMware Cloud on AWS.
You can access the sizer tool on the VMware Cloud on AWS Sizer website.
The default values are based on workload or application profiles obtained from vSAN
assessment, large proofs of concept, and telemetry. They can be changed to match your
environment.
• Quick Sizer
• Import Mode
• Advanced Sizer
Quick Sizer
Using Quick Sizer, you can perform sizing with minimal inputs. Typically, you use the Quick Sizer
for the initial sizing.
With Quick Sizer, you provide a simple, high-level input of workload type and number of VMs,
followed by the specific compute and storage resources, which can be provided either as per-
VM averages or as the total number of resources for the environment in scope.
Import Mode
With Import mode, you can perform sizing on data that is imported from on-premises through
Live Optics or RVTools.
When you choose a file to upload, you should not make changes to the file that disrupt the
sizing. For example, you can remove entire rows of VMs but not rename or add columns with
custom data.
You do not need to pre-filter the document because basic filters can be applied through the
sizing tool. For example, you can size powered-on VMs for only the used memory and the
storage, as opposed to the provisioned storage.
For more information about the RVTools software, see the RVTools website.
For more information about the Live Optics tool, see the Live Optics website.
Advanced Sizer
With the Advanced Sizer option, you can perform sizing on multiple workload profiles with
more granular configuration inputs.
On the Basic tab, manual sizing options are available. These options are similar to the Quick Sizer
options, except that the IOPS per VM option can be changed.
Sizer Output
The sizer tool provides several recommendations. You can generate a report PDF for
distribution to stakeholders and decision makers.
VMware Cloud on AWS implements a shared responsibility model that defines distinct roles and
responsibilities: Customer, VMware, and Amazon Web Services.
Customers are responsible for the deployment and ongoing configuration of their
SDDC, virtual machines, and data.
In addition to determining the network, firewall, and VPN configuration, customers are
responsible for managing virtual machines (including guest security and encryption) and
using VMware Cloud on AWS user roles and permissions with vCenter roles and
permissions to apply the appropriate controls for users.
VMware is responsible for protecting the software and systems that make up the VMware
Cloud on AWS service.
This software infrastructure is composed of the compute, storage, and networking software
comprising the SDDC, and the service consoles that are used to provision VMware Cloud on
AWS.
AWS is responsible for the physical facilities, physical security, infrastructure, and hardware
underlying the entire service.
AWS Direct Connect creates a dedicated network connection from an on-premises data center
to an AWS region.
Rather than using only a VPN tunnel over the public Internet, AWS Direct Connect uses a
dedicated leased connection to connect the on-premises data center to an AWS Direct Connect
location.
With AWS Direct Connect, network traffic is isolated, and bandwidth is potentially increased
between the on-premises data center and the AWS resources.
For more information, see "Direct Connect Gateways" on the Amazon website.
A full list of AWS Direct Connect partners is available on the AWS website.
A full list of AWS Direct Connect locations is available on the AWS website.
1. True or False: VMware Cloud on AWS uses a shared responsibility model that defines
distinct roles and responsibilities for the customer, VMware, and AWS.
True
False
2. You are designing your SDDC in partnership with VMware Cloud on AWS. Which
statement accurately describes tools for helping you determine resources? (Select one
option)
Quick Sizer provides a high-level input of workload type and number of VMs.
The default values for workload resources cannot be changed when you use the
sizing tool.
Using Manual mode, you can perform sizing on data that is imported from on-
premises through Live Optics or RVTools
3. How does AWS Direct Connect create a connection from on-premises to the cloud SDDC?
Capacity planning, or sizing, with Azure VMware Solution involves discovering, grouping,
assessing, and reporting.
Use cases for capacity planning with Azure VMware Solution include:
• Determining monthly and yearly costs: You want to determine the costs that are incurred
on a monthly and yearly basis. A capacity planning exercise can help provide customers
with potential costs.
To prepare for an Azure VMware Solution deployment, you must consider how the overall
capacity affects business and technical decisions.
Explore how to use the Microsoft Azure Migrate tool to determine the required resources for
transitioning to a cloud SDDC model:
Step 1: Discovery
OVA Mode
This template can be used to bootstrap an Azure Migrate VM in an on-premises VMware site.
After the Azure Migrate instance is configured, it sends on-premises inventory data to Azure.
CSV Mode
The CSV file expects four mandatory fields: VM/Server Name, Number of Cores, Memory, and
Eligible OS Name.
Other remaining optional fields (such as Number of disks, Disk IOPS, Throughput, and so on) can
be added to improve the accuracy of sizing.
Output from VMware utilities, such as RVTools, can be used to create a CSV file.
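As an illustration of CSV mode, here is a minimal Python sketch that writes an inventory file with the four mandatory columns listed above. The sample rows and file name are invented for illustration; real inventories would typically come from a tool such as RVTools.

import csv

# Mandatory columns per the lesson; optional columns such as "Number of disks"
# or "Disk IOPS" can be appended to improve sizing accuracy.
FIELDS = ["VM/Server Name", "Number of Cores", "Memory", "Eligible OS Name"]

rows = [  # sample inventory rows, invented for illustration
    {"VM/Server Name": "app-vm-01", "Number of Cores": 4,
     "Memory": 16, "Eligible OS Name": "Windows Server 2019"},
    {"VM/Server Name": "db-vm-01", "Number of Cores": 8,
     "Memory": 64, "Eligible OS Name": "Windows Server 2016"},
]

with open("azure_migrate_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)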
Step 2: Grouping
Grouping helps you to organize and manage a large number of VMs. You can group in different
ways, for example:
Information obtained through dependency analysis can also be used for grouping related VMs.
Step 3: Assessment
Sizing Parameters
You configure the assessment with parameters that are useful in determining right sizing and
capacity. These parameters can cover target Azure VMware Solution site details, such as the
location, node type, and so on.
For Azure VMware Solution VMs, you must include parameters such as FTT and RAID settings
and CPU oversubscription.
Assessment Criteria
You can assess the VMs from two perspectives:
• Performance: You assess on-premises VMware VMs, using their performance profiles.
You can select performance history, going back one month, to capture a performance
profile.
You can provide an additional capacity margin by using a comfort factor, which increases
the capacity by multiplying it by the comfort factor (for example, a comfort factor of 1.2 turns a measured 10-vCPU requirement into 12 vCPUs).
• As on-premises: In this case, you use the existing VM specifications, such as CPU and
memory.
Step 4: Reporting
The results include cost and readiness. A summary provides the number of assessed VMware
VMs, the average estimated cost per VM, and the total estimated costs for all VMs.
Reporting shows Azure VMware Solution readiness, providing a clear breakdown of the VM
numbers, across multiple readiness statuses (Ready, Not Ready, Ready with conditions, and so
on).
You get a list of VMs that might require remediation before migration, including reasons for
remediation.
Reporting also provides the number of Azure VMware Solution nodes that are required to run
assessed VMs. You also can access a projected utilization for CPU, memory, and storage in
Azure VMware Solution.
The management of Azure VMware Solution is a shared responsibility between the customer
and Microsoft.
• Deployment, configuration, life cycle, and management of physical infrastructure are also
the responsibility of Microsoft.
For more information about the Azure VMware Solution shared responsibility model, see Cloud
Infrastructure Services on the VMware TechZone website.
Azure ExpressRoute
For connectivity to an Azure SDDC from your on-premises data center, you can use an any-to-
any (IP VPN) network, a point-to-point Ethernet network, or a virtual cross-connection through
a connectivity provider at a co-location facility.
ExpressRoute connections are not made through a public network. As a result, they provide
more reliability, quicker speeds, predictable latencies, and higher security than traditional
Internet-based connections.
Many enterprise customers have an existing ExpressRoute circuit between an on-premises data center and Azure.
Using ExpressRoute Global Reach, you can peer the ExpressRoute circuit with the private
ExpressRoute circuit that supports Azure VMware Solution to allow for connectivity between
on-premises resources and Azure VMware Solution.
For more information about setting up an ExpressRoute circuit, see the ExpressRoute
documentation on the Microsoft website.
ExpressRoute
Direct Connect
Network Segments
Google Cloud VMware Engine delivers VMware as a service with all the components you need
to securely run VMware natively in a dedicated private cloud.
Google Responsibilities
Customer Responsibilities
You are responsible for the deployment and ongoing configuration of the SDDC, virtual
machines, and data.
In addition to determining the network, firewall, and VPN configuration, you manage virtual
machines (including guest security and encryption) and use Google Cloud Platform IAM roles
and permissions with vCenter roles and permissions to apply the appropriate controls for users.
Google Cloud VMware Engine supports options for connecting from on premises to the cloud to
support different customer use cases.
When connecting from on-premises to the Google Cloud VMware Engine private cloud, you can
use Cloud Interconnect or Cloud VPN.
Cloud Interconnect
You use Cloud Interconnect when you require high speed, low latency connectivity into the
Google Cloud Platform (GCP) for access to Google Cloud VMware Engine and other native GCP
services.
Cloud VPN
You might consider Cloud VPN in situations where you do not require additional resiliency or if
you require a lower cost option for hybrid connectivity into GCP. For less error-prone
configuration, configure BGP for Cloud VPN connectivity.
You can use the GCP Org and associated virtual private cloud (VPC). The CIDR range for the
Google Cloud VMware Engine management network is configured in this VPC and connectivity
to this environment is established.
You have several options for connectivity between your on-premises infrastructure and your
cloud SDDCs in Google Cloud VMware Engine:
• Cloud Interconnect - High bandwidth, low latency 10 Gbps and 100 Gbps options
• Partner Interconnect - Partner-managed 50 Mbps to 10 Gbps bandwidth options
• Cloud VPN - Secure layer 3 connection over the Internet
• Layer 2 VPN - Migration use cases with NSX standalone edge or VMware HCX
• Point-to-site VPN - Secure Admin access to vCenter
For information about connectivity options with Google Cloud VMware Engine, see the guides
on the Google Cloud website.
1. Which responsibilities does each party have in Google Cloud VMware Engine?
2. Which option should you select if you require high speed, low latency connectivity into
the Google Cloud Platform for access to Google Cloud VMware Engine and other native
GCP services? (Select one option)
Cloud Interconnect
Cloud VPN
ExpressRoute Global Reach
Learner Objectives
In addition to VMware Cloud on AWS, Azure VMware Solution, and Google Cloud VMware
Engine, other partner solutions are available. This lesson provides brief descriptions of each
solution.
Raghu Raghuram: Larry, thank you for joining us and it's great to see you.
Raghu Raghuram: The VMware and Oracle partnership has been thriving over the last couple of
years, and it's great to see the joint solutions we have delivered in the marketplace to our
customers. Before we dive into the partnership, it'd be great to hear from you - what trends are
you seeing, and what's driving your thinking and Oracle's thinking?
Larry Ellison: Well, several things. One is we're gratified by the fact that the latest report that
surveys IaaS and PaaS services for the second year in a row, Oracle is out there trying to join the
big three. So we were kind of the fourth player in all of this, but we're the most improved
player over the last couple of years. And by the way, the VMware partnership has helped
enormously. We have a number of customers and a lot of nodes running. Our partnership with
Zoom and others has allowed us to make the investments necessary. Now, we think Oracle is
now a fourth major player among the cloud infrastructure providers.
So that's a big deal. Multi-cloud I think is a big deal, because there's really two separate cloud
businesses. There's the application business and then there's the infrastructure business. And
as customers pick applications from cloud application companies and big infrastructure from
cloud infrastructure companies, they need to interconnect these clouds. They're not going to be
using one cloud. And in fact, they need to interconnect their on-premises workloads with their
cloud workloads. They need to interconnect their infrastructure cloud provider with their
application cloud provider. So this whole idea of multi-cloud is going to be very important
going forward. It used to be people thought, well, I'm just going to move everything
to Amazon. And I think that's not going to be the case.
Amazon's very good at some things. Actually, I think Oracle and VMware are very good at some
things. And they're going to want to pick the best technology available at the best price
available. And that's going to mean having multiple clouds in their future. So multi-cloud
extremely important. Hybrid cloud interconnecting on prem to public clouds, and application
clouds interconnected to infrastructure clouds. I think all of that's going to be a huge trend as
the center of gravity of computing moves from on-premise to the cloud.
Raghu Raghuram: Yep. Couldn't agree more. And in all of our conversations with customers, we
are seeing exactly the same thing, customers wanting to use a variety of cloud for different
reasons, because they're all good at lots of different things, and connecting their on-premise to
the various cloud solutions.
Coming to our partnership, you mentioned the Oracle Cloud VMware solution, which was
activated a couple of years ago, and we have started to see very good interest from customers.
Can you talk a little bit more about what you're hearing from customers that are using the
solution along with the rest of OCI?
Larry Ellison: Yeah, I think one of the things we tried to do is make it very easy to lift up an
existing VMware estate, and move it to the cloud without redoing your network architecture.
So as you know well, Raghu, I don't want to dive too deep into the underlying technology. But
we have an L2 network implementation. So you don't have to change all your IP addresses.
You don't have to do all this stuff. You can lift up an existing VMware configuration and move
it largely unchanged into an Oracle public cloud very easily and very, very quickly.
And the interesting thing is the network addresses you have allow you to isolate. So quick lift
and shift is part of it, with our L2 implementation. The other thing that's interesting is that,
because the network addresses are all virtualized, of course, we can isolate the VMware estate
from other customers, or even other estates in the same company.
So we give you a level of security because of that network architecture, that once it's moved,
other people, neighbors can't address your storage systems, can't address your compute
systems. So we really provide that level of isolation to guarantee security, and that's a very big
deal in a world of ransomware. So, quick lifting and shifting, security built in all because of our
unique approach of an L2 implementation. And in a world where ransomware is getting more
and more common, I think this becomes more and more important, and more important to our
customers and, therefore, a very important offering for VMware and Oracle to deliver to those
customers.
Raghu Raghuram: Yeah, and we've seen some good customer wins, like Maxim's and Ruma
logistics, which is the biggest railway operator in Brazil, and many, many others as well. This is
super exciting.
Larry Ellison: Yeah, exactly. Retailers in Hong Kong, railways in Brazil - they have proved that
they can move these estates over, save money, and get better security.
Raghu Raghuram: Yup. That accelerates their whole journey to the cloud and their whole
journey to the modernization of their application portfolio as well, because they can then
connect it to all of your assets that you've got in the databases, on the applications and
everything else that's in the Oracle cloud.
So that's a great start to OCVS and our teams are working great together. What do you see
going forward for the solution? And what customers can look forward to?
Larry Ellison: Well, I think, again, some of the unique things that we offer together are an
environment where security is always on. You know, the approach Oracle takes to security is it
is not an optional feature that you buy. We don't have a long list of parts that you order this
security and that security. Everyone gets security, there's no uplift. You have to have security.
You have to have that level of isolation to protect your data.
It's not that you choose to encrypt, or you choose not to encrypt. No, encryption is always on -
encryption at rest, encryption on the net, it's always on. We don't give you the option. The
other thing I think is very important, that people are going to be looking forward to, that I think
is critical for the future of cloud computing is autonomy, autonomous systems.
The only way, the only way to guarantee that your data is not going to be stolen is to ask your
people who are doing implementations over there at AWS, not to make any mistakes, not to
misconfigure something. If human beings, if the infrastructure that you're running on is
manually configured and a human being makes an error, your data is at risk.
So, everyone thinks of autonomous systems, whether it's an autonomous system from Tesla
that's going to drive you home from the restaurant in the evening, as a convenience.
Well, it's more than a convenience. The autonomous system is much less likely to have an
accident and crash your Tesla when you're coming home from dinner.
The autonomous database at Oracle and the autonomous Linux systems that make up our
infrastructure - we never miss patches because it's the computer that does the patching. The
computer does the patching immediately when the patch is available, and it does the patching
while we're live.
Just like, I could point out, when you're moving a VM workload from on-premise into the cloud,
you can move that workload while you're running. I mean, it's amazing. Same thing for security,
a patch that becomes available, you don't look for a patch window to take your systems down
and patch it. That patch window is, you know, if you wait two days to patch, that's two days of
vulnerability, we can't afford that.
So we have to patch while we're on, be able to do these things while the systems are running,
and it's got to be the computer, our robots, our AI, our machine learning that has automated all
this stuff and does it autonomously. So your data is safe, it's not going to be stolen. And that's
what the ransomware guys do, right? They take the data, they encrypt it, and then they offer to
sell you the key. And that's going to get worse before it gets better, but not for our customers.
We'll protect our customers.
Raghu Raghuram: Yeah, that's a very similar philosophy to what we have at VMware as well.
We call it intrinsic security, to build it in. And that's great. So, these are very exciting
developments going forward, and we look forward to the collaboration between the two teams. It's
been great to work with your teams, bring the solution forward. Thank you so much for your
time, Larry.
Larry Ellison: Raghu, thank you very much. We're looking towards growing our business and
making a lot of customers very, very happy. Thank you for taking your time.
Learn more about Oracle Cloud VMware Solution on the Oracle website.
Hi, I'm Simon Kofkin-Hansen from IBM Cloud, and today we're here to talk about IBM Cloud for
VMware solutions. This is the most secure enterprise-grade cloud for VMware at scale. So let's
break down what this means: the security leadership, the enterprise grade, and the VMware
expertise at scale.
Starting with the security leadership, IBM provides the highest form of encryption for data at
rest and data in motion with FIPS 140-2 Level 4-based encryption, ensuring that your data,
where it resides and while it's moving around within your organization, is using the highest
form of encryption available in the market today.
We also provide role-based access control, allowing different parts of your organization to
interact with the data, maintaining compliance, security, and visibility for the different parts of
the business, and ensuring the wrong people don't get access to the wrong types of data.
Furthermore, we comply with data sovereignty regulations and provide geo-fencing for your
workloads, ensuring data doesn't cross the relevant borders. As crossing borders becomes more
and more easily achievable in this cloud and virtual-based world, we want to ensure that this
sovereignty remains intact, also providing compliance and regulatory control through config
management and managing configuration drift.
Through our wide variety of partners, we've brought all these different security solutions
together, all based on the real-time advice and guidance provided from our highly regulated
and security-conscious clients.
So let's explore enterprise grade and what that means. What we've done is utilize and codify
IBM's experience of managing, for over a decade, some 850,000 VMware workloads for all our
enterprise clients across the world. These consist of banking, government, the financial sector,
insurance, retail, and all the other industry sectors. We've taken all that experience, codified it,
and created an automated way of deploying these solutions out there.
The automation that we have provided not only brings a rapid provisioning and a rapid uptime
to provide these solutions and these platforms, but it has ancillary benefits downstream by
making these solutions much easier to support and much easier for all the third parties that we
have out there to integrate their solutions and integrate their products onto this overall
platform.
The other thing with enterprise grade is that we have the largest footprint, with the solution
available in over 35 data centers globally. We also have flexibility, the final factor of enterprise
grade: the flexibility of options and the flexibility of choice with the myriad of options that you
can choose.
And what I mean by that is we have a number of different storage options. As we found out
through direct lessons learned from the enterprise, most enterprises are not going to choose
just one or two different storage options. So, taking those lessons learned, you have a myriad of
choices, with software-defined block storage, with endurance-based storage, and with object
storage for long-term data archiving and retrieval.
All these lessons learned have been brought together along with the partners and our broad
partner ecosystem to bring together what we believe is a truly enterprise grade and enterprise
ready solution, which has been tested, validated, and verified by many of the enterprises out
there today.
VMware expertise at global scale, let's dive into that for a second or two. We're the largest
manager of workloads out there, with 850,000 VMs under management. We've migrated over a
hundred thousand workloads from on-premises into the cloud, helping our clients with their
data transformation. We have decades' worth of experience in managing these workloads and
providing these solutions across various industry verticals.
So these three things together are why we're unique in the market with our VMware solutions.
Thank you for watching this video today, and please feel free to leave any comments down
below. If you like this content, please, like and subscribe to future videos around this and many
other subjects on IBM Cloud.
Learn more about IBM Cloud for VMware Solutions on the IBM Cloud website.
Rosa Wang: Hello everyone. My name is Rosa Wang. I am a global alliance manager of Alibaba
Cloud.
Enterprises are adopting multi-cloud and hybrid cloud rapidly to address increasing business demands.
Today, I'll cover three major use cases for this solution, which include disaster recovery, data
center extension, and cloud migration. Before I get into that, I would like to introduce CT Dong,
who is the SDDC solution architect leader at VMware, to talk about some technical details of
this solution.
CT Dong: Hi, my name is CT Dong and I'm the cloud architect lead from VMware, China region.
So, today we will introduce to you the Alibaba Cloud VMware Solution. VMware's vision is to
allow any application to run on any cloud, on any device. To achieve that, VMware built
the leading service called VMware Cloud Foundation, which is composed of the very famous
VMware products: vSphere, the software-defined network, NSX, the software-defined storage,
vSAN, and the cloud management, vRealize.
VCF is not only built for enterprise private clouds in customer-owned data centers, but it is also
tied to every major public cloud globally, such as VMC on AWS, Microsoft Azure, Google
GCP, and of course, Alibaba Cloud in China. And even the same architecture can be the
foundation for edge computing.
So, you can see VMware Cloud Foundation enables consistent cloud infrastructure everywhere.
Today, hybrid cloud is becoming the preferred enterprise cloud adoption strategy. Research
shows the percentage of organizations committed to or interested in a hybrid cloud strategy
keeps growing, as does the mix of workloads moving from on-premises to cloud: 56% are doing
lift and shift and 44% are doing refactoring. So, if you do simple math, you can see lift and shift
of enterprise workloads is a multi-billion-dollar opportunity, and it's happening now. So,
VMware together with our public cloud partners can address these market
requirements very well.
So why do the top public clouds build a joint solution with VMware? There are a couple of
compelling reasons. First, VMware is the private cloud leader, with more than 15 million
workloads running on VMware vSphere. Second, the joint hybrid cloud solution allows
customers to do live migration without the cost, complexity, or risk caused by refactoring. And
last but not least, migrating legacy applications to the public cloud gives the customer the
opportunity to integrate public cloud services, such as AI and machine learning, with better
connectivity and at lower cost.
This is the general architecture we built for the VMware hybrid cloud. On the customer's
premises, the full VMware Cloud Foundation stack is built. And on the public cloud, with joint
engineering effort, we integrate the full stack VMware Cloud Foundation software on top of
public cloud infrastructure.
Let's drill down a little bit on the details of Alibaba Cloud VMware Solution. The part on the left
side shows the architecture on Alibaba Cloud. We preload the VMware software stack onto the
Alibaba Cloud bare-metal infrastructure. The part on the right side shows the architecture of
the customer data center. It is also a full stack
VMware solution, managed by either the customer themselves or by the management service
provider for the customer. In between the customer data center and the public cloud, we can
use Alibaba direct connect lines, or software-defined WAN network, or VPNs to make the
connections.
And by leveraging VMware Site Recovery Manager, SRM, or Hybrid Connect Extender, HCX,
products and technologies, customers can do disaster recovery or backup in a cost-effective
way. The good news is that Alibaba Cloud has launched the service in all regions in mainland
China and Hong Kong. Customers can choose the region close to their business, test it, use it,
and expand it. And Alibaba Cloud provides the first-line support and VMware provides the
second-line support, to make sure we maintain a high service level agreement.
A quick summary: Alibaba Cloud VMware Solution brings a lot of benefits to customers. One,
Alibaba Cloud VMware Solution is a true leading solution for hybrid cloud. It is jointly
engineered by two leading cloud providers. Two, the service is available now and easy to
access, which saves customers time to market. Three, because of the consistent architecture, it
is easy to migrate workloads to and from the public cloud. And the IT team can extend their
skills to manage the infrastructure and avoid a steep learning curve on the public cloud. Four,
by introducing Alibaba Cloud VMware Solution, customers can change the cost model from
CapEx to OpEx and start small and grow to large scale based on business needs. So everything
is ready. Check out the service now.
Rosa Wang: Thank you, CT, for the great explanation. In this slide, I will give you an overview of
Alibaba Cloud VMware Solution. It has four features I want to highlight. First, joint
development. VMware and the Alibaba Cloud engineering team work together to develop the
VMware SDDC version that runs on Alibaba Cloud, the bare-metal service and the VPC. The key
components include VMware vSphere and NSX. Later on, we will also add vSAN support.
It’s a bundle, which means the customer doesn't need to purchase a VMware license
separately. Both VMware SDDC software and Alibaba Cloud infrastructure, such as bare-metal
service, are available in a bundle together for ease of management and purchase.
Seamless integration: The customer can use the existing Alibaba Cloud to easily integrate
Alibaba Cloud VMware Solution with Alibaba compute, storage, network and other cloud native
services.
Last but not least, the same user experience: The customer can manage their VMware
workload on Alibaba Cloud through VMware vSphere Client, which connects to the vCenter on
Alibaba Cloud, which is exactly the same tool they use on-premise.
The first use case scenario I want to cover today is disaster recovery. In this scenario, your
production system runs on VMware environment in your local data center, and the DR site will
be deployed on Alibaba Cloud. So, you can replicate your existing VMware images and protect
your VMware workload from disaster, and recover easily from cloud backup.
In this case, you gain bi-directional workload portability between on-premises and Alibaba
Cloud VMware Solution. So, customers leverage the VMware Site Recovery Manager, SRM,
feature to replicate virtual machine images to Alibaba Cloud. When we say this is bi-directional, this
means the customer can replicate the image, not only from local data center to public cloud,
but also from Alibaba cloud to local data center, or from another cloud to Alibaba cloud, or vice
versa. And also, there are enough choices for the DR site, through Alibaba Cloud availability
zones and regions. Currently in mainland China and Hong Kong, there are nine regions
available. The data in the Alibaba Cloud backup can be saved to Alibaba Cloud object storage
services.
The second scenario is for data center extension. This case is for customers who want to
continue to maintain their existing local data center VMware environment, but also want to
leverage the elasticity and flexibility of the public cloud. So, customers can extend their existing
on-premises workloads to the cloud, allowing them to easily scale up and down by leveraging
elastic cloud compute capacity, while maintaining the same user experience as the VMware
environment in the local data center.
The last scenario I want to talk about today is migration. So, this really includes two scenarios. It
is for customers who have enterprise application workloads running on premises, who want to
move to the public cloud. We can call this lift and shift. So they can easily move traditional
applications to Alibaba Cloud without re-architecting the environment. It also includes net new
application development, such that net new applications can leverage cloud-native services on
Alibaba Cloud with a flexible, hybrid architecture. Enterprise workloads, such as
traditional business applications, ERP, CRM, SRM, or service automation in the VMware
environment can be easily moved to Alibaba Cloud without architecture change, which means it
can save you time and money.
We really appreciate the partnership with VMware. This is only the beginning of the
partnership journey. So we look forward to working with VMware closely to create more success.
Thank you.
Learn more about Alibaba Cloud VMware Solution on the Alibaba Cloud website.
True or False: In the VMware cloud partnerships with Alibaba, IBM, and Oracle, you must learn
to use new management tools because VMware tools are not integrated.
True
False
Learner Objectives
With on-premises cloud infrastructure as a service (IaaS), you can host an environment similar
to a public cloud, on premises.
But an on-premises cloud infrastructure seems to contradict a key principle of cloud services:
that they are provided off-site, in the cloud.
Organizations have different reasons for wanting to keep their workloads in their own data
centers.
VMware Cloud on Dell EMC and VMware Cloud on AWS Outposts are on-premises products
that you can use to extend the cloud model to the data center.
With this hybrid cloud option, you can continue operating your data centers without the
traditional capital-funded infrastructure refresh spend and the ongoing maintenance that is
typically required for physical data center infrastructure.
The fully managed on-premises infrastructure as a service offers a cloud-like monthly billing
model.
Service Features
VMware Cloud on Dell EMC provides infrastructure, VMware SDDC software, services such as
shipping, installation, and life cycle management, and support for security updates and
software patching, proactive monitoring, and break-fix service.
Hardware
• Each deployment includes a 42u rack with two VMware SD-WAN appliances. VMware uses these appliances to manage the solution remotely.
• An out-of-band management switch connects the Dell EMC VxRail hosts.

Software
• VMware Cloud on Dell EMC includes the SDDC stack: ESXi running on Dell EMC VxRail.

Services
• The services include shipping, both delivery and return.
• For installation, a Dell technician comes onsite to install the power and networking that is required to activate the system.
• For lifecycle management, the service includes all patching and upgrades for all hardware and software components.

Support
• Support is provided for all software components.
• If a problem occurs with a host, a four-hour mission-critical onsite fix is required.
• Global support centers provide full monitoring (24/365) and support.
For more information about the features of VMware Cloud on Dell EMC, access the Service
Description.
AWS delivers and installs the outpost at your on-premises location and monitors, patches, and
updates it. AWS handles all maintenance and replacement of the hardware.
VMware provides continuous life cycle management of the VMware SDDC and serves as your
first line of support.
VMware Cloud on AWS and VMware Cloud on AWS Outposts share the same infrastructure,
architecture, and operations.
The VMware SDDC runs on AWS Outposts bare metal delivered as-a-service on-premises.
You run applications and workloads on premises using familiar AWS services, tools, and APIs.
You can run some AWS services locally and connect to a broad range of services available in the
local AWS Region.
Which statement most accurately describes service features of VMware Cloud on AWS
Outposts? (Select one option)
The following checklists can help as you prepare to order your first SDDC.
Plan adequate space for the rack based on its dimensions: Verify that you have
accommodations for network cabling and power accessibility and enough space for
service and maintenance.
Verify that you have sufficient space and weight capacity onsite to maneuver the rack into
its designated position in the data center.
Ensure that the rack is not exposed to direct sunlight and that the site maintains the
specified temperature and humidity levels.
Plan for electrical power sources that meet the requirements of the rack.
VMware is not responsible for any delay in installation or any failure of the
hardware or the SDDC if the customer does not maintain the specified
environmental conditions at the installation site.
Networking Considerations
Verify that an existing network can handle multiple subnets and a router with Internet
connectivity can be connected to the rack.
During the ordering process, specify the IP addressing information for configuring the
management subnets.
Ensure that you provide the underlying networking details for the uplink network to
establish a connection between the SDDC and your network. An uplink connection is
required to migrate your workloads between the rack and your network.
Configure the number of uplink connections based on your requirements.
Accessing Specifications
• For a list of detailed specifications, access the data sheet for VMware Cloud on Dell EMC
• For a list of detailed specifications, access AWS Outposts rack hardware specs
If necessary, use the specifications information provided in the VMware Cloud on AWS
Outposts website and the VMware Cloud on Dell EMC datasheet to answer the following
questions.
1. Before you deploy VMware Cloud on AWS Outposts or VMware Cloud on Dell EMC at your
data center, you must plan and allocate a dedicated physical space for the hardware.
The operating temperature is within the required range with no direct sunlight on
the equipment.
Cabling and power sockets meet requirements in terms of location and number.
Power source locations are on the floor, rather than the ceiling, to avoid fire
hazards.
The power source location is close enough to the hardware so that you do not
require extension cords.
2. Which requirements must you meet for physically moving the outpost hardware into your
data center? (Select two options)
Weight capacity to move the hardware to its location in the data center.
Space clearance to move the hardware to its designated location in the data center.
Trained movers that you hire to manually lift the hardware into place.
3. Which networking requirements must you consider when deploying the outpost
hardware? (Select two options)
Hyperscaler cloud partners implement vSphere HA to provide high resiliency against the
potential failure of hosts in their data centers.
VMware Cloud on AWS is a VMware first-party solution. You can integrate SDDC clusters
with Amazon Web Services, such as Amazon Simple Storage Service, Amazon Elastic
Compute Cloud, and Amazon Relational Database Service.
Azure VMware Solution is a Microsoft service, verified by VMware, that runs on Azure
infrastructure. With this solution, you can move VMware workloads from your data center
to Azure and integrate your VMware environment with Azure.
Google Cloud VMware Engine brings VMware enterprise class SDDC software to the
Google Cloud Platform. You can run production applications across vSphere private,
public, and hybrid cloud environments, with optimized access to Google Cloud Platform
services.
With Elastic DRS, you can set policies to automatically scale your cloud SDDC by adding or
removing hosts in response to demand. Elastic DRS replaces VMware vSphere DPM in a
VMware Cloud on AWS SDDC.
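To illustrate the idea, the following Python sketch models a threshold-based scale-out and scale-in decision. The thresholds, host limits, and function name are hypothetical; they are not the actual Elastic DRS algorithm or its defaults.

# Hypothetical sketch of a threshold-based elastic scaling decision.
# The thresholds and names are illustrative only; they are not the
# real Elastic DRS algorithm or its default values.

def scaling_decision(cpu_util, storage_util, hosts, min_hosts=2, max_hosts=16):
    """Return 'scale-out', 'scale-in', or 'no-op' for a cluster."""
    SCALE_OUT_CPU, SCALE_IN_CPU = 0.90, 0.50   # assumed thresholds
    SCALE_OUT_STORAGE = 0.75                   # assumed threshold
    if (cpu_util > SCALE_OUT_CPU or storage_util > SCALE_OUT_STORAGE) and hosts < max_hosts:
        return "scale-out"   # add a host in response to demand
    if cpu_util < SCALE_IN_CPU and hosts > min_hosts:
        return "scale-in"    # remove an underutilized host
    return "no-op"

print(scaling_decision(0.93, 0.60, hosts=4))   # scale-out
print(scaling_decision(0.30, 0.40, hosts=4))   # scale-in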
When designing your SDDC, you must consider where your other cloud-native services live
and how to make the network connections to those services and to your on-premises
infrastructure.
Learner Objectives
This lesson reviews the basic concepts of the VMware vSphere virtual switches and VMware
NSX networking planes.
To connect VMs and ESXi hosts to the network, a virtual switch uses specific types of
connections, or ports: virtual machine, VMkernel, and uplink.
VM Ports
Virtual machine ports connect virtual machines to the virtual network.
VMkernel Ports
The ESXi hypervisor (VMkernel) uses VMkernel ports for managing infrastructure traffic.
VMkernel ports are used for traffic such as IP storage, vSphere vMotion migration, VMware
vSphere Fault Tolerance, VMware vSAN, VMware vSphere Replication, and the ESXi
management network.
Uplink Ports
Uplink ports connect the virtual network to the physical network.
Each uplink port is associated with a physical network adapter on the ESXi host.
A vSphere standard switch is a virtual switch that provides virtual networking for an ESXi host
and its virtual machines.
VMware vSphere® Distributed Switch™ is a virtual switch that provides virtual networking for
all ESXi hosts in a data center.
The distributed switch architecture consists of the control plane and the I/O plane.

Control Plane
The control plane resides in vCenter Server. It coordinates the migration of the ports and is responsible for the switch configuration.

I/O Plane
The I/O plane is implemented as a hidden virtual switch in the VMkernel of each ESXi host. This plane manages the I/O hardware on the host and is responsible for forwarding packets. vCenter Server oversees the creation of these hidden virtual switches.
Which statement accurately describes vSphere distributed switches? (Select one option)
A distributed switch is a virtual switch that is configured for a single ESXi host.
A standard switch is different from a distributed switch in that standard switches contain
VMkernel ports.
A distributed switch is managed by vCenter Server for all ESXi hosts associated with the
distributed switch.
Each ESXi host can have only one distributed switch configured at any time.
Networking Planes
Networks use the data forwarding process to carry user traffic from one device to another
device.
Networks include three layers or planes: management, control, and data. These planes
coordinate with each other to identify the best possible path between devices.
The main elements of NSX architecture are the management, control, and data planes. This
architectural separation lets you scale your environment without impacting workloads.
Although not part of NSX, an additional plane, called the consumption plane, provides
integration into a cloud management platform.
Management Plane
• Users manage, configure, and monitor the network devices, such as a switch or router.
• The network device usually provides a CLI or GUI for configuring the network and the
device. The CLI or GUI operates in the management plane.
In NSX, the management plane is designed with advanced clustering technology, which allows
the platform to process large-scale concurrent API requests. NSX Manager provides the REST
API and a web-based UI as the entry point for all user configurations.
Control Plane
• It calculates and determines the best path for a packet to navigate from one device to
another device. Routing protocols, such as BGP, OSPF, and RIP, primarily operate in this plane.
• After determining the best path, the control plane propagates this information to the data
plane.
The control plane is responsible for computing and distributing the runtime virtual networking
and security state of the NSX environment.
In NSX, the management plane and control plane are converged. Each manager node in NSX runs both management plane and control plane services.
Data Plane
The data plane, also called the forwarding plane, performs the following functions:
• Forwards the user traffic between the networking devices, such as switches or routers
• Carries the user traffic from one device to another device, which is the fundamental
function of a network
The control and management planes help the data plane to perform effective data forwarding.
In NSX, the data plane includes transport nodes. Transport nodes, such as ESXi hosts and NSX
Edge nodes, are responsible for the distributed forwarding of network traffic.
The data plane includes a virtual distributed switch managed by NSX-T (N-VDS), which
decouples the data plane from vCenter Server and normalizes the networking connectivity. The
ESXi hosts managed by vCenter Server can also be configured to use the vSphere Distributed
Switch (VDS) during the transport node preparation.
Although the consumption plane is not part of NSX-T Data Center, this plane provides
integration into cloud management platforms through the REST API and integration with
VMware cloud management planes such as vRealize Automation:
• The consumption of NSX-T Data Center can be driven directly through the NSX UI.
• Typically, end users tie network virtualization to their cloud management plane for
deploying applications.
Learner Objective
After completing this lesson, you should be able to:
NSX provides consistent networking and security for cloud SDDCs and the on-premises
SDDC.
In NSX, segments connect VMs and containers regardless of their physical location. A segment,
also known as a logical switch, reproduces switching functionality in an NSX virtual
environment.
VMs communicate with each other when connected to the same segment. For example, you
can connect all web server VMs to the same segment so they can communicate with each
other and exchange information.
Containers
NSX segments provide connectivity for containerized applications.
Segment Profiles
Segment profiles include layer 2 networking configuration details. Segment profiles can be
applied at a port level or at a segment level.
You can configure multiple types of segment profiles such as IP Discovery, Spoof Guard,
Segment Security, and MAC Discovery.
Segment
The NSX-T Data Center logical switches are called segments:
• Segments separate networks and provide layer 2 connectivity to their attached VMs and
containers.
• VMs and containers can communicate with each other if they are connected to the same
segment.
• Each segment has a virtual network identifier (VNI), which is similar to a VLAN ID.
However, unlike VLANs, VNIs scale beyond the limits of VLAN IDs, as the sketch below illustrates.
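As a quick illustration of the scale difference, the following Python sketch compares the two identifier spaces (VLAN IDs are 12-bit values; GENEVE VNIs are 24-bit values):

# VLAN IDs are 12-bit values; GENEVE VNIs are 24-bit values.
VLAN_ID_BITS = 12
VNI_BITS = 24

usable_vlans = 2 ** VLAN_ID_BITS - 2   # IDs 1-4094 (0 and 4095 are reserved)
possible_vnis = 2 ** VNI_BITS          # roughly 16.7 million identifiers

print(f"Usable VLAN IDs: {usable_vlans}")   # 4094
print(f"Possible VNIs:   {possible_vnis}")  # 16777216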
In vSphere 7 environments, ESXi hosts can use both N-VDS and VDS for layer 2 forwarding.
Transport Node
A transport node, such as an ESXi host, is responsible for forwarding the data plane traffic that
originates from VMs, containers, or applications running on bare-metal servers.
Uplinks
Uplinks are logical interfaces on the N-VDS/VDS.
Uplinks are used to connect the host physical NICs to provide external connectivity.
The GENEVE protocol provides L2 over L3 encapsulation of data plane packets. VM frames are
encapsulated with GENEVE tunnel headers and sent across the tunnel.
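To make the encapsulation concrete, here is a simplified Python sketch of wrapping a VM's L2 frame in a GENEVE tunnel packet. The field layout is heavily simplified (real GENEVE headers carry flags and options); GENEVE runs over UDP port 6081, and the outer addresses used below are made-up tunnel endpoint addresses.

# Simplified sketch of GENEVE L2-over-L3 encapsulation. Real GENEVE
# headers carry more fields; this only shows the wrapping of a VM's
# L2 frame for transport between tunnel endpoints.

from dataclasses import dataclass

@dataclass
class EthernetFrame:            # the original VM traffic
    src_mac: str
    dst_mac: str
    payload: bytes

@dataclass
class GenevePacket:             # what actually crosses the underlay
    outer_src_ip: str           # source tunnel endpoint
    outer_dst_ip: str           # destination tunnel endpoint
    udp_dst_port: int           # 6081 for GENEVE
    vni: int                    # identifies the segment
    inner_frame: EthernetFrame  # the encapsulated L2 frame

frame = EthernetFrame("00:50:56:aa:bb:01", "00:50:56:aa:bb:02", b"app data")
packet = GenevePacket("10.0.1.10", "10.0.1.20", 6081, 67584, frame)
print(f"VNI {packet.vni}: {frame.src_mac} -> {frame.dst_mac}")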
VMs can communicate with each other if they are connected to the same segment.
But VMs might also need to communicate with VMs on different segments and with the
Internet.
Tier-1 Gateways
Tier-1 (T1) gateways are typically used to connect VMs and containers that are attached to
different networks or segments.
Tier-0 Gateways
Tier-0 (or T0) gateways connect the virtual and physical networks to provide external connectivity.
Communication between the cloud SDDC and external networks, such as on-premises data
centers, the Internet, or public cloud services, is called north-south traffic.
• Require the deployment of one or more VMware NSX® Edge™ nodes to centrally
configure and manage the routing capabilities.
• Support static and dynamic routing protocols (BGP) toward the physical network.
• Support equal-cost multipath (ECMP) routing to load balance traffic and provide fault
tolerance.
Which use cases apply to NSX logical routing? (Select two options)
Two NSX Edge nodes are created for high availability. NSX Edge nodes run in active-passive
mode, and the failover is handled by the NSX Edge nodes themselves.
Two NSX Edge node appliances are created during an SDDC deployment. Although not pictured here, each NSX
Edge node is connected to a different management segment to make the edge services highly available.
Features that are typically used when setting up network connectivity are DHCP and NAT.
What is DHCP?
With DHCP (Dynamic Host Configuration Protocol), clients can automatically obtain network
configuration settings such as IP addresses, subnet masks, default gateways, and DNS
configuration from a DHCP server.
DHCP makes it easier to manage IP addresses because IP addresses are assigned automatically
rather than manually. DHCP ensures that each client is assigned a unique IP address.
In the VMware Cloud SDDC, you can configure a DHCP server or a DHCP relay.
DHCP Server
A DHCP server handles DHCP requests from VMs that are attached to segments. The VM
becomes the DHCP client.
DHCP Relay
A DHCP relay forwards DHCP requests from VMs to external DHCP servers.
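The following Python sketch illustrates the basic idea of DHCP-style automatic assignment from a pool. It is a conceptual model only, not the actual DISCOVER/OFFER/REQUEST/ACK protocol exchange, and the class name and addresses are made up.

# Conceptual sketch of DHCP-style automatic address assignment.
# A real DHCP exchange (DISCOVER/OFFER/REQUEST/ACK) is more involved;
# this only shows why automatic assignment yields unique addresses
# plus consistent network settings. Addresses are made up.

import ipaddress

class TinyDhcpServer:
    def __init__(self, subnet, gateway, dns):
        net = ipaddress.ip_network(subnet)
        self.pool = iter(net.hosts())
        next(self.pool)                      # reserve the first host IP for the gateway
        self.settings = {"subnet_mask": str(net.netmask), "gateway": gateway, "dns": dns}
        self.leases = {}

    def request(self, client_mac):
        if client_mac not in self.leases:    # each client gets a unique address
            self.leases[client_mac] = str(next(self.pool))
        return {"ip": self.leases[client_mac], **self.settings}

server = TinyDhcpServer("192.168.1.0/24", "192.168.1.1", "10.0.0.53")
print(server.request("00:50:56:aa:bb:01"))   # 192.168.1.2 plus settings
print(server.request("00:50:56:aa:bb:02"))   # 192.168.1.3 plus settings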
What is NAT?
NAT (network address translation) is a mechanism that maps private IP addresses to public IP
addresses.
Source NAT
SNAT translates source IP packets from a private IP address to a known public IP address.
SNAT is used for traffic originating in the private network and reaching the Internet.
SNAT is automatically applied to all workloads in the SDDC to enable Internet access.
Destination NAT
DNAT is used for traffic originating on the Internet and reaching the private network.
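A minimal Python sketch of the two translations, using made-up addresses, might look like this:

# Minimal sketch of SNAT and DNAT. All addresses are made up
# (203.0.113.0/24 is a documentation range).

SNAT_PUBLIC_IP = "203.0.113.10"   # shared egress address for SDDC workloads
DNAT_RULES = {("203.0.113.25", 80): ("192.168.1.2", 80)}   # public -> private

def snat(src_private_ip):
    # Outbound: traffic from the private network has its source rewritten.
    return SNAT_PUBLIC_IP

def dnat(dst_public_ip, dst_port):
    # Inbound: traffic from the Internet has its destination rewritten.
    return DNAT_RULES.get((dst_public_ip, dst_port))

print(snat("192.168.1.2"))        # 203.0.113.10
print(dnat("203.0.113.25", 80))   # ('192.168.1.2', 80)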
The VMware Cloud on AWS SDDC includes management and compute segments.
Management Segments
Management segments handle traffic from your infrastructure, or management systems, such
as the vCenter Server appliance, NSX Manager appliance, Edge Node appliances, and ESXi
hypervisors.
In the VMware Cloud SDDC, management nodes are connected to management segments.
Management nodes include vCenter Server, NSX Manager, ESXi hypervisors, and NSX Edge
node appliances. Add-on services deploy other management appliances to the management
segments.
Compute segments handle traffic from your workload systems. Workload VMs and containers
can be connected to one or more network segments.
In this example, the app servers are connected to App-Segment, the web servers are
connected to Web-Segment, and the database servers are connected to DB-Segment.
In a VMware Cloud on AWS SDDC, management segments are created and managed
by VMware. Also, a default compute segment is created by VMware. You can create
additional compute segments if necessary.
In a VMware Cloud on AWS SDDC, VMs, containers, appliances, nodes, and servers are split
between two types of Tier-1 gateways: management and compute.
The compute gateway (CGW) handles network traffic from workload VMs and containers.
The Tier-0 gateway provides external connectivity to all the containers and VMs that run in
the VMware Cloud on AWS SDDC
You can create additional compute gateways in your SDDC. Use cases for multiple compute
gateways include the following:
Routed CGW
A routed CGW is connected to the NSX overlay network. Workload VMs behind a routed CGW
can communicate with other CGW workloads (including the workloads on the default CGW).
You can configure route aggregation to enable routed CGW workloads to communicate over
VMware Transit Connect/ AWS Direct Connect (Intranet endpoint) or Connected VPC (Services
endpoint).
Only the explicitly configured addresses in route aggregation prefix lists are advertised
externally, giving you fine-grained control over reachability to workloads on additional CGWs.
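The effect of a route aggregation prefix list can be sketched as a simple containment check: only segments that fall inside an explicitly configured prefix are advertised. The prefixes below are illustrative.

# Sketch of route-aggregation behavior: only workload prefixes that
# fall inside an explicitly configured aggregation prefix are
# advertised externally. Prefixes are illustrative.

import ipaddress

cgw_segments = ["10.10.1.0/24", "10.10.2.0/24", "10.20.1.0/24"]
aggregation_prefix_list = ["10.10.0.0/16"]   # what we choose to advertise

def is_advertised(segment):
    net = ipaddress.ip_network(segment)
    return any(net.subnet_of(ipaddress.ip_network(p)) for p in aggregation_prefix_list)

for seg in cgw_segments:
    print(seg, "advertised" if is_advertised(seg) else "not advertised")
# 10.10.1.0/24 and 10.10.2.0/24 are advertised; 10.20.1.0/24 is not.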
NATted CGW
A NATted CGW requires NAT to be configured to ensure connectivity to the SDDC NSX overlay
network.
As with routed CGWs, workloads on NATted CGWs can communicate externally when using
route aggregation. Addresses behind the NATted CGW are not advertised, so overlapping CIDRs
can be created in the SDDC.
This capability is useful when supporting tenants or applications with overlapping IP addresses.
You can avoid renumbering (re-IP'ing) your applications when you migrate them to the cloud,
saving a significant amount of time, effort, and risk.
Isolated CGW
The isolated CGW serves as a local router without connectivity to the rest of the SDDC
networks or to the external environment. Workload VMs on isolated CGW subnets can
communicate among themselves but not to VMs on other CGWs.
The isolated CGW configuration is often used to simplify certain advanced use cases.
Which types of gateways can you find in the VMware Cloud on AWS SDDC? (Select two
options)
Control
Compute
Standard
Management
Distributed
Learner Objectives
For more information about configuring networking for other hyperscaler partners, you can
access the following resources:
You use the VMware Cloud console to configure and manage your NSX network configuration.
On the Networking & Security tab, you perform all networking configurations, with the
exception of connecting VMs to network segments.
Compute segments provide network access to your workload VMs. Compute segments are also
referred to as logical networks.
To add a segment, you give the segment a name, specify the segment type, and enter the
subnet. The subnet must be an IPv4 CIDR block.
A Classless Inter-Domain Routing (CIDR) block is a method for allocating IP addresses and
routing IP traffic.
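For example, Python's standard ipaddress module can validate an IPv4 CIDR block and show what it contains:

# Validating an IPv4 CIDR block with the Python standard library.

import ipaddress

segment_subnet = ipaddress.ip_network("192.168.1.0/24")

print(segment_subnet.network_address)    # 192.168.1.0
print(segment_subnet.netmask)            # 255.255.255.0
print(segment_subnet.num_addresses)      # 256
print(list(segment_subnet.hosts())[:3])  # first few usable host addresses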
A VMware Cloud on AWS SDDC starts with a single default compute segment called sddc-cgw-
network-1.
The VMware Cloud on AWS SDDC supports the following types of compute segments: routed,
extended, and disconnected.
Routed Segment
• A routed segment is the default type. It has connectivity to other segments in the SDDC
and, through the SDDC firewall, to external networks.
Extended Segment
• An extended segment requires a layer 2 virtual private network (VPN), which provides a
secure communications tunnel between an on-premises network and one in your cloud
SDDC.
• An extended segment extends an existing L2 VPN tunnel, providing a single IP address
space that spans the SDDC and an on-premises network. An L2 VPN connection can be
used to migrate running VMs between SDDCs.
Disconnected Segment
• A disconnected segment has no uplinks associated with it and provides an isolated
network accessible only to VMs connected to it.
• This segment type can be useful for testing a disaster recovery solution. You can create
disconnected segments and use a VM-based router to provide internal connectivity
between the isolated networks. You can then verify that workloads and applications
connected to these isolated networks function as expected.
• Disconnected segments are created when needed by VMware HCX®. You can also create
them and convert them to other segment types.
Configuring DHCP
To configure DHCP, you must first create a DHCP profile. The DHCP profile identifies whether
you are using a DHCP server or DHCP relay.
After creating the profile, you assign the profile to either a segment or a Tier-1 gateway.
Which task do you perform before configuring the DHCP server on the compute gateway?
(Select one option)
Configuring SNAT
Source NAT (SNAT) is automatically configured when deploying a VMware Cloud on AWS SDDC.
The public IP address used by SNAT appears in the Overview pane under Default Compute
Gateway.
For outbound requests, by default, the workloads of the compute network use a dedicated NAT IP
address, shown as the Source NAT Public IP in the Overview pane.
Configuring DNAT
In the VMware Cloud console, you can create DNAT rules to forward traffic from external,
public IP addresses to internal, private IP addresses.
In the DNAT rule, you must specify the public IP address for the VM and the internal IP address
of the VM. The public IP address is exposed to external networks and the internal IP address is
private to the compute network.
You can request a public IP address from AWS to assign to a workload VM:
On the Networking & Security tab, click Public IPs under System and click REQUEST NEW IP.
VMware Cloud on AWS provisions the IP address from AWS. Public IP addresses might incur
additional charges.
As a best practice, release the public IP addresses that are not in use.
In this example, you create a DNAT rule to direct HTTP traffic from the public IP address to the
VM whose internal IP address is 192.168.1.2. The name of this VM is Photo-App.
To create the NAT rule:
1. Enter the NAT rule name.
2. Enter the public IP address of the VM.
This is the public IP address that you requested earlier.
3. From the Service drop-down menu, select HTTP.
The public port automatically populates when you select the service.
Selecting a specific service such as HTTP, instead of All Traffic, creates an inbound (DNAT)
rule that applies only to traffic using that protocol and port.
4. Enter the internal IP address of the VM.
5. Click SAVE.
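Expressed as plain data, the rule from this example might look like the following sketch. The field names are illustrative and do not reflect the exact NSX API schema; 203.0.113.25 is a documentation-range placeholder for the requested public IP.

# The DNAT rule from this example, expressed as plain data. Field
# names are illustrative, not the exact NSX API schema.

dnat_rule = {
    "name": "Photo-App-HTTP",
    "action": "DNAT",
    "public_ip": "203.0.113.25",    # stands in for the IP requested from AWS earlier
    "service": "HTTP",              # selecting the service sets the public port
    "public_port": 80,
    "internal_ip": "192.168.1.2",   # the Photo-App VM
}

def rule_matches(rule, dst_ip, dst_port, protocol):
    # Only traffic for this protocol and port on the public IP is translated.
    return (dst_ip == rule["public_ip"]
            and dst_port == rule["public_port"]
            and protocol == rule["service"])

print(rule_matches(dnat_rule, "203.0.113.25", 80, "HTTP"))    # True
print(rule_matches(dnat_rule, "203.0.113.25", 443, "HTTPS"))  # False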
Learner Objectives
How do you maintain network security across private and public clouds?
Using NSX, you can set up gateway and distributed firewalls to protect your data center from
both external and internal threats.
Gateway Firewalls
A stateful firewall monitors the state of active connections and uses this information to
determine which packets to allow through the firewall. Stateful firewall rules allow or
deny traffic based on the source, destination, and protocol or port combination of the
packet.
• They are independent of the distributed firewall in terms of policy and enforcement.
Distributed Firewalls
With a distributed firewall, you can define and enforce network security policies for every
individual workload in the environment.
Which statements accurately describe gateway firewalls and distributed firewalls? (Select two
options)
Applying micro-segmentation, security administrators build security controls for each individual
workload based on its application requirements.
Micro-segmentation denies attackers the opportunity to pivot laterally in the internal network,
even after the gateway firewall is breached.
NSX micro-segmentation uses existing network infrastructure and prevents the lateral spread of
threats across an environment.
Which statements do you think accurately describe how micro-segmentation works in this
example? (Select three options)
• Logically divides a data center into distinct security segments to the individual workload
level
• Defines distinct security controls for, and delivers services to, each unique segment
• Attaches the centrally controlled and operationally distributed firewalls directly to each
VM
The zero-trust model trusts nothing and verifies everything. It establishes a security perimeter
around each VM or container workload using a dynamically defined policy.
These firewalls examine all traffic into and out of the SDDC.
The compute (tier-1) gateway firewall allows or denies network traffic to the workload VMs.
Learner Objectives
After completing this lesson, you should be able to:
For more information about configuring network security for other hyperscaler partners, you
can access the following resources:
In the VMware Cloud on AWS SDDC, you configure firewall rules on the Tier-1 gateways:
Management and Compute
By default, the management gateway firewall blocks traffic to all management network
destinations from all sources. The rule called Default Deny All drops all network traffic.
You must add rules to allow secure traffic from trusted sources. For example, you should create
a rule that allows VMware vSphere® Client™ users to access VMware vCenter Server®.
The rule called vCenter Inbound is an example of such a rule. The vCenter Inbound rule allows
HTTPS traffic from MgmtGroup to vCenter Server. MgmtGroup is a group of IP addresses from
which you plan on using vSphere Client.
Add compute gateway firewall rules to allow traffic as needed. These rules specify actions to
take on network traffic from a specified source to a specified destination.
Firewall rules are sets of instructions that determine whether the network traffic should be
blocked or allowed based on specific criteria.
All firewall rules can send logs to VMware vRealize® Log Insight Cloud™, if logging is enabled.
In the demonstration, a firewall rule is created for the compute gateway in a VMware Cloud on
AWS SDDC. This rule enables access to the Photo-App application. The rule allows HTTP traffic
from any source to the public IP address of the Photo-App VM.
Custom Services
Firewall rules often apply to traffic from a network service. Many services are defined by
default.
In this demonstration, you create a custom service to use with VMware Cloud on AWS firewall
rules. This service is for Amazon EFS, using port 2049.
1. In the VMware Cloud console browser tab, navigate to the SDDC summary page.
2. Click the Networking & Security tab.
3. Under Inventory, click Services.
4. Create a custom service for Amazon EFS connectivity using port 2049.
a. Click ADD SERVICE.
b. Enter AWS-EFS for the Name of the service.
c. Click Set Service Entries.
d. On the Port-Protocol tab, click ADD SERVICE ENTRY.
e. Enter EFS for the Service Entry Name.
f. In the Service Type drop-down menu, select TCP.
As an administrator, you want to be able to access your vCenter Server instance using the
vSphere Client. Which option must you create to allow this access? (Select one option)
A custom service
A compute gateway firewall rule
A management gateway firewall rule
Distributed firewall rules are grouped into policies, and policies are organized into categories.
Each category can contain one or more policies. Each policy can contain one or more rules.
On the Networking & Security tab, you can view, add, edit, and remove policies and their rules
in the Distributed Firewall pane.
• The All Rules tab is a read-only view of the policies and their rules.
• The Category Specific Rules tab (shown here) lets you view, add, and remove policies.
• Five categories are available. To add a policy to a category, you must first select a category
in this row.
In this example, the Application category is selected. This category contains seven rules,
indicated by the number in parentheses.
The number of rules in each policy is identified by the number in parentheses. For
example, 3-TIER POLICY contains three rules.
A policy can also apply to DFW, which means that the policy applies to all workloads. Or
the policy can apply to a specific group of VMs or containers.
Firewall rules are enforced in the categories, from left to right (Ethernet > Emergency >
Infrastructure > Environment > Application), and top to bottom in each category.
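This enforcement order can be sketched as a first-match walk over the categories and their rules. The rules below are illustrative; note how a rule in the Emergency category takes effect before one in the Application category.

# Sketch of distributed-firewall evaluation: categories are walked left
# to right, rules top to bottom within each category, and the first
# matching rule wins. Rule definitions are illustrative.

CATEGORY_ORDER = ["Ethernet", "Emergency", "Infrastructure", "Environment", "Application"]

rules = {  # category -> list of (name, target VM, service, action)
    "Emergency":   [("quarantine-web01", "web01", "ANY", "DROP")],
    "Application": [("allow-web", "web01", "HTTP", "ALLOW")],
}

def evaluate(vm, service):
    for category in CATEGORY_ORDER:
        for name, target, svc, action in rules.get(category, []):
            if target == vm and svc in (service, "ANY"):
                return f"{action} (rule '{name}' in {category})"
    return "default rule applies"

# The Emergency rule wins even though an Application rule would allow HTTP:
print(evaluate("web01", "HTTP"))   # DROP (rule 'quarantine-web01' in Emergency)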
One of your application VMs is compromised, and you want to temporarily block all traffic to
and from this VM so you can resolve the issue.
Into which category should you place this rule? (Select one option)
Ethernet
Emergency
Infrastructure
Environment
Application
Learner Objectives
• Recognize the options for connecting on-premises data centers and cloud SDDCs
In a multi-cloud environment, you want to use the most appropriate and secure options for
connecting your cloud environments, whether you're connecting an on-premises
environment to a public cloud, or you're connecting between public clouds.
You can connect cloud SDDCs in different ways and enable workloads to communicate in a
secure manner.
For workloads to communicate with each other, you must choose an appropriate connection to
use between your on-premises data center and your cloud SDDC, or between cloud SDDCs
(cloud to cloud).
• Public Internet Connection - For public applications that share data publicly
• Private IPsec VPN - To securely connect cloud SDDCs and on-premises data centers
• Private L2 VPN - To migrate running VMs between SDDCs in different geographical locations
• High Bandwidth, Low Latency Connection - VMware and its hyperscaler partners provide
connectivity solutions for high-bandwidth, highly available, secure, low-latency connections
This example shows a public Internet connection between an on-premises data center and a
VMware Cloud on AWS SDDC. The connection is over the Internet and through the Internet
gateway provided by AWS.
Public Internet connection between on-premises data center and a VMware Cloud on AWS SDDC
You can create a public Internet connection to a cloud SDDC by performing the following steps:
You must perform these steps, and others, if necessary, for the on-premises SDDC.
If you have a VMware Cloud on AWS SDDC, consider using IPsec VPN when you require
connectivity to an SDDC and you do not have an AWS Direct Connect in the desirable region,
but the region has reliable Internet.
Performance requirements should be no greater than 3 to 4 Gbps peak total in both directions,
with some tolerance for latency.
IPsec VPNs can be route-based and policy-based. Either type of VPN provides a secure
connection to your SDDC over the Internet.
• Route-based
○ A route-based VPN creates an IPsec tunnel interface and routes traffic through it as
dictated by the SDDC routing table.
A route-based VPN provides resilient, secure access to multiple subnets. When you
use a route-based VPN, new routes are added automatically when new networks are
created.
Routes are learned dynamically over a special interface called virtual tunnel
interface (VTI) using Border Gateway Protocol (BGP). BGP is a dynamic routing
protocol used to exchange routes.
• Policy-based
○ A policy-based VPN creates an IPsec tunnel and a policy that specifies how traffic
uses it.
A policy-based VPN can be an appropriate choice when you have only a few
networks on either end of the VPN, or if your on-premises network hardware does
not support BGP.
When you use a policy-based VPN, you must update the routing tables on both ends
of the network when new routes are added.
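The operational difference between the two VPN types can be sketched as follows; the network values mirror this lesson's example, and the data structures are illustrative only.

# Sketch of the operational difference between the two VPN types.
# Policy-based: protected networks are enumerated statically.
# Route-based: routes are learned dynamically over BGP on the VTI.

policy_based_vpn = {
    "remote_networks": ["172.20.10.0/24", "172.20.11.0/24"],  # static selectors
}

route_based_vpn = {
    "learned_routes": [],   # populated by BGP at runtime
}

def bgp_advertise(prefix):
    """A new on-premises network appears: the route-based VPN picks it
    up automatically, while the policy-based VPN needs a manual edit
    on both ends."""
    route_based_vpn["learned_routes"].append(prefix)

bgp_advertise("172.20.12.0/24")
print(route_based_vpn["learned_routes"])     # ['172.20.12.0/24']
print(policy_based_vpn["remote_networks"])   # unchanged until edited manually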
In this example, a policy-based IPsec VPN is created between the Tier-0 gateway in a VMware
Cloud on AWS SDDC and the VyOS gateway appliance on premises.
Policy-based IPsec VPN between the Tier-0 gateway in a VMware Cloud on AWS SDDC and the VyOS
gateway appliance on premises
In this demonstration, a policy-based IPsec VPN is created to allow a VMware Cloud on AWS
SDDC (called demo01) to securely connect over the Internet to the on-premises data center.
The connection is established from the T0 gateway in the demo01 SDDC to the VyOS gateway in
the on-premises SDDC.
1. In the VMware Cloud console browser tab, navigate to the SDDC summary page.
2. On the Networking & Security tab, click VPN under Network.
3. Select the Policy Based tab.
4. Create a policy-based VPN.
a. Click ADD VPN.
b. Enter On-Prem-VPN for the VPN Name.
c. In the Local IP Address drop-down menu, select Public IP1.
d. In the Remote Public IP text box, enter the on-premises public IP address that you
recorded to your text file earlier.
e. In the Remote Networks text box, enter 172.20.10.0/24 and click Add Item(s).
f. In the Remote Networks text box, enter 172.20.11.0/24 and click Add Item(s).
g. For Local Networks, select sddc-cgw-network-1 and select Infrastructure Subnet.
h. Enter VMware1! in the Preshared Key text box.
i. Enter 172.20.0.254 in the Remote Private IP text box.
j. In the IKE Type drop-down menu, select IKE V1.
k. Click SAVE.
Private L2 VPN
You use a private layer 2 (L2) VPN to extend an on-premises network to your cloud SDDC. This
extended network is a single subnet with a single broadcast domain.
You can use L2 VPNs to migrate VMs to and from your cloud
SDDC, for disaster recovery, or for dynamic access to cloud
computing resources (often called cloud bursting).
Example L2 VPN
In this example, an L2 VPN is created between the Tier-0 gateway in a VMware Cloud on AWS
SDDC and the autonomous NSX Edge appliance in the on-premises data center.
An autonomous NSX Edge appliance is simple to deploy and provides a high-performance VPN.
You do not need NSX on premises to use an L2 VPN. You can download the autonomous NSX
Edge appliance and configure it as the client-side component of your L2 VPN.
You want to migrate a VM (using vSphere vMotion) across SDDCs and allow this VM to keep the
same IP address. Which connection type should you use? (Select one option)
Private L2 VPN
Private route-based IPsec VPN
Private policy-based IPsec VPN
Public Internet connection
VMware and its hyperscaler partners provide connectivity solutions that are highly available,
secure, high bandwidth, and low latency:
Azure ExpressRoute
For information on ExpressRoute, see the Azure VMware Solution networking and
interconnectivity concepts section in the Azure VMware Solution documentation.
Learner Objectives
When connecting VMware Cloud on AWS SDDCs, you can use the following solutions,
depending on your goals:
Rather than using only a VPN tunnel over the public Internet, AWS Direct Connect (DX) uses a
dedicated leased connection (private line) to connect the on-premises data center to an AWS
DX location.
Ports are available at speeds of 1 Gbps and 10 Gbps, and you can order multiple ports.
AWS DX charges per port hour (charges vary per port speed) and per gigabyte of data
transferred, both in and out. Charges vary between locations. Pricing does not include the cost
of the dedicated network connection.
With AWS DX, network traffic is isolated and bandwidth is, potentially, increased between the
on-premises data center and AWS resources.
Examples
Japan
An AWS DX service in Japan includes the following connections:
USA
An AWS DX gateway is used to logically extend an AWS DX connection from one AWS region to
another without creating an extra private connection to AWS.
For example, an AWS DX gateway service to multiple AWS regions in the United States includes
these connections:
• An on-premises data center in Palo Alto connects through a dedicated line to an AWS DX
location in Portland.
• The AWS DX location in Portland connects through a dedicated line to the AWS region in
Oregon.
• The AWS region in Oregon is connected through an AWS DX gateway to the AWS region in
northern Virginia.
For more information about partners, access the AWS Direct Connect Delivery Partners
webpage.
For more information about locations, access the AWS Direct Connect Locations webpage.
With AWS Direct Connect, you must identify the type of connection to use. You can use either
dedicated ports or hosted connections.
Dedicated ports
Dedicated ports provide the highest port speed that is available. These ports are assigned and
dedicated to a single customer.
You can use multiple virtual interfaces to load-balance your traffic across the aggregated links.
The possible values for port speed are 1 Gbps, 10 Gbps, and 100 Gbps.
Hosted Connections
Hosted connections are provided by an AWS DX partner and have defined bandwidth and
VLANs.
You get a single virtual interface rather than multiple virtual interfaces.
The possible values are 50 Mbps, 100 Mbps, 200 Mbps, 300 Mbps, 400 Mbps, 500 Mbps, 1
Gbps, 2 Gbps, 5 Gbps, and 10 Gbps. Only those AWS DX partners who have met specific
requirements may create a 1 Gbps, 2 Gbps, 5 Gbps or 10 Gbps hosted connection.
This type of connection is simpler and easier to use, especially if you don't require a 1 Gbps
connection or cannot assume the full cost of a dedicated port.
Routing Protocol
BGP runs on TCP port 179. BGP neighbors exchange routing information over a peering
session.
BGP is the only protocol that AWS supports for exchanging routes. Static routing is not
allowed.
Router A and Router B advertise routes to each other so that Autonomous Systems 64512 and
65001 can communicate with each other.
An autonomous system is a collection of networks, or more precisely, the routers joining these
networks, that are under the same administrative authority and that share a common routing
strategy.
All route-based VPNs in the SDDC default to Autonomous System Number (ASN) 65000, so you
must change the local ASN. The local ASN must be different from the remote ASN.
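As a minimal sketch of these constraints, the following Python check assumes both peers use private 2-byte ASNs (64512 through 65534, a common but not required choice) and enforces that the local and remote ASNs differ:

    SDDC_DEFAULT_ASN = 65000  # default for route-based VPNs in the SDDC

    def validate_bgp_asns(local_asn: int, remote_asn: int) -> None:
        # Assumes private 2-byte ASNs; public ASNs are handled differently.
        for name, asn in (("local", local_asn), ("remote", remote_asn)):
            if not 64512 <= asn <= 65534:
                raise ValueError(f"{name} ASN {asn} is outside the private range 64512-65534")
        if local_asn == remote_asn:
            raise ValueError("local and remote ASNs must differ")

    validate_bgp_asns(64512, 65001)  # the Router A and Router B example above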
You want to connect from your on-premises data center to your VMware Cloud on AWS SDDC.
You want to use the highest port speeds available and you want to load-balance traffic over
multiple virtual interfaces.
Dedicated ports
Hosted Connections
In the Japan data centers example, the on-premises data center in Kobe connects through a
dedicated, private line to an AWS DX location in Osaka.
Both private and public AWS Direct Connect connections are used to connect between data
centers:
• The blue line (B) represents a private AWS Direct Connect connection, which can be used
for AWS resources. However in this case, the connection is used to securely connect to
the VMware Cloud on AWS SDDC.
• The green line (G) represents a public AWS Direct Connect connection, used for dedicated
and, potentially, faster access to AWS public resources.
Private and public VIFs establish private dedicated connections to the AWS backbone.
The private VIF connects the on-premises data center through an AWS DX connection into the
private VPCs where the SDDCs are located.
After you request a link from the AWS partner (Equinix, in this example), you can view the
connection in the AWS Direct Connect console window.
After you accept the configured connection, the status changes to pending until the connection
is approved and initialized.
When the approval process is complete, the status changes from pending to available.
Open the VMware Cloud SDDC console. In the Networking & Security tab, click Direct
Connect under System.
Verify that the State is Attached and the BGP Status is Up.
For further details on the functionality of AWS Direct Connect, see the user guide.
With a dedicated port connection, you can use multiple VIFs to load-balance traffic.
A private VIF uses a public IP address space and terminates at the customer VPC level.
With a hosted connection, you can use multiple VIFs for your 10G connections.
Both private and public VIFs establish private dedicated connections to the AWS
backbone.
VMware Transit Connect is a VMware managed connectivity solution between the VMware
Cloud on AWS SDDCs. With VMware Transit Connect, customers can build high-speed, resilient
connections between their VMware Cloud on AWS SDDCs and other resources.
• SDDC Groups: An SDDC group helps you to logically organize SDDCs together to simplify
management.
SDDC to SDDC
You can use the SDDC-to-SDDC model to create highly available SDDC-to-SDDC connectivity
across different AZs.
This topology shows three SDDCs in the same AWS region. Two of the SDDCs are members of
an SDDC group and can communicate through the high-speed VPC attachment created with the
VTGW.
SDDC to VPC
You can use the SDDC-to-VPC model to allow SDDC workloads to access AWS native services
across different native AWS VPCs.
This model supercharges the hybrid connectivity by reducing the reliance on VPNs to tie these
environments together.
This topology shows three SDDCs in the same AWS region. Two of the SDDCs are members of an SDDC group.
SDDC to On-Premises
You can use the SDDC to on-premises model to migrate or balance workloads from on-premises
to any of the SDDCs in the SDDC group.
This topology shows SDDC to on-premises connectivity. With VMware Transit Connect, a transit
VIF is used and can only be terminated between an AWS Direct Connect Gateway and a VTGW.
Direct Connect Gateways are not region-based but are a global construct, so you do not have
the same considerations for regional co-location that SDDCs and VPCs require.
Match each VMware Transit Connect connectivity model to its use case.
Learner Objectives
NSX provides monitoring tools that you can access from the VMware Cloud console or from the NSX
Manager UI:
• IPFIX
• Port mirroring
• Traceflow
IPFIX (Internet Protocol Flow Information Export) is a standard for the format and export of
network flow information for troubleshooting, auditing, or collecting analytics information.
You monitor network traffic on a logical network or segment. You can monitor the amount of
network traffic generated between two VMs. All flows from the VMs connected to that
segment are captured and sent to the IPFIX collector.
The IPFIX collector receives and stores the flow of packets from the VMs. The collector can be
located on a compute segment or in the on-premises data center.
You define the network segments to monitor and the IPFIX collector to use in the IPFIX profile.
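To make the idea of a flow concrete, the following Python sketch models a simplified IPFIX-style record. This is a hypothetical illustration; real IPFIX templates define many more information elements.

    from dataclasses import dataclass

    @dataclass
    class FlowRecord:
        # Simplified flow key (5-tuple) plus traffic counters.
        src_ip: str
        dst_ip: str
        src_port: int
        dst_port: int
        protocol: int  # 6 = TCP, 17 = UDP
        packets: int
        byte_count: int

    # One flow observed on a monitored segment and sent to the collector.
    print(FlowRecord("192.168.1.10", "192.168.1.20", 51514, 443, 6, 120, 98304))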
In this example, IPFIX is accessed from the VMware Cloud console. The IPFIX profile is configured to use
an IPFIX collector identified as Collector_1
You can configure IPFIX from the Networking & Security tab in the VMware Cloud console.
To configure IPFIX, you must add an IPFIX collector. You also create an IPFIX profile. The profile
identifies the objects to collect packets from. For example, you might want to collect packets
from VMs on a particular segment.
On the Networking & Security tab, click IPFIX under Tools, select the Collectors tab, and
click ADD COLLECTOR.
In the IPFIX pane, click the Switch IPFIX Profiles tab and click ADD SWITCH IPFIX PROFILE
1. Click Set.
2. In the Applied To window, select the Segment category.
The categories are Segment, Port, or Groups.
3. Select one or more segments that you want to collect packets from.
The IPFIX profile is applied to the selected objects.
4. Click APPLY.
5. Click SAVE.
View the network packet flow from the user interface of the IPFIX collector that you configured.
Port Mirroring
Using port mirroring, you can replicate and redirect all the traffic from a source.
• Troubleshooting: Analyze the traffic to detect intrusion and to debug and diagnose errors on the network.
Port mirroring includes a source group where the data is monitored and a destination group
where the collected data is copied to.
In this session example, the source group is the compute segment that the web servers are
connected to. The destination group contains one or more VMs running the Wireshark
software.
Wireshark is used to analyze the mirrored traffic from the web servers on the compute segment being monitored.
You can configure port mirroring on the Networking & Security tab in the VMware Cloud console.
To configure port mirroring, you create a port mirroring session. During the session, you
configure the direction of traffic being monitored, the source being monitored, and the
destination where the traffic is mirrored.
Under Source, click Set and select the port mirroring source.
Sources can be segments, segment ports, groups of VMs, or groups of virtual NICs.
Source group membership requires that VMs are grouped according to workload, such as a web
group or application group.
Destinations are groups of up to three IP addresses. You can use existing inventory groups or
create new ones.
Destination group membership requires that VMs are grouped according to IP addresses.
Click SAVE.
Traceflow
Traceflow observes a marked packet as it traverses the overlay network, and monitors the
packet until it reaches its destination.
You use Traceflow to inspect the path of a packet. With Traceflow, you can identify the path (or
paths) a packet takes to reach its destination or, conversely, where a packet is dropped along
the way.
Each entity reports the packet handling on input and output, so you can determine whether
issues occur when receiving a packet or when forwarding the packet.
You configure Traceflow on the Plan & Troubleshoot tab in the NSX Manager UI.
If you have a VMware Cloud on AWS SDDC, you access the NSX Manager UI from within the
VMware Cloud console.
To configure Traceflow, you specify the IP address type, the traffic type, the protocol, the
source, and the destination.
In the NSX-T user interface, click the Plan & Troubleshoot tab and click Traceflow.
For example, for unicast traffic, you can select VMs as the source and destination.
5. Click TRACE.
Summary
You can use Traceflow for visibility and self-serve troubleshooting. With Traceflow, you can
inspect the path of a packet from source to destination in the SDDC.
For hands-on experience, look to lab VMware Cloud on AWS - Advanced Networking
(HOL-2387-05-ISM) at https://labs.hol.vmware.com and complete the following:
VMware network virtualization can be achieved using vSphere standard switches, vSphere
distributed switches, and NSX distributed switches.
In the VMware Cloud SDDC, logical switching is achieved using management and compute
segments. Logical routing is achieved using a T0 gateway and management and compute
T1 gateways. Logical routing functionality is implemented in NSX Edge nodes.
Management and compute gateway firewalls are used to protect north-south traffic.
Distributed firewalls, which support micro-segmentation, protect east-west traffic.
VMware Cloud SDDCs can communicate with remote SDDCs using public Internet
connections, private IPsec VPNs, and private L2 VPNs. Also, a hyperscaler partner offers
high-performance connections. For example, VMware Cloud on AWS offers AWS Direct
Connect for high-speed, low-latency connections.
The VMware Cloud console and NSX Manager interfaces provide tools such as IPFIX, port
mirroring, and Traceflow to monitor, analyze, and troubleshoot networking in the SDDC.
Additional Resources
For information about configuring networking and security in VMware Cloud on AWS,
see VMware Cloud on AWS Networking and
Security at https://docs.vmware.com/en/VMware-Cloud-on-AWS/services/vmc-on-aws-
networking-security.pdf.
For information about networking and security using NSX-T Data Center, see NSX-T Data
Center Administration Guide at https://docs.vmware.com/en/VMware-NSX-T-Data-
Center/index.html.
Learner Objectives
You can purchase VMware Cloud on AWS through one of the following sources:
• VMware
• Amazon Web Services
• Managed service provider
When you purchase VMware Cloud on AWS through one of the available sources, the
purchase source becomes the seller of record.
The seller of record is responsible for billing the resources that are purchased.
When you purchase through a managed service provider (MSP), the MSP handles billing,
support, deployment, and management of the VMware Cloud on AWS infrastructure.
VMware Cloud on AWS provides flexible, consumption-based billing and payment options to
meet your needs.
When selecting sites for deployment of VMware Cloud on AWS, you must evaluate several
factors.
Latency: How far away, in terms of network latency, is the target site from the main user base?
Bandwidth: How much bandwidth is available for the target site? Are high-speed private lines
such as AWS Direct Connect available?
Geography: Do any geographic requirements need to be considered? For example, must the
target site be physically separate from other sites for fault tolerance?
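As a rough way to compare candidate sites on the latency factor above, you could measure TCP handshake times from the main user base. This Python sketch uses only the standard library; the endpoint in the comment is just an illustration.

    import socket
    import time

    def tcp_latency_ms(host: str, port: int = 443, samples: int = 5) -> float:
        # Approximate round-trip time: how long a TCP handshake takes.
        timings = []
        for _ in range(samples):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=5):
                timings.append((time.perf_counter() - start) * 1000)
        return min(timings)  # the minimum filters out transient jitter

    # Example: compare candidate regions from a user location.
    # print(tcp_latency_ms("ec2.us-west-2.amazonaws.com"))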
AWS Regions
AWS runs data centers in many geographical locations, or regions, around the world.
AWS regions are continuously updated. For more information about AWS regions, access Global
Infrastructure on the Amazon Web Services website.
In this interactive simulation, you create an Amazon virtual private cloud (VPC) that can be used
to deploy a VMware Cloud on AWS SDDC.
Organization owners can invite additional owners and users to the account, manage access, or
remove users. They control access to VMware Cloud services, such as VMware Cloud on AWS.
To use your SDDC within your organization, you must first assign users to the SDDC so that they
can provision and maintain workloads on the system.
The Active Users view shows a list of all users currently in the organization.
1. Enter an email address for each user, separated by a comma, space, or a new line.
3. (Optional) For an organization member, select the Support User check box if the user has
support duties.
By default, all organization owners are support users. The setting for organization owners
cannot be changed.
• Download an authentication application to your mobile device. This step creates a virtual
MFA device.
• The application generates a six-digit authentication code that is compatible with the time-
based, one-time password standard.
To log in to cloud services, you use the code generated by the application, with your VMware
ID and password.
Activating MFA is globally valid for VMware Cloud and My VMware for that email address.
To enable MFA, you activate MFA on your VMware Cloud account. Recovery codes are
generated for you in case you cannot access your virtual MFA device. You can disable MFA at
any time, and you can also regenerate the recovery codes.
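The six-digit codes follow the time-based one-time password (TOTP) standard, RFC 6238. As an illustration of how an authentication application derives a code from a shared secret, here is a minimal Python sketch using only the standard library; the secret shown is a well-known demo value, not a real credential.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        # Derive the moving counter from the current 30-second time window.
        key = base64.b32decode(secret_b32.upper())
        counter = struct.pack(">Q", int(time.time()) // period)
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        # Dynamic truncation (RFC 4226) selects 4 bytes from the digest.
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # matches what an authenticator app shows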
Step 1: Log In
1. Log in to VMware Cloud services with your user name and password.
2. Click User and select My Account
Maintain MFA
NOTE: Reenabling MFA does not require that the device is reconfigured.
During the normal MFA-enabled login process, what information must you provide to log in to
your VMware Cloud account? (Select two options)
Account password
QR code
One-time password
Learner Objectives:
AVS Deployment Deep Dive Series - Module 1: Planning and Design Considerations
If you plan to scale your cluster for future growth or disaster recovery use cases, consider
requesting the additional hosts in your initial quota request. You are not billed for these hosts
unless they are allocated to your account, and having the quota preapproved saves time if you need to scale out.
To request your host quota, open a support ticket by following these steps:
1. In the Azure portal, expand the upper left blade and select Help + Support
2. Click Create a support request
3. On the Basics tab, supply the following values:
6. Select whether you want to share diagnostic information, provide your preferred contact
method and contact info, then click Next: Review + create >>
7. Review the information, then click Create
At provisioning, an ExpressRoute circuit is created connecting the AVS private cloud to the
Microsoft Dedicated Enterprise Edge routers, allowing the AVS private cloud to connect to the
Azure backbone and access Azure services.
The AVS private cloud can be connected to an existing Azure VNet by way of an ExpressRoute
Gateway. The preferred method for connecting an AVS private cloud to an on-premises
datacenter is via ExpressRoute Global Reach. If an ExpressRoute circuit between the on-
premises datacenter and Azure is not available, a Site-to-Site VPN connection can be used.
AVS requires a /22 CIDR network that does not overlap with any existing network segments
that are deployed on-premises or in Azure. This network block is automatically carved up into
supporting subnets for management, provisioning, vMotion, and related purposes. Permitted
ranges for this address block are the RFC 1918 private address spaces (10.0.0.0/8,
172.16.0.0/12, and 192.168.0.0/16), with the exception of 172.16.0.0/16.
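As a minimal sketch of these address constraints, the following Python check (standard library only) validates a candidate /22 block against the RFC 1918 ranges, the 172.16.0.0/16 exclusion, and a list of existing networks:

    import ipaddress

    RFC1918 = [ipaddress.ip_network(n)
               for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
    EXCLUDED = ipaddress.ip_network("172.16.0.0/16")  # not permitted for AVS

    def validate_avs_block(cidr: str, existing: list[str]) -> list[str]:
        net = ipaddress.ip_network(cidr)
        problems = []
        if net.prefixlen != 22:
            problems.append("block must be a /22")
        if not any(net.subnet_of(r) for r in RFC1918):
            problems.append("block must fall within RFC 1918 private space")
        if net.overlaps(EXCLUDED):
            problems.append("172.16.0.0/16 is excluded")
        problems += [f"overlaps existing network {o}" for o in existing
                     if net.overlaps(ipaddress.ip_network(o))]
        return problems

    print(validate_avs_block("10.2.0.0/22", ["10.50.0.0/16", "192.168.1.0/24"]))  # []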
As an example, if the block 10.2.0.0/22 were provided, the following subnets would be created:
The AVS private cloud requires an Azure VNet. You can connect AVS to an existing Azure VNet
or create a new one. A non-overlapping IP range must be defined for the VNet, along with a
subnet named GatewaySubnet.
Two additional VLANs should be defined as well. These will be used for a Jumpbox VM and for
the Azure Bastion Service for connectivity to the Jumpbox VM. The Bastion subnet must be
named AzureBastionSubnet.
AVS Deployment Deep Dive Series - Module 2: AVS Initial Deployment and Connectivity Demo
Deployment
Topics in this section address the deployment of the AVS private cloud, connecting the AVS
private cloud to an Azure VNet, and connecting the AVS private cloud to an on-premises data
center.
After host quota has been allocated, you can create your first AVS Private cloud by following
these steps:
By default, there will be no connectivity between the AVS Private cloud and other Azure
resources deployed in your subscription. You can connect a new or existing Azure VNet to the
AVS Private cloud when the AVS deployment is complete. This VNet must have a subnet named
GatewaySubnet defined.
A Virtual Network Gateway will be created in this VNet and connected to the AVS ExpressRoute
connection, allowing communication between resources attached to this VNet and AVS VMs.
To create a new VNet, follow these steps:
ExpressRoute Global Reach allows you to connect your on-premises environment to your Azure
VMware Solution private cloud. ExpressRoute Global Reach peers the private cloud
ExpressRoute circuit with an existing ExpressRoute circuit connecting your on-premises and
Azure environments.
To complete this step, an existing, functioning ExpressRoute circuit must exist connecting the
on-premises environment to Azure. This will be referred to as “on-prem ExpressRoute.”
Additionally, all gateways must support 4-byte Autonomous System Numbers (ASNs).
1. From the Azure Portal, navigate to the ExpressRoute circuits page and select the on-prem
ExpressRoute
2. Under Settings, select Authorizations
3. Enter a name for the new Authorization and click Save. The Authorization will begin
provisioning and should complete within a few minutes.
4. Copy the on-prem ExpressRoute Resource ID and the Authorization key. These will be
used to complete the peering.
1. From the Azure Portal, navigate to the Private cloud object and click Manage >
Connectivity > ExpressRoute Global Reach > Add
2. Enter the on-prem Resource ID and Authorization key created in the previous step, then
click Create. These operations will take a few minutes to complete.
1. From the Azure Portal, navigate to the ExpressRoute circuits page, and select the on-prem
ExpressRoute
2. Under Settings, select Peerings
3. Click the Azure private row, then click View route table in the top menu
4. Examine the route table and confirm the AVS management networks and any NSX-T
segments are listed.
5. From your on-premises edge router, confirm routes exist to the AVS management
networks and any NSX-T segments.
6. From an on-premises device, attempt to access the AVS-hosted vCenter management
console.
Learner Objectives
After completing this lesson, you should be able to:
All of these prerequisites are detailed in the Google Cloud VMware Engine documentation.
Once these are completed, you are ready to create your SDDC!
In this example you will learn how to create an SDDC in Google Cloud VMware Engine.
Your quota will also determine how many nodes you can request. The minimum node count for
a production SDDC is three nodes.
After clicking Review and Create, you will be shown a confirmation page. Review your choices
and click Create.
You can click the Activity tab to view recent events, tasks, and alerts. Drilling into those will
provide specifics on any activity in your SDDC, including the provisioning process.
To establish initial connectivity to Google Cloud VMware Engine, a VPN gateway can be used.
This is an OpenVPN-based client VPN that will allow you to connect to your SDDC’s vCenter and
perform any initial configuration that you desire.
Before the VPN gateway can be deployed, you will need to configure the “Edge Services” range
for the region where your SDDC is deployed. To do this, browse to Network > Regional settings
in the Google Cloud VMware Engine portal, and click Add Region.
Once complete, they will show as Enabled on the Regional Settings page. Enabling these
settings will allow Public IPs to be allocated to your SDDC, which is a requirement for deploying
a VPN Gateway.
To begin the deployment, browse to Network > VPN Gateways and click Create New VPN
Gateway.
Supply the name for the VPN gateway and the client subnet reserved during planning and click
Next.
Next, specify which networks to make accessible over VPN. In this example, all subnets are
added automatically.
Click Next, and a summary screen will be displayed. Verify your choice and click Submit to
create the VPN Gateway.
You will be returned to the VPN Gateways page, and the new VPN gateway will have a status
of Creating. Once the status shows as Operational, click on the new VPN gateway.
Profiles for connecting via UDP/1194 and TCP/443 are available. Choose whichever you prefer,
import it into OpenVPN, and connect.
In the Google Cloud VMware Engine portal, browse to Resources and click on your SDDC.
Learner Objectives
You can connect to a VMware vCenter Server instance from a cloud SDDC. The example shows you how to
connect from a VMware Cloud on AWS SDDC
After networking is configured, a dialog box with the default vCenter Server credentials appears. Use these
credentials and log in to the vCenter Server instance.
1. In the VMware Cloud console browser tab, click OPEN VCENTER in the top-right corner.
2. Click SHOW CREDENTIALS.
3. Click the Copy password to clipboard icon.
4. Click OPEN VCENTER.
5. Enter cloudadmin@vmc.local in the User name text box.
6. In the Password text box, paste the password that you copied.
7. Click LOGIN.
8. If the following alarms or warnings appear, click Reset to Green for each one:
• Key Management Server Health Status alarm
• Skyline Health has detected issues in your vSphere environment
• Certificate Status alarm
Learner Objectives
The way that you interact with a VM is similar to how you interact
with a physical machine.
You power on the VM. The OS loads. And you use a keyboard or a
mouse to interact with the OS and its applications.
VM Architecture
VMs use the same types of components as physical machines. Can you identify the layers in a VM?
VM Components
VMs provide the same functionality as physical machines because they use the same types of components.
For example, a web server application that needs storage space requests it from the OS. If this
web server is running on a VM, the guest OS presents the application with the storage space.
When a client requests access to websites, this web server responds to the client requests
without ever knowing that the OS is running on a VM.
The OS that is installed on a VM is called a guest OS. Similar to the OS on the physical
machine, the guest OS interacts with the VM hardware and allocates resources to the
applications on demand.
Multiple operating systems can run on a single server. For example, if two VMs are running
on a server, each guest OS can access only a subset of resources.
A driver is a software component that links a computer's hardware with the OS so that they
can communicate with each other.
For example, the OS comes with drivers for basic operations such as controlling the
keyboard. VMware VMs include VMware Tools, a bundle of drivers that help the guest OS
interact efficiently with the guest hardware.
The virtualization software abstracts the physical hardware and presents it as virtualized hardware devices.
The guest OS uses the virtualized hardware devices of the VM but is unaware that those
devices are virtual.
Stack of VM components (top to bottom): application, guest operating system (guest OS), driver, hardware.
After you create the VM, you install a guest OS that meets your requirements.
Installing a Guest OS
How many guest operating systems can run on a single physical server? (Select one option)
Only 1
2 to 5
Less than 10
Multiple guest operating systems
VM Encapsulation
In its most basic form, a VM is a set of files.
Multiple ESXi hosts can access this datastore. Any host accessing the
datastore can find the VM files, power on the VM, and run it.
For example, if you must reboot a host, you can move the VM to
another host that can access the same datastore.
Exploring VM Files
When you create a VM, ESXi creates a folder that is named after the VM. The files inside the folder share the name of the
VM, followed by an extension.
For example, when you create a VM called VM1, the folder in which it is placed is also called VM1. One of the files inside
that folder is called VM1.vmx.
The VM configuration file has the extension .vmx, for example, VM1.vmx
Swap Files
A swap file extends the VM's RAM when the RAM is fully used.
The swap files use the .vswp extension, for example, VM1.vswp or vmx-VM1.vswp.
BIOS File
A VM has a file that stores the BIOS settings even when the VM is turned off. The BIOS file uses the .nvram extension, for example, VM1.nvram.
Log Files
A VM uses a log file to record the activity of the VM. A VM keeps other log files to archive old log entries.
• vmware.log
• vmware-1.log
• vmware-2.log
If a VM is converted to a template, a VM template configuration file replaces the VM configuration file (.vmx).
The template configuration file takes the .vmtx extension, for example, VM1.vmtx.
A VM has two files for each virtual disk. The virtual disk files use the .vmdk extension.
When you suspend a VM, a suspend state file records the state of the VM. When you resume the VM, the VM uses the file
to continue where it left off.
The suspend state file takes the extension .vmss, for example, VM1.vmss.
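Pulling these extensions together, this hypothetical Python helper lists the files you would typically expect to find in a VM's folder. Actual names vary with the VM's configuration and history; the helper is illustrative only.

    def expected_vm_files(vm_name: str, snapshots: int = 0) -> list[str]:
        files = [
            f"{vm_name}.vmx",        # configuration file
            f"{vm_name}.vmdk",       # virtual disk descriptor
            f"{vm_name}-flat.vmdk",  # virtual disk data
            f"{vm_name}.nvram",      # BIOS settings
            f"{vm_name}.vswp",       # swap file
            "vmware.log",            # current log file
        ]
        for n in range(1, snapshots + 1):
            files.append(f"{vm_name}-Snapshot{n}.vmsn")    # snapshot state
            files.append(f"{vm_name}-{n:06d}-delta.vmdk")  # snapshot delta
        return files

    print(expected_vm_files("VM1", snapshots=1))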
Learner Objectives
In a cloud SDDC, you can provision and transfer a large number of VMs in
multiple ways. VM provisioning must be optimized so that cloud
environments use the available resources effectively and function
productively.
Provisioning VMs
Provisioning Restrictions
Restrictions can apply. Check your hyperscaler partner documentation for details.
For example, the following restrictions apply to the placement of VMs in the VMware Cloud on
AWS SDDC:
In the VMware vSphere Client, you can use the New Virtual Machine wizard to create a VM
from scratch.
Cloning an Existing VM
Cloning VMs
The Virtual Machine Management lesson discusses cloning and its use cases in more detail. If
you wish, click the link to go to this lesson now.
Typically, you use this method when you want to create multiple VMs
with the same configuration.
VM Templates
The Virtual Machine Management lesson discusses templates and their use cases in more
detail. If you wish, click the link to go to this lesson now.
Guest OS Customization
VMs with identical settings can conflict and create connection problems. To avoid these
conflicts, you customize the guest OS to make a VM unique.
For example, if two systems use the same IP address, a conflict arises and both systems are
unable to connect to the network.
A customization specification contains the information necessary to ensure that each guest
operating system instance is unique. Customization specifications are stored in the vCenter
Server database.
In vCenter Server, you can create a customization specification for either a Windows or Linux
guest OS.
• Cloning a VM
• Deploying a VM from a template
A demonstration video shows how to create a customization specification.
Transcript
You create a new guest customization specification. When deploying virtual machines from a
template, we want to make sure that certain properties inside the template are unique. For
example, we want to make sure that the IP address assigned to the virtual machine is unique
and not duplicated across the network. To create a specification:
Cloning a VM
Creating a Template
Creating a VM from scratch
Both while cloning a VM and creating a template
• Upload ISO images and OVA/OVF templates directly to a datastore in the SDDC.
For example, in a VMware Cloud on AWS SDDC, you upload files to the datastore called
WorkloadDatastore.
• Import the ISO images and OVF/OVA templates, from a local filesystem or web server
URL, to a content library.
• For a VMware Cloud on AWS SDDC, use the Content Onboarding Assistant to import
VMTX templates.
After uploading the ISO images, VMTX templates, and OVA/OVF templates, you can use them to create VMs in the SDDC.
Content Libraries
Content libraries are container objects that store and manage VM templates,
vApp templates, and other file types. Using the content library, you can deploy
and share the stored items within a vCenter Server instance and between
vCenter Server instances.
For your SDDC, you can create a content library that subscribes to the content library in your
on-premises data center. You publish the on-premises content library to import library items
into your SDDC.
To synchronize your on-premises and SDDC content libraries, follow these steps:
1. Add your templates, ISO images, and scripts to the on-premises content library. All .vmtx
templates are converted to OVF templates.
2. Publish the on-premises content library.
3. In your SDDC, create a content library that subscribes to the one you published in Step 2.
Content is synchronized from your on-premises data center to your cloud SDDC.
You can deploy VMs and vApps from the VM or OVF templates that are stored in a content
library.
In this demonstration, you launch the vSphere Client from the VMware Cloud on AWS console.
Transcript
1. In the SDDC vSphere Client browser tab, select Menu, and then Content Libraries.
2. On the Content Libraries page, click VMC-CL-01.
3. Select the Templates tab and click OVF & OVA Templates.
4. Deploy a new virtual machine from the Lychee-ubuntu template. Right-click the Lychee-
ubuntu template and click New VM from This Template. The New Virtual Machine from
Content Library wizard opens.
5. On the Select a name and folder page, enter Photo-App-01 for the Virtual machine name.
6. Expand the location tree and select the Workloads folder.
7. Click NEXT.
8. On the Select a compute resource page, expand the compute resource tree and
select Compute-ResourcePool.
9. Click NEXT.
10. On the Review details page, click NEXT.
11. On the Select storage page, select WorkloadDatastore and click NEXT.
12. On the Select networks page, select sddc-cgw-network-1 from the Destination
Network drop-down menu and click NEXT.
13. On the Ready to complete page, click FINISH.
14. Wait for the Deploy OVF template task to finish.
15. Power on the newly created Photo-App-01 VM. Select Menu, and then Host and Clusters.
16. In the left pane, expand Compute-ResourcePool and locate the new VM called Photo-
App-01.
17. Right-click the Photo-App-01 VM and select Power, and then Power On.
The VM powers on and acquires an IP address using DHCP from the 192.168.xxx.0/24 range.
You can create VMs from a content library in other ways. Explore the VMware vSphere product
documentation to learn more.
Content libraries support in-place updates of VM templates with a rich version history.
Transferring Content
You can easily transfer content using the Content Onboarding Assistant. The process works as
follows:
1. Check the connectivity between the client and on-premises vCenter Server instance and
VMware Cloud on AWS.
2. Scan vCenter Server Inventory for VMTX templates.
3. Scan given datastores and folder for any files.
4. Create a published content library in the on-premises vCenter Server instance.
5. Copy the selected vCenter Server VMTX templates.
6. Import the content from a given folder into the content library.
7. Create a subscribed content library in the VMware Cloud on AWS SDDC.
8. Synchronize all content from Step 6.
True or False: The VMware Cloud Content Onboarding Assistant is built into the VMware Cloud
on AWS client.
True
False
To upload content:
If you upload unsupported templates from an on-premises content library to an SDDC, the VMs
that are created from the template do not power on in the SDDC.
These VM configurations have limited support and, as a result, are incompatible with VM
migrations that use VMware vSphere vMotion in VMware Cloud on AWS:
Learner Objectives:
After completing this lesson, you should be able to:
Example: If problems occur during the patching or upgrading process, you can stop the process and
revert to the previous state.
Cloning is a quick and simple way to create a VM that shares properties with an existing one.
Example: You must diagnose a problem with a production VM. You find a potential fix for the problem,
but you do not want to install the fix on the production VM because users need to access it. You decide
to clone the VM and use the clone to test the fix. In this way, users can still access the production VM
during the cloning and testing processes.
A VM template is the original copy of a VM from which you can create ready-to-use VMs. The template is
useful for creating many VMs of the same kind.
Example: You require four VMs. The steps for creating these four VMs are repetitive and time-
consuming and can introduce errors. A more efficient method is to create a base template containing
the essential VM configuration. You can also customize the VMs created from a template based on
need.
You can take a snapshot while a VM is in one of the following power states:
• Powered on
• Powered off
• Suspended
A snapshot does not include independent virtual disks (persistent and nonpersistent).
VM Snapshot Files
The configuration state file has a .vmsn extension and is used to hold the active memory state of the VM at the time the snapshot is taken.
A new .vmsn file is created for every snapshot that is created on a VM and is deleted when the snapshot is
deleted. The # symbol stands for the next number in the sequence, starting with 1.
The size of this file varies, based on the options selected when the snapshot is created. For example, including
the memory state of the VM in the snapshot increases the size of the .vmsn file.
The memory state file has a .vmem extension and is created if the option to include memory state is selected
during the creation of the snapshot.
It contains the entire contents of the VM's memory at the time that the snapshot of the VM was taken.
The disk descriptor file is a small text file that contains information about the snapshot. The # symbol
indicates the next number in the sequence, starting with 1.
The snapshot delta file contains the changes to the virtual disk data since the snapshot was taken.
When you take a snapshot of a VM, the state of each virtual disk is preserved.
The VM stops writing to its -flat.vmdk file. Writes are redirected to the
-######-delta.vmdk. The ###### symbols indicate the next number in the sequence.
You can exclude one or more virtual disks from a snapshot by designating them as independent disks.
Configuring a virtual disk as independent is typically done when the virtual disk is created, but this option can
be changed whenever the VM is powered off.
The snapshot list file is created at the time that the VM is created. It maintains snapshot information for a VM
so that it can create a snapshot list in the vSphere Client.
This information includes the name of the snapshot .vmsn file and the name of the virtual disk file.
An independent disk does not participate in virtual machine snapshots. That is, the disk state is independent
of the snapshot state. Creating, consolidating, or reverting to snapshots does not affect the disk.
In general, virtual disks are created using one of the following modes: Independent persistent, independent
nonpersistent, and dependent.
When a virtual machine reads from an independent nonpersistent mode disk, the redo log is checked first. If
the relevant blocks are listed, the virtual machine reads the information. Otherwise, the read goes to the base
disk for the virtual machine.
Dependent - the default disk mode. When you take a snapshot of a virtual machine, dependent disks are
included in the snapshot. When you revert to the previous snapshot, all data is reverted to the point at
which the snapshot was taken.
Managing Snapshots
A VM provides several operations for working with snapshots and snapshot chains. You can manage
snapshots, revert to any snapshot in the chain, and remove snapshots.
To open the Snapshot Manager in the vSphere Client, select the required VM and navigate to the Snapshots
tab.
Alternatively, you can right-click the VM and select Snapshots > Manage Snapshots.
Editing a VM Snapshot
To edit a snapshot:
1. On the VM Snapshots page, click EDIT.
2. In the Edit snapshot dialog box, make changes.
3. Click EDIT.
Reverting to a VM Snapshot
Deleting Snapshots
Deleting a snapshot removes the snapshot from the Snapshot Manager. The snapshot files are consolidated
and written to the parent snapshot disk and merge with the VM base disk.
To delete a snapshot:
1. In the vSphere Client, select the required VM in the left pane.
2. Click the Snapshots tab.
Consolidating Snapshots
Snapshot consolidation is a method for committing a chain of delta disks to the base disks when the Snapshot
Manager shows that no snapshots exist, but the delta disk files remain on the datastore.
Snapshot consolidation is useful when snapshot disks fail to compress after a Revert, Delete, or Delete
all operation. This failure to compress might happen, for example, if you delete a snapshot but its associated
disk does not commit back to the base disk.
The presence of redundant delta disks can adversely affect the virtual machine performance. You can
combine such disks without violating a data dependency.
After snapshot consolidation, redundant disks are removed, which improves the virtual machine performance
and saves storage space.
• The snapshot descriptor file is committed correctly, and the Snapshot Manager window shows that all
the snapshots are deleted.
• Delta disk files continue to expand until the datastore on which the VM is located runs out of space.
Snapshot Consolidation
For more information about how to consolidate a VM snapshot in the vSphere Client, access VMware
knowledge base article 2032907.
Snapshot Recommendations
Follow these recommendations to get the best performance when using snapshots:
• The presence of snapshots can have a significant impact on guest application performance, especially in
a VMFS environment, for I/O-intensive workloads. The guest applications fully recover performance
after snapshots are deleted.
• Keep snapshot chain length short when possible, to minimize the guest application performance impact.
• If you need to increase the size of a virtual disk that has snapshots associated with it, you must delete
the snapshots first before you can increase the virtual disk's size.
Snapshots and backups are often thought to be the same. However, they are different and have different purposes.
Snapshot: Saves the state of the VM with the VM files.
Backup: Saves a copy of the VM files in a remote site.
Snapshot: Depends on the availability of the VM files. To revert to a snapshot, the VM files must be accessible and show no errors.
Backup: Is autonomous. If a problem affects the VM files, you can restore from the backup because it is stored separately.
Snapshot use cases: Checkpoints in upgrades, patching, testing, and development processes.
Backup use cases: VM or data safeguards in disaster and recovery plans.
Powered on
Powered off
Suspended
Quiesced file system
Cloning VMs
Cloning Example
Powered on - An exact copy of the VM is not possible because the services and applications running in the
VM are not paused during the cloning process.
Folders provide a way to store VMs and templates for different groups in an organization. You can set
permissions on them. If you prefer a flatter hierarchy, you can put all VMs and templates in a data
center and organize them in a different way.
5. Click NEXT.
6. On the Select a compute resource page, select the host, cluster, resource pool, or vApp where the VM
will run and click NEXT.
7. On the Select storage page, select the datastore or datastore cluster for storing the template
configuration files and all virtual disks.
For information on available storage options, see the chapter called Clone an Existing Virtual Machine in
the VMware vSphere documentation at https://docs.vmware.com/en/VMware-
vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-1E185A80-0B97-4B46-
A32B-3EF8F309BEED.html.
8. Click NEXT.
9. On the Select clone options page, select additional customization options for the new virtual machine
and click NEXT.
You can choose to customize the guest OS or the VM hardware. You can also choose to power on the
VM after its creation.
10. On the Ready to complete page, review the virtual machine settings and click FINISH.
VM Template Components
A template is a VM, with all its components:
Template Examples
• Convert a VM to a template.
• Clone a VM to a template.
• Clone an existing template.
The Deploy from Template wizard guides you through the steps in the VM deployment process.
You can choose to customize the guest OS or the virtual machine hardware. You can also choose to
power on the virtual machine after its creation.
11. On the Ready to complete page, review the virtual machine settings and click FINISH.
A tag is a label that you can apply to objects in the vSphere inventory.
When you create a tag, you assign that tag to a category. Using categories, you can group related tags
together.
When you define a category, you can specify the object types for its tags, and whether more than one tag in
the category can be applied to an object.
You can assign tags to objects, search for tagged objects, and view objects that have the same tag.
You can search by typing key words such as the tag name or the display name of the object that you want to
find.
After searching for keywords such as the tag name, you can view related objects, regardless of type.
In the example, objects for the RD - Tag include a template, two hosts, and one datastore.
Workloads are increasingly becoming distributed as our environments continue to get broader and more
complex. Many cloud-based applications are business critical but vulnerable to compromise if any part of
the workload (app, data or OS) malfunctions.
This means that securing each part of the workload is a critical part of securing your business.
Two ways to secure your VMs are covered here: vTPM and Windows VBS.
The vTPM acts as any other virtual device. You can add a vTPM to a virtual machine in the same way you add
virtual CPUs, memory, disk controllers, or network controllers.
When using this feature, you do not require a hardware Trusted Platform Module chip.
By default, no storage policy is associated with a virtual machine that is enabled with a vTPM.
You can choose to add encryption explicitly for the virtual machine and its disks, but the virtual
machine files must already be encrypted.
Component Requirements:
• vCenter Server 6.7 and later for Windows virtual machines, and vCenter 7.0 Update 2 and later for Linux
virtual machines
• Virtual machine encryption (to encrypt the virtual machine home files)
• Key provider configured for vCenter Server
Guest OS Support
• Linux
• Windows Server 2008 and later
• Windows 7 and later
Starting with vSphere 6.7, you can enable Microsoft virtualization-based security (VBS) on supported
Windows guest operating systems.
With Microsoft VBS, you can use the following Windows security features to harden your system and isolate
key system and user secrets so they are not compromised.
Credential Guard - aims to isolate and harden key system and user secrets against compromise.
Device Guard - provides a set of features designed to work together to prevent and eliminate malware from
running on a Windows system.
Configurable Code Integrity - ensures that only trusted code runs from the boot loader onward.
Learner Objectives
After completing this lesson, you should be able to:
You want to manage your workloads and provide failure protection and
rapid recovery from outages.
You can automate and manage the demand and supply of your workloads using vSphere features such as
VMware vSphere® Distributed Resource Scheduler™, VMware vSphere® High Availability, and resource pools.
To ensure that VMs in a cluster get the required resources, vSphere DRS performs the following key functions:
• Aggregates computing capacity across a collection of servers into logical resource pools
• Allocates available resources among VMs based on predefined rules that reflect business needs and
changing priorities
VMware Cloud on AWS clusters are preconfigured with vSphere vMotion migration networks. vSphere DRS
works best when VMs meet the following vSphere vMotion migration requirements:
• The hosts in the cluster must be part of a vSphere vMotion migration network. If they are not, vSphere DRS
can still make initial placement recommendations.
• VMware Cloud on AWS clusters are preconfigured with vSphere vMotion-enabled vSAN, and all hosts can
use the same datastores.
vSphere DRS policies in VMware Cloud on AWS provide rules that offer various benefits.
VM-Host Affinity
A VM-Host affinity policy describes a relationship between a category of VMs and a category of hosts.
Use cases:
• When host-based licensing requires that VMs running certain applications be placed on hosts that are
licensed to run those applications
• When VMs with workload-specific configurations require placement on hosts that have certain
characteristics
VM-Host Anti-Affinity
A VM-Host anti-affinity policy describes a relationship between a category of VMs and a category of hosts.
Use case:
• Avoids resource contention by not running general purpose workloads on hosts that run resource-
intensive applications.
VM-VM Affinity
A VM-VM affinity policy describes a relationship between members of a category of VMs.
VM-VM Anti-Affinity
A VM-VM anti-affinity policy describes a relationship between members of a category of VMs.
Use case:
• When you want to place VMs running critical workloads on separate hosts so that the failure of one
host does not affect other VMs in the category.
Use case:
• For a VM running an application that creates resources on the local host and expects those resources to
remain local.
In VMware Cloud on AWS, vSphere DRS is enabled by default. It is managed by VMware, and you cannot change
the configuration.
In a VMware Cloud on AWS SDDC, you use compute policies to control the vSphere DRS behavior.
The policy takes effect after a tagged VM is powered on and keeps the VM on its current host as long as the host
remains available.
For more information about creating or deleting a Disable DRS vMotion policy, access the VMware Cloud on
AWS product documentation.
If vSphere DRS moves a virtual machine to another host for load-balancing or to meet reservation
requirements, the resources created by the application are left behind.
Performance can be degraded when the locality of reference is compromised, so this VM should not be
unnecessarily moved by vSphere DRS.
You determine the resource requirements for the VMs in your clusters. Which affinity and anti-affinity policies
match your requirements?
vSphere HA is enabled by default in VMware Cloud on AWS and cannot be disabled or modified. Fault tolerance
is unavailable.
Proactive high availability is turned off because VMware immediately replaces failed hosts.
You can configure clusters to tolerate one host failure by using a percentage-based admission control policy.
Several vSphere HA configuration settings are static in the VMware Cloud on AWS SDDC and cannot be disabled.
Resource Pools
A resource pool is a logical abstraction of hierarchically managed CPU and memory resources. With a resource
pool, you can divide and allocate CPU and memory resources to VMs and other resource pools in a vSphere DRS
cluster.
COMPUTE-RESOURCEPOOL
By default, all workload virtual machines are created in the top-level (root) Compute-ResourcePool. It is initially
created in Cluster-1.
Each additional cluster that you create starts with its own top-level Compute-ResourcePool.
You can perform the following actions with this resource pool:
• Create new VMs and child resource pools.
• Rename the resource pools to better match company policy.
• Create child resource pools of any Compute-ResourcePool to give you more control over the allocation of
compute resources.
• Monitor the resource pool, its VMs, and its child resource pools, and examine resource pool use.
• Set tags and attributes.
• Change resource allocation settings on child resource pools.
MGMT-RESOURCEPOOL
Learner Objectives:
• Recognize guest OS performance requirements for virtual CPU, memory, storage, and networking
• Optimize the guest OS configuration
Guest OS optimization requirements can be broadly categorized into virtual CPU, memory, storage, and
network.
Do you know the optimization methods that correspond to each resource category?
CPU Considerations
For more information about vulnerabilities as they relate to VMware products, see the OS-specific
mitigations sections in the following VMware knowledge base articles.
VMware Overview of ‘L1 Terminal Fault’ (L1TF) Speculative-Execution vulnerabilities in Intel processors:
CVE-2018-3646, CVE-2018-3620, and CVE-2018-3615
Virtual NUMA
Virtual NUMA (vNUMA) exposes NUMA topology to the guest OS so that NUMA-aware guest operating
systems and applications can make the most efficient use of the underlying hardware in the NUMA
architecture.
vNUMA Topology
• For the best performance, size your VMs to stay within a physical NUMA node.
• When a VM needs to be larger than a single physical NUMA node, size it so that it can be split evenly
across as few physical NUMA nodes as possible.
• Use caution when creating a VM that has a vCPU count that exceeds the physical processor core count
on a host.
• Changing the corespersocket value does not influence vNUMA or the configuration of the vNUMA
topology.
For more information, access "Virtual Machine vCPU and vNUMA Rightsizing" on the VMware
Performance blog.
• By default, vNUMA is activated only for VMs with more than eight vCPUs. This feature can be activated
for smaller VMs, and is useful for VMs with eight or fewer vCPUs.
To activate vNUMA for VMs with eight or fewer CPUs, you can use the vSphere Client to set
numa.vcpu.min to the minimum VM size (in vCPUs) for which you want vNUMA activated.
• With the CPU Hot Add feature, you can add vCPUs to a running VM. Activating this feature, however,
deactivates vNUMA for that VM, resulting in the guest OS seeing a single vNUMA node.
• Without vNUMA support, the guest OS has no knowledge of the CPU and memory virtual topology of
the host. Consequently, the guest OS can make suboptimal scheduling decisions, leading to reduced
performance for applications running in large VMs. So activate CPU Hot Add only if you expect to use it.
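As a minimal sketch of the sizing guidance above, this Python function finds the smallest number of physical NUMA nodes across which a vCPU count can be split evenly. The cores-per-node value is an input you would take from your host hardware; the function itself is illustrative, not a VMware tool.

    import math

    def vnuma_split(vcpus: int, cores_per_numa_node: int) -> tuple[int, int]:
        # Best case: the VM fits within a single physical NUMA node.
        if vcpus <= cores_per_numa_node:
            return (1, vcpus)
        # Otherwise, split evenly across as few nodes as possible.
        for nodes in range(math.ceil(vcpus / cores_per_numa_node), vcpus + 1):
            if vcpus % nodes == 0 and vcpus // nodes <= cores_per_numa_node:
                return (nodes, vcpus // nodes)
        return (vcpus, 1)  # fallback: one vCPU per node

    print(vnuma_split(8, 10))   # (1, 8): fits in one node
    print(vnuma_split(16, 10))  # (2, 8): even split across two nodes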
What is NUMA?
NUMA systems are advanced server platforms with more than one system bus. For more information about
NUMA, access the vSphere product documentation.
Which guideline accurately describes one way to optimize vCPU? (Select one option)
Create clusters that are composed entirely of hosts with matching NUMA architecture.
Activate the CPU Hot Add feature to increase performance of applications running in large VMs.
Create VMs that have a vCPU count that exceeds the physical processor core count on a host.
Memory Considerations
The memory resource settings for a VM determine how much of the host memory is allocated to the VM.
VMware Cloud on AWS can make large memory pages available to the guest OS.
If an OS or application can benefit from large pages on a native system, that operating system or application
can potentially achieve a similar performance improvement on a virtual machine backed with 2 MB machine
memory pages.
Consult the documentation for your operating system and application to determine how to configure large
memory pages.
Storage Considerations
The virtual storage adapter that is presented to the guest OS can influence storage performance. The
device driver, its settings, and other factors in the guest OS can also affect performance.
For most guest operating systems, the default virtual storage adapter in VMware Cloud on AWS is either LSI
Parallel or LSI Logic SAS, depending on the guest operating system and the virtual hardware version.
To use the VMware Paravirtual SCSI (PVSCSI) adapter, your VM must use virtual hardware version 7 or later.
If you choose to use the BusLogic Parallel virtual SCSI adapter with a Windows guest operating
system, you should use the custom BusLogic driver included in the VMware Tools package.
The Non-Volatile Memory Express (NVMe) virtual storage adapter (virtual NVMe, or vNVMe) allows recent
guest operating systems that include a native NVMe driver to use that driver to access storage through
VMware Cloud on AWS.
Compared to virtual SATA devices, the vNVMe virtual storage adapter accesses local PCIe SSD devices with
much lower CPU cost per I/O and significantly higher IOPS.
Queue Depth
The depth of the queue of outstanding commands in the guest OS SCSI driver can significantly impact disk
performance. A queue depth that is too small, for example, limits the disk bandwidth that can be pushed
through the virtual machine. See the driver-specific documentation for more information on how to adjust
these settings.
In some cases, large I/O requests that are issued by applications in a VM can be split by the guest storage
driver.
Changing the guest OS registry settings to issue large block size I/O requests can eliminate this splitting and
enhance performance.
For more information about large I/O requests, access the VMware knowledge base article 9645697 at
https://kb.vmware.com/s/article/9645697.
Disk Partitions
You should ensure that disk partitions in the guest OS are aligned.
For more information about tools and recommendations for disk partitions, access the OS vendor
documentation.
4K-Aligned I/Os
VMware Cloud on AWS uses drives with 4 KB sector size (that is, 4 KB native, or 4Kn) but presents storage to
the guest OS in 512-byte emulation (512e) mode. For the best performance, ensure that I/Os issued by the
guest are aligned on 4 KB boundaries.
https://docs.vmware.com/en/VMware-
vSphere/7.0/com.vmware.vsphere.storage.doc/GUID-5E7B4EBC-2147-42F9-9CCD-B63315EE1C52.html and
knowledge base article 2091600 at https://kb.vmware.com/s/article/2091600.
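As a simple illustration of the alignment check: an I/O is 4K-aligned when both its starting offset and its length are multiples of the 4 KB sector size. A minimal Python sketch:

    SECTOR_BYTES = 4096  # 4 KB native (4Kn) sector size

    def is_4k_aligned(offset_bytes: int, length_bytes: int) -> bool:
        # Aligned I/Os start and end on 4 KB sector boundaries.
        return offset_bytes % SECTOR_BYTES == 0 and length_bytes % SECTOR_BYTES == 0

    print(is_4k_aligned(8192, 4096))  # True
    print(is_4k_aligned(512, 4096))   # False: the offset straddles a 4Kn sector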
Network Considerations
The guest OS network considerations describe the various types of virtual network adapters, how to select
one, and how to obtain the best performance from it.
VLANCE
The VLANCE virtual network adapter is an emulated adapter. It emulates an AMD 79C970 PCnet32 NIC.
Drivers for this NIC are found in most 32-bit operating systems.
E1000
The E1000 virtual network adapter is an emulated adapter. It emulates an Intel 82545EM NIC. Drivers
for this NIC are found in many recent operating systems.
E1000E
The E1000E virtual network adapter is an emulated adapter. It emulates an Intel 82574 NIC. Drivers for
this NIC are found in a smaller set of recent operating systems.
VMXNET2
The VMXNET2 virtual network adapter (also called Enhanced VMXNET) is based on the
VMXNET adapter but adds a number of performance features.
VMXNET3
The VMXNET3 virtual network adapter is a paravirtualized adapter designed for high performance.
Flexible
The Flexible virtual network adapter is a hybrid virtual network adapter. It starts out emulating a
VLANCE adapter, but can function as a VMXNET adapter if VMware Tools is installed and the guest OS
supports VMXNET.
For the best performance, use the VMXNET3 paravirtualized network adapter for the operating
systems in which it is supported.
For guest operating systems in which VMXNET3 is not supported, use the E1000E virtual network
adapter.
Consider the following guidelines for using various network adapter features and configuring adapters for
the best performance.
VMs on the same host that are connected to the same virtual switch do not need to use the
physical network to communicate. When connected this way, their network speeds are not limited
by the wire speed of any physical network card. Instead, they transfer network packets as fast as
the host resources allow.
Jumbo Frames
Jumbo frames are recommended as a way to increase network throughput and reduce CPU load. They
do this by allowing data to be transmitted using larger, and, therefore, fewer packets.
Jumbo frames are supported on the E1000, E1000E, VMXNET2, and VMXNET3 devices.
They are activated by default on the underlying network for all same-data-center traffic and connected
VPC traffic.
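The benefit comes from simple arithmetic: larger frames mean fewer packets, and therefore less per-packet CPU work, for the same payload. A rough Python illustration, assuming about 40 bytes of IPv4 and TCP header overhead per packet:

    import math

    def packets_needed(payload_bytes: int, mtu: int, header_bytes: int = 40) -> int:
        # Each frame carries (MTU - headers) bytes of application payload.
        return math.ceil(payload_bytes / (mtu - header_bytes))

    transfer = 1_000_000_000  # a 1 GB transfer
    print(packets_needed(transfer, 1500))  # ~685,000 packets with standard frames
    print(packets_needed(transfer, 9000))  # ~112,000 packets with jumbo frames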
TCP segmentation offload (TSO) is supported in virtual machines only when they use an E1000,
E1000E, VMXNET2, or VMXNET3 device.
TSO can improve performance even if the underlying hardware does not support TSO.
Large receive offload (LRO) is supported in virtual machines only when they use the VMXNET2 or VMXNET3 device.
Learner Objectives
After completing this lesson, you should be able to:
• Recognize best practices for using permissions in a VMware Cloud on AWS SDDC
• Identify the roles available in VMware Cloud on AWS
• Describe the privileges of the CloudAdmin user role
• Add roles and users to the vCenter Server instance in VMware Cloud on AWS
This lesson focuses on vSphere permissions as they relate to VMware Cloud on AWS.
For more information on how vSphere permissions are used by other hyperscaler partners, you
can access the following resources:
In a cloud SDDC, how do you limit user or group access to specific tasks on vCenter Server
objects? You assign permissions, which combine the following elements:
• Privilege
• Role
• User or group
• Object or resource
Global Permissions
Global permissions are applied to a global root object that spans solution inventory hierarchies.
Using global permissions, you can give a user or group privileges for all objects in all object
hierarchies. For example, if you want to use solutions such as vCenter Server and Content
Library, you must have global permissions.
You decide on the role for each user or group. The role determines the set of privileges that
the user or group has for all objects in the hierarchy.
Global permissions do not apply to objects that VMware manages for you, such as SDDC
hosts and datastores.
VMware Cloud on AWS best practices for using permissions mirror the best practices for vCenter
Server:
You are assigning permissions in a VMware Cloud on AWS SDDC. Which tasks align with best
practices for assigning permissions? (Select two options)
Replicate global permissions between your on-premises vCenter Server and the vCenter Server
in your SDDC.
Assign roles to groups of users.
When assigning a restrictive role to a group, verify that the group does not contain the
CloudAdmin user.
CloudAdmin User
The vCenter Server instance in a VMware Cloud on AWS SDDC includes two predefined roles that are not
present in your on-premises vCenter Server instance: CloudAdmin and CloudGlobalAdmin.
In VMware Cloud on AWS, the CloudAdmin role has several key characteristics:
• The cloudadmin@vmc.local user includes both the CloudAdmin and the CloudGlobalAdmin
roles.
• The CloudAdmin and CloudGlobalAdmin roles are predefined in the vCenter Single Sign-On
domain and cannot be edited.
• When you change the password for your SDDC from the vSphere Client, the new password is
not synchronized with the password that appears on the default vCenter Server credentials
page.
• If you change the credentials, you are responsible for recording the new password. Contact
Technical Support and request a password change if the password is lost.
You use the CloudAdmin and CloudGlobalAdmin roles to manage the SDDC.
Custom roles can be created in VMware Cloud on AWS. The creation process is the same as for
on-premises vSphere.
To add roles, select Menu > Administration > Access Control > Roles.
On the object whose permissions you want to modify, you must have a role that includes
the Permissions.Modify privilege.
You cannot create new users and groups in the vmc.local or localos domains.
Adding new users requires that the vCenter Server instance in the VMware Cloud on AWS
environment connects to an existing identity source.
To add users, select Menu > Single Sign On > Users and Groups and select ADD USER.
• The way that you interact with a VM is similar to how you interact with a physical machine.
A VM provides the same functionality as a physical machine because it uses the same
types of components.
• In a VMware Cloud on AWS SDDC, you can provision VMs in multiple ways:
○ Using the New Virtual Machine wizard
○ Cloning a VM
○ Deploying a VM from a template
○ Using the content library
• Virtual machines are not static objects. They can move from host to host to maintain
availability and performance. You can automate and manage the demand and supply of your
workloads using vSphere features such as vSphere DRS, vSphere HA, and resource pools.
• You can optimize the performance of your guest OS by configuring vCPU, memory, storage,
and network settings according to best practices.
• The task of adding users and roles in VMware Cloud on AWS is similar to on-premises
vSphere. The vCenter Server instance in your SDDC includes two predefined roles that are
not present in your on-premises vCenter Server instance: CloudAdmin and
CloudGlobalAdmin.
Additional Resources
• For information about configuring and managing your VMware Cloud on AWS SDDC, access
Managing the VMware Cloud on AWS Data Center at https://docs.vmware.com/en/VMware-
Cloud-on-AWS/services/com.vmware.vsphere.vmc-aws-manage-data-center-
vms.doc/GUID-560F64CA-0C0C-43D2-ABA9-42BD50F84457.html.
• For information about user roles and permissions, access Understanding Authorization in
vSphere at https://docs.vmware.com/en/VMware-
vSphere/7.0/com.vmware.vsphere.security.doc/GUID-74F53189-EF41-4AC1-A78E-
D25621855800.html.
Learner Objectives
Container Benefits
Lightweight: Containers require fewer resources and less hardware than, for example,
virtual machines, so you can start containers quickly.
Because containers are lightweight, portable, and scalable, they offer benefits for
developing and deploying applications, and organizations are using more and more
containers to modernize their applications.
Container Challenges
But, as you might expect, the more containers you run, the more complex it becomes to
manage them.
Orchestration Solution
An orchestration solution such as Kubernetes addresses this complexity. It manages, schedules,
and automates resource use, failure handling, application availability, configuration, and
scalability.
Kubernetes provides an application programming interface (API) where you can define
container infrastructure using a declarative method.
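As a simple illustration of the declarative method, the following sketch asks Kubernetes to converge on three replicas of a containerized web server. The names and image are illustrative, not part of the course.

    # Declare the desired state and let Kubernetes converge to it.
    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3                 # desired state: three identical pods
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.25     # illustrative container image
    EOF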
This video is taken from a larger presentation that appears on the VMware Tanzu web site in the
content library at https://tanzu.vmware.com/content/videos/build-manage-secure-a-multi-cloud-
container-infrastructure-with-vmware-tanzu.
Transcript
So, think about why Kubernetes is becoming very popular or becoming the default choice. From
an application development perspective, if I have a multi-cloud strategy, or even if, let's say, I'm
a developer and I know my main function is to write code, to get behind the logic of developing
what I need to do for my business.
And in order to maintain these modern applications that are containerized, I either have to go
through different cloud providers, learn about their different APIs, and learn about the different
methods to manage the applications. Or, I can learn Kubernetes, and Kubernetes will in turn
figure out what is needed and work with all the different cloud providers.
So today it's easier from an application development perspective to write application code, give
the requirements that application needs in order to be stood up, in order to be lifecycle
managed, to Kubernetes through a simple file. And Kubernetes will deploy those applications,
create the back-end services needed to support that application, create microservices so that
those applications can talk with each other or they can talk to the outside world.
Now the way Kubernetes does this is that it talks to the back-end infrastructure or the cloud
provider that your Kubernetes cluster is running on. And then once you deploy an app, once
you tell Kubernetes to deploy an application, it is going to go ahead and work with the
southbound APIs for that particular cloud provider and deploy the necessary building blocks
needed for that application to be supported.
So, for example, if you said, here is a containerized application and it needs a storage volume,
what Kubernetes is going to do is go ahead, create that container. Let's say, if you are running
this on AWS, it's going to go ahead and create an elastic block storage volume, or an object
storage for that matter. If you're running in vSphere, for example, what Kubernetes is going to
do is go ahead and create a VMDK disk or a volume drive. Right. And, you know, not just create
that, using an API, talking to that infrastructure or cloud provider, but also go ahead and attach
those volumes to the right containers.
And so this is all happening behind the scene. From an application development perspective, I
don't have to individually learn all the vSphere APIs. I don't have to individually learn all the
AWS APIs in order to do so.
And that's what gives Kubernetes that power. It's kind of that singular infrastructure API. If you
learn that, then you don't really have to dive into a lot of these different cloud-centric APIs, and
you can really focus on what you're doing, which is writing application code.
A small service company wants to develop its own mobile application, cost-effectively run the
application in its data center, and provide innovative services.
The new application will run in Docker containers on VMs, and Kubernetes will orchestrate the
containerized application.
Which examples illustrate benefits of using Kubernetes in this way? (Select three options)
You can manually fix faults and failures to maintain troubleshooting knowledge on the
team.
Kubernetes services maintain only one version of the same application for consistency
across environments.
The DevOps team can easily port containers from the test environment to production,
accelerating the development and deployment of new features.
Applications
At the application layer, users connect to the applications.
The infrastructure below this layer supports the running of the applications.
Containers
You control containers at this level by configuring Kubernetes and defining nodes,
pods, and the containers within them.
Kubernetes
The Kubernetes control plane takes your configuration commands and relays those
instructions to the compute machines.
Virtualization
You can run Kubernetes on VMware vSphere, VMware NSX, and VMware vSAN, or
other virtualization software.
Among the considerations for this layer is how to manage hardware contention,
failure, and changes.
Hardware
You must consider which hardware components can adequately manage the whole
stack.
From top to bottom, the stack comprises these layers: Applications, Containers, Kubernetes,
Cluster API, Virtualization, and Hardware.
Kubernetes Namespaces
Namespaces are a way to organize clusters into virtual subclusters. They can be helpful when
different teams or projects share a Kubernetes cluster.
Any resource that exists within Kubernetes exists either in the default namespace or in a
namespace that is created by the cluster operator. Namespaces help you to:
• Provide teams or projects with their own virtual clusters without fear of impacting each
other’s work.
• Enhance role-based access controls (RBAC) by limiting users and processes to certain
namespaces.
• Enable the dividing of a cluster’s resources between multiple teams and users through
resource quotas.
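As a brief sketch of these ideas (the names and quota values are illustrative), a cluster operator might create a namespace for one team and cap its resource use with a quota:

    # Create a virtual subcluster for one team.
    kubectl create namespace team-a
    # Cap the team's share of the cluster with a resource quota.
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: team-a-quota
      namespace: team-a
    spec:
      hard:
        requests.cpu: "4"        # at most 4 CPUs requested in total
        requests.memory: 8Gi     # at most 8 GiB of memory requested
        pods: "20"               # at most 20 pods in the namespace
    EOF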
Supervisor Cluster
The Kubernetes cluster that is created when activating the Tanzu Kubernetes Grid service is
called a Supervisor Cluster.
A Supervisor Cluster is composed of the control plane and the compute machines, or worker
nodes. Each node runs pods, which are made up of containers.
The control plane is responsible for maintaining the desired state of the cluster, for example,
the applications or workloads that should be running and the images that they should use.
Which function does each Kubernetes control plane component perform to create the pod?
The kubelet component starts the pod and assigns node resources to its containers.
How?
The kubelet component receives pod specifications from the API server. It uses the specs to
ensure that pods and their containers are running as expected.
Learner Objectives
After completing this lesson, you should be able to:
• Describe the functions of VMware Tanzu products in Kubernetes life cycle management
• Recognize use cases for VMware Tanzu editions
Introduction
VMware Tanzu products and services help to build, run, and manage modern applications by
automating the delivery of containerized applications and managing them in production with
Kubernetes.
Step 1: Build
• Spring is a framework for writing high-performing and easily testable Java code.
• VMware Tanzu® Application Service™ provides a development and deployment platform
across clouds.
• VMware Tanzu® Build Service™ automates container creation, management, and
governance.
• VMware Application Catalog™ provides a customizable selection of open-source software
that is maintained and tested continuously for use in production environments.
Step 2: Run
• VMware vSphere® with VMware Tanzu® provides a Kubernetes experience that is tightly
integrated with vSphere. vSphere runs Kubernetes workloads natively on the hypervisor
layer.
vSphere with Tanzu also contains multiple services that provide access to infrastructure
through a Kubernetes API.
Step 3: Manage
• VMware Tanzu® Service Mesh™ Advanced edition provides consistent control and
security for microservices, end users, and data across all your clusters and clouds.
VMware Tanzu Basic delivers Kubernetes that is embedded in vSphere for on-premises
deployments. With this edition, you can provision clusters directly from vCenter Server and run VMs and
containers side-by-side.
VMware Tanzu Standard is for organizations that want to operate Kubernetes and container
solutions across multiple clouds.
Whereas VMware Tanzu Basic is intrinsically tied to vSphere, VMware Tanzu Standard can
extend a Kubernetes distribution across on-premises and public clouds.
With VMware Tanzu Standard, you can operate one Kubernetes distribution anywhere and
manage it across all your Kubernetes clusters.
VMware Tanzu Advanced simplifies and secures the container life cycle so that teams can
deliver modern applications at scale on-premises and in the public cloud.
It adds a comprehensive global control plane with observability and a service mesh, contains
advanced load balancing, and provides developers with frameworks, data services, an image
catalog, and automated build function.
Learner Objectives
After completing this lesson, you should be able to:
Each tool has different goals. For example, in an enterprise Kubernetes deployment, you
typically use kubeadm and Cluster API. And in a development environment, you typically use
minikube and kind.
kubectl
With kubectl, you run commands against Kubernetes clusters. For example, you can use
kubectl to fetch all the Pods running in a cluster.
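For example, a sketch of that command (it assumes nothing beyond a reachable cluster):

    # List every Pod in every namespace of the current cluster.
    kubectl get pods --all-namespaces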
Cluster API
Cluster API is a declarative API specification that builds on top of kubeadm to add optional
support for managing Kubernetes cluster infrastructure and life cycle.
You use this tool for cluster provisioning, configuration, and management.
minikube
With minikube, you can run Kubernetes locally. This tool runs a single-node Kubernetes
cluster on your personal computer so that you can try out Kubernetes, or use it for daily
development work.
kind
You use kind for running local Kubernetes clusters using Docker container nodes.
This tool was developed for testing Kubernetes itself, but it can be used for local
development.
Cluster API uses Kubernetes-style APIs and patterns to automate cluster lifecycle
management for platform operators. In this way, deployment is consistent and
repeatable across a wide variety of infrastructure environments.
The supporting infrastructure, such as VMs, networks, load balancers, and virtual private clouds
(VPCs), as well as the Kubernetes cluster configuration, is defined in the same way that
application developers define and manage their workloads.
1. Cluster API controllers, which run on a Kubernetes cluster, receive Cluster API definitions
that specify the desired state of a new cluster.
2. Cluster API requests that a cloud provider create the cluster according to these
definitions.
Cluster CRDs
A custom resource definition (CRD) extends the Kubernetes API. Cluster API defines its
resources, such as clusters and machines, as CRDs.
Example CRDs
Cluster: Describes a cluster
Machine: Describes an individual host or VM that backs a Kubernetes node
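As a rough sketch of what a Cluster definition looks like (the API version and the vSphere-specific reference vary by Cluster API release and infrastructure provider, so treat the field values as illustrative):

    kubectl apply -f - <<EOF
    apiVersion: cluster.x-k8s.io/v1beta1
    kind: Cluster
    metadata:
      name: workload-01
    spec:
      clusterNetwork:
        pods:
          cidrBlocks: ["192.168.0.0/16"]   # pod network for the new cluster
      infrastructureRef:                   # provider-specific cluster object
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: VSphereCluster
        name: workload-01
    EOF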
Management Cluster
A management cluster is a Kubernetes cluster that manages the lifecycle of workload
clusters.
It is also where one or more infrastructure providers run and where resources such as
machines are stored.
Infrastructure Providers
Cluster API infrastructure providers include cloud providers such as vSphere (CAPV) and AWS (CAPA).
They provide resources for running machines, for example, networking, load balancers,
and firewall rules.
Workload Clusters
A workload cluster is a Kubernetes cluster whose lifecycle is managed by a management
cluster.
Deploy Tanzu Kubernetes Cluster (TKC) on VMware Cloud on AWS with Tanzu services
Video Transcript
In this video, we are going to deploy a Tanzu Kubernetes cluster, also known as the TKC,
into our vSphere dev namespace.
We will do so by logging in to our supervisor control plane address. Using the kubectl
vSphere plug-in, we will log in to our supervisor control plane address using the --server
parameter and then the user name, which in this case is cloudadmin@vmc.local.
Once logged in, we are now going to switch the Kubernetes context into our vSphere dev
namespace. Next, we use the kubectl get tkr, or Tanzu Kubernetes releases command to
show the available Kubernetes versions that are available for us to provision. In this
example, we can see that we have three versions that are supported: 1.20.2, 1.20.7, and
1.21.2.
Before we can provision our TKC, we must first create a YAML manifest that describes our
desired cluster and apply it with kubectl.
To view the progress of our TKC, we can go ahead and do a kubectl get TKC, and we can
see the current status and whether or not the cluster is currently ready.
If we now switch to our vSphere UI to see what's happening from an infrastructure point
of view, we can see the TKC request has been received by our supervisor cluster, and it is
now retrieving the OVAs for the desired Kubernetes version from our vSphere content
library.
It is now cloning the individual VMs to construct the desired Kubernetes cluster, which is
going to be three control plane VMs and three worker nodes. This can take a few minutes
depending on the size of your Kubernetes cluster and also the desired configuration that
you have specified.
Let's now switch back to the console. If we run a kubectl get TKC, we can see that our
Kubernetes cluster is now fully realized with three control plane nodes and three worker
nodes. And the status is now ready.
To start using this Kubernetes cluster, we need to log in to the TKC. We go ahead and use
our kubectl vsphere command. But now we pass in two additional parameters, which are
the Tanzu Kubernetes cluster name, which in this case is william-tkc-01, and the
Tanzu Kubernetes cluster namespace, which is dev.
Once logged in, again we're going to switch the Kubernetes context to go into our Tanzu
Kubernetes cluster.
Using kubectl get nodes, we can confirm that we have switched to the context of our TKC.
And as we can see, there are three control plane VMs and three worker nodes. At this
point, we are now ready to start deploying an application.
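The commands from the video, gathered into one hedged sketch (the server address is a placeholder, the context names mirror the video, and the user name follows the VMware Cloud on AWS default):

    # Log in to the Supervisor through the kubectl vSphere plug-in.
    kubectl vsphere login --server=<supervisor-address> \
        --vsphere-username cloudadmin@vmc.local
    # Switch into the vSphere dev namespace and list available releases.
    kubectl config use-context dev
    kubectl get tkr                  # Tanzu Kubernetes releases
    # After the cluster is provisioned, watch its status.
    kubectl get tkc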
Deploy the Tanzu Kubernetes Cluster from the Namespace tab on vCenter Server
Create a YAML file that specifies the options for deploying the cluster and run the
appropriate kubectl apply -f command.
Set the kubectl context to a Tanzu Kubernetes cluster manually by using the kubectl config
use-context command.
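To make the last two steps concrete, a minimal sketch follows. The v1alpha1 API version, names, VM class, and storage class are assumptions that vary by environment and release.

    # Deploy a three-control-plane, three-worker TKC declaratively.
    kubectl apply -f - <<EOF
    apiVersion: run.tanzu.vmware.com/v1alpha1
    kind: TanzuKubernetesCluster
    metadata:
      name: tkc-01
      namespace: dev
    spec:
      distribution:
        version: v1.21.2            # one of the versions from kubectl get tkr
      topology:
        controlPlane:
          count: 3
          class: best-effort-small  # a VM class available in the namespace
          storageClass: vsan-default-storage-policy
        workers:
          count: 3
          class: best-effort-small
          storageClass: vsan-default-storage-policy
    EOF
    # Then set the kubectl context to the new cluster manually:
    kubectl config use-context tkc-01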
For a description of commonly used commands, see Command line tool (kubectl) on the
Kubernetes website at https://kubernetes.io/docs/reference/kubectl/.
To troubleshoot Tanzu Kubernetes cluster errors, you can run a utility to collect a diagnostic log
bundle.
VMware provides the TKC Support Bundler utility that you can use to collect Tanzu Kubernetes
cluster log files and troubleshoot problems.
To obtain and use the utility, access knowledge base article 80949.
Learner Objectives
After completing this lesson, you should be able to:
You can deploy and run containerized workloads across software-defined data centers (SDDCs)
and public cloud environments, including vSphere, Microsoft Azure, and Amazon EC2.
A Tanzu Kubernetes Grid instance includes a management cluster, deployed Tanzu Kubernetes clusters,
and the shared and in-cluster services that you configure.
A bootstrap machine initializes a Tanzu Kubernetes Grid instance by bootstrapping a
management cluster on the cloud infrastructure of choice. After bootstrapping the
management cluster, the machine manages the Tanzu Kubernetes Grid instance.
A bootstrap machine is typically a VM on which you download and run the Tanzu CLI. The
machine includes the Tanzu CLI and installer interface, and Tanzu Kubernetes cluster plans.
Tanzu CLI
After a management cluster is created, the Tanzu CLI communicates with it to create,
scale, upgrade, and delete Tanzu Kubernetes clusters.
The installer interface is launched from the Tanzu CLI and is a graphical wizard that guides
you through the configuration of a management cluster.
Cluster Plans
You can customize default cluster plans and build new cluster plans.
The management cluster is a Kubernetes cluster that is the primary management and
operational center for the Tanzu Kubernetes Grid instance.
It runs Cluster API to create the Tanzu Kubernetes clusters. And it is where you configure
the shared and in-cluster services that the clusters use.
NOTE: In vSphere with Tanzu, the supervisor cluster performs the role of the management
cluster.
Your application workloads run in the Tanzu Kubernetes clusters. Tanzu Kubernetes Grid
automatically deploys clusters to the platform on which you deployed the management
cluster.
You can manage the entire life cycle of Tanzu Kubernetes clusters by using the Tanzu CLI.
NOTE: The terms Tanzu Kubernetes cluster and workload cluster are used
interchangeably.
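A sketch of those life cycle operations as Tanzu CLI commands (the cluster name, plan, and node count are illustrative, and exact syntax varies by Tanzu CLI release):

    tanzu cluster create tkc-prod-01 --plan prod      # create from a cluster plan
    tanzu cluster scale tkc-prod-01 --worker-machine-count 5
    tanzu cluster upgrade tkc-prod-01                 # move to a newer Kubernetes
    tanzu cluster delete tkc-prod-01
    tanzu cluster list                                # review remaining clusters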
Shared and in-cluster services are services that run in a Tanzu Kubernetes Grid instance,
providing authentication, ingress, logging, and service discovery.
A single bootstrap machine can bootstrap many instances of Tanzu Kubernetes Grid
across different environments, IaaS providers, and failure domains.
Tanzu Kubernetes Grid also includes signed and supported versions of open-source applications
to provide the container registry, networking, monitoring, authentication, ingress control,
logging, and service discovery that a production Kubernetes environment requires.
Learner Objectives
After completing this lesson, you should be able to:
But the operations team must still manage a Kubernetes runtime consistently across multiple
data centers and clouds.
Finding Solutions
Your first task is to deploy a distributed application across public and private clouds in different
locations.
Tanzu Observability
Tanzu Observability provides monitoring and analytics for the distributed application and
the infrastructure that it runs on.
Tanzu Kubernetes Grid
It simplifies installation. Tanzu Kubernetes Grid includes the tools and open-source
technologies for deploying and consistently operating a scalable Kubernetes environment
across VMware private cloud, public cloud, edge, or multiple clouds.
The operations team must monitor multiple endpoints to manage, scale, and maintain
resiliency and availability. But operational and remediation policies differ across clouds. And
security, auditing, and compliance are inconsistent.
Which solutions can help address these issues? (Select two options)
Video Transcript
Hi, I'm Corey Dinkens, a technical marketing manager with VMware. In this short video,
I'm going to give an overview of VMware Tanzu Mission Control.
We can also attach any Kubernetes clusters running anywhere for not only visibility but
also control of that cluster. You can see here how I have attached a variety of cluster
types, such as AKS, GKE, EKS, OpenShift, and Tanzu Kubernetes Grid on vSphere.
Here we see a few clusters that have an upgrade available. With Tanzu Mission Control,
you can easily upgrade your clusters with a click of a button in the UI. Tanzu Mission
Control can also be driven using rest API endpoints or command line. Other life cycle
management tasks you can perform are scaling up nodes, scaling down nodes, and also
removing or adding node pools.
As you can have tens, hundreds, or even thousands of clusters, you need a way to easily
group them. Cluster groups allow you to organize your Kubernetes clusters into logical
groupings so you can apply a common set of policies to those clusters.
An example would be to align with business units or different environments such as dev,
test, or prod. With Tanzu Mission Control catalog, you can deploy Carvel packages to your
cluster with a click of a button, select your package from the catalog, and click INSTALL
PACKAGE. Public and private Carvel repositories are supported. Tanzu Mission Control has
access, image, network, security, quota, and even custom policies for defining your own.
The underlying policy engine for Tanzu Mission Control is Open Policy Agent Gatekeeper,
also known as OPA Gatekeeper. Tanzu Mission Control provides centralized
declarative policy management for your organization. This allows fine-grained policy control
across your Kubernetes fleet, eliminating significant amounts of operational toil.
We can apply policies to nearly any organizational construct within Tanzu Mission Control,
such as an organization, a cluster group, a cluster, and a workspace. Policy insights
provides a centralized holistic view of the current state of policy events in your
organization. You can view fleet-wide policy-related information, including sync issues and
violations.
As an operator, I'm responsible for the health of clusters across my organization. I can
view the baseline health of clusters, which is necessary information for operators. We can
also do this for workloads. This page provides a view of all workloads across all of my
clusters, and I can quickly see their status at a glance.
Tanzu Mission Control integrates with industry-leading monitoring tools, such as VMware
Tanzu® Observability™ by Wavefront, a SaaS monitoring platform. You can easily open
Tanzu Observability from the current cluster you are viewing.
Tanzu Observability allows you to collect data from many services and sources across your
entire application stack. The included out-of-the-box dashboards are easily customized.
You can run preconfigured cluster inspections using Sonobuoy, an open-source
community standard. The conformance inspection validates the binary's running on your
The CIS benchmark inspection evaluates your cluster against the CIS benchmark for
Kubernetes published by the Center for Internet Security. Operators need to provide data
protection for the Kubernetes applications and the clusters that they run on.
Tanzu Mission Control data protection leverages the open-source project Velero under
the hood and enables operators to centrally manage data protection on their clusters
across multiple environments, easily backing up and restoring their Kubernetes clusters
and namespaces.
That completes this demonstration. Thank you for watching. For more information about
Tanzu Mission Control, please see tanzu.vmware.com.
The operations team wants to integrate a monitoring tool with Tanzu Mission Control. Which
VMware Tanzu monitoring tool can you integrate? (Select one option)
Tanzu Observability
Tanzu Application Service
Tanzu Kubernetes Grid
VMware Tanzu Service Mesh - Connectivity and Security for Modern Applications
Video Transcript
With VMware Tanzu Service Mesh, application owners can connect, secure, and observe
distributed applications across end-users, microservices, APIs, and data. With Tanzu
Service Mesh, you can abstract the infrastructure layer from the application layer to
provide strong isolation using global namespaces.
Tanzu Service Mesh can automatically scale application instances up and down or
cloudburst to a standby cluster to meet the performance objectives for SLO compliance.
Tanzu Service Mesh provides operations teams with rich troubleshooting tools, including
multi-cloud topology maps, traffic flows, and performance and health metrics. Security
teams gain insights from API baselining and drift detection, including API
parameter validation and security analytics that address behavioral anomalies,
unsanctioned usage, API threat detection, and PII detection.
Get advanced end-to-end application connectivity and security for modern distributed
applications with VMware Tanzu Service Mesh.
Tanzu Service Mesh can be installed in Tanzu Kubernetes Grid clusters and third-party
Kubernetes-conformant clusters. And it can be used with clusters managed by Tanzu
Mission Control or clusters managed by other Kubernetes platforms and managed
services.
Global Namespaces
With global namespaces, you can transcend infrastructure limitations and boundaries, and
securely stretch applications across clusters and clouds.
You get consistent traffic routing, application resiliency, and security policies for your
applications across cloud siloes, regardless of where the applications are running.
Solution Architecture
Learner Objectives
After completing this lesson, you should be able to:
In a series of interactive simulations, you perform tasks to deploy a Tanzu Kubernetes cluster
using Tanzu Mission Control:
https://labs.hol.vmware.com
• Kubernetes control plane components manage your cluster, its state data, and its
configuration. The control plane interacts with individual cluster nodes using the kubelet,
an agent deployed on each node.
• VMware Tanzu products and services help to build, run, and manage modern applications
by automating the delivery of containerized applications and managing them in
production with Kubernetes.
• To create Kubernetes clusters, you can use the kubectl command line, Cluster API, and
kubeadm.
• Tanzu Kubernetes Grid automates the life cycle management of multiple Tanzu
Kubernetes clusters.
• The management cluster is a Kubernetes cluster that is the primary management and
operational center for the Tanzu Kubernetes Grid instance. Application workloads run in
Tanzu Kubernetes clusters.
• Tanzu Service Mesh provides consistent control, connectivity, and security for
microservices, end users, and data in multi-cluster and multi-cloud environments.
Additional Resources
• For more information about Kubernetes concepts, components, and commands, see the
Kubernetes website at https://kubernetes.io/docs/home/.
• For more information about VMware Tanzu products and solutions, see the VMware
Tanzu documentation at https://docs.vmware.com/en/VMware-Tanzu/index.html.
• For more information about multi-cloud solutions with Kubernetes and VMware Tanzu,
see "Make Your Move to Multi-Cloud Kubernetes with VMware Tanzu" on the YouTube
website at https://www.youtube.com/watch?v=aRfOxKqPm5o&t=339s.
• For information about Tanzu Kubernetes Grid and Tanzu Mission Control, see "Multi-
cluster and Multi-cloud Demo with TKG and TSM I VMware Tanzu" on the YouTube
website at https://www.youtube.com/watch?v=AJuaiZTn3OA.
Learner Objectives
After completing this lesson, you should be able to:
• Explain uses for Hybrid Linked Mode in VMware Cloud on AWS SDDCs
• Identify login authentication options for VMware Cloud on AWS SDDCs
• Set up Hybrid Linked Mode using the VMware Cloud Gateway Appliance
• For VMware Cloud on AWS to trust the on-premises users (one-way trust)
• To retain the separation between on-premises and VMware Cloud on AWS permissions
• To migrate workloads both to and from on-premises and VMware Cloud on AWS
Hybrid Linked Mode is a version of Enhanced Linked Mode that is built for VMware Cloud on
AWS.
Using Enhanced Linked Mode, you can perform the following tasks:
• Connect multiple vCenter Server systems by using one or more VMware Platform Services
Controller appliances.
• Log in to all linked vCenter Server systems simultaneously and view and search their
inventories from a single interface.
With Hybrid Linked Mode, you link your cloud vCenter Server system to a domain that has
multiple vCenter Server instances. These on-premises instances are themselves linked using
Enhanced Linked Mode.
With Hybrid Linked Mode, you can use a single VMware vSphere Client interface for both on-
premises and cloud deployments.
The vCenter Server instances are managed in separate vCenter Single Sign-on domains.
Hybrid Linked Mode creates a unidirectional trust between vSphere SSO domains. This trust cannot
be bidirectional.
You can also migrate workloads between your on-premises data center and VMware Cloud on AWS.
Before you configure Hybrid Linked Mode, you must meet several prerequisites. The following
prerequisites are common to both vCenter Cloud Gateway Appliance and VMware Cloud on
AWS SDDC:
• Verify that your on-premises data center and the VMware Cloud on AWS SDDC are
synchronized to an NTP service or other authoritative time sources.
• Configure an IPsec VPN connection between your on-premises data center and VMware
Cloud on AWS.
• Verify that the maximum latency between VMware Cloud on AWS and an on-premises
data center is 100 milliseconds round trip.
• Determine the on-premises users to whom you want to grant Cloud Administrator permissions
and add the users to a group within your identity source. Verify that this group can access
your on-premises environment.
• Ensure that you have credentials for a user who has a minimum of read-only access to the
base distinguished name (DN) for users and groups in your on-premises environment.
• Confirm that an on-premises DNS server is configured for your management gateway so
that it can resolve the FQDN for the identity source.
• Confirm that you have the credentials for your on-premises vSphere SSO domain.
Active Directory (AD) groups get mapped from your on-premises environment to the
cloud.
Deploying the vCenter Cloud Gateway and Configuring Hybrid Linked Mode in VMware Cloud
on AWS
Video Transcript
Welcome to the VMware Cloud on AWS quick start series. Wouldn't it be nice if you could
manage your on-premises and cloud inventories in a single pane of glass? Well, you're in
luck. You can maintain operational efficiency with the vCenter Cloud Gateway appliance.
I'm Jeremiah Megie with VMware. And in this video, I'll walk you through deploying this
appliance and configuring Hybrid Linked Mode.
The vCenter Cloud Gateway receives automatic updates based on the version of the
connected SDDC. So there's never a need to manually patch or upgrade the appliance. If
you have multiple vCenters in the same SSO domain, you'll be able to view and manage all
of them in the same inventory, along with the cloud vCenter. Configuring Hybrid Linked
Mode also affords you the ability to perform migrations between environments directly
with the UI.
Deploying the appliance is very simple. From the Cloud Console, we can navigate to Tools
for the DOWNLOAD link. This redirects us to our My VMware download page, where we
can save the image locally and then run the installer.
There are two stages: Deploying the appliance and configuring Hybrid Linked Mode.
Click START and navigate through the wizard. Provide the on-premises vCenter FQDN and
credentials where you wish to deploy the appliance. Then select the data center, folder,
and cluster. Provide a VM name and root password, select the datastore and then
proceed to the network settings. Select the network or port group that the appliance
should be connected to. Then specify the FQDN, IP address, subnet, gateway, and DNS.
Specify your NTP servers as time sync is especially important. Provide your PSC
information, which may be the same as the vCenter information, depending on your
configuration. Finally, join the appliance to Active Directory by providing a domain name
and credentials. The appliance will be fully deployed and configured in about 10 to 15
minutes on average, but this varies based on your environment specifics.
Once the deployment is complete, we can start the configuration and we only need to
supply a small amount of information. Provide the cloud vCenter FQDN and the password
to the cloud admin account. Next, select the domain from the Identity source drop down
menu, then search for the Active Directory groups that you wish to provide administrative
access to. The linking process only takes a few minutes.
At this point, we can launch the vSphere Client by pointing our web browser at the
vCenter Cloud Gateway appliance, and then logging in with our Active Directory
credentials. As long as our user is in the group that we provided access to during the
configuration, we will be able to see all the vCenters in the same on-premises SSO
domains, as well as our cloud vCenter. Notice we can quickly get access to help
documentation and chat support from the UI.
Be sure to visit VMware Cloud Tech Zone for the latest VMware Cloud on AWS resources.
Authentication Options
After Hybrid Linked Mode is configured, you can log in to the vSphere Client from the VMware
Cloud on AWS console or from the vCenter Cloud Gateway Appliance.
From the VMware Cloud on AWS console (also referred to as the VMware Cloud console), open
the vSphere Client and log in as cloudadmin@vmc.local (or a user with Cloud Administrator
permissions).
From the vCenter Cloud Gateway Appliance, launch the vSphere Client and log in with Active
Directory credentials.
The user account should be in the AD group that you provided access to during the Hybrid
Linked Mode configuration. In this way, you can view all vCenter Server instances in the on-
premises SSO domains and in your cloud vCenter Server instance.
You can configure Hybrid Linked Mode from your SDDC if your on-premises LDAP service is
provided by a native Active Directory (Integrated Windows Authentication) domain or an
OpenLDAP directory service.
This step is optional when configuring Hybrid Linked Mode from the Cloud Gateway Appliance,
but adding an identity source does allow you to configure users or groups with a lesser level of
access than the Cloud Administrator.
You can configure Hybrid Linked Mode from the vCenter Cloud Gateway Appliance or from the
VMware Cloud on AWS SDDC vSphere Client.
Configuring Hybrid Linked Mode from the Cloud Gateway Appliance: Centralized administration
is available through the vSphere Client that is hosted by the vCenter Cloud Gateway Appliance
on-premises.
Configuring Hybrid Linked Mode from the vSphere Client: Centralized administration is
available through the vSphere Client that is hosted on VMware Cloud on AWS.
When you configure Hybrid Linked Mode from VMware Cloud on AWS, the Identity Management
connection requests can increase network traffic charges and application latency.
Learner Objectives
After completing this lesson, you should be able to:
• Footprint expansion
• On-demand capacity
• Testing and development
Your organization plans to use cloud migration to move a limited set of mobile applications to
public cloud architecture for hosting and DevOps management.
Cloud migration presents several challenges that can have negative outcomes.
You can choose from different migration solutions that help to minimize the negative
outcomes of cloud migration, achieve zero downtime, provide live migration of workloads
from one server to another, and ultimately ensure business continuity.
Each method can be used to achieve different goals. Which method do you think can help
achieve the following goals?
VMware HCX: Simplify app migration, workload balancing, and business continuity.
Hot and Cold Migration: Move powered-on or powered-off VMs between on-premises and
cloud environments.
Advanced Cross vCenter vMotion: Migrate workloads between vCenter Server instances.
Content Library: Share OVF templates, ISO images, and scripts across vCenter Server instances.
Migration Solutions
The following migration solutions are available for use within an SDDC or between SDDCs:
• VMware HCX
• Live Migration
• Cold Migration
• Content Library
• Advanced Cross vCenter vMotion
• Enhanced vMotion Compatibility
VMware HCX
VMware HCX is an application mobility platform that helps simplify application migration,
workload rebalancing, and business continuity across data centers and clouds.
VMware HCX is available in cloud SDDCs, such as VMware Cloud on AWS, Azure VMware
Solutions, and Google Cloud VMware Engine.
Key Capabilities
• vSphere 6.0+ to any current vSphere version on a cloud or modern data center
• Built-in WAN optimized links for migration across the Internet or WAN
• Built-in scheduler to determine replication transfer time
• Bidirectional migration
• Support for VMware vSphere Distributed Switch and Cisco Nexus 1000v switch
• Internet support or AWS Direct Connect support (if using VMware Cloud on AWS) for bulk
migration and VMware vSphere vMotion
In this demonstration, HCX is deployed in a VMware Cloud on AWS SDDC. Then, virtual
machines are migrated from the on-premises data center to the VMware Cloud on AWS SDDC.
VMware HCX is available for you when you start using VMware Cloud on AWS. Let's see
how you can migrate your workloads to the VMware Cloud on AWS SDDC using VMware
HCX.
To deploy VMware HCX, go to Add Ons, OPEN HCX, click DEPLOY HCX, and CONFIRM to
start the deployment. HCX Cloud Manager appliance will be deployed and configured in
the SDDC. Network and compute profiles will also be created during this process. Once it's
finished, open HCX and log into the HCX UI using the same vCenter credentials.
Let's explore what the deployment has done, starting with site pairing. There are no site
pairings at the moment. There isn't any service mesh configured either. But if you go to
network profiles, you will see some network profiles that have been created.
VMware HCX creates a VPN tunnel between the on-premises site and the VMware Cloud
on AWS SDDC. HCX can either use the public Internet or a dedicated connection like AWS
Direct Connect. If you have Direct Connect, the first network profile here called
directConnectNetwork1 is the one that HCX will use. If you want HCX to use the Internet,
then the second network profile here called externalNetwork will be used. The last profile
provides network details for the HCX appliances.
Now, looking at compute profiles, there is one created by default. It shows you the HCX
services available with VMware Cloud on AWS, such as HCX Interconnect, Network
Extension and Bulk Migration.
So, VMware HCX is ready on your VMware Cloud on AWS, but you need to deploy an HCX
Connector appliance in the on-premises environment. I already deployed an HCX Connector
in my on-premises environment.
In the vSphere Client, click Menu and select HCX. Here I am now in the on-premises HCX
UI. I am here because HCX configuration and migration need to be initiated from the
source site, which is your on-premises site.
First thing you have to do is configure the site pairing. Go to Site Pairing. Click ADD A SITE
PAIRING. Here you provide details of HCX deployed in VMware Cloud on AWS, Click
CONNECT.
Now that we have a site pairing, let's configure the service mesh. Select the VMware
Cloud on AWS as the destination site. Next, select compute profiles for each site. For
VMware Cloud on AWS site, you can select the compute profile that has been created by
default. Here you can select the HCX services to activate. The availability of these services
depends on licensing. Depending on the services selected, appropriate appliances will be
deployed automatically by VMware HCX in both sites.
Let's click CONTINUE. This is an optional step. You can choose a specific uplink network
profile for the HCX appliances. But I'm going to leave them as is. This is another optional
advanced configuration that I will leave as is, and same thing here. So, I'll click CONTINUE.
Here, you can review the topology. This diagram displays all the HCX appliances that will
be deployed at each site. Next, let’s name the service mesh. Click FINISH.
It will take a little bit of time for the service mesh between the on-premises site and the
VMware Cloud on AWS SDDC to be created. You can also track the progress by going to
tasks. Once it's done, we can go to the VMware Cloud on AWS SDDC. You will see some
HCX appliances that have been deployed automatically.
Now let's go back to the on-premises HCX UI and use HCX Network Extension to migrate
virtual machines from the on-premises environment to the VMware Cloud on AWS SDDC.
By using HCX Network Extension, you can migrate workloads without changing the
machine IP addresses.
Let's go to Network Extension. Click EXTEND NETWORKS. I'm going to extend the network
named VLAN-10-Apps. Click NEXT. Here, I'm going to enable HCX Mobility Optimized
Networking. This allows virtual machines in the VMware Cloud on AWS SDDC to use the
NSX-T router in that SDDC as a default gateway, instead of having to use the on-premises
router as a default gateway. This optimized routing limits network traffic hairpinning
between the sites. Next let's provide the gateway IP address and click
SUBMIT. Refresh the page and you can now see the progress.
Once the network extension has been completed, we can migrate virtual machines on
that network to the VMware Cloud on AWS SDDC. Go to Migration and click MIGRATE.
Select the destination site. We'll add a couple of SQL servers to migrate. We can name this
mobility group SQL-servers-group-01. Mobility group allows you to implement migration
events that you've planned.
Select the destination compute container. Select destination storage. Here, I'm going to
choose to vMotion the virtual machines. Select the destination folder, and you're ready to
start the migration.
One thing to note here is that vMotion migrates one virtual machine at a time. If you want
to migrate multiple virtual machines at once, you can use bulk migration or replication-
assisted vMotion.
Now that the migration is complete, let's go to the VMware Cloud on AWS SDDC. Here,
you can see the two SQL servers that have migrated from the on-premises environment.
VMware HCX supports bulk migrations, live migrations, and cold migrations on vSphere 6.5 and
later.
Live Migrations
You can move powered-on VMs between your on-premises environment and your cloud SDDC.
This type of migration is also known as a live, or hot, migration.
vSphere vMotion and VMware vSphere® Storage vMotion® are the underlying technologies in a
live migration:
• vSphere vMotion migrates a powered-on VM from one host to another. With vSphere
vMotion, the entire state of the VM is moved from one host to another, but the data
storage remains in the same datastore.
• vSphere Storage vMotion migrates the disks and configuration files of a powered-on VM
from one datastore to another while the VM continues to run.
If you meet several prerequisites, you can perform live migrations to your VMware Cloud on
AWS SDDC. The main requirement is that you enable Hybrid Linked Mode and establish an L2
VPN between your on-premises environment and your cloud SDDC.
To perform vSphere vMotion migrations using Hybrid Linked Mode, you verify the following
settings:
• The source to destination settings must have a minimum bandwidth of 250 Mbps and
maximum latency of 100 milliseconds RTT.
• L2 VPN or AWS Direct Connect is configured between the on-premises environment and
the cloud SDDC.
• Source and destination management network IP address families must match. You cannot
migrate a virtual machine from a host that is registered to vCenter Server with an IPv4
address to a host that is registered with an IPv6 address.
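Before attempting a live migration, you can sanity-check the bandwidth and latency requirements with generic tools. A sketch follows, where the addresses are placeholders and an iperf3 server must be listening on the far side.

    # Round-trip latency must stay at or below 100 ms.
    ping -c 10 <sddc-test-address>
    # Sustained throughput must reach at least 250 Mbps.
    iperf3 -c <sddc-test-address>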
The Change both compute resource and storage option migrates the VM from a host and
datastore in the on-premises environment to a host and datastore in the VMware Cloud on
AWS SDDC.
When this migration type is selected, vSphere vMotion and vSphere Storage vMotion are used
to migrate the powered-on VM from source to destination.
In the vSphere Client, you use vSphere vMotion to migrate a VM from the on-premises
environment to the VMware Cloud environment.
Cold Migration
Using cold migration, you can move powered-off VMs between your on-premises environment
and your cloud SDDC.
Cold migration is best used for non-production workloads, for example, development or test
workloads, where business continuity is least impacted by downtime.
To perform an Advanced Cross vCenter vMotion, use the Migrate wizard in the vSphere Client.
From the Migrate wizard in the vSphere Client, select Cross vCenter Server export for the
migration type.
Configure the target vCenter Server. Either a new vCenter Server is connected, or a saved
connection is selected.
When you select a compute resource, a list of target vCenter Server data centers, clusters, and
hosts appears.
The other wizard options are similar to the compute resource and storage steps.
During the storage step, you can select the correct destination storage.
You might also need to change the VM network to match the target configuration.
The compatibility checks are processed with each step to ensure a successful migration.
Because the live migration occurs over multiple vCenter Server instances, you can view the
migration in process in the current environment, as well as view the receive operation in the
target vCenter Server instance.
Enhanced vMotion Compatibility ensures that all hosts in a cluster present the same CPU
feature set to VMs, even if the actual CPUs on the hosts differ. Using Enhanced vMotion
Compatibility prevents migrations with vSphere vMotion from failing because of incompatible
CPUs.
The Enhanced vMotion Compatibility feature works differently at the host cluster and VM
levels.
Cluster Level
When you migrate a VM out of the Enhanced vMotion Compatibility cluster, a power cycle
resets the VM EVC mode.
The baseline feature set that you configure for the VM cannot have more CPU features
than the EVC mode of the cluster or host on which the VM runs.
VM Level
You can change the per-VM EVC mode only when the VM is powered off.
When you configure Enhanced vMotion Compatibility at the VM level, the per-VM EVC
mode overrides cluster-based Enhanced vMotion Compatibility.
If you do not configure per-VM Enhanced vMotion Compatibility, when you power on the
VM, it inherits the EVC mode of its parent Enhanced vMotion Compatibility cluster or
host. The EVC mode becomes an attribute of the VM.
Enhanced vMotion Compatibility Example
• VMware HCX orchestrates per-VM Enhanced vMotion Compatibility for live migrations.
• VM mobility is possible across all supported Intel chipset generations.
• VM mobility is possible regardless of power cycles.
Content Library
When you first access your cloud SDDC, you spin up new workloads. To perform this task, you
must access the VM templates, ISO images, OVFs, and scripts that you use in your on-premises
data center.
You can onboard or share these objects with your new SDDC. The fastest and easiest way to
onboard content into the cloud SDDC is by using a content library.
A content library organizes and automatically shares your corporate OVF templates, ISO
images, and scripts across vCenter Server instances, including the instance running in your new SDDC.
To create a content library across your cloud, you take the following steps:
• Create a content library in your on-premises data center, if you do not already have one, and
add your templates, ISO images, and scripts to it.
• Publish the on-premises content library.
• In your cloud SDDC, create a content library that subscribes to the published library. Content
is synchronized from your on-premises data center to your SDDC.
This lesson covered the following migration solutions:
• VMware HCX
• Live migration
• Cold migration
• Content Library
• Advanced Cross vCenter vMotion
• Enhanced vMotion Compatibility
You want to migrate powered-on VMs from your on-premises environment to a VMware Cloud
on AWS SDDC without affecting your business continuity. Which methods do you use? (Select
two options)
Hot migration
Cold migration
VMware HCX
Content Library
Advanced Cross vCenter vMotion
Learner Objectives
After completing this lesson, you should be able to:
To build a hybrid and multi-cloud strategy, you must consider the best solution for achieving the following
goals:
Hybrid Applications
Consider an example of a customer using VMware HCX and VMware Cloud on AWS for its production
environment.
On day 1, the customer moves its production booking system, application, and web tiers to VMware Cloud on
AWS. The customer experiences a performance improvement.
The core application runs on older hardware that uses an older vSphere version. But by moving to VMware
Cloud on AWS, the customer runs on newer hardware and a newer vSphere version.
The customer performs the move on day 1, creating its network bridge, stretching the web and application
networks, and migrating a live application.
Burst Capacity
Consider an example of a customer using VMware HCX and VMware Cloud on AWS in its environment.
For example, a media company has regular development cycles in its existing on-premises VMware data center.
It runs out of capacity and does not want another purchasing cycle.
By spinning up an instance of VMware Cloud on AWS on-demand, the company can use VMware HCX to
consume excess capacity when required and remove it when it is not needed.
Bulk Migration
You can schedule and migrate several vSphere VMs in and across data centers without requiring a reboot.
Consider an example of an organization using VMware HCX and VMware Cloud on AWS in their environment.
An organization wants to migrate workloads from a legacy vSphere environment and other platforms to
VMware Cloud on AWS. It wants to drive large-scale migration and accelerate transformation (in months,
not years).
VMware HCX is an application mobility platform that is designed for simplifying application migration. After the
organization establishes hybridity between on-premises and the cloud, it can efficiently move workloads
without downtime.
Yes, the core function of VMware HCX is to migrate workloads transparently between environments.
No, VMware HCX focuses on migrating workloads permanently to one or more SDDCs on premises.
Yes, the main function of VMware HCX is to expand capacity to cloud environments.
• The infrastructure hybridity provides a high throughput, low latency, layer 2 network extension, which is
WAN-optimized and load-balanced and provides traffic engineering with intelligent routing and fairness
for large migrations.
• The hybrid cloud is secured with military-grade (Suite B) encryption. This cloud can extend to multiple sites
and multiple clouds of different vSphere versions.
• VMs can securely and seamlessly migrate bi-directionally and in bulk. VMware HCX supports live vSphere
vMotion migration and warm bulk migration, with low downtime.
Example of VMware HCX infrastructure between an on-premises data center and a VMware Cloud on AWS SDDC
Bulk Migration
Bulk migration, or replication-based migration, uses the VMware vSphere Replication protocols to move the
virtual machines to a destination site.
• You can use this migration type for migrating a large number of VMs in parallel.
• When the replica is ready, you can choose the switchover mode:
○ Immediate switchover as soon as the replica is ready at the destination
○ Scheduled switchover during a predetermined maintenance
VMware HCX migration using vSphere vMotion provides the following features and benefits:
• Migrates workloads into the cloud SDDC without impact to the application owner
• Incorporates SD-WAN technologies, including WAN acceleration, traffic management, and intelligent
routing
You can move a running application to the cloud, on a stretched network, without changes to the VM, and
maintain the existing security context.
The cold migration method uses the VMware Network File Copy (NFC) protocol. It is automatically selected
when the source virtual machine is powered off.
Requirements for VMware HCX using vSphere vMotion and cold migration are as follows:
• VMs with raw disk mapping in compatibility mode (RDM-V) can be migrated.
Cold Migration
vMotion Migration
Bulk Migration
VMware HCX Replication Assisted vMotion combines advantages from VMware HCX bulk migration (parallel
operations, resiliency, and scheduling) with VMware HCX vMotion (zero-downtime VM state migration). It
simplifies the planning, execution, and operationalization of large-scale mobility to public or private clouds.
Switchover Window
Administrators can specify a switchover window.
Continuous Replication
After a set of VMs is selected for migration, VMware HCX Replication Assisted vMotion does the initial
syncing and continues to replicate the delta changes until the switchover window is reached.
Concurrency
Multiple VM migrations can proceed in parallel rather than one at a time.
Resiliency
VMware HCX Replication Assisted vMotion migrations are resilient to latency and varied network and
service conditions during the initial sync and continuous replication sync.
Switchover
Large chunks of data synchronization by way of replication mean smaller delta vMotion cycles, which, in
turn, means that large numbers of VMs switch over in a maintenance window.
• The Hybrid Interconnect, Bulk Migration, vMotion, and Replication Assisted vMotion services must be
activated and in a healthy state in the relevant service mesh.
• The resources to create, power on, and use the VM must be available in the destination environment.
Replication Assisted vMotion uses vSphere Replication, whose potential throughput can vary
depending on the bandwidth available for migrations, latency, available CPU/MEM/IOPS, and disk read
speed.
• Replication begins with a full synchronization (replication) of the VM disks to the destination site.
• You can have the switchover process start immediately following the initial sync or delay the switchover
until a specific time using the scheduled migration option. If the switchover is scheduled, the
synchronization cycle continues until the switchover begins.
• The final delta synchronization begins when the switchover phase starts. During this phase, vMotion is
engaged for migrating the disk delta data and virtual machine state.
• As the final step in the switchover, the source VM is removed, and the migrated VM is connected to the network at the destination site.
• Replication Assisted vMotion creates two folders at the destination site. One folder contains the virtual
machine infrastructure definition, and the other contains the VM disk information. This is normal
behavior for Replication Assisted vMotion migrations and has no impact on the functionality of the VM at
the destination site.
• The HCX OS Assisted Migration service uses the Sentinel software that is installed on Linux- or Windows-
based guest VMs to assist with communication and replication from their environment to a VMware
vSphere SDDC.
• Sentinel gathers the system configuration from the guest virtual machine and assists with the data
replication. The source system information is used by various HCX OS Assisted Migration service
processes.
• Sentinel also helps with the data replication by reading data that is written to the source disks and
passing that data to the SDR appliance at the destination site.
• Guest virtual machines connect and register with an HCX Sentinel Gateway (SGW) appliance at the source
site. The SGW then establishes a forwarding connection with an HCX Sentinel Data Receiver (SDR)
appliance at the destination vSphere site. You specify the network connections between the guest virtual
machines and SGW in the compute profile.
• You must install the HCX Sentinel software on each guest VM requiring migration to initiate the guest VM
discovery and data replication. After Sentinel is installed, a secure connection is established between the
guest virtual machine and the HCX SGW. HCX builds an inventory of candidates for migration as the
Sentinel software is installed on the guest virtual machines.
• Using the established connection between the SGW and SDR, replication connections are made between
the Sentinel software on the guest virtual machines and the SDR, with one connection each for control
operations and data replication.
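The SGW-to-SDR forwarding model implies some simple per-guest bookkeeping: each registered guest gets one control connection and one data-replication connection. The Python sketch below is a hypothetical model of that relationship, not the real HCX internals.

from dataclasses import dataclass, field

@dataclass
class SentinelGateway:
    # Hypothetical model of SGW bookkeeping (not the real HCX internals).
    sdr_address: str                       # SDR appliance at the destination
    connections: dict = field(default_factory=dict)

    def register_guest(self, guest: str) -> None:
        # One connection each for control operations and data replication.
        self.connections[guest] = ("control", "data")

sgw = SentinelGateway(sdr_address="sdr.destination.example.com")
sgw.register_guest("legacy-linux-01")
print(sgw.connections)  # {'legacy-linux-01': ('control', 'data')}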
2. An organization has VMs in a non-vSphere environment. They want to migrate the VMs to the cloud.
Which type of migration is the best solution in this scenario? (Select one option)
OS Assisted Migration
Replication Assisted vMotion
Cold Migration
vMotion Migration
You use VMware HCX to migrate a VM from the on-premises environment to a VMware Cloud on AWS
environment.
Learner Objectives
After completing this lesson, you should be able to:
VMware HCX comprises a virtual management component at both the source and destination sites, and up to
five types of VMware HCX Interconnect service appliances depending on the HCX license.
VMware HCX services are configured and activated at the source site and then deployed as virtual appliances
at the source site, with a peer appliance at the destination site.
VMware HCX Components: HCX Manager, HCX Network Extension, HCX WAN Optimization, HCX-IX Interconnect
HCX Manager
• Provides the framework for the deployment of the VMware HCX service appliances
• Integrates with vCenter and uses existing SSO for authentication
• Supports actions against HCX Manager from the VMware HCX user interface or context menus
HCX-IX Interconnect
• Provides migration and cross-cloud vSphere vMotion capabilities over the Internet or private lines
• Provides suite-B encryption, traffic engineering, and VM mobility
HCX Network Extension
Virtual hardware requirements for VMware HCX appliances apply for both the source and destination
environments.
When VMware HCX is used to extend networks in deployments using VMware NSX at the destination,
additional network extension (HCX-NE) appliances are required when extending more than 8 networks.
You should never use VMware HCX to extend the vSphere management network or other VMkernel networks
(for example, vMotion, vSAN, replication) to the remote site.
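Given the eight-network limit noted above, the number of HCX-NE appliances a design needs can be estimated with a one-line calculation. This sketch assumes only what the text states: each appliance extends up to 8 networks.

import math

NETWORKS_PER_NE_APPLIANCE = 8  # limit described above

def ne_appliances_needed(extended_networks: int) -> int:
    # Round up: a ninth network requires a second appliance.
    if extended_networks <= 0:
        return 0
    return math.ceil(extended_networks / NETWORKS_PER_NE_APPLIANCE)

print(ne_appliances_needed(8))   # 1
print(ne_appliances_needed(9))   # 2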
In cloud-to-cloud environments, you deploy HCX Cloud Manager at both the source and destination sites. In
legacy vSphere to cloud (private or public) deployments, you install HCX Connector at your on-premises or
legacy site and HCX Cloud Manager at the destination cloud site.
Both the source and destination sites have a management interface for HCX administration and HCX actions.
This management interface is called HCX Manager.
HCX Connector on the source site or HCX Cloud Manager on the destination site are often referred to
as simply HCX Manager.
HCX Connector and HCX Cloud Manager must have connectivity of the following types:
HCX Connector is the central launch point for VMware HCX mobility services. HCX Connector has the following
characteristics:
• It is an OVA that must be deployed on the source site from where workloads are migrated
• It provides the job framework for multisite mobility operations
• Its GUI can be used to deploy other components and can be used to migrate virtual machines and to
protect VMs
True
False
HCX-IX Interconnect
The HCX-IX appliance provides VM mobility using vSphere Replication, vSphere vMotion, and NFC protocols.
The HCX-IX service appliance provides VM replication and vSphere vMotion based migration capabilities over
the Internet and private lines to the destination site, with strong encryption, traffic engineering, and virtual
machine mobility.
NFC is a proprietary VMware protocol that is used to transfer virtual disk data between hosts, vCenter Server,
and ESXi clients.
The WAN Optimization service helps to move workloads faster and with less network traffic than
traditional methods.
Send VMDK
ESXi passes the source VMDK through military-grade encryption to the WAN Optimization appliance.
The WAN Optimization appliance communicates with the HCX Interconnect appliance, which sends the
VMDK, over either IPsec VPN or AWS Direct Connect, to the VMware Cloud on AWS SDDC.
Decompression
In the VMware Cloud on AWS SDDC, the source VMDK is decompressed and decrypted by the WAN
Optimization appliance.
The VMDK then passes through the hybrid cloud gateway and on to the ESXi host.
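Conceptually, the send side compresses and encrypts before the WAN, and the receive side reverses the steps. The Python sketch below illustrates that ordering with stand-in libraries (zlib and the third-party cryptography package); the HCX appliances implement their own WAN optimization and Suite B encryption, not this code.

import zlib
from cryptography.fernet import Fernet  # stand-in cipher, illustration only

key = Fernet.generate_key()
cipher = Fernet(key)

def prepare_for_wan(vmdk_bytes: bytes) -> bytes:
    compressed = zlib.compress(vmdk_bytes)   # reduce traffic over the WAN
    return cipher.encrypt(compressed)        # protect data in transit

def receive_from_wan(payload: bytes) -> bytes:
    return zlib.decompress(cipher.decrypt(payload))  # reverse the steps

data = b"disk-block-data" * 1000
assert receive_from_wan(prepare_for_wan(data)) == data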
You perform the following tasks when connecting from on-premises to a cloud environment:
• Test and certify the new networks with the help of security teams.
• VMware HCX uses virtualization and abstraction so that you can use components on both the on-
premises site and the cloud to set up a secure bridge. Typically, you set up the bridge over the Internet
while waiting for the Direct Connect circuit to arrive.
• NSX network virtualization is part of the cloud SDDC but is not necessary for the on-premises side. The
VMware HCX virtual appliance provides everything that you require for the on-premises site.
• You can stretch layer 2 networks to the cloud. You do not need to create networks. You can bypass the
recertification process because no changes are made to the on-premises network and security.
• VMs can move to the cloud (and back again) without refactoring or IP address changes.
• Virtualization
• Abstraction
2. How does VMware HCX Network Extension work? (Select one option)
You use components on both the on-premises site and the cloud to set up a secure bridge.
You require NSX network virtualization for the on-premises side.
You create layer 2 networks for the cloud side.
You must change IP addressing to allow VMs to move to the cloud (and back again).
Learner Objectives
After completing this lesson, you should be able to:
• Deploy and configure VMware HCX appliances in a VMware Cloud on AWS SDDC
• Create site pairing
• Configure the service mesh
• Configure a network extension
When preparing to install VMware HCX on VMware Cloud on AWS, you perform the following
general steps.
1. Deploy an SDDC
2. Configure firewall access to the SDDC vCenter instance
3. Enable VMware HCX on the VMware Cloud on AWS SDDC
4. Enter the VMware Cloud on AWS SDDC [email protected] service account
credentials
1. On the management network, you identify three IP addresses for the following
components:
• HCX Manager
• HCX Interconnect
• HCX Network Extension
2. You identify one IP address on the vSphere vMotion network.
3. You use a distributed virtual switch for the L2 extension (if using vSphere vMotion).
4. You require two VLANs:
• One VLAN for management network (cannot be stretched)
• One or more VLANs for workloads to be migrated
5. The required ports must be open for WAN connectivity.
6. The HCX Manager outbound firewall requirements are TCP port 443 to
connect.hcx.vmware.com and hybridity-depot.vmware.com.
7. The HCX Interconnect and HCX Network Extension outbound firewall requirements are
UDP port 500 and UDP port 4500.
8. After deployment, you log into the VMware Cloud on AWS console to find the procured
public IPs for HCX Interconnect and HCX Network Extension appliances in the SDDC.
If firewall rules for IPsec traffic require the specific destination IP, the firewall rules must
be created after the deployment of VMware HCX on VMware Cloud on AWS.
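Before deployment, the outbound TCP 443 requirement can be pre-checked from the HCX Manager network with a simple connection test; a minimal sketch is below. Note that the UDP 500 and UDP 4500 (IPsec) requirements cannot be verified with a plain connect test like this.

import socket

ENDPOINTS = ["connect.hcx.vmware.com", "hybridity-depot.vmware.com"]

for host in ENDPOINTS:
    try:
        # Outbound TCP 443, as required for HCX Manager.
        with socket.create_connection((host, 443), timeout=5):
            print(f"{host}:443 reachable")
    except OSError as err:
        print(f"{host}:443 NOT reachable: {err}")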
1. Log in to VMware Cloud on AWS and select View Details on the target SDDC.
2. Navigate to the Add Ons tab.
3. On the VMware HCX tile, click OPEN HCX. A new browser tab opens to the VMware HCX interface.
Cloud on AWS SDDC and the SDDC becomes an eligible VMware HCX target site.
You download the HCX installer on the source site, which is the on-premises data center:
The HCX installer is the HCX Connector OVA file, which is used to deploy HCX Manager on
the source site.
When you are logged in to the HCX Manager, you are automatically prompted to activate
the VMware HCX instance:
Pairing source and destination sites is a requirement for creating a service mesh.
A site pair establishes the connection that is required for management, authentication, and
orchestration of VMware HCX services across a source and destination environment.
A service mesh can be added to a connected site pair with a valid compute profile that is
created on both sites. Adding a service mesh initiates the deployment of VMware HCX
Interconnect virtual appliances on both sites.
Pair the on-premises site with the VMware Cloud on AWS SDDC.
A VMware HCX site pair establishes the connection needed for management,
authentication, and orchestration of VMware HCX services across a source and
destination environment.
Create one or more network profiles. Network profiles are used for management, uplinks,
vSphere Replication, and vSphere vMotion traffic that is associated with a compute
profile.
A network profile specifies the port group, network range, gateway, and DNS settings for
a network that can be consumed by a compute profile.
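The settings that a network profile captures can be pictured as a small record. The following is an illustrative model only, not the actual HCX object schema.

from dataclasses import dataclass

@dataclass
class NetworkProfile:
    # Illustrative model of an HCX network profile (not the real schema).
    name: str
    port_group: str          # backing port group
    ip_range: str            # addresses the HCX appliances may use
    prefix_length: int
    gateway: str
    dns_servers: list[str]

management = NetworkProfile(
    name="mgmt-profile",
    port_group="DPG-Management",
    ip_range="10.10.0.50-10.10.0.59",
    prefix_length=24,
    gateway="10.10.0.1",
    dns_servers=["10.10.0.10", "10.10.0.11"],
)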
A compute profile defines the structure and operational details for the appliances that are
deployed for VMware HCX.
A compute profile contains the compute, storage, and network settings that VMware HCX uses
on this site to deploy the HCX Interconnect-dedicated virtual appliances when a service mesh is
added.
Create a compute profile in the Multi-Site Service Mesh interface in both the source and the
destination VMware HCX environments using the planned configuration options for each site,
respectively.
A VMware HCX service mesh is the effective VMware HCX services configuration for a source
and destination site. A service mesh can be added to a connected site pair that has a valid
compute profile created on both of the sites.
Adding a service mesh initiates the deployment of VMware HCX Interconnect virtual appliances
on both of the sites. An interconnect service mesh is always created at the source site.
The service mesh defines the compute profiles, both local and remote, for deployment.
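Because a service mesh requires a connected site pair with a valid compute profile on both sides, an automation script might verify those preconditions before attempting deployment. A minimal sketch, assuming profiles are tracked by name:

def can_create_service_mesh(site_pair_connected: bool,
                            source_profiles: set[str],
                            dest_profiles: set[str]) -> bool:
    # A service mesh needs a connected site pair and a valid compute
    # profile on BOTH the source and the destination site.
    return site_pair_connected and bool(source_profiles) and bool(dest_profiles)

print(can_create_service_mesh(True, {"onprem-cp"}, {"cloud-cp"}))  # True
print(can_create_service_mesh(True, {"onprem-cp"}, set()))         # False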
Storage profile
Network profile
Compute profile
Host profile
Extending Networks
You can use Network Extension to create layer 2 networks at the destination VMware HCX site
and bridge the remote network to the source network over a multi-gigabit-capable link. The
new stretched network is automatically bridged with the network at the source HCX data
center.
In the vSphere Client, select Services > Network Extension and click CREATE A
NETWORK EXTENSION.
Prerequisites
Prerequisites for using VMware HCX with AWS Direct Connect are as follows:
• The AWS Direct Connect with a private virtual interface (VIF) is only supported on the
VMware Cloud SDDC that is backed by NSX networking.
• The SDDC must be configured to use the AWS Direct Connect private VIF.
• A private subnet that can be reached from on-premises over AWS Direct Connect with
private VIF is reserved for VMware HCX component deployments.
To configure VMware HCX over AWS Direct Connect with a private VIF, you take the following
steps:
Learner Objectives
After completing this lesson, you should be able to:
• Deploy VMware HCX to the VMware Cloud on AWS SDDC and to the on-premises data
center
• Create a site pairing and service mesh between the VMware Cloud on AWS SDDC and the
on-premises data center.
1. Deploy and Configure VMware HCX to the VMware Cloud on AWS SDDC
2. Deploy and configure VMware HCX to the on-premises data center
3. Create a site pairing and service mesh between the VMware Cloud on AWS SDDC and the
on-premises data center
• Hybrid Linked Mode can be configured from the vCenter Cloud Gateway Appliance and
the VMware Cloud on AWS SDDC vSphere Client.
• The vCenter Cloud Gateway Appliance can be installed so that Hybrid Linked Mode can be
configured from an on-premises SDDC.
• VMware HCX is a flexible SaaS tool that is well-suited for bulk workload migrations.
• VMware Cloud on AWS has multiple migration solutions with specific requirements that
help to minimize the negative outcomes of cloud migration challenges.
• VMware HCX does not require NSX to create a network extension from the on-premises
data center to VMware Cloud on AWS.
• In VMware Cloud on AWS, you have restricted permissions on objects that VMware
manages.
Additional Resources
• For more information about VMware HCX, access the chapter on VMware HCX in the
VMware Cloud on AWS in the VMware HCX product documentation
at https://docs.vmware.com/en/VMware-HCX/4.3/hcx-user-
guide/GUID-90467C70-6D3B-411C-B056-16023ED2B839.html.
• For more information about migration solutions, access the VMware Cloud Migration
website at https://vmc.vmware.com/solutions/migration/overview.
Learner Objectives
After completing this lesson, you should be able to:
VM backup and disaster recovery (DR) methods are important parts of business continuity and
DR plans. Each method fulfills different objectives.
Backup
Store copies of VM data in multiple environments as a recovery option
To protect workload data, you can use your preferred third-party backup tool.
With vSphere Storage APIs - Data Protection, backup products can perform
centralized, efficient, off-host, LAN-free backups of VMs.
A backup product that uses vSphere Storage API - Data Protection can back up
VMs from a central backup system (physical or virtual system). The backup
does not require backup agents or any backup processing inside the guest
operating system.
The backup solution must be considered closely when integrating with a VMware Cloud on
AWS environment. The solution might need to be redesigned, upgraded, or replaced.
You can back up your workload VMs using the tools and services available from your cloud
service provider.
With AWS Backup, you can configure backup policies from a central backup console, making it
easy to ensure that your application data is backed up and protected.
AWS Backup provides automated backup schedules, retention management, and life cycle
management. You can enforce your backup policies, encrypt your backups, protect your
backups from manual deletion, and report on backup activity from a centralized console.
While you are responsible for backing up your workload VMs, VMware is responsible
for backing up and restoring the management infrastructure, which includes VMware
vCenter Server®, VMware NSX® Controller instances, and VMware NSX®
Edge appliances.
True
False
• Cyberattacks
About DRaaS
In addition, you don't have to manage and maintain your own DR site.
Video Transcript
As a modern business that continues to transform and become more digital, the need for
protection in the event of an outage grows. Technology and network failures, power
failures, natural disasters, and of course the growing threat of cyberattacks, including
ransomware, are putting you at risk if you don't have a disaster recovery solution in place
to protect you.
It's also important to remember that even after you've moved to the cloud, you still need DR.
Ask yourself: When was the last time you conducted a DR end-to-end test? Even if you
have DR in place today, odds are it requires operational time to manage and monitor,
costly capital investment in infrastructure, and business interruption for testing and validation.
With Disaster Recovery as a Service, you get safety and protection from disaster with
offsite storage, easy test and validation, and simplified prescriptive, or changeable
recovery options; Lower TCO that leverages existing VMware investments and skills,
reducing capital expenditure and retraining your operations; Speed and simplicity with
quick on-ramp and no new skills required.
To learn more about how DRaaS could be right for you, please visit our website.
DR Solutions
You can select a DR solution that best aligns with the criticality, recovery time objective (RTO),
and recovery point objective (RPO) requirements for your applications and workloads, and with
your organizational policies.
VMware Site Recovery Manager is a powerful solution for organizations who want to utilize a secondary data center as their DR site.
VMware Cloud Disaster Recovery offers on-demand DRaaS to protect a broad set of IT services in a cost-efficient manner, with fast recovery capabilities.
VMware Site Recovery delivers hot DRaaS for mission-critical IT services that require very low RPO and RTO and all the benefits of vSphere Replication and Site Recovery Manager.
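As a coarse illustration of how these solutions are positioned, the sketch below maps two of the requirements discussed above (RPO and the presence of a secondary data center) to a suggestion. The 5-minute RPO threshold is an arbitrary placeholder, and a real selection also weighs cost, compliance, and operational constraints.

def suggest_dr_solution(rpo_minutes: float, has_secondary_dc: bool) -> str:
    if has_secondary_dc:
        # Organizations using their own secondary data center as the DR site.
        return "Site Recovery Manager"
    if rpo_minutes <= 5:
        # Hot DRaaS for mission-critical services with very low RPO and RTO.
        return "VMware Site Recovery"
    # On-demand, cost-efficient DRaaS for a broad set of IT services.
    return "VMware Cloud Disaster Recovery"

print(suggest_dr_solution(rpo_minutes=2, has_secondary_dc=False))
print(suggest_dr_solution(rpo_minutes=240, has_secondary_dc=False))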
VMware DR Portfolio
The VMware portfolio of DR solutions can help you to achieve balance in your DR strategy.
Comparing DR Solutions
This table presents the similarities and differences in the VMware DR solutions.
Learner Objectives
After completing this lesson, you should be able to:
Site Recovery
A VMware Cloud on AWS SDDC can use the add-on to become the disaster
recovery site for an on-premises cloud or data center.
Site Recovery uses the host-based replication feature of VMware vSphere Replication and
the orchestration of Site Recovery Manager.
Recovery Site
The recovery site is where the protected VMs are recovered if a failover occurs.
Failover
When disasters occur, Site Recovery fails over workloads to the recovery site:
• Site Recovery performs a disaster recovery failover or a planned migration, and fails
back recovered VMs to the original site.
• With Site Recovery failover, minimal downtime occurs. Business operations can
continue with minimal to possibly no disruption.
Failback
After failover, and when the original site is available, you might want to move your
workloads back as soon as possible:
• Site Recovery provides one-click failback to simplify and automate this action.
• All workloads are migrated back to the original site by following the runbooks from
the original failover.
A cloud SDDC can use the Site Recovery add-on as a disaster recovery site in an on-
premises or cloud data center.
A cloud SDDC cannot use the Site Recovery add-on as a disaster recovery site for an on-
premises data center.
Site Recovery is prebuilt with VMware Cloud on AWS, which helps VMware Cloud on AWS
become the disaster recovery site for an on-premises or cloud data center.
Site Recovery works with Site Recovery Manager and vSphere Replication 8.1 and later to
automate the recovery, testing, re-protecting, and failback of virtual machines.
Site Recovery Manager is a business continuity and disaster recovery solution that helps to plan, test, and run the recovery of VMs between a primary site and a recovery site.
A preview of the Site Recovery Manager Appliance Management Interface Summary page.
The Site Recovery license key is part of the subscription to the service.
When you pair the Site Recovery Manager on-premises instance with the Site Recovery instance in the VMware Cloud on AWS SDDC, the sites are ready for protection and recovery operations.
vSphere Replication
Failover Topologies
Failover is the process of recovering an affected VM by failing over to its replica in the disaster
recovery site.
Site Recovery can be used in several failover topologies, depending on customer requirements,
constraints, and objectives.
Active-Passive
An active-passive failover topology includes a production site that runs applications and
services, and a secondary or recovery site that is idle until needed for recovery.
This common topology provides dedicated recovery resources. You pay for a site, compute
capacity, and storage that are not used most of the time.
Active-Active
In an active-active failover topology, Site Recovery can be used where low-priority workloads,
such as test and development, run at the recovery site and are powered off as part of the
recovery plan.
Recovery site resources are used regularly, rather than being held in reserve. These resources
must provide sufficient capacity for critical systems if a disaster occurs.
Bidirectional
In a bidirectional failover topology, Site Recovery supports the protection of VMs in both
directions.
This topology is used in situations where production applications are operating at both sites,
for example, VMs at site A are protected at site B, and vice versa.
Network Ports
For information about the list of network ports for VMware Site Recovery, access the Site
Recovery Installation and Configuration documentation.
Site Recovery deploys Site Recovery Manager version 8.x in your VMware Cloud on AWS SDDC.
Other compatibility considerations include:
• Site Recovery is compatible with the following vCenter Server and ESXi versions:
○ On-premises vCenter Server versions 6.0 U3 and later, including version 7.0
○ On-premises ESXi version 6.0 U3 and later, including version 7.0
• Site Recovery Manager version 8.3 is compatible with version 8.2 installed on-premises.
• Site Recovery Manager version 8.2 is the latest version to support vSphere version 6.0 U3.
• Site Recovery Manager version 8.3 supports only vSphere version 6.5 and later.
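The version rules above can be captured in a small check. This sketch encodes only the rules stated in this lesson and uses naive string comparison, which happens to work for these particular version strings; always confirm against the official compatibility matrix.

def srm_supports_vsphere(srm_version: str, vsphere_version: str) -> bool:
    # Site Recovery Manager 8.2 is the latest version to support vSphere 6.0 U3.
    if vsphere_version == "6.0 U3":
        return srm_version <= "8.2"
    # Site Recovery Manager 8.3 supports only vSphere 6.5 and later.
    if srm_version == "8.3":
        return vsphere_version >= "6.5"
    return True  # other combinations: consult the compatibility matrix

print(srm_supports_vsphere("8.3", "6.0 U3"))  # False
print(srm_supports_vsphere("8.2", "6.0 U3"))  # True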
Compatibility Requirements
For more information on compatibility requirements, access the Site Recovery Release Notes.
Site Recovery works with Site Recovery Manager and vSphere Replication version 8.1 and later.
Service Delivery
Site Recovery is an add-on for VMware Cloud on AWS. As such, it has the following
characteristics:
• VMware delivers, sells, supports, and maintains the Site Recovery add-on.
• The Site Recovery service places individual VMs and applications in application
consistency groups.
Site Recovery Manager version 8.1.2 is the first version to support VMs that are
attached to NSX-T Data Center installations.
• Do not use non-default plug-in identifiers because they are not supported.
• Verify that network communication succeeds between on-premises vSphere and VMware
Cloud on AWS.
A popular scenario for setting up VMware Site Recovery in your environment is to deploy an on-
premises protected site and a VMware Cloud on AWS SDDC recovery site.
The Site Recovery deployment process for this scenario is outlined as follows:
Through a series of demonstration videos, you can explore how these steps are performed.
This video describes the process of enabling the Site Recovery service, installing the on-
premises components, configuring the VMware Cloud on AWS firewall, pairing the sites, and
configuring mappings.
Installation and Configuration of VMware Site Recovery for VMware Cloud on AWS
In this demo, I'm going to walk you through the installation and configuration of VMware
Site Recovery for VMware Cloud on AWS. That process is going to include activating the
service, downloading the components that we need for on-premises, configuring the
firewall, installing vSphere Replication, installing Site Recovery Manager, and then
configuring vSphere Replication. So let's get started.
Start with activating the service. This is going to, in the background, install and configure
Site Recovery Manager and vSphere Replication within the VMware Cloud on AWS
environment. That's including IP addresses and all related configuration certificates,
things like that. While that process is going on, that's a fully automated process, we can
download the components that we're going to need for our on-prem installation. That
would be vSphere Replication and Site Recovery Manager.
After we get those components downloaded, we can configure the firewall. The firewall
configuration consists of four rules that need to be entered. Those rules are documented
both in the documentation for VMware Site Recovery, as well as in the inline help for
VMware Site Recovery.
Once those rules are created, we would move on to installing vSphere Replication. This is
done with an OVF and is pretty much a standard OVF installation. So it consists of
selecting the OVF, selecting where we're going to install that, accepting license
agreement, putting what datastore we're going to install it in, what network we're going
to connect it to, NTP servers, passwords, networking properties, like gateways, domain
names, DNS servers, the IP address for the appliance and the mask.
While that process completes, we'll move on to installing Site Recovery Manager. Site
Recovery Manager is installed on a Windows server and is again a very basic installation.
After that's completed, we can finish configuring vSphere Replication. And that's going to
consist of changing a couple of things, and putting in a password, and then saving and
restarting services. Once that's complete and our services are running, we'll be able to
access the VMware Site Recovery console through vCenter. And now we're ready to pair
up our sites.
So, to pair up our sites, we're going to select the vCenter, enter in the PSC name for our
VMware Cloud on AWS instance, username and password, select that vCenter, select the
services that we want to pair, and we're done.
Got the sites paired up, log in, we can see details about the Site Recovery Manager pairing
as well as the vSphere Replication pairing. And now we just need to configure our
mappings.
So first off, we'll do network mappings. Go ahead and select the single network that we
want to map to our VMC on AWS instance and add that. Then we'll configure it in
reverse. Configure our test network and we're good to go. Move on to our folders. We're
going to use the automatic mapping here and just select the top level, and it's going to
match up anywhere where it finds the same name. Go ahead and configure those in
reverse. And we're ready to complete that. Lastly, for mappings, we'll configure our
resource mappings. Go ahead and set that up. Select our cluster at our on-prem location
and our compute resource pool within VMC on AWS, configure it in reverse, configure
our placeholder datastore.
This concludes our demonstration of installing and configuring VMware Site Recovery.
The video demonstrates the process of replicating and protecting VMs in VMware Cloud on
AWS.
It describes how to use the HTML5 UI to replicate and protect VMs, including steps such as
selecting replication settings, adding VMs to a new or existing protection group, and adding a
protection group to a new or existing recovery plan.
Replicating and Protecting VMs with VMware Site Recovery for VMware Cloud on AWS | vSAN
In this demonstration, I'm going to walk you through replicating and protecting virtual
machines using VMware Site Recovery for VMware Cloud on AWS. Let's get started.
We start at our on-premises location, select our payroll application, which consists of 10
VMs. Right-click on those VMs. Select Site Recovery actions and configure replication.
That's going to take us to our VMware Site Recovery window, where we can confirm those
10 VMs that we selected, confirm that we want those replicated to our VMC environment.
Select our datastore, in this case we want the WorkloadDatastore, change our replication
settings as needed, things like RPO, guest quiescing.
Next, we can either add our VMs to an existing protection group or create a new
protection group for them. In this case, we're going to create a new protection group. We
also have the same option when it comes to recovery plans, either adding them to an
existing recovery plan or creating a new recovery plan. We'll create a new recovery plan
and then we'll navigate to the VMware Site Recovery window, where we can monitor the
status of replication.
And once they're complete, we can take a look at our protection groups, confirm that that
payroll protection group was created and contains all 10 of those VMs, and that we have a
recovery plan that contains all 10 of those VMs because it contains that protection group.
This concludes our demonstration of protecting VMs with VMware Site Recovery.
This video shows an example of using Site Recovery to fail over from a customer site to VMware
Cloud on AWS. The video also describes how to run a re-protect, recovery plan test, and
failback using planned migration to return workloads to the on-premises data center after the
disaster has passed.
VMware Site Recovery To Failover From Customer Site To VMware Cloud on AWS | vSAN
Video Transcript
In this demo, I'm going to walk you through using VMware Site Recovery to fail over from
a customer site to VMware Cloud on AWS, and then reprotect and fail back.
This picture shows our current situation. Our VMs are protected at our customer site
replicating over to VMware Cloud on AWS. Here we are in our on-premises environment,
our customer environment. These are the 10 VMs that we're protecting that are part of
our payroll application, and we can see that all 10 of them are running.
If we look in our VMware Cloud environment, we see that all 10 of those VMs are
protected. That icon that is highlighted there indicates that that is protected by VMware
Site Recovery. And if we look at our VMware Site Recovery panel, we can see that our
current configuration is paired, connected. Everything's ready to go. We have a protection
group specifically for our payroll application. And we can see that all 10 of our payroll VMs
are protected. And then we also have a recovery plan that contains that protection group.
Our environment has now just experienced an outage. We can no longer connect to our
on-prem environment, and we need to get that payroll application failed over to VMware
Cloud on AWS as quickly as possible. And just to confirm that our situation, if we connect
into the VMware Site Recovery panel within VMware Cloud on AWS, we can see that our
VMware Cloud on AWS environment is connected, but our on-premises site is in an
unknown and disconnected state. And we're generating errors that are showing that
we've lost that connectivity to our on-prem environment.
So, just giving you an idea visually of what that looks like. Our customer site has failed, so we go ahead and run our recovery plan.
One of the things you might have noticed is that the option for running a planned
migration was grayed out. And the reason for that is because our sites are disconnected.
We don't have connectivity between our two sites. What that means is that our recovery
plan is going to run in a slightly different way.
And you'll see that in the steps that it's running through, skipping our pre-synchronizing
storage, and then it’s actually generating an error because it's not able to shut down VMs
at our protected site, and it's not able to prepare those protected site VMs for migration
also. Those two things do not impact the recovery of those VMs, but they do create an
issue for when we want to fail back.
We're running through that recovery process right now. We're getting those VMs
powered on. Just a second here, you'll see that we've succeeded with that. If we look up
at the top, we can see that our recovery has completed. However, there are some errors
and warnings. Part of that is that it wasn't able to shut down those VMs because the sites
aren't connected.
We can now see that within our VMware Cloud on AWS environment, those 10 payroll
VMs are running. Our payroll application is now up. Payroll is able to run our company
payroll, don't have an issue there. We've now restored connectivity to our original
production site.
At our original production site, we can see that our original VMs are still powered on. Now
what we need to do is we need to run our recovery plan another time. We need to do this
in order to clean up any of the issues that were left over from having to run the recovery
plan while the sites were disconnected.
The thing that that's going to take care of is things like shutting down those VMs that
were at that protected site, and preparing them for migration. That will put us in a
position of being ready to reprotect our VMs and get them ready for testing or failback.
The next thing we're going to do is reprotect, and all that this is doing is reversing
replication and protection. So, getting things configured so that we are able to fail back to
our original site or run a test of that failback to our original site. You can see that reflected
in these steps here.
We're just going to configure storage in the reverse direction, and then configure
protection in the reverse direction. Now that we've done that, we see that we have,
again, our VMs running in our VMware cloud environment. We now see that our VMs in
our customer environment are ready, are protected as well. They're showing those
placeholder icons.
The next thing we're going to do is run a test of our recovery plan. A test of the recovery
plan is going to allow us to verify that our recovery plan is going to work the way that we
expected it to, if we need to use it. It's going to do that non-disruptively. When we run a
test of the recovery plan, this is completely non-disruptive to both storage and production workloads.
You can see that we've run through that process. Now you can see that the VMs at our
customer site have that placeholder icon and are powered on. You can see that they're
connected now to our test network instead of to our production network, which is how
we're keeping them network-isolated.
Now that we've completed our test successfully, the next thing that we'll want to do is
clean that test up. The reason that we would do that is that would just allow us to run an
additional test if we wanted to or run a fail over as needed.
That process is now completed. Now the next step for our failback would be to actually
run that planned migration and move our VMs from VMware Cloud on AWS back to our
customer site.
We'll go ahead and run that workflow. You'll see that the option that we're going to select
is that planned migration option. We're not in a disaster recovery situation now. The
difference between a planned migration and disaster recovery is that in planned
migration, if we hit any errors along the way, the plan would stop and give us the
opportunity to fix those before it moved on to the next step. Compare that with disaster
recovery, where it's going to just keep running, because the idea is to get you up. It's a
disaster, so you want to get up and running as quickly as possible.
Our recovery completed successfully, our VMs are back running again at our customer
environment. You can see there now just normal VMs, no special icons anymore. These
are our production VMs.
Now the thing that we need to do is again run that reprotect so that our VMs are
protected in VMware Cloud on AWS. If we have another failure at some point in the
future, again, we are protected. We've already seen this workflow before. We'll run
through this just really fast. Now that workflow has completed.
Now we're ready at this point to run another test. Any time we make a major change in
our environment, it's a good idea to run a test to verify that things are acting and
behaving the way that you would expect them to. We can see that our placeholder VMs
are in our VMware Cloud environment and our regular VMs are in our customer
environment.
Everything is as it should be prior to running that test. We'll go ahead and kick that test
off. That option that you see there for replicating recent changes just allows you to run
that test in two different ways. We can run it in a way where we simulate a disaster,
which would be not replicating those changes. Or, we could run it in a way where we're
simulating a planned migration, which would be where we would replicate those changes.
It just gives you a couple of different options for how you want to run that test.
The test completed successfully, and we can see our test VMs are running in VMware
Cloud on AWS, and we can see that they're using the SRM-generated port groups within
VMware Cloud on AWS to keep that network traffic isolated.
After we test and verify that our application works as we expect it to, now we can run a
cleanup. And all that that's going to do is it's just going to power off those VMs and delete them.
You can deploy multiple instances of Site Recovery in a VMware Cloud on AWS SDDC.
With multiple Site Recovery instances, you can perform the following actions:
• Connect a single VMware Cloud on AWS SDDC to multiple on-premises sites and to other
VMware Cloud on AWS SDDCs for disaster recovery purposes.
• Recover VMs from multiple protected sites to the same VMware Cloud on AWS SDDC, or
recover different sets of VMs from a single VMware Cloud on AWS SDDC to multiple
recovery sites.
You can apply other complex multisite topologies, but you must establish network
connectivity between the remote sites and the shared VMware Cloud on AWS SDDC.
Multisite Topologies
For more information about multisite topologies, access the Site Recovery documentation.
Learner Objectives
After completing this lesson, you should be able to:
To meet its requirements, the organization can use VMware Cloud Disaster Recovery.
The problem with traditional disaster recovery is that it's expensive, complex and
unreliable because data center failover touches many different components from
applications and servers to networking and storage. It ends up being a very complex and
manual process. VMware Cloud Disaster Recovery is transforming DR for all VMware
workloads. With its on-demand DR delivered as an easy-to-use SaaS solution with cloud
economics, we've converted a complex DR process into an easy-to-use
SaaS product. And you only pay for compute resources when you test or when disaster
strikes, which is exactly how cloud-based DR should be: Elastic, pay as needed.
VMware Cloud Disaster Recovery provides simple disaster recovery, combined with cloud
economics, in the event of ransomware attacks, power outages, or natural disasters.
VMware Cloud Disaster Recovery keeps VMs in their native format, eliminating brittle VM
conversions that slow down recovery and make failback a nightmare.
So how does it work? Through a simple UI, you set protection policies and DR runbooks.
Replicas can be created every few hours, multiple times per day, whatever frequency
makes sense for your business. These replicas are then encrypted and stored in their
native VM format in the cloud. Compliance checks automatically run every 30 minutes to
ensure your DR plan works when you need it. Non-disruptive recovery tests can be run as
frequently as desired to reduce risk.
When disaster strikes, just click a button to fail over to the cloud. VMware Cloud DR
automatically provisions an SDDC on VMware Cloud on AWS. The stored replicas, which
could be hours, days, or weeks old, are instantly powered on via an NFS datastore
mounted by ESXi hosts in that SDDC, resulting in fast recovery. And there's no learning
curve for your IT team. During the disaster, they can use the same vCenter tools to
manage their cloud DR site. The last thing you want to do in the middle of a disaster is learn new tools.
Once the disaster is over, failback is simple too. With a click of a button, deduplicated
changed data is compressed, encrypted, and automatically sent back to the production
data center, which minimizes egress charges. This combination of on-demand compute and
efficient cloud storage delivers low total cost of ownership.
You get everything you need for on-demand cloud DR in a single SaaS solution, with top-
of-the-line support from VMware.
Consistent and Familiar Operations
You can manage both production and DR sites with vCenter Server because you retain access to familiar VMware vSphere abstractions.
SaaS-Based Management
Built-In Audit Reports
Continuous DR Health Checks
To help fight ransomware attacks, you can use VMware Cloud Disaster Recovery to create
secure remote backups of critical data through regularly scheduled application consistent
snapshots of VMs and files.
If a ransomware attack occurs, you can go back in time to a moment before the attack
happened and recover snapshots from months or years ago. You can use these snapshots to
rebuild your VMs and computing environment in a recovery SDDC deployed on VMware Cloud
on AWS.
VMware Cloud Disaster Recovery was designed for its systems and repository to be
operationally isolated (known as operational air-gapping) and for instantiating isolated recovery
environments.
Deploying and using VMware Cloud Disaster Recovery involves installing cloud service components to connect the production site to the recovery site.
In the example, a production site is connected to a recovery target site (VMware Cloud on AWS) through cloud-based services.
Production Site
The production site to be protected can be any of your current vSphere clusters.
You can use any of your VMFS, vSAN, or vSphere Virtual Volumes datastores in the production
site.
Cloud-Based Services
VMware Cloud services include the SaaS orchestrator and the scale-out cloud file system (SCFS).
With these components, you can configure protection for the on-premises infrastructure.
Recovery Site
The recovery site is created immediately before a recovery is performed. It does not need
to be provisioned to support replication in the steady state. This site is also called the
failover site.
You can use any of your VMFS, vSAN, or vSphere Virtual Volumes datastores in the
recovery site.
On-Demand Deployment
With an on-demand deployment, the recurring costs of a cloud DR site are eliminated in
their entirety until a failover occurs and cloud resources are provisioned.
For example, a low DR cost, steady-state replication occurs with no active VMware Cloud
on AWS hosts.
With a pilot light deployment, you can deploy a smaller subset of SDDC hosts ahead of
time to recover critical applications with lower RTO requirements than in a purely on-
demand approach.
For example, a steady-state replication occurs with DR costs for only three VMware Cloud
on AWS hosts in a pilot light SDDC cluster.
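A back-of-the-envelope comparison of the two steady-state cost models can make the trade-off concrete. The hourly host rate below is a made-up placeholder, not a price.

HOST_RATE = 8.0          # assumed $/host/hour, illustrative only
HOURS_PER_MONTH = 730

def steady_state_cost(active_hosts: int) -> float:
    return active_hosts * HOST_RATE * HOURS_PER_MONTH

print(f"On-demand   (0 hosts): ${steady_state_cost(0):,.0f}/month")
print(f"Pilot light (3 hosts): ${steady_state_cost(3):,.0f}/month")
# On-demand pays nothing for SDDC hosts until failover; pilot light pays for
# a small cluster continuously in exchange for lower RTO on critical apps.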
When a disaster strikes on the production site, the DR plan failover starts, whether the plan is
for an on-demand or a pilot light deployment.
On-demand deployment:
1. The SaaS orchestrator initiates the SDDC cluster build, and the recovery backup is selected.
2. The DR VMs start on the live mount (NFS).
3. The DR plan completes by migrating the VMs into the DR SDDC cluster using vSphere vMotion.
4. Additional clusters are created, if necessary, for capacity expansion or performance improvement.
5. The disaster is mitigated, and only the changes (delta-based) are failed back to the production site.
Pilot light deployment:
1. The SaaS orchestrator selects the recovery backup.
2. The DR VMs start on the live mount (NFS).
3. The DR plan completes by migrating the VMs into the DR SDDC cluster using vSphere vMotion.
4. Additional clusters or hosts are created, if necessary, for capacity expansion or performance improvement.
5. The disaster is mitigated, and only the changes are failed back to the production site.
Cost Savings
On-Demand Deployment
Because it uses an on-demand strategy, VMware Cloud Disaster Recovery reduces the
operating costs of DR:
• Backups are sent to the SCFS and, after some processing, are stored in a cost-
effective compressed form.
• The bulk of the DR infrastructure is programmatically deployed following a DR event.
• The costs of the cloud SDDC are incurred only when running a DR plan.
• Administrators can add clusters and hosts only when needed.
A pilot light deployment assists organizations in reducing the total cost of cloud DR while keeping a small SDDC cluster ready to recover critical applications with lower RTO requirements.
High-Level Architecture
In a VMware Cloud Disaster Recovery solution for VMware on AWS, components work together
to deliver disaster recovery.
When you set up VMware Cloud Disaster Recovery, you follow a workflow, from activating
VMware Cloud Disaster Recovery, accessing the SaaS orchestrator, deploying the DRaaS
Connector, to configuring the SDDC and creating a DR plan.
DRaaS Connector
When you deploy the DRaaS Connector as an OVA in vCenter Server, DR protection is
enabled for customer-managed or VMware Cloud on AWS SDDCs.
The DRaaS Connector can be redeployed at any time with no loss of backup data.
Software upgrades for the connector are automatic. Each connector provides additional
replication bandwidth for the site.
Orchestrator
VMware Cloud Disaster Recovery is delivered as SaaS and provides SDDC orchestration
and management using the DRaaS console.
SCFS
The SCFS is integrated with and managed by the VMware Cloud control panel. Other features include:
• Uses AWS S3 storage for deep retention
• Uses the Datrium log-structured file system
• Provides VMware Cloud with DR and ransomware recovery with deep cloud data
protection
The DRaaS Connector performs the following tasks in protection and recovery sites.
Backup
1. The DRaaS Connector uses VMware vSphere® Storage APIs - Data Protection to create
snapshots of the virtual machine disk (VMDK) file.
2. The DRaaS Connector uses changed block tracking (CBT) to query only changed blocks.
3. Snapshots are compressed and encrypted before being sent to the cloud backup
repository on AWS S3.
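The three backup steps can be sketched as a simple pipeline: read only the changed blocks, compress and encrypt them, then upload to S3. The changed-block reader below is a hypothetical stand-in for what the DRaaS Connector does through the vSphere APIs; the upload uses boto3 under the assumption of existing AWS credentials and a bucket.

import zlib
import boto3
from cryptography.fernet import Fernet

s3 = boto3.client("s3")
cipher = Fernet(Fernet.generate_key())

def changed_blocks(vmdk_path: str, change_id: str):
    # Hypothetical stand-in for the connector's CBT query, which asks
    # vSphere for only the blocks changed since the last snapshot.
    yield b"example-changed-block"

def backup_snapshot(vmdk_path: str, change_id: str, bucket: str, key: str):
    for i, block in enumerate(changed_blocks(vmdk_path, change_id)):
        payload = cipher.encrypt(zlib.compress(block))  # compress, then encrypt
        s3.put_object(Bucket=bucket, Key=f"{key}/block-{i}", Body=payload)

# backup_snapshot("payroll-01.vmdk", "change-id-001", "dr-backups", "payroll-01")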
Replication
Failover
You run a failover operation after a disaster or cybercrime event when the source site is
no longer available. The failover operation is orchestrated on the destination site, based
on previously replicated snapshots.
When failing over to a VMware Cloud on AWS SDDC, VMs that belong to the protection
groups defined in your DR plan are recovered to the vCenter Server instance in your
recovery SDDC.
When a plan finishes executing, you must explicitly commit the failover or roll it back.
You can fail over your VMs using fully on-demand or pilot light modes:
Failback from an SDDC brings back only data that has changed since the failover.
A failback from the recovery SDDC consists of the following general stages:
1. Undo stage: VMs on the failback target are restored to the state that matches the
snapshots used at recovery time:
2. Catchup stage: VM changes incurred while running in the SDDC following failover are
applied to the VMs on the failover target:
• Differences between the VM state at the time of recovery and failback are
applied to the SCFS snapshot.
• VM backups for the on-premises system are retrieved from the SCFS using a
general forever incremental protocol.
• VMs are recovered to a protected vSphere site.
When VMs are recovered, they are automatically deleted from the recovery SDDC.
DEMO - VMware Cloud Disaster Recovery provides lower RTO from the SCFS
Video Transcript
In this demo, we will explore one of the architectural advantages of using VMware Cloud
Disaster Recovery to minimize recovery times when failing over to your VMware Cloud on
AWS DR site.
VMware Cloud Disaster Recovery provides fast and reliable recovery using the unique
capabilities of the scale-out cloud file system to store the recovery points for your
production VMware workloads.
Each recovery point created by VMware Cloud Disaster Recovery is represented in the
scale-out cloud file system inventory as a complete set of VMs at the point-in-time
specified by the protection group scheduling policies. The incremental changes received from the protected site DRaaS Connector are synthesized into a full image of the VMs and stored as an immutable recovery point.
When it comes time to use these recovery points for a disaster event, there is no need to
wait for lengthy restores, image reconstruction, or data migration. They are ready to use
directly from the scale-out cloud file system. As part of the VMware Cloud Disaster
Recovery setup, the scale-out cloud file system is presented to the recovery SDDC as an
NFS-mounted datastore.
Outside of DR testing or actual disaster events, this special datastore appears empty.
When a DR plan is run, the selected recovery point is chosen, with the latest copy being
the default. Then, an instant clone of that recovery point is made available in the
mounted datastore to use for recovery. The original recovery point is left unchanged.
Depending on the capacity, performance and availability SLAs needed for DR operations,
the running VMs for that DR plan can be left on the scale-out cloud file system datastore.
The VMs in that datastore view can now be configured by VMware Cloud Disaster
Recovery into the SDDC inventory and quickly powered on. Or, they can be set to migrate
into the SDDC vSAN WorkloadDatastore with no downtime. Note that the VM migration is
performed in the background under VMware Cloud Disaster Recovery orchestration
control. No further user interaction is needed. The VMs that were failed over to the SDDC
as part of this DR plan are already up and running and ready for service.
Leveraging the recovery point inventory in the scale-out cloud file system and the
mounted datastore architecture of the SDDC allows VMware Cloud Disaster Recovery to
quickly bring the desired version of the VMs into inventory and lower recovery times for
your DR solution.
Protection Groups
A key configuration component of VMware Cloud Disaster Recovery is the protection group.
Protection groups contain one or more VMs in your vSphere environment. They are added to a
DR plan so that you can orchestrate recovery to a new site using selected snapshots of your
VMs.
A protected site includes the vCenter Server instance which contains the VMs you want to
protect. A vCenter Server instance can only belong to one site, and protection groups and DR
plans can only be associated with one vCenter Server instance.
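Those association rules lend themselves to a small validation check. A minimal sketch, assuming sites and their registered vCenter instances are tracked in a dictionary:

def validate_protection_group(pg_name: str, pg_vcenter: str,
                              site_vcenters: dict[str, str]) -> None:
    # A vCenter instance belongs to exactly one site, and a protection
    # group is associated with exactly one vCenter instance.
    owners = [site for site, vc in site_vcenters.items() if vc == pg_vcenter]
    if len(owners) != 1:
        raise ValueError(f"vCenter {pg_vcenter} must belong to exactly one site")
    print(f"Protection group {pg_name!r} is valid for site {owners[0]!r}")

validate_protection_group(
    "payroll-pg", "vcenter-onprem.example.com",
    {"Data Center Site 1": "vcenter-onprem.example.com"},
)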
The deployment process for VMware Cloud Disaster Recovery can be outlined as follows:
Through a series of demonstration videos, you can explore how these steps are performed.
Video Transcript
In this step of setting up VMware Cloud Disaster Recovery, we're going to set up a
protected site. This is step number two in our quick setup. We start off in the UI
dashboard and we have our setup steps here. So let's go ahead and click on setup here.
We're going to take a look around. We can build an on-prem protected site. We can also
protect VMware Cloud on AWS SDDCs in another region. But for this setup, we're going to
use an on-prem vCenter. We'll give it a name here, Data Center Site 1 as an example, and
click Set up. This is going to create the logical entity for our protected site, and we see
that in our menu here, and it takes us to the Protected sites page.
We have a few steps to do. We're going to deploy the connector, register the vCenter, and
we're going to create a test protection group. So let's go about these tasks. We'll click
Deploy, and let's go ahead and copy the URL to our clipboard, and we're going to use that
to basically paste into our vCenter.
I've created a folder here for DRaaS connectors. What I want to do is deploy a new OVF
appliance here. I'll go ahead and paste in the connector URL from the other screen. We'll
come in here and give it a name. I'm going to call this DRC-1, DRaaS connector 1. We'll
choose the vSAN cluster. We'll go ahead and ignore the certificate for now. Put that on
our vsanDatastore. Now here, we're going to choose a network that can talk to the other components.
We're going to go ahead and click Finish. The OVA will deploy. This will take a few
minutes, depending on your system. And once that's done, then what we'll do is we'll go
ahead in here and power this on. So let's go ahead and power on the virtual machine. It'll
take a few minutes to initialize and come up to Ready.
We're going to log in and configure it in just a moment. So let's go ahead and open a
screen here on the console. I'm going to log in as the administrator with the initial default
password, it was on that other screen we were looking at. We'll go ahead and do a static
IP assignment. So let's choose option A. I've got an IP address already configured, so let's
go ahead and enter that in, a subnet mask and a gateway. I've got a couple of DNS servers
that I want to use. It's essentially testing the network.
The FQDN of the orchestrator, again, was back on that deployment screen that we had.
We could copy it from there. I happened to know what it is, so we're going to type that in
here. It's essentially going to validate. Back on that screen was a passcode. Let's go take a
quick look at what I was talking about here.
In this deploy window, we have the credentials, we have the FQDN. We also have the
passcode here that we can use. So we'll go ahead and take that, and paste that in here.
And that changes every five minutes. We're going to give it a label, this matches the VM
name, DRC-1. It's basically connecting, setting things up. We're good. We can go ahead
and exit out of this window, and go back to our vCenter and finish up what we're doing
here. We've got the connector set here. If you remember the password, the default
password was set. If we wanted to get to the other one, it's stored in here. We're not
going to worry about that right now.
Let's go ahead and register the vCenter. So for this, I'm going to go back to my vCenter.
I'm going to pick the root here. I'm going to go ahead and copy the IP address. Come back
into VMware Cloud Disaster Recovery UI. Let's go ahead and register the vCenter. Paste in
the default user here. This is an administrative-level user, and I have the password for
that. And we'll go ahead and register this.
This is going to create the DRaaS connector and the vCenter relationship for this protected
site. If I need to get to the vCenter, this is how I would also remove it. If I wanted to
register a different one, I can manage my connectors, I can manage my vCenter
registrations all from this part here.
So let's go ahead and create a protection group and we'll call it TEST. We're going to build
it on Data Center Site 1, the one we just created. We're going to associate it with the
vCenter that we just registered. And I'm going to use just a simple naming pattern here,
TEST*, and we'll see what virtual machines we have that have those. It looks like there's
10 virtual machines in my environment that matched that pattern.
We can go now and here's a simple schedule. We're going to get into protection groups
later. But for this one, the default is daily at midnight, with a retention of one week. You
could add more schedules here or adjust this. That's going to be a topic for another task.
So let's just finish this.
I want to test out to make sure everything's working. We now have our DRaaS connector set up, so let's run a manual snapshot to validate the protection group.
We can go back to our test here. We'll go ahead and select this and go ahead and clean
this up. We don't need to keep this around. I just wanted to make sure that the protection
group within the protected site worked. And with that, we are basically done setting up a
protective site.
We deployed the OVA DRaaS connector. We registered the associated vCenter that had
the virtual machines we wanted to protect. We created a sample TEST protection group
that's going to run every night at midnight, validated with a manual snapshot that
everything worked, and cleaned up.
And we're finished with this task.
This video describes how you can test DR plans without disrupting running workloads.
In this demonstration of VMware Cloud Disaster Recovery, we will show how we can
easily test our DR plans to VMware Cloud on AWS with no disruption to running
workloads in the production VMware site. This testing will provide higher confidence in
the plans and construction we have set up for actual DR needs.
We started out with two vCenters. The one on the left is our production on-prem site
running some example virtual machine workloads. And the one on the right is a newly-
provisioned, empty SDDC in VMC on AWS. This cloud-based SDDC could be set up just in
time for testing or impending disaster recovery needs, or always running in a minimal
pilot light configuration for continuous access, and then scale when needed for DR.
To test the DR plan, we will connect to the SaaS orchestration component of VMware
Cloud Disaster Recovery. From the dashboard, we will navigate to the DR plans and select
the sample application plan from our list of recovery plans. Continuous checks for plan
compliance help ensure reliability. Note: This plan is ready and could be used for an actual
failover if desired.
In this case, we click Test plan to perform a quick non-disruptive test of the DR plan. The
latest recovery point is automatically selected when running a plan. It is possible to select
a different snapshot from the deep range of recovery points for use cases such as
ransomware. In this case, we will use the latest recovery point.
During plan testing, we also have the option to leave the VM workloads on the live-mount
NFS datastore that's holding the snapshots. This will save some testing time and allow us
to free up the SDDC sooner, if desired. Note that during a DR plan test, changes to VMs
while in the failover site location are not captured. So there is no need to fully migrate
them into the target SDDC. This option is not available during an actual failover.
Switching between views, we can see that they are quickly up and running on the
software-defined data center powered by vSphere, NSX, and vSAN. There is no need to
convert the virtual machine format or perform lengthy restore operations. This particular
DR plan has an optional step, number five, that prompts for user input, providing even
further control of the plan execution and testing scenarios. Once all of the failover actions
have been completed, we are presented with the option to clean up the test.
Let's first take a quick look at the two vCenters again. The production site vCenter on the
left is still operating as when we started. And the SDDC on the right is now running the
workload specified by the plan. Note: The plan took advantage of test network isolation
settings to make sure that the test VMs on the right do not interfere with production VMs
on the left. To finish the testing, we return to the SaaS Orchestrator UI and click through
the cleanup confirmations.
Switching back to watching the two vCenters, we see the cloud-based SDDC getting
cleared back to an empty status, ready for other testing, or even decommissioning if
desired. Once the test failover has been cleaned up, we acknowledge the plan testing and
we're done. VMware Cloud Disaster Recovery automatically generates detailed reports
whenever a plan is run. These reports can be exported for compliance, audits and
regulatory requirements.
In this demo, we saw how easy it is to non-disruptively test the failover of a sample
workload using VMware Cloud Disaster Recovery. Testing DR plans regularly increases the
confidence that they will work as planned when needed. Continuous health checks help
ensure a test or recovery can be performed at any time. Workflows are orchestrated to
bring up workloads in the desired order. Reports are generated automatically for
compliance.
SaaS orchestrator
SCFS
SDDC
DRaaS Connector
The video describes how to recover VM workloads using the failover operations.
Video Transcript
In this demonstration of VMware Cloud Disaster Recovery, we will show how we can
quickly and reliably recover VM workloads to VMware Cloud on AWS.
We select the sample application plan from our list of recovery plans. Continuous checks
for plan compliance help ensure reliability. We leverage cloud economics by provisioning
and scaling up the target DR site only when needed.
We click Next a few times and enter FAILOVER, then click Finish to start the recovery. VMware
Cloud Disaster Recovery orchestrates the recovery of multiple virtual machines based on
plan specifications. In this case, we recover a database server, then a web server, a file
server, and a virtual desktop in VMware Cloud on AWS, only consuming cloud
compute resources in the event of an actual failover.
The VMs are quickly up and running from the live-mount datastore on the just-in-time
provisioned SDDC.
To finish the plan, click Commit and enter COMMIT FAILOVER to continue running these
workloads in VMware Cloud on AWS, until the disaster is resolved and the workloads can
be migrated back on-prem and cloud resource consumption reduced.
In this example, we saw how easy it is to recover VM workloads using VMware Cloud
Disaster Recovery. Continuous health checks help increase reliability that a test or
recovery can be performed at any time. Cloud costs are reduced, as workflows are
orchestrated to bring up workloads only when needed for DR. Reports are generated
automatically for compliance.
Protected site
DR site
DR plan
DRaaS Connector
The video describes how to fail back workload VMs from a DR site to the original on-premises
site.
In this demonstration of VMware Cloud DR, we will show the simple process of failing a
workload set of VMs back from their DR site running in a VMC SDDC to the original on-
prem vCenter site.
We start off with our split screen view with the on-prem vCenter data center on the left
that experienced the DR event, and the VMC cloud-based SDDC on the right currently
running the prescribed VM workload after executing that DR failover operation. But in this
example, we do not care much about the state of the on-prem vCenter VM workload, the
highlighted VMs on the left, as the VMware Cloud DR failback plan execution will address
their state as part of the orchestration.
To begin the failback, let's navigate to the VMware Cloud DR orchestrator dashboard and
then to DR plans view. Note that this interface is running in the cloud independent of
either site being managed. Here in the DR plans inventory, we already have our sample
application failback plan constructed and ready to execute.
Note that as a failback plan, it is not testable in the same manner as failover plans. We see
the compliance checks are all passed for this DR plan, so it is okay to proceed with the
operation. A quick preview of the DR plan steps looks a bit different than the original
failover steps defined to transition from on-prem to cloud operations. We will cover these
in a bit more detail shortly.
We enter PLANNED FAILOVER and then hit the Start failover button to begin executing the
plan. Let's review the steps in more detail as they execute. The first action is to prepare
the original site for an optimal recovery. This starts with powering off the original VMs if
they were left running. Then, the original site is recovered through snapshot and CBT
(Changed Block Tracking) management back to the same point in time that was used for the
original DR failover.
Once the original site is ready for failback, the plan orchestration then powers off the VMs
in the cloud DR site SDDC. The orchestration process then takes a snapshot of the virtual
machines at the DR site to capture the changes that have accrued while operating in the
DR mode in VMC.
The next few steps then determine the required changes that need to be transferred back
to the original on-prem site for failback. The on-prem VMs will be customized, that is,
returned to their original vCenter configuration based on the DR plan details. Then, each
step in the DR plan is executed, much like the steps were run in the original failover plan.
And the changes are applied from the latest snapshot taken in the VMC SDDC. This DR plan
has an optional user input step, allowing us to confirm the entire failback proceeded as
desired before concluding the DR plan execution. Once the user input is acknowledged,
the plan will perform its final step of deleting the powered off VMs from the VMC SDDC.
We navigate to the SDDC for a quick check of the site that shows the VMs being removed.
Note that once the SDDC is cleared, it could even be deleted until it is needed again for testing or
failover. We then complete the failback by committing the plan execution, and we are
done.
In this demo, we have successfully failed a workload back from the VMC SDDC to the
original on-prem vCenter environment with VMware Cloud DR.
Compliance Management
VMware Cloud Disaster Recovery offers several health check features, including continuous
compliance checks.
Continuous compliance checks verify the integrity of DR plans so that plans are ready to run.
For example, compliance checks ensure that the specified protection groups are active on the
protected site and are being replicated successfully to the target site.
Video Transcript
In this demonstration, we will explore the various built-in health checks, reports and
status monitoring capabilities of VMware Cloud Disaster Recovery that enable a greater
degree of visibility into the overall health and readiness of the solution. We will explore
how to review the status of VMware Cloud Disaster Recovery components and
operations, monitor the VM protection policies, overall readiness of the DR plans, and
monitor events and alarms in the system. We'll look at enabling email alerts for various
conditions and how to produce runbooks, health check reports, and track configuration
changes.
Let's start at the top level in the SaaS orchestrator dashboard. This management interface
runs in the cloud, independent of the configured protected sites and recovery sites. This
gives us a system-wide view of current components, sites, and operational status. The
global summary provides a synopsis of the key components of VMware Cloud Disaster
Recovery. This includes overall system health; cloud backup storage consumption (this is
the scale-out cloud filesystem); protected sites and recovery SDDCs enabled for DR
configurations; VM protection coverage based on the protection groups defined for those
protected sites; and DR plans in the inventory. Green check marks indicate good
operational health. If there were a problem in an area, we could easily navigate into the
associated detailed view.
On the right-hand side, we see a list of currently running tasks as well as recently finished
tasks, and any recent alarms. These lists track the recent activity in the VMware Cloud
Disaster Recovery system. From the dashboard, we can navigate to other functional areas
for operations, administration, and as we'll see here, health checks, reports and status.
Let's look closer at protection groups. From the protection groups list view, we can see
the current status of each protection policy that has been defined and which protected
site it is associated with. For the example here, they are all of type on-prem site and
replicating their change datasets to the scale-out cloud filesystem called Cloud Backup. If
there were any issues, we could navigate into the affected protection group and review
the history or details of any of the individual snapshots. A healthy OK status in the Health
column indicates that the policies are running on schedule as defined.
Let's now navigate over to the DR plans view. This is where most of the details we are
interested in will be found. In this view, we start with an overall DR plan list, which
displays the current state of each of the defined DR plans. The plan status shows which
plans are enabled and ready to fail over or test. The plans can be in a number of different
states, depending on the current conditions. For details on the other states, please
consult the product documentation.
The other key indicator in this view is the Compliance status in the right-hand column of
the main window. DR health checks run against active plans every 30 minutes and check
the operational readiness of the plans in several key areas that we will explore in just a
bit. One other operational characteristic shown in this view is the protected site, usually
the on-prem data center location, or an SDDC from another region, and recovery site,
usually the target SDDC. If the SDDC does not exist, this field will be empty and the full DR
health check will be incomplete.
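For teams that want to watch compliance outside the UI, a polling sketch along these lines is possible. The endpoint, plan identifier, and response fields here are hypothetical stand-ins, not the product's documented API.

    # Illustrative sketch: poll a DR plan's compliance status, mirroring the
    # built-in 30-minute health-check cadence. API details are hypothetical.
    import time
    import requests

    VCDR_URL = "https://vcdr.example.com"  # hypothetical orchestrator address
    TOKEN = "..."

    def plan_compliance(plan_id: str) -> str:
        resp = requests.get(
            f"{VCDR_URL}/api/dr-plans/{plan_id}",
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json().get("compliance_status", "UNKNOWN")

    while True:
        status = plan_compliance("apps-plan")
        print("APPS plan compliance:", status)
        if status != "COMPLIANT":
            print("Investigate the failing checks in the orchestrator UI.")
        time.sleep(30 * 60)  # check every 30 minutes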
Next, we will explore an individual DR plan. Let's pick the APPS plan. The DR health check
status is really obvious in this view. If we click the Show button near the green check
mark, we'll get a more detailed view of the health checks being performed. The checks
cover four main areas: the protected site, the recovery site, the orchestration steps, and
VMware Cloud Disaster Recovery component integration status. Each of these checks
actually comprises several lower-level checks.
It's possible to download the health check report as a PDF and share with others or file as
desired. The report has a timestamp and summary information as well, for easy tracking.
In this view, on the plan details menu bar is a Reports tab. There are two types of plan
reports available here: run and configuration history.
These run reports are generated any time this plan is executed for either testing or
failover operations. This provides a useful audit trail history of the plan. Similar to the
health check report, this automatically generated report can also be downloaded as PDF
and shared or filed as desired. Run reports contain critical timing and task tracking
information to provide insight into the plan details as well as the overall plan execution
results.
This is essentially your run book: documentation of plan testing or actual failover operations.
The last area to look at in the DR plans view is the plan itself. Let's edit the plan and
review the Alerts settings. It's here that you can build email notification triggers for the
plans in your environment. Simply configure one or more recipients in the orchestrator
setup to receive email alerts. Then for each plan, choose what triggers you want included.
These can be for regular health check operations, or for plan execution changes. When
something changes in the plans, your team will be alerted automatically.
One last health and status area within the orchestrator UI worth exploring is the
monitoring view. In this view, we can review all events and alarms presented by the
system. With some basic filtering, grouping and level selection, it is possible to narrow
down the view and focus on specific operations or areas where attention might be
needed.
Let's review what we have covered in this demonstration. From the SaaS orchestrator UI,
we can easily see and review the status of VMware Cloud Disaster Recovery components
and operations, track the on-prem application VM protection policies, the protection
groups, monitor the overall readiness status of the DR plans defined, produce detailed
health check reports, execution runbooks and plan configuration reports that are ready to
download and share, set up email alert mechanisms for your administrative team, and
review events and alarms for all parts of the VMware Cloud Disaster Recovery setup.
This level of visibility, detail, and tracking makes VMware Cloud Disaster Recovery easier to
manage and provides a higher degree of confidence that when disaster arises, you will be
prepared for recovery to the cloud.
• VMware Cloud on AWS offers two DRaaS solutions: Site Recovery and VMware Cloud
Disaster Recovery. You select a DRaaS solution that best aligns with your requirements.
• Site Recovery is a separately purchased add-on for VMware Cloud on AWS. It provides
DRaaS for data center failures by replicating VMs between an on-premises data center
and a VMware Cloud on AWS data center.
• You can deploy multiple instances of Site Recovery in a VMware Cloud on AWS SDDC.
• All VMware Cloud Disaster Recovery components, including cloud storage, are deployed
and managed by VMware in an AWS account dedicated to each tenant.
Additional Resources
• For information about DRaaS for VMware Cloud on AWS, access the resources on the
VMware Tech Zone website at https://vmc.techzone.vmware.com/vmc-aws-draas.
• For information about building and maintaining a DR solution using VMware Cloud Disaster
Recovery, watch the demonstration videos at https://www.youtube.com/playlist?
list=PLNOz1mVhDkG6ZsnZPI_bol5o1ii1onPTv.
Learner Objectives
After completing this lesson, you should be able to:
For information about account management for other hyperscaler partners, you can access the following
resources:
Each organization has one or more organization owners, who have access to all the resources and services of
the organization and can invite additional users to the account.
By default, these additional users are organization members, who can use and manage cloud services
belonging to the organization but cannot invite new users.
A VMware Customer Connect account is required to authenticate an Administrator during the initial Cloud
Services Portal on-boarding process.
If you do not have a VMware Customer Connect account, you are prompted to create one during
Organization Owner Account creation.
After an organization owner invites you to an organization in VMware Cloud, you can accept the
invitation to create your account and gain access to the service.
Administrator
This role has full cloud administrator rights to all service features in VMware Cloud on AWS.
NSX Manager UI
With the organization role of NSX Cloud Admin or NSX Cloud Auditor, you can use either the VMware NSX
Manager web interface or the VMware Cloud console Networking & Security tab to manage your SDDC
networks.
The NSX Manager interface is accessible at a public IP address reachable by any browser that can connect to
the Internet. You click OPEN NSX MANAGER on the SDDC Summary tab to open the public NSX Manager
interface.
• You configure Hybrid Linked Mode from your SDDC by adding your on-premises LDAP domain as an
identity source for the SDDC vCenter Server.
• You can configure Hybrid Linked Mode from your SDDC if your on-premises LDAP service is provided by
a native Active Directory (Integrated Windows Authentication) domain or an OpenLDAP directory
service.
Adding an identity source is optional when configuring Hybrid Linked Mode from the Cloud Gateway
Appliance, but adding an identity source allows you to configure users or groups with a lesser level of access
than the Cloud Administrator.
For more information about using OpenLDAP as the identity source, access VMware knowledge base article
2064977.
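For orientation, the parameter set for an Active Directory over LDAP identity source typically looks like the following sketch. All values are placeholders for a hypothetical corp.example.com domain, and the exact field labels in the vSphere Client may differ.

    # Placeholder values for a hypothetical corp.example.com domain; the
    # vSphere Client form field names may differ from these keys.
    identity_source = {
        "domain_name": "corp.example.com",
        "domain_alias": "CORP",
        "base_dn_users": "CN=Users,DC=corp,DC=example,DC=com",
        "base_dn_groups": "CN=Users,DC=corp,DC=example,DC=com",
        "primary_server_url": "ldap://dc01.corp.example.com:389",
        "username": "CORP\\svc-vcenter-ldap",  # read-only bind account
    }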
Enterprise Federation
VMware Cloud services users with a federated domain use their corporate credentials to log in to the
VMware Cloud services console across organizations.
Setting up enterprise federation for your corporate domain is a self-service process that involves multiple
steps, users, and roles.
1. As an organization owner, you start the self-service federation workflow on behalf of your organization
and invite an Enterprise Administrator to complete the setup.
2. The Enterprise Administrator must determine the type of federation setup that is most suitable for
your enterprise.
If your corporate domain is not federated, your access to VMware Cloud Services is authenticated
through your VMware ID account.
If you are new to VMware Cloud services, visit my.vmware.com to create a VMware ID.
To start the self-service federation setup, you must first receive an email invitation with a link
to the special federation organization.
The organization owner who sent you the invitation has identified you as an Enterprise
Administrator and granted you the permissions to initiate and configure the federation setup
for your enterprise domain.
For more information about enterprise federation, access the VMware Cloud services product
documentation.
When enterprise federation for your enterprise domain is set up to use your third-party identity provider,
users accessing VMware Cloud services from the federated domain are redirected to the login screen of the
identity provider for your enterprise.
Users authenticate directly with your identity provider through SAML JIT dynamic provisioning.
An on-premises instance of VMware Workspace ONE Access connector syncs users and groups from your
Active Directory to a dedicated instance of a Workspace ONE Access tenant.
Only synced groups and users can log in to VMware Cloud services with their corporate credentials.
User authentication can be set up to use either a SAML 2.0 based identity provider or the Workspace ONE
Access connector authentication methods.
Learner Objectives
After completing this lesson, you should be able to:
This lesson focuses on maintenance and support as it relates to VMware Cloud on AWS.
For more information about maintenance and support for other hyperscaler partners, you can
access the following resources:
VMware Cloud on AWS is sold and managed as a service from VMware. In this way, VMware
has ownership of many management and operational responsibilities.
Customer Responsibilities
Customers remain responsible for the following components:
• Virtual Machines
• VMware Tools
• Guest operating systems
• Third-party products
• Applications
Administrators patch VMware Tools, but VMware provides an up-to-date repository for the
latest versions.
Amazon Responsibilities
Amazon is mainly responsible for the hardware used in the cloud SDDC.
VMware Responsibilities
VMware performs tasks for managing, maintaining, and monitoring the SDDC, which include:
VMware provides problem, event, and incident management services, as well as capacity
management services, and SDDC upgrades for the VMware Cloud on AWS platform.
Incident Management
VMware services include incident detection, severity classification, recording, escalation, and
return to service for the VMware Cloud on AWS platform.
For problem, event, and incident management for your workload virtual machines that are
deployed in the cloud SDDC, you can follow the same processes for existing virtual machines.
Capacity Management
SDDC Upgrades
VMware regularly performs updates on SDDCs. These updates ensure continuous delivery of
new features and bug fixes, and maintain consistent software versions across all SDDCs.
Updates to the SDDC software are mandatory and must be done in a timely manner. VMware
works to provide proper notification in advance.
When an SDDC update is upcoming, VMware sends a notification email to inform you of the
upcoming update. Typically, the email is sent 7 days before a regular update and 1 to 2 days
before an emergency update.
Delays to upgrades can result in your SDDC running an unsupported software version.
Phase 1
These updates are made to vCenter Server and VMware NSX® Edge. A backup of the
management appliances is taken during this phase.
You cannot access VMware NSX® Manager and vCenter Server during this phase. Your
workloads and other resources function as usual, subject to a few constraints.
Phase 2
These updates are for ESXi hosts and host networking software
in the SDDC. An additional host is temporarily added to your SDDC to provide enough
capacity for the update. You are not billed for these host additions.
VMware vSphere vMotion and VMware vSphere Distributed Resource Scheduler activities
facilitate the update. During this time, your workloads and other resources function as
usual, subject to a few constraints.
Phase 3
These updates are for VMware NSX appliances. A backup of the management appliances
is taken during this phase. You do not have access to NSX Manager and vCenter Server
during this phase.
Your workloads and other resources function as usual, subject to a few constraints.
You receive notifications by email when each phase of the update process starts, completes, is
rescheduled, or is canceled. You do not need to respond to these notifications.
To ensure receipt of these notifications, you add the following address to your email safe sender
list: [email protected].
• Sends notification 7 days before a regular update and 1 to 2 days before an emergency
update.
• Performs incident detection, severity classification, recording, escalation, and return to
service.
To ensure proper, continuous operation of your workloads, review the latest news and status
of your VMware Cloud on AWS environment.
VMware Cloud service offerings release new versions and updates at an increased pace
compared to other VMware products.
The VMware Cloud Services Status page also posts scheduled maintenance windows and a
history of past incidents.
VMware periodically sends notifications to keep you informed about upcoming maintenance
and other events that impact the VMware Cloud on AWS service.
The notification channels that are available include email, VMware Cloud console, and the
Activity Log UI.
Email Notifications
Scheduled Maintenance
Scheduled maintenance windows are communicated in advance, and follow-up emails are
sent before, during, and on completion of maintenance.
Administrators receive emails with specific details about certain disruptive patches.
Availability Issue
Activity Log
The Activity Log pane in the VMware Cloud console contains a history of significant actions in
your organization, such as SDDC deployments and removals, as well as notifications sent by
VMware for events such as SDDC upgrades and maintenance.
Where can you find the current status of the VMware HCX and VMware Tanzu cloud services?
(Select one option)
Scheduled Maintenance
In the VMware Cloud console, you can view scheduled maintenance windows on the
Maintenance tab of the SDDC.
Before contacting VMware Support, you can use a variety of resources in the VMware Cloud
console to find information that might help resolve your issues.
You can access self-support resources through the VMware Cloud console.
From the Support panel, you can search VMware content to find answers to questions,
chat with VMware Support, and create a support request.
Connectivity Validator
The Connectivity Validator provides network connectivity tests to verify that network
access is available for Hybrid Linked Mode.
The Connectivity Validator can also be used to check VMware Site Recovery
connectivity. You can check that all required network connectivity from your VMware
Cloud on AWS SDDC to the remote site is in place.
You can use the tests both during the initial setup of Site Recovery, and to troubleshoot
connectivity issues during day-to-day management.
The status of each test is displayed as it runs. When a test has finished, you can expand
the test to see details of the test results.
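The kind of reachability testing the Connectivity Validator performs can be pictured with a small sketch like the one below. The host names and ports are illustrative examples only, not the validator's actual test list.

    # Minimal sketch of Connectivity Validator-style checks: TCP connections
    # from the SDDC to on-premises endpoints. Hosts and ports are examples.
    import socket

    TESTS = [
        ("vcenter.onprem.example.com", 443),  # on-prem vCenter Server
        ("dc01.onprem.example.com", 389),     # LDAP identity source
        ("dns01.onprem.example.com", 53),     # on-prem DNS
    ]

    for host, port in TESTS:
        try:
            with socket.create_connection((host, port), timeout=5):
                print(f"PASS {host}:{port}")
        except OSError as exc:
            print(f"FAIL {host}:{port} ({exc})")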
Helpful Links
You can access resources for self-support outside the VMware Cloud on AWS SDDC
console:
VMware technical support provides several features at no additional cost when you use
VMware Cloud on AWS:
You can contact VMware technical support for VMware Cloud on AWS directly through the
VMware Cloud console.
VMware engages AWS on your behalf for VMware Cloud on AWS support issues as necessary.
In the VMware Cloud console, click the Support tab to view the information that VMware
Support needs from you.
What is the first thing you should do? (Select one option)
Learner Objectives
After completing this lesson, you should be able to:
• Identify best practices for avoiding common issues with cloud SDDC operations
• Troubleshoot common problems that can occur in cloud SDDC operations
This lesson focuses on preventive actions and common issues in the following areas of SDDC
operations:
Unless otherwise specified, the troubleshooting best practices described in this lesson apply
generally for all clouds.
Management Subnet
When you deploy a cloud SDDC, you must consider IP address management just as with a
traditional data center.
traditional data center.
During the SDDC deployment process, you specify an IP range for the management network of
the SDDC. The choice of address space is important because it cannot be changed without
making the SDDC inoperable and having to rebuild it.
Suppose you are deploying a VMware Cloud on AWS SDDC and must assign a management
subnet. Which configuration do you think will cause problems? (Select one option)
You select a range of IP addresses that do not overlap with the AWS subnet that you
connect to.
To deploy a single-host SDDC, you specify a management network address that overlaps
with the IP address range 192.168.1.0/24.
You select a /23 CIDR block because the SDDC will not increase in capacity.
When setting up the address space for the management subnet for a VMware Cloud on AWS
SDDC, use the following job aid.
VPN Connectivity
For example, when you set up a connection to a VMware Cloud on AWS SDDC, verify
that the following conditions are met:
• The IP addresses between the VMware Cloud on AWS SDDC and the on-premises SDDC do
not conflict.
• VMware Cloud on AWS SDDC can communicate with the on-premises DNS server, as
necessary.
Do you think that an L2 VPN connection problem might be caused by a configuration error?
Several possible configuration problems can cause an IPsec VPN tunnel to fail.
To help prevent issues with VPN connectivity, verify that the following elements are configured
correctly:
• Remote peer
• Pre-shared key
• Firewall rules
• IKE version or phase 1 cryptography
• Phase 2 cryptography
If you make changes to a VPN, disable and re-enable the tunnel to ensure that configuration
changes are applied.
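Because both tunnel endpoints must agree on these parameters, a quick side-by-side comparison often finds the fault. The following sketch illustrates the idea with made-up values; the field names are not tied to any particular vendor's configuration format.

    # Sketch: compare the settings that most often break an IPsec VPN tunnel.
    # The two dicts stand in for the SDDC side and the on-premises side.
    sddc_vpn = {
        "remote_peer": "203.0.113.10",
        "pre_shared_key": "s3cret",
        "ike_version": "IKEv2",
        "phase1_crypto": "AES-256/SHA2",
        "phase2_crypto": "AES-256/SHA2",
    }
    onprem_vpn = {
        "remote_peer": "198.51.100.20",
        "pre_shared_key": "s3cret",
        "ike_version": "IKEv1",  # mismatch: the tunnel will not establish
        "phase1_crypto": "AES-256/SHA2",
        "phase2_crypto": "AES-256/SHA2",
    }

    for key in sddc_vpn:
        if key == "remote_peer":
            continue  # each side points at the other's public IP
        if sddc_vpn[key] != onprem_vpn[key]:
            print(f"Mismatch on {key}: {sddc_vpn[key]} vs {onprem_vpn[key]}")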
You connect your VMware Cloud on AWS SDDC to your on-premises SDDC over a policy-based
VPN.
You can ping IP addresses in the on-premises network from VMs in the SDDC network, but
workload VMs cannot reach your on-premises DNS servers.
1. If you can configure your on-premises connection over a route-based VPN or Direct
Connect, you can skip the rest of these steps.
2. If you must use a policy-based VPN as your on-premises connection, configure the SDDC
side of the VPN tunnel to allow DNS requests over the VPN.
If your SDDC includes both a policy-based VPN and another connection such as a route-based
VPN, DX, or VTGW, connectivity over the policy-based VPN fails if any of those other
connections advertises the default route (0.0.0.0/0) to the SDDC.
AWS Direct Connect provides direct connectivity into an AWS region through private leased
lines.
When configuring a private virtual interface (VIF) for AWS Direct Connect, verify that
you perform the following steps:
This AWS account should be the AWS account ID of your VMware Cloud on AWS SDDC.
You can configure Hybrid Linked Mode in the VMware Cloud console or with the Cloud Gateway
Appliance. Careful configuration can help you to avoid failures later on.
In the VMware Cloud console, you can access the Connectivity Validator to verify that all
required network connectivity is in place for Hybrid Linked Mode.
After running the tool, you get a connection failure and must verify your configuration.
Which configuration can cause a Hybrid Linked Mode connection failure? (Select one option)
You can help to maintain the safety and security of your cloud SDDC
management infrastructure by configuring firewall rules and security roles
correctly.
By default, the management gateway blocks traffic to all management network destinations
from all sources. You add management gateway firewall rules to allow secure traffic from
trusted sources.
An inbound firewall rule is configured with a source address of Any. Do you think that this rule
will cause problems? (Select the best option)
How do you fix the firewall rule with Any as the source address? (Select the best action)
Modify the firewall rule so that the source address is a specific management group that
requires vCenter Server access.
Change the firewall rule to outbound and the source to a defined management inventory
group.
Modify one of the predefined management inventory groups in the SDDC infrastructure
and add it as the source.
Firewall Rules
For more information about creating secure firewall rules, access VMware knowledge base
article 84154.
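A rule scoped to a specific management group rather than Any might take a shape like the sketch below. The field names are loosely modeled on NSX policy objects and should be treated as illustrative, not as the exact API schema.

    # Shape of a safer management gateway rule: a specific trusted source
    # group instead of "Any". Field names are illustrative only.
    vcenter_inbound_rule = {
        "display_name": "Admins to vCenter",
        "sources": ["AdminWorkstations"],  # a defined management group
        "destinations": ["vCenter"],
        "services": ["HTTPS"],
        "action": "ALLOW",
    }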
The NSX Cloud Auditor role can view NSX service settings and events but cannot make any
changes to the service.
Assign the user the Administrator role so that the user has full cloud administrator rights.
Verify that the user should have permission to change the NSX configuration.
Delete the role from the user's privileges because the user does not have access rights.
Which practice can help you to avoid problems in your cloud SDDC? (Select one option)
VM Troubleshooting
• Snapshot failures
• Power-on operation failures
• Performance problems
• Connection problems
• VMware Tools installation failure
Performance issues can have different causes: CPU constraints, memory overcommitment,
storage latency, or network latency.
To help prevent performance issues, follow these general best practices for cloud SDDC hosts:
• Plan your deployment by allocating enough resources for all the virtual machines you run,
as well as those needed by the SDDC itself.
• Deactivate unused or unnecessary virtual hardware devices because they can impact
performance.
To view what is supported by each version of ESXi, access VMware knowledge base
article 2007240.
Two VMs in your SDDC must run without contention because they contain key business
applications. You adjust the resources of the VMs to guarantee a fixed amount of memory.
The VMs are powered off during the adjustments. When you try to power on the VMs again,
one VM fails to start.
Troubleshooting VMs
For help with troubleshooting workload problems, access vSphere Virtual Machine
Administration documentation.
For help with specific issues, search the VMware knowledge base archive.
Some VM configurations that you use in your on-premises data center are not supported in the
SDDC. Others are supported with limitations.
• Creating a VM that includes a virtual hardware device that requires a physical change to
the host.
• Creating an encrypted VM from an unencrypted VM or VM template.
• Deploying a VM from a template in a content library and customizing the guest OS after
the deployment task is complete.
To determine which configuration limitations apply to SDDCs, access the VMware Cloud on
AWS documentation.
You can migrate your workload VMs from your on-premises hosts to those in your cloud SDDC
and back again, as well as across hosts in your SDDC.
The method that you choose is based on your tolerance for workload VM downtime, the
number of VMs that you must move, and your on-premises networking configuration.
To help avoid problems with migrations of your workloads and applications, follow these
general guidelines:
• Configure Hybrid Linked Mode and verify that the vSphere Client is accessible.
• Establish and configure network connectivity.
• Confirm the compatibility of the VM for migration.
• Verify that the available bandwidth is sufficient for the desired migration type.
• Verify that port 8000 is open for vSphere vMotion migrations.
Verify whether the VM DRS or vSphere HA overrides is preventing the hybrid migration
with vSphere vMotion.
Use VMware HCX to retry the migration of the VM to the VMware Cloud on AWS SDDC.
Check whether you need to select one of the higher-performing elastic DRS policies for
the VM.
For example, in VMware Cloud on AWS, two vSAN datastores are provided for
each SDDC cluster:
These datastores are logical entities that share a common capacity pool. Each
datastore reports the total available free space in the cluster as its capacity.
Capacity consumed in either datastore updates the Free value for both. For example, if the
cluster reports 20 TB free, writing 2 TB to WorkloadDatastore drops the Free value of both
datastores to 18 TB.
vsanDatastore
The vsanDatastore provides storage for the management VMs in your SDDC, such as
vCenter Server, NSX controllers, and so on.
The management and troubleshooting of the vSAN storage in your SDDC is handled by
VMware.
For this reason, you can't edit the vSAN cluster settings or monitor the vSAN cluster. You
also do not have permission to browse this datastore, upload files to it, or delete files
from it.
WorkloadDatastore
WorkloadDatastore provides storage for your workload VMs, templates, ISO images, and
any other files you choose to upload to your SDDC.
You have full permission to browse this datastore, create folders, upload files, delete files,
and perform all other operations needed to consume this storage.
Intermittent or unexpected storage performance degradation can occur. The following best
practices help to maintain storage performance and to avoid issues:
• Use thin-provisioned disks for all workload VM VMDKs because these disks do not cause
performance impacts and help to maintain storage utilization efficiency.
• Use a RAID-1 storage policy to provide the best storage throughput at the lowest latency.
• Do not run any production VMs with a snapshot chain in place. Consolidate or delete all
snapshots when available.
The datastores in your SDDC are assigned the default VM storage policy. You can define
additional storage policies and assign them to either datastore.
vSAN storage policies define storage requirements for your virtual machines. These policies
guarantee the required level of service for your VMs because they determine how storage is
allocated to the VM.
Sometimes, configuration issues occur. Consider the following problem, its symptoms, and
resolution.
Problem: You cannot change the storage policy applied to any data, except for a VM.
Data refers to objects in the datastore other than VMs, such as ISO
image files, custom folders, scripts, and so on.
Resolution: The VMC Workload Storage Policy - Cluster-1 storage policy is applied
by default to the data in WorkloadDatastore when it is created.
You can change a VM's storage policy in the vSphere Client, but the
data's storage policy cannot be changed this way.
You also cannot check the current storage policy contents that are
applied to the data if the contents of the original default storage policy
are changed after you create the data.
If you need to change the storage policy that is applied to the data, you
must remove the data and recreate it in WorkloadDatastore with a new
storage policy.
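A VM's storage policy can also be changed programmatically. The sketch below is an assumption modeled on the vSphere Automation REST API's VM storage policy endpoint; verify the exact path and body against your vCenter Server's API explorer before relying on it.

    # Sketch: change a VM's storage policy via the vSphere Automation REST
    # API. Endpoint path and body shape are assumptions to be verified.
    import requests

    VC = "https://vcenter.sddc.example.com"  # hypothetical SDDC vCenter
    SESSION_ID = "..."  # from an authenticated session request

    body = {"spec": {"vm_home": {"type": "USE_SPECIFIED_POLICY",
                                 "policy": "policy-id-of-destination-cluster"}}}
    resp = requests.patch(
        f"{VC}/rest/vcenter/vm/vm-123/storage/policy",
        json=body,
        headers={"vmware-api-session-id": SESSION_ID},
        timeout=30,
    )
    resp.raise_for_status()
    # Non-VM data (ISOs, folders, scripts) cannot be re-policied this way:
    # per the resolution above, delete and recreate it with the new policy.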
To help avoid issues with storage policies, follow best practices. For VMware Cloud on AWS
SDDCs, consider the following best practices:
For clusters with six or more hosts, you cannot remove a host if the cluster storage
utilization is greater than 40% of the total storage capacity.
For all other types of clusters, do not remove a host if the cluster storage utilization is
greater than 40% of the total storage capacity.
Do not edit the managed storage policies that VMware Cloud on AWS creates for your
clusters.
If you rename a policy, it is no longer managed by VMware Cloud on AWS. If you edit the
settings of the managed storage policy, your changes are overwritten at the next storage
policy reconfiguration.
When you deploy a VM from a template, select Datastore Default for the VM storage policy.
The VM is deployed with the current cluster managed storage policy.
Recognize the effects of virtual machine storage policies on consumption and the SLA.
VM storage policies affect the consumption of storage capacity in the vSAN cluster and
whether they meet the requirements defined in the Service Level Agreement for VMware
Cloud on AWS (the SLA).
When migrating VMs between clusters in the same SDDC, change the VM storage policy to
the destination cluster's managed policy.
The default option of Keep existing VM Storage Policies is only appropriate if using a
custom policy; otherwise, select the policy assigned to the destination cluster.
• The Organization Owner role can invite additional users (who become organization
members) to the Organization. Service roles define the privileges of organization
members when they access the VMware Cloud services that the organization uses.
With the dynamic setup, users authenticate directly with the identity provider through
SAML JIT dynamic provisioning.
With connectorless-based setup, user authentication can be set up to use either a SAML
2.0 based identity provider or the Workspace ONE Access connector authentication
methods.
• For monitoring and maintaining your cloud SDDC, you can stay informed through release
notes, the VMware Cloud Services Status page, email notifications for scheduled
maintenance, and the Support panel, Activity Log, and Connectivity Validator in the
VMware Cloud console.
• To avoid problems in your cloud SDDC environment, verify that components such as
security, networking, and storage are configured correctly.
• You can troubleshoot workload issues in a cloud SDDC using methods similar to those you
use for on-premises workloads.
The following guidelines can help you to determine an appropriate address space.
✓ Choose a range of IP addresses that does not overlap with the AWS subnet that you
connect to.
✓ Provision an IP range that is unique in your organization.
If you plan to connect your SDDC to an on-premises data center, the IP address range
of the subnet must be unique within your enterprise network infrastructure. It cannot
overlap the IP address range of any of your on-premises networks.
✓ If you deploy a single-host SDDC, the IP address range 192.168.1.0/24 is reserved for
the default compute network of the SDDC. If you specify a management network
address range that overlaps this range, single-host SDDC creation fails.
✓ If you deploy a multi-host SDDC, no compute gateway logical network is created
during deployment, so you must create one after the SDDC is deployed.
✓ CIDR blocks of size /16, /20, or /23 are supported, but they must be in one of the private
address space blocks that are defined by RFC 1918 (10.0.0.0/8, 172.16.0.0/12, or
192.168.0.0/16).
✓ The range must be large enough to facilitate all hosts that you deploy on day 1 but
must also account for future growth.
✓ The management CIDR block cannot be changed after the SDDC is deployed, so a /23
block is appropriate only for SDDCs that will not require much growth in capacity.
For a complete list of IPv4 addresses reserved by VMware Cloud on AWS, access Reserved
Network Addresses in the VMware Cloud on AWS Networking and Security guide at
https://docs.vmware.com/en/VMware-Cloud-on-AWS/index.html.
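The job aid above can also be expressed as code. This sketch encodes those checklist rules directly; it is illustrative only and does not replace the reserved-address list in the documentation.

    # Sketch of the job-aid checks: validate a candidate management CIDR for
    # a VMware Cloud on AWS SDDC against the rules in the checklist above.
    import ipaddress

    RFC1918 = [ipaddress.ip_network(n)
               for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
    SINGLE_HOST_RESERVED = ipaddress.ip_network("192.168.1.0/24")

    def check_management_cidr(cidr: str, single_host: bool,
                              existing: list[str]) -> list[str]:
        net = ipaddress.ip_network(cidr)
        problems = []
        if net.prefixlen not in (16, 20, 23):
            problems.append("prefix must be /16, /20, or /23")
        if not any(net.subnet_of(block) for block in RFC1918):
            problems.append("must fall within RFC 1918 private space")
        if single_host and net.overlaps(SINGLE_HOST_RESERVED):
            problems.append("overlaps 192.168.1.0/24, reserved for the "
                            "single-host default compute network")
        for other in existing:
            if net.overlaps(ipaddress.ip_network(other)):
                problems.append(f"overlaps existing network {other}")
        return problems

    # Example: a /23 that collides with an on-premises subnet.
    print(check_management_cidr("10.10.0.0/23", single_host=False,
                                existing=["10.10.1.0/24"]))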