
Introduction To Microsoft Azure Fundamentals

AZ-900 Material


Microsoft Azure Fundamentals:

Describe cloud concepts


New to the cloud? Microsoft Azure fundamentals is a three-part series that teaches
you basic cloud concepts, provides a streamlined overview of many Azure services,
and guides you with hands-on exercises to deploy your very first services for free.
Complete all of the learning paths in the series if you are preparing for Exam AZ-
900: Microsoft Azure Fundamentals. This is the first learning path in the
series, Microsoft Azure Fundamentals: Describe cloud concepts. The other
learning paths in the series are Part 2: Describe Azure architecture and
services and Part 3: Describe Azure management and governance.

Part 1: Describe cloud computing


This module introduces you to cloud computing. It covers things such as
cloud concepts, deployment models, and understanding shared responsibility
in the cloud.

Learning objectives
Upon completion of this module, you will be able to:
 Define cloud computing.
 Describe the shared responsibility model.
 Define cloud models, including public, private, and hybrid.
 Identify appropriate use cases for each cloud model.
 Describe the consumption-based model.
 Compare cloud pricing models.

Introduction to Microsoft Azure Fundamentals
Microsoft Azure is a cloud computing platform with an ever-expanding set of
services to help you build solutions to meet your business goals. Azure
services support everything from simple to complex. Azure has simple web
services for hosting your business presence in the cloud. Azure also supports running fully virtualized computers that run your custom software solutions. Azure provides a wealth of cloud-based services like remote
storage, database hosting, and centralized account management. Azure also
offers new capabilities like artificial intelligence (AI) and Internet of Things
(IoT) focused services.
In this series, you’ll cover cloud computing basics, be introduced to some of
the core services provided by Microsoft Azure, and will learn more about the
governance and compliance services that you can use.

What is Azure Fundamentals?


Azure Fundamentals is a series of three learning paths that familiarize you
with Azure and its many services and features.

Whether you're interested in compute, networking, or storage services; learning about cloud security best practices; or exploring governance and management options, think of Azure Fundamentals as your curated guide to Azure.

Azure Fundamentals includes interactive exercises that give you hands-on experience with Azure. Many exercises provide a temporary Azure portal environment called the sandbox, which allows you to practice creating cloud resources for free at your own pace.

Technical IT experience isn't required; however, having general IT knowledge will help you get the most from your learning experience.

Introduction to cloud computing


In this module, you’ll be introduced to general cloud concepts. You’ll start
with an introduction to the cloud in general. Then you'll dive into concepts like shared responsibility and the different cloud models, and explore the cloud's unique pricing method.

If you’re already familiar with cloud computing, this module may be largely
review for you.

Learning objectives
After completing this module, you’ll be able to:

 Define cloud computing.


 Describe the shared responsibility model.
 Define cloud models, including public, private, and hybrid.
 Identify appropriate use cases for each cloud model.
 Describe the consumption-based model.
 Compare cloud pricing models.
What is cloud computing?
Cloud computing is the delivery of computing services over the internet.
Computing services include common IT infrastructure such as virtual
machines, storage, databases, and networking. Cloud services also expand
the traditional IT offerings to include things like Internet of Things (IoT),
machine learning (ML), and artificial intelligence (AI).

Because cloud computing uses the internet to deliver these services, it doesn’t have to be constrained by physical infrastructure the same way that a traditional datacenter is. That means if you need to increase your IT infrastructure rapidly, you don’t have to wait to build a new datacenter; you can use the cloud to rapidly expand your IT footprint.

This short video provides a quick introduction to cloud computing.

Describe the shared responsibility model
You may have heard of the shared responsibility model, but you may not
understand what it means or how it impacts cloud computing.

Start with a traditional corporate datacenter. The company is responsible for maintaining the physical space, ensuring security, and maintaining or replacing the servers if anything happens. The IT department is responsible for maintaining all the infrastructure and software needed to keep the datacenter up and running. They’re also likely to be responsible for keeping all systems patched and on the correct version.

With the shared responsibility model, these responsibilities get shared between the cloud provider and the consumer. Physical security, power, cooling, and network connectivity are the responsibility of the cloud provider. The consumer isn’t collocated with the datacenter, so it wouldn’t make sense for the consumer to have any of those responsibilities.

At the same time, the consumer is responsible for the data and information
stored in the cloud. (You wouldn’t want the cloud provider to be able to read
your information.) The consumer is also responsible for access security,
meaning you only give access to those who need it.
Then, for some things, the responsibility depends on the situation. If you’re
using a cloud SQL database, the cloud provider would be responsible for
maintaining the actual database. However, you’re still responsible for the
data that gets ingested into the database. If you deployed a virtual machine
and installed an SQL database on it, you’d be responsible for database
patches and updates, as well as maintaining the data and information stored
in the database.

With an on-premises datacenter, you’re responsible for everything. With cloud computing, those responsibilities shift. The shared responsibility model
is heavily tied into the cloud service types (covered later in this learning
path): infrastructure as a service (IaaS), platform as a service (PaaS), and
software as a service (SaaS). IaaS places the most responsibility on the
consumer, with the cloud provider being responsible for the basics of
physical security, power, and connectivity. On the other end of the spectrum,
SaaS places most of the responsibility with the cloud provider. PaaS, being a
middle ground between IaaS and SaaS, rests somewhere in the middle and
evenly distributes responsibility between the cloud provider and the
consumer.

The following diagram highlights how the shared responsibility model informs who is responsible for what, depending on the cloud service type.
You’ll always be responsible for:

 The information and data stored in the cloud


 Devices that are allowed to connect to your cloud (cell phones,
computers, and so on)
 The accounts and identities of the people, services, and devices within
your organization

The cloud provider is always responsible for:

 The physical datacenter


 The physical network
 The physical hosts

Your service model will determine responsibility for things like:

 Operating systems
 Network controls
 Applications
 Identity and infrastructure

Define cloud models


What are cloud models? The cloud models define the deployment type of
cloud resources. The three main cloud models are private, public, and hybrid.

Private cloud
Let’s start with a private cloud. A private cloud is, in some ways, the natural
evolution from a corporate datacenter. It’s a cloud (delivering IT services
over the internet) that’s used by a single entity. Private cloud provides much
greater control for the company and its IT department. However, it also
comes with greater cost and fewer of the benefits of a public cloud
deployment. Finally, a private cloud may be hosted from your on site
datacenter. It may also be hosted in a dedicated datacenter offsite,
potentially even by a third party that has dedicated that datacenter to your
company.

Public cloud
A public cloud is built, controlled, and maintained by a third-party cloud
provider. With a public cloud, anyone who wants to purchase cloud services
can access and use resources. The general public availability is a key
difference between public and private clouds.

Hybrid cloud
A hybrid cloud is a computing environment that uses both public and private
clouds in an interconnected environment. A hybrid cloud environment can
be used to allow a private cloud to surge for increased, temporary demand
by deploying public cloud resources. Hybrid cloud can be used to provide an
extra layer of security. For example, users can flexibly choose which services
to keep in public cloud and which to deploy to their private cloud
infrastructure.

The following table highlights a few key comparative aspects between the
cloud models.

Public cloud
- No capital expenditures to scale up
- Applications can be quickly provisioned and de-provisioned
- Organizations pay only for what they use
- Organizations don’t have complete control over resources and security

Private cloud
- Organizations have complete control over resources and security
- Data is not collocated with other organizations’ data
- Hardware must be purchased for startup and maintenance
- Organizations are responsible for hardware maintenance and updates

Hybrid cloud
- Provides the most flexibility
- Organizations determine where to run their applications
- Organizations control security, compliance, or legal requirements

Multi-cloud
A fourth, and increasingly likely, scenario is a multi-cloud scenario. In a multi-cloud scenario, you use multiple public cloud providers. Maybe you use
different features from different cloud providers. Or maybe you started your
cloud journey with one provider and are in the process of migrating to a
different provider. Regardless, in a multi-cloud environment you deal with
two (or more) public cloud providers and manage resources and security in
both environments.

Azure Arc
Azure Arc is a set of technologies that helps you manage your cloud environment, whether it's a public cloud solely on Azure, a private cloud in your datacenter, a hybrid configuration, or even a multi-cloud environment running on multiple cloud providers at once.

Azure VMware Solution


What if you’re already established with VMware in a private cloud
environment but want to migrate to a public or hybrid cloud? Azure VMware
Solution lets you run your VMware workloads in Azure with seamless
integration and scalability.

Describe the consumption-based model
When comparing IT infrastructure models, there are two types of expenses to consider: capital expenditure (CapEx) and operational expenditure (OpEx).

CapEx is typically a one-time, up-front expenditure to purchase or secure tangible resources. A new building, repaving the parking lot, building a datacenter, or buying a company vehicle are examples of CapEx.

In contrast, OpEx is spending money on services or products over time. Renting a convention center, leasing a company vehicle, or signing up for cloud services are all examples of OpEx.

Cloud computing falls under OpEx because cloud computing operates on a consumption-based model. With cloud computing, you don’t pay for the physical infrastructure, the electricity, the security, or anything else associated with maintaining a datacenter. Instead, you pay for the IT resources you use. If you don’t use any IT resources this month, you don’t pay for any IT resources.

This consumption-based model has many benefits, including:

 No upfront costs.
 No need to purchase and manage costly infrastructure that users might
not use to its fullest potential.
 The ability to pay for more resources when they're needed.
 The ability to stop paying for resources that are no longer needed.

With a traditional datacenter, you try to estimate the future resource needs.
If you overestimate, you spend more on your datacenter than you need to
and potentially waste money. If you underestimate, your datacenter will
quickly reach capacity and your applications and services may suffer from
decreased performance. Fixing an under-provisioned datacenter can take a
long time. You may need to order, receive, and install more hardware. You'll
also need to add power, cooling, and networking for the extra hardware.

In a cloud-based model, you don’t have to worry about getting the resource
needs just right. If you find that you need more virtual machines, you add
more. If the demand drops and you don’t need as many virtual machines,
you remove machines as needed. Either way, you’re only paying for the
virtual machines that you use, not the “extra capacity” that the cloud
provider has on hand.
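To make the consumption-based model concrete, here is a minimal worked example; the rate and usage figures are hypothetical, not actual Azure prices:

3 \text{ VMs} \times 200 \text{ hours} \times \$0.10/\text{hour} = \$60

If those virtual machines are deallocated after 200 hours, the compute charges stop accruing; an equivalent on-premises server would cost the same to buy whether it ran for 200 hours or all 8,760 hours in a year.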

Compare cloud pricing models


Cloud computing is the delivery of computing services over the internet by
using a pay-as-you-go pricing model. You typically pay only for the cloud
services you use, which helps you:

 Plan and manage your operating costs.


 Run your infrastructure more efficiently.
 Scale as your business needs change.

To put it another way, cloud computing is a way to rent compute power and
storage from someone else’s datacenter. You can treat cloud resources like
you would resources in your own datacenter. However, unlike in your own
datacenter, when you're done using cloud resources, you give them back.
You’re billed only for what you use.

Instead of maintaining CPUs and storage in your datacenter, you rent them
for the time that you need them. The cloud provider takes care of
maintaining the underlying infrastructure for you. The cloud enables you to
quickly solve your toughest business challenges and bring cutting-edge
solutions to your users.

Describe the benefits of using cloud services
Introduction
In this module, you’ll be introduced to some of the benefits that cloud
computing offers. You’ll learn how cloud computing can help you meet
variable demand while providing a good experience for your customer. You’ll
also learn about security, governance, and overall manageability in the
cloud.

Learning objectives
After completing this module, you’ll be able to:

 Describe the benefits of high availability and scalability in the cloud.


 Describe the benefits of reliability and predictability in the cloud.
 Describe the benefits of security and governance in the cloud.
 Describe the benefits of manageability in the cloud.

Describe the benefits of high availability and scalability in the cloud
When building or deploying a cloud application, two of the biggest
considerations are uptime (or availability) and the ability to handle demand
(or scale).

High availability
When you’re deploying an application, a service, or any IT resources, it’s
important the resources are available when needed. High availability focuses
on ensuring maximum availability, regardless of disruptions or events that
may occur.

When you’re architecting your solution, you’ll need to account for service
availability guarantees. Azure is a highly available cloud environment with
uptime guarantees depending on the service. These guarantees are part of
the service-level agreements (SLAs).

This short video describes Azure SLAs in more detail.

Scalability
Another major benefit of cloud computing is the scalability of cloud
resources. Scalability refers to the ability to adjust resources to meet
demand. If you suddenly experience peak traffic and your systems are
overwhelmed, the ability to scale means you can add more resources to
better handle the increased demand.

The other benefit of scalability is that you aren't overpaying for services.
Because the cloud is a consumption-based model, you only pay for what you
use. If demand drops off, you can reduce your resources and thereby reduce
your costs.

Scaling generally comes in two varieties: vertical and horizontal. Vertical scaling is focused on increasing or decreasing the capabilities of resources. Horizontal scaling is adding or subtracting the number of resources.

Vertical scaling

With vertical scaling, if you were developing an app and you needed more
processing power, you could vertically scale up to add more CPUs or RAM to
the virtual machine. Conversely, if you realized you had over-specified the
needs, you could vertically scale down by lowering the CPU or RAM
specifications.

Horizontal scaling

With horizontal scaling, if you suddenly experienced a steep jump in demand, your deployed resources could be scaled out, either automatically or manually. For example, you could scale out by adding additional virtual machines or containers. In the same manner, if there was a significant drop in demand, deployed resources could be scaled in, either automatically or manually.
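As a rough sketch only (the resource group, VM, and scale set names are hypothetical, and the size and capacity values are just examples), the two scaling directions map to Azure CLI commands like these:

Azure CLI
# Vertical scaling: resize an existing VM to a larger (or smaller) size
az vm resize --resource-group my-rg --name my-vm --size Standard_D4s_v3

# Horizontal scaling: change the number of instances in a virtual machine scale set
az vmss scale --resource-group my-rg --name my-scale-set --new-capacity 5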
Describe the benefits of reliability
and predictability in the cloud
Reliability and predictability are two crucial cloud benefits that help you
develop solutions with confidence.

Reliability
Reliability is the ability of a system to recover from failures and continue to
function. It's also one of the pillars of the Microsoft Azure Well-Architected
Framework.

The cloud, by virtue of its decentralized design, naturally supports a reliable and resilient infrastructure. With a decentralized design, the cloud enables you to have resources deployed in regions around the world. With this global scale, even if one region has a catastrophic event, other regions are still up and running. You can design your applications to automatically take advantage of this increased reliability. In some cases, your cloud environment itself will automatically shift to a different region for you, with no action needed on your part. You’ll learn more about how Azure leverages global scale to provide reliability later in this series.

Predictability
Predictability in the cloud lets you move forward with confidence.
Predictability can be focused on performance predictability or cost
predictability. Both performance and cost predictability are heavily
influenced by the Microsoft Azure Well-Architected Framework. Deploy a
solution that’s built around this framework and you have a solution whose
cost and performance are predictable.

Performance

Performance predictability focuses on predicting the resources needed to deliver a positive experience for your customers. Autoscaling, load balancing, and high availability are just some of the cloud concepts that support performance predictability. If you suddenly need more resources, autoscaling can deploy additional resources to meet the demand, and then scale back when the demand drops. Or if the traffic is heavily focused on one area, load balancing will help redirect some of the overload to less stressed areas.
Cost

Cost predictability is focused on predicting or forecasting the cost of your cloud spend. With the cloud, you can track your resource use in real time, monitor resources to ensure that you’re using them in the most efficient way, and apply data analytics to find patterns and trends that help you better plan resource deployments. By operating in the cloud and using cloud analytics and information, you can predict future costs and adjust your resources as needed. You can even use tools like the Total Cost of Ownership (TCO) calculator or the Pricing Calculator to get an estimate of potential cloud spend.

Describe the benefits of security and governance in the cloud
Whether you’re deploying infrastructure as a service or software as a
service, cloud features support governance and compliance. Things like set
templates help ensure that all your deployed resources meet corporate
standards and government regulatory requirements. Plus, you can update all
your deployed resources to new standards as standards change. Cloud-
based auditing helps flag any resource that’s out of compliance with your
corporate standards and provides mitigation strategies. Depending on your
operating model, software patches and updates may also automatically be
applied, which helps with both governance and security.

On the security side, you can find a cloud solution that matches your security
needs. If you want maximum control of security, infrastructure as a service
provides you with physical resources but lets you manage the operating
systems and installed software, including patches and maintenance. If you
want patches and maintenance taken care of automatically, platform as a
service or software as a service deployment may be the best cloud strategies
for you.

And because the cloud is intended as an over-the-internet delivery of IT resources, cloud providers are typically well suited to handle things like distributed denial of service (DDoS) attacks, making your network more robust and secure.

By establishing a good governance footprint early, you can keep your cloud
footprint updated, secure, and well managed.

Describe the benefits of manageability in the cloud
A major benefit of cloud computing is the manageability options. There are
two types of manageability for cloud computing that you’ll learn about in this
series, and both are excellent benefits.

Management of the cloud


Management of the cloud speaks to managing your cloud resources. In the
cloud, you can:

 Automatically scale resource deployment based on need.


 Deploy resources based on a preconfigured template, removing the
need for manual configuration.
 Monitor the health of resources and automatically replace failing
resources.
 Receive automatic alerts based on configured metrics, so you’re aware
of performance in real time.

Management in the cloud


Management in the cloud speaks to how you’re able to manage your cloud
environment and resources. You can manage these:

 Through a web portal.


 Using a command line interface (see the sketch after this list).
 Using APIs.
 Using PowerShell.
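For example, a couple of Azure CLI commands give the same view of your environment that the portal does; this is only a sketch, and the output depends on what's in your subscription:

Azure CLI
# List all resource groups in the current subscription as a table
az group list --output table

# List all virtual machines, including power state and public IP address
az vm list --show-details --output table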

Describe cloud service types

This module covers the different cloud service types and shares some of the use cases and benefits aligned with each service type.

Introduction
In this module, you’ll be introduced to cloud service types. You’ll learn how
each cloud service type determines the flexibility you’ll have with managing
and configuring resources. You'll understand how the shared responsibility
model applies to each cloud service type, and about various use cases for
each cloud service type.
Learning objectives
After completing this module, you’ll be able to:

 Describe infrastructure as a service (IaaS).


 Describe platform as a service (PaaS).
 Describe software as a service (SaaS).
 Identify appropriate use cases for each cloud service (IaaS, PaaS, SaaS).

Describe Infrastructure as a
Service
Infrastructure as a service (IaaS) is the most flexible category of cloud
services, as it provides you the maximum amount of control for your cloud
resources. In an IaaS model, the cloud provider is responsible for maintaining
the hardware, network connectivity (to the internet), and physical security.
You’re responsible for everything else: operating system installation,
configuration, and maintenance; network configuration; database and
storage configuration; and so on. With IaaS, you’re essentially renting the
hardware in a cloud datacenter, but what you do with that hardware is up to
you.

Shared responsibility model


The shared responsibility model applies to all the cloud service types. IaaS
places the largest share of responsibility with you. The cloud provider is
responsible for maintaining the physical infrastructure and its access to the
internet. You’re responsible for installation and configuration, patching and
updates, and security.
Scenarios
Some common scenarios where IaaS might make sense include:

 Lift-and-shift migration: You’re standing up cloud resources similar to your on-prem datacenter, and then simply moving the things running on-prem to running on the IaaS infrastructure.
 Testing and development: You have established configurations for
development and test environments that you need to rapidly replicate.
You can stand up or shut down the different environments rapidly with
an IaaS structure, while maintaining complete control.

Describe Platform as a Service


Platform as a service (PaaS) is a middle ground between renting space in a
datacenter (infrastructure as a service) and paying for a complete and
deployed solution (software as a service). In a PaaS environment, the cloud
provider maintains the physical infrastructure, physical security, and
connection to the internet. They also maintain the operating systems,
middleware, development tools, and business intelligence services that
make up a cloud solution. In a PaaS scenario, you don't have to worry about
the licensing or patching for operating systems and databases.
PaaS is well suited to provide a complete development environment without
the headache of maintaining all the development infrastructure.

Shared responsibility model


The shared responsibility model applies to all the cloud service types. PaaS
splits the responsibility between you and the cloud provider. The cloud
provider is responsible for maintaining the physical infrastructure and its
access to the internet, just like in IaaS. In the PaaS model, the cloud provider
will also maintain the operating systems, databases, and development tools.
Think of PaaS like using a domain joined machine: IT maintains the device
with regular updates, patches, and refreshes.

Depending on the configuration, you or the cloud provider may be responsible for networking settings and connectivity within your cloud environment, network and application security, and the directory infrastructure.

Scenarios
Some common scenarios where PaaS might make sense include:
 Development framework: PaaS provides a framework that developers
can build upon to develop or customize cloud-based applications.
Similar to the way you create an Excel macro, PaaS lets developers
create applications using built-in software components. Cloud features
such as scalability, high-availability, and multi-tenant capability are
included, reducing the amount of coding that developers must do.
 Analytics or business intelligence: Tools provided as a service with PaaS
allow organizations to analyze and mine their data, finding insights and
patterns and predicting outcomes to improve forecasting, product
design decisions, investment returns, and other business decisions.

Describe Software as a Service


Software as a service (SaaS) is the most complete cloud service model from
a product perspective. With SaaS, you’re essentially renting or using a fully
developed application. Email, financial software, messaging applications, and
connectivity software are all common examples of a SaaS implementation.

While the SaaS model may be the least flexible, it’s also the easiest to get up
and running. It requires the least amount of technical knowledge or expertise
to fully employ.

Shared responsibility model


The shared responsibility model applies to all the cloud service types. SaaS is
the model that places the most responsibility with the cloud provider and the
least responsibility with the user. In a SaaS environment you’re responsible
for the data that you put into the system, the devices that you allow to
connect to the system, and the users that have access. Nearly everything
else falls to the cloud provider. The cloud provider is responsible for physical
security of the datacenters, power, network connectivity, and application
development and patching.
Scenarios
Some common scenarios for SaaS are:

 Email and messaging.


 Business productivity applications.
 Finance and expense tracking.

Part 2: Describe Azure architecture and services
Describe the core architectural
components of Azure
Introduction
In this module, you’ll be introduced to the core architectural components of
Azure. You’ll learn about the physical organization of Azure: datacenters,
availability zones, and regions; and you’ll learn about the organizational
structure of Azure: resources and resource groups, subscriptions, and
management groups.

Learning objectives
After completing this module, you’ll be able to:

 Describe Azure regions, region pairs, and sovereign regions.


 Describe Availability Zones.
 Describe Azure datacenters.
 Describe Azure resources and Resource Groups.
 Describe subscriptions.
 Describe management groups.
 Describe the hierarchy of resource groups, subscriptions, and
management groups.

What is Microsoft Azure


Azure is a continually expanding set of cloud services that help you meet
current and future business challenges. Azure gives you the freedom to
build, manage, and deploy applications on a massive global network using
your favorite tools and frameworks.

What does Azure offer?


With help from Azure, you have everything you need to build your next great
solution. The following lists several of the benefits that Azure provides, so
you can easily invent with purpose:

 Be ready for the future: Continuous innovation from Microsoft supports your development today and your product visions for tomorrow.
 Build on your terms: You have choices. With a commitment to open
source, and support for all languages and frameworks, you can build
how you want and deploy where you want.
 Operate hybrid seamlessly: On-premises, in the cloud, and at the
edge, we'll meet you where you are. Integrate and manage your
environments with tools and services designed for a hybrid cloud
solution.
 Trust your cloud: Get security from the ground up, backed by a team
of experts, and proactive compliance trusted by enterprises,
governments, and startups.

What can I do with Azure?


Azure provides more than 100 services that enable you to do everything
from running your existing applications on virtual machines to exploring new
software paradigms, such as intelligent bots and mixed reality.

Many teams start exploring the cloud by moving their existing applications to
virtual machines (VMs) that run in Azure. Migrating your existing apps to VMs
is a good start, but the cloud is much more than a different place to run your
VMs.

For example, Azure provides artificial intelligence (AI) and machine-learning (ML) services that can naturally communicate with your users through vision, hearing, and speech. It also provides storage solutions that dynamically grow to accommodate massive amounts of data. Azure services enable solutions that aren't feasible without the power of the cloud.

Get started with Azure accounts


To create and use Azure services, you need an Azure subscription. When
you're completing Learn modules, most of the time a temporary subscription
is created for you, which runs in an environment called the Learn sandbox.
When you're working with your own applications and business needs, you
need to create an Azure account, and a subscription will be created for you.
After you've created an Azure account, you're free to create additional
subscriptions. For example, your company might use a single Azure account
for your business and separate subscriptions for development, marketing,
and sales departments. After you've created an Azure subscription, you can
start creating Azure resources within each subscription.
If you're new to Azure, you can sign up for a free account on the Azure
website to start exploring at no cost to you. When you're ready, you can
choose to upgrade your free account. You can also create a new subscription
that enables you to start paying for Azure services you need beyond the
limits of a free account.

Create an Azure account


You can purchase Azure access directly from Microsoft by signing up on the
Azure website or through a Microsoft representative. You can also purchase
Azure access through a Microsoft partner. Cloud Solution Provider partners
offer a range of complete managed-cloud solutions for Azure.

What is the Azure free account?

The Azure free account includes:


 Free access to popular Azure products for 12 months.
 A credit to use for the first 30 days.
 Access to more than 25 products that are always free.

The Azure free account is an excellent way for new users to get started and
explore. To sign up, you need a phone number, a credit card, and a Microsoft
or GitHub account. The credit card information is used for identity
verification only. You won't be charged for any services until you upgrade to
a paid subscription.

What is the Azure free student account?

The Azure free student account offer includes:

 Free access to certain Azure services for 12 months.


 A credit to use in the first 12 months.
 Free access to certain software developer tools.

The Azure free student account is an offer for students that gives $100 credit
and free developer tools. Also, you can sign up without a credit card.

What is the Microsoft Learn sandbox?

Many of the Learn exercises use a technology called the sandbox, which
creates a temporary subscription that's added to your Azure account. This
temporary subscription allows you to create Azure resources during a Learn
module. Learn automatically cleans up the temporary resources for you after
you've completed the module.

When you're completing a Learn module, you're welcome to use your personal subscription to complete the exercises in a module. However, the sandbox is the preferred method to use because it allows you to create and test Azure resources at no cost to you.

Exercise - Explore the Learn sandbox
This module requires a sandbox to complete.

A sandbox gives you access to free resources. Your personal subscription will not be charged. The sandbox may only be used to complete training on Microsoft Learn. Use for any other reason is prohibited, and may result in permanent loss of access to the sandbox.
Microsoft provides this lab experience and related content for educational
purposes. All presented information is owned by Microsoft and intended
solely for learning about the covered products and services in this Microsoft
Learn module.

Activate sandbox

In this exercise, you explore the Learn sandbox. You can interact with the Learn sandbox in three different ways. During exercises, you'll be provided with instructions for at least one of the methods below.

You start by activating the Learn sandbox. Then, you’ll investigate each of
the methods to work in the Learn sandbox.

Activate the Learn Sandbox


If you haven’t already, use the Activate sandbox button above to activate the
Learn sandbox.

If you receive a notice saying Microsoft Learn needs your permission to create Azure resources, use the Review permission button to review and accept the permissions. Once you approve the permissions, it may take a few minutes for the sandbox to activate.

Task 1: Use the PowerShell CLI


Once the sandbox launches, half the screen will be in PowerShell command
line interface (CLI) mode. If you’re familiar with PowerShell, you can manage
your Azure environment using PowerShell commands.
Tip

You can tell you're in PowerShell mode by the PS before your directory on
the command line.

Use the PowerShell Get-date command to get the current date and time.

PowerShell
Get-date

Most Azure-specific commands will start with the letters az. The Get-date command you just ran is a PowerShell-specific command. Let's try an Azure command to check what version of the CLI you're using right now.

PowerShell
az version

Task 2: Use the BASH CLI


If you’re more familiar with BASH, you can use BASH commands instead by shifting to the BASH CLI.

Enter bash to switch to the BASH CLI.

PowerShell
bash
Tip

You can tell you're in BASH mode by the username displayed on the
command line. It will be your username@azure.

Again, use the Get-date command to get the current date and time.

Azure CLI
Get-date

You received an error because Get-date is a PowerShell-specific command.

Use the date command to get the current date and time.

Azure CLI
date

Just like in the PowerShell mode of the CLI, you can use the letters az to start
an Azure command in the BASH mode. Try to run an update to the CLI with
az upgrade.

Azure CLI
az upgrade

You can change back to PowerShell mode by entering pwsh on the BASH
command line.

Task 3: Use Azure CLI interactive mode


Another way to interact is using the Azure CLI interactive mode. This
changes CLI behavior to more closely resemble an integrated development
environment (IDE). Interactive mode provides autocompletion, command
descriptions, and even examples. If you’re unfamiliar with BASH and
PowerShell, but want to use the command line, interactive mode may help
you.

Enter az interactive to enter interactive mode.

Azure CLI
az interactive

Decide whether you wish to send telemetry data and enter YES or NO.

You may have to wait a minute or two to allow the interactive mode to fully
initialize. Then, enter the letter “a” and auto-completion should start to work.
If auto-completion isn’t working, erase what you’ve entered, wait a bit
longer, and try again.
Once initialized, you can use the arrow keys or tab to help complete your
commands. Interactive mode is set up specifically for Azure, so you don't
need to enter az to start a command (but you can if you want to or are used
to it). Try the upgrade or version commands again, but this time without az
in front.

Azure CLI
version
Azure CLI
upgrade

The commands should have worked the same as before, and given you the
same results. Use the exit command to leave interactive mode.
Azure CLI
exit

Task 4: Use the Azure portal


You’ll also have the option of using the Azure portal during sandbox
exercises. You need to use the link provided in the exercise to access the
Azure portal. Using the provided link, instead of opening the portal yourself,
ensures the correct subscription is used and the exercise remains free for
you to complete.

Sign in to the Azure portal to check out the Azure web interface. Once in the
portal, you can see all the services Azure has to offer as well as look around
at resource groups and so on.

Continue
You're all set for now. We'll come back to this sandbox later in this module
and actually create an Azure resource!

Describe Azure physical infrastructure
Throughout your journey with Microsoft Azure, you’ll hear and use terms like
Regions, Availability Zones, Resources, Subscriptions, and more. This module
focuses on the core architectural components of Azure. The core
architectural components of Azure may be broken down into two main
groupings: the physical infrastructure, and the management infrastructure.

Physical infrastructure
The physical infrastructure for Azure starts with datacenters. Conceptually,
the datacenters are the same as large corporate datacenters. They’re
facilities with resources arranged in racks, with dedicated power, cooling,
and networking infrastructure.

As a global cloud provider, Azure has datacenters around the world. However, these individual datacenters aren’t directly accessible. Datacenters are grouped into Azure Regions or Azure Availability Zones that are designed to help you achieve resiliency and reliability for your business-critical workloads.

The Global infrastructure site gives you a chance to interactively explore the
underlying Azure infrastructure.

Regions

A region is a geographical area on the planet that contains at least one, but
potentially multiple datacenters that are nearby and networked together
with a low-latency network. Azure intelligently assigns and controls the
resources within each region to ensure workloads are appropriately
balanced.

When you deploy a resource in Azure, you'll often need to choose the region
where you want your resource deployed.
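If you want to check which regions are available to your subscription before deploying, one way to do so (shown here as a sketch) is from the Azure CLI:

Azure CLI
# List the regions (locations) available to the current subscription
az account list-locations --output table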

Note

Some services or virtual machine (VM) features are only available in certain
regions, such as specific VM sizes or storage types. There are also some
global Azure services that don't require you to select a particular region,
such as Microsoft Entra ID, Azure Traffic Manager, and Azure DNS.

Availability Zones

Availability zones are physically separate datacenters within an Azure region. Each availability zone is made up of one or more datacenters equipped with independent power, cooling, and networking. An availability zone is set up to be an isolation boundary. If one zone goes down, the others continue working. Availability zones are connected through high-speed, private fiber-optic networks.
Important

To ensure resiliency, a minimum of three separate availability zones are present in all availability zone-enabled regions. However, not all Azure regions currently support availability zones.

Use availability zones in your apps

You want to ensure your services and data are redundant so you can protect
your information in case of failure. When you host your infrastructure, setting
up your own redundancy requires that you create duplicate hardware
environments. Azure can help make your app highly available through
availability zones.

You can use availability zones to run mission-critical applications and build
high-availability into your application architecture by co-locating your
compute, storage, networking, and data resources within an availability zone
and replicating in other availability zones. Keep in mind that there could be a
cost to duplicating your services and transferring data between availability
zones.
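As a sketch of what pinning a resource to a zone looks like (the resource group, VM name, image alias, and zone number are illustrative assumptions, and the region must support availability zones):

Azure CLI
# Create a VM pinned to availability zone 1 in a zone-enabled region
az vm create \
  --resource-group my-rg \
  --name my-zonal-vm \
  --image Ubuntu2204 \
  --zone 1 \
  --admin-username azureuser \
  --generate-ssh-keys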
Availability zones are primarily for VMs, managed disks, load balancers, and
SQL databases. Azure services that support availability zones fall into three
categories:

 Zonal services: You pin the resource to a specific zone (for example, VMs,
managed disks, IP addresses).
 Zone-redundant services: The platform replicates automatically across zones
(for example, zone-redundant storage, SQL Database).
 Non-regional services: Services are always available from Azure geographies
and are resilient to zone-wide outages as well as region-wide outages.

Even with the additional resiliency that availability zones provide, it’s
possible that an event could be so large that it impacts multiple availability
zones in a single region. To provide even further resilience, Azure has Region
Pairs.

Region pairs

Most Azure regions are paired with another region within the same
geography (such as US, Europe, or Asia) at least 300 miles away. This
approach allows for the replication of resources across a geography that
helps reduce the likelihood of interruptions because of events such as
natural disasters, civil unrest, power outages, or physical network outages
that affect an entire region. For example, if a region in a pair was affected by
a natural disaster, services would automatically fail over to the other region
in its region pair.

Important

Not all Azure services automatically replicate data or automatically fall back
from a failed region to cross-replicate to another enabled region. In these
scenarios, recovery and replication must be configured by the customer.

Examples of region pairs in Azure are West US paired with East US and
South-East Asia paired with East Asia. Because the pair of regions are
directly connected and far enough apart to be isolated from regional
disasters, you can use them to provide reliable services and data
redundancy.
Additional advantages of region pairs:
 If an extensive Azure outage occurs, one region out of every pair is prioritized
to make sure at least one is restored as quickly as possible for applications
hosted in that region pair.
 Planned Azure updates are rolled out to paired regions one region at a time to
minimize downtime and risk of application outage.
 Data continues to reside within the same geography as its pair (except for
Brazil South) for tax- and law-enforcement jurisdiction purposes.
Important

Most regions are paired in two directions, meaning they are the backup for the region that provides a backup for them (West US and East US back each other up). However, some regions, such as West India and Brazil South, are paired in only one direction. In a one-direction pairing, the primary region does not provide backup for its secondary region. For example, West India's secondary region is South India, but South India's secondary region is Central India, so South India does not rely on West India. Brazil South is unique because it's paired with a region outside of its geography: its secondary region is South Central US, but the secondary region of South Central US isn't Brazil South.

Sovereign Regions

In addition to regular regions, Azure also has sovereign regions. Sovereign regions are instances of Azure that are isolated from the main instance of Azure. You may need to use a sovereign region for compliance or legal purposes.

Azure sovereign regions include:

 US DoD Central, US Gov Virginia, US Gov Iowa and more: These regions are
physical and logical network-isolated instances of Azure for U.S. government
agencies and partners. These datacenters are operated by screened U.S.
personnel and include additional compliance certifications.
 China East, China North, and more: These regions are available through a
unique partnership between Microsoft and 21Vianet, whereby Microsoft
doesn't directly maintain the datacenters.

Describe Azure management infrastructure
The management infrastructure includes Azure resources and resource
groups, subscriptions, and accounts. Understanding the hierarchical
organization will help you plan your projects and products within Azure.

Azure resources and resource groups


A resource is the basic building block of Azure. Anything you create,
provision, deploy, etc. is a resource. Virtual Machines (VMs), virtual networks,
databases, cognitive services, etc. are all considered resources within Azure.

Resource groups are simply groupings of resources. When you create a resource, you’re required to place it into a resource group. While a resource group can contain many resources, a single resource can only be in one resource group at a time. Some resources may be moved between resource groups, but when you move a resource to a new group, it will no longer be associated with the former group. Additionally, resource groups can't be nested, meaning you can’t put resource group B inside of resource group A.
Resource groups provide a convenient way to group resources together.
When you apply an action to a resource group, that action will apply to all
the resources within the resource group. If you delete a resource group, all
the resources will be deleted. If you grant or deny access to a resource
group, you’ve granted or denied access to all the resources within the
resource group.

When you’re provisioning resources, it’s good to think about the resource
group structure that best suits your needs.

For example, if you’re setting up a temporary dev environment, grouping all the resources together means you can deprovision all of the associated resources at once by deleting the resource group. If you’re provisioning compute resources that will need three different access schemas, it may be best to group resources based on the access schema, and then assign access at the resource group level.

There aren’t hard rules about how you use resource groups, so consider how
to set up your resource groups to maximize their usefulness for you.
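As a quick sketch of why that grouping pays off (the group, location, and resource names below are made up for illustration), a temporary environment can live in one resource group and be removed with a single delete:

Azure CLI
# Create a resource group to hold everything for a temporary environment
az group create --name learn-temp-rg --location eastus

# Put resources into that group (a storage account name must be globally unique)
az storage account create --name learntempstor123 --resource-group learn-temp-rg --sku Standard_LRS

# Tear the whole environment down in one step by deleting the group
az group delete --name learn-temp-rg --yes --no-wait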

Azure subscriptions
In Azure, subscriptions are a unit of management, billing, and scale. Similar
to how resource groups are a way to logically organize resources,
subscriptions allow you to logically organize your resource groups and
facilitate billing.

Using Azure requires an Azure subscription. A subscription provides you with authenticated and authorized access to Azure products and services. It also allows you to provision resources. An Azure subscription links to an Azure account, which is an identity in Microsoft Entra ID or in a directory that Microsoft Entra ID trusts.
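If your account has access to more than one subscription, a brief sketch of inspecting and switching the active subscription from the Azure CLI looks like this (the subscription name is hypothetical):

Azure CLI
# List the subscriptions the signed-in account can access
az account list --output table

# Make a specific subscription the active one for subsequent commands
az account set --subscription "Development"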
An account can have multiple subscriptions, but it’s only required to have
one. In a multi-subscription account, you can use the subscriptions to
configure different billing models and apply different access-management
policies. You can use Azure subscriptions to define boundaries around Azure
products, services, and resources. There are two types of subscription
boundaries that you can use:

 Billing boundary: This subscription type determines how an Azure account is billed for using Azure. You can create multiple subscriptions for different types of billing requirements. Azure generates separate billing reports and invoices for each subscription so that you can organize and manage costs.
 Access control boundary: Azure applies access-management policies at the
subscription level, and you can create separate subscriptions to reflect
different organizational structures. An example is that within a business, you
have different departments to which you apply distinct Azure subscription
policies. This billing model allows you to manage and control access to the
resources that users provision with specific subscriptions.

Create additional Azure subscriptions

Similar to using resource groups to separate resources by function or access, you might want to create additional subscriptions for resource or billing management purposes. For example, you might choose to create additional subscriptions to separate:

 Environments: You can choose to create subscriptions to set up separate environments for development and testing, security, or to isolate data for compliance reasons. This design is particularly useful because resource access control occurs at the subscription level.
 Organizational structures: You can create subscriptions to reflect different
organizational structures. For example, you could limit one team to lower-cost
resources, while allowing the IT department a full range. This design allows
you to manage and control access to the resources that users provision within
each subscription.
 Billing: You can create additional subscriptions for billing purposes. Because
costs are first aggregated at the subscription level, you might want to create
subscriptions to manage and track costs based on your needs. For instance,
you might want to create one subscription for your production workloads and
another subscription for your development and testing workloads.

Azure management groups


The final piece is the management group. Resources are gathered into resource groups, and resource groups are gathered into subscriptions. If you’re just starting in Azure, that might seem like enough hierarchy to keep things organized. But imagine you’re dealing with multiple applications and multiple development teams across multiple geographies.

If you have many subscriptions, you might need a way to efficiently manage
access, policies, and compliance for those subscriptions. Azure management
groups provide a level of scope above subscriptions. You organize
subscriptions into containers called management groups and apply
governance conditions to the management groups. All subscriptions within a
management group automatically inherit the conditions applied to the
management group, the same way that resource groups inherit settings from
subscriptions and resources inherit from resource groups. Management
groups give you enterprise-grade management at a large scale, no matter
what type of subscriptions you might have. Management groups can be
nested.
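A minimal sketch of building that hierarchy from the command line, assuming hypothetical group names and a placeholder subscription ID:

Azure CLI
# Create a management group and nest a child group under it
az account management-group create --name corp-root --display-name "Corp Root"
az account management-group create --name production --display-name "Production" --parent corp-root

# Move a subscription under the Production management group
az account management-group subscription add --name production --subscription "<subscription-id>"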

Management group, subscriptions, and resource group hierarchy
You can build a flexible structure of management groups and subscriptions
to organize your resources into a hierarchy for unified policy and access
management. The following diagram shows an example of creating a
hierarchy for governance by using management groups.
Some examples of how you could use management groups might be:

 Create a hierarchy that applies a policy. You could limit VM locations to the US West Region in a group called Production. This policy will inherit onto all
the subscriptions that are descendants of that management group and will
apply to all VMs under those subscriptions. This security policy can't be altered
by the resource or subscription owner, which allows for improved governance.
 Provide user access to multiple subscriptions. By moving multiple
subscriptions under a management group, you can create one Azure role-
based access control (Azure RBAC) assignment on the management group.
Assigning Azure RBAC at the management group level means that all sub-
management groups, subscriptions, resource groups, and resources
underneath that management group would also inherit those permissions. One
assignment on the management group can enable users to have access to
everything they need instead of scripting Azure RBAC over different
subscriptions.

Important facts about management groups:

 10,000 management groups can be supported in a single directory.


 A management group tree can support up to six levels of depth. This limit
doesn't include the root level or the subscription level.
 Each management group and subscription can support only one parent.

Exercise - Create an Azure resource
This module requires a sandbox to complete.

In this exercise, you’ll use the Azure portal to create a resource. The focus of
the exercise is observing how Azure resource groups populate with created
resources.

Important

The sandbox should already be activated, but if the sandbox closed, reactivate the sandbox before continuing.

Task 1: Create a virtual machine


In this task, you’ll create a virtual machine using the Azure portal.

1. Sign in to the Azure portal.
2. Select Create a resource > Compute > Virtual Machine > Create.
3. The Create a virtual machine pane opens to the basics tab.
4. Verify or enter the following values for each setting. If a setting isn’t
specified, leave the default value.

Basics tab

Subscription: Concierge Subscription
Resource group: Select the resource group name that begins with learn.
Virtual machine name: my-VM
Region: Leave default
Availability options: Leave default
Security type: Leave default
Image: Leave default
VM architecture: Leave default
Run with Azure Spot discount: Unchecked
Size: Leave default
Authentication type: Password
Username: azureuser
Password: Enter a custom password
Confirm password: Reenter the custom password
Public inbound ports: None

5. Select Review and Create.
6. Select Create.

Wait while the VM is provisioned. Deployment is in progress will change to Deployment is complete when the VM is ready.

Task 2: Verify resources created


Once the deployment is created, you can verify that Azure created not only a
VM, but all of the associated resources the VM needs.

1. Select Home
2. Select Resource groups
3. Select the [sandbox resource group name] resource group

You should see a list of resources in the resource group. The storage account
and virtual network are associated with the Learn sandbox. However, the
rest of the resources were created when you created the virtual machine. By
default, Azure gave them all a similar name to help with association and
grouped them in the same resource group.

Congratulations! You've created a resource in Azure and had a chance to see how resources get grouped on creation.

Clean up
The sandbox automatically cleans up your resources when you're finished
with this module.

When you're working in your own subscription, it's a good idea at the end of
a project to identify whether you still need the resources you created.
Resources that you leave running can cost you money. You can delete
resources individually or delete the resource group to delete the entire set of
resources.

Describe Azure compute and networking services
This module focuses on some of the compute services and networking services available within Azure.

Introduction
In this module, you’ll be introduced to the compute and networking services
of Azure. You’ll learn about three of the compute options (virtual machines,
containers, and Azure functions). You’ll also learn about some of the
networking features, such as Azure virtual networks, Azure DNS, and Azure
ExpressRoute.

Learning objectives
After completing this module, you’ll be able to:

 Compare compute types, including container instances, virtual machines, and functions.
 Describe virtual machine options, including virtual machines (VMs),
virtual machine scale sets, virtual machine availability sets, and Azure
Virtual Desktop.
 Describe resources required for virtual machines.
 Describe application hosting options, including Azure Web Apps,
containers, and virtual machines.
 Describe virtual networking, including the purpose of Azure Virtual
Networks, Azure virtual subnets, peering, Azure DNS, VPN Gateway, and
ExpressRoute.
 Define public and private endpoints.

Describe Azure virtual machines


With Azure Virtual Machines (VMs), you can create and use VMs in the cloud.
VMs provide infrastructure as a service (IaaS) in the form of a virtualized
server and can be used in many ways. Just like a physical computer, you can
customize all of the software running on your VM. VMs are an ideal choice
when you need:

 Total control over the operating system (OS).


 The ability to run custom software.
 To use custom hosting configurations.

An Azure VM gives you the flexibility of virtualization without having to buy and maintain the physical hardware that runs the VM. However, as an IaaS offering, you still need to configure, update, and maintain the software that runs on the VM.

You can even create or use an already created image to rapidly provision
VMs. You can create and provision a VM in minutes when you select a
preconfigured VM image. An image is a template used to create a VM and
may already include an OS and other software, like development tools or
web hosting environments.

Scale VMs in Azure


You can run single VMs for testing, development, or minor tasks. Or you can
group VMs together to provide high availability, scalability, and redundancy.
Azure can also manage the grouping of VMs for you with features such as
scale sets and availability sets.
Virtual machine scale sets

Virtual machine scale sets let you create and manage a group of identical,
load-balanced VMs. If you simply created multiple VMs with the same
purpose, you’d need to ensure they were all configured identically and then
set up network routing parameters to ensure efficiency. You’d also have to
monitor the utilization to determine if you need to increase or decrease the
number of VMs.

Instead, with virtual machine scale sets, Azure automates most of that work.
Scale sets allow you to centrally manage, configure, and update a large
number of VMs in minutes. The number of VM instances can automatically
increase or decrease in response to demand, or you can set it to scale based
on a defined schedule. Virtual machine scale sets also automatically deploy a
load balancer to make sure that your resources are being used efficiently.
With virtual machine scale sets, you can build large-scale services for areas
such as compute, big data, and container workloads.
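
For illustration, the following Azure CLI sketch creates a small scale set and an autoscale setting that lets the instance count grow and shrink between defined limits. The resource group, names, and counts are placeholders rather than values used elsewhere in this module.

Azure CLI
# Create a scale set of identical, load-balanced Ubuntu VMs
az vmss create \
--resource-group my-resource-group \
--name my-scale-set \
--image Ubuntu2204 \
--instance-count 2 \
--admin-username azureuser \
--generate-ssh-keys

# Let the instance count scale between 2 and 10
az monitor autoscale create \
--resource-group my-resource-group \
--resource my-scale-set \
--resource-type Microsoft.Compute/virtualMachineScaleSets \
--name my-autoscale-setting \
--min-count 2 \
--max-count 10 \
--count 2

Scale rules based on metrics such as average CPU would then be added with az monitor autoscale rule create.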

Virtual machine availability sets

Virtual machine availability sets are another tool to help you build a more
resilient, highly available environment. Availability sets are designed to
ensure that VMs stagger updates and have varied power and network
connectivity, preventing you from losing all your VMs with a single network
or power failure.

Availability sets do this by grouping VMs in two ways: update domain and
fault domain.

 Update domain: The update domain groups VMs that can be rebooted at the
same time. This allows you to apply updates while knowing that only one
update domain grouping will be offline at a time. All of the machines in one
update domain will be updated. An update group going through the update
process is given 30 minutes to recover before maintenance on the next
update domain starts.
 Fault domain: The fault domain groups your VMs by common power source
and network switch. By default, an availability set will split your VMs across up
to three fault domains. This helps protect against a physical power or
networking failure by having VMs in different fault domains (thus being
connected to different power and networking resources).

Best of all, there’s no additional cost for configuring an availability set. You
only pay for the VM instances you create.
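
To make the idea concrete, here's a minimal sketch that creates an availability set and then places a new VM into it. All names are placeholders, and the maximum number of fault domains varies by region.

Azure CLI
az vm availability-set create \
--resource-group my-resource-group \
--name my-availability-set \
--platform-fault-domain-count 3 \
--platform-update-domain-count 5

# Create a VM inside the availability set
az vm create \
--resource-group my-resource-group \
--name my-vm-1 \
--image Ubuntu2204 \
--availability-set my-availability-set \
--admin-username azureuser \
--generate-ssh-keys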

Examples of when to use VMs


Some common examples or use cases for virtual machines include:

 During testing and development. VMs provide a quick and easy way to
create different OS and application configurations. Test and development
personnel can then easily delete the VMs when they no longer need them.
 When running applications in the cloud. The ability to run certain
applications in the public cloud as opposed to creating a traditional
infrastructure to run them can provide substantial economic benefits. For
example, an application might need to handle fluctuations in demand. Shutting
down VMs when you don't need them or quickly starting them up to meet a
sudden increase in demand means you pay only for the resources you use.
 When extending your datacenter to the cloud: An organization can
extend the capabilities of its own on-premises network by creating a virtual
network in Azure and adding VMs to that virtual network. Applications like
SharePoint can then run on an Azure VM instead of running locally. This
arrangement makes it easier or less expensive to deploy than in an on-
premises environment.
 During disaster recovery: As with running certain types of applications in
the cloud and extending an on-premises network to the cloud, you can get
significant cost savings by using an IaaS-based approach to disaster recovery.
If a primary datacenter fails, you can create VMs running on Azure to run your
critical applications and then shut them down when the primary datacenter
becomes operational again.

Move to the cloud with VMs


VMs are also an excellent choice when you move from a physical server to
the cloud (also known as lift and shift). You can create an image of the
physical server and host it within a VM with little or no changes. Just like a
physical on-premises server, you must maintain the VM: you’re responsible
for maintaining the installed OS and software.

VM Resources
When you provision a VM, you’ll also have the chance to pick the resources
that are associated with that VM, including:

 Size (purpose, number of processor cores, and amount of RAM)


 Storage disks (hard disk drives, solid state drives, etc.)
 Networking (virtual network, public IP address, and port configuration)

Exercise - Create an Azure virtual machine
This module requires a sandbox to complete.
In this exercise, you create an Azure virtual machine (VM) and install Nginx,
a popular web server.

You could use the Azure portal, the Azure CLI, Azure PowerShell, or an Azure
Resource Manager (ARM) template.

In this instance, you're going to use the Azure CLI.

Task 1: Create a Linux virtual machine and install Nginx
Use the following Azure CLI commands to create a Linux VM and install
Nginx. After your VM is created, you'll use the Custom Script Extension to
install Nginx. The Custom Script Extension is an easy way to download and
run scripts on your Azure VMs. It's just one of the many ways you can
configure the system after your VM is up and running.

1. From Cloud Shell, run the following az vm create command to create a Linux VM:

Azure CLI
az vm create \
--resource-group "[sandbox resource group name]" \
--name my-vm \
--public-ip-sku Standard \
--image Ubuntu2204 \
--admin-username azureuser \
--generate-ssh-keys

Your VM will take a few moments to come up. You named the VM my-
vm. You use this name to refer to the VM in later steps.

2. Run the following az vm extension set command to configure Nginx on your VM:

Azure CLI
az vm extension set \
--resource-group "[sandbox resource group name]" \
--vm-name my-vm \
--name customScript \
--publisher Microsoft.Azure.Extensions \
--version 2.1 \
--settings
'{"fileUris":["https://raw.githubusercontent.com/MicrosoftDocs/mslearn-
welcome-to-azure/master/configure-nginx.sh"]}' \
--protected-settings '{"commandToExecute": "./configure-nginx.sh"}'
This command uses the Custom Script Extension to run a Bash script on
your VM. The script is stored on GitHub. While the command runs, you
can choose to examine the Bash script from a separate browser tab. To
summarize, the script:

a. Runs apt-get update to download the latest package information from the internet. This step helps ensure that the next command can locate the latest version of the Nginx package.
b. Installs Nginx.
c. Sets the home page, /var/www/html/index.html, to print a welcome
message that includes your VM's host name.

Describe Azure Virtual Desktop


Another type of virtual machine is the Azure Virtual Desktop. Azure Virtual
Desktop is a desktop and application virtualization service that runs on the
cloud. It enables you to use a cloud-hosted version of Windows from any
location. Azure Virtual Desktop works across devices and operating systems,
and works with apps that you can use to access remote desktops or most
modern browsers.

The following video gives you an overview of Azure Virtual Desktop:

Enhance security
Azure Virtual Desktop provides centralized security management for users'
desktops with Microsoft Entra ID. You can enable multifactor authentication
to secure user sign-ins. You can also secure access to data by assigning
granular role-based access controls (RBACs) to users.

With Azure Virtual Desktop, the data and apps are separated from the local
hardware. The actual desktop and apps are running in the cloud, meaning
the risk of confidential data being left on a personal device is reduced.
Additionally, user sessions are isolated in both single and multi-session
environments.

Multi-session Windows 10 or Windows 11 deployment
Azure Virtual Desktop lets you use Windows 10 or Windows 11 Enterprise
multi-session, the only Windows client-based operating system that enables
multiple concurrent users on a single VM. Azure Virtual Desktop also
provides a more consistent experience with broader application support
compared to Windows Server-based operating systems.

Describe Azure containers


While virtual machines are an excellent way to reduce costs versus the
investments that are necessary for physical hardware, they're still limited to
a single operating system per virtual machine. If you want to run multiple
instances of an application on a single host machine, containers are an
excellent choice.

What are containers?


Containers are a virtualization environment. Much like running multiple
virtual machines on a single physical host, you can run multiple containers
on a single physical or virtual host. Unlike virtual machines, you don't
manage the operating system for a container. Virtual machines appear to be
an instance of an operating system that you can connect to and manage.
Containers are lightweight and designed to be created, scaled out, and
stopped dynamically. It's possible to create and deploy virtual machines as
application demand increases, but containers are a lighter weight, more
agile method. Containers are designed to allow you to respond to changes on
demand. With containers, you can quickly restart if there's a crash or
hardware interruption. One of the most popular container engines is Docker,
and Azure supports Docker.

Compare virtual machines to containers


The following video highlights several of the important differences between
virtual machines and containers:

Azure Container Instances

Azure Container Instances offer the fastest and simplest way to run a
container in Azure, without having to manage any virtual machines or adopt
any additional services. Azure Container Instances are a platform as a
service (PaaS) offering. Azure Container Instances allow you to upload your
containers and then the service will run the containers for you.
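
As a hedged example, a single container can be started with one command. The sketch below uses a public Microsoft sample image; the resource group, container name, and DNS label are placeholders, and the DNS label must be unique within the region.

Azure CLI
az container create \
--resource-group my-resource-group \
--name my-container \
--image mcr.microsoft.com/azuredocs/aci-helloworld \
--dns-name-label my-aci-demo \
--ports 80
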
Azure Container Apps

Azure Container Apps are similar in many ways to a container instance. They
allow you to get up and running right away, they remove the container
management piece, and they're a PaaS offering. Container Apps have extra
benefits such as the ability to incorporate load balancing and scaling. These
other functions allow you to be more elastic in your design.

Azure Kubernetes Service

Azure Kubernetes Service (AKS) is a container orchestration service. An orchestration service manages the lifecycle of containers. When you're deploying a fleet of containers, AKS can make fleet management simpler and more efficient.
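
A minimal sketch of standing up a small AKS cluster and fetching credentials for kubectl might look like the following; the names and node count are placeholders, and production clusters typically need more configuration.

Azure CLI
az aks create \
--resource-group my-resource-group \
--name my-aks-cluster \
--node-count 2 \
--generate-ssh-keys

# Merge the cluster credentials into your local kubeconfig
az aks get-credentials \
--resource-group my-resource-group \
--name my-aks-cluster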

Use containers in your solutions

Containers are often used to create solutions by using a microservice architecture. This architecture is where you break solutions into smaller,
independent pieces. For example, you might split a website into a container
hosting your front end, another hosting your back end, and a third for
storage. This split allows you to separate portions of your app into logical
sections that can be maintained, scaled, or updated independently.

Imagine your website back-end has reached capacity but the front end and
storage aren't being stressed. With containers, you could scale the back end
separately to improve performance. If something necessitated such a
change, you could also choose to change the storage service or modify the
front end without impacting any of the other components.

Describe Azure functions


Azure Functions is an event-driven, serverless compute option that doesn’t
require maintaining virtual machines or containers. If you build an app using
VMs or containers, those resources have to be “running” in order for your
app to function. With Azure Functions, an event wakes the function,
alleviating the need to keep resources provisioned when there are no events.

Benefits of Azure Functions


Using Azure Functions is ideal when you're only concerned about the code
running your service and not about the underlying platform or infrastructure.
Functions are commonly used when you need to perform work in response to
an event (often via a REST request), timer, or message from another Azure
service, and when that work can be completed quickly, within seconds or
less.

Functions scale automatically based on demand, so they may be a good choice when demand is variable.

Azure Functions runs your code when it's triggered and automatically
deallocates resources when the function is finished. In this model, you're
only charged for the CPU time used while your function runs.
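
To illustrate that consumption-based model, the following sketch creates a function app on the Consumption plan, where you're billed only while your functions run. The storage account and app names are placeholders and must be globally unique, and the runtime shown is just one option.

Azure CLI
# A function app needs a storage account for its internal state
az storage account create \
--name mystorageacct123 \
--resource-group my-resource-group \
--location eastus \
--sku Standard_LRS

az functionapp create \
--name my-function-app \
--resource-group my-resource-group \
--storage-account mystorageacct123 \
--consumption-plan-location eastus \
--runtime node \
--functions-version 4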

Functions can be either stateless or stateful. When they're stateless (the default), they behave as if they're restarted every time they respond to an
event. When they're stateful (called Durable Functions), a context is passed
through the function to track prior activity.

Functions are a key component of serverless computing. They're also a general compute platform for running any type of code. If the needs of the
developer's app change, you can deploy the project in an environment that
isn't serverless. This flexibility allows you to manage scaling, run on virtual
networks, and even completely isolate the functions.

Describe application hosting options
If you need to host your application on Azure, you might initially turn to a
virtual machine (VM) or containers. Both VMs and containers provide
excellent hosting solutions. VMs give you maximum control of the hosting
environment and allow you to configure it exactly how you want. VMs also
may be the most familiar hosting method if you’re new to the cloud.
Containers, with the ability to isolate and individually manage different
aspects of the hosting solution, can also be a robust and compelling option.

There are other hosting options that you can use with Azure, including Azure
App Service.

Azure App Service


App Service enables you to build and host web apps, background jobs,
mobile back-ends, and RESTful APIs in the programming language of your
choice without managing infrastructure. It offers automatic scaling and high
availability. App Service supports Windows and Linux. It enables automated
deployments from GitHub, Azure DevOps, or any Git repo to support a
continuous deployment model.

Azure App Service is a robust hosting option that you can use to host your
apps in Azure. Azure App Service lets you focus on building and maintaining
your app, and Azure focuses on keeping the environment up and running.

Azure App Service is an HTTP-based service for hosting web applications, REST APIs, and mobile back ends. It supports multiple languages, including .NET, .NET Core, Java, Ruby, Node.js, PHP, and Python. It also supports both Windows and Linux environments.
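
As a rough sketch, hosting a web app in App Service involves creating an App Service plan and then a web app in that plan. The names, SKU, and runtime below are placeholder choices; valid runtime strings depend on your CLI version and can be listed with az webapp list-runtimes.

Azure CLI
az appservice plan create \
--resource-group my-resource-group \
--name my-appservice-plan \
--sku B1 \
--is-linux

az webapp create \
--resource-group my-resource-group \
--plan my-appservice-plan \
--name my-unique-web-app \
--runtime "NODE:20-lts"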

Types of app services

With App Service, you can host most common app service styles like:

 Web apps
 API apps
 WebJobs
 Mobile apps

App Service handles most of the infrastructure decisions you deal with in
hosting web-accessible apps:

 Deployment and management are integrated into the platform.


 Endpoints can be secured.
 Sites can be scaled quickly to handle high traffic loads.
 The built-in load balancing and traffic manager provide high availability.

All of these app styles are hosted in the same infrastructure and share these
benefits. This flexibility makes App Service the ideal choice to host web-
oriented applications.

Web apps

App Service includes full support for hosting web apps by using ASP.NET,
ASP.NET Core, Java, Ruby, Node.js, PHP, or Python. You can choose either
Windows or Linux as the host operating system.

API apps

Much like hosting a website, you can build REST-based web APIs by using
your choice of language and framework. You get full Swagger support and
the ability to package and publish your API in Azure Marketplace. The
produced apps can be consumed from any HTTP- or HTTPS-based client.

WebJobs

You can use the WebJobs feature to run a program (.exe, Java, PHP, Python,
or Node.js) or script (.cmd, .bat, PowerShell, or Bash) in the same context as
a web app, API app, or mobile app. They can be scheduled or run by a
trigger. WebJobs are often used to run background tasks as part of your
application logic.

Mobile apps

Use the Mobile Apps feature of App Service to quickly build a back end for
iOS and Android apps. With just a few actions in the Azure portal, you can:

 Store mobile app data in a cloud-based SQL database.


 Authenticate customers against common social providers, such as MSA,
Google, Twitter, and Facebook.
 Send push notifications.
 Execute custom back-end logic in C# or Node.js.

On the mobile app side, there's SDK support for native iOS and Android,
Xamarin, and React Native apps.

Describe Azure virtual networking


Azure virtual networks and virtual subnets enable Azure resources, such as
VMs, web apps, and databases, to communicate with each other, with users
on the internet, and with your on-premises client computers. You can think of
an Azure network as an extension of your on-premises network with
resources that link other Azure resources.

Azure virtual networks provide the following key networking capabilities:

 Isolation and segmentation


 Internet communications
 Communicate between Azure resources
 Communicate with on-premises resources
 Route network traffic
 Filter network traffic
 Connect virtual networks
Azure virtual networking supports both public and private endpoints to enable communication between external or internal resources and other internal resources.

 Public endpoints have a public IP address and can be accessed from anywhere in the world.
 Private endpoints exist within a virtual network and have a private IP
address from within the address space of that virtual network.

Isolation and segmentation


Azure virtual network allows you to create multiple isolated virtual networks.
When you set up a virtual network, you define a private IP address space by
using either public or private IP address ranges. The IP range only exists
within the virtual network and isn't internet routable. You can divide that IP
address space into subnets and allocate part of the defined address space to
each named subnet.

For name resolution, you can use the name resolution service that's built into
Azure. You also can configure the virtual network to use either an internal or
an external DNS server.
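
Here's a minimal sketch of the isolation and segmentation just described: a virtual network with a private address space, divided into two named subnets. The address ranges and names are placeholders.

Azure CLI
az network vnet create \
--resource-group my-resource-group \
--name my-vnet \
--address-prefixes 10.0.0.0/16 \
--subnet-name frontend \
--subnet-prefixes 10.0.1.0/24

# Add a second subnet for back-end resources
az network vnet subnet create \
--resource-group my-resource-group \
--vnet-name my-vnet \
--name backend \
--address-prefixes 10.0.2.0/24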

Internet communications
You can enable incoming connections from the internet by assigning a public
IP address to an Azure resource, or putting the resource behind a public load
balancer.

Communicate between Azure resources


You'll want to enable Azure resources to communicate securely with each
other. You can do that in one of two ways:

 Virtual networks can connect not only VMs but other Azure resources,
such as the App Service Environment for Power Apps, Azure Kubernetes
Service, and Azure virtual machine scale sets.
 Service endpoints can connect to other Azure resource types, such as
Azure SQL databases and storage accounts. This approach enables you
to link multiple Azure resources to virtual networks to improve security
and provide optimal routing between resources.
Communicate with on-premises resources
Azure virtual networks enable you to link resources together in your on-
premises environment and within your Azure subscription. In effect, you can
create a network that spans both your local and cloud environments. There
are three mechanisms for you to achieve this connectivity:

 Point-to-site virtual private network connections are from a computer outside your organization back into your corporate network. In this case,
the client computer initiates an encrypted VPN connection to connect to
the Azure virtual network.
 Site-to-site virtual private networks link your on-premises VPN device or
gateway to the Azure VPN gateway in a virtual network. In effect, the
devices in Azure can appear as being on the local network. The
connection is encrypted and works over the internet.
 Azure ExpressRoute provides dedicated private connectivity to Azure
that doesn't travel over the internet. ExpressRoute is useful for
environments where you need greater bandwidth and even higher levels
of security.

Route network traffic


By default, Azure routes traffic between subnets on any connected virtual
networks, on-premises networks, and the internet. You also can control
routing and override those settings, as follows:

 Route tables allow you to define rules about how traffic should be
directed. You can create custom route tables that control how packets
are routed between subnets.
 Border Gateway Protocol (BGP) works with Azure VPN gateways, Azure
Route Server, or Azure ExpressRoute to propagate on-premises BGP
routes to Azure virtual networks.

Filter network traffic


Azure virtual networks enable you to filter traffic between subnets by using
the following approaches:

 Network security groups are Azure resources that can contain multiple
inbound and outbound security rules. You can define these rules to allow
or block traffic, based on factors such as source and destination IP
address, port, and protocol.
 Network virtual appliances are specialized VMs that can be compared to
a hardened network appliance. A network virtual appliance carries out a
particular network function, such as running a firewall or performing
wide area network (WAN) optimization.

Connect virtual networks


You can link virtual networks together by using virtual network peering.
Peering allows two virtual networks to connect directly to each other.
Network traffic between peered networks is private, and travels on the
Microsoft backbone network, never entering the public internet. Peering
enables resources in each virtual network to communicate with each other.
These virtual networks can be in separate regions, which allows you to
create a global interconnected network through Azure.

User-defined routes (UDR) allow you to control the routing tables between
subnets within a virtual network or between virtual networks. This allows for
greater control over network traffic flow.
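
Peering is configured in both directions. A hedged sketch, assuming two existing virtual networks named vnet-a and vnet-b in the same resource group:

Azure CLI
az network vnet peering create \
--resource-group my-resource-group \
--name vnet-a-to-vnet-b \
--vnet-name vnet-a \
--remote-vnet vnet-b \
--allow-vnet-access

az network vnet peering create \
--resource-group my-resource-group \
--name vnet-b-to-vnet-a \
--vnet-name vnet-b \
--remote-vnet vnet-a \
--allow-vnet-access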

Exercise - Configure network access
This module requires a sandbox to complete.

In this exercise, you'll configure the access to the virtual machine (VM) you
created earlier in this module.

Important

The Microsoft Learn sandbox should still be running. If the sandbox timed
out, you'll need to redo the previous exercise (Exercise - Create an Azure
virtual machine).

To verify the VM you created previously is still running, use the following
command:

Azure CLI
az vm list

If you receive an empty response [], you need to complete the first exercise
in this module again. If the result lists your current VM and its settings, you
may continue.
Right now, the VM you created and installed Nginx on isn't accessible from
the internet. You'll create a network security group that changes that by
allowing inbound HTTP access on port 80.

Task 1: Access your web server


In this procedure, you get the IP address for your VM and attempt to access
your web server's home page.

1. Run the following az vm list-ip-addresses command to get your VM's IP address and store the result as a Bash variable:

Azure CLI
IPADDRESS="$(az vm list-ip-addresses \
--resource-group "[sandbox resource group name]" \
--name my-vm \
--query "[].virtualMachine.network.publicIpAddresses[*].ipAddress" \
--output tsv)"

2. Run the following curl command to download the home page:

Bash
curl --connect-timeout 5 http://$IPADDRESS

The --connect-timeout argument specifies to allow up to five seconds for the connection to occur. After five seconds, you see an error message that states that the connection timed out:

Output
curl: (28) Connection timed out after 5001 milliseconds

This message means that the VM was not accessible within the timeout
period.

3. As an optional step, try to access the web server from a browser:

a. Run the following to print your VM's IP address to the console:

Bash
echo $IPADDRESS

You see an IP address, for example, 23.102.42.235.

b. Copy the IP address that you see to the clipboard.


c. Open a new browser tab and go to your web server. After a few moments, you see that the connection isn't happening. If you wait, the browser eventually reports that the connection timed out.

d. Keep this browser tab open for later.

Task 2: List the current network security group rules
Your web server wasn't accessible. To find out why, let's examine your current network security group (NSG) rules.

1. Run the following az network nsg list command to list the network
security groups that are associated with your VM:

Azure CLI
az network nsg list \
--resource-group "[sandbox resource group name]" \
--query '[].name' \
--output tsv

You see this:

Output
my-vmNSG

Every VM on Azure is associated with at least one network security group. In this case, Azure created an NSG for you called my-vmNSG.

2. Run the following az network nsg rule list command to list the rules
associated with the NSG named my-vmNSG:

Azure CLI
az network nsg rule list \
--resource-group "[sandbox resource group name]" \
--nsg-name my-vmNSG

You see a large block of text in JSON format in the output. In the next
step, you'll run a similar command that makes this output easier to
read.

3. Run the az network nsg rule list command a second time. This time,
use the --query argument to retrieve only the name, priority, affected
ports, and access (Allow or Deny) for each rule. The --output argument
formats the output as a table so that it's easy to read.

Azure CLI
az network nsg rule list \
--resource-group "[sandbox resource group name]" \
--nsg-name my-vmNSG \
--query '[].{Name:name, Priority:priority, Port:destinationPortRange,
Access:access}' \
--output table

You see this:

Output
Name Priority Port Access
----------------- ---------- ------ --------
default-allow-ssh 1000 22 Allow

You see the default rule, default-allow-ssh. This rule allows inbound
connections over port 22 (SSH). SSH (Secure Shell) is a protocol that's
used on Linux to allow administrators to access the system remotely.
The priority of this rule is 1000. Rules are processed in priority order,
with lower numbers processed before higher numbers.

By default, a Linux VM's NSG allows network access only on port 22. This
enables administrators to access the system. You also need to allow inbound
connections on port 80, which allows access over HTTP.
Task 3: Create the network security rule
Here, you create a network security rule that allows inbound access on port
80 (HTTP).

1. Run the following az network nsg rule create command to create a rule
called allow-http that allows inbound access on port 80:

Azure CLI
az network nsg rule create \
--resource-group "[sandbox resource group name]" \
--nsg-name my-vmNSG \
--name allow-http \
--protocol tcp \
--priority 100 \
--destination-port-range 80 \
--access Allow

For learning purposes, here you set the priority to 100. In this case, the
priority doesn't matter. You would need to consider the priority if you
had overlapping port ranges.

2. To verify the configuration, run az network nsg rule list to see the
updated list of rules:

Azure CLI
az network nsg rule list \
--resource-group "[sandbox resource group name]" \
--nsg-name my-vmNSG \
--query '[].{Name:name, Priority:priority, Port:destinationPortRange,
Access:access}' \
--output table

You see both the default-allow-ssh rule and your new rule, allow-http:

Output
Name Priority Port Access
----------------- ---------- ------ --------
default-allow-ssh 1000 22 Allow
allow-http 100 80 Allow

Task 4: Access your web server again


Now that you've configured network access to port 80, let's try to access the
web server a second time.
Note

After you update the NSG, it may take a few moments before the updated
rules propagate. Retry the next step, with pauses between attempts, until
you get the desired results.

1. Run the same curl command that you ran earlier:

Bash
curl --connect-timeout 5 http://$IPADDRESS

You see this:

HTML
<html><body><h2>Welcome to Azure! My name is my-vm.</h2></body></html>

2. As an optional step, refresh your browser tab that points to your web server. You see the same welcome message in the browser.

Nice work. In practice, you can create a standalone network security group
that includes the inbound and outbound network access rules you need. If
you have multiple VMs that serve the same purpose, you can assign that
NSG to each VM at the time you create it. This technique enables you to
control network access to multiple VMs under a single, central set of rules.
Clean up
The sandbox automatically cleans up your resources when you're finished
with this module.

When you're working in your own subscription, it's a good idea at the end of
a project to identify whether you still need the resources you created.
Resources that you leave running can cost you money. You can delete
resources individually or delete the resource group to delete the entire set of
resources.

Describe Azure virtual private networks
A virtual private network (VPN) uses an encrypted tunnel within another
network. VPNs are typically deployed to connect two or more trusted private
networks to one another over an untrusted network (typically the public
internet). Traffic is encrypted while traveling over the untrusted network to
prevent eavesdropping or other attacks. VPNs can enable networks to safely
and securely share sensitive information.

VPN gateways
A VPN gateway is a type of virtual network gateway. Azure VPN Gateway
instances are deployed in a dedicated subnet of the virtual network and
enable the following connectivity:

 Connect on-premises datacenters to virtual networks through a site-to-site connection.
 Connect individual devices to virtual networks through a point-to-site
connection.
 Connect virtual networks to other virtual networks through a network-to-
network connection.

All data transfer is encrypted inside a private tunnel as it crosses the internet. You can deploy only one VPN gateway in each virtual network.
However, you can use one gateway to connect to multiple locations, which
includes other virtual networks or on-premises datacenters.

When setting up a VPN gateway, you must specify the type of VPN - either
policy-based or route-based. The primary distinction between these two
types is how they determine which traffic needs encryption. In Azure,
regardless of the VPN type, the method of authentication employed is a pre-
shared key.

 Policy-based VPN gateways specify statically the IP address of packets that should be encrypted through each tunnel. This type of device evaluates every data packet against those sets of IP addresses to choose the tunnel through which that packet will be sent.
 In route-based gateways, IPSec tunnels are modeled as a network interface or
virtual tunnel interface. IP routing (either static routes or dynamic routing
protocols) decides which one of these tunnel interfaces to use when sending
each packet. Route-based VPNs are the preferred connection method for on-
premises devices. They're more resilient to topology changes such as the
creation of new subnets.

Use a route-based VPN gateway if you need any of the following types of connectivity (a creation sketch follows this list):

 Connections between virtual networks


 Point-to-site connections
 Multisite connections
 Coexistence with an Azure ExpressRoute gateway
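
The sketch below shows what creating a route-based VPN gateway can look like, assuming a virtual network named my-vnet already exists. The names, SKU, and address range are placeholders, and gateway deployment can take a while to complete.

Azure CLI
# The gateway needs a subnet named GatewaySubnet and a public IP address
az network vnet subnet create \
--resource-group my-resource-group \
--vnet-name my-vnet \
--name GatewaySubnet \
--address-prefixes 10.0.255.0/27

az network public-ip create \
--resource-group my-resource-group \
--name my-gateway-ip \
--sku Standard

az network vnet-gateway create \
--resource-group my-resource-group \
--name my-vpn-gateway \
--vnet my-vnet \
--public-ip-address my-gateway-ip \
--gateway-type Vpn \
--vpn-type RouteBased \
--sku VpnGw1 \
--no-wait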

High-availability scenarios
If you’re configuring a VPN to keep your information safe, you also want to
be sure that it’s a highly available and fault tolerant VPN configuration. There
are a few ways to maximize the resiliency of your VPN gateway.

Active/standby

By default, VPN gateways are deployed as two instances in an active/standby configuration, even if you only see one VPN gateway resource in Azure.
When planned maintenance or unplanned disruption affects the active
instance, the standby instance automatically assumes responsibility for
connections without any user intervention. Connections are interrupted
during this failover, but they're typically restored within a few seconds for
planned maintenance and within 90 seconds for unplanned disruptions.

Active/active

With the introduction of support for the BGP routing protocol, you can also
deploy VPN gateways in an active/active configuration. In this configuration,
you assign a unique public IP address to each instance. You then create
separate tunnels from the on-premises device to each IP address. You can
extend the high availability by deploying an additional VPN device on-
premises.

ExpressRoute failover

Another high-availability option is to configure a VPN gateway as a secure failover path for ExpressRoute connections. ExpressRoute circuits have
resiliency built in. However, they aren't immune to physical problems that
affect the cables delivering connectivity or outages that affect the complete
ExpressRoute location. In high-availability scenarios, where there's risk
associated with an outage of an ExpressRoute circuit, you can also provision
a VPN gateway that uses the internet as an alternative method of
connectivity. In this way, you can ensure there's always a connection to the
virtual networks.

Zone-redundant gateways

In regions that support availability zones, VPN gateways and ExpressRoute gateways can be deployed in a zone-redundant configuration. This
configuration brings resiliency, scalability, and higher availability to virtual
network gateways. Deploying gateways in Azure availability zones physically
and logically separates gateways within a region while protecting your on-
premises network connectivity to Azure from zone-level failures. These
gateways require different gateway stock keeping units (SKUs) and use
Standard public IP addresses instead of Basic public IP addresses.

Describe Azure ExpressRoute


Azure ExpressRoute lets you extend your on-premises networks into the
Microsoft cloud over a private connection, with the help of a connectivity
provider. This connection is called an ExpressRoute Circuit. With
ExpressRoute, you can establish connections to Microsoft cloud services,
such as Microsoft Azure and Microsoft 365. This allows you to connect
offices, datacenters, or other facilities to the Microsoft cloud. Each location
would have its own ExpressRoute circuit.

Connectivity can be from an any-to-any (IP VPN) network, a point-to-point Ethernet network, or a virtual cross-connection through a connectivity
provider at a colocation facility. ExpressRoute connections don't go over the
public Internet. This allows ExpressRoute connections to offer more
reliability, faster speeds, consistent latencies, and higher security than
typical connections over the Internet.
Features and benefits of ExpressRoute
There are several benefits to using ExpressRoute as the connection service
between Azure and on-premises networks.

 Connectivity to Microsoft cloud services across all regions in the geopolitical region.
 Global connectivity to Microsoft services across all regions with ExpressRoute Global Reach.
 Dynamic routing between your network and Microsoft via Border Gateway
Protocol (BGP).
 Built-in redundancy in every peering location for higher reliability.

Connectivity to Microsoft cloud services

ExpressRoute enables direct access to the following services in all regions:

 Microsoft Office 365


 Microsoft Dynamics 365
 Azure compute services, such as Azure Virtual Machines
 Azure cloud services, such as Azure Cosmos DB and Azure Storage

Global connectivity

You can enable ExpressRoute Global Reach to exchange data across your on-
premises sites by connecting your ExpressRoute circuits. For example, say
you had an office in Asia and a datacenter in Europe, both with ExpressRoute
circuits connecting them to the Microsoft network. You could use
ExpressRoute Global Reach to connect those two facilities, allowing them to
communicate without transferring data over the public internet.

Dynamic routing

ExpressRoute uses BGP to exchange routes between on-premises networks and resources running in Azure. This protocol enables
dynamic routing between your on-premises network and services running in
the Microsoft cloud.

Built-in redundancy

Each connectivity provider uses redundant devices to ensure that connections established with Microsoft are highly available. You can
configure multiple circuits to complement this feature.
ExpressRoute connectivity models
ExpressRoute supports four models that you can use to connect your on-
premises network to the Microsoft cloud:

 CloudExchange colocation
 Point-to-point Ethernet connection
 Any-to-any connection
 Directly from ExpressRoute sites

Co-location at a cloud exchange

Co-location refers to your datacenter, office, or other facility being physically co-located at a cloud exchange, such as an ISP. If your facility is co-located
at a cloud exchange, you can request a virtual cross-connect to the Microsoft
cloud.

Point-to-point Ethernet connection

Point-to-point Ethernet connection refers to using a point-to-point connection to connect your facility to the Microsoft cloud.

Any-to-any networks

With any-to-any connectivity, you can integrate your wide area network
(WAN) with Azure by providing connections to your offices and datacenters.
Azure integrates with your WAN connection to provide a connection like you
would have between your datacenter and any branch offices.

Directly from ExpressRoute sites

You can connect directly into Microsoft's global network at peering locations strategically distributed across the world. ExpressRoute Direct provides dual 100-Gbps or 10-Gbps connectivity, which supports Active/Active connectivity at scale.

Security considerations
With ExpressRoute, your data doesn't travel over the public internet, so it's
not exposed to the potential risks associated with internet communications.
ExpressRoute is a private connection from your on-premises infrastructure to
your Azure infrastructure. Even if you have an ExpressRoute connection, DNS
queries, certificate revocation list checking, and Azure Content Delivery
Network requests are still sent over the public internet.

Describe Azure DNS


Azure DNS is a hosting service for DNS domains that provides name
resolution by using Microsoft Azure infrastructure. By hosting your domains
in Azure, you can manage your DNS records using the same credentials,
APIs, tools, and billing as your other Azure services.

Benefits of Azure DNS


Azure DNS leverages the scope and scale of Microsoft Azure to provide
numerous benefits, including:

 Reliability and performance


 Security
 Ease of Use
 Customizable virtual networks
 Alias records

Reliability and performance

DNS domains in Azure DNS are hosted on Azure's global network of DNS
name servers, providing resiliency and high availability. Azure DNS uses
anycast networking, so each DNS query is answered by the closest available
DNS server to provide fast performance and high availability for your
domain.

Security

Azure DNS is based on Azure Resource Manager, which provides features such as:

 Azure role-based access control (Azure RBAC) to control who has access to
specific actions for your organization.
 Activity logs to monitor how a user in your organization modified a resource or
to find an error when troubleshooting.
 Resource locking to lock a subscription, resource group, or resource. Locking
prevents other users in your organization from accidentally deleting or
modifying critical resources.
Ease of use

Azure DNS can manage DNS records for your Azure services and provide
DNS for your external resources as well. Azure DNS is integrated in the Azure
portal and uses the same credentials, support contract, and billing as your
other Azure services.

Because Azure DNS is running on Azure, it means you can manage your
domains and records with the Azure portal, Azure PowerShell cmdlets, and
the cross-platform Azure CLI. Applications that require automated DNS
management can integrate with the service by using the REST API and SDKs.
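
For example, creating a public DNS zone and adding an A record takes only a couple of commands. The zone name, record name, and IP address below are placeholders, and the zone's name servers still need to be configured at your domain registrar.

Azure CLI
az network dns zone create \
--resource-group my-resource-group \
--name contoso.com

az network dns record-set a add-record \
--resource-group my-resource-group \
--zone-name contoso.com \
--record-set-name www \
--ipv4-address 203.0.113.10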

Customizable virtual networks with private domains

Azure DNS also supports private DNS domains. This feature allows you to use
your own custom domain names in your private virtual networks, rather than
being stuck with the Azure-provided names.

Alias records

Azure DNS also supports alias record sets. You can use an alias record set to
refer to an Azure resource, such as an Azure public IP address, an Azure
Traffic Manager profile, or an Azure Content Delivery Network (CDN)
endpoint. If the IP address of the underlying resource changes, the alias
record set seamlessly updates itself during DNS resolution. The alias record
set points to the service instance, and the service instance is associated with
an IP address.

Describe Azure storage services


This module introduces you to storage in Azure, including things such as
different types of storage and how a distributed infrastructure can make your
data more resilient.

Introduction
In this module, you’ll be introduced to the Azure storage services. You’ll
learn about the Azure Storage Account and how that relates to the different
storage services that are available. You’ll also learn about blob storage tiers,
data redundancy options, and ways to move data or even entire
infrastructures to Azure.
Learning objectives
After completing this module, you’ll be able to:

 Compare Azure storage services.


 Describe storage tiers.
 Describe redundancy options.
 Describe storage account options and storage types.
 Identify options for moving files, including AzCopy, Azure Storage
Explorer, and Azure File Sync.
 Describe migration options, including Azure Migrate and Azure Data
Box.

Describe Azure storage accounts


The following video introduces the different services that are available with Azure Storage.

A storage account provides a unique namespace for your Azure Storage data that's accessible
from anywhere in the world over HTTP or HTTPS. Data in this account is secure, highly
available, durable, and massively scalable.

When you create your storage account, you’ll start by picking the storage account type. The type
of account determines the storage services and redundancy options and has an impact on the use
cases. Below is a list of redundancy options that will be covered later in this module:

 Locally redundant storage (LRS)


 Geo-redundant storage (GRS)
 Read-access geo-redundant storage (RA-GRS)
 Zone-redundant storage (ZRS)
 Geo-zone-redundant storage (GZRS)
 Read-access geo-zone-redundant storage (RA-GZRS)

Type: Standard general-purpose v2
Supported services: Blob Storage (including Data Lake Storage), Queue Storage, Table Storage, and Azure Files
Redundancy options: LRS, GRS, RA-GRS, ZRS, GZRS, RA-GZRS
Usage: Standard storage account type for blobs, file shares, queues, and tables. Recommended for most scenarios using Azure Storage. If you want support for network file system (NFS) in Azure Files, use the premium file shares account type.

Type: Premium block blobs
Supported services: Blob Storage (including Data Lake Storage)
Redundancy options: LRS, ZRS
Usage: Premium storage account type for block blobs and append blobs. Recommended for scenarios with high transaction rates or that use smaller objects or require consistently low storage latency.

Type: Premium file shares
Supported services: Azure Files
Redundancy options: LRS, ZRS
Usage: Premium storage account type for file shares only. Recommended for enterprise or high-performance scale applications. Use this account type if you want a storage account that supports both Server Message Block (SMB) and NFS file shares.

Type: Premium page blobs
Supported services: Page blobs only
Redundancy options: LRS
Usage: Premium storage account type for page blobs only.

Storage account endpoints


One of the benefits of using an Azure Storage Account is having a unique namespace in Azure
for your data. In order to do this, every storage account in Azure must have a unique-in-Azure
account name. The combination of the account name and the Azure Storage service endpoint
forms the endpoints for your storage account.

When naming your storage account, keep these rules in mind:

 Storage account names must be between 3 and 24 characters in length and may contain
numbers and lowercase letters only.
 Your storage account name must be unique within Azure. No two storage accounts can
have the same name. This supports the ability to have a unique, accessible namespace in
Azure.

The following table shows the endpoint format for Azure Storage services.

Storage service Endpoint


Blob Storage https://<storage-account-name>.blob.core.windows.net
Data Lake Storage Gen2 https://<storage-account-name>.dfs.core.windows.net
Azure Files https://<storage-account-name>.file.core.windows.net
Queue Storage https://<storage-account-name>.queue.core.windows.net
Table Storage https://<storage-account-name>.table.core.windows.net
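
Putting the naming rules and endpoints together, here's a hedged sketch that creates a general-purpose v2 account and then lists the endpoints Azure generates for it. The account name is a placeholder and must be globally unique.

Azure CLI
az storage account create \
--name mystorageacct123 \
--resource-group my-resource-group \
--location eastus \
--kind StorageV2 \
--sku Standard_LRS

# Show the service endpoints that include the account name
az storage account show \
--name mystorageacct123 \
--resource-group my-resource-group \
--query primaryEndpoints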

Describe Azure storage redundancy
Azure Storage always stores multiple copies of your data so that it's
protected from planned and unplanned events such as transient hardware
failures, network or power outages, and natural disasters. Redundancy
ensures that your storage account meets its availability and durability
targets even in the face of failures.

When deciding which redundancy option is best for your scenario, consider
the tradeoffs between lower costs and higher availability. The factors that
help determine which redundancy option you should choose include:

 How your data is replicated in the primary region.


 Whether your data is replicated to a second region that is geographically
distant to the primary region, to protect against regional disasters.
 Whether your application requires read access to the replicated data in the
secondary region if the primary region becomes unavailable.

Redundancy in the primary region


Data in an Azure Storage account is always replicated three times in the
primary region. Azure Storage offers two options for how your data is
replicated in the primary region, locally redundant storage (LRS) and zone-
redundant storage (ZRS).

Locally redundant storage

Locally redundant storage (LRS) replicates your data three times within a
single data center in the primary region. LRS provides at least 11 nines of
durability (99.999999999%) of objects over a given year.

LRS is the lowest-cost redundancy option and offers the least durability
compared to other options. LRS protects your data against server rack and
drive failures. However, if a disaster such as fire or flooding occurs within the
data center, all replicas of a storage account using LRS may be lost or
unrecoverable. To mitigate this risk, Microsoft recommends using zone-
redundant storage (ZRS), geo-redundant storage (GRS), or geo-zone-
redundant storage (GZRS).

Zone-redundant storage

For availability zone-enabled regions, zone-redundant storage (ZRS) replicates your Azure Storage data synchronously across three Azure
availability zones in the primary region. ZRS offers durability for Azure
Storage data objects of at least 12 nines (99.9999999999%) over a given
year.

With ZRS, your data is still accessible for both read and write operations
even if a zone becomes unavailable. No remounting of Azure file shares from
the connected clients is required. If a zone becomes unavailable, Azure
undertakes networking updates, such as DNS repointing. These updates may
affect your application if you access data before the updates have
completed.

Microsoft recommends using ZRS in the primary region for scenarios that
require high availability. ZRS is also recommended for restricting replication
of data within a country or region to meet data governance requirements.

Redundancy in a secondary region


For applications requiring high durability, you can choose to additionally copy
the data in your storage account to a secondary region that is hundreds of
miles away from the primary region. If the data in your storage account is
copied to a secondary region, then your data is durable even in the event of
a catastrophic failure that prevents the data in the primary region from being
recovered.

When you create a storage account, you select the primary region for the
account. The paired secondary region is based on Azure Region Pairs, and
can't be changed.

Azure Storage offers two options for copying your data to a secondary
region: geo-redundant storage (GRS) and geo-zone-redundant storage
(GZRS). GRS is similar to running LRS in two regions, and GZRS is similar to
running ZRS in the primary region and LRS in the secondary region.

By default, data in the secondary region isn't available for read or write
access unless there's a failover to the secondary region. If the primary region
becomes unavailable, you can choose to fail over to the secondary region.
After the failover has completed, the secondary region becomes the primary
region, and you can again read and write data.

Important

Because data is replicated to the secondary region asynchronously, a failure that affects the primary region may result in data loss if the primary region
can't be recovered. The interval between the most recent writes to the
primary region and the last write to the secondary region is known as the
recovery point objective (RPO). The RPO indicates the point in time to which
data can be recovered. Azure Storage typically has an RPO of less than 15
minutes, although there's currently no SLA on how long it takes to replicate
data to the secondary region.

Geo-redundant storage

GRS copies your data synchronously three times within a single physical
location in the primary region using LRS. It then copies your data
asynchronously to a single physical location in the secondary region (the
region pair) using LRS. GRS offers durability for Azure Storage data objects of
at least 16 nines (99.99999999999999%) over a given year.
Geo-zone-redundant storage

GZRS combines the high availability provided by redundancy across availability zones with protection from regional outages provided by geo-
replication. Data in a GZRS storage account is copied across three Azure
availability zones in the primary region (similar to ZRS) and is also replicated
to a secondary geographic region, using LRS, for protection from regional
disasters. Microsoft recommends using GZRS for applications requiring
maximum consistency, durability, and availability, excellent performance,
and resilience for disaster recovery.
GZRS is designed to provide at least 16 nines (99.99999999999999%) of
durability of objects over a given year.

Read access to data in the secondary region


Geo-redundant storage (with GRS or GZRS) replicates your data to another
physical location in the secondary region to protect against regional outages.
However, that data is available to be read only if the customer or Microsoft
initiates a failover from the primary to secondary region. However, if you
enable read access to the secondary region, your data is always available,
even when the primary region is running optimally. For read access to the
secondary region, enable read-access geo-redundant storage (RA-GRS) or
read-access geo-zone-redundant storage (RA-GZRS).
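
As an illustrative sketch, the redundancy option is expressed as the storage account's SKU, which you choose at creation time and can change later. Some conversions (for example, into or out of zone-redundant options) involve a migration process, so treat this as a simplified example with placeholder names.

Azure CLI
# Enable read-access geo-zone-redundant storage on an existing account
az storage account update \
--name mystorageacct123 \
--resource-group my-resource-group \
--sku Standard_RAGZRS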

Describe Azure storage services


The Azure Storage platform includes the following data services:

 Azure Blobs: A massively scalable object store for text and binary data. Also
includes support for big data analytics through Data Lake Storage Gen2.
 Azure Files: Managed file shares for cloud or on-premises deployments.
 Azure Queues: A messaging store for reliable messaging between application
components.
 Azure Disks: Block-level storage volumes for Azure VMs.
 Azure Tables: NoSQL table option for structured, non-relational data.

Benefits of Azure Storage


Azure Storage services offer the following benefits for application developers
and IT professionals:

 Durable and highly available. Redundancy ensures that your data is safe if
transient hardware failures occur. You can also opt to replicate data across
data centers or geographical regions for additional protection from local
catastrophes or natural disasters. Data replicated in this way remains highly
available if an unexpected outage occurs.
 Secure. All data written to an Azure storage account is encrypted by the
service. Azure Storage provides you with fine-grained control over who has
access to your data.
 Scalable. Azure Storage is designed to be massively scalable to meet the data
storage and performance needs of today's applications.
 Managed. Azure handles hardware maintenance, updates, and critical issues
for you.
 Accessible. Data in Azure Storage is accessible from anywhere in the world
over HTTP or HTTPS. Microsoft provides client libraries for Azure Storage in a
variety of languages, including .NET, Java, Node.js, Python, PHP, Ruby, Go, and
others, as well as a mature REST API. Azure Storage supports scripting in Azure
PowerShell or Azure CLI. And the Azure portal and Azure Storage Explorer offer
easy visual solutions for working with your data.

Azure Blobs
Azure Blob storage is an object storage solution for the cloud. It can store
massive amounts of data, such as text or binary data. Azure Blob storage is
unstructured, meaning that there are no restrictions on the kinds of data it
can hold. Blob storage can manage thousands of simultaneous uploads,
massive amounts of video data, constantly growing log files, and can be
reached from anywhere with an internet connection.

Blobs aren't limited to common file formats. A blob could contain gigabytes
of binary data streamed from a scientific instrument, an encrypted message
for another application, or data in a custom format for an app you're
developing. One advantage of blob storage over disk storage is that it
doesn't require developers to think about or manage disks. Data is uploaded
as blobs, and Azure takes care of the physical storage needs.

Blob storage is ideal for:

 Serving images or documents directly to a browser.


 Storing files for distributed access.
 Streaming video and audio.
 Storing data for backup and restore, disaster recovery, and archiving.
 Storing data for analysis by an on-premises or Azure-hosted service.

Accessing blob storage

Objects in blob storage can be accessed from anywhere in the world via
HTTP or HTTPS. Users or client applications can access blobs via URLs, the
Azure Storage REST API, Azure PowerShell, Azure CLI, or an Azure Storage
client library. The storage client libraries are available for multiple
languages, including .NET, Java, Node.js, Python, PHP, and Ruby.

Blob storage tiers

Data stored in the cloud can grow at an exponential pace. To manage costs
for your expanding storage needs, it's helpful to organize your data based on
attributes like frequency of access and planned retention period. Data stored
in the cloud can be handled differently based on how it's generated,
processed, and accessed over its lifetime. Some data is actively accessed
and modified throughout its lifetime. Some data is accessed frequently early
in its lifetime, with access dropping drastically as the data ages. Some data
remains idle in the cloud and is rarely, if ever, accessed after it's stored. To
accommodate these different access needs, Azure provides several access
tiers, which you can use to balance your storage costs with your access
needs.

Azure Storage offers different access tiers for your blob storage, helping you
store object data in the most cost-effective manner. The available access
tiers include:

 Hot access tier: Optimized for storing data that is accessed frequently (for
example, images for your website).
 Cool access tier: Optimized for data that is infrequently accessed and stored
for at least 30 days (for example, invoices for your customers).
 Cold access tier: Optimized for storing data that is infrequently accessed and
stored for at least 90 days.
 Archive access tier: Appropriate for data that is rarely accessed and stored
for at least 180 days, with flexible latency requirements (for example, long-
term backups).

The following considerations apply to the different access tiers:

 Hot and cool access tiers can be set at the account level. The cold and archive
access tiers aren't available at the account level.
 Hot, cool, cold, and archive tiers can be set at the blob level, during or after upload (see the CLI sketch after this list).
 Data in the cool and cold access tiers can tolerate slightly lower availability,
but still requires high durability, retrieval latency, and throughput
characteristics similar to hot data. For cool and cold data, a lower availability
service-level agreement (SLA) and higher access costs compared to hot data
are acceptable trade-offs for lower storage costs.
 Archive storage stores data offline and offers the lowest storage costs, but also
the highest costs to rehydrate and access data.
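As a hedged example of working with access tiers, the following Azure CLI sketch sets the account-level default tier and then moves a single blob to the archive tier. The account, resource group, container, and blob names are placeholders.

# Set the account-level default access tier to Cool (placeholder names)
az storage account update \
  --name mystorageaccount \
  --resource-group learn-rg \
  --access-tier Cool

# Archive one blob; it must be rehydrated to hot or cool before it can be read again
az storage blob set-tier \
  --account-name mystorageaccount \
  --container-name mycontainer \
  --name backup-2024.tar.gz \
  --tier Archive \
  --auth-mode login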

Azure Files
Azure File storage offers fully managed file shares in the cloud that are
accessible via the industry standard Server Message Block (SMB) or Network
File System (NFS) protocols. Azure Files file shares can be mounted
concurrently by cloud or on-premises deployments. SMB Azure file shares are
accessible from Windows, Linux, and macOS clients. NFS Azure Files shares
are accessible from Linux or macOS clients. Additionally, SMB Azure file
shares can be cached on Windows Servers with Azure File Sync for fast
access near where the data is being used.

Azure Files key benefits:

 Shared access: Azure file shares support the industry standard SMB and NFS
protocols, meaning you can seamlessly replace your on-premises file shares
with Azure file shares without worrying about application compatibility.
 Fully managed: Azure file shares can be created without the need to manage
hardware or an OS. This means you don't have to deal with patching the server
OS with critical security upgrades or replacing faulty hard disks.
 Scripting and tooling: PowerShell cmdlets and Azure CLI can be used to create, mount, and manage Azure file shares as part of the administration of Azure applications (see the CLI sketch after this list). You can also create and manage Azure file shares using the Azure portal and Azure Storage Explorer.
 Resiliency: Azure Files has been built from the ground up to always be
available. Replacing on-premises file shares with Azure Files means you don't
have to wake up in the middle of the night to deal with local power outages or
network issues.
 Familiar programmability: Applications running in Azure can access data in
the share via file system I/O APIs. Developers can therefore use their existing
code and skills to migrate existing applications. In addition to System IO APIs,
you can use Azure Storage Client Libraries or the Azure Storage REST API.
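To illustrate the scripting and tooling point above, here is a short Azure CLI sketch that creates an SMB file share in an existing storage account. The resource group, storage account, and share names are placeholders.

# Create a 100-GiB SMB file share in an existing storage account (placeholder names)
az storage share-rm create \
  --resource-group learn-rg \
  --storage-account mystorageaccount \
  --name myfileshare \
  --quota 100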

Azure Queues
Azure Queue storage is a service for storing large numbers of messages.
Once stored, you can access the messages from anywhere in the world via
authenticated calls using HTTP or HTTPS. A queue can contain as many
messages as your storage account has room for (potentially millions). Each
individual message can be up to 64 KB in size. Queues are commonly used to
create a backlog of work to process asynchronously.

Queue storage can be combined with compute functions like Azure Functions
to take an action when a message is received. For example, you want to
perform an action after a customer uploads a form to your website. You
could have the submit button on the website trigger a message to the Queue
storage. Then, you could use Azure Functions to trigger an action once the
message was received.
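To make the queue-plus-Functions scenario above a bit more concrete, here is a hedged Azure CLI sketch that creates a queue and enqueues a message; an Azure Function with a queue trigger could then pick the message up. The account name, queue name, and message content are placeholders.

# Create a queue and enqueue a message (placeholder names and content)
az storage queue create \
  --account-name mystorageaccount \
  --name orders \
  --auth-mode login

az storage message put \
  --account-name mystorageaccount \
  --queue-name orders \
  --content "form-submission-12345" \
  --auth-mode login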

Azure Disks
Azure Disk storage, or Azure managed disks, are block-level storage volumes
managed by Azure for use with Azure VMs. Conceptually, they’re the same
as a physical disk, but they’re virtualized – offering greater resiliency and
availability than a physical disk. With managed disks, all you have to do is
provision the disk, and Azure will take care of the rest.
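As a small, hedged sketch of provisioning a managed disk and letting Azure handle the rest, the following Azure CLI commands create a disk and attach it to an existing VM. The resource group, disk, and VM names are placeholders.

# Create a 128-GiB managed disk, then attach it to an existing VM (placeholder names)
az disk create \
  --resource-group learn-rg \
  --name datadisk01 \
  --size-gb 128 \
  --sku Premium_LRS

az vm disk attach \
  --resource-group learn-rg \
  --vm-name myvm \
  --name datadisk01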

Azure Tables
Azure Table storage stores large amounts of structured data. Azure tables
are a NoSQL datastore that accepts authenticated calls from inside and
outside the Azure cloud. This enables you to use Azure tables to build your
hybrid or multi-cloud solution and have your data always available. Azure
tables are ideal for storing structured, non-relational data.
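As a brief, hedged illustration, the following Azure CLI sketch creates a table and inserts one entity keyed by PartitionKey and RowKey. The account, table, and property names are placeholders.

# Create a table and insert a single entity (placeholder names and values)
az storage table create \
  --account-name mystorageaccount \
  --name Customers \
  --auth-mode login

az storage entity insert \
  --account-name mystorageaccount \
  --table-name Customers \
  --entity PartitionKey=retail RowKey=001 Name=Contoso \
  --auth-mode login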

Exercise - Create a storage blob


This module requires a sandbox to complete.

Create a storage account


In this task, you'll create a new storage account.

1. Sign in to the Azure portal at https://portal.azure.com

2. Select Create a resource.

3. Under Categories, select Storage.

4. Under Storage account, select Create.

5. On the Basics tab of the Create a storage account blade, fill in the
following information. Leave the defaults for everything else.

 Subscription: Concierge Subscription
 Resource group: Select the resource group that starts with learn
 Storage account name: Create a unique storage account name
 Region: Leave default
 Performance: Standard
 Redundancy: Locally redundant storage (LRS)
6. On the Advanced tab of the Create a storage account blade, fill in the
following information. Leave the defaults for everything else.

 Allow enabling anonymous access on individual containers: Checked

7. Select Review to review your storage account settings and allow Azure
to validate the configuration.

8. Once validated, select Create. Wait for the notification that the account
was successfully created.

9. Select Go to resource.
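If you prefer scripting over the portal, the same storage account can be created with the Azure CLI. This is a hedged sketch rather than part of the exercise; the account name is a placeholder that must be globally unique, and in the sandbox you would use the resource group that starts with learn.

# CLI equivalent of the portal steps above (placeholder names; adjust the region as needed)
az storage account create \
  --name mystorageaccount \
  --resource-group learn-rg \
  --location eastus \
  --sku Standard_LRS \
  --kind StorageV2 \
  --allow-blob-public-access true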

Work with blob storage


In this section, you'll create a Blob container and upload a picture.

1. Under Data storage, select Containers.


2. Select + Container and complete the information.

 Name: Enter a name for the container
 Anonymous access level: Private (no anonymous access)

3. Select Create.

Note: Step 4 will need an image. If you want to upload an image you
already have on your computer, continue to Step 4. Otherwise, open a
new browser window and search Bing for an image of a flower. Save the
image to your computer.

4. Back in the Azure portal, select the container you created, then select
Upload.

5. Browse for the image file you want to upload. Select it and then select
upload.

Note: You can upload as many blobs as you like in this way. New blobs
will be listed within the container.
6. Select the Blob (file) you just uploaded. You should be on the properties
tab.

7. Copy the URL from the URL field and paste it into a new tab.

You should receive an error message similar to the following.

<Error>
<Code>ResourceNotFound</Code>
<Message>The specified resource does not exist. RequestId:4a4bd3d9-
101e-005a-1a3e-84bd42000000</Message>
</Error>

Change the access level of your blob


1. Go back to the Azure portal.

2. Select Change access level.

3. Set the Anonymous access level to Blob (anonymous read access for
blobs only).

4. Select OK.
5. Refresh the tab where you attempted to access the file earlier.

Congratulations - you've completed this exercise. You created a storage account, added a container to the storage account, and then uploaded blobs
(files) to your container. Then you changed the access level so you could
access your file from the internet.
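The same container, upload, and access-level steps can also be scripted. The following Azure CLI sketch is illustrative only; the account, container, and file names are placeholders, and it assumes you've signed in with az login.

# Create a container (private by default), upload a file, then allow anonymous read access to blobs
az storage container create \
  --account-name mystorageaccount \
  --name images \
  --auth-mode login

az storage blob upload \
  --account-name mystorageaccount \
  --container-name images \
  --name flower.jpg \
  --file ./flower.jpg \
  --auth-mode login

# Changing the public access level uses the storage account key behind the scenes
az storage container set-permission \
  --account-name mystorageaccount \
  --name images \
  --public-access blob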

Identify Azure data migration options
Now that you understand the different storage options within Azure, it’s
important to also understand how to get your data and information into
Azure. Azure supports both real-time migration of infrastructure,
applications, and data using Azure Migrate as well as asynchronous
migration of data using Azure Data Box.

Azure Migrate
Azure Migrate is a service that helps you migrate from an on-premises
environment to the cloud. Azure Migrate functions as a hub to help you
manage the assessment and migration of your on-premises datacenter to
Azure. It provides the following:

 Unified migration platform: A single portal to start, run, and track your
migration to Azure.
 Range of tools: A range of tools for assessment and migration. Azure Migrate
tools include Azure Migrate: Discovery and assessment and Azure Migrate:
Server Migration. Azure Migrate also integrates with other Azure services and
tools, and with independent software vendor (ISV) offerings.
 Assessment and migration: In the Azure Migrate hub, you can assess and
migrate your on-premises infrastructure to Azure.

Integrated tools

In addition to working with tools from ISVs, the Azure Migrate hub also
includes the following tools to help with migration:

 Azure Migrate: Discovery and assessment. Discover and assess on-premises servers running on VMware, Hyper-V, and physical servers in
preparation for migration to Azure.
 Azure Migrate: Server Migration. Migrate VMware VMs, Hyper-V VMs,
physical servers, other virtualized servers, and public cloud VMs to Azure.
 Data Migration Assistant. Data Migration Assistant is a stand-alone tool to
assess SQL Servers. It helps pinpoint potential problems blocking migration. It
identifies unsupported features, new features that can benefit you after
migration, and the right path for database migration.
 Azure Database Migration Service. Migrate on-premises databases to
Azure VMs running SQL Server, Azure SQL Database, or SQL Managed
Instances.
 Azure App Service migration assistant. Azure App Service migration
assistant is a standalone tool to assess on-premises websites for migration to
Azure App Service. Use Migration Assistant to migrate .NET and PHP web apps
to Azure.
 Azure Data Box. Use Azure Data Box products to move large amounts of
offline data to Azure.

Azure Data Box


Azure Data Box is a physical migration service that helps transfer large
amounts of data in a quick, inexpensive, and reliable way. The secure data
transfer is accelerated by shipping you a proprietary Data Box storage
device that has a maximum usable storage capacity of 80 terabytes. The
Data Box is transported to and from your datacenter via a regional carrier. A
rugged case protects and secures the Data Box from damage during transit.

You can order the Data Box device via the Azure portal to import or export
data from Azure. Once the device is received, you can quickly set it up using
the local web UI and connect it to your network. Once you’re finished
transferring the data (either into or out of Azure), simply return the Data
Box. If you’re transferring data into Azure, the data is automatically uploaded
once Microsoft receives the Data Box back. The entire process is tracked
end-to-end by the Data Box service in the Azure portal.

Use cases

Data Box is ideally suited to transfer data sizes larger than 40 TBs in
scenarios with no to limited network connectivity. The data movement can
be one-time, periodic, or an initial bulk data transfer followed by periodic
transfers.

Here are the various scenarios where Data Box can be used to import data to
Azure.

 One-time migration - when a large amount of on-premises data is moved to Azure.
 Moving a media library from offline tapes into Azure to create an online media
library.
 Migrating your VM farm, SQL server, and applications to Azure.
 Moving historical data to Azure for in-depth analysis and reporting using
HDInsight.
 Initial bulk transfer - when an initial bulk transfer is done using Data Box (seed)
followed by incremental transfers over the network.
 Periodic uploads - when a large amount of data is generated periodically and needs to be moved to Azure.

Here are the various scenarios where Data Box can be used to export data
from Azure.

 Disaster recovery - when a copy of the data from Azure is restored to an on-
premises network. In a typical disaster recovery scenario, a large amount of
Azure data is exported to a Data Box. Microsoft then ships this Data Box, and
the data is restored on your premises in a short time.
 Security requirements - when you need to be able to export data out of Azure
due to government or security requirements.
 Migrate back to on-premises or to another cloud service provider - when you
want to move all the data back to on-premises, or to another cloud service
provider, export data via Data Box to migrate the workloads.

Once the data from your import order is uploaded to Azure, the disks on the
device are wiped clean in accordance with NIST 800-88r1 standards. For an
export order, the disks are erased once the device reaches the Azure
datacenter.

Identify Azure file movement options
In addition to large scale migration using services like Azure Migrate and
Azure Data Box, Azure also has tools designed to help you move or interact
with individual files or small file groups. Among those tools are AzCopy,
Azure Storage Explorer, and Azure File Sync.

AzCopy
AzCopy is a command-line utility that you can use to copy blobs or files to or
from your storage account. With AzCopy, you can upload files, download
files, copy files between storage accounts, and even synchronize files.
AzCopy can even be configured to work with other cloud providers to help
move files back and forth between clouds.
Important: Synchronizing blobs or files with AzCopy is one-direction synchronization. When you synchronize, you designate the source and destination, and AzCopy copies files or blobs in that direction. It doesn't synchronize bi-directionally based on timestamps or other metadata.
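As a hedged sketch of typical AzCopy usage, the commands below upload a local folder to a blob container and then perform a one-way sync of the same folder. The storage account and container names, local path, and SAS token are placeholders.

# One-time recursive upload of a local folder to a container (placeholder URL and SAS token)
azcopy copy "./photos" "https://mystorageaccount.blob.core.windows.net/images?<SAS-token>" --recursive

# One-direction sync: copies new or changed files from the source to the destination
azcopy sync "./photos" "https://mystorageaccount.blob.core.windows.net/images?<SAS-token>" --recursive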

Azure Storage Explorer


Azure Storage Explorer is a standalone app that provides a graphical
interface to manage files and blobs in your Azure Storage Account. It works
on Windows, macOS, and Linux operating systems and uses AzCopy on the
backend to perform all of the file and blob management tasks. With Storage
Explorer, you can upload to Azure, download from Azure, or move between
storage accounts.

Azure File Sync


Azure File Sync is a tool that lets you centralize your file shares in Azure Files
and keep the flexibility, performance, and compatibility of a Windows file
server. It’s almost like turning your Windows file server into a miniature
content delivery network. Once you install Azure File Sync on your local
Windows server, it will automatically stay bi-directionally synced with your
files in Azure.

With Azure File Sync, you can:

 Use any protocol that's available on Windows Server to access your data
locally, including SMB, NFS, and FTPS.
 Have as many caches as you need across the world.
 Replace a failed local server by installing Azure File Sync on a new server in
the same datacenter.
 Configure cloud tiering so the most frequently accessed files are replicated
locally, while infrequently accessed files are kept in the cloud until requested.

Describe Azure identity, access, and security
This module covers some of the authorization and authentication methods
available with Azure.

Introduction
In this module, you’ll be introduced to the Azure identity, access, and
security services and tools. You’ll learn about directory services in Azure,
authentication methods, and access control. You’ll also cover things like Zero
Trust and defense in depth, and how they keep your cloud safer. You’ll wrap
up with an introduction to Microsoft Defender for Cloud.

Learning objectives
After completing this module, you’ll be able to:

 Describe directory services in Azure, including Microsoft Entra ID and Microsoft Entra Domain Services.
 Describe authentication methods in Azure, including single sign-on (SSO),
multifactor authentication (MFA), and passwordless.
 Describe external identities and guest access in Azure.
 Describe Microsoft Entra Conditional Access.
 Describe Azure Role Based Access Control (RBAC).
 Describe the concept of Zero Trust.
 Describe the purpose of the defense in depth model.
 Describe the purpose of Microsoft Defender for Cloud.

Describe Azure directory services


Microsoft Entra ID is a directory service that enables you to sign in and
access both Microsoft cloud applications and cloud applications that you
develop. Microsoft Entra ID can also help you maintain your on-premises
Active Directory deployment.

For on-premises environments, Active Directory running on Windows Server provides an identity and access management service that's managed by
your organization. Microsoft Entra ID is Microsoft's cloud-based identity and
access management service. With Microsoft Entra ID, you control the identity
accounts, but Microsoft ensures that the service is available globally. If
you've worked with Active Directory, Microsoft Entra ID will be familiar to
you.

When you secure identities on-premises with Active Directory, Microsoft doesn't monitor sign-in attempts. When you connect Active Directory with
Microsoft Entra ID, Microsoft can help protect you by detecting suspicious
sign-in attempts at no extra cost. For example, Microsoft Entra ID can detect
sign-in attempts from unexpected locations or unknown devices.
Who uses Microsoft Entra ID?
Microsoft Entra ID is for:

 IT administrators. Administrators can use Microsoft Entra ID to control access to applications and resources based on their business requirements.
 App developers. Developers can use Microsoft Entra ID to provide a
standards-based approach for adding functionality to applications that they
build, such as adding SSO functionality to an app or enabling an app to work
with a user's existing credentials.
 Users. Users can manage their identities and take maintenance actions like
self-service password reset.
 Online service subscribers. Microsoft 365, Microsoft Office 365, Azure, and
Microsoft Dynamics CRM Online subscribers are already using Microsoft Entra
ID to authenticate into their account.

What does Microsoft Entra ID do?


Microsoft Entra ID provides services such as:

 Authentication: This includes verifying identity to access applications and resources. It also includes providing functionality such as self-service password
reset, multifactor authentication, a custom list of banned passwords, and
smart lockout services.
 Single sign-on: Single sign-on (SSO) enables you to remember only one
username and one password to access multiple applications. A single identity
is tied to a user, which simplifies the security model. As users change roles or
leave an organization, access modifications are tied to that identity, which
greatly reduces the effort needed to change or disable accounts.
 Application management: You can manage your cloud and on-premises
apps by using Microsoft Entra ID. Features like Application Proxy, SaaS apps,
the My Apps portal, and single sign-on provide a better user experience.
 Device management: Along with accounts for individual people, Microsoft
Entra ID supports the registration of devices. Registration enables devices to
be managed through tools like Microsoft Intune. It also allows for device-based
Conditional Access policies to restrict access attempts to only those coming
from known devices, regardless of the requesting user account.

Can I connect my on-premises AD with Microsoft Entra ID?
If you had an on-premises environment running Active Directory and a cloud
deployment using Microsoft Entra ID, you would need to maintain two
identity sets. However, you can connect Active Directory with Microsoft Entra
ID, enabling a consistent identity experience between cloud and on-
premises.

One method of connecting Microsoft Entra ID with your on-premises AD is using Microsoft Entra Connect. Microsoft Entra Connect synchronizes user
identities between on-premises Active Directory and Microsoft Entra ID.
Microsoft Entra Connect synchronizes changes between both identity
systems, so you can use features like SSO, multifactor authentication, and
self-service password reset under both systems.

What is Microsoft Entra Domain Services?


Microsoft Entra Domain Services is a service that provides managed domain
services such as domain join, group policy, lightweight directory access
protocol (LDAP), and Kerberos/NTLM authentication. Just like Microsoft Entra
ID lets you use directory services without having to maintain the
infrastructure supporting it, with Microsoft Entra Domain Services, you get
the benefit of domain services without the need to deploy, manage, and
patch domain controllers (DCs) in the cloud.

A Microsoft Entra Domain Services managed domain lets you run legacy
applications in the cloud that can't use modern authentication methods, or
where you don't want directory lookups to always go back to an on-premises
AD DS environment. You can lift and shift those legacy applications from
your on-premises environment into a managed domain, without needing to
manage the AD DS environment in the cloud.

Microsoft Entra Domain Services integrates with your existing Microsoft Entra
tenant. This integration lets users sign into services and applications
connected to the managed domain using their existing credentials. You can
also use existing groups and user accounts to secure access to resources.
These features provide a smoother lift-and-shift of on-premises resources to
Azure.

How does Microsoft Entra Domain Services work?

When you create a Microsoft Entra Domain Services managed domain, you
define a unique namespace. This namespace is the domain name. Two
Windows Server domain controllers are then deployed into your selected
Azure region. This deployment of DCs is known as a replica set.

You don't need to manage, configure, or update these DCs. The Azure
platform handles the DCs as part of the managed domain, including backups
and encryption at rest using Azure Disk Encryption.
Is information synchronized?

A managed domain is configured to perform a one-way synchronization from Microsoft Entra ID to Microsoft Entra Domain Services. You can create
resources directly in the managed domain, but they aren't synchronized
back to Microsoft Entra ID. In a hybrid environment with an on-premises AD
DS environment, Microsoft Entra Connect synchronizes identity information
with Microsoft Entra ID, which is then synchronized to the managed domain.

Applications, services, and VMs in Azure that connect to the managed domain can then use common Microsoft Entra Domain Services features such
as domain join, group policy, LDAP, and Kerberos/NTLM authentication.

Describe Azure authentication methods
Authentication is the process of establishing the identity of a person, service,
or device. It requires the person, service, or device to provide some type of
credential to prove who they are. Authentication is like presenting ID when
you’re traveling. It doesn’t confirm that you’re ticketed, it just proves that
you're who you say you are. Azure supports multiple authentication
methods, including standard passwords, single sign-on (SSO), multifactor
authentication (MFA), and passwordless.

For the longest time, security and convenience seemed to be at odds with
each other. Thankfully, new authentication solutions provide both security
and convenience.

The following diagram shows security level compared to convenience. Notice that passwordless authentication is high security and high convenience, while passwords on their own are low security but high convenience.
What's single sign-on?
Single sign-on (SSO) enables a user to sign in one time and use that
credential to access multiple resources and applications from different
providers. For SSO to work, the different applications and providers must
trust the initial authenticator.

More identities mean more passwords to remember and change. Password policies can vary among applications. As complexity requirements increase,
it becomes increasingly difficult for users to remember them. The more
passwords a user has to manage, the greater the risk of a credential-related
security incident.

Consider the process of managing all those identities. More strain is placed
on help desks as they deal with account lockouts and password reset
requests. If a user leaves an organization, tracking down all those identities
and ensuring they're disabled can be challenging. If an identity is
overlooked, this might allow access when it should have been eliminated.

With SSO, you need to remember only one ID and one password. Access
across applications is granted to a single identity that's tied to the user,
which simplifies the security model. As users change roles or leave an
organization, access is tied to a single identity. This change greatly reduces
the effort needed to change or disable accounts. Using SSO for accounts
makes it easier for users to manage their identities and for IT to manage
users.
Important: Single sign-on is only as secure as the initial authenticator
because the subsequent connections are all based on the security of the
initial authenticator.

What’s multifactor authentication?


Multifactor authentication is the process of prompting a user for an extra
form (or factor) of identification during the sign-in process. MFA helps protect
against a password compromise in situations where the password was
compromised but the second factor wasn't.

Think about how you sign into websites, email, or online services. After
entering your username and password, have you ever needed to enter a
code that was sent to your phone? If so, you've used multifactor
authentication to sign in.

Multifactor authentication provides additional security for your identities by requiring two or more elements to fully authenticate. These elements fall
into three categories:

 Something the user knows – this might be a challenge question.


 Something the user has – this might be a code that's sent to the user's mobile
phone.
 Something the user is – this is typically some sort of biometric property, such
as a fingerprint or face scan.

Multifactor authentication increases identity security by limiting the impact of credential exposure (for example, stolen usernames and passwords). With
multifactor authentication enabled, an attacker who has a user's password
would also need to have possession of their phone or their fingerprint to fully
authenticate.

Compare multifactor authentication with single-factor authentication. Under single-factor authentication, an attacker would need only a username and
password to authenticate. Multifactor authentication should be enabled
wherever possible because it adds enormous benefits to security.

What's Microsoft Entra multifactor authentication?

Microsoft Entra multifactor authentication is a Microsoft service that provides multifactor authentication capabilities. Microsoft Entra multifactor
authentication enables users to choose an additional form of authentication
during sign-in, such as a phone call or mobile app notification.
What’s passwordless authentication?
Features like MFA are a great way to secure your organization, but users
often get frustrated with the additional security layer on top of having to
remember their passwords. People are more likely to comply when it's easy
and convenient to do so. Passwordless authentication methods are more
convenient because the password is removed and replaced with something
you have, plus something you are, or something you know.

Passwordless authentication needs to be set up on a device before it can work. For example, your computer is something you have. Once it's been
registered or enrolled, Azure now knows that it’s associated with you. Now
that the computer is known, once you provide something you know or are
(such as a PIN or fingerprint), you can be authenticated without using a
password.

Each organization has different needs when it comes to authentication. Microsoft global Azure and Azure Government offer the following three
passwordless authentication options that integrate with Microsoft Entra ID:

 Windows Hello for Business


 Microsoft Authenticator app
 FIDO2 security keys

Windows Hello for Business

Windows Hello for Business is ideal for information workers that have their
own designated Windows PC. The biometric and PIN credentials are directly
tied to the user's PC, which prevents access from anyone other than the
owner. With public key infrastructure (PKI) integration and built-in support for
single sign-on (SSO), Windows Hello for Business provides a convenient
method for seamlessly accessing corporate resources on-premises and in the
cloud.

Microsoft Authenticator App

You can also allow your employee's phone to become a passwordless authentication method. You may already be using the Microsoft
Authenticator App as a convenient multifactor authentication option in
addition to a password. You can also use the Authenticator App as a
passwordless option.

The Authenticator App turns any iOS or Android phone into a strong,
passwordless credential. Users can sign-in to any platform or browser by
getting a notification to their phone, matching a number displayed on the
screen to the one on their phone, and then using their biometric (touch or
face) or PIN to confirm. Refer to Download and install the Microsoft
Authenticator app for installation details.

FIDO2 security keys

The FIDO (Fast IDentity Online) Alliance helps to promote open authentication standards and reduce the use of passwords as a form of
authentication. FIDO2 is the latest standard that incorporates the web
authentication (WebAuthn) standard.

FIDO2 security keys are an unphishable standards-based passwordless authentication method that can come in any form factor. Fast Identity Online
(FIDO) is an open standard for passwordless authentication. FIDO allows
users and organizations to leverage the standard to sign-in to their resources
without a username or password by using an external security key or a
platform key built into a device.

Users can register and then select a FIDO2 security key at the sign-in
interface as their main means of authentication. These FIDO2 security keys
are typically USB devices, but could also use Bluetooth or NFC. With a
hardware device that handles the authentication, the security of an account
is increased as there's no password that could be exposed or guessed.

Describe Azure external identities


An external identity is a person, device, service, etc. that is outside your
organization. Microsoft Entra External ID refers to all the ways you can
securely interact with users outside of your organization. If you want to
collaborate with partners, distributors, suppliers, or vendors, you can share
your resources and define how your internal users can access external
organizations. If you're a developer creating consumer-facing apps, you can
manage your customers' identity experiences.

External identities may sound similar to single sign-on. With External Identities, external users can "bring their own identities." Whether they have
a corporate or government-issued digital identity, or an unmanaged social
identity like Google or Facebook, they can use their own credentials to sign
in. The external user’s identity provider manages their identity, and you
manage access to your apps with Microsoft Entra ID or Azure AD B2C to keep
your resources protected.
The following capabilities make up External Identities:

 Business to business (B2B) collaboration - Collaborate with external users by letting them use their preferred identity to sign-in to your Microsoft
applications or other enterprise applications (SaaS apps, custom-developed
apps, etc.). B2B collaboration users are represented in your directory, typically
as guest users.
 B2B direct connect - Establish a mutual, two-way trust with another
Microsoft Entra organization for seamless collaboration. B2B direct connect
currently supports Teams shared channels, enabling external users to access
your resources from within their home instances of Teams. B2B direct connect
users aren't represented in your directory, but they're visible from within the
Teams shared channel and can be monitored in Teams admin center reports.
 Microsoft Azure Active Directory business to customer (B2C) - Publish
modern SaaS apps or custom-developed apps (excluding Microsoft apps) to
consumers and customers, while using Azure AD B2C for identity and access
management.
Depending on how you want to interact with external organizations and the
types of resources you need to share, you can use a combination of these
capabilities.

With Microsoft Entra ID, you can easily enable collaboration across
organizational boundaries by using the Microsoft Entra B2B feature. Guest
users from other tenants can be invited by administrators or by other users.
This capability also applies to social identities such as Microsoft accounts.

You also can easily ensure that guest users have appropriate access. You can
ask the guests themselves or a decision maker to participate in an access
review and recertify (or attest) to the guests' access. The reviewers can give
their input on each user's need for continued access, based on suggestions
from Microsoft Entra ID. When an access review is finished, you can then
make changes and remove access for guests who no longer need it.

Describe Azure conditional access


Conditional Access is a tool that Microsoft Entra ID uses to allow (or deny)
access to resources based on identity signals. These signals include who the
user is, where the user is, and what device the user is requesting access
from.

Conditional Access helps IT administrators:

 Empower users to be productive wherever and whenever.


 Protect the organization's assets.

Conditional Access also provides a more granular multifactor authentication experience for users. For example, a user might not be challenged for a second authentication factor if they're at a known location. However, they
might be challenged for a second authentication factor if their sign-in signals
are unusual or they're at an unexpected location.

During sign-in, Conditional Access collects signals from the user, makes
decisions based on those signals, and then enforces that decision by allowing
or denying the access request or challenging for a multifactor authentication
response.

The following diagram illustrates this flow:


Here, the signal might be the user's location, the user's device, or the
application that the user is trying to access.

Based on these signals, the decision might be to allow full access if the user
is signing in from their usual location. If the user is signing in from an
unusual location or a location that's marked as high risk, then access might
be blocked entirely or possibly granted after the user provides a second form
of authentication.

Enforcement is the action that carries out the decision. For example, the
action is to allow access or require the user to provide a second form of
authentication.

When can I use Conditional Access?


Conditional Access is useful when you need to:

 Require multifactor authentication (MFA) to access an application depending on the requester’s role, location, or network. For example, you could require
MFA for administrators but not regular users or for people connecting from
outside your corporate network.
 Require access to services only through approved client applications. For
example, you could limit which email applications are able to connect to your
email service.
 Require users to access your application only from managed devices. A
managed device is a device that meets your standards for security and
compliance.
 Block access from untrusted sources, such as access from unknown or
unexpected locations.

Describe Azure role-based access control
When you have multiple IT and engineering teams, how can you control what
access they have to the resources in your cloud environment? The principle
of least privilege says you should only grant access up to the level needed to
complete a task. If you only need read access to a storage blob, then you
should only be granted read access to that storage blob. Write access to that
blob shouldn’t be granted, nor should read access to other storage blobs. It’s
a good security practice to follow.

However, managing that level of permissions for an entire team would become tedious. Instead of defining the detailed access requirements for
each individual, and then updating access requirements when new resources
are created or new people join the team, Azure enables you to control access
through Azure role-based access control (Azure RBAC).

Azure provides built-in roles that describe common access rules for cloud
resources. You can also define your own roles. Each role has an associated
set of access permissions that relate to that role. When you assign
individuals or groups to one or more roles, they receive all the associated
access permissions.

So, if you hire a new engineer and add them to the Azure RBAC group for
engineers, they automatically get the same access as the other engineers in
the same Azure RBAC group. Similarly, if you add additional resources and
point Azure RBAC at them, everyone in that Azure RBAC group will now have
those permissions on the new resources as well as the existing resources.

How is role-based access control applied to resources?
Role-based access control is applied to a scope, which is a resource or set of
resources that this access applies to.

The following diagram shows the relationship between roles and scopes. A
management group, subscription, or resource group might be given the role
of owner, so they have increased control and authority. An observer, who
isn't expected to make any updates, might be given a role of Reader for the
same scope, enabling them to review or observe the management group,
subscription, or resource group.
Scopes include:

 A management group (a collection of multiple subscriptions).


 A single subscription.
 A resource group.
 A single resource.

Observers, users managing resources, admins, and automated processes illustrate the kinds of users or accounts that would typically be assigned
each of the various roles.

Azure RBAC is hierarchical, in that when you grant access at a parent scope,
those permissions are inherited by all child scopes. For example:

 When you assign the Owner role to a user at the management group scope,
that user can manage everything in all subscriptions within the management
group.
 When you assign the Reader role to a group at the subscription scope, the
members of that group can view every resource group and resource within the
subscription.
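As a hedged example of granting a role at a particular scope, the following Azure CLI command assigns the built-in Reader role to a user at resource group scope. The user, subscription ID, and resource group name are placeholders.

# Assign the built-in Reader role at resource group scope (placeholder identifiers)
az role assignment create \
  --assignee "observer@contoso.com" \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/learn-rg"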

How is Azure RBAC enforced?


Azure RBAC is enforced on any action that's initiated against an Azure
resource that passes through Azure Resource Manager. Resource Manager is
a management service that provides a way to organize and secure your
cloud resources.

You typically access Resource Manager from the Azure portal, Azure Cloud
Shell, Azure PowerShell, and the Azure CLI. Azure RBAC doesn't enforce
access permissions at the application or data level. Application security must
be handled by your application.
Azure RBAC uses an allow model. When you're assigned a role, Azure RBAC
allows you to perform actions within the scope of that role. If one role
assignment grants you read permissions to a resource group and a different
role assignment grants you write permissions to the same resource group,
you have both read and write permissions on that resource group.

Describe zero trust model


Zero Trust is a security model that assumes the worst case scenario and
protects resources with that expectation. Zero Trust assumes breach at the
outset, and then verifies each request as though it originated from an
uncontrolled network.

Today, organizations need a new security model that effectively adapts to the complexity of the modern environment; embraces the mobile workforce; and protects people, devices, applications, and data wherever they're located.

To address this new world of computing, Microsoft highly recommends the Zero Trust security model, which is based on these guiding principles:

 Verify explicitly - Always authenticate and authorize based on all available data points.
 Use least privilege access - Limit user access with Just-In-Time and Just-
Enough-Access (JIT/JEA), risk-based adaptive policies, and data protection.
 Assume breach - Minimize blast radius and segment access. Verify end-to-
end encryption. Use analytics to get visibility, drive threat detection, and
improve defenses.

Adjusting to Zero Trust


Traditionally, corporate networks were restricted, protected, and generally
assumed safe. Only managed computers could join the network, VPN access
was tightly controlled, and personal devices were frequently restricted or
blocked.

The Zero Trust model flips that scenario. Instead of assuming that a device is safe because it's within the corporate network, it requires everyone to authenticate, and then grants access based on authentication rather than location.
Describe defense-in-depth
The objective of defense-in-depth is to protect information and prevent it
from being stolen by those who aren't authorized to access it.

A defense-in-depth strategy uses a series of mechanisms to slow the advance of an attack that aims at acquiring unauthorized access to data.

Layers of defense-in-depth
You can visualize defense-in-depth as a set of layers, with the data to be
secured at the center and all the other layers functioning to protect that
central data layer.
Each layer provides protection so that if one layer is breached, a subsequent
layer is already in place to prevent further exposure. This approach removes
reliance on any single layer of protection. It slows down an attack and
provides alert information that security teams can act upon, either
automatically or manually.

Here's a brief overview of the role of each layer:

 The physical security layer is the first line of defense to protect computing
hardware in the datacenter.
 The identity and access layer controls access to infrastructure and change
control.
 The perimeter layer uses distributed denial of service (DDoS) protection to
filter large-scale attacks before they can cause a denial of service for users.
 The network layer limits communication between resources through
segmentation and access controls.
 The compute layer secures access to virtual machines.
 The application layer helps ensure that applications are secure and free of
security vulnerabilities.
 The data layer controls access to business and customer data that you need to
protect.

These layers provide a guideline for you to help make security configuration
decisions in all of the layers of your applications.

Azure provides security tools and features at every level of the defense-in-
depth concept. Let's take a closer look at each layer:
Physical security

Physically securing access to buildings and controlling access to computing hardware within the datacenter are the first line of defense.

With physical security, the intent is to provide physical safeguards against access to assets. These safeguards ensure that other layers can't be
bypassed, and loss or theft is handled appropriately. Microsoft uses various
physical security mechanisms in its cloud datacenters.

Identity and access

The identity and access layer is all about ensuring that identities are secure,
that access is granted only to what's needed, and that sign-in events and
changes are logged.

At this layer, it's important to:

 Control access to infrastructure and change control.


 Use single sign-on (SSO) and multifactor authentication.
 Audit events and changes.

Perimeter

The network perimeter protects from network-based attacks against your resources. Identifying these attacks, eliminating their impact, and alerting
you when they happen are important ways to keep your network secure.

At this layer, it's important to:

 Use DDoS protection to filter large-scale attacks before they can affect the
availability of a system for users.
 Use perimeter firewalls to identify and alert on malicious attacks against your
network.

Network

At this layer, the focus is on limiting the network connectivity across all your
resources to allow only what's required. By limiting this communication, you
reduce the risk of an attack spreading to other systems in your network.

At this layer, it's important to:

 Limit communication between resources.


 Deny by default.
 Restrict inbound internet access and limit outbound access where appropriate.
 Implement secure connectivity to on-premises networks.

Compute

Malware, unpatched systems, and improperly secured systems open your environment to attacks. The focus in this layer is on making sure that your
compute resources are secure and that you have the proper controls in place
to minimize security issues.

At this layer, it's important to:

 Secure access to virtual machines.


 Implement endpoint protection on devices and keep systems patched and
current.

Application

Integrating security into the application development lifecycle helps reduce the number of vulnerabilities introduced in code. Every development team
should ensure that its applications are secure by default.

At this layer, it's important to:

 Ensure that applications are secure and free of vulnerabilities.


 Store sensitive application secrets in a secure storage medium.
 Make security a design requirement for all application development.

Data

Those who store and control access to data are responsible for ensuring that
it's properly secured. Often, regulatory requirements dictate the controls and
processes that must be in place to ensure the confidentiality, integrity, and
availability of the data.

In almost all cases, attackers are after data:

 Stored in a database.
 Stored on disk inside virtual machines.
 Stored in software as a service (SaaS) applications, such as Office 365.
 Managed through cloud storage.
Describe Microsoft Defender for
Cloud
Defender for Cloud is a monitoring tool for security posture management and
threat protection. It monitors your cloud, on-premises, hybrid, and multi-
cloud environments to provide guidance and notifications aimed at
strengthening your security posture.

Defender for Cloud provides the tools needed to harden your resources,
track your security posture, protect against cyber-attacks, and streamline
security management. Deployment of Defender for Cloud is easy; it's already natively integrated into Azure.

Protection everywhere you’re deployed


Because Defender for Cloud is an Azure-native service, many Azure services
are monitored and protected without needing any deployment. However, if
you also have an on-premises datacenter or are also operating in another
cloud environment, monitoring of Azure services may not give you a
complete picture of your security situation.

When necessary, Defender for Cloud can automatically deploy a Log Analytics agent to gather security-related data. For Azure machines,
deployment is handled directly. For hybrid and multi-cloud environments,
Microsoft Defender plans are extended to non-Azure machines with the help
of Azure Arc. Cloud security posture management (CSPM) features are
extended to multi-cloud machines without the need for any agents.

Azure-native protections

Defender for Cloud helps you detect threats across:

 Azure PaaS services – Detect threats targeting Azure services including Azure
App Service, Azure SQL, Azure Storage Account, and more data services. You
can also perform anomaly detection on your Azure activity logs using the
native integration with Microsoft Defender for Cloud Apps (formerly known as
Microsoft Cloud App Security).
 Azure data services – Defender for Cloud includes capabilities that help you
automatically classify your data in Azure SQL. You can also get assessments
for potential vulnerabilities across Azure SQL and Storage services, and
recommendations for how to mitigate them.
 Networks – Defender for Cloud helps you limit exposure to brute force attacks.
By reducing access to virtual machine ports, using the just-in-time VM access,
you can harden your network by preventing unnecessary access. You can set
secure access policies on selected ports, for only authorized users, allowed
source IP address ranges or IP addresses, and for a limited amount of time.

Defend your hybrid resources

In addition to defending your Azure environment, you can add Defender for
Cloud capabilities to your hybrid cloud environment to protect your non-
Azure servers. To help you focus on what matters the most, you'll get
customized threat intelligence and prioritized alerts according to your
specific environment.

To extend protection to on-premises machines, deploy Azure Arc and enable


Defender for Cloud's enhanced security features.
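Enabling the enhanced security features amounts to turning on a Microsoft Defender plan for a subscription. The following Azure CLI command is a hedged sketch of enabling the Defender for Servers plan; the plan name shown is one of the documented values for the az security pricing command and may vary by environment.

# Enable the Defender for Servers plan on the current subscription (sketch; requires appropriate permissions)
az security pricing create --name VirtualMachines --tier Standard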

Defend resources running on other clouds

Defender for Cloud can also protect resources in other clouds (such as AWS
and GCP).

For example, if you've connected an Amazon Web Services (AWS) account to an Azure subscription, you can enable any of these protections:

 Defender for Cloud's CSPM features extend to your AWS resources. This
agentless plan assesses your AWS resources according to AWS-specific
security recommendations, and includes the results in the secure score. The
resources will also be assessed for compliance with built-in standards specific
to AWS (AWS CIS, AWS PCI DSS, and AWS Foundational Security Best
Practices). Defender for Cloud's asset inventory page is a multi-cloud enabled
feature helping you manage your AWS resources alongside your Azure
resources.
 Microsoft Defender for Containers extends its container threat detection and
advanced defenses to your Amazon EKS Linux clusters.
 Microsoft Defender for Servers brings threat detection and advanced defenses
to your Windows and Linux EC2 instances.

Assess, Secure, and Defend


Defender for Cloud fills three vital needs as you manage the security of your
resources and workloads in the cloud and on-premises:

 Continuously assess – Know your security posture. Identify and track vulnerabilities.
 Secure – Harden resources and services with Azure Security Benchmark.
 Defend – Detect and resolve threats to resources, workloads, and services.
Continuously assess

Defender for Cloud helps you continuously assess your environment. Defender for Cloud includes vulnerability assessment solutions for your
virtual machines, container registries, and SQL servers.

Microsoft Defender for Servers includes automatic, native integration with Microsoft Defender for Endpoint. With this integration enabled, you'll have
access to the vulnerability findings from Microsoft threat and vulnerability
management.

Between these assessment tools you'll have regular, detailed vulnerability scans that cover your compute, data, and infrastructure. You can review and
respond to the results of these scans all from within Defender for Cloud.

Secure

From authentication methods to access control to the concept of Zero Trust, security in the cloud is an essential foundation that must be done right. In order to
be secure in the cloud, you have to ensure your workloads are secure. To
secure your workloads, you need security policies in place that are tailored
to your environment and situation. Because policies in Defender for Cloud
are built on top of Azure Policy controls, you're getting the full range and
flexibility of a world-class policy solution. In Defender for Cloud, you can set
your policies to run on management groups, across subscriptions, and even
for a whole tenant.

One of the benefits of moving to the cloud is the ability to grow and scale as
you need, adding new services and resources as necessary. Defender for
Cloud is constantly monitoring for new resources being deployed across your
workloads. Defender for Cloud assesses if new resources are configured
according to security best practices. If not, they're flagged and you get a
prioritized list of recommendations for what you need to fix.
Recommendations help you reduce the attack surface across each of your
resources.
The list of recommendations is enabled and supported by the Azure Security
Benchmark. This Microsoft-authored, Azure-specific benchmark provides a
set of guidelines for security and compliance best practices based on
common compliance frameworks.

In this way, Defender for Cloud enables you not just to set security policies,
but to apply secure configuration standards across your resources.

To help you understand how important each recommendation is to your overall security posture, Defender for Cloud groups the recommendations
into security controls and adds a secure score value to each control. The
secure score gives you an at-a-glance indicator of the health of your security
posture, while the controls give you a working list of things to consider to
improve your security score and your overall security posture.

Defend

The first two areas were focused on assessing, monitoring, and maintaining
your environment. Defender for Cloud also helps you defend your
environment by providing security alerts and advanced threat protection
features.
Security alerts

When Defender for Cloud detects a threat in any area of your environment, it
generates a security alert. Security alerts:

 Describe details of the affected resources


 Suggest remediation steps
 Provide, in some cases, an option to trigger a logic app in response

Whether an alert is generated by Defender for Cloud or received by Defender for Cloud from an integrated security product, you can export it. Defender for
Cloud's threat protection includes fusion kill-chain analysis, which
automatically correlates alerts in your environment based on cyber kill-chain
analysis, to help you better understand the full story of an attack campaign,
where it started, and what kind of impact it had on your resources.

Advanced threat protection

Defender for Cloud provides advanced threat protection features for many of
your deployed resources, including virtual machines, SQL databases,
containers, web applications, and your network. Protections include securing
the management ports of your VMs with just-in-time access, and adaptive
application controls to create allowlists for what apps should and shouldn't
run on your machines.

Part 3: Describe Azure management and governance
Describe cost management in
Azure
This module explores methods to estimate, track, and manage costs in
Azure.

Introduction
In this module, you’ll be introduced to factors that impact costs in Azure and
tools to help you both predict potential costs and monitor and control costs.
Learning objectives
After completing this module, you’ll be able to:

 Describe factors that can affect costs in Azure.


 Compare the Pricing calculator and Total Cost of Ownership (TCO) calculator.
 Describe the Microsoft Cost Management Tool.
 Describe the purpose of tags.

Describe factors that can affect costs in Azure
The following video provides an introduction to things that can impact your
costs in Azure.

Azure shifts development costs from the capital expense (CapEx) of building
out and maintaining infrastructure and facilities to an operational expense
(OpEx) of renting infrastructure as you need it, whether it’s compute,
storage, networking, and so on.

That OpEx cost can be impacted by many factors. Some of the impacting
factors are:

 Resource type
 Consumption
 Maintenance
 Geography
 Subscription type
 Azure Marketplace

Resource type
A number of factors influence the cost of Azure resources. The type of
resources, the settings for the resource, and the Azure region will all have an
impact on how much a resource costs. When you provision an Azure
resource, Azure creates metered instances for that resource. The meters
track the resource's usage and generate a usage record that is used to
calculate your bill.
Examples

With a storage account, you specify a type such as blob, a performance tier,
an access tier, redundancy settings, and a region. Creating the same storage
account in different regions may show different costs and changing any of
the settings may also impact the price.

With a virtual machine (VM), you may have to consider licensing for the
operating system or other software, the processor and number of cores for
the VM, the attached storage, and the network interface. Just like with
storage, provisioning the same virtual machine in different regions may
result in different costs.
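
To make the storage example concrete, here's a minimal Azure CLI sketch (the account and resource group names are hypothetical); the same configuration deployed to two regions is metered, and priced, separately.

# Hypothetical names; region, SKU, kind, and access tier all influence the meter rates.
az storage account create --name mystorageeastus01 --resource-group my-rg \
  --location eastus --sku Standard_LRS --kind StorageV2 --access-tier Hot

# The same settings in another region may be billed at a different rate.
az storage account create --name mystoragewesteu01 --resource-group my-rg \
  --location westeurope --sku Standard_LRS --kind StorageV2 --access-tier Hot
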
Consumption
Pay-as-you-go has been a consistent theme throughout, and that’s the cloud
payment model where you pay for the resources that you use during a billing
cycle. If you use more compute this cycle, you pay more. If you use less in
the current cycle, you pay less. It’s a straightforward pricing mechanism
that allows for maximum flexibility.

However, Azure also offers the ability to commit to using a set amount of
cloud resources in advance and receiving discounts on those “reserved”
resources. Many services, including databases, compute, and storage all
provide the option to commit to a level of use and receive a discount, in
some cases up to 72 percent.
When you reserve capacity, you’re committing to using and paying for a
certain amount of Azure resources during a given period (typically one or
three years). With pay-as-you-go as a backup, if you see a sudden surge in
demand that eclipses what you’ve pre-reserved, you just pay for the
additional resources in excess of your reservation. This model allows you to
recognize significant savings on reliable, consistent workloads while also
having the flexibility to rapidly increase your cloud footprint as the need
arises.

Maintenance
The flexibility of the cloud makes it possible to rapidly adjust resources
based on demand. Using resource groups can help keep all of your resources
organized. In order to control costs, it’s important to maintain your cloud
environment. For example, every time you provision a VM, additional
resources such as storage and networking are also provisioned. If you
deprovision the VM, those additional resources may not deprovision at the
same time, either intentionally or unintentionally. By keeping an eye on your
resources and making sure you’re not keeping around resources that are no
longer needed, you can help control cloud costs.

Geography
When you provision most resources in Azure, you need to define a region
where the resource deploys. Azure infrastructure is distributed globally,
which enables you to deploy your services centrally or closest to your
customers, or something in between. With this global deployment comes
global pricing differences. The cost of power, labor, taxes, and fees vary
depending on the location. Due to these variations, Azure resources can
differ in costs to deploy depending on the region.

Network traffic is also impacted based on geography. For example, it’s less
expensive to move information within Europe than to move information from
Europe to Asia or South America.

Network Traffic

Billing zones are a factor in determining the cost of some Azure services.

Bandwidth refers to data moving in and out of Azure datacenters. Some inbound data transfers (data going into Azure datacenters) are free. For
outbound data transfers (data leaving Azure datacenters), data transfer
pricing is based on zones.
A zone is a geographical grouping of Azure regions for billing purposes.
The bandwidth pricing page has additional information on pricing for data
ingress, egress, and transfer.

Subscription type
Some Azure subscription types also include usage allowances, which affect
costs.

For example, an Azure free trial subscription provides access to a number of Azure products that are free for 12 months. It also includes credit to spend
within your first 30 days of sign-up. You'll get access to more than 25
products that are always free (based on resource and region availability).

Azure Marketplace
Azure Marketplace lets you purchase Azure-based solutions and services
from third-party vendors. This could be a server with software preinstalled
and configured, or managed network firewall appliances, or connectors to
third-party backup services. When you purchase products through Azure
Marketplace, you may pay for not only the Azure services that you’re using,
but also the services or expertise of the third-party vendor. Billing structures
are set by the vendor.

All solutions available in Azure Marketplace are certified and compliant with
Azure policies and standards. The certification policies may vary based on
the service or solution type and Azure service involved. Commercial
marketplace certification policies has additional information on Azure
Marketplace certifications.

Compare the Pricing and Total Cost of Ownership calculators
The pricing calculator and the total cost of ownership (TCO) calculator are
two calculators that help you understand potential Azure expenses. Both
calculators are accessible from the internet, and both calculators allow you
to build out a configuration. However, the two calculators have very different
purposes.

Pricing calculator
The pricing calculator is designed to give you an estimated cost for
provisioning resources in Azure. You can get an estimate for individual
resources, build out a solution, or use an example scenario to see an
estimate of the Azure spend. The pricing calculator’s focus is on the cost of
provisioned resources in Azure.

Note

The Pricing calculator is for information purposes only. The prices are only an
estimate. Nothing is provisioned when you add resources to the pricing
calculator, and you won't be charged for any services you select.

With the pricing calculator, you can estimate the cost of any provisioned
resources, including compute, storage, and associated network costs. You
can even account for different storage options like storage type, access tier,
and redundancy.

TCO calculator
The TCO calculator is designed to help you compare the costs for running an
on-premises infrastructure compared to an Azure Cloud infrastructure. With
the TCO calculator, you enter your current infrastructure configuration,
including servers, databases, storage, and outbound network traffic. The TCO
calculator then compares the anticipated costs for your current environment
with an Azure environment supporting the same infrastructure requirements.

With the TCO calculator, you enter your configuration, add in assumptions
like power and IT labor costs, and are presented with an estimation of the
cost difference to run the same environment in your current datacenter or in
Azure.
Exercise - Estimate workload
costs by using the Pricing
calculator
In this exercise, you use the Pricing calculator to estimate the cost of running
a basic web application on Azure.

Start by defining which Azure services you need.

Note: The Pricing calculator is for information purposes only. The prices are
only an estimate, and you won't be charged for any services you select.

Define your requirements


Before you run the Pricing calculator, you need a sense of what Azure
services you need.

For a basic web application hosted in your datacenter, you might run a
configuration similar to the following.

An ASP.NET web application that runs on Windows. The web application provides information about product inventory and pricing. There are two
virtual machines that are connected through a central load balancer. The
web application connects to a SQL Server database that holds inventory and
pricing information.

To migrate to Azure, you might:

 Use Azure Virtual Machines instances, similar to the virtual machines used in
your datacenter.
 Use Azure Application Gateway for load balancing.
 Use Azure SQL Database to hold inventory and pricing information.

Here's a diagram that shows the basic configuration:

In practice, you would define your requirements in greater detail. But here
are some basic facts and requirements to get you started:

 The application is used internally. It's not accessible to customers.
 This application doesn't require a massive amount of computing power.
 The virtual machines and the database run all the time (730 hours per month).
 The network processes about 1 TB of data per month.
 The database doesn't need to be configured for high-performance workloads
and requires no more than 32 GB of storage.

Explore the Pricing calculator


Let's start with a quick tour of the Pricing calculator.

1. Go to the Pricing calculator.

2. Notice the following tabs:


 Products This is where you choose the Azure services that you want to
include in your estimate. You'll likely spend most of your time here.
 Example scenarios Here you'll find several reference architectures, or
common cloud-based solutions that you can use as a starting point.
 Saved estimates Here you'll find your previously saved estimates.
 FAQs Here you'll discover answers to frequently asked questions about
the Pricing calculator.

Estimate your solution


Here you add each Azure service that you need to the calculator. Then you
configure each service to fit your needs.

Tip: Make sure you have a clean calculator with nothing listed in the
estimate. You can reset the estimate by selecting the trash can icon next to
each item.

Add services to the estimate

1. On the Products tab, select the service from each of these categories:

Category Service
Compute Virtual Machines
Databases Azure SQL Database
Networking Application Gateway

2. Scroll to the bottom of the page. Each service is listed with its default
configuration.

Configure services to match your requirements

1. Under Virtual Machines, set these values:

Setting Value
Region West US
Operating system Windows
Type (OS Only)
Tier Standard
Instance D2 v3
Virtual machines 2 x 730 Hours

Leave the remaining settings at their current values.

2. Under Azure SQL Database, set these values:

Setting Value
Region West US
Type Single Database
Backup storage tier RA-GRS
Purchase model vCore
Service tier General Purpose
Compute tier Provisioned
Generation Gen 5
Instance 8 vCore

Leave the remaining settings at their current values.

3. Under Application Gateway, set these values:

Setting Value
Region West US
Tier Web Application Firewall
Size Medium
Gateway hours 2 x 730 Hours
Data processed 1 TB
Outbound data transfer 5 GB

Leave the remaining settings at their current values.

Review, share, and save your estimate


At the bottom of the page, you see the total estimated cost of running the
solution. You can change the currency type if you want.

At this point, you have a few options:

 Select Export to save your estimate as an Excel document.
 Select Save or Save as to save your estimate to the Saved Estimates tab
for later.
 Select Share to generate a URL so you can share the estimate with your team.

You now have a cost estimate that you can share with your team. You can
make adjustments as you discover any changes to your requirements.

Experiment with some of the options you worked with here, or create a
purchase plan for a workload you want to run on Azure.

Exercise - Compare workload costs using the TCO calculator
In this exercise, you use the Total Cost of Ownership (TCO) Calculator to
compare the cost of running a sample workload in your datacenter versus on
Azure.

Assume you're considering moving some of your on-premises workloads to the cloud. But first, you need to understand more about moving from a
relatively fixed cost structure to an ongoing monthly cost structure.

You'll need to investigate whether there are any potential cost savings in
moving your datacenter to the cloud over the next three years. You need to
take into account all of the potentially hidden costs involved with operating
on-premises and in the cloud.

Instead of manually collecting everything you think might be included, you use the TCO Calculator as a starting point. You adjust the provided cost
assumptions to match your on-premises environment.

Note: Remember, you don't need an Azure subscription to work with the
TCO Calculator.

Let's say that:

 You run two sets, or banks, of 50 virtual machines (VMs) in each bank.
 The first bank of VMs runs Windows Server under Hyper-V virtualization.
 The second bank of VMs runs Linux under VMware virtualization.
 There's also a storage area network (SAN) with 60 TB of disk storage.
 You consume an estimated 15 TB of outbound network bandwidth each month.
 There are also a number of databases involved, but for now, you'll omit those
details.

Recall that the TCO Calculator involves three steps:

Define your workloads


Enter the specifications of your on-premises infrastructure into the TCO
Calculator.

1. Go to the TCO Calculator.

2. Under Define your workloads, select Add server workload to create a row for your bank of Windows Server VMs.

3. Under Servers, set the value for each of these settings:

Setting Value
Name Servers: Windows VMs
Workload Windows/Linux Server
Environment Virtual Machines
Operating system Windows
Operating System License Datacenter
VMs 50
Virtualization Hyper-V
Core(s) 8
RAM (GB) 16
Optimize by CPU
Windows Server 2008/2008 R2 Off
4. Select Add server workload to create a second row for your bank of
Linux VMs. Then specify these settings:

Setting Value
Name Servers: Linux VMs
Workload Windows/Linux Server
Environment Virtual Machines
Operating system Linux
VMs 50
Virtualization VMware
Core(s) 8
RAM (GB) 16
Optimize by CPU

5. Under Storage, select Add storage. Then specify these settings:

Setting Value
Name Server Storage
Storage type Local Disk/SAN
Disk type HDD
Capacity 60 TB
Backup 120 TB
Archive 0 TB

6. Under Networking, set Outbound bandwidth to 15 TB.

7. Select Next.

Adjust assumptions
Here, you specify your currency. For brevity, you leave the remaining fields
at their default values.

In practice, you would adjust any cost assumptions and make any
adjustments to match your current on-premises environment.

1. At the top of the page, select your currency. This example uses US Dollar ($).
2. Select Next.
View the report
Take a moment to review the generated report.

Remember, you've been tasked to investigate cost savings for your European datacenter over the next three years.

To make these adjustments:

1. Set Timeframe to 3 Years.

2. Set Region to North Europe.

Scroll to the summary at the bottom. You see a comparison of running your
workloads in the datacenter versus on Azure.

Select Download to download or print a copy of the report in PDF format.

Great work. You now have the information that you can share with your Chief
Financial Officer. If you need to make adjustments, you can revisit the TCO
Calculator to generate a fresh report.

Describe the Microsoft Cost Management tool
Microsoft Azure is a global cloud provider, meaning you can provision
resources anywhere in the world. You can provision resources rapidly to
meet a sudden demand, or to test out a new feature, or by accident. If you
accidentally provision new resources, you may not be aware of them until it’s
time for your invoice. Cost Management is a service that helps avoid those
situations.

What is Cost Management?


Cost Management provides the ability to quickly check Azure resource costs,
create alerts based on resource spend, and create budgets that can be used
to automate management of resources.

Cost analysis is a subset of Cost Management that provides a quick visual for
your Azure costs. Using cost analysis, you can quickly view the total cost in a
variety of different ways, including by billing cycle, region, resource, and so
on.
You use cost analysis to explore and analyze your organizational costs. You
can view aggregated costs by organization to understand where costs are
accrued and to identify spending trends. And you can see accumulated costs
over time to estimate monthly, quarterly, or even yearly cost trends against
a budget.

Cost alerts
Cost alerts provide a single location to quickly check on all of the different
alert types that may show up in the Cost Management service. The three
types of alerts that may show up are:

 Budget alerts
 Credit alerts
 Department spending quota alerts.

Budget alerts

Budget alerts notify you when spending, based on usage or cost, reaches or
exceeds the amount defined in the alert condition of the budget. Cost
Management budgets are created using the Azure portal or the Azure
Consumption API.

In the Azure portal, budgets are defined by cost. When you use the Azure Consumption API, budgets can be defined by cost or by consumption usage. Budget
alerts support both cost-based and usage-based budgets. Budget alerts are
generated automatically whenever the budget alert conditions are met. You
can view all cost alerts in the Azure portal. Whenever an alert is generated, it
appears in cost alerts. An alert email is also sent to the people in the alert
recipients list of the budget.

Credit alerts

Credit alerts notify you when your Azure credit monetary commitments are
consumed. Monetary commitments are for organizations with Enterprise
Agreements (EAs). Credit alerts are generated automatically at 90% and at
100% of your Azure credit balance. Whenever an alert is generated, it's
reflected in cost alerts, and in the email sent to the account owners.

Department spending quota alerts

Department spending quota alerts notify you when department spending reaches a fixed threshold of the quota, for example, 50 percent or 75 percent. Spending quotas are configured in the EA portal. Whenever a threshold is met, an email is generated to the department owners and the alert appears in cost alerts.

Budgets
A budget is where you set a spending limit for Azure. You can set budgets
based on a subscription, resource group, service type, or other criteria. When
you set a budget, you will also set a budget alert. When the budget hits the
budget alert level, it will trigger a budget alert that shows up in the cost
alerts area. If configured, budget alerts will also send an email notification
that a budget alert threshold has been triggered.

A more advanced use of budgets enables budget conditions to trigger automation that suspends or otherwise modifies resources once the trigger
condition has occurred.
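
As a rough sketch of how a budget might be created outside the portal, the Azure CLI exposes a consumption budget command; the names, amount, and dates below are placeholders, and the available parameters can vary by CLI version, so treat this as illustrative rather than definitive.

# Illustrative only: a monthly cost budget of 1,000 (in your billing currency) for one resource group.
az consumption budget create \
  --budget-name monthly-rg-budget \
  --resource-group my-rg \
  --amount 1000 \
  --category cost \
  --time-grain monthly \
  --start-date 2025-01-01 \
  --end-date 2025-12-31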

Describe the purpose of tags


As your cloud usage grows, it's increasingly important to stay organized. A
good organization strategy helps you understand your cloud usage and can
help you manage costs.

One way to organize related resources is to place them in their own subscriptions. You can also use resource groups to manage related
resources. Resource tags are another way to organize resources. Tags
provide extra information, or metadata, about your resources. This metadata
is useful for:

 Resource management Tags enable you to locate and act on resources that
are associated with specific workloads, environments, business units, and
owners.
 Cost management and optimization Tags enable you to group resources so
that you can report on costs, allocate internal cost centers, track budgets, and
forecast estimated cost.
 Operations management Tags enable you to group resources according to
how critical their availability is to your business. This grouping helps you
formulate service-level agreements (SLAs). An SLA is an uptime or
performance guarantee between you and your users.
 Security Tags enable you to classify data by its security level, such as public
or confidential.
 Governance and regulatory compliance Tags enable you to identify
resources that align with governance or regulatory compliance requirements,
such as ISO 27001. Tags can also be part of your standards enforcement
efforts. For example, you might require that all resources be tagged with an
owner or department name.
 Workload optimization and automation Tags can help you visualize all of
the resources that participate in complex deployments. For example, you
might tag a resource with its associated workload or application name and use
software such as Azure DevOps to perform automated tasks on those
resources.

How do I manage resource tags?


You can add, modify, or delete resource tags through Windows PowerShell,
the Azure CLI, Azure Resource Manager templates, the REST API, or the
Azure portal.
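
For example, a minimal Azure CLI sketch for working with tags might look like the following; the resource group, subscription ID, and tag values are hypothetical.

# Apply tags while creating a resource group (hypothetical names and values).
az group create --name msftlearn-core-infra-rg --location eastus \
  --tags Environment=Prod CostCenter=12345 Owner=ops-team

# Merge an additional tag onto the existing resource group without removing the others.
az tag update \
  --resource-id /subscriptions/<subscription-id>/resourceGroups/msftlearn-core-infra-rg \
  --operation Merge --tags AppName=InventoryApp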

You can use Azure Policy to enforce tagging rules and conventions. For
example, you can require that certain tags be added to new resources as
they're provisioned. You can also define rules that reapply tags that have
been removed. Resources don't inherit tags from subscriptions and resource
groups, meaning that you can apply tags at one level and not have those
tags automatically show up at a different level, allowing you to create
custom tagging schemas that change depending on the level (resource,
resource group, subscription, and so on).

An example tagging structure

A resource tag consists of a name and a value. You can assign one or more
tags to each Azure resource.

Name Value
AppName The name of the application that the resource is part of.
CostCenter The internal cost center code.
Owner The name of the business owner who's responsible for the resource.
Environment An environment name, such as "Prod," "Dev," or "Test."
Impact How important the resource is to business operations, such as "Mission-critical," "High-impact," or "Low-impact."

Keep in mind that you don't need to enforce that a specific tag is present on
all of your resources. For example, you might decide that only mission-
critical resources have the Impact tag. Resources without that tag would then not be considered mission-critical.

Describe features and tools in Azure for governance and compliance
This module introduces you to tools that can help with governance and
compliance within Azure.

Introduction
In this module, you’ll be introduced to some of the features and tools you
can use to help with governance of your Azure environment. You’ll also learn
about tools you can use to help keep resources in compliance with corporate
or regulatory requirements.

Learning objectives
After completing this module, you’ll be able to:

 Describe the purpose of Microsoft Purview.
 Describe the purpose of Azure Policy.
 Describe the purpose of resource locks.
 Describe the purpose of the Service Trust portal.

Describe the purpose of Microsoft Purview
Microsoft Purview is a family of data governance, risk, and compliance
solutions that helps you get a single, unified view into your data. Microsoft
Purview brings insights about your on-premises, multicloud, and software-as-
a-service data together.

With Microsoft Purview, you can stay up-to-date on your data landscape
thanks to:

 Automated data discovery
 Sensitive data classification
 End-to-end data lineage

Two main solution areas comprise Microsoft Purview: risk and compliance and unified data governance.

Microsoft Purview risk and compliance solutions
Microsoft 365 features as a core component of the Microsoft Purview risk and
compliance solutions. Microsoft Teams, OneDrive, and Exchange are just
some of the Microsoft 365 services that Microsoft Purview uses to help
manage and monitor your data. Microsoft Purview, by managing and
monitoring your data, is able to help your organization:

 Protect sensitive data across clouds, apps, and devices.
 Identify data risks and manage regulatory compliance requirements.
 Get started with regulatory compliance.

Unified data governance


Microsoft Purview has robust, unified data governance solutions that help
manage your on-premises, multicloud, and software as a service data.
Microsoft Purview’s robust data governance capabilities enable you to
manage your data stored in Azure, SQL and Hive databases, locally, and
even in other clouds like Amazon S3.

Microsoft Purview’s unified data governance helps your organization:

 Create an up-to-date map of your entire data estate that includes data
classification and end-to-end lineage.
 Identify where sensitive data is stored in your estate.
 Create a secure environment for data consumers to find valuable data.
 Generate insights about how your data is stored and used.
 Manage access to the data in your estate securely and at scale.

Describe the purpose of Azure Policy
How do you ensure that your resources stay compliant? Can you be alerted if
a resource's configuration has changed?

Azure Policy is a service in Azure that enables you to create, assign, and
manage policies that control or audit your resources. These policies enforce
different rules across your resource configurations so that those
configurations stay compliant with corporate standards.

How does Azure Policy define policies?


Azure Policy enables you to define both individual policies and groups of
related policies, known as initiatives. Azure Policy evaluates your resources
and highlights resources that aren't compliant with the policies you've
created. Azure Policy can also prevent noncompliant resources from being
created.

Azure Policies can be set at each level, enabling you to set policies on a
specific resource, resource group, subscription, and so on. Additionally,
Azure Policies are inherited, so if you set a policy at a high level, it will
automatically be applied to all of the groupings that fall within the parent.
For example, if you set an Azure Policy on a resource group, all resources
created within that resource group will automatically receive the same
policy.
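
As a hedged illustration of assigning a policy at the resource group level with the Azure CLI, the sketch below assigns a built-in definition; the definition ID, parameter values, and resource group name are placeholders you would replace with your own (you can list real definitions with az policy definition list).

# Illustrative assignment of a built-in "allowed locations" style policy to one resource group.
az policy assignment create \
  --name restrict-locations \
  --resource-group my-rg \
  --policy <policy-definition-name-or-id> \
  --params '{"listOfAllowedLocations": {"value": ["eastus", "westeurope"]}}'

# Review compliance results for the resource group.
az policy state summarize --resource-group my-rg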

Azure Policy comes with built-in policy and initiative definitions for Storage,
Networking, Compute, Security Center, and Monitoring. For example, if you
define a policy that allows only a certain size for the virtual machines (VMs)
to be used in your environment, that policy is invoked when you create a
new VM and whenever you resize existing VMs. Azure Policy also evaluates
and monitors all current VMs in your environment, including VMs that were
created before the policy was created.

In some cases, Azure Policy can automatically remediate noncompliant resources and configurations to ensure the integrity of the state of the resources. For example, if all resources in a certain resource group should be tagged with the AppName tag and a value of "SpecialOrders," Azure Policy will automatically apply that tag if it is missing. However, you still retain full control of your environment. If you have a specific resource that you don't want Azure Policy to automatically fix, you can flag that resource as an exception, and the policy won't automatically fix that resource.

Azure Policy also integrates with Azure DevOps by applying any continuous
integration and delivery pipeline policies that pertain to the pre-deployment
and post-deployment phases of your applications.

What are Azure Policy initiatives?


An Azure Policy initiative is a way of grouping related policies together. The
initiative definition contains all of the policy definitions to help track your
compliance state for a larger goal.

For example, Azure Policy includes an initiative named Enable Monitoring in Azure Security Center. Its goal is to monitor all available security
recommendations for all Azure resource types in Azure Security Center.
Under this initiative, the following policy definitions are included:

 Monitor unencrypted SQL Database in Security Center This policy
monitors for unencrypted SQL databases and servers.
 Monitor OS vulnerabilities in Security Center This policy monitors
servers that don't satisfy the configured OS vulnerability baseline.
 Monitor missing Endpoint Protection in Security Center This
policy monitors for servers that don't have an installed endpoint
protection agent.

In fact, the Enable Monitoring in Azure Security Center initiative contains over 100 separate policy definitions.

Describe the purpose of resource locks
A resource lock prevents resources from being accidentally deleted or
changed.

Even with Azure role-based access control (Azure RBAC) policies in place,
there's still a risk that people with the right level of access could delete
critical cloud resources. Resource locks prevent resources from being
deleted or updated, depending on the type of lock. Resource locks can be
applied to individual resources, resource groups, or even an entire
subscription. Resource locks are inherited, meaning that if you place a
resource lock on a resource group, all of the resources within the resource
group will also have the resource lock applied.

Types of Resource Locks


There are two types of resource locks: one that prevents users from deleting a resource, and one that prevents users from changing or deleting a resource.

 Delete means authorized users can still read and modify a resource, but
they can't delete the resource.
 ReadOnly means authorized users can read a resource, but they can't
delete or update the resource. Applying this lock is similar to restricting
all authorized users to the permissions granted by the Reader role.

How do I manage resource locks?


You can manage resource locks from the Azure portal, PowerShell, the Azure
CLI, or from an Azure Resource Manager template.
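
For instance, a minimal Azure CLI sketch (with hypothetical names) for creating, listing, and removing locks:

# Prevent deletion of everything in a resource group.
az lock create --name no-delete --lock-type CanNotDelete --resource-group my-rg

# Make a single storage account read-only.
az lock create --name read-only --lock-type ReadOnly \
  --resource-group my-rg \
  --resource-name mystorageaccount \
  --resource-type Microsoft.Storage/storageAccounts

# List the locks in the group, then remove one.
az lock list --resource-group my-rg --output table
az lock delete --name no-delete --resource-group my-rg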

To view, add, or delete locks in the Azure portal, go to the Settings section of any resource's menu.

How do I delete or change a locked resource?


Although locking helps prevent accidental changes, you can still make
changes by following a two-step process.

To modify a locked resource, you must first remove the lock. After you
remove the lock, you can apply any action you have permissions to perform.
Resource locks apply regardless of RBAC permissions. Even if you're an
owner of the resource, you must still remove the lock before you can perform
the blocked activity.

Exercise - Configure a resource lock
In this exercise, you’ll create a resource and configure a resource lock.
Storage accounts are one of the easiest resource types for quickly seeing the impact of a resource lock, so you’ll use a storage account for this exercise.

This exercise is a Bring your own subscription exercise, meaning you’ll need
to provide your own Azure subscription to complete the exercise. Don’t worry
though, the entire exercise can be completed for free with the 12-month free
services when you sign up for an Azure account.

For help with signing up for an Azure account, see the Create an Azure
account learning module.

Once you’ve created your free account, follow the steps below. If you don’t
have an Azure account, you can review the steps to see the process for
adding a simple resource lock to a resource.

Task 1: Create a resource


In order to apply a resource lock, you have to have a resource created in
Azure. The first task focuses on creating a resource that you can then lock in
subsequent tasks.

1. Sign in to the Azure portal at https://portal.azure.com

2. Select Create a resource.

3. Under Categories, select Storage.

4. Under Storage Account, select Create.

5. On the Basics tab of the Create storage account blade, fill in the
following information. Leave the defaults for everything else.

Setting Value
Resource group Create new
Storage account name enter a unique storage account name
Location default
Performance Standard
Redundancy Locally redundant storage (LRS)

6. Select Review + Create to review your storage account settings and allow Azure to validate the configuration.

7. Once validated, select Create. Wait for the notification that the account
was successfully created.

8. Select Go to resource.
Task 2: Apply a read-only resource lock
In this task you apply a read-only resource lock to the storage account. What
impact do you think that will have on the storage account?

1. Scroll down until you find the Settings section of the blade on the left of
the screen.

2. Select Locks.

3. Select + Add.

4. Enter a Lock name.

5. Verify the Lock type is set to Read-only.

6. Select OK.

Task 3: Add a container to the storage account
In this task, you add a container to the storage account; the container is where you can store your blobs.
1. Scroll up until you find the Data storage section of the blade on the left
of the screen.

2. Select Containers.

3. Select + Container.

4. Enter a container name and select Create.

5. You should receive an error message: Failed to create storage container.

Note
The error message lets you know that you couldn't create a storage
container because a lock is in place. The read-only lock prevents any create
or update operations on the storage account, so you're unable to create a
storage container.

Task 4: Modify the resource lock and create a storage container
1. Scroll down until you find the Settings section of the blade on the left of
the screen.

2. Select Locks.

3. Select the read-only resource lock you created.

4. Change the Lock type to Delete and select OK.


5. Scroll up until you find the Data storage section of the blade on the left
of the screen.

6. Select Containers.

7. Select + Container.

8. Enter a container name and select Create.

9. Your storage container should appear in your list of containers.

You can now understand how the read-only lock prevented you from adding
a container to your storage account. Once the lock type was changed (you
could have removed it instead), you were able to add a container.
Task 5: Delete the storage account
You'll actually do this last task twice. Remember that there is a delete lock
on the storage account, so you won't actually be able to delete the storage
account yet.

1. Scroll up until you find Overview at the top of the blade on the left of the
screen.

2. Select Overview.

3. Select Delete.

You should get a notification letting you know you can't delete the resource
because it has a delete lock. In order to delete the storage account, you'll
need to remove the delete lock.
Task 6: Remove the delete lock and delete
the storage account
In the final task, you remove the resource lock and delete the storage
account from your Azure account. This step is important. You want to make
sure you don't have any idle resources just sitting in your account.

1. Select your storage account name in the breadcrumb at the top of the
screen.

2. Scroll down until you find the Settings section of the blade on the left of
the screen.

3. Select Locks.

4. Select Delete.

5. Select Home in the breadcrumb at the top of the screen.

6. Select Storage accounts.

7. Select the storage account you used for this exercise.

8. Select Delete.

9. To prevent accidental deletion, Azure prompts you to enter the name of the storage account you want to delete. Enter the name of the storage
account and select Delete.
10. You should receive a message that the storage account was deleted. If
you go to Home > Storage accounts, you should see that the storage
account you created for this exercise is gone.

Congratulations! You've completed configuring, updating, and removing a resource lock on an Azure resource.

Important: Make sure you complete Task 6, the removal of the storage
account. You are solely responsible for the resources in your Azure account.
Make sure you clean up your account after completing this exercise.
Describe the purpose of the
Service Trust portal
The Microsoft Service Trust Portal is a portal that provides access to various
content, tools, and other resources about Microsoft security, privacy, and
compliance practices.

The Service Trust Portal contains details about Microsoft's implementation of controls and processes that protect our cloud services and the customer data
therein. To access some of the resources on the Service Trust Portal, you
must sign in as an authenticated user with your Microsoft cloud services
account (Microsoft Entra organization account). You'll need to review and
accept the Microsoft non-disclosure agreement for compliance materials.

Accessing the Service Trust Portal


You can access the Service Trust Portal
at https://servicetrust.microsoft.com/.

The Service Trust Portal features and content are accessible from the main
menu. The categories on the main menu are:

 Service Trust Portal provides a quick access hyperlink to return to the
Service Trust Portal home page.
 My Library lets you save (or pin) documents to quickly access them on your
My Library page. You can also set up to receive notifications when documents
in your My Library are updated.
 All Documents is a single landing place for documents on the service trust
portal. From All Documents, you can pin documents to have them show up in
your My Library.

Note: Service Trust Portal reports and documents are available to download
for at least 12 months after publishing or until a new version of the document becomes available.

Describe features and tools for managing and deploying Azure resources
This module covers tools that help you manage your Azure and on-premises
resources.

Introduction
This module introduces you to features and tools for managing and
deploying Azure resources. You learn about the Azure portal (a graphic
interface for managing Azure resources), the command line, and scripting
tools that help deploy or configure resources. You also learn about Azure
services that help you manage your on-premises and multicloud
environments from within Azure.

Learning objectives
After completing this module, you’ll be able to:

 Describe the Azure portal.
 Describe Azure Cloud Shell, including Azure CLI and Azure PowerShell.
 Describe the purpose of Azure Arc.
 Describe Azure Resource Manager (ARM), ARM templates, and Bicep.

Describe tools for interacting with Azure
To get the most out of Azure, you need a way to interact with the Azure
environment, the management groups, subscriptions, resource groups,
resources, and so on. Azure provides multiple tools for managing your
environment, including the:

 Azure portal
 Azure PowerShell
 Azure Command Line Interface (CLI)

What is the Azure portal?


The Azure portal is a web-based, unified console that provides an alternative
to command-line tools. With the Azure portal, you can manage your Azure
subscription by using a graphical user interface. You can:

 Build, manage, and monitor everything from simple web apps to complex
cloud deployments
 Create custom dashboards for an organized view of resources
 Configure accessibility options for an optimal experience

The following video introduces you to the Azure portal:

The Azure portal is designed for resiliency and continuous availability. It maintains a presence in every Azure datacenter. This configuration makes
the Azure portal resilient to individual datacenter failures and avoids network
slowdowns by being close to users. The Azure portal updates continuously
and requires no downtime for maintenance activities.

Azure Cloud Shell

Azure Cloud Shell is a browser-based shell tool that allows you to create,
configure, and manage Azure resources using a shell. Azure Cloud Shell supports both Azure PowerShell and the Azure Command Line Interface (CLI), which uses Bash.

You can access Azure Cloud Shell via the Azure portal by selecting the Cloud
Shell icon.
Azure Cloud Shell has several features that make it a unique offering to
support you in managing Azure. Some of those features are:

 It is a browser-based shell experience, with no local installation or
configuration required.
 It is authenticated to your Azure credentials, so when you log in it inherently
knows who you are and what permissions you have.
 You choose the shell you’re most familiar with; Azure Cloud Shell supports both
Azure PowerShell and the Azure CLI (which uses Bash).

What is Azure PowerShell?


Azure PowerShell is a shell with which developers, DevOps, and IT
professionals can run commands called command-lets (cmdlets). These
commands call the Azure REST API to perform management tasks in Azure.
Cmdlets can be run independently to handle one-off changes, or they may be
combined to help orchestrate complex actions such as:

 The routine setup, teardown, and maintenance of a single resource or multiple
connected resources.
 The deployment of an entire infrastructure, which might contain dozens or
hundreds of resources, from imperative code.

Capturing the commands in a script makes the process repeatable and automatable.

In addition to being available via Azure Cloud Shell, you can install and
configure Azure PowerShell on Windows, Linux, and Mac platforms.

What is the Azure CLI?


The Azure CLI is functionally equivalent to Azure PowerShell, with the
primary difference being the syntax of commands. While Azure PowerShell
uses PowerShell commands, the Azure CLI uses Bash commands.

The Azure CLI provides the same benefits of handling discrete tasks or
orchestrating complex operations through code. It’s also installable on
Windows, Linux, and Mac platforms, as well as through Azure Cloud Shell.

Due to the similarities in capabilities and access between Azure PowerShell and the Bash-based Azure CLI, it mainly comes down to which language
you’re most familiar with.

Describe the purpose of Azure Arc


Managing hybrid and multi-cloud environments can rapidly get complicated.
Azure provides a host of tools to provision, configure, and monitor Azure
resources. What about the on-premises resources in a hybrid configuration or
the cloud resources in a multi-cloud configuration?

By utilizing Azure Resource Manager (ARM), Azure Arc lets you extend your Azure
compliance and monitoring to your hybrid and multi-cloud configurations.
Azure Arc simplifies governance and management by delivering a consistent
multi-cloud and on-premises management platform.

Azure Arc provides a centralized, unified way to:

 Manage your entire environment together by projecting your existing
non-Azure resources into ARM.
 Manage multi-cloud and hybrid virtual machines, Kubernetes clusters,
and databases as if they are running in Azure.
 Use familiar Azure services and management capabilities, regardless of
where they live.
 Continue using traditional ITOps while introducing DevOps practices to
support new cloud and native patterns in your environment.
 Configure custom locations as an abstraction layer on top of Azure Arc-
enabled Kubernetes clusters and cluster extensions.

What can Azure Arc do outside of Azure?


Currently, Azure Arc allows you to manage the following resource types
hosted outside of Azure:

 Servers
 Kubernetes clusters
 Azure data services
 SQL Server
 Virtual machines (preview)

Describe Azure Resource Manager and Azure ARM templates
Azure Resource Manager (ARM) is the deployment and management service
for Azure. It provides a management layer that enables you to create,
update, and delete resources in your Azure account. Anytime you do
anything with your Azure resources, ARM is involved.

When a user sends a request from any of the Azure tools, APIs, or SDKs, ARM
receives the request. ARM authenticates and authorizes the request. Then,
ARM sends the request to the Azure service, which takes the requested
action. You see consistent results and capabilities in all the different tools
because all requests are handled through the same API.

Azure Resource Manager benefits


With Azure Resource Manager, you can:

 Manage your infrastructure through declarative templates rather than scripts.
A Resource Manager template is a JSON file that defines what you want to
deploy to Azure.
 Deploy, manage, and monitor all the resources for your solution as a group,
rather than handling these resources individually.
 Re-deploy your solution throughout the development life-cycle and have
confidence your resources are deployed in a consistent state.
 Define the dependencies between resources, so they're deployed in the
correct order.
 Apply access control to all services because RBAC is natively integrated into
the management platform.
 Apply tags to resources to logically organize all the resources in your
subscription.
 Clarify your organization's billing by viewing costs for a group of resources that
share the same tag.

Infrastructure as code
Infrastructure as code is a concept where you manage your infrastructure as
lines of code. At an introductory level, it's things like using Azure Cloud Shell,
Azure PowerShell, or the Azure CLI to manage and configure your resources.
As you get more comfortable in the cloud, you can use the infrastructure as
code concept to manage entire deployments using repeatable templates and
configurations. ARM templates and Bicep are two examples of using
infrastructure as code with the Azure Resource Manager to maintain your
environment.

ARM templates

By using ARM templates, you can describe the resources you want to use in
a declarative JSON format. With an ARM template, the deployment code is
verified before any code is run. This ensures that the resources will be
created and connected correctly. The template then orchestrates the
creation of those resources in parallel. That is, if you need 50 instances of
the same resource, all 50 instances are created at the same time.

Ultimately, the developer, DevOps professional, or IT professional needs only to define the desired state and configuration of each resource in the ARM
template, and the template does the rest. Templates can even execute
PowerShell and Bash scripts before or after the resource has been set up.

Benefits of using ARM templates

ARM templates provide many benefits when planning for deploying Azure
resources. Some of those benefits include:

 Declarative syntax: ARM templates allow you to create and deploy an entire
Azure infrastructure declaratively. Declarative syntax means you declare what
you want to deploy but don’t need to write the actual programming commands
and sequence to deploy the resources.
 Repeatable results: Repeatedly deploy your infrastructure throughout the
development lifecycle and have confidence your resources are deployed in a
consistent manner. You can use the same ARM template to deploy multiple
dev/test environments, knowing that all the environments are the same.
 Orchestration: You don't have to worry about the complexities of ordering
operations. Azure Resource Manager orchestrates the deployment of
interdependent resources, so they're created in the correct order. When
possible, Azure Resource Manager deploys resources in parallel, so your
deployments finish faster than serial deployments. You deploy the template
through one command, rather than through multiple imperative commands.
 Modular files: You can break your templates into smaller, reusable
components and link them together at deployment time. You can also nest one
template inside another template. For example, you could create a template
for a VM stack, and then nest that template inside of templates that deploy
entire environments, and that VM stack will consistently be deployed in each of
the environment templates.
 Extensibility: With deployment scripts, you can add PowerShell or Bash
scripts to your templates. The deployment scripts extend your ability to set up
resources during deployment. A script can be included in the template or
stored in an external source and referenced in the template. Deployment
scripts give you the ability to complete your end-to-end environment setup in a
single ARM template.

Bicep

Bicep is a language that uses declarative syntax to deploy Azure resources. A Bicep file defines the infrastructure and configuration. Then, ARM deploys
that environment based on your Bicep file. While similar to an ARM template,
which is written in JSON, Bicep files tend to use a simpler, more concise style.

Some benefits of Bicep are:

 Support for all resource types and API versions: Bicep immediately
supports all preview and GA versions for Azure services. As soon as a resource
provider introduces new resource types and API versions, you can use them in
your Bicep file. You don't have to wait for tools to be updated before using the
new services.
 Simple syntax: When compared to the equivalent JSON template, Bicep files
are more concise and easier to read. Bicep requires no previous knowledge of
programming languages. Bicep syntax is declarative and specifies which
resources and resource properties you want to deploy.
 Repeatable results: Repeatedly deploy your infrastructure throughout the
development lifecycle and have confidence your resources are deployed in a
consistent manner. Bicep files are idempotent, which means you can deploy
the same file many times and get the same resource types in the same state.
You can develop one file that represents the desired state, rather than
developing lots of separate files to represent updates.
 Orchestration: You don't have to worry about the complexities of ordering
operations. Resource Manager orchestrates the deployment of interdependent
resources so they're created in the correct order. When possible, Resource
Manager deploys resources in parallel so your deployments finish faster than
serial deployments. You deploy the file through one command, rather than
through multiple imperative commands.
 Modularity: You can break your Bicep code into manageable parts by using
modules. The module deploys a set of related resources. Modules enable you
to reuse code and simplify development. Add the module to a Bicep file
anytime you need to deploy those resources.
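
To connect both formats back to the tooling covered earlier, here's a hedged Azure CLI sketch of deploying a template to a resource group; main.bicep, the resource group name, and the parameter are assumptions for illustration (the same command also accepts an ARM template JSON file).

# Preview the changes a template would make, without deploying anything.
az deployment group what-if --resource-group my-rg \
  --template-file main.bicep --parameters environmentName=dev

# Deploy the template; ARM works out the ordering and parallelism.
az deployment group create --resource-group my-rg \
  --template-file main.bicep --parameters environmentName=dev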

Describe monitoring tools in Azure
This module covers tools that you can use to monitor your Azure
environment.

Introduction
In this module, you’ll be introduced to tools that help you monitor your
environment and applications, both in Azure and in on-premises or
multicloud environments.

Learning objectives
After completing this module, you’ll be able to:

 Describe the purpose of Azure Advisor.
 Describe Azure Service Health.
 Describe Azure Monitor, including Azure Log Analytics, Azure Monitor
Alerts, and Application Insights.

Describe the purpose of Azure Advisor
Azure Advisor evaluates your Azure resources and makes recommendations
to help improve reliability, security, and performance, achieve operational
excellence, and reduce costs. Azure Advisor is designed to help you save
time on cloud optimization. The recommendation service includes suggested
actions you can take right away, postpone, or dismiss.

The recommendations are available via the Azure portal and the API, and you
can set up notifications to alert you to new recommendations.

When you're in the Azure portal, the Advisor dashboard displays personalized
recommendations for all your subscriptions. You can use filters to select
recommendations for specific subscriptions, resource groups, or services.
The recommendations are divided into five categories:

 Reliability is used to ensure and improve the continuity of your
business-critical applications.
 Security is used to detect threats and vulnerabilities that might lead to
security breaches.
 Performance is used to improve the speed of your applications.
 Operational Excellence is used to help you achieve process and
workflow efficiency, resource manageability, and deployment best
practices.
 Cost is used to optimize and reduce your overall Azure spending.

The following image shows the Azure Advisor dashboard.
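
The recommendations can also be pulled from the command line; a minimal Azure CLI sketch:

# List Advisor recommendations for the current subscription, filtered to the Cost category.
az advisor recommendation list --category Cost --output table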

Describe Azure Service Health


Microsoft Azure provides a global cloud solution to help you manage your
infrastructure needs, reach your customers, innovate, and adapt rapidly.
Knowing the status of the global Azure infrastructure and your individual
resources could seem like a daunting task. Azure Service Health helps you
keep track of Azure resources, both your specifically deployed resources and the overall status of Azure. Azure Service Health does this by combining
three different Azure services:

 Azure Status is a broad picture of the status of Azure globally. Azure
status informs you of service outages in Azure on the Azure Status page.
The page is a global view of the health of all Azure services across all
Azure regions. It’s a good reference for incidents with widespread
impact.
 Service Health provides a narrower view of Azure services and
regions. It focuses on the Azure services and regions you're using. This
is the best place to look for service impacting communications about
outages, planned maintenance activities, and other health advisories
because the authenticated Service Health experience knows which
services and resources you currently use. You can even set up Service
Health alerts to notify you when service issues, planned maintenance, or
other changes may affect the Azure services and regions you use.
 Resource Health is a tailored view of your actual Azure resources. It
provides information about the health of your individual cloud resources,
such as a specific virtual machine instance. Using Azure Monitor, you
can also configure alerts to notify you of availability changes to your
cloud resources.

By using Azure Status, Service Health, and Resource Health, Azure Service Health gives you a complete view of your Azure environment, all the way
from the global status of Azure services and regions down to specific
resources. Additionally, historical alerts are stored and accessible for later
review. Something you initially thought was a simple anomaly that turned
into a trend, can readily be reviewed and investigated thanks to the
historical alerts.

Finally, in the event that a workload you’re running is impacted by an event, Azure Service Health provides links to support.

Describe Azure Monitor


Azure Monitor is a platform for collecting data on your resources, analyzing
that data, visualizing the information, and even acting on the results. Azure
Monitor can monitor Azure resources, your on-premises resources, and even
multi-cloud resources like virtual machines hosted with a different cloud
provider.

The following diagram illustrates just how comprehensive Azure Monitor is:

On the left is a list of the sources of logging and metric data that can be
collected at every layer in your application architecture, from application to
operating system and network.

In the center, the logging and metric data are stored in central repositories.
On the right, the data is used in several ways. You can view real-time and
historical performance across each layer of your architecture or aggregated
and detailed information. The data is displayed at different levels for
different audiences. You can view high-level reports on the Azure Monitor
Dashboard or create custom views by using Power BI and Kusto queries.

Additionally, you can use the data to help you react to critical events in real
time, through alerts delivered to teams via SMS, email, and so on. Or you
can use thresholds to trigger autoscaling functionality to scale to meet the
demand.

Azure Log Analytics


Azure Log Analytics is the tool in the Azure portal where you’ll write and run
log queries on the data gathered by Azure Monitor. Log Analytics is a robust
tool that supports both simple and complex queries, as well as data analysis. You can
write a simple query that returns a set of records and then use features of
Log Analytics to sort, filter, and analyze the records. You can write an
advanced query to perform statistical analysis and visualize the results in a
chart to identify a particular trend. Whether you work with the results of your
queries interactively or use them with other Azure Monitor features such as
log query alerts or workbooks, Log Analytics is the tool that you're going to
use to write and test those queries.
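
If you want to run the same kind of query from the command line instead of the portal, the Azure CLI can submit a Kusto query to a workspace; the workspace GUID below is a placeholder, and the Heartbeat table assumes you have agents reporting into the workspace.

# Run a simple Kusto query against a Log Analytics workspace (placeholder workspace ID).
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query "Heartbeat | summarize LastSeen=max(TimeGenerated) by Computer | top 10 by LastSeen desc"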

Azure Monitor Alerts


Azure Monitor Alerts are an automated way to stay informed when Azure
Monitor detects a threshold being crossed. You set the alert conditions, the
notification actions, and then Azure Monitor Alerts notifies you when an alert is
triggered. Depending on your configuration, Azure Monitor Alerts can also
attempt corrective action.
Alerts can be set up to monitor the logs and trigger on certain log events, or
they can be set to monitor metrics and trigger when certain metrics are
crossed. For example, you could set up a metric-based alert to notify you
when the CPU usage on a virtual machine exceeded 80%. Alert rules based
on metrics provide near real time alerts based on numeric values. Rules
based on logs allow for complex logic across data from multiple sources.

Azure Monitor Alerts use action groups to configure who to notify and what
action to take. An action group is simply a collection of notification and
action preferences that you associate with one or multiple alerts. Azure
Monitor, Service Health, and Azure Advisor all use action groups to notify
you when an alert has been triggered.
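
As a hedged sketch of the CPU example above, the Azure CLI can create a metric alert rule; the subscription, VM, and action group IDs are placeholders.

# Alert when average CPU on a VM stays above 80% (placeholder IDs throughout).
az monitor metrics alert create \
  --name high-cpu-alert \
  --resource-group my-rg \
  --scopes /subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm \
  --condition "avg Percentage CPU > 80" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --action /subscriptions/<sub-id>/resourceGroups/my-rg/providers/microsoft.insights/actionGroups/my-action-group \
  --description "Average CPU above 80 percent"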

Application Insights
Application Insights, an Azure Monitor feature, monitors your web
applications. Application Insights is capable of monitoring applications that
are running in Azure, on-premises, or in a different cloud environment.

There are two ways to configure Application Insights to help monitor your
application. You can either install an SDK in your application, or you can use
the Application Insights agent. The Application Insights agent is supported in
C#.NET, VB.NET, Java, JavaScript, Node.js, and Python.

Once Application Insights is up and running, you can use it to monitor a broad array of information, such as:

 Request rates, response times, and failure rates
 Dependency rates, response times, and failure rates, to show whether
external services are slowing down performance
 Page views and load performance reported by users' browsers
 AJAX calls from web pages, including rates, response times, and failure
rates
 User and session counts
 Performance counters from Windows or Linux server machines, such as
CPU, memory, and network usage

Not only does Application Insights help you monitor the performance of your
application, but you can also configure it to periodically send synthetic
requests to your application, allowing you to check the status and monitor
your application even during periods of low activity.
