Introduction To Microsoft Azure Fundamentals
Learning objectives
Upon completion of this module, you will be able to:
Define cloud computing.
Describe the shared responsibility model.
Define cloud models, including public, private, and hybrid.
Identify appropriate use cases for each cloud model.
Describe the consumption-based model.
Compare cloud pricing models.
If you’re already familiar with cloud computing, this module may be largely
review for you.
At the same time, the consumer is responsible for the data and information
stored in the cloud. (You wouldn’t want the cloud provider to be able to read
your information.) The consumer is also responsible for access security,
meaning you only give access to those who need it.
Then, for some things, the responsibility depends on the situation. If you’re
using a cloud SQL database, the cloud provider would be responsible for
maintaining the actual database. However, you’re still responsible for the
data that gets ingested into the database. If you deployed a virtual machine
and installed an SQL database on it, you’d be responsible for database
patches and updates, as well as maintaining the data and information stored
in the database.
Operating systems
Network controls
Applications
Identity and infrastructure
Private cloud
Let’s start with a private cloud. A private cloud is, in some ways, the natural
evolution from a corporate datacenter. It’s a cloud (delivering IT services
over the internet) that’s used by a single entity. Private cloud provides much
greater control for the company and its IT department. However, it also
comes with greater cost and fewer of the benefits of a public cloud
deployment. Finally, a private cloud may be hosted from your on-site
datacenter. It may also be hosted in a dedicated datacenter offsite,
potentially even by a third party that has dedicated that datacenter to your
company.
Public cloud
A public cloud is built, controlled, and maintained by a third-party cloud
provider. With a public cloud, anyone who wants to purchase cloud services
can access and use resources. The general public availability is a key
difference between public and private clouds.
Hybrid cloud
A hybrid cloud is a computing environment that uses both public and private
clouds in an inter-connected environment. A hybrid cloud environment can
be used to allow a private cloud to surge for increased, temporary demand
by deploying public cloud resources. Hybrid cloud can be used to provide an
extra layer of security. For example, users can flexibly choose which services
to keep in public cloud and which to deploy to their private cloud
infrastructure.
The following table highlights a few key comparative aspects between the
cloud models.
| Public cloud | Private cloud | Hybrid cloud |
| --- | --- | --- |
| No capital expenditures to scale up | Organizations have complete control over resources and security | Provides the most flexibility |
| Applications can be quickly provisioned and de-provisioned | Data is not collocated with other organizations’ data | Organizations determine where to run their applications |
| Organizations pay only for what they use | Hardware must be purchased for startup and maintenance | Organizations control security, compliance, or legal requirements |
Multi-cloud
A fourth, and increasingly common, scenario is multi-cloud. In a multi-
cloud scenario, you use multiple public cloud providers. Maybe you use
different features from different cloud providers. Or maybe you started your
cloud journey with one provider and are in the process of migrating to a
different provider. Regardless, in a multi-cloud environment you deal with
two (or more) public cloud providers and manage resources and security in
both environments.
Azure Arc
Azure Arc is a set of technologies that helps manage your cloud
environment. Azure Arc can help manage your cloud environment, whether
it's a public cloud solely on Azure, a private cloud in your datacenter, a
hybrid configuration, or even a multi-cloud environment running on multiple
cloud providers at once.
No upfront costs.
No need to purchase and manage costly infrastructure that users might
not use to its fullest potential.
The ability to pay for more resources when they're needed.
The ability to stop paying for resources that are no longer needed.
With a traditional datacenter, you try to estimate the future resource needs.
If you overestimate, you spend more on your datacenter than you need to
and potentially waste money. If you underestimate, your datacenter will
quickly reach capacity and your applications and services may suffer from
decreased performance. Fixing an under-provisioned datacenter can take a
long time. You may need to order, receive, and install more hardware. You'll
also need to add power, cooling, and networking for the extra hardware.
In a cloud-based model, you don’t have to worry about getting the resource
needs just right. If you find that you need more virtual machines, you add
more. If the demand drops and you don’t need as many virtual machines,
you remove machines as needed. Either way, you’re only paying for the
virtual machines that you use, not the “extra capacity” that the cloud
provider has on hand.
To put it another way, cloud computing is a way to rent compute power and
storage from someone else’s datacenter. You can treat cloud resources like
you would resources in your own datacenter. However, unlike in your own
datacenter, when you're done using cloud resources, you give them back.
You’re billed only for what you use.
Instead of maintaining CPUs and storage in your datacenter, you rent them
for the time that you need them. The cloud provider takes care of
maintaining the underlying infrastructure for you. The cloud enables you to
quickly solve your toughest business challenges and bring cutting-edge
solutions to your users.
High availability
When you’re deploying an application, a service, or any IT resources, it’s
important the resources are available when needed. High availability focuses
on ensuring maximum availability, regardless of disruptions or events that
may occur.
When you’re architecting your solution, you’ll need to account for service
availability guarantees. Azure is a highly available cloud environment with
uptime guarantees depending on the service. These guarantees are part of
the service-level agreements (SLAs).
Scalability
Another major benefit of cloud computing is the scalability of cloud
resources. Scalability refers to the ability to adjust resources to meet
demand. If you suddenly experience peak traffic and your systems are
overwhelmed, the ability to scale means you can add more resources to
better handle the increased demand.
The other benefit of scalability is that you aren't overpaying for services.
Because the cloud is a consumption-based model, you only pay for what you
use. If demand drops off, you can reduce your resources and thereby reduce
your costs.
Vertical scaling
With vertical scaling, if you were developing an app and you needed more
processing power, you could vertically scale up to add more CPUs or RAM to
the virtual machine. Conversely, if you realized you had over-specified the
needs, you could vertically scale down by lowering the CPU or RAM
specifications.
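As a sketch of vertical scaling with the Azure CLI (the resource group, VM name, and sizes below are hypothetical example values; pick sizes available in your region):

```azurecli
# Scale up: move the VM to a larger size (more vCPUs and RAM).
# "my-resource-group", "my-vm", and the sizes are example values.
az vm resize \
  --resource-group my-resource-group \
  --name my-vm \
  --size Standard_D4s_v3

# Scale down when the extra capacity is no longer needed.
az vm resize \
  --resource-group my-resource-group \
  --name my-vm \
  --size Standard_D2s_v3
```

Keep in mind that resizing a running VM typically causes it to restart.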
Horizontal scaling

With horizontal scaling, if you suddenly experience a steep jump in demand,
you can scale out by adding more instances of a resource, such as additional
virtual machines or containers. Conversely, if demand drops, you can scale
in by removing instances.
Reliability
Reliability is the ability of a system to recover from failures and continue to
function. It's also one of the pillars of the Microsoft Azure Well-Architected
Framework.
Predictability
Predictability in the cloud lets you move forward with confidence.
Predictability can be focused on performance predictability or cost
predictability. Both performance and cost predictability are heavily
influenced by the Microsoft Azure Well-Architected Framework. Deploy a
solution that’s built around this framework and you have a solution whose
cost and performance are predictable.
Performance
On the security side, you can find a cloud solution that matches your security
needs. If you want maximum control of security, infrastructure as a service
provides you with physical resources but lets you manage the operating
systems and installed software, including patches and maintenance. If you
want patches and maintenance taken care of automatically, platform as a
service or software as a service deployment may be the best cloud strategies
for you.
By establishing a good governance footprint early, you can keep your cloud
footprint updated, secure, and well managed.
Introduction
In this module, you’ll be introduced to cloud service types. You’ll learn how
each cloud service type determines the flexibility you’ll have with managing
and configuring resources. You'll understand how the shared responsibility
model applies to each cloud service type, and about various use cases for
each cloud service type.
Describe Infrastructure as a
Service
Infrastructure as a service (IaaS) is the most flexible category of cloud
services, as it provides you the maximum amount of control for your cloud
resources. In an IaaS model, the cloud provider is responsible for maintaining
the hardware, network connectivity (to the internet), and physical security.
You’re responsible for everything else: operating system installation,
configuration, and maintenance; network configuration; database and
storage configuration; and so on. With IaaS, you’re essentially renting the
hardware in a cloud datacenter, but what you do with that hardware is up to
you.
Scenarios
Some common scenarios where PaaS might make sense include:
Development framework: PaaS provides a framework that developers
can build upon to develop or customize cloud-based applications.
Similar to the way you create an Excel macro, PaaS lets developers
create applications using built-in software components. Cloud features
such as scalability, high-availability, and multi-tenant capability are
included, reducing the amount of coding that developers must do.
Analytics or business intelligence: Tools provided as a service with PaaS
allow organizations to analyze and mine their data, finding insights and
patterns and predicting outcomes to improve forecasting, product
design decisions, investment returns, and other business decisions.
While the SaaS model may be the least flexible, it’s also the easiest to get up
and running. It requires the least amount of technical knowledge or expertise
to fully employ.
Many teams start exploring the cloud by moving their existing applications to
virtual machines (VMs) that run in Azure. Migrating your existing apps to VMs
is a good start, but the cloud is much more than a different place to run your
VMs.
The Azure free account is an excellent way for new users to get started and
explore. To sign up, you need a phone number, a credit card, and a Microsoft
or GitHub account. The credit card information is used for identity
verification only. You won't be charged for any services until you upgrade to
a paid subscription.
The Azure free student account is an offer for students that gives $100 credit
and free developer tools. Also, you can sign up without a credit card.
Many of the Learn exercises use a technology called the sandbox, which
creates a temporary subscription that's added to your Azure account. This
temporary subscription allows you to create Azure resources during a Learn
module. Learn automatically cleans up the temporary resources for you after
you've completed the module.
Activate sandbox
In this exercise, you explore the Learn sandbox. You can interact with the
Learn sandbox in three different ways. During exercises, you'll be provided
with instructions for at least one of the methods below.
You start by activating the Learn sandbox. Then, you’ll investigate each of
the methods to work in the Learn sandbox.
You can tell you're in PowerShell mode by the PS before your directory on
the command line.
Use the PowerShell Get-Date cmdlet to get the current date and time.

```powershell
Get-Date
```
Most Azure-specific commands start with the letters az. The Get-Date
command you just ran is a PowerShell-specific command. Let's try an Azure
command to check which version of the CLI you're using right now.
```azurecli
az version
```
To switch the CLI to Bash mode, enter:

```powershell
bash
```
Tip
You can tell you're in BASH mode by the username displayed on the
command line. It will be your username@azure.
Again, use the Get-date command to get the current date and time.
```bash
Get-Date
```
Use the date command to get the current date and time.
```bash
date
```
Just like in the PowerShell mode of the CLI, you can use the letters az to start
an Azure command in the BASH mode. Try to run an update to the CLI with
az upgrade.
```bash
az upgrade
```
You can change back to PowerShell mode by entering pwsh on the BASH
command line.
Now, start the Azure CLI interactive mode:

```bash
az interactive
```
Decide whether you wish to send telemetry data and enter YES or NO.
You may have to wait a minute or two to allow the interactive mode to fully
initialize. Then, enter the letter “a” and auto-completion should start to work.
If auto-completion isn’t working, erase what you’ve entered, wait a bit
longer, and try again.
Once initialized, you can use the arrow keys or tab to help complete your
commands. Interactive mode is set up specifically for Azure, so you don't
need to enter az to start a command (but you can if you want to or are used
to it). Try the upgrade or version commands again, but this time without az
in front.
```azurecli
version
```

```azurecli
upgrade
```
The commands should have worked the same as before, and given you the
same results. Use the exit command to leave interactive mode.
```azurecli
exit
```
Sign in to the Azure portal to check out the Azure web interface. Once in the
portal, you can see all the services Azure has to offer as well as look around
at resource groups and so on.
You're all set for now. We'll come back to this sandbox later in this module
and actually create an Azure resource!
Physical infrastructure
The physical infrastructure for Azure starts with datacenters. Conceptually,
the datacenters are the same as large corporate datacenters. They’re
facilities with resources arranged in racks, with dedicated power, cooling,
and networking infrastructure.
The Global infrastructure site gives you a chance to interactively explore the
underlying Azure infrastructure.
Regions
A region is a geographical area on the planet that contains at least one, and
potentially multiple, datacenters that are nearby and networked together
with a low-latency network. Azure intelligently assigns and controls the
resources within each region to ensure workloads are appropriately
balanced.
When you deploy a resource in Azure, you'll often need to choose the region
where you want your resource deployed.
Note
Some services or virtual machine (VM) features are only available in certain
regions, such as specific VM sizes or storage types. There are also some
global Azure services that don't require you to select a particular region,
such as Microsoft Entra ID, Azure Traffic Manager, and Azure DNS.
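To see which regions you can choose from, one option is to list them with the Azure CLI (this assumes you're signed in to a subscription):

```azurecli
# List the Azure regions available to your subscription; the Name column
# is the value you pass to --location when you deploy a resource.
az account list-locations --output table
```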
Availability Zones
You want to ensure your services and data are redundant so you can protect
your information in case of failure. When you host your infrastructure, setting
up your own redundancy requires that you create duplicate hardware
environments. Azure can help make your app highly available through
availability zones.
You can use availability zones to run mission-critical applications and build
high-availability into your application architecture by co-locating your
compute, storage, networking, and data resources within an availability zone
and replicating in other availability zones. Keep in mind that there could be a
cost to duplicating your services and transferring data between availability
zones.
Availability zones are primarily for VMs, managed disks, load balancers, and
SQL databases. Azure services that support availability zones fall into three
categories:
Zonal services: You pin the resource to a specific zone (for example, VMs,
managed disks, IP addresses).
Zone-redundant services: The platform replicates automatically across zones
(for example, zone-redundant storage, SQL Database).
Non-regional services: Services are always available from Azure geographies
and are resilient to zone-wide outages as well as region-wide outages.
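As a sketch of the zonal-services case, you can pin VMs to specific zones at creation time with the Azure CLI (the resource names below are hypothetical, and the region must support availability zones):

```azurecli
# Place one VM in zone 1 and a second in zone 2 of the same region,
# so a zone-wide outage doesn't take down both instances.
az vm create \
  --resource-group my-resource-group \
  --name my-vm-zone1 \
  --image Ubuntu2204 \
  --zone 1 \
  --admin-username azureuser \
  --generate-ssh-keys

az vm create \
  --resource-group my-resource-group \
  --name my-vm-zone2 \
  --image Ubuntu2204 \
  --zone 2 \
  --admin-username azureuser \
  --generate-ssh-keys
```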
Even with the additional resiliency that availability zones provide, it’s
possible that an event could be so large that it impacts multiple availability
zones in a single region. To provide even further resilience, Azure has Region
Pairs.
Region pairs
Most Azure regions are paired with another region within the same
geography (such as US, Europe, or Asia) at least 300 miles away. This
approach allows for the replication of resources across a geography that
helps reduce the likelihood of interruptions because of events such as
natural disasters, civil unrest, power outages, or physical network outages
that affect an entire region. For example, if a region in a pair was affected by
a natural disaster, services would automatically fail over to the other region
in its region pair.
Important
Not all Azure services automatically replicate data or automatically fall back
from a failed region to cross-replicate to another enabled region. In these
scenarios, recovery and replication must be configured by the customer.
Examples of region pairs in Azure are West US paired with East US and
South-East Asia paired with East Asia. Because the pair of regions are
directly connected and far enough apart to be isolated from regional
disasters, you can use them to provide reliable services and data
redundancy.
Additional advantages of region pairs:
If an extensive Azure outage occurs, one region out of every pair is prioritized
to make sure at least one is restored as quickly as possible for applications
hosted in that region pair.
Planned Azure updates are rolled out to paired regions one region at a time to
minimize downtime and risk of application outage.
Data continues to reside within the same geography as its pair (except for
Brazil South) for tax- and law-enforcement jurisdiction purposes.
Important
Most regions are paired in two directions, meaning each region in the pair
is the backup for the other (West US and East US back each other up).
However, some regions, such as West India and Brazil South, are paired in
only one direction. In a one-direction pairing, the primary region doesn't
serve as the backup for its secondary region. For example, West India's
secondary region is South India, but South India's secondary region is
Central India, not West India. Brazil South is unique because it's paired
with a region outside of its geography: its secondary region is South
Central US, but the secondary region of South Central US isn't Brazil
South.
Sovereign Regions
US DoD Central, US Gov Virginia, US Gov Iowa, and more: These regions are
physical and logical network-isolated instances of Azure for U.S. government
agencies and partners. These datacenters are operated by screened U.S.
personnel and include additional compliance certifications.
China East, China North, and more: These regions are available through a
unique partnership between Microsoft and 21Vianet, whereby Microsoft
doesn't directly maintain the datacenters.
When you’re provisioning resources, it’s good to think about the resource
group structure that best suits your needs.
There aren’t hard rules about how you use resource groups, so consider how
to set up your resource groups to maximize their usefulness for you.
Azure subscriptions
In Azure, subscriptions are a unit of management, billing, and scale. Similar
to how resource groups are a way to logically organize resources,
subscriptions allow you to logically organize your resource groups and
facilitate billing.
If you have many subscriptions, you might need a way to efficiently manage
access, policies, and compliance for those subscriptions. Azure management
groups provide a level of scope above subscriptions. You organize
subscriptions into containers called management groups and apply
governance conditions to the management groups. All subscriptions within a
management group automatically inherit the conditions applied to the
management group, the same way that resource groups inherit settings from
subscriptions and resources inherit from resource groups. Management
groups give you enterprise-grade management at a large scale, no matter
what type of subscriptions you might have. Management groups can be
nested.
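A minimal sketch of that hierarchy with the Azure CLI (the group names and subscription ID below are placeholders):

```azurecli
# Create a parent management group, nest a child group under it, then
# move a subscription into the child. The subscription inherits
# governance conditions from both levels above it.
az account management-group create --name corp-root --display-name "Corp Root"

az account management-group create \
  --name corp-dev \
  --display-name "Corp Dev" \
  --parent corp-root

az account management-group subscription add \
  --name corp-dev \
  --subscription "00000000-0000-0000-0000-000000000000"
```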
In this exercise, you’ll use the Azure portal to create a resource. The focus of
the exercise is observing how Azure resource groups populate with created
resources.
Important
Basics tab
| Setting | Value |
| --- | --- |
| Subscription | Concierge Subscription |
| Resource group | Select the resource group name that begins with learn |
| Virtual machine name | my-VM |
| Region | Leave default |
| Availability options | Leave default |
| Security type | Leave default |
| Image | Leave default |
| VM architecture | Leave default |
| Run with Azure Spot discount | Unchecked |
| Size | Leave default |
| Authentication type | Password |
| Username | azureuser |
| Password | Enter a custom password |
| Confirm password | Reenter the custom password |
| Public inbound ports | None |
1. Select Home
2. Select Resource groups
3. Select the [sandbox resource group name] resource group
You should see a list of resources in the resource group. The storage account
and virtual network are associated with the Learn sandbox. However, the
rest of the resources were created when you created the virtual machine. By
default, Azure gave them all a similar name to help with association and
grouped them in the same resource group.
Clean up
The sandbox automatically cleans up your resources when you're finished
with this module.
When you're working in your own subscription, it's a good idea at the end of
a project to identify whether you still need the resources you created.
Resources that you leave running can cost you money. You can delete
resources individually or delete the resource group to delete the entire set of
resources.
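For example, in your own subscription you could remove a whole project's resources in one step (the group name below is hypothetical):

```azurecli
# Deleting a resource group deletes every resource it contains.
# --yes skips the confirmation prompt; --no-wait returns immediately
# while deletion continues in the background.
az group delete --name my-project-rg --yes --no-wait
```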
Introduction
In this module, you’ll be introduced to the compute and networking services
of Azure. You’ll learn about three of the compute options (virtual machines,
containers, and Azure functions). You’ll also learn about some of the
networking features, such as Azure virtual networks, Azure DNS, and Azure
ExpressRoute.
You can even create or use an already created image to rapidly provision
VMs. You can create and provision a VM in minutes when you select a
preconfigured VM image. An image is a template used to create a VM and
may already include an OS and other software, like development tools or
web hosting environments.
Virtual machine scale sets let you create and manage a group of identical,
load-balanced VMs. If you simply created multiple VMs with the same
purpose, you’d need to ensure they were all configured identically and then
set up network routing parameters to ensure efficiency. You’d also have to
monitor the utilization to determine if you need to increase or decrease the
number of VMs.
Instead, with virtual machine scale sets, Azure automates most of that work.
Scale sets allow you to centrally manage, configure, and update a large
number of VMs in minutes. The number of VM instances can automatically
increase or decrease in response to demand, or you can set it to scale based
on a defined schedule. Virtual machine scale sets also automatically deploy a
load balancer to make sure that your resources are being used efficiently.
With virtual machine scale sets, you can build large-scale services for areas
such as compute, big data, and container workloads.
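As a sketch with the Azure CLI, you could create a small scale set and attach a demand-based autoscale profile (all names below are example values):

```azurecli
# Create a scale set of identical Ubuntu VMs, then let Azure add or
# remove instances between 2 and 10 based on an autoscale profile.
az vmss create \
  --resource-group my-resource-group \
  --name my-scale-set \
  --image Ubuntu2204 \
  --instance-count 2 \
  --admin-username azureuser \
  --generate-ssh-keys

az monitor autoscale create \
  --resource-group my-resource-group \
  --resource my-scale-set \
  --resource-type Microsoft.Compute/virtualMachineScaleSets \
  --name my-autoscale-profile \
  --min-count 2 \
  --max-count 10 \
  --count 2
```

You'd then add rules to the profile (for example, scale out when average CPU stays high) to make the instance count track demand.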
Virtual machine availability sets are another tool to help you build a more
resilient, highly available environment. Availability sets are designed to
ensure that VMs stagger updates and have varied power and network
connectivity, preventing you from losing all your VMs with a single network
or power failure.
Availability sets do this by grouping VMs in two ways: update domain and
fault domain.
Update domain: The update domain groups VMs that can be rebooted at the
same time. This allows you to apply updates while knowing that only one
update domain grouping will be offline at a time. All of the machines in one
update domain will be updated. An update domain that's going through the
update process is given 30 minutes to recover before maintenance on the
next update domain starts.
Fault domain: The fault domain groups your VMs by common power source
and network switch. By default, an availability set will split your VMs across up
to three fault domains. This helps protect against a physical power or
networking failure by having VMs in different fault domains (thus being
connected to different power and networking resources).
Best of all, there’s no additional cost for configuring an availability set. You
only pay for the VM instances you create.
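A hedged sketch of both ideas with the Azure CLI, using hypothetical names: create an availability set with three fault domains and five update domains, then place a VM in it.

```azurecli
# Three fault domains spread VMs across power/network; five update
# domains stagger reboots during maintenance.
az vm availability-set create \
  --resource-group my-resource-group \
  --name my-availability-set \
  --platform-fault-domain-count 3 \
  --platform-update-domain-count 5

# VMs must join the availability set at creation time.
az vm create \
  --resource-group my-resource-group \
  --name my-vm \
  --image Ubuntu2204 \
  --availability-set my-availability-set \
  --admin-username azureuser \
  --generate-ssh-keys
```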
During testing and development: VMs provide a quick and easy way to
create different OS and application configurations. Test and development
personnel can then easily delete the VMs when they no longer need them.
When running applications in the cloud: The ability to run certain
applications in the public cloud as opposed to creating a traditional
infrastructure to run them can provide substantial economic benefits. For
example, an application might need to handle fluctuations in demand. Shutting
down VMs when you don't need them or quickly starting them up to meet a
sudden increase in demand means you pay only for the resources you use.
When extending your datacenter to the cloud: An organization can
extend the capabilities of its own on-premises network by creating a virtual
network in Azure and adding VMs to that virtual network. Applications like
SharePoint can then run on an Azure VM instead of running locally. This
arrangement makes it easier or less expensive to deploy than in an on-
premises environment.
During disaster recovery: As with running certain types of applications in
the cloud and extending an on-premises network to the cloud, you can get
significant cost savings by using an IaaS-based approach to disaster recovery.
If a primary datacenter fails, you can create VMs running on Azure to run your
critical applications and then shut them down when the primary datacenter
becomes operational again.
VM Resources
When you provision a VM, you’ll also have the chance to pick the resources
that are associated with that VM, including:
You could use the Azure portal, the Azure CLI, Azure PowerShell, or an Azure
Resource Manager (ARM) template.
```azurecli
az vm create \
  --resource-group "[sandbox resource group name]" \
  --name my-vm \
  --public-ip-sku Standard \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys
```
Your VM will take a few moments to come up. You named the VM my-vm.
You use this name to refer to the VM in later steps.
```azurecli
az vm extension set \
  --resource-group "[sandbox resource group name]" \
  --vm-name my-vm \
  --name customScript \
  --publisher Microsoft.Azure.Extensions \
  --version 2.1 \
  --settings '{"fileUris":["https://raw.githubusercontent.com/MicrosoftDocs/mslearn-welcome-to-azure/master/configure-nginx.sh"]}' \
  --protected-settings '{"commandToExecute": "./configure-nginx.sh"}'
```
This command uses the Custom Script Extension to run a Bash script on
your VM. The script is stored on GitHub. While the command runs, you
can choose to examine the Bash script from a separate browser tab. To
summarize, the script:
Enhance security
Azure Virtual Desktop provides centralized security management for users'
desktops with Microsoft Entra ID. You can enable multifactor authentication
to secure user sign-ins. You can also secure access to data by assigning
granular role-based access controls (RBACs) to users.
With Azure Virtual Desktop, the data and apps are separated from the local
hardware. The actual desktop and apps are running in the cloud, meaning
the risk of confidential data being left on a personal device is reduced.
Additionally, user sessions are isolated in both single and multi-session
environments.
Azure Container Instances offer the fastest and simplest way to run a
container in Azure, without having to manage any virtual machines or adopt
any additional services. Azure Container Instances are a platform as a
service (PaaS) offering. Azure Container Instances allow you to upload your
containers and then the service will run the containers for you.
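As a sketch with the Azure CLI, you could run Microsoft's public hello-world sample image as a container instance (the resource names and DNS label are hypothetical; the DNS label must be unique within the region):

```azurecli
# Run a public sample image as a container instance with a public
# DNS name, listening on port 80.
az container create \
  --resource-group my-resource-group \
  --name my-container \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --dns-name-label my-demo-app-12345 \
  --ports 80
```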
Azure Container Apps
Azure Container Apps are similar in many ways to a container instance. They
allow you to get up and running right away, they remove the container
management piece, and they're a PaaS offering. Container Apps have extra
benefits such as the ability to incorporate load balancing and scaling. These
other functions allow you to be more elastic in your design.
Imagine your website back-end has reached capacity but the front end and
storage aren't being stressed. With containers, you could scale the back end
separately to improve performance. If something necessitated such a
change, you could also choose to change the storage service or modify the
front end without impacting any of the other components.
Azure Functions runs your code when it's triggered and automatically
deallocates resources when the function is finished. In this model, you're
only charged for the CPU time used while your function runs.
There are other hosting options that you can use with Azure, including Azure
App Service.
Azure App Service is a robust hosting option that you can use to host your
apps in Azure. Azure App Service lets you focus on building and maintaining
your app, and Azure focuses on keeping the environment up and running.
With App Service, you can host most common app service styles like:
Web apps
API apps
WebJobs
Mobile apps
App Service handles most of the infrastructure decisions you deal with in
hosting web-accessible apps:
All of these app styles are hosted in the same infrastructure and share these
benefits. This flexibility makes App Service the ideal choice to host web-
oriented applications.
Web apps
App Service includes full support for hosting web apps by using ASP.NET,
ASP.NET Core, Java, Ruby, Node.js, PHP, or Python. You can choose either
Windows or Linux as the host operating system.
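A minimal sketch of hosting a web app with the Azure CLI (the plan and app names are placeholders, the app name must be globally unique, and the runtime string is an example; valid values can be listed with az webapp list-runtimes):

```azurecli
# An App Service plan defines the compute; the web app runs on it.
az appservice plan create \
  --resource-group my-resource-group \
  --name my-plan \
  --is-linux \
  --sku B1

az webapp create \
  --resource-group my-resource-group \
  --plan my-plan \
  --name my-unique-web-app-12345 \
  --runtime "NODE:20-lts"
```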
API apps
Much like hosting a website, you can build REST-based web APIs by using
your choice of language and framework. You get full Swagger support and
the ability to package and publish your API in Azure Marketplace. The
produced apps can be consumed from any HTTP- or HTTPS-based client.
WebJobs
You can use the WebJobs feature to run a program (.exe, Java, PHP, Python,
or Node.js) or script (.cmd, .bat, PowerShell, or Bash) in the same context as
a web app, API app, or mobile app. They can be scheduled or run by a
trigger. WebJobs are often used to run background tasks as part of your
application logic.
Mobile apps
Use the Mobile Apps feature of App Service to quickly build a back end for
iOS and Android apps. With just a few actions in the Azure portal, you can:
On the mobile app side, there's SDK support for native iOS and Android,
Xamarin, and React native apps.
For name resolution, you can use the name resolution service that's built into
Azure. You also can configure the virtual network to use either an internal or
an external DNS server.
Internet communications
You can enable incoming connections from the internet by assigning a public
IP address to an Azure resource, or putting the resource behind a public load
balancer.
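For example, a sketch of creating a public IP address with the Azure CLI (names are hypothetical); you would then associate it with a VM's network interface or a load balancer's front end:

```azurecli
# Create a static Standard-SKU public IP address.
az network public-ip create \
  --resource-group my-resource-group \
  --name my-public-ip \
  --sku Standard \
  --allocation-method Static
```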
Virtual networks can connect not only VMs but other Azure resources,
such as the App Service Environment for Power Apps, Azure Kubernetes
Service, and Azure virtual machine scale sets.
Service endpoints can connect to other Azure resource types, such as
Azure SQL databases and storage accounts. This approach enables you
to link multiple Azure resources to virtual networks to improve security
and provide optimal routing between resources.
Communicate with on-premises resources
Azure virtual networks enable you to link resources together in your on-
premises environment and within your Azure subscription. In effect, you can
create a network that spans both your local and cloud environments. There
are three mechanisms for you to achieve this connectivity: point-to-site
virtual private networks, site-to-site virtual private networks, and Azure
ExpressRoute.
Route tables allow you to define rules about how traffic should be
directed. You can create custom route tables that control how packets
are routed between subnets.
Border Gateway Protocol (BGP) works with Azure VPN gateways, Azure
Route Server, or Azure ExpressRoute to propagate on-premises BGP
routes to Azure virtual networks.
Network security groups are Azure resources that can contain multiple
inbound and outbound security rules. You can define these rules to allow
or block traffic, based on factors such as source and destination IP
address, port, and protocol.
Network virtual appliances are specialized VMs that can be compared to
a hardened network appliance. A network virtual appliance carries out a
particular network function, such as running a firewall or performing
wide area network (WAN) optimization.
User-defined routes (UDR) allow you to control the routing tables between
subnets within a virtual network or between virtual networks. This allows for
greater control over network traffic flow.
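A user-defined route is typically used to steer subnet traffic through a network virtual appliance. A minimal Azure CLI sketch might look like this (names and addresses are hypothetical):

```shell
# Create a route table with a route that sends all outbound traffic
# through a network virtual appliance at 10.0.2.4.
az network route-table create --resource-group my-rg --name my-routes
az network route-table route create \
  --resource-group my-rg \
  --route-table-name my-routes \
  --name to-firewall \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.2.4

# Associate the route table with a subnet so its VMs use the route.
az network vnet subnet update \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name workload-subnet \
  --route-table my-routes
```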
In this exercise, you'll configure the access to the virtual machine (VM) you
created earlier in this module.
Important
The Microsoft Learn sandbox should still be running. If the sandbox timed
out, you'll need to redo the previous exercise (Exercise - Create an Azure
virtual machine).
To verify the VM you created previously is still running, use the following
command:
Azure CLI
az vm list
If you receive an empty response [], you need to complete the first exercise
in this module again. If the result lists your current VM and its settings, you
may continue.
Right now, the VM you created and installed Nginx on isn't accessible from
the internet. You'll create a network security group that changes that by
allowing inbound HTTP access on port 80.
Run the following az vm list-ip-addresses command to get your VM's IP
address and store it in a Bash variable:
Azure CLI
IPADDRESS="$(az vm list-ip-addresses \
--resource-group "[sandbox resource group name]" \
--name my-vm \
--query "[].virtualMachine.network.publicIpAddresses[*].ipAddress" \
--output tsv)"
Bash
curl --connect-timeout 5 http://$IPADDRESS
Output
curl: (28) Connection timed out after 5001 milliseconds
This message means that the VM was not accessible within the timeout
period.
To display the IP address, run echo:
Bash
echo $IPADDRESS
1. Run the following az network nsg list command to list the network
security groups that are associated with your VM:
Azure CLI
az network nsg list \
--resource-group "[sandbox resource group name]" \
--query '[].name' \
--output tsv
Output
my-vmNSG
2. Run the following az network nsg rule list command to list the rules
associated with the NSG named my-vmNSG:
Azure CLI
az network nsg rule list \
--resource-group "[sandbox resource group name]" \
--nsg-name my-vmNSG
You see a large block of text in JSON format in the output. In the next
step, you'll run a similar command that makes this output easier to
read.
3. Run the az network nsg rule list command a second time. This time,
use the --query argument to retrieve only the name, priority, affected
ports, and access (Allow or Deny) for each rule. The --output argument
formats the output as a table so that it's easy to read.
Azure CLI
az network nsg rule list \
--resource-group "[sandbox resource group name]" \
--nsg-name my-vmNSG \
--query '[].{Name:name, Priority:priority, Port:destinationPortRange, Access:access}' \
--output table
Output
Name Priority Port Access
----------------- ---------- ------ --------
default-allow-ssh 1000 22 Allow
You see the default rule, default-allow-ssh. This rule allows inbound
connections over port 22 (SSH). SSH (Secure Shell) is a protocol that's
used on Linux to allow administrators to access the system remotely.
The priority of this rule is 1000. Rules are processed in priority order,
with lower numbers processed before higher numbers.
By default, a Linux VM's NSG allows network access only on port 22. This
enables administrators to access the system. You need to also allow inbound
connections on port 80, which allows access over HTTP.
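The priority behavior can be modeled with a small, self-contained Bash sketch. This is only an illustration of the evaluation order (lowest priority number first, first match wins), not Azure code:

```shell
# Toy model of NSG rule evaluation. Each rule line is:
# "priority destination-port access". The catch-all rule is hypothetical.
rules="1000 22 Allow
100 80 Allow
65500 * Deny"

match_port() {
  local port="$1"
  # Sort rules numerically by priority, then report the first rule
  # whose destination port matches (or the wildcard).
  printf '%s\n' "$rules" | sort -n -k1,1 | while read -r prio p access; do
    if [ "$p" = "$port" ] || [ "$p" = "*" ]; then
      echo "port $port -> $access (rule priority $prio)"
      break
    fi
  done
}

match_port 80    # matched by the priority-100 allow rule
match_port 443   # falls through to the catch-all deny rule
```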
Task 3: Create the network security rule
Here, you create a network security rule that allows inbound access on port
80 (HTTP).
1. Run the following az network nsg rule create command to create a rule
called allow-http that allows inbound access on port 80:
Azure CLI
az network nsg rule create \
--resource-group "[sandbox resource group name]" \
--nsg-name my-vmNSG \
--name allow-http \
--protocol tcp \
--priority 100 \
--destination-port-range 80 \
--access Allow
For learning purposes, here you set the priority to 100. In this case, the
priority doesn't matter. You would need to consider the priority if you
had overlapping port ranges.
2. To verify the configuration, run az network nsg rule list to see the
updated list of rules:
Azure CLI
az network nsg rule list \
--resource-group "[sandbox resource group name]" \
--nsg-name my-vmNSG \
--query '[].{Name:name, Priority:priority, Port:destinationPortRange, Access:access}' \
--output table
You see both the default-allow-ssh rule and your new rule, allow-http:
Output
Name Priority Port Access
----------------- ---------- ------ --------
default-allow-ssh 1000 22 Allow
allow-http 100 80 Allow
After you update the NSG, it may take a few moments before the updated
rules propagate. Retry the next step, with pauses between attempts, until
you get the desired results.
1. Run curl again to verify that your web server is now accessible:
Bash
curl --connect-timeout 5 http://$IPADDRESS
HTML
<html><body><h2>Welcome to Azure! My name is my-vm.</h2></body></html>
2. As an optional step, refresh the browser tab that points to your web
server to confirm that the page now loads.
Nice work. In practice, you can create a standalone network security group
that includes the inbound and outbound network access rules you need. If
you have multiple VMs that serve the same purpose, you can assign that
NSG to each VM at the time you create it. This technique enables you to
control network access to multiple VMs under a single, central set of rules.
Clean up
The sandbox automatically cleans up your resources when you're finished
with this module.
When you're working in your own subscription, it's a good idea at the end of
a project to identify whether you still need the resources you created.
Resources that you leave running can cost you money. You can delete
resources individually or delete the resource group to delete the entire set of
resources.
VPN gateways
A VPN gateway is a type of virtual network gateway. Azure VPN Gateway
instances are deployed in a dedicated subnet of the virtual network and
enable the following connectivity:
When setting up a VPN gateway, you must specify the type of VPN - either
policy-based or route-based. The primary distinction between these two
types is how they determine which traffic needs encryption. In Azure,
regardless of the VPN type, the method of authentication employed is a pre-
shared key.
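Creating a route-based VPN gateway with the Azure CLI might look like the following sketch (all names are hypothetical, and gateway deployment can take 30 minutes or more):

```shell
# Public IP for the gateway.
az network public-ip create --resource-group my-rg --name my-vpngw-pip

# Route-based VPN gateway, deployed into the virtual network's
# dedicated GatewaySubnet.
az network vnet-gateway create \
  --resource-group my-rg \
  --name my-vpngw \
  --vnet my-vnet \
  --public-ip-address my-vpngw-pip \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw1
```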
Use a route-based VPN gateway if you need any of the following types of
connectivity:
High-availability scenarios
If you’re configuring a VPN to keep your information safe, you also want to
be sure that it’s a highly available and fault tolerant VPN configuration. There
are a few ways to maximize the resiliency of your VPN gateway.
Active/standby
Active/active
With the introduction of support for the BGP routing protocol, you can also
deploy VPN gateways in an active/active configuration. In this configuration,
you assign a unique public IP address to each instance. You then create
separate tunnels from the on-premises device to each IP address. You can
extend the high availability by deploying an additional VPN device on-
premises.
ExpressRoute failover
Zone-redundant gateways
Global connectivity
You can enable ExpressRoute Global Reach to exchange data across your on-
premises sites by connecting your ExpressRoute circuits. For example, say
you had an office in Asia and a datacenter in Europe, both with ExpressRoute
circuits connecting them to the Microsoft network. You could use
ExpressRoute Global Reach to connect those two facilities, allowing them to
communicate without transferring data over the public internet.
Dynamic routing
ExpressRoute uses the Border Gateway Protocol (BGP) to exchange routes
between on-premises networks and resources running in Azure. This protocol
enables dynamic routing between your on-premises network and services
running in the Microsoft cloud.
Built-in redundancy
CloudExchange colocation
Point-to-point Ethernet connection
Any-to-any connection
Directly from ExpressRoute sites
Any-to-any networks
With any-to-any connectivity, you can integrate your wide area network
(WAN) with Azure by providing connections to your offices and datacenters.
Azure integrates with your WAN connection to provide a connection like you
would have between your datacenter and any branch offices.
You can connect directly into Microsoft's global network at peering
locations strategically distributed across the world. ExpressRoute Direct
provides dual 100-Gbps or 10-Gbps connectivity, which supports
active/active connectivity at scale.
Security considerations
With ExpressRoute, your data doesn't travel over the public internet, so it's
not exposed to the potential risks associated with internet communications.
ExpressRoute is a private connection from your on-premises infrastructure to
your Azure infrastructure. Even if you have an ExpressRoute connection, DNS
queries, certificate revocation list checking, and Azure Content Delivery
Network requests are still sent over the public internet.
DNS domains in Azure DNS are hosted on Azure's global network of DNS
name servers, providing resiliency and high availability. Azure DNS uses
anycast networking, so each DNS query is answered by the closest available
DNS server to provide fast performance and high availability for your
domain.
Security
Azure role-based access control (Azure RBAC) to control who has access to
specific actions for your organization.
Activity logs to monitor how a user in your organization modified a resource or
to find an error when troubleshooting.
Resource locking to lock a subscription, resource group, or resource. Locking
prevents other users in your organization from accidentally deleting or
modifying critical resources.
Ease of use
Azure DNS can manage DNS records for your Azure services and provide
DNS for your external resources as well. Azure DNS is integrated in the Azure
portal and uses the same credentials, support contract, and billing as your
other Azure services.
Because Azure DNS is running on Azure, it means you can manage your
domains and records with the Azure portal, Azure PowerShell cmdlets, and
the cross-platform Azure CLI. Applications that require automated DNS
management can integrate with the service by using the REST API and SDKs.
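For instance, creating a zone and an A record with the Azure CLI might look like this sketch (the domain, group, and address are placeholders):

```shell
# Create a DNS zone for a domain you own.
az network dns zone create \
  --resource-group my-rg \
  --name contoso.com

# Add an A record named "www" pointing at a hypothetical address.
az network dns record-set a add-record \
  --resource-group my-rg \
  --zone-name contoso.com \
  --record-set-name www \
  --ipv4-address 20.0.0.4
```

After delegating the domain's name servers to the Azure DNS name servers listed on the zone, queries for www.contoso.com resolve to that record.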
Azure DNS also supports private DNS domains. This feature allows you to use
your own custom domain names in your private virtual networks, rather than
being stuck with the Azure-provided names.
Alias records
Azure DNS also supports alias record sets. You can use an alias record set to
refer to an Azure resource, such as an Azure public IP address, an Azure
Traffic Manager profile, or an Azure Content Delivery Network (CDN)
endpoint. If the IP address of the underlying resource changes, the alias
record set seamlessly updates itself during DNS resolution. The alias record
set points to the service instance, and the service instance is associated with
an IP address.
Introduction
In this module, you’ll be introduced to the Azure storage services. You’ll
learn about the Azure Storage Account and how that relates to the different
storage services that are available. You’ll also learn about blob storage tiers,
data redundancy options, and ways to move data or even entire
infrastructures to Azure.
Learning objectives
After completing this module, you’ll be able to:
A storage account provides a unique namespace for your Azure Storage data that's accessible
from anywhere in the world over HTTP or HTTPS. Data in this account is secure, highly
available, durable, and massively scalable.
When you create your storage account, you’ll start by picking the storage account type. The type
of account determines the storage services and redundancy options and has an impact on the use
cases. The redundancy options, covered later in this module, are locally redundant storage (LRS),
zone-redundant storage (ZRS), geo-redundant storage (GRS), and geo-zone-redundant storage
(GZRS).
Storage account names must be between 3 and 24 characters in length and may contain
numbers and lowercase letters only.
Your storage account name must be unique within Azure. No two storage accounts can
have the same name. This supports the ability to have a unique, accessible namespace in
Azure.
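These naming rules are easy to check locally before calling Azure. Here's a small Bash sketch; the helper function is hypothetical, and global uniqueness still has to be verified by Azure itself (for example, with az storage account check-name):

```shell
# Hypothetical helper: validates a proposed storage account name against
# the documented rules (3-24 characters, lowercase letters and numbers
# only). It cannot check global uniqueness.
valid_storage_name() {
  case "$1" in
    *[!a-z0-9]*) return 1 ;;  # reject anything but lowercase letters/digits
  esac
  local len=${#1}
  [ "$len" -ge 3 ] && [ "$len" -le 24 ]
}

valid_storage_name "mystorage123" && echo "ok"
valid_storage_name "My-Storage" || echo "invalid"
```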
The following table shows the endpoint format for Azure Storage services.
When deciding which redundancy option is best for your scenario, consider
the tradeoffs between lower costs and higher availability. The factors that
help determine which redundancy option you should choose include:
Locally redundant storage (LRS) replicates your data three times within a
single data center in the primary region. LRS provides at least 11 nines of
durability (99.999999999%) of objects over a given year.
LRS is the lowest-cost redundancy option and offers the least durability
compared to other options. LRS protects your data against server rack and
drive failures. However, if a disaster such as fire or flooding occurs within the
data center, all replicas of a storage account using LRS may be lost or
unrecoverable. To mitigate this risk, Microsoft recommends using zone-
redundant storage (ZRS), geo-redundant storage (GRS), or geo-zone-
redundant storage (GZRS).
Zone-redundant storage
With ZRS, your data is still accessible for both read and write operations
even if a zone becomes unavailable. No remounting of Azure file shares from
the connected clients is required. If a zone becomes unavailable, Azure
undertakes networking updates, such as DNS repointing. These updates may
affect your application if you access data before the updates have
completed.
Microsoft recommends using ZRS in the primary region for scenarios that
require high availability. ZRS is also recommended for restricting replication
of data within a country or region to meet data governance requirements.
When you create a storage account, you select the primary region for the
account. The paired secondary region is based on Azure Region Pairs, and
can't be changed.
Azure Storage offers two options for copying your data to a secondary
region: geo-redundant storage (GRS) and geo-zone-redundant storage
(GZRS). GRS is similar to running LRS in two regions, and GZRS is similar to
running ZRS in the primary region and LRS in the secondary region.
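In the Azure CLI, the redundancy option maps to the --sku parameter of az storage account create (the account and group names below are placeholders):

```shell
# Create a storage account with a chosen redundancy option.
# Valid SKU values include Standard_LRS, Standard_ZRS,
# Standard_GRS, and Standard_GZRS.
az storage account create \
  --resource-group my-rg \
  --name mystoragedemo123 \
  --location eastus \
  --sku Standard_GZRS
```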
By default, data in the secondary region isn't available for read or write
access unless there's a failover to the secondary region. If the primary region
becomes unavailable, you can choose to fail over to the secondary region.
After the failover has completed, the secondary region becomes the primary
region, and you can again read and write data.
Important
Geo-redundant storage
GRS copies your data synchronously three times within a single physical
location in the primary region using LRS. It then copies your data
asynchronously to a single physical location in the secondary region (the
region pair) using LRS. GRS offers durability for Azure Storage data objects of
at least 16 nines (99.99999999999999%) over a given year.
Geo-zone-redundant storage
Azure Blobs: A massively scalable object store for text and binary data. Also
includes support for big data analytics through Data Lake Storage Gen2.
Azure Files: Managed file shares for cloud or on-premises deployments.
Azure Queues: A messaging store for reliable messaging between application
components.
Azure Disks: Block-level storage volumes for Azure VMs.
Azure Tables: NoSQL table option for structured, non-relational data.
Durable and highly available. Redundancy ensures that your data is safe if
transient hardware failures occur. You can also opt to replicate data across
data centers or geographical regions for additional protection from local
catastrophes or natural disasters. Data replicated in this way remains highly
available if an unexpected outage occurs.
Secure. All data written to an Azure storage account is encrypted by the
service. Azure Storage provides you with fine-grained control over who has
access to your data.
Scalable. Azure Storage is designed to be massively scalable to meet the data
storage and performance needs of today's applications.
Managed. Azure handles hardware maintenance, updates, and critical issues
for you.
Accessible. Data in Azure Storage is accessible from anywhere in the world
over HTTP or HTTPS. Microsoft provides client libraries for Azure Storage in a
variety of languages, including .NET, Java, Node.js, Python, PHP, Ruby, Go, and
others, as well as a mature REST API. Azure Storage supports scripting in Azure
PowerShell or Azure CLI. And the Azure portal and Azure Storage Explorer offer
easy visual solutions for working with your data.
Azure Blobs
Azure Blob storage is an object storage solution for the cloud. It can store
massive amounts of data, such as text or binary data. Azure Blob storage is
unstructured, meaning that there are no restrictions on the kinds of data it
can hold. Blob storage can manage thousands of simultaneous uploads,
massive amounts of video data, constantly growing log files, and can be
reached from anywhere with an internet connection.
Blobs aren't limited to common file formats. A blob could contain gigabytes
of binary data streamed from a scientific instrument, an encrypted message
for another application, or data in a custom format for an app you're
developing. One advantage of blob storage over disk storage is that it
doesn't require developers to think about or manage disks. Data is uploaded
as blobs, and Azure takes care of the physical storage needs.
Objects in blob storage can be accessed from anywhere in the world via
HTTP or HTTPS. Users or client applications can access blobs via URLs, the
Azure Storage REST API, Azure PowerShell, Azure CLI, or an Azure Storage
client library. The storage client libraries are available for multiple
languages, including .NET, Java, Node.js, Python, PHP, and Ruby.
Data stored in the cloud can grow at an exponential pace. To manage costs
for your expanding storage needs, it's helpful to organize your data based on
attributes like frequency of access and planned retention period. Data stored
in the cloud can be handled differently based on how it's generated,
processed, and accessed over its lifetime. Some data is actively accessed
and modified throughout its lifetime. Some data is accessed frequently early
in its lifetime, with access dropping drastically as the data ages. Some data
remains idle in the cloud and is rarely, if ever, accessed after it's stored. To
accommodate these different access needs, Azure provides several access
tiers, which you can use to balance your storage costs with your access
needs.
Azure Storage offers different access tiers for your blob storage, helping you
store object data in the most cost-effective manner. The available access
tiers include:
Hot access tier: Optimized for storing data that is accessed frequently (for
example, images for your website).
Cool access tier: Optimized for data that is infrequently accessed and stored
for at least 30 days (for example, invoices for your customers).
Cold access tier: Optimized for storing data that is infrequently accessed and
stored for at least 90 days.
Archive access tier: Appropriate for data that is rarely accessed and stored
for at least 180 days, with flexible latency requirements (for example, long-
term backups).
Hot and cool access tiers can be set at the account level. The cold and archive
access tiers aren't available at the account level.
Hot, cool, cold, and archive tiers can be set at the blob level, during or after
upload.
Data in the cool and cold access tiers can tolerate slightly lower availability,
but still requires high durability, retrieval latency, and throughput
characteristics similar to hot data. For cool and cold data, a lower availability
service-level agreement (SLA) and higher access costs compared to hot data
are acceptable trade-offs for lower storage costs.
Archive storage stores data offline and offers the lowest storage costs, but also
the highest costs to rehydrate and access data.
Azure Files
Azure File storage offers fully managed file shares in the cloud that are
accessible via the industry standard Server Message Block (SMB) or Network
File System (NFS) protocols. Azure Files file shares can be mounted
concurrently by cloud or on-premises deployments. SMB Azure file shares are
accessible from Windows, Linux, and macOS clients. NFS Azure file shares
are accessible from Linux or macOS clients. Additionally, SMB Azure file
shares can be cached on Windows Servers with Azure File Sync for fast
access near where the data is being used.
Shared access: Azure file shares support the industry standard SMB and NFS
protocols, meaning you can seamlessly replace your on-premises file shares
with Azure file shares without worrying about application compatibility.
Fully managed: Azure file shares can be created without the need to manage
hardware or an OS. This means you don't have to deal with patching the server
OS with critical security upgrades or replacing faulty hard disks.
Scripting and tooling: PowerShell cmdlets and Azure CLI can be used to
create, mount, and manage Azure file shares as part of the administration of
Azure applications. You can create and manage Azure file shares using Azure
portal and Azure Storage Explorer.
Resiliency: Azure Files has been built from the ground up to always be
available. Replacing on-premises file shares with Azure Files means you don't
have to wake up in the middle of the night to deal with local power outages or
network issues.
Familiar programmability: Applications running in Azure can access data in
the share via file system I/O APIs. Developers can therefore use their existing
code and skills to migrate existing applications. In addition to System IO APIs,
you can use Azure Storage Client Libraries or the Azure Storage REST API.
Azure Queues
Azure Queue storage is a service for storing large numbers of messages.
Once stored, you can access the messages from anywhere in the world via
authenticated calls using HTTP or HTTPS. A queue can contain as many
messages as your storage account has room for (potentially millions). Each
individual message can be up to 64 KB in size. Queues are commonly used to
create a backlog of work to process asynchronously.
Queue storage can be combined with compute functions like Azure Functions
to take an action when a message is received. For example, you want to
perform an action after a customer uploads a form to your website. You
could have the submit button on the website trigger a message to the Queue
storage. Then, you could use Azure Functions to trigger an action once the
message was received.
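With the Azure CLI, creating a queue and sending a message might look like this sketch (the names are placeholders, and the commands assume you're signed in with rights on the storage account):

```shell
# Create a queue in an existing storage account.
az storage queue create \
  --name form-submissions \
  --account-name mystoragedemo123 \
  --auth-mode login

# Add a message (each message must be 64 KB or smaller).
az storage message put \
  --queue-name form-submissions \
  --content "customer-form-42" \
  --account-name mystoragedemo123 \
  --auth-mode login
```

A queue-triggered Azure Function bound to the same queue would then pick up and process each message asynchronously.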
Azure Disks
Azure Disk storage, or Azure managed disks, are block-level storage volumes
managed by Azure for use with Azure VMs. Conceptually, they’re the same
as a physical disk, but they’re virtualized – offering greater resiliency and
availability than a physical disk. With managed disks, all you have to do is
provision the disk, and Azure will take care of the rest.
Azure Tables
Azure Table storage stores large amounts of structured data. Azure tables
are a NoSQL datastore that accepts authenticated calls from inside and
outside the Azure cloud. This enables you to use Azure tables to build your
hybrid or multi-cloud solution and have your data always available. Azure
tables are ideal for storing structured, non-relational data.
5. On the Basics tab of the Create a storage account blade, fill in the
following information. Leave the defaults for everything else.
Setting Value
Subscription Concierge Subscription
Resource group Select the resource group that starts with learn
Storage account name Create a unique storage account name
Region Leave default
Performance Standard
Redundancy Locally redundant storage (LRS)
6. On the Advanced tab of the Create a storage account blade, fill in the
following information. Leave the defaults for everything else.
Setting Value
Allow enabling anonymous access on individual containers Checked
7. Select Review to review your storage account settings and allow Azure
to validate the configuration.
8. Once validated, select Create. Wait for the notification that the account
was successfully created.
9. Select Go to resource.
Setting Value
3. Select Create.
Note: Step 4 will need an image. If you want to upload an image you
already have on your computer, continue to Step 4. Otherwise, open a
new browser window and search Bing for an image of a flower. Save the
image to your computer.
4. Back in the Azure portal, select the container you created, then select
Upload.
5. Browse for the image file you want to upload. Select it and then select
upload.
Note: You can upload as many blobs as you like in this way. New blobs
will be listed within the container.
6. Select the Blob (file) you just uploaded. You should be on the properties
tab.
7. Copy the URL from the URL field and paste it into a new tab.
Output
<Error>
<Code>ResourceNotFound</Code>
<Message>The specified resource does not exist. RequestId:4a4bd3d9-
101e-005a-1a3e-84bd42000000</Message>
</Error>
3. Set the Anonymous access level to Blob (anonymous read access for
blobs only).
4. Select OK.
5. Refresh the tab where you attempted to access the file earlier.
Azure Migrate
Azure Migrate is a service that helps you migrate from an on-premises
environment to the cloud. Azure Migrate functions as a hub to help you
manage the assessment and migration of your on-premises datacenter to
Azure. It provides the following:
Unified migration platform: A single portal to start, run, and track your
migration to Azure.
Range of tools: A range of tools for assessment and migration. Azure Migrate
tools include Azure Migrate: Discovery and assessment and Azure Migrate:
Server Migration. Azure Migrate also integrates with other Azure services and
tools, and with independent software vendor (ISV) offerings.
Assessment and migration: In the Azure Migrate hub, you can assess and
migrate your on-premises infrastructure to Azure.
Integrated tools
In addition to working with tools from ISVs, the Azure Migrate hub also
includes the following tools to help with migration:
You can order the Data Box device via the Azure portal to import or export
data from Azure. Once the device is received, you can quickly set it up using
the local web UI and connect it to your network. Once you’re finished
transferring the data (either into or out of Azure), simply return the Data
Box. If you’re transferring data into Azure, the data is automatically uploaded
once Microsoft receives the Data Box back. The entire process is tracked
end-to-end by the Data Box service in the Azure portal.
Use cases
Data Box is ideally suited to transfer data sizes larger than 40 TB in
scenarios with limited to no network connectivity. The data movement can
be one-time, periodic, or an initial bulk data transfer followed by periodic
transfers.
Here are the various scenarios where Data Box can be used to import data to
Azure.
Here are the various scenarios where Data Box can be used to export data
from Azure.
Disaster recovery - when a copy of the data from Azure is restored to an on-
premises network. In a typical disaster recovery scenario, a large amount of
Azure data is exported to a Data Box. Microsoft then ships this Data Box, and
the data is restored on your premises in a short time.
Security requirements - when you need to be able to export data out of Azure
due to government or security requirements.
Migrate back to on-premises or to another cloud service provider - when you
want to move all the data back to on-premises, or to another cloud service
provider, export data via Data Box to migrate the workloads.
Once the data from your import order is uploaded to Azure, the disks on the
device are wiped clean in accordance with NIST 800-88r1 standards. For an
export order, the disks are erased once the device reaches the Azure
datacenter.
AzCopy
AzCopy is a command-line utility that you can use to copy blobs or files to or
from your storage account. With AzCopy, you can upload files, download
files, copy files between storage accounts, and even synchronize files.
AzCopy can even be configured to work with other cloud providers to help
move files back and forth between clouds.
Important: Synchronizing blobs or files with AzCopy is one-direction
synchronization. When you synchronize, you designate the source and
destination, and AzCopy will copy files or blobs in that direction. It doesn't
synchronize bi-directionally based on timestamps or other metadata.
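For example, a one-way sync from a local folder up to a blob container might look like this (the local path, container URL, and SAS token are placeholders):

```shell
# Sync local files to a container. Blobs that exist only at the
# destination are left alone unless --delete-destination is set.
azcopy sync "/data/logs" \
  "https://mystoragedemo123.blob.core.windows.net/logs?<SAS-token>" \
  --recursive
```

Swapping the source and destination arguments reverses the direction of the sync.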
Use any protocol that's available on Windows Server to access your data
locally, including SMB, NFS, and FTPS.
Have as many caches as you need across the world.
Replace a failed local server by installing Azure File Sync on a new server in
the same datacenter.
Configure cloud tiering so the most frequently accessed files are replicated
locally, while infrequently accessed files are kept in the cloud until requested.
Introduction
In this module, you’ll be introduced to the Azure identity, access, and
security services and tools. You’ll learn about directory services in Azure,
authentication methods, and access control. You’ll also cover things like Zero
Trust and defense in depth, and how they keep your cloud safer. You’ll wrap
up with an introduction to Microsoft Defender for Cloud.
Learning objectives
After completing this module, you’ll be able to:
A Microsoft Entra Domain Services managed domain lets you run legacy
applications in the cloud that can't use modern authentication methods, or
where you don't want directory lookups to always go back to an on-premises
AD DS environment. You can lift and shift those legacy applications from
your on-premises environment into a managed domain, without needing to
manage the AD DS environment in the cloud.
Microsoft Entra Domain Services integrates with your existing Microsoft Entra
tenant. This integration lets users sign into services and applications
connected to the managed domain using their existing credentials. You can
also use existing groups and user accounts to secure access to resources.
These features provide a smoother lift-and-shift of on-premises resources to
Azure.
When you create a Microsoft Entra Domain Services managed domain, you
define a unique namespace. This namespace is the domain name. Two
Windows Server domain controllers are then deployed into your selected
Azure region. This deployment of DCs is known as a replica set.
You don't need to manage, configure, or update these DCs. The Azure
platform handles the DCs as part of the managed domain, including backups
and encryption at rest using Azure Disk Encryption.
Is information synchronized?
For the longest time, security and convenience seemed to be at odds with
each other. Thankfully, new authentication solutions provide both security
and convenience.
Consider the process of managing all those identities. More strain is placed
on help desks as they deal with account lockouts and password reset
requests. If a user leaves an organization, tracking down all those identities
and ensuring they're disabled can be challenging. If an identity is
overlooked, this might allow access when it should have been eliminated.
With SSO, you need to remember only one ID and one password. Access
across applications is granted to a single identity that's tied to the user,
which simplifies the security model. As users change roles or leave an
organization, access is tied to a single identity. This change greatly reduces
the effort needed to change or disable accounts. Using SSO for accounts
makes it easier for users to manage their identities and for IT to manage
users.
Important: Single sign-on is only as secure as the initial authenticator
because the subsequent connections are all based on the security of the
initial authenticator.
Think about how you sign into websites, email, or online services. After
entering your username and password, have you ever needed to enter a
code that was sent to your phone? If so, you've used multifactor
authentication to sign in.
Windows Hello for Business is ideal for information workers who have their
own designated Windows PC. The biometric and PIN credentials are directly
tied to the user's PC, which prevents access by anyone other than the
owner. With public key infrastructure (PKI) integration and built-in support for
single sign-on (SSO), Windows Hello for Business provides a convenient
method for seamlessly accessing corporate resources on-premises and in the
cloud.
The Authenticator App turns any iOS or Android phone into a strong,
passwordless credential. Users can sign in to any platform or browser by
getting a notification to their phone, matching a number displayed on the
screen to the one on their phone, and then using their biometric (touch or
face) or PIN to confirm. Refer to Download and install the Microsoft
Authenticator app for installation details.
Users can register and then select a FIDO2 security key at the sign-in
interface as their main means of authentication. These FIDO2 security keys
are typically USB devices, but could also use Bluetooth or NFC. With a
hardware device that handles the authentication, the security of an account
is increased as there's no password that could be exposed or guessed.
With Microsoft Entra ID, you can easily enable collaboration across
organizational boundaries by using the Microsoft Entra B2B feature. Guest
users from other tenants can be invited by administrators or by other users.
This capability also applies to social identities such as Microsoft accounts.
You also can easily ensure that guest users have appropriate access. You can
ask the guests themselves or a decision maker to participate in an access
review and recertify (or attest) to the guests' access. The reviewers can give
their input on each user's need for continued access, based on suggestions
from Microsoft Entra ID. When an access review is finished, you can then
make changes and remove access for guests who no longer need it.
During sign-in, Conditional Access collects signals from the user, makes
decisions based on those signals, and then enforces that decision by allowing
or denying the access request or challenging for a multifactor authentication
response.
Based on these signals, the decision might be to allow full access if the user
is signing in from their usual location. If the user is signing in from an
unusual location or a location that's marked as high risk, then access might
be blocked entirely or possibly granted after the user provides a second form
of authentication.
Enforcement is the action that carries out the decision. For example, the
action is to allow access or require the user to provide a second form of
authentication.
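The signal-collect, decide, enforce flow can be sketched as a small model. This is purely illustrative: the signal names, rule logic, and decision values below are invented for the sketch and are not part of any Azure API.

```python
# Hypothetical sketch of the Conditional Access flow: collect signals,
# make a decision, enforce it. Signal names and rules are invented for
# illustration and are not part of any Azure API.

def decide(signals):
    """Turn sign-in signals into an access decision."""
    if signals.get("location_risk") == "high":
        return "block"
    if signals.get("location") != signals.get("usual_location"):
        return "require_mfa"   # challenge for a second factor
    return "allow"

def enforce(decision, mfa_passed=False):
    """Carry out the decision; True means access is granted."""
    if decision == "allow":
        return True
    if decision == "require_mfa":
        return mfa_passed
    return False               # blocked outright

print(decide({"location": "Paris", "usual_location": "Paris"}))   # allow
print(decide({"location": "Oslo", "usual_location": "Paris"}))    # require_mfa
```

A sign-in from the usual location passes straight through, while an unfamiliar location only succeeds once the second factor is provided.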
Azure provides built-in roles that describe common access rules for cloud
resources. You can also define your own roles. Each role has an associated
set of access permissions that relate to that role. When you assign
individuals or groups to one or more roles, they receive all the associated
access permissions.
So, if you hire a new engineer and add them to the Azure RBAC group for
engineers, they automatically get the same access as the other engineers in
the same Azure RBAC group. Similarly, if you add additional resources and
point Azure RBAC at them, everyone in that Azure RBAC group will now have
those permissions on the new resources as well as the existing resources.
The following diagram shows the relationship between roles and scopes. A
management group, subscription, or resource group might be given the role
of owner, so they have increased control and authority. An observer, who
isn't expected to make any updates, might be given a role of Reader for the
same scope, enabling them to review or observe the management group,
subscription, or resource group.
Scopes include:
Azure RBAC is hierarchical, in that when you grant access at a parent scope,
those permissions are inherited by all child scopes. For example:
When you assign the Owner role to a user at the management group scope,
that user can manage everything in all subscriptions within the management
group.
When you assign the Reader role to a group at the subscription scope, the
members of that group can view every resource group and resource within the
subscription.
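The inheritance rule can be sketched as a walk up the scope hierarchy. The scope names, principals, and role assignments below are hypothetical; this is a model of the rule, not how Azure actually stores assignments.

```python
# Toy model of Azure RBAC scope inheritance: a role granted at a parent
# scope applies at every child scope. Scope and principal names are made up.

SCOPE_PARENT = {
    "management-group": None,
    "subscription-1": "management-group",
    "rg-app": "subscription-1",
    "vm-web": "rg-app",
}

# (principal, role, scope) assignments, illustrative only
ASSIGNMENTS = [
    ("alice", "Owner", "management-group"),
    ("auditors", "Reader", "subscription-1"),
]

def effective_roles(principal, scope):
    """Collect roles assigned at this scope or inherited from any parent."""
    roles = set()
    while scope is not None:
        roles |= {r for p, r, s in ASSIGNMENTS if p == principal and s == scope}
        scope = SCOPE_PARENT[scope]
    return roles

print(effective_roles("alice", "vm-web"))   # Owner, inherited from the top
```

Owner assigned at the management group reaches all the way down to an individual VM, and Reader assigned on the subscription covers every resource group and resource beneath it.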
You typically access Resource Manager from the Azure portal, Azure Cloud
Shell, Azure PowerShell, and the Azure CLI. Azure RBAC doesn't enforce
access permissions at the application or data level. Application security must
be handled by your application.
Azure RBAC uses an allow model. When you're assigned a role, Azure RBAC
allows you to perform actions within the scope of that role. If one role
assignment grants you read permissions to a resource group and a different
role assignment grants you write permissions to the same resource group,
you have both read and write permissions on that resource group.
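That additive behavior amounts to taking the union of the permissions granted by each role assignment. A minimal sketch, with a deliberately simplified role-to-permission mapping:

```python
# Sketch of the RBAC allow model: permissions from multiple role
# assignments combine additively (a set union). The role-to-permission
# mapping below is simplified and illustrative.

ROLE_PERMISSIONS = {
    "Reader": {"read"},
    "Contributor": {"read", "write"},
}

def combined_permissions(assigned_roles):
    """Union the permissions granted by every assigned role."""
    permissions = set()
    for role in assigned_roles:
        permissions |= ROLE_PERMISSIONS[role]
    return permissions

# Read from one assignment plus write from another: you end up with both.
print(combined_permissions(["Reader", "Contributor"]))
```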
The Zero Trust model flips that scenario. Instead of assuming that a device is
safe because it’s within the corporate network, Zero Trust requires everyone
to authenticate, and then grants access based on verified identity rather
than location.
Describe defense-in-depth
The objective of defense-in-depth is to protect information and prevent it
from being stolen by those who aren't authorized to access it.
Layers of defense-in-depth
You can visualize defense-in-depth as a set of layers, with the data to be
secured at the center and all the other layers functioning to protect that
central data layer.
Each layer provides protection so that if one layer is breached, a subsequent
layer is already in place to prevent further exposure. This approach removes
reliance on any single layer of protection. It slows down an attack and
provides alert information that security teams can act upon, either
automatically or manually.
The physical security layer is the first line of defense to protect computing
hardware in the datacenter.
The identity and access layer controls access to infrastructure and change
control.
The perimeter layer uses distributed denial of service (DDoS) protection to
filter large-scale attacks before they can cause a denial of service for users.
The network layer limits communication between resources through
segmentation and access controls.
The compute layer secures access to virtual machines.
The application layer helps ensure that applications are secure and free of
security vulnerabilities.
The data layer controls access to business and customer data that you need to
protect.
These layers provide a guideline for you to help make security configuration
decisions in all of the layers of your applications.
Azure provides security tools and features at every level of the defense-in-
depth concept. Let's take a closer look at each layer:
Physical security
The identity and access layer is all about ensuring that identities are secure,
that access is granted only to what's needed, and that sign-in events and
changes are logged.
Perimeter
Use DDoS protection to filter large-scale attacks before they can affect the
availability of a system for users.
Use perimeter firewalls to identify and alert on malicious attacks against your
network.
Network
At this layer, the focus is on limiting the network connectivity across all your
resources to allow only what's required. By limiting this communication, you
reduce the risk of an attack spreading to other systems in your network.
Compute
Application
Data
Those who store and control access to data are responsible for ensuring that
it's properly secured. Often, regulatory requirements dictate the controls and
processes that must be in place to ensure the confidentiality, integrity, and
availability of the data.
Stored in a database.
Stored on disk inside virtual machines.
Stored in software as a service (SaaS) applications, such as Office 365.
Managed through cloud storage.
Describe Microsoft Defender for
Cloud
Defender for Cloud is a monitoring tool for security posture management and
threat protection. It monitors your cloud, on-premises, hybrid, and multi-
cloud environments to provide guidance and notifications aimed at
strengthening your security posture.
Defender for Cloud provides the tools needed to harden your resources,
track your security posture, protect against cyberattacks, and streamline
security management. Because Defender for Cloud is natively integrated
with Azure, deployment is easy.
Azure-native protections
Azure PaaS services – Detect threats targeting Azure services including Azure
App Service, Azure SQL, Azure Storage Account, and more data services. You
can also perform anomaly detection on your Azure activity logs using the
native integration with Microsoft Defender for Cloud Apps (formerly known as
Microsoft Cloud App Security).
Azure data services – Defender for Cloud includes capabilities that help you
automatically classify your data in Azure SQL. You can also get assessments
for potential vulnerabilities across Azure SQL and Storage services, and
recommendations for how to mitigate them.
Networks – Defender for Cloud helps you limit exposure to brute force attacks.
By reducing access to virtual machine ports, using the just-in-time VM access,
you can harden your network by preventing unnecessary access. You can set
secure access policies on selected ports, for only authorized users, allowed
source IP address ranges or IP addresses, and for a limited amount of time.
In addition to defending your Azure environment, you can add Defender for
Cloud capabilities to your hybrid cloud environment to protect your non-
Azure servers. To help you focus on what matters the most, you'll get
customized threat intelligence and prioritized alerts according to your
specific environment.
Defender for Cloud can also protect resources in other clouds (such as AWS
and GCP).
Defender for Cloud's CSPM features extend to your AWS resources. This
agentless plan assesses your AWS resources according to AWS-specific
security recommendations, and includes the results in the secure score. The
resources will also be assessed for compliance with built-in standards specific
to AWS (AWS CIS, AWS PCI DSS, and AWS Foundational Security Best
Practices). Defender for Cloud's asset inventory page is a multi-cloud enabled
feature helping you manage your AWS resources alongside your Azure
resources.
Microsoft Defender for Containers extends its container threat detection and
advanced defenses to your Amazon EKS Linux clusters.
Microsoft Defender for Servers brings threat detection and advanced defenses
to your Windows and Linux EC2 instances.
Secure
One of the benefits of moving to the cloud is the ability to grow and scale as
you need, adding new services and resources as necessary. Defender for
Cloud is constantly monitoring for new resources being deployed across your
workloads. Defender for Cloud assesses if new resources are configured
according to security best practices. If not, they're flagged and you get a
prioritized list of recommendations for what you need to fix.
Recommendations help you reduce the attack surface across each of your
resources.
The list of recommendations is enabled and supported by the Azure Security
Benchmark. This Microsoft-authored, Azure-specific benchmark provides a
set of guidelines for security and compliance best practices based on
common compliance frameworks.
In this way, Defender for Cloud enables you not just to set security policies,
but to apply secure configuration standards across your resources.
Defend
The first two areas were focused on assessing, monitoring, and maintaining
your environment. Defender for Cloud also helps you defend your
environment by providing security alerts and advanced threat protection
features.
Security alerts
When Defender for Cloud detects a threat in any area of your environment, it
generates a security alert. Security alerts:
Defender for Cloud provides advanced threat protection features for many of
your deployed resources, including virtual machines, SQL databases,
containers, web applications, and your network. Protections include securing
the management ports of your VMs with just-in-time access, and adaptive
application controls to create allowlists for what apps should and shouldn't
run on your machines.
Introduction
In this module, you’ll be introduced to factors that impact costs in Azure and
tools to help you both predict potential costs and monitor and control costs.
Learning objectives
After completing this module, you’ll be able to:
Azure shifts development costs from the capital expense (CapEx) of building
out and maintaining infrastructure and facilities to an operational expense
(OpEx) of renting infrastructure as you need it, whether it’s compute,
storage, networking, and so on.
That OpEx cost can be impacted by many factors. Some of the impacting
factors are:
Resource type
Consumption
Maintenance
Geography
Subscription type
Azure Marketplace
Resource type
A number of factors influence the cost of Azure resources. The type of
resources, the settings for the resource, and the Azure region will all have an
impact on how much a resource costs. When you provision an Azure
resource, Azure creates metered instances for that resource. The meters
track the resource's usage and generate a usage record that is used to
calculate your bill.
Examples
With a storage account, you specify a type such as blob, a performance tier,
an access tier, redundancy settings, and a region. Creating the same storage
account in different regions may show different costs and changing any of
the settings may also impact the price.
With a virtual machine (VM), you may have to consider licensing for the
operating system or other software, the processor and number of cores for
the VM, the attached storage, and the network interface. Just like with
storage, provisioning the same virtual machine in different regions may
result in different costs.
Consumption
Pay-as-you-go has been a consistent theme throughout, and that’s the cloud
payment model where you pay for the resources that you use during a billing
cycle. If you use more compute this cycle, you pay more. If you use less in
the current cycle, you pay less. It’s a straightforward pricing mechanism
that allows for maximum flexibility.
However, Azure also offers the ability to commit to using a set amount of
cloud resources in advance and receive discounts on those “reserved”
resources. Many services, including databases, compute, and storage,
provide the option to commit to a level of use and receive a discount, in
some cases up to 72 percent.
When you reserve capacity, you’re committing to using and paying for a
certain amount of Azure resources during a given period (typically one or
three years). With pay-as-you-go as a backstop, if you see a sudden surge in
demand that eclipses what you’ve pre-reserved, you just pay for the
additional resources in excess of your reservation. This model allows you to
recognize significant savings on reliable, consistent workloads while also
having the flexibility to rapidly increase your cloud footprint as the need
arises.
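The interplay of a reservation and pay-as-you-go overflow is simple arithmetic. A back-of-the-envelope sketch, with entirely made-up hourly rates (real Azure pricing varies by service, region, and reservation term):

```python
# Back-of-the-envelope reservation math. The hourly rates are made up;
# real Azure pricing varies by service, region, and reservation term.

PAYG_RATE = 0.10       # $/hour, pay-as-you-go (hypothetical)
RESERVED_RATE = 0.04   # $/hour with a reservation (hypothetical discount)

def monthly_cost(hours_used, reserved_hours):
    """Reserved hours bill at the discounted rate; any demand beyond the
    reservation falls back to pay-as-you-go."""
    covered = min(hours_used, reserved_hours)
    overflow = max(hours_used - reserved_hours, 0)
    return covered * RESERVED_RATE + overflow * PAYG_RATE

# A 730-hour reservation covers the baseline; a 70-hour surge bills at
# the pay-as-you-go rate.
print(round(monthly_cost(800, 730), 2))
```

The steady baseline gets the discount, while the surge above the reservation simply bills at the regular rate, which is exactly the flexibility described above.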
Maintenance
The flexibility of the cloud makes it possible to rapidly adjust resources
based on demand. Using resource groups can help keep all of your resources
organized. In order to control costs, it’s important to maintain your cloud
environment. For example, every time you provision a VM, additional
resources such as storage and networking are also provisioned. If you
deprovision the VM, those additional resources may not deprovision at the
same time, either intentionally or unintentionally. By keeping an eye on your
resources and making sure you’re not keeping around resources that are no
longer needed, you can help control cloud costs.
Geography
When you provision most resources in Azure, you need to define a region
where the resource deploys. Azure infrastructure is distributed globally,
which enables you to deploy your services centrally or closest to your
customers, or something in between. With this global deployment comes
global pricing differences. The cost of power, labor, taxes, and fees vary
depending on the location. Due to these variations, Azure resources can
differ in costs to deploy depending on the region.
Network traffic is also impacted based on geography. For example, it’s less
expensive to move information within Europe than to move information from
Europe to Asia or South America.
Network traffic
Billing zones are a factor in determining the cost of some Azure services.
Subscription type
Some Azure subscription types also include usage allowances, which affect
costs.
Azure Marketplace
Azure Marketplace lets you purchase Azure-based solutions and services
from third-party vendors. This could be a server with software preinstalled
and configured, or managed network firewall appliances, or connectors to
third-party backup services. When you purchase products through Azure
Marketplace, you may pay for not only the Azure services that you’re using,
but also the services or expertise of the third-party vendor. Billing structures
are set by the vendor.
All solutions available in Azure Marketplace are certified and compliant with
Azure policies and standards. The certification policies may vary based on
the service or solution type and Azure service involved. The Commercial
marketplace certification policies documentation has additional information
on Azure Marketplace certifications.
Pricing calculator
The pricing calculator is designed to give you an estimated cost for
provisioning resources in Azure. You can get an estimate for individual
resources, build out a solution, or use an example scenario to see an
estimate of the Azure spend. The pricing calculator’s focus is on the cost of
provisioned resources in Azure.
Note
The Pricing calculator is for information purposes only. The prices are only an
estimate. Nothing is provisioned when you add resources to the pricing
calculator, and you won't be charged for any services you select.
With the pricing calculator, you can estimate the cost of any provisioned
resources, including compute, storage, and associated network costs. You
can even account for different storage options like storage type, access tier,
and redundancy.
TCO calculator
The TCO calculator is designed to help you compare the cost of running an
on-premises infrastructure with the cost of running the same infrastructure
on Azure. With
the TCO calculator, you enter your current infrastructure configuration,
including servers, databases, storage, and outbound network traffic. The TCO
calculator then compares the anticipated costs for your current environment
with an Azure environment supporting the same infrastructure requirements.
With the TCO calculator, you enter your configuration, add in assumptions
like power and IT labor costs, and are presented with an estimation of the
cost difference to run the same environment in your current datacenter or in
Azure.
Exercise - Estimate workload
costs by using the Pricing
calculator
In this exercise, you use the Pricing calculator to estimate the cost of running
a basic web application on Azure.
Note: The Pricing calculator is for information purposes only. The prices are
only an estimate, and you won't be charged for any services you select.
For a basic web application hosted in your datacenter, you might run a
configuration similar to the following.
Use Azure Virtual Machines instances, similar to the virtual machines used in
your datacenter.
Use Azure Application Gateway for load balancing.
Use Azure SQL Database to hold inventory and pricing information.
In practice, you would define your requirements in greater detail. But here
are some basic facts and requirements to get you started:
Tip: Make sure you have a clean calculator with nothing listed in the
estimate. You can reset the estimate by selecting the trash can icon next to
each item.
1. On the Products tab, select the service from each of these categories:
Category Service
Compute Virtual Machines
Databases Azure SQL Database
Networking Application Gateway
2. Scroll to the bottom of the page. Each service is listed with its default
configuration.
For Virtual Machines, use these settings:
Setting Value
Region West US
For Azure SQL Database, use these settings:
Setting Value
Region West US
Type Single Database
Backup storage tier RA-GRS
Purchase model vCore
Service tier General Purpose
Compute tier Provisioned
Generation Gen 5
Instance 8 vCore
For Application Gateway, use these settings:
Setting Value
Region West US
Tier Web Application Firewall
Size Medium
Gateway hours 2 x 730 Hours
Data processed 1 TB
Outbound data transfer 5 GB
You now have a cost estimate that you can share with your team. You can
make adjustments as you discover any changes to your requirements.
Experiment with some of the options you worked with here, or create a
purchase plan for a workload you want to run on Azure.
You'll need to investigate whether there are any potential cost savings in
moving your datacenter to the cloud over the next three years. You need to
take into account all of the potentially hidden costs involved with operating
on-premises and in the cloud.
Note: Remember, you don't need an Azure subscription to work with the
TCO Calculator.
You run two banks of virtual machines, with 50 VMs in each bank.
The first bank of VMs runs Windows Server under Hyper-V virtualization.
The second bank of VMs runs Linux under VMware virtualization.
There's also a storage area network (SAN) with 60 TB of disk storage.
You consume an estimated 15 TB of outbound network bandwidth each month.
There are also a number of databases involved, but for now, you'll omit those
details.
Specify these settings for the first bank (the Windows VMs):
Setting Value
VMs 50
Virtualization Hyper-V
Core(s) 8
RAM (GB) 16
Optimize by CPU
Windows Server 2008/2008 R2 Off
4. Select Add server workload to create a second row for your bank of
Linux VMs. Then specify these settings:
Setting Value
Name Servers: Linux VMs
Workload Windows/Linux Server
Environment Virtual Machines
Operating system Linux
VMs 50
Virtualization VMware
Core(s) 8
RAM (GB) 16
Optimize by CPU
Setting Value
Name Server Storage
Storage type Local Disk/SAN
Disk type HDD
Capacity 60 TB
Backup 120 TB
Archive 0 TB
7. Select Next.
Adjust assumptions
Here, you specify your currency. For brevity, you leave the remaining fields
at their default values.
In practice, you would adjust any cost assumptions and make any
adjustments to match your current on-premises environment.
1. At the top of the page, select your currency. This example uses US Dollar ($).
2. Select Next.
View the report
Take a moment to review the generated report.
Scroll to the summary at the bottom. You see a comparison of running your
workloads in the datacenter versus on Azure.
Great work. You now have the information that you can share with your Chief
Financial Officer. If you need to make adjustments, you can revisit the TCO
Calculator to generate a fresh report.
Cost analysis is a subset of Cost Management that provides a quick visual for
your Azure costs. Using cost analysis, you can quickly view the total cost in a
variety of different ways, including by billing cycle, region, resource, and so
on.
You use cost analysis to explore and analyze your organizational costs. You
can view aggregated costs by organization to understand where costs are
accrued and to identify spending trends. And you can see accumulated costs
over time to estimate monthly, quarterly, or even yearly cost trends against
a budget.
Cost alerts
Cost alerts provide a single location to quickly check on all of the different
alert types that may show up in the Cost Management service. The three
types of alerts that may show up are:
Budget alerts
Credit alerts
Department spending quota alerts
Budget alerts
Budget alerts notify you when spending, based on usage or cost, reaches or
exceeds the amount defined in the alert condition of the budget. Cost
Management budgets are created using the Azure portal or the Azure
Consumption API.
In the Azure portal, budgets are defined by cost. When you use the Azure
Consumption API, budgets can be defined by cost or by consumption usage.
Budget alerts support both cost-based and usage-based budgets. Budget alerts are
generated automatically whenever the budget alert conditions are met. You
can view all cost alerts in the Azure portal. Whenever an alert is generated, it
appears in cost alerts. An alert email is also sent to the people in the alert
recipients list of the budget.
Credit alerts
Credit alerts notify you when your Azure credit monetary commitments are
consumed. Monetary commitments are for organizations with Enterprise
Agreements (EAs). Credit alerts are generated automatically at 90% and at
100% of your Azure credit balance. Whenever an alert is generated, it's
reflected in cost alerts, and in the email sent to the account owners.
Budgets
A budget is where you set a spending limit for Azure. You can set budgets
based on a subscription, resource group, service type, or other criteria.
When you create a budget, you also set one or more alert conditions. When
spending reaches an alert condition, a budget alert is triggered and shows
up in the cost alerts area. If configured, budget alerts also send an email
notification that a threshold has been reached.
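The alert-condition check itself reduces to comparing spend against fractions of the budget. A toy sketch (the 50/90/100 percent thresholds are illustrative defaults, not fixed Azure values):

```python
# Toy budget-alert check: an alert fires when spend reaches or exceeds a
# threshold fraction of the budget. The 50/90/100 percent thresholds are
# illustrative, not fixed Azure values.

def triggered_alerts(spend, budget, thresholds=(0.5, 0.9, 1.0)):
    """Return the budget fractions whose alert condition has been met."""
    return [t for t in thresholds if spend >= budget * t]

print(triggered_alerts(950, 1000))    # 50% and 90% conditions met
print(triggered_alerts(1000, 1000))   # all three conditions met
```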
Resource management: Tags enable you to locate and act on resources that
are associated with specific workloads, environments, business units, and
owners.
Cost management and optimization: Tags enable you to group resources so
that you can report on costs, allocate internal cost centers, track budgets, and
forecast estimated cost.
Operations management: Tags enable you to group resources according to
how critical their availability is to your business. This grouping helps you
formulate service-level agreements (SLAs). An SLA is an uptime or
performance guarantee between you and your users.
Security: Tags enable you to classify data by its security level, such as public
or confidential.
Governance and regulatory compliance: Tags enable you to identify
resources that align with governance or regulatory compliance requirements,
such as ISO 27001. Tags can also be part of your standards enforcement
efforts. For example, you might require that all resources be tagged with an
owner or department name.
Workload optimization and automation: Tags can help you visualize all of
the resources that participate in complex deployments. For example, you
might tag a resource with its associated workload or application name and use
software such as Azure DevOps to perform automated tasks on those
resources.
You can use Azure Policy to enforce tagging rules and conventions. For
example, you can require that certain tags be added to new resources as
they're provisioned. You can also define rules that reapply tags that have
been removed. Resources don't inherit tags from subscriptions and resource
groups. You can apply tags at one level without those tags automatically
showing up at a different level, which lets you create custom tagging
schemas that vary by level (resource, resource group, subscription, and so
on).
A resource tag consists of a name and a value. You can assign one or more
tags to each Azure resource.
Name Value
AppName The name of the application that the resource is part of.
CostCenter The internal cost center code.
Owner The name of the business owner who's responsible for the resource.
Keep in mind that you don't need to enforce that a specific tag is present on
all of your resources. For example, you might decide that only mission-
critical resources have the Impact tag. All non-tagged resources then
wouldn't be considered mission-critical.
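Locating resources by tag, as described above, is essentially a filter over name/value pairs. A small sketch with hypothetical resource names and tag values:

```python
# Illustrative tag lookup: each resource carries name/value tags, and you
# can locate resources by tag. All names and tag values are hypothetical.

RESOURCES = [
    {"name": "vm-web",  "tags": {"CostCenter": "CC-100", "Impact": "mission-critical"}},
    {"name": "vm-test", "tags": {"CostCenter": "CC-200"}},
]

def with_tag(resources, tag_name, tag_value=None):
    """Names of resources that carry the tag (optionally a specific value)."""
    return [r["name"] for r in resources
            if tag_name in r["tags"]
            and (tag_value is None or r["tags"][tag_name] == tag_value)]

# Only mission-critical resources were given the Impact tag.
print(with_tag(RESOURCES, "Impact"))
print(with_tag(RESOURCES, "CostCenter", "CC-200"))
```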
Introduction
In this module, you’ll be introduced to some of the features and tools you
can use to help with governance of your Azure environment. You’ll also learn
about tools you can use to help keep resources in compliance with corporate
or regulatory requirements.
Learning objectives
After completing this module, you’ll be able to:
Microsoft Purview helps you stay up to date on your data landscape by
letting you:
Create an up-to-date map of your entire data estate that includes data
classification and end-to-end lineage.
Identify where sensitive data is stored in your estate.
Create a secure environment for data consumers to find valuable data.
Generate insights about how your data is stored and used.
Manage access to the data in your estate securely and at scale.
Azure Policy is a service in Azure that enables you to create, assign, and
manage policies that control or audit your resources. These policies enforce
different rules across your resource configurations so that those
configurations stay compliant with corporate standards.
Azure Policies can be set at each level, enabling you to set policies on a
specific resource, resource group, subscription, and so on. Additionally,
Azure Policies are inherited, so if you set a policy at a high level, it will
automatically be applied to all of the groupings that fall within the parent.
For example, if you set an Azure Policy on a resource group, all resources
created within that resource group will automatically receive the same
policy.
Azure Policy comes with built-in policy and initiative definitions for Storage,
Networking, Compute, Security Center, and Monitoring. For example, if you
define a policy that allows only a certain size for the virtual machines (VMs)
to be used in your environment, that policy is invoked when you create a
new VM and whenever you resize existing VMs. Azure Policy also evaluates
and monitors all current VMs in your environment, including VMs that were
created before the policy was created.
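The VM-size example can be modeled as an allowlist check that applies to existing resources as well as new ones. The size names and VM names here are purely illustrative:

```python
# Sketch of the VM-size policy example: only allowlisted sizes are
# compliant, and evaluation also covers VMs created before the policy
# existed. Size and VM names are illustrative.

ALLOWED_SIZES = {"Standard_B2s", "Standard_D2s_v5"}

def non_compliant(vms):
    """Return the names of VMs whose size isn't in the allowlist."""
    return [name for name, size in vms.items() if size not in ALLOWED_SIZES]

vms = {
    "vm-legacy": "Standard_D64s_v5",   # created before the policy
    "vm-new": "Standard_B2s",
}
print(non_compliant(vms))              # the pre-existing VM is still flagged
```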
Azure Policy also integrates with Azure DevOps by applying any continuous
integration and delivery pipeline policies that pertain to the pre-deployment
and post-deployment phases of your applications.
Even with Azure role-based access control (Azure RBAC) policies in place,
there's still a risk that people with the right level of access could delete
critical cloud resources. Resource locks prevent resources from being
deleted or updated, depending on the type of lock. Resource locks can be
applied to individual resources, resource groups, or even an entire
subscription. Resource locks are inherited, meaning that if you place a
resource lock on a resource group, all of the resources within the resource
group will also have the resource lock applied.
Delete means authorized users can still read and modify a resource, but
they can't delete the resource.
ReadOnly means authorized users can read a resource, but they can't
delete or update the resource. Applying this lock is similar to restricting
all authorized users to the permissions granted by the Reader role.
To view, add, or delete locks in the Azure portal, go to the Settings section
of any resource's pane.
To modify a locked resource, you must first remove the lock. After you
remove the lock, you can apply any action you have permissions to perform.
Resource locks apply regardless of RBAC permissions. Even if you're an
owner of the resource, you must still remove the lock before you can perform
the blocked activity.
This exercise is a Bring your own subscription exercise, meaning you’ll need
to provide your own Azure subscription to complete it. Don’t worry, though;
the entire exercise can be completed for free with the 12-month free
services you get when you sign up for an Azure account.
For help with signing up for an Azure account, see the Create an Azure
account learning module.
Once you’ve created your free account, follow the steps below. If you don’t
have an Azure account, you can review the steps to see the process for
adding a simple resource lock to a resource.
5. On the Basics tab of the Create storage account blade, fill in the
following information. Leave the defaults for everything else.
Resource group: Create new
Storage account name: enter a unique storage account name
Location: default
Performance: Standard
Redundancy: Locally redundant storage (LRS)
7. Once validated, select Create. Wait for the notification that the account
was successfully created.
8. Select Go to resource.
Task 2: Apply a read-only resource lock
In this task, you apply a read-only resource lock to the storage account.
What impact do you think it will have on the storage account?
1. Scroll down until you find the Settings section of the blade on the left of
the screen.
2. Select Locks.
3. Select + Add.
6. Select OK.
2. Select Containers.
3. Select + Container.
Note
The error message lets you know that you couldn't create a storage
container because a lock is in place. The read-only lock prevents any create
or update operations on the storage account, so you're unable to create a
storage container.
2. Select Locks.
6. Select Containers.
7. Select + Container.
You can now understand how the read-only lock prevented you from adding
a container to your storage account. Once the lock type was changed (you
could have removed it instead), you were able to add a container.
Task 5: Delete the storage account
You'll actually do this last task twice. Remember that there is a delete lock
on the storage account, so you won't actually be able to delete the storage
account yet.
1. Scroll up until you find Overview at the top of the blade on the left of the
screen.
2. Select Overview.
3. Select Delete.
You should get a notification letting you know you can't delete the resource
because it has a delete lock. In order to delete the storage account, you'll
need to remove the delete lock.
Task 6: Remove the delete lock and delete
the storage account
In the final task, you remove the resource lock and delete the storage
account from your Azure account. This step is important. You want to make
sure you don't have any idle resources sitting in your account.
1. Select your storage account name in the breadcrumb at the top of the
screen.
2. Scroll down until you find the Settings section of the blade on the left of
the screen.
3. Select Locks.
4. Select Delete.
8. Select Delete.
Important: Make sure you complete Task 6, the removal of the storage
account. You are solely responsible for the resources in your Azure account.
Make sure you clean up your account after completing this exercise.
Describe the purpose of the
Service Trust portal
The Microsoft Service Trust Portal provides access to content, tools, and
other resources about Microsoft security, privacy, and compliance practices.
The Service Trust Portal features and content are accessible from the main
menu. The categories on the main menu are:
Note: Service Trust Portal reports and documents are available to download
for at least 12 months after publishing or until a new version of the
document becomes available.
Introduction
This module introduces you to features and tools for managing and
deploying Azure resources. You learn about the Azure portal (a graphic
interface for managing Azure resources), the command line, and scripting
tools that help deploy or configure resources. You also learn about Azure
services that help you manage your on-premises and multicloud
environments from within Azure.
Learning objectives
After completing this module, you’ll be able to:
Azure portal
Azure PowerShell
Azure Command Line Interface (CLI)
Build, manage, and monitor everything from simple web apps to complex
cloud deployments
Create custom dashboards for an organized view of resources
Configure accessibility options for an optimal experience
Azure Cloud Shell is a browser-based shell tool that allows you to create,
configure, and manage Azure resources using a shell. Azure Cloud Shell
supports both Azure PowerShell and the Azure Command Line Interface (CLI),
which is a Bash shell.
You can access Azure Cloud Shell via the Azure portal by selecting the Cloud
Shell icon:
Azure Cloud Shell has several features that make it a unique offering to
support you in managing Azure. Some of those features are:
In addition to being available via Azure Cloud Shell, you can install and
configure Azure PowerShell on Windows, Linux, and Mac platforms.
The Azure CLI provides the same benefits of handling discrete tasks or
orchestrating complex operations through code. It’s also installable on
Windows, Linux, and Mac platforms, as well as through Azure Cloud Shell.
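As a brief sketch of the two tools side by side, the following commands list and create resource groups. Both assume an authenticated session (for example, in Azure Cloud Shell, or after running `az login` / `Connect-AzAccount` locally), and the resource group name and region are illustrative:

```shell
# Azure CLI (Bash): list resource groups in the current subscription
az group list --output table

# Azure CLI: create a resource group (name and region are illustrative)
az group create --name myResourceGroup --location eastus

# Azure PowerShell equivalents:
#   Get-AzResourceGroup
#   New-AzResourceGroup -Name myResourceGroup -Location eastus
```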
By utilizing Azure Resource Manager (ARM), Azure Arc lets you extend your
Azure compliance and monitoring to your hybrid and multicloud
configurations. Azure Arc simplifies governance and management by delivering
a consistent multicloud and on-premises management platform.
Servers
Kubernetes clusters
Azure data services
SQL Server
Virtual machines (preview)
When a user sends a request from any of the Azure tools, APIs, or SDKs, ARM
receives the request. ARM authenticates and authorizes the request. Then,
ARM sends the request to the Azure service, which takes the requested
action. You see consistent results and capabilities in all the different tools
because all requests are handled through the same API.
Infrastructure as code
Infrastructure as code is a concept where you manage your infrastructure as
lines of code. At an introductory level, it's things like using Azure Cloud Shell,
Azure PowerShell, or the Azure CLI to manage and configure your resources.
As you get more comfortable in the cloud, you can use the infrastructure as
code concept to manage entire deployments using repeatable templates and
configurations. ARM templates and Bicep are two examples of using
infrastructure as code with the Azure Resource Manager to maintain your
environment.
ARM templates
By using ARM templates, you can describe the resources you want to use in
a declarative JSON format. With an ARM template, the deployment code is
verified before any code is run. This ensures that the resources will be
created and connected correctly. The template then orchestrates the
creation of those resources in parallel. That is, if you need 50 instances of
the same resource, all 50 instances are created at the same time.
ARM templates provide many benefits when planning for deploying Azure
resources. Some of those benefits include:
Declarative syntax: ARM templates allow you to create and deploy an entire
Azure infrastructure declaratively. Declarative syntax means you declare what
you want to deploy but don’t need to write the actual programming commands
and sequence to deploy the resources.
Repeatable results: Repeatedly deploy your infrastructure throughout the
development lifecycle and have confidence your resources are deployed in a
consistent manner. You can use the same ARM template to deploy multiple
dev/test environments, knowing that all the environments are the same.
Orchestration: You don't have to worry about the complexities of ordering
operations. Azure Resource Manager orchestrates the deployment of
interdependent resources, so they're created in the correct order. When
possible, Azure Resource Manager deploys resources in parallel, so your
deployments finish faster than serial deployments. You deploy the template
through one command, rather than through multiple imperative commands.
Modular files: You can break your templates into smaller, reusable
components and link them together at deployment time. You can also nest one
template inside another template. For example, you could create a template
for a VM stack, and then nest that template inside of templates that deploy
entire environments, and that VM stack will consistently be deployed in each of
the environment templates.
Extensibility: With deployment scripts, you can add PowerShell or Bash
scripts to your templates. The deployment scripts extend your ability to set up
resources during deployment. A script can be included in the template or
stored in an external source and referenced in the template. Deployment
scripts give you the ability to complete your end-to-end environment setup in a
single ARM template.
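The declarative JSON format described above can be illustrated with a minimal ARM template. This is a sketch; the parameter name and API version are illustrative, and deploying it requires an existing resource group:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageName": {
      "type": "string",
      "metadata": { "description": "Globally unique storage account name" }
    }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2022-09-01",
      "name": "[parameters('storageName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```

You deploy the whole template through one command, for example `az deployment group create --resource-group <group> --template-file template.json`.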
Bicep
Bicep is a language that uses declarative syntax to deploy Azure resources.
A Bicep file defines the infrastructure and configuration, and it provides
the same capabilities as ARM templates with a simpler, more concise syntax.
Some of the benefits of Bicep include:
Support for all resource types and API versions: Bicep immediately
supports all preview and GA versions for Azure services. As soon as a resource
provider introduces new resource types and API versions, you can use them in
your Bicep file. You don't have to wait for tools to be updated before using the
new services.
Simple syntax: When compared to the equivalent JSON template, Bicep files
are more concise and easier to read. Bicep requires no previous knowledge of
programming languages. Bicep syntax is declarative and specifies which
resources and resource properties you want to deploy.
Repeatable results: Repeatedly deploy your infrastructure throughout the
development lifecycle and have confidence your resources are deployed in a
consistent manner. Bicep files are idempotent, which means you can deploy
the same file many times and get the same resource types in the same state.
You can develop one file that represents the desired state, rather than
developing lots of separate files to represent updates.
Orchestration: You don't have to worry about the complexities of ordering
operations. Resource Manager orchestrates the deployment of interdependent
resources so they're created in the correct order. When possible, Resource
Manager deploys resources in parallel so your deployments finish faster than
serial deployments. You deploy the file through one command, rather than
through multiple imperative commands.
Modularity: You can break your Bicep code into manageable parts by using
modules. The module deploys a set of related resources. Modules enable you
to reuse code and simplify development. Add the module to a Bicep file
anytime you need to deploy those resources.
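For comparison, here is a sketch of the same kind of storage account deployment written in Bicep; the symbolic name and parameters are illustrative:

```bicep
// Deploys a storage account; equivalent ARM JSON is compiled from this file.
param storageName string
param location string = resourceGroup().location

resource storage 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: storageName
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
```

Notice how much shorter this is than the equivalent JSON template: there is no $schema, contentVersion, or bracketed expression syntax to maintain.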
Introduction
In this module, you’ll be introduced to tools that help you monitor your
environment and applications, both in Azure and in on-premises or
multicloud environments.
Learning objectives
After completing this module, you’ll be able to:
The recommendations are available via the Azure portal and the API, and you
can set up notifications to alert you to new recommendations.
When you're in the Azure portal, the Advisor dashboard displays personalized
recommendations for all your subscriptions. You can use filters to select
recommendations for specific subscriptions, resource groups, or services.
The recommendations are divided into five categories:
By using Azure status, Service health, and Resource health, Azure Service
Health gives you a complete view of your Azure environment, all the way
from the global status of Azure services and regions down to specific
resources. Additionally, historical alerts are stored and accessible for later
review. Something you initially thought was a simple anomaly but turned out
to be a trend can readily be reviewed and investigated thanks to the
historical alerts.
The following diagram illustrates just how comprehensive Azure Monitor is:
On the left is a list of the sources of logging and metric data that can be
collected at every layer in your application architecture, from application to
operating system and network.
In the center, the logging and metric data are stored in central repositories.
On the right, the data is used in several ways. You can view real-time and
historical performance across each layer of your architecture or aggregated
and detailed information. The data is displayed at different levels for
different audiences. You can view high-level reports on the Azure Monitor
Dashboard or create custom views by using Power BI and Kusto queries.
Additionally, you can use the data to help you react to critical events in real
time, through alerts delivered to teams via SMS, email, and so on. Or you
can use thresholds to trigger autoscaling functionality to scale to meet the
demand.
Azure Monitor Alerts use action groups to configure who to notify and what
action to take. An action group is simply a collection of notification and
action preferences that you associate with one or multiple alerts. Azure
Monitor, Service Health, and Azure Advisor all use action groups to notify
you when an alert has been triggered.
Application Insights
Application Insights, an Azure Monitor feature, monitors your web
applications. Application Insights is capable of monitoring applications that
are running in Azure, on-premises, or in a different cloud environment.
There are two ways to configure Application Insights to help monitor your
application. You can either install an SDK in your application, or you can use
the Application Insights agent. The Application Insights agent is supported in
C#.NET, VB.NET, Java, JavaScript, Node.js, and Python.
Not only does Application Insights help you monitor the performance of your
application, but you can also configure it to periodically send synthetic
requests to your application, allowing you to check the status and monitor
your application even during periods of low activity.