Azure
behind a public load balancer. The web tier is integrated with a middle tier that resides on-premises via
APIs. The application uses PostgreSQL (RDS) in Multi-AZ mode to handle static and configuration
data for fast rendering of pages. In addition, an ElastiCache Redis instance is used as part of a Ruby
gem. The static content for this service is served via CloudFront, with S3 as the data storage for the
content.
Worked on a POC for setting up a data lake analytics platform utilizing Azure Data Factory (data ingestion),
Azure Databricks (Apache Spark-based analytics platform), Azure Data Lake Storage Gen2, and Power BI
Virtual machines
Containers
Azure App Service
Serverless computing
Availability sets
Virtual Machine Scale Sets
Azure Batch
Up to three fault domains that each have a server rack with dedicated power and
network resources
Five logical update domains, which can then be increased to a maximum of 20
What is Azure Batch?
Azure Batch enables large-scale job scheduling and compute management with the
ability to scale to tens, hundreds, or thousands of VMs.
There may be situations in which you need raw computing power or supercomputer
level compute power. Azure provides these capabilities.
In this module, you'll learn about using Azure Batch to create and run parallel tasks with the
Azure CLI, and how to use the CLI to check the status of Batch jobs and tasks. This module also
describes how to use the standalone Batch Explorer tool to monitor ongoing jobs.
https://docs.microsoft.com/en-us/learn/modules/run-parallel-tasks-in-azure-batch-with-the-azure-cli/8-knowledge-check
To create a starter web application, we'll use Maven, a commonly used project
management and build tool for Java apps. Maven includes a feature
called archetypes that can quickly create starter code for different kinds of applications.
We can use the maven-archetype-webapp template to generate the code for a simple web
app that displays "Hello World!" on its homepage.
Run these commands in the Cloud Shell now to create a new web app:
cd ~
# groupId is illustrative; the artifactId must be helloworld so the package is helloworld.war
mvn archetype:generate \
    -DgroupId=example.demo \
    -DartifactId=helloworld \
    -DarchetypeArtifactId=maven-archetype-webapp \
    -DinteractiveMode=false
cd helloworld
mvn package
When the command finishes running, if you change to the target directory and run ls, you'll
see a file listed called helloworld.war. This is the web application package that we will deploy to
App Service.
Now, let's see how we can deploy our application to App Service.
Automated deployment
Azure supports automated deployment directly from several sources. The following
options are available:
Azure DevOps: You can push your code to Azure DevOps (previously known as Visual
Studio Team Services), build your code in the cloud, run the tests, generate a release from
the code, and finally, push your code to an Azure Web App.
GitHub: Azure supports automated deployment directly from GitHub. When you connect
your GitHub repository to Azure for automated deployment, any changes you push to
your production branch on GitHub will be automatically deployed for you.
Bitbucket: With its similarities to GitHub, you can configure an automated deployment
with Bitbucket.
OneDrive: Microsoft's cloud-based storage. You must have a Microsoft Account linked to
a OneDrive account to deploy to Azure.
Dropbox: Azure supports deployment from Dropbox, which is a popular cloud-based
storage system that is similar to OneDrive.
Manual deployment
There are a few options that you can use to manually push your code to Azure:
Git: App Service web apps feature a Git URL that you can add as a remote repository.
Pushing to the remote repository will deploy your app.
az webapp up: webapp up is a feature of the az command-line interface that packages
your app and deploys it. Unlike other deployment methods, az webapp up can create a
new App Service web app for you if you haven't already created one (see the sketch after this list).
Zipdeploy: Use az webapp deployment source config-zip to send a ZIP of your
application files to App Service. Zipdeploy can also be accessed via basic HTTP utilities
such as curl.
Visual Studio: Visual Studio features an App Service deployment wizard that can walk you
through the deployment process.
FTP/S: FTP or FTPS is a traditional way of pushing your code to many hosting
environments, including App Service.
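To illustrate the az webapp up option above, here's a minimal sketch (hypothetical app and resource group names; --sku F1 picks the free tier):
Azure CLI
# Package and deploy the app in the current directory; creates the App
# Service plan and web app if they don't already exist
az webapp up --name my-helloworld-app --resource-group my-rg --sku F1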
Some App Service deployment techniques, including the one we'll use here, require a
username and password that are separate from your Azure login. Every web app comes
preconfigured with its own username and a password that can be reset to a new
random value, but can't be changed to something you choose.
Instead of finding those credentials for each one of your apps and storing them
somewhere, you can use an App Service feature called User Deployment Credentials to
create your own username and password. The values you choose will work for
deployments on all App Service web apps that you have permissions to, including new
web apps that you create in the future. The username and password you select are tied
to your Azure login and intended only for your use, so don't share them with others.
You can change both the username and password at any time.
The easiest way to create deployment credentials is from the Azure CLI. Run the
following command in the Cloud Shell to set them up,
substituting [username] and [password] with values you choose.
Azure CLI
az webapp deployment user set --user-name [username] --password [password]
Console
cd ~/helloworld/target
curl -v -X POST -u [username]:[password] https://[sitename].scm.azurewebsites.net/api/wardeploy --data-binary @helloworld.war
When the command finishes running, open a new browser tab and navigate
to https://[web_app_name].azurewebsites.net . You'll see the greeting message from your
app — you've deployed successfully!
1. If you do not already have Python installed, download and install Python
3.7: https://www.python.org/downloads/.
2. Download and install Jupyter Notebook: http://jupyter.org/install. Follow the instructions for
"Installing Jupyter with pip", and use the commands under the section for Python 3.
3. Download and install Turi Create: https://github.com/apple/turicreate#installation. Note: it is not
required that you use virtualenv, but it might be helpful, especially if you run into
installation issues due to conflicting versions of software.
When you deploy a VPN gateway, you specify the VPN type: either policy-based or route-based. The
main difference of these two types of VPNs is how traffic to be encrypted is specified
Policy-based VPNs
Policy-based VPN gateways specify statically the IP address of packets that should be encrypted through
each tunnel. This type of device evaluates every data packet against those sets of IP addresses to choose
the tunnel where that packet is going to be sent through.
Route-based VPNs
Key features of route-based VPN gateways in Azure include:
Supports IKEv2.
Uses any-to-any (wildcard) traffic selectors.
Can use dynamic routing protocols, where routing/forwarding tables direct traffic to different
IPSec tunnels. In this case, the source and destination networks are not statically defined as they
are in policy-based VPNs or even in route-based VPNs with static routing. Instead, data packets
are encrypted based on network routing tables that are created dynamically using routing
protocols such as BGP (Border Gateway Protocol).
Both types of VPN gateways (route-based and policy-based) in Azure use pre-shared key as the only
method of authentication. Both types also rely on Internet Key Exchange (IKE) in either version 1 or
version 2 and Internet Protocol Security (IPSec). IKE is used to set up a security association (an
agreement of the encryption) between two endpoints. This association is then passed to the IPSec suite,
which encrypts and decrypts data packets encapsulated in the VPN tunnel.
Note: The Basic VPN gateway SKU should only be used for Dev/Test workloads. In addition,
migrating from Basic to the VpnGw1/2/3/Az SKUs at a later time is not supported; you must
remove the gateway and redeploy.
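When deploying a gateway from the CLI, the VPN type is set with the --vpn-type flag. A minimal sketch with hypothetical names (assumes the virtual network already contains a GatewaySubnet and that a public IP named MyGwIP exists):
Azure CLI
# Deploy a route-based VPN gateway (provisioning can take 30+ minutes)
az network vnet-gateway create \
    --resource-group MyRG \
    --name MyVpnGateway \
    --vnet MyVNet \
    --public-ip-address MyGwIP \
    --gateway-type Vpn \
    --vpn-type RouteBased \
    --sku VpnGw1 \
    --no-wait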
CloudExchange co-location
Point-to-point Ethernet connection
Any-to-any connection
Security considerations
With ExpressRoute, your data doesn’t travel over the public internet, so it's not exposed to the potential
risks associated with internet communications. ExpressRoute is a private connection from your on-
premises infrastructure to your Azure infrastructure. Even if you have an ExpressRoute connection, DNS
queries, certificate revocation list checking, and Azure Content Delivery Network requests are still sent
over the public internet.
1.
It's a service that provides a VPN connection between on-premises and the Microsoft cloud.
It's a service that provides a direct connection from your on-premises datacenter to the Microsoft cloud.
It's a service that provides a site-to-site VPN connection between your on-premises network and the
Microsoft cloud.
2.
Redundant connectivity
ExpressRoute works by peering your on-premises networks with networks running in the Microsoft
cloud. Resources on your networks can communicate directly with resources hosted by Microsoft. To
support these peerings, ExpressRoute has a number of network and routing requirements:
Ensure that BGP sessions for routing domains have been configured. Depending on your
partner, this might be their or your responsibility. Additionally, for each ExpressRoute circuit,
Microsoft requires redundant BGP sessions between Microsoft’s routers and your peering
routers.
You or your providers need to translate the private IP addresses used on-premises to public IP
addresses by using a NAT service. Microsoft will reject anything except public IP addresses
through Microsoft peering.
Reserve several blocks of IP addresses in your network for routing traffic to the Microsoft cloud.
You configure these blocks as either a /29 subnet or two /30 subnets in your IP address space.
One of these subnets is used to configure the primary circuit to the Microsoft cloud, and the
other implements a secondary circuit. You use the first address in these subnets to
communicate with services in the Microsoft cloud. Microsoft uses the second address to
establish a BGP session. For example, a reserved 192.168.15.0/29 block splits into
192.168.15.0/30 for the primary link and 192.168.15.4/30 for the secondary link.
Use private peering to connect to Azure IaaS and PaaS services deployed inside Azure virtual
networks. The resources that you access must all be located in one or more Azure virtual
networks with private IP addresses. You can't access resources through their public IP address
over a private peering.
Use Microsoft peering to connect to Azure PaaS services, Office 365 services, and Dynamics 365.
The ExpressRoute circuit page (shown earlier) lists each peering and its properties. You can select a
peering to configure these properties.
Peer ASN. The autonomous system number for your side of the peering. This ASN can be public
or private, and 16 bits or 32 bits.
Primary subnet. This is the address range of the primary /30 subnet that you created in your
network. You'll use the first IP address in this subnet for your router. Microsoft uses the second
for its router.
Secondary subnet. This is the address range of your secondary /30 subnet. This subnet provides
a secondary link to Microsoft. The first two addresses are used to hold the IP address of your
router and the Microsoft router.
VLAN ID. This is the VLAN on which to establish the peering. The primary and secondary links
will both use this VLAN ID.
Shared key. This is an optional MD5 hash that's used to encode messages passing over the
circuit.
Advertised public prefixes. This is a list of the address prefixes that you use over the BGP
session. These prefixes must be registered to you, and must be prefixes for public address
ranges.
Customer ASN. This is optional. It's the client-side autonomous system number to use if you are
advertising prefixes that aren't registered to the peer ASN.
Routing registry name. This name identifies the registry in which the customer ASN and public
prefixes are registered.
Before you can connect to a private circuit, you must create an Azure virtual network gateway by using a
subnet on one of your Azure virtual networks. The virtual network gateway provides the entry point to
network traffic that enters from your on-premises network. It directs incoming traffic through the virtual
network to your Azure resources.
You can configure network security groups and firewall rules to control the traffic that's routed from
your on-premises network. You can also block requests from unauthorized addresses in your on-
premises network.
Up to 10 virtual networks can be linked to an ExpressRoute circuit, but these virtual networks must be in
the same geopolitical region as the ExpressRoute circuit. You can link a single virtual network to four
ExpressRoute circuits if necessary. The ExpressRoute circuit can be in the same subscription as the
virtual network, or in a different one.
If you're using the Azure portal, you connect a peering to a virtual network gateway as follows:
You enroll your subscription with Microsoft to activate ExpressRoute Direct. For more information, visit
the ExpressRoute article in the "Learn more" section at the end of this module.
ExpressRoute Direct supports FastPath. When FastPath is enabled, it sends network traffic directly to a
virtual machine that's the intended destination. The traffic bypasses the virtual network gateway,
improving the performance between Azure virtual networks and on-premises networks.
FastPath doesn't support virtual network peering (where you have virtual networks connected
together). It also doesn't support user-defined routes on the gateway subnet.
With ExpressRoute enabled, you can connect to Microsoft through one of several peering connections
and have access to regions within the same geopolitical region. For example, if you connect to Microsoft
through ExpressRoute in France, you'll have access to all Microsoft services hosted in Western Europe.
You can also enable ExpressRoute Premium, which provides cross-region accessibility. For example, if
you access Microsoft through ExpressRoute in Germany, you'll have access to all Microsoft cloud
services in all regions globally.
You can also take advantage of a feature called ExpressRoute Global Reach. It allows you to exchange
data across all of your on-premises datacenters by connecting all of your ExpressRoute circuits.
Point-to-site is useful if you have only a few clients that need to connect to a virtual network.
Secure and isolate access to Azure resources by using network security groups and service endpoints
Identify the capabilities and features of network security groups.
Identify the capabilities and features of virtual network service endpoints.
Use network security groups to restrict network connectivity.
Use virtual network service endpoints to control network traffic to and from Azure services.
multiple IP addresses
multiple ports
service tags
application security groups
Suppose your company wants to restrict access to resources in your datacenter, spread across several
network address ranges. With augmented rules, you can add all these ranges into a single rule, reducing
the administrative overhead and complexity in your network security groups.
You can't create or delete system routes. But you can override the system routes by adding custom
routes to control traffic flow to the next hop.
The Next hop type column shows the network path taken by traffic sent to each address prefix. The path
can be one of the following hop types:
Virtual network: A route is created for each address prefix. The prefix represents each address
range created at the virtual-network level. If multiple address ranges are specified, a separate
route is created for each address range.
Internet: The default system route 0.0.0.0/0 routes any address range to the internet, unless
you override Azure's default route with a custom route.
None: Any traffic routed to this hop type is dropped and doesn't get routed outside the subnet.
By default, the following IPv4 private-address prefixes are created: 10.0.0.0/8, 172.16.0.0/12,
and 192.168.0.0/16. The prefix 100.64.0.0/10 for a shared address space is also added. None of
these address ranges are globally routable.
Within Azure, there are additional system routes. Azure will create these routes if the following
capabilities are enabled:
Custom routes
System routes might make it easy for you to quickly get your environment up and running. But there are
many scenarios in which you'll want to more closely control the traffic flow within your network. For
example, you might want to route traffic through an NVA or through a firewall from partners and others.
This control is possible with custom routes.
You have two options for implementing custom routes: create a user-defined route or use Border
Gateway Protocol (BGP) to exchange routes between Azure and on-premises networks.
User-defined routes
You use a user-defined route to override the default system routes so that traffic can be routed through
firewalls or NVAs.
For example, you might have a network with two subnets and want to add a virtual machine in the
perimeter network to be used as a firewall. You create a user-defined route so that traffic passes
through the firewall and doesn't go directly between the subnets.
When creating user-defined routes, you can specify these next hop types:
Virtual appliance: A virtual appliance is typically a firewall device used to analyze or filter traffic
that is entering or leaving your network. You can specify the private IP address of a NIC attached
to a virtual machine so that IP forwarding can be enabled. Or you can provide the private IP
address of an internal load balancer.
Virtual network gateway: Use to indicate when you want routes for a specific address to be
routed to a virtual network gateway. The virtual network gateway is specified as a VPN for the
next hop type.
Virtual network: Use to override the default system route within a virtual network.
Internet: Use to route traffic to a specified address prefix that is routed to the internet.
None: Use to drop traffic sent to a specified address prefix.
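For the perimeter-network firewall example above, a user-defined route could be sketched from the CLI as follows (hypothetical names; 10.0.2.4 stands in for the NVA's private IP):
Azure CLI
# Create a route table and send all outbound traffic through the NVA
az network route-table create --resource-group MyRG --name MyRouteTable
az network route-table route create \
    --resource-group MyRG \
    --route-table-name MyRouteTable \
    --name ToFirewall \
    --address-prefix 0.0.0.0/0 \
    --next-hop-type VirtualAppliance \
    --next-hop-ip-address 10.0.2.4
# Associate the route table with the subnet whose traffic should be inspected
az network vnet subnet update \
    --resource-group MyRG \
    --vnet-name MyVNet \
    --name MySubnet \
    --route-table MyRouteTable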
With user-defined routes, you can't specify the next hop types VirtualNetworkServiceEndpoint
(used for service endpoints) or VirtualNetworkPeering (used for virtual network peering); Azure
creates routes with these hop types automatically.
The longer the route prefix, the shorter the list of IP addresses available through that prefix. By using
longer prefixes, the routing algorithm can select the intended address more quickly.
You can't configure multiple user-defined routes with the same address prefix.
If multiple routes share the same address prefix, Azure selects the route based on its type in the
following order of priority:
1. User-defined routes
2. BGP routes
3. System routes
"$schema":
"http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "",
"parameters": { },
"variables": { },
"functions": [ ],
"resources": [ ],
"outputs": { }
Parameters
"parameters": {
"adminUsername": {
"type": "string",
"metadata": {
},
"adminPassword": {
"type": "securestring",
"metadata": {
"description": "Password for the Virtual Machine."
Variables
"variables": {
"nicName": "myVMNic",
"addressPrefix": "10.0.0.0/16",
"subnetName": "Subnet",
"subnetPrefix": "10.0.0.0/24",
"publicIPAddressName": "myPublicIP",
"virtualNetworkName": "MyVNET"
Functions
"functions": [
"namespace": "contoso",
"members": {
"uniqueName": {
"parameters": [
"name": "namePrefix",
"type": "string"
],
"output": {
"type": "string",
],
Resources
"resources": [
"type": "Microsoft.Network/publicIPAddresses",
"name": "[variables('publicIPAddressName')]",
"location": "[parameters('location')]",
"apiVersion": "2018-08-01",
"properties": {
"publicIPAllocationMethod": "Dynamic",
"dnsSettings": {
"domainNameLabel": "[parameters('dnsLabelPrefix')]"
],
Outputs
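A minimal outputs sketch, consistent with the variables above (it returns the public IP's FQDN):
"outputs": {
  "hostname": {
    "type": "string",
    "value": "[reference(variables('publicIPAddressName')).dnsSettings.fqdn]"
  }
}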
Microsoft Azure provides several different ways to host and execute code or workflows
without using Virtual Machines (VMs) including Azure Functions, Microsoft Power
Automate, Azure Logic Apps, and Azure WebJobs. In this module, you will learn about
these technologies and how to choose the right one for a given scenario.
Business processes modeled in software are often called workflows. Azure includes four
different technologies that you can use to build and implement workflows that integrate
multiple systems:
Logic Apps
Microsoft Power Automate
WebJobs
Azure Functions
Design-first technologies
When business analysts discuss and plan a business process, they may draw a flow
diagram on paper. With Logic Apps and Microsoft Power Automate, you can take a
similar approach to designing a workflow. They both include user interfaces in which
you can draw out the workflow. We call this approach a design-first approach.
Microsoft Power Automate is a service that you can use to create workflows even when
you have no development or IT Pro experience. You can create workflows that integrate
and orchestrate many different components by using the website or the Microsoft
Power Automate mobile app.
There are four different types of flow that you can create:
Automated: A flow that is started by a trigger from some event. For example, the
event could be the arrival of a new tweet or a new file being uploaded.
Button: Use a button flow to run a repetitive task with a single click from your
mobile device.
Scheduled: A flow that executes on a regular basis such as once a week, on a
specific date, or after 10 hours.
Business process: A flow that models a business process such as the stock
ordering process or the complaints procedure.
Code-first technologies
The developers on your team will likely prefer to write code when they want to
orchestrate and integrate different business applications into a single workflow. This is
the case when you need more control over the performance of your workflow or need
to write custom code as part of the business process. For such people, Azure includes
WebJobs and Functions.
WebJobs are a part of the Azure App Service that you can use to run a program or script
automatically. There are two kinds of WebJob:
Continuous. These WebJobs run in a continuous loop. For example, you could
use a continuous WebJob to check a shared folder for a new photo.
Triggered. These WebJobs run when you manually start them or on a schedule.
The WebJobs SDK only supports C# and the NuGet package manager.
When you create an Azure Function, you can start by writing the code for it in the portal.
Alternatively, if you need source code management, you can use GitHub or Azure
DevOps Services.
To create an Azure Function, choose from the range of templates. The following list is an
example of some of the templates available to you.
HTTPTrigger. Use this template when you want the code to execute in response
to a request sent through the HTTP protocol.
TimerTrigger. Use this template when you want the code to execute according
to a schedule.
BlobTrigger. Use this template when you want the code to execute when a new
blob is added to an Azure Storage account.
CosmosDBTrigger. Use this template when you want the code to execute in
response to new or updated documents in a NoSQL database.
Azure Functions can integrate with many different services both within Azure and from
third parties. These services can trigger your function, or send data input to your
function, or receive data output from your function.
There are several HPC and batch processing choices available on Azure. You talk with an
Azure expert who advises you to focus on three options: Azure Batch, Azure VM HPC
Instances, and Microsoft HPC Pack.
In a series of 100 tasks and 10 nodes, for example, Batch schedules the first 10 tasks
onto those 10 nodes. Batch immediately allocates later tasks when nodes finish
processing. For spiky workloads, you can configure scaling rules, which Batch also
handles automatically. If you provision 100 VMs with no Batch context, you must code
these scheduling and work allocation mechanisms by hand.
1.
You're trying to provision several H-series Azure VMs in the Azure portal to solve some
complex financial equations. How can you resolve the errors you are receiving?
Use the Azure Virtual Machines pricing detail page to ensure that the problem you want
to solve is supported by the kind of VM you're trying to deploy.
Tell Azure that your subscription needs to support a greater number of cores than is
allowed by default.
Because H-series VMs use large numbers of cores, you can quickly reach the limit
for your subscription. Open a support request to increase this limit.
2.
You want to deploy an HB-series VM for a weather modeling startup. However, this type
of VM doesn't appear as an option in the portal. What should you check?
If you need more flexible control of your high-performance infrastructure, or you want
to manage both cloud and on-premises VMs, consider using the Microsoft HPC Pack.
In researching options for the engineering organization, you've looked at Azure Batch
and Azure HPC Instances. But what if you want to have full control of the management
and scheduling of your clusters of VMs? What if you have significant investment in on-
premises infrastructure in your datacenter? HPC Pack offers a series of installers for
Windows that allow you to configure your own control and management plane, and
highly flexible deployments of on-premises and cloud nodes. By contrast with the
exclusively cloud-based Batch, HPC Pack has the flexibility to deploy to on-premises
and the cloud. It uses a hybrid of both to expand to the cloud when your on-premises
reserves are insufficient.
HPC Pack enables you to manage both on-premises and cloud infrastructure.
You don't want to be responsible for deciding how to optimize the way the HPC work
gets allocated.
You're trying to set up an HPC Pack topology, starting with on-premises resources.
Which version of Windows Server should you use for the head node?
You can use Windows Server 2012 or any later version for the head node.
Think of Microsoft HPC Pack as a version of the Batch management and scheduling
control layer, over which you have full control, and for which you have responsibility.
Deployment of HPC Pack requires Windows Server 2012 or later, and takes careful
consideration to implement.
1.
You want to be able to expand from on-premises into the cloud as needed.
You need the most powerful VMs available on Azure, and the InfiniBand networking
they offer.
2.
You've got a problem that requires you to use 3D Studio Max. You want the flexibility to
pay the licensing fees on demand. What's the best Azure solution for this task?
Azure Batch
Batch also lets you use some of the most important 3D rendering packages, like
Maya, 3D Studio Max, and Chaos V-Ray.
Azure Batch is an Azure service that enables you to run large-scale parallel and high-
performance computing (HPC) applications efficiently in the cloud. There's no need to
manage or configure infrastructure. Just schedule the job, allocate the resources you
need, and let Batch take care of the rest.
In this module, you'll learn about using Azure Batch to create and run parallel tasks with
the Azure CLI, and how to use the CLI to check the status of Batch jobs and tasks. This
module also describes how to use the standalone Batch Explorer tool to monitor
ongoing jobs.
A sample parallel task
To get to grips with Azure Batch and the CLI, you decide on a simple proof-of-concept
to demonstrate the different nodes working together in parallel. You will loop a number
of times in the CLI, add a numbered task per iteration of the loop, and later download
and look at the metadata generated by each task. This metadata will show the Azure
Batch service scheduling tasks as they are created onto different nodes in sequential
fashion, so that they all execute their work in parallel.
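A minimal CLI sketch of that proof-of-concept (hypothetical pool and job names; assumes a Batch account exists and you've signed in with az batch account login):
Azure CLI
# Pool of three small Ubuntu nodes
az batch pool create \
    --id mypool \
    --vm-size Standard_A1_v2 \
    --target-dedicated-nodes 3 \
    --image canonical:ubuntuserver:18.04-lts \
    --node-agent-sku-id "batch.node.ubuntu 18.04"
# Job that schedules tasks onto the pool
az batch job create --id myjob --pool-id mypool
# Loop in the CLI, adding a numbered task per iteration
for i in {1..10}
do
    az batch task create \
        --job-id myjob \
        --task-id mytask$i \
        --command-line "bash -c 'printenv | grep AZ_BATCH; sleep 90s'"
done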
Once an Azure Batch account has been created, what is the next step to take in the
workflow of setting up tasks to run on Azure Batch?
Signing in to the created Batch account is a prerequisite before any other tasks can
be performed.
2.
Which component of Azure Batch allows tasks to be logically grouped together and
have settings in common configured?
An Azure Batch job allows tasks to be logically grouped together and have settings
in common configured.
Many resources can be moved between resource groups with some services having
specific limitations or requirements to move. Resource groups can't be nested. Before
any resource can be provisioned, you need a resource group for it to be placed in.
On the Azure portal menu or from the Home page, select Resource groups, and select
your newly created resource group. Note that you may also see a resource group called
NetworkWatcherRG. You can ignore this resource group; it's created automatically to
enable Network Watcher in Azure virtual networks.
A resource can have up to 50 tags. The name is limited to 512 characters for all types of
resources except storage accounts, which have a limit of 128 characters. The tag value is
limited to 256 characters for all types of resources. Tags aren't inherited from parent
resources. Not all resource types support tags, and tags can't be applied to classic
resources.
Tagging resources can also help in monitoring to track down impacted resources.
Monitoring systems could include tag data with alerts, giving you the ability to know
exactly who is impacted. In our example above, you applied the Department tag with a
value of Finance to the msftlearn-vnet1 resource. If an alarm was thrown on
msftlearn-vnet1 and the alarm included the tag, you'd know that the finance
department may be impacted by the condition that triggered the alarm. This contextual
information can be valuable if an issue occurs.
It's also common for tags to be used in automation. If you want to automate the
shutdown and startup of virtual machines in development environments during off-
hours to save costs, you can use tags to assist in this automation. Add a shutdown:6PM
and startup:7AM tag to the virtual machines, then create an automation job that looks
for these tags, and shuts them down or starts them up based on the tag value. There are
several solutions in the Azure Automation Runbooks Gallery that use tags in a similar
manner to accomplish this result.
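As a quick illustration, resources carrying such a tag can be listed from the CLI (the tag name matches the hypothetical shutdown:6PM example above):
Azure CLI
# List the IDs of all resources tagged shutdown=6PM
az resource list --tag shutdown=6PM --query "[].id" --output tsv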
You're organizing your resources better in resource groups, and you've applied tags to
your resources to use them in billing reports and in your monitoring solution. Resource
grouping and tagging have made a difference in the existing resources, but how do you
ensure that new resources follow the rules? You'll take a look at how policies can help
you enforce standards in your Azure environment.
These policies can enforce rules when resources are created, and can be evaluated
against existing resources to give visibility into compliance.
Policies can enforce things such as only allowing specific types of resources to be
created, or only allowing resources in specific Azure regions. You can enforce naming
conventions across your Azure environment. You can also enforce that specific tags are
applied to resources. You'll take a look at how policies work.
Use Resource Locks to ensure critical resources aren't modified or deleted (as you'll see
in the next unit).
Resource locks are a setting that can be applied to any resource to block modification or
deletion. Resource locks can be set to either Delete or Read-only. Delete will allow all
operations against the resource but block the ability to delete it. Read-only will only
allow read activities to be performed against it, blocking any modification or deletion of
the resource. Resource locks can be applied to subscriptions, resource groups, and
individual resources, and are inherited when applied at higher levels.
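A lock can be applied from the CLI along these lines (a minimal sketch with hypothetical names; the CLI names the two lock types CanNotDelete and ReadOnly):
Azure CLI
# Block deletion of everything in the resource group
az lock create \
    --name BlockDeletion \
    --lock-type CanNotDelete \
    --resource-group MyRG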
At first glance, it might seem like Azure Policy is a way to restrict access to specific
resource types similar to role-based access control (RBAC). However, they solve different
problems. RBAC focuses on user actions at different scopes. You might be added to the
contributor role for a resource group, allowing you to make changes to anything in that
resource group. Azure Policy focuses on resource properties during deployment and for
already-existing resources. Azure Policy controls properties such as the types or
locations of resources. Unlike RBAC, Azure Policy is a default-allow-and-explicit-deny
system.
"if": {
"allOf": [
"field": "type",
"equals": "Microsoft.Compute/virtualMachines"
},
"not": {
"field": "Microsoft.Compute/virtualMachines/sku.name",
"in": "[parameters('listOfAllowedSKUs')]"
},
"then": {
"effect": "Deny"
PowerShell
# Register the resource provider if it's not already registered
Register-AzResourceProvider -ProviderNamespace 'Microsoft.PolicyInsights'
Once we have registered the provider, we can create a policy assignment. For example,
here's a policy definition that identifies virtual machines not using managed disks.
PowerShell
# Get a reference to the resource group that will be the scope of the assignment
$rg = Get-AzResourceGroup -Name '<resourceGroupName>'
# Get a reference to the built-in policy definition that will be assigned
$definition = Get-AzPolicyDefinition | Where-Object { $_.Properties.DisplayName -eq 'Audit VMs that do not use managed disks' }
# Create the policy assignment with the built-in definition against your resource group
New-AzPolicyAssignment -Name 'audit-vm-manageddisks' -DisplayName 'Audit VMs without managed disks Assignment' -Scope $rg.ResourceId -PolicyDefinition $definition
Parameter - Description
Name - The actual name of the assignment. For this example, audit-vm-manageddisks was used.
DisplayName - Display name for the policy assignment. In this case, you're using Audit VMs without managed disks Assignment.
Definition - The policy definition, based on which you're using to create the assignment. In this case, it's the ID of policy definition Audit VMs that do not use managed disks.
Scope - A scope determines what resources or grouping of resources the policy assignment gets enforced on. It could range from a subscription to resource groups. Be sure to replace <scope> with the name of your resource group.
Important facts about management groups
Any Azure AD user in the organization can create a management group. The
creator is given an Owner role assignment.
A single Azure AD organization can support 10,000 management groups.
A management group tree can support up to six levels of depth not including the
Root level or subscription level.
Each management group can have many children.
When your organization creates subscriptions, they are automatically added to
the root management group.
1.
True or false: You can download published audit reports and other compliance-related
information related to Microsoft's cloud service from the Service Trust Portal.
True
You can download published audit reports and other compliance-related information
related to Microsoft’s cloud service from the Service Trust Portal.
False
2.
Which Azure service allows you to configure fine-grained access management for Azure
resources, enabling you to grant users only the rights they need to perform their jobs?
Locks
Policy
Initiatives
Role-based access control (RBAC) provides fine-grained access management for Azure
resources, enabling you to grant users only the rights they need to perform their jobs.
RBAC is provided at no additional cost to all Azure subscribers.
3.
Which Azure service allows you to create, assign, and manage policies to enforce
different rules and effects over your resources and stay compliant with your corporate
standards and service-level agreements (SLAs)?
Azure Policy
Azure Policy is a service in Azure that you use to create, assign, and manage policies.
These policies enforce different rules and effects over your resources, so those resources
stay compliant with your corporate standards and service-level agreements (SLAs).
Azure Blueprints
Which of the following services provides up-to-date status information about the health
of Azure services?
Compliance Manager
Azure Monitor
Azure Service Health is the correct answer, because it provides you with a global view of
the health of Azure services. With Azure Status, a component of Azure Service Health,
you can get up-to-the-minute information on service availability.
5.
Where can you obtain details about the personal data Microsoft processes, how
Microsoft processes it, and for what purposes?
You can obtain the details about how Microsoft uses personal data in the Microsoft
Privacy Statement.
Compliance Manager
Trust Center
In this module, you'll explore the monitoring solutions available in Azure. You'll assess
services such as Azure Security Center, Azure Application Insights, and Azure Sentinel, to
analyze infrastructure and applications. You'll also explore how Azure Monitor is used to
unify various monitoring solutions.
1.
Azure Security Center helps you secure your on-premises and cloud resources.
2.
How can you prevent persistent access to your virtual machines by using Azure Security
Center?
With just-in-time access, your virtual machines are only accessed based on rules that
you configure.
3.
Playbooks are automated procedures that you can run against alerts.
1.
You want to analyze and address problems that affect your cloud infrastructure's
security.
You want to analyze and address problems that affect your on-premises infrastructure's
security.
You want to analyze and address problems that affect your application's health.
You can analyze and address issues such as exceptions, failures, and availability
problems.
2.
How can you continuously monitor your applications from different geographic
locations?
Use availability tests to continuously monitor your application from different geographic
locations.
Availability tests let you monitor your application from multiple locations in the world.
Use Log Analytics to continuously monitor your application from different geographic
locations.
3.
Use the gate to stop deployment when an issue has been identified. Deployment will
continue automatically when the issue is resolved.
1.
You want to improve the development lifecycle for an application that spans across on-
premises and the cloud.
You want a detailed overview of your enterprise, potentially across multiple clouds and
on-premises locations.
Azure Sentinel will help monitor and respond to security threats across your entire
enterprise.
You want to be able to cross-query over data collected from multiple sources that span
on-premises and the cloud.
2.
Create an Azure Sentinel instance, and then add Azure Sentinel to a workspace.
Connect your data source, create a workspace, and then add Azure Sentinel to that
workspace.
3.
Sentinel has raised an incident. How can you investigate which users have been
affected?
Use the investigation map, drill down into the incident, and look for data sources.
Use the investigation map, drill down into the incident, and look for user entities
affected by the alert.
Use entities to view users that might have been in the path of a particular threat or
malicious activity.
Use the investigation map, drill down into the incident, and look for playbooks.
1.
You need to write queries to analyze your log data. How would you do this?
You can create and run queries on your logs and view results with Log Analytics.
2.
How can you automatically collect security-related data from all newly created virtual
machines into one central location?
3.
How can you analyze both security-related data and application performance data
together?
Use the Log Analytics agent to query Azure Security Center and Application Insights
workspaces together.
Use a cross-resource query to query Azure Security Center and Application Insights
workspaces together.
You use cross-resource querying to analyze the log data collected from separate
workspaces.
Use automatic provisioning to query Azure Security Center and Application Insights
workspaces together.
1.
2.
Database tables.
Create an Ubuntu Linux server with the stress test tool installed
cat <<EOF > cloud-init.txt
#cloud-config
package_upgrade: true
packages:
  - stress
runcmd:
EOF
az vm create \
--resource-group learn-f50f89f2-6d8d-4a01-8b7e-8b1023726b53 \
--name vm1 \
--image UbuntuLTS \
--custom-data cloud-init.txt \
--generate-ssh-keys
Get VM ID CLI
VMID=$(az vm show \
--resource-group learn-f50f89f2-6d8d-4a01-8b7e-8b1023726b53 \
--name vm1 \
--query id \
--output tsv)
-n "Cpu80PercentAlert" \
--resource-group learn-f50f89f2-6d8d-4a01-8b7e-8b1023726b53 \
--scopes $VMID \
--evaluation-frequency 1m \
--window-size 1m \
--severity 3
Three types of alerts:
1. Metric alerts
2. Log alerts
3. Activity log alerts
Specific operations: Apply to resources within your Azure subscription and often have a scope
with specific resources or a resource group. You use this type when you need to receive an alert
that reports a change to an aspect of your subscription. For example, you can receive an alert if
a virtual machine is deleted or new roles are assigned to a user.
Service health events: Include notice of incidents and maintenance of target resources.
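For the virtual-machine-deletion example above, an activity log alert could be sketched from the CLI like this (hypothetical names; in practice you'd also attach an action group so someone is notified):
Azure CLI
# Alert whenever a VM in the resource group is deleted
az monitor activity-log alert create \
    --name VmDeleteAlert \
    --resource-group MyRG \
    --condition category=Administrative and operationName=Microsoft.Compute/virtualMachines/delete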
1.
Correct.
2.
Failed
New
Acknowledged
Closed
Azure Monitor is a service for collecting and analyzing telemetry. It helps you get maximum
performance and availability for your cloud applications, and for your on-premises resources and
applications. It shows how your applications are performing and identifies any issues with them.
Because Azure Monitor is an automatic system, it begins to collect data from these sources as soon as
you create Azure resources such as virtual machines and web apps. You can extend the data that Azure
Monitor collects by:
Enabling diagnostics: For some resources, such as Azure SQL Database, you receive full
information about a resource only after you have enabled diagnostic logging for it. You can use
the Azure portal, the Azure CLI, or PowerShell to enable diagnostics (a sketch follows this list).
Adding an agent: For virtual machines, you can install the Log Analytics agent and configure it to
send data to a Log Analytics workspace. This agent increases the amount of information that's
sent to Azure Monitor.
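Enabling diagnostics can be scripted; a minimal sketch (placeholder IDs, and the SQLInsights log category is an assumption that varies by resource type):
Azure CLI
# Send a resource's diagnostic logs and metrics to a Log Analytics workspace
az monitor diagnostic-settings create \
    --name SendToWorkspace \
    --resource <resource-id> \
    --workspace <log-analytics-workspace-id> \
    --logs '[{"category": "SQLInsights", "enabled": true}]' \
    --metrics '[{"category": "AllMetrics", "enabled": true}]'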
1.
Data from a variety of sources, such as the application event log, the operating system (Windows and
Linux), Azure resources, and custom data sources
2.
Azure Monitor collects two types of data: metrics and logs. Metrics are numerical values that describe
some aspect of a system at a particular time. Logs contain different kinds of data, such as event
information, organized into records.
Blobs aren't limited to common file formats. A blob could contain gigabytes of binary
data streamed from a scientific instrument, an encrypted message for another
application, or data in a custom format for an app you're developing.
Azure Blob storage lets you stream large video or audio files directly to the user's
browser from anywhere in the world. Blob storage is also used to store data for backup,
disaster recovery, and archiving. It has the ability to store up to 8 TB of data for virtual
machines. The following illustration shows an example usage of Azure blob storage.
Azure Data Lake Storage combines the scalability and cost benefits of object storage
with the reliability and performance of the Big Data file system capabilities. The
following illustration shows how Azure Data Lake stores all your business data and
makes it available for analysis.
Data progresses through a flow diagram from ingest in its native format; prepare, where
data is cleansed, enriched, annotated, and schematized; store, where data is retained for
present and future analysis; then to analyze, where analytics engines like Hadoop and
Spark are used on the data. Data is shown ingested to Azure Data Lake Store from
devices, social media, LOB applications, web sites, relational databases, video,
Clickstream, and sensors. From there, it can be accessed with batch queries, interactive
queries, real-time analytics, machine learning, and data warehouse.
Disk types
When working with VMs, you can use standard SSD and HDD disks for less critical
workloads, and premium SSD disks for mission-critical production applications. Azure
Disks have consistently delivered enterprise-grade durability, with an industry-leading
0% annualized failure rate. The following illustration shows an Azure virtual machine
using separate disks to store different data.
Storage tiers
Azure offers three storage tiers for blob object storage:
1. Hot storage tier: optimized for storing data that is accessed frequently.
2. Cool storage tier: optimized for data that is infrequently accessed and stored
for at least 30 days.
3. Archive storage tier: for data that is rarely accessed and stored for at least 180
days, with flexible latency requirements.
Encryption for storage services
The following encryption types are available for your resources:
1. Azure Storage Service Encryption (SSE) for data at rest helps you secure your
data to meet the organization's security and regulatory compliance. It encrypts
the data before storing it and decrypts the data before returning it. The
encryption and decryption are transparent to the user.
2. Client-side encryption is where the data is already encrypted by the client
libraries. Azure stores the data in the encrypted state at rest, which is then
decrypted during retrieval.
Suppose you work at a startup with limited funding. Why might you prefer Azure data
storage over an on-premises solution?
To ensure you run on a specific brand of hardware, which will let you form a marketing
partnership with that hardware vendor.
The Azure pay-as-you-go billing model lets you avoid buying expensive hardware.
There are no large, up-front capital expenditures (CapEx) with Azure. You pay monthly
for only the services you use (OpEx).
2.
Which of the following situations would yield the most benefits from relocating an on-
premises data store to Azure?
Unpredictable storage demand that increases and decreases multiple times throughout
the year.
Azure data storage is flexible. You can quickly and easily add or remove capacity. You
can increase performance to handle spikes in load or decrease performance to reduce
costs. In all cases, you pay for only what you use.
3.
A newly released mobile app using Azure data storage has just been mentioned by a
celebrity on social media, seeing a huge spike in user volume. To meet the unexpected
new user demand, what feature of pay-as-you-go storage will be most beneficial?
As the user demand increases, the agility to deploy new servers or services as needed
can help scale to meet the increased user load.
What considerations you need to make when creating an Azure SQL database,
including:
o How a logical server acts as an administrative container for your
databases.
o The differences between purchasing models.
o How elastic pools enable you to share processing power among
databases.
o How collation rules affect how data is compared and sorted.
How to bring up Azure SQL Database from the portal.
How to add firewall rules so that your database is accessible from only trusted
sources.
You can control logins, firewall rules, and security policies through the logical server.
You can also override these policies on each database within the logical server.
Because your logical server can hold more than one database, there's also the idea of
eDTUs, or elastic Database Transaction Units. This option enables you to choose one
price, but allow each database in the pool to consume fewer or greater resources
depending on current load.
What are SQL elastic pools?
When you create your Azure SQL database, you can create a SQL elastic pool.
SQL elastic pools relate to eDTUs. They enable you to buy a set of compute and storage
resources that are shared among all the databases in the pool. Each database can use
the resources they need, within the limits you set, depending on current load.
For your prototype, you won't need a SQL elastic pool because you need only one SQL
database.
What is collation?
Collation refers to the rules that sort and compare data. Collation helps you define
sorting rules when case sensitivity, accent marks, and other language characteristics are
important.
Because you don't have specific requirements around how data is sorted and compared,
you choose the default collation.
Important: Over time if you realize you need additional compute power to keep up with
demand, you can adjust performance options or even switch between the DTU and
vCore performance models.
# List the databases on the logical server
az sql db list
# Use jq to show just the name of each database
az sql db list | jq '[.[] | {name: .name}]'
You see two databases: master and Logistics. Logistics is your database. As in SQL Server,
master includes server metadata, such as sign-in accounts and system configuration settings.
1.
Who's responsible for performing software updates on your Azure SQL databases and the underlying
OS?
You are. It's up to you to periodically log in and install the latest security patches and updates.
Microsoft Azure. Azure manages the hardware, software updates, and OS patches for you.
Azure SQL databases are a Platform-as-a-Service (PaaS) offering. Azure manages the hardware, software
updates, and OS patches for you.
No one. Your database stays with its original OS and software configuration.
2.
A server that defines the logical rules that sort and compare data.
3.
Your Azure SQL database provides adequate storage and compute power. But you find that you need
additional IO throughput. Which performance model might you use?
DTU
vCore
vCore gives you greater control over what compute and storage resources you create and pay for. You
can increase IO throughput but keep the existing amount of compute and storage.
The pool resource requirements are set based on the overall needs of the group. The pool allows the
databases within the pool to share the allocated resources. SQL elastic pools are used to manage the
budget and performance of multiple SQL databases.
Depending on the performance tier, you can add up to 100 or 500 databases to a single pool.
Databases can be added using the Azure portal, the Azure CLI, or PowerShell.
When using the portal, you can add a new pool to an existing SQL server. Or you can create a new SQL
elastic pool resource and specify the server.
When using the CLI, call az sql db create and specify the pool name using the --elastic-pool-
name parameter. This command can move an existing database into the pool or create a new one if it
doesn't exist.
When using PowerShell, you can assign new databases to a pool using New-AzSqlDatabase and move
existing databases using Set-AzSqlDatabase.
You can add existing Azure SQL databases from your Azure SQL server into the pool or create new
databases. And you can mix service tiers within the same pool.
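Putting those options together, a minimal CLI sketch of creating a pool and placing a database in it (hypothetical names; current az releases expose the pool through the --elastic-pool parameter):
Azure CLI
# Create the elastic pool on an existing logical server
az sql elastic-pool create \
    --resource-group MyRG \
    --server myserver \
    --name MyPool
# Create a database directly in the pool (or move an existing one into it)
az sql db create \
    --resource-group MyRG \
    --server myserver \
    --name MyDb \
    --elastic-pool MyPool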
ADMIN_LOGIN="ServerAdmin"
RESOURCE_GROUP=learn-f53e7651-c7bc-4ff4-bd2b-85573e860915
SERVERNAME=FitnessSQLServer-$RANDOM
LOCATION=<location>
PASSWORD=<password>
az sql server create \
--name $SERVERNAME \
--resource-group $RESOURCE_GROUP \
--location $LOCATION \
--admin-user $ADMIN_LOGIN \
--admin-password $PASSWORD
az sql db create \
--resource-group $RESOURCE_GROUP \
--server $SERVERNAME \
--name FitnessParisDB
1.
In the post-migration phase, you validate that your data in the new system is accurate and complete,
matching the data in the original source system. Additionally, you can assess performance to ensure
data is returned in the times outlined in your requirements documentation.
Schema validation
Sync and cutover
2.
Data Migration Assistant assesses your existing database for any compatibility issues with Azure SQL
Database and generates reports with recommended fixes.
Although the online option looks attractive, there's a major downside: cost. The online option requires
creating a SQL Server instance that's based on the Premium price tier. This can become cost prohibitive,
especially when you don't need any of the features of the Premium tier except its support of online
migrations.
1.
The Premium model for Azure SQL Database is expensive. This cost can be a big obstacle to doing online
migrations.
2.
Try to do an offline migration first to see if it will run in an acceptable time frame that doesn't incur the
cost of the Premium database tier.
3.
Azure Database Migration Service is used to perform both offline and online data migrations.
SQL Database currently supports three deployment options: single, elastic pool, and managed instance.
We'll focus on the single-database deployment option.
If Active Directory single sign-on is enabled, you can connect by using your Azure identity.
mv ~/education/data ~/educationdata
cd ~/educationdata
ls
Run the bcp utility to create a format file from the schema of the Courses table in the
database. The format file specifies that the data will be in character format (-c) and
separated by commas (-t,).
Bash
bcp "$DATABASE_NAME.dbo.courses" format nul -c -t ',' -f courses.fmt -S "$DATABASE_SERVER.database.windows.net" -U $AZURE_USER -P $AZURE_PASSWORD
Bash
code courses.fmt
text
14.0
2
1 SQLCHAR 0 12 "," 1 CourseID ""
2 SQLCHAR 0 50 "\n" 2 CourseName SQL_Latin1_General_CP1_CI_AS
Review the file. The data in the first column of the comma-separated file will go
into the CourseID column of the Courses table. The second field will go into the
CourseName column. The second column is character-based and has a collation
that's associated with it. The field separator in the file is expected to be a
comma. The row terminator (after the second field) should be a newline
character. In a real-world scenario, your data might not be organized this neatly.
You might have different field separators and fields in a different order from the
columns. In that situation, you can edit the format file to change these items on a
field-by-field basis. Press Ctrl+q to close the editor.
Run the following command to import the data in the courses.csv file in the
format that's specified by the amended courses.fmt file. The -F 2 flag directs
the bcp utility to start importing data from line 2 in the data file. The first line
contains headers.
Bash
bcp "$DATABASE_NAME.dbo.courses" in courses.csv -f courses.fmt -S
"$DATABASE_SERVER.database.windows.net" -U $AZURE_USER -P
$AZURE_PASSWORD -F 2
Verify that the bcp utility imports 9 rows and doesn't report any errors.
Run the following sequence of operations to import the data for the
dbo.Modules table from the modules.csv file.
1. Generate a format file (modules.fmt), following the same pattern as for courses.fmt.
2. Import the data:
Bash
bcp "$DATABASE_NAME.dbo.modules" in modules.csv -f modules.fmt -S "$DATABASE_SERVER.database.windows.net" -U $AZURE_USER -P $AZURE_PASSWORD -F 2
Perform the following sequence of operations to import the data for the
dbo.StudyPlans table from the studyplans.csv file.
1. Generate a format file (studyplans.fmt), following the same pattern as for courses.fmt.
2. Import the data:
Bash
bcp "$DATABASE_NAME.dbo.studyplans" in studyplans.csv -f studyplans.fmt -S "$DATABASE_SERVER.database.windows.net" -U $AZURE_USER -P $AZURE_PASSWORD -F 2
Verify that this command imports 45 rows.
Bash
cat studyplans.fmt
14.0
3
Below is a .NET connection string:
Server=tcp:myserver.database.windows.net,1433;Initial Catalog=mydatabase;Persist Security Info=False;User ID=myusername;Password=mypassword;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;
az webapp up \
--resource-group learn-016c467c-a17c-482d-9c1c-147bd0f301f8 \
--name $WEBAPPNAME
We'll start by learning about request units and how to estimate throughput
requirements.
In Azure Cosmos DB, you provision throughput for your containers to run writes, reads,
updates, and deletes. You can provision throughput for an entire database and have it
shared among containers within the database. You can also provision throughput
dedicated to specific containers.
The number of request units consumed for an operation changes depending on the document size, the
number of properties in the document, the operation being performed, and some additional concepts
such as consistency and indexing policy.
You provision the number of RUs on a per-second basis and you can change the value at any time in
increments or decrements of 100 RUs. You can make your changes either programmatically or by using
the Azure portal. You're billed on an hourly basis.
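For example, a container's provisioned throughput can be changed at any time from the CLI (a minimal sketch with hypothetical account, database, and container names):
Azure CLI
# Change the container's provisioned throughput to 500 RU/s
az cosmosdb sql container throughput update \
    --account-name myaccount \
    --resource-group MyRG \
    --database-name Products \
    --name Clothing \
    --throughput 500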
Script usage: As with queries, stored procedures and triggers consume RUs based on the complexity of
their operations. As you develop your application, inspect the request charge header to better
understand how much RU capacity each operation consumes.
The .NET SDK will automatically retry your request after waiting the amount of time specified in the
retry-after header.
True or false: The number of RUs used for a given database operation over the same data varies
over time.
True
False
Azure Cosmos DB ensures that the number of RUs for a given database operation over a given
dataset is deterministic.
2.
Which of the following options affects the number of request units it takes to write a document?
Indexing policy
All the three options (size of the document, Item property count, and Indexing policy) are
considered when provisioning request units.
3.
Which of the following statements is false about Request Units (RUs) in Azure Cosmos DB?
The cost to read a 1 KB item is approximately one Request Unit (or 1 RU).
Once you set the number of request units, it's impossible to modify this number.
You can increase or decrease the number of request units provisioned to a container or a
database.
If you provision 'R' RUs on an Azure Cosmos container (or a database), Azure Cosmos DB
ensures that 'R' RUs are available in each region associated with your account.
What is a partition strategy?
If you continue to add new data to a single server or a single partition, it will eventually run out
of space. A partitioning strategy enables you to add more partitions to your database when you
need them. This scaling strategy is called scale-out or horizontal scaling.
A partition key defines the partition strategy; it's set when you create a container and can't be
changed. Selecting the right partition key is an important decision to make early in your
development process.
In this unit, you'll learn how to choose a partition key that's right for your scenario, which will
enable you to take advantage of Azure Cosmos DB autoscaling.
The storage space for the data associated with each partition key can't exceed 20 GB, which is the size of
one logical partition in Azure Cosmos DB. So, if the data for a single userID or productId value is going to be
larger than 20 GB, think about using a composite key instead so that each record is smaller. An example
of a composite key would be userID-date, which would look like CustomerName-08072018. This
composite key approach would enable you to create a new partition for each day a user visited the site.
Best practices
When you're trying to determine the right partition key and the solution isn't obvious, here are a
few tips to keep in mind.
Don't be afraid of choosing a partition key that has a large number of values. The more
values your partition key has, the more scalability you have.
To determine the best partition key for a read-heavy workload, review the top three to
five queries you plan on using. The value most frequently included in the WHERE clause
is a good candidate for the partition key.
For write-heavy workloads, you'll need to understand the transactional needs of your
workload, because the partition key is the scope of multi-document transactions.
For each Azure Cosmos DB container, you should specify a partition key that satisfies the
following core properties:
Have a high cardinality. This option allows data to distribute evenly across all physical partitions.
Evenly distribute requests. Remember the total number of RU/s is evenly divided across all
physical partitions.
Evenly distribute storage. Each partition can grow up to 20 GB in size.
In the next two exercises, you will create a database and container. In the first exercise, you will
use the Azure portal to create your database and container. However, if you would prefer to learn
how to create a database and container programmatically, you can skip ahead to the next
exercise.
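For the programmatic route, a minimal CLI sketch (hypothetical names; the /productId partition key path is an assumption for this catalog scenario):
Azure CLI
# Create the container with its (immutable) partition key
az cosmosdb sql container create \
    --account-name myaccount \
    --resource-group MyRG \
    --database-name Products \
    --name Clothing \
    --partition-key-path "/productId" \
    --throughput 400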
1.
True or false: You can add a partition key to an Azure Cosmos DB container after it has been
created.
True
False
You can set the partition key only when the container is created.
2.
Your organization is planning to use Azure Cosmos DB to store vehicle telemetry data generated
from millions of vehicles every second. Which of the following options for your Partition Key
will optimize storage distribution?
Vehicle Model
Auto manufacturers have transactions occurring throughout the year. This option will create a
more balanced distribution of storage across partition key values.
In this exercise, you'll use the Azure portal to create an Azure Cosmos DB database named "Products"
with a container named "Clothing", and set your partition key and throughput value.
cd myApp
dotnet add package Microsoft.Azure.Cosmos --version 3.0.0
dotnet restore
dotnet build
code .
You can use a framework of Assess, Migrate, Optimize, and Monitor as a path for migration.
Each stage focuses on a particular aspect of ensuring the success of a migration.