Azure


Ruby on Rails web service hosted on a pool of AWS EC2 virtual machines in an Auto Scaling group behind a public load balancer. The web tier is integrated with a middle tier that resides on premises via APIs. The application uses PostgreSQL (RDS) in Multi-AZ mode to handle static and configuration data for fast page rendering. In addition, an ElastiCache Redis instance is used as part of a Ruby gem. Static content for this service is served via CloudFront, with S3 as the data store for the content.

Worked on a POC for setting up a data lake analytics platform using Azure Data Factory (data ingestion), Azure Databricks (an Apache Spark-based analytics platform), Azure Data Lake Storage Gen2, and Power BI.

What is Azure compute?


There are four common techniques for performing compute in Azure:

 Virtual machines
 Containers
 Azure App Service
 Serverless computing

Virtual machines are a good choice when you need:

 Total control over the operating system (OS)
 The ability to run custom software, or
 To use custom hosting configurations

Scaling VMs in Azure

 Availability sets
 Virtual Machine Scale Sets
 Azure Batch

With an availability set, you get:

 Up to three fault domains that each have a server rack with dedicated power and
network resources
 Five logical update domains, which can be increased to a maximum of 20
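
As a sketch of how this looks in practice, the following CLI commands create an availability set with those domain counts and place a new VM in it (the resource group, set, and VM names are hypothetical):

az vm availability-set create \
  --resource-group MyResourceGroup \
  --name MyAvailabilitySet \
  --platform-fault-domain-count 3 \
  --platform-update-domain-count 5

az vm create \
  --resource-group MyResourceGroup \
  --name MyVM \
  --image UbuntuLTS \
  --availability-set MyAvailabilitySet \
  --generate-ssh-keys
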
What is Azure Batch?

Azure Batch enables large-scale job scheduling and compute management with the
ability to scale to tens, hundreds, or thousands of VMs.

When you're ready to run a job, Batch does the following:

 Starts a pool of compute VMs for you


 Installs applications and staging data
 Runs jobs with as many tasks as you have
 Identifies failures
 Requeues work
 Scales down the pool as work completes
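
A minimal sketch of that workflow with the Azure CLI (the account, pool, job, and task names are hypothetical, and the exact image and VM-size values vary):

az batch account create --name mybatchaccount --resource-group MyResourceGroup --location westeurope

az batch account login --name mybatchaccount --resource-group MyResourceGroup --shared-key-auth

az batch pool create --id mypool --vm-size Standard_A1_v2 --target-dedicated-nodes 3 \
  --image canonical:ubuntuserver:18.04-lts --node-agent-sku-id "batch.node.ubuntu 18.04"

az batch job create --id myjob --pool-id mypool

az batch task create --job-id myjob --task-id task1 --command-line "echo 'hello from Batch'"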

There may be situations in which you need raw computing power or supercomputer
level compute power. Azure provides these capabilities.

In this module, you'll learn about using Azure Batch to create and run parallel tasks with the
Azure CLI, and how to use the CLI to check the status of Batch jobs and tasks. This module also
describes how to use the standalone Batch Explorer tool to monitor ongoing jobs.

https://docs.microsoft.com/en-us/learn/modules/run-parallel-tasks-in-azure-batch-with-the-azure-cli/8-knowledge-check

To create a starter web application, we'll use Maven, a commonly used project
management and build tool for Java apps. Maven includes a feature
called archetypes that can quickly create starter code for different kinds of applications.
We can use the maven-archetype-webapp template to generate the code for a simple web
app that displays "Hello World!" on its homepage.

Run these commands in the Cloud Shell now to create a new web app:

cd ~

mvn archetype:generate -DgroupId=example.demo -DartifactId=helloworld -DinteractiveMode=false -DarchetypeArtifactId=maven-archetype-webapp
Now, run these commands to change to the new "helloworld" application directory and package
the application for deployment:

cd helloworld

mvn package

When the command finishes running, if you change to the target directory and run ls, you'll
see a file listed called helloworld.war. This is the web application package that we will deploy to
App Service.

Deploy code to App Service



Now, let's see how we can deploy our application to App Service.

Automated deployment

Automated deployment, or continuous integration, is a process used to push out new features and bug fixes in a fast and repetitive pattern with minimal impact on end users.

Azure supports automated deployment directly from several sources. The following
options are available:

 Azure DevOps: You can push your code to Azure DevOps (previously known as Visual
Studio Team Services), build your code in the cloud, run the tests, generate a release from
the code, and finally, push your code to an Azure Web App.
 GitHub: Azure supports automated deployment directly from GitHub. When you connect
your GitHub repository to Azure for automated deployment, any changes you push to
your production branch on GitHub will be automatically deployed for you.
 Bitbucket: With its similarities to GitHub, you can configure an automated deployment
with Bitbucket.
 OneDrive: Microsoft's cloud-based storage. You must have a Microsoft Account linked to
a OneDrive account to deploy to Azure.
 Dropbox: Azure supports deployment from Dropbox, which is a popular cloud-based
storage system that is similar to OneDrive.
Manual deployment

There are a few options that you can use to manually push your code to Azure:

 Git: App Service web apps feature a Git URL that you can add as a remote repository.
Pushing to the remote repository will deploy your app.
 az webapp up: webapp up is a feature of the az command-line interface that packages
your app and deploys it. Unlike other deployment methods, az webapp up can create a
new App Service web app for you if you haven't already created one.
 Zipdeploy: Use az webapp deployment source config-zip to send a ZIP of your
application files to App Service. Zipdeploy can also be accessed via basic HTTP utilities
such as curl.
 Visual Studio: Visual Studio features an App Service deployment wizard that can walk you
through the deployment process.
 FTP/S: FTP or FTPS is a traditional way of pushing your code to many hosting
environments, including App Service.
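
As an illustration of the az webapp up option above, a minimal hedged sketch (the app name is hypothetical and must be globally unique):

cd helloworld
az webapp up --name my-unique-helloworld --sku F1 --location westeurope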


Configure deployment credentials

Some App Service deployment techniques, including the one we'll use here, require a
username and password that are separate from your Azure login. Every web app comes
preconfigured with its own username and a password that can be reset to a new
random value, but can't be changed to something you choose.

Instead of finding those credentials for each one of your apps and storing them
somewhere, you can use an App Service feature called User Deployment Credentials to
create your own username and password. The values you choose will work for
deployments on all App Service web apps that you have permissions to, including new
web apps that you create in the future. The username and password you select are tied
to your Azure login and intended only for your use, so don't share them with others.
You can change both the username and password at any time.

The easiest way to create deployment credentials is from the Azure CLI. Run the
following command in the Cloud Shell to set them up,
substituting [username] and [password] with values you choose.

Azure CLI

az webapp deployment user set --user-name [username] --password [password]


Deploy the application package with wardeploy

Wardeploy is an App Service deployment mechanism that is specifically designed for deploying WAR web application package files to Java web apps. Wardeploy is part of
the Kudu REST API: an administrative service interface, available on all App Service web
apps, that can be accessed over HTTP. The simplest way to use wardeploy is with
the curl HTTP utility from the command line.

Run the following commands to deploy your app with wardeploy. Replace [username] and [password] with the Deployment User username and password you created above, and replace [web_app_name] with the name of your web app.

Console

cd ~/helloworld/target
curl -v -X POST -u [username]:[password] https://[web_app_name].scm.azurewebsites.net/api/wardeploy --data-binary @helloworld.war

When the command finishes running, open a new browser tab and navigate
to https://[web_app_name].azurewebsites.net . You'll see the greeting message from your
app — you've deployed successfully!
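
The following saved snippets capture the first web app's name, resource group, and App Service plan details into shell variables for later scripting; the final command exports the current Python environment's dependencies: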

APPNAME=$(az webapp list --query [0].name --output tsv)

APPRG=$(az webapp list --query [0].resourceGroup --output tsv)

APPPLAN=$(az appservice plan list --query [0].name --output tsv)

APPSKU=$(az appservice plan list --query [0].sku.name --output tsv)

APPLOCATION=$(az appservice plan list --query [0].location --output tsv)

pip freeze > requirements.txt


Downloading and installing Python, Jupyter Notebook and Turi Create on your
own machine

1. If you do not already have Python installed, download and install Python 3.7: https://www.python.org/downloads/.
2. Download and install Jupyter Notebook: http://jupyter.org/install. Follow the instructions for "Installing Jupyter with pip" and use the commands under the section for Python 3.
3. Download and install Turi Create: https://github.com/apple/turicreate#installation. Note: it is not required that you use virtualenv, but it might be helpful, especially if you run into installation issues due to conflicting versions of software.

Architect network infrastructure in Azure


The virtual private network (VPN) gateway options in Azure.

Connect on-premises networks to Azure by using site-to-site VPN gateways

Azure VPN gateways


A VPN gateway is a type of Virtual Network Gateway. VPN gateways are deployed in Azure virtual
networks and enable the following connectivity:

 Connect on-premises datacenters to Azure virtual networks through a site-to-site connection.
 Connect individual devices to Azure virtual networks through a point-to-site connection.
 Connect Azure virtual networks to other Azure virtual networks through a network-to-network
connection.

When you deploy a VPN gateway, you specify the VPN type: either policy-based or route-based. The main difference between these two types is how the traffic to be encrypted is specified.

Policy-based VPNs
Policy-based VPN gateways statically specify the IP addresses of packets that should be encrypted through each tunnel. This type of device evaluates every data packet against those sets of IP addresses to choose the tunnel that packet will be sent through. Key features of policy-based VPN gateways in Azure include:

 Support for IKEv1 only.
 Use of static routing, where combinations of address prefixes from both networks control how
traffic is encrypted and decrypted through the VPN tunnel. The source and destination of the
tunneled networks are declared in the policy and don't need to be declared in routing tables.
 Policy-based VPNs must be used in specific scenarios that require them, such as for compatibility
with legacy on-premises VPN devices.
Route-based VPNs
If defining which IP addresses are behind each tunnel is too cumbersome, route-based gateways can be
used. With route-based gateways, IPSec tunnels are modeled as a network interface or VTI (virtual
tunnel interface). IP routing (static routes or dynamic routing protocols) decides across which one of
these tunnel interfaces to send each packet. Route-based VPNs are the preferred connection method
for on-premises devices, since they are more resilient to topology changes such as the creation of new
subnets, for example. Use a route-based VPN gateway if you need any of the following types of
connectivity:

 Connections between virtual networks
 Point-to-site connections
 Multisite connections
 Coexistence with an Azure ExpressRoute gateway

Key features of route-based VPN gateways in Azure include:

 Supports IKEv2.
 Uses any-to-any (wildcard) traffic selectors.
 Can use dynamic routing protocols, where routing/forwarding tables direct traffic to different
IPSec tunnels. In this case, the source and destination networks are not statically defined as they
are in policy-based VPNs or even in route-based VPNs with static routing. Instead, data packets
are encrypted based on network routing tables that are created dynamically using routing
protocols such as BGP (Border Gateway Protocol).

Both types of VPN gateways (route-based and policy-based) in Azure use pre-shared key as the only
method of authentication. Both types also rely on Internet Key Exchange (IKE) in either version 1 or
version 2 and Internet Protocol Security (IPSec). IKE is used to set up a security association (an
agreement of the encryption) between two endpoints. This association is then passed to the IPSec suite,
which encrypts and decrypts data packets encapsulated in the VPN tunnel.

Note: The Basic VPN gateway SKU should be used only for dev/test workloads. In addition, migrating from Basic to the VpnGw1/2/3/AZ SKUs at a later time is unsupported; you have to remove the gateway and redeploy it.
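
As a sketch, creating a route-based VPN gateway with the CLI looks roughly like this (names are hypothetical; it assumes a virtual network named MyVNet with a GatewaySubnet already exists, and gateway deployment can take 30 or more minutes):

az network public-ip create --resource-group MyResourceGroup --name MyGatewayIP

az network vnet-gateway create --resource-group MyResourceGroup --name MyVpnGateway \
  --vnet MyVNet --public-ip-address MyGatewayIP \
  --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1 --no-wait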

Across on-premises connectivity with ExpressRoute Global Reach


You can enable ExpressRoute Global Reach to exchange data across your on-premises sites by
connecting your ExpressRoute circuits. For example, assume that you have a private datacenter in
California connected to ExpressRoute in Silicon Valley. You have another private datacenter in Texas
connected to ExpressRoute in Dallas. With ExpressRoute Global Reach, you can connect your private
datacenters through two ExpressRoute circuits. Your cross-datacenter traffic will travel through the
Microsoft network.
ExpressRoute connectivity models
ExpressRoute supports three models that you can use to connect your on-premises network to the
Microsoft cloud:

 CloudExchange co-location
 Point-to-point Ethernet connection
 Any-to-any connection

Security considerations
With ExpressRoute, your data doesn’t travel over the public internet, so it's not exposed to the potential
risks associated with internet communications. ExpressRoute is a private connection from your on-
premises infrastructure to your Azure infrastructure. Even if you have an ExpressRoute connection, DNS
queries, certificate revocation list checking, and Azure Content Delivery Network requests are still sent
over the public internet.

1.

What is the Azure ExpressRoute service?

It's a service that provides a VPN connection between on-premises and the Microsoft cloud.

It's a service that encrypts your data in transit.

It's a service that provides a direct connection from your on-premises datacenter to the Microsoft cloud.

This answer is correct.

It's a service that provides a site-to-site VPN connection between your on-premises network and the
Microsoft cloud.

2.

Which of the following is not a benefit of ExpressRoute?

Redundant connectivity

Consistent network throughput

Encrypted network communication

Correct. ExpressRoute does provide private connectivity, but it isn't encrypted.

Access to Microsoft cloud services

Prerequisites for ExpressRoute


Before you can connect to Microsoft cloud services by using ExpressRoute, you need to have:
 An ExpressRoute connectivity partner or cloud exchange provider that can set up a connection
from your on-premises networks to the Microsoft cloud.
 An Azure subscription that is registered with your chosen ExpressRoute connectivity partner.
 An active Microsoft Azure account that can be used to request an ExpressRoute circuit.
 An active Office 365 subscription, if you want to connect to the Microsoft cloud and access
Office 365 services.

ExpressRoute works by peering your on-premises networks with networks running in the Microsoft
cloud. Resources on your networks can communicate directly with resources hosted by Microsoft. To
support these peerings, ExpressRoute has a number of network and routing requirements:

 Ensure that BGP sessions for routing domains have been configured. Depending on your
partner, this might be their or your responsibility. Additionally, for each ExpressRoute circuit,
Microsoft requires redundant BGP sessions between Microsoft’s routers and your peering
routers.
 You or your providers need to translate the private IP addresses used on-premises to public IP
addresses by using a NAT service. Microsoft will reject anything except public IP addresses
through Microsoft peering.
 Reserve several blocks of IP addresses in your network for routing traffic to the Microsoft cloud.
You configure these blocks as either a /29 subnet or two /30 subnets in your IP address space.
One of these subnets is used to configure the primary circuit to the Microsoft cloud, and the
other implements a secondary circuit. You use the first address in these subnets to
communicate with services in the Microsoft cloud. Microsoft uses the second address to
establish a BGP session.

ExpressRoute supports two peering schemes:

 Use private peering to connect to Azure IaaS and PaaS services deployed inside Azure virtual
networks. The resources that you access must all be located in one or more Azure virtual
networks with private IP addresses. You can't access resources through their public IP address
over a private peering.
 Use Microsoft peering to connect to Azure PaaS services, Office 365 services, and Dynamics 365.

Create a peering configuration


After the provider status is reported as Provisioned, you can configure the routing for the peerings.
These steps apply only to circuits that are created with service providers who offer Layer 2 connectivity.
For any circuits that operate at Layer 3, the provider might be able to configure the routing for you.

The ExpressRoute circuit page (shown earlier) lists each peering and its properties. You can select a
peering to configure these properties.

Configure private peering


You use private peering to connect your network to your virtual networks running in Azure. To configure
private peering, you must provide the following information:

 Peer ASN. The autonomous system number for your side of the peering. This ASN can be public
or private, and 16 bits or 32 bits.
 Primary subnet. This is the address range of the primary /30 subnet that you created in your
network. You'll use the first IP address in this subnet for your router. Microsoft uses the second
for its router.
 Secondary subnet. This is the address range of your secondary /30 subnet. This subnet provides
a secondary link to Microsoft. The first two addresses are used to hold the IP address of your
router and the Microsoft router.
 VLAN ID. This is the VLAN on which to establish the peering. The primary and secondary links
will both use this VLAN ID.
 Shared key. This is an optional MD5 hash that's used to encode messages passing over the
circuit.
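
A hedged CLI sketch of supplying these values (the circuit name, ASN, subnets, and VLAN ID are all hypothetical):

az network express-route peering create \
  --resource-group MyResourceGroup \
  --circuit-name MyCircuit \
  --peering-type AzurePrivatePeering \
  --peer-asn 65010 \
  --primary-peer-subnet 192.168.15.0/30 \
  --secondary-peer-subnet 192.168.15.4/30 \
  --vlan-id 100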

Configure Microsoft peering


You use Microsoft peering to connect to Office 365 and its associated services. To configure Microsoft
peering, you provide a peer ASN, a primary subnet address range, a secondary subnet address range, a
VLAN ID, and an optional shared key as described for a private peering. You must also provide the
following information:

 Advertised public prefixes. This is a list of the address prefixes that you use over the BGP
session. These prefixes must be registered to you, and must be prefixes for public address
ranges.
 Customer ASN. This is optional. It's the client-side autonomous system number to use if you are
advertising prefixes that aren't registered to the peer ASN.
 Routing registry name. This name identifies the registry in which the customer ASN and public
prefixes are registered.

Connect a virtual network to an ExpressRoute circuit


After the ExpressRoute circuit has been established, Azure private peering is configured for your circuit,
and the BGP session between your network and Microsoft is active, you can enable connectivity from
your on-premises network to Azure.

Before you can connect to a private circuit, you must create an Azure virtual network gateway by using a
subnet on one of your Azure virtual networks. The virtual network gateway provides the entry point to
network traffic that enters from your on-premises network. It directs incoming traffic through the virtual
network to your Azure resources.

You can configure network security groups and firewall rules to control the traffic that's routed from your on-premises network. You can also block requests from unauthorized addresses in your on-premises network.

Up to 10 virtual networks can be linked to an ExpressRoute circuit, but these virtual networks must be in
the same geopolitical region as the ExpressRoute circuit. You can link a single virtual network to four
ExpressRoute circuits if necessary. The ExpressRoute circuit can be in the same subscription as the virtual network, or in a different one.

If you're using the Azure portal, you connect a peering to a virtual network gateway as follows:

1. On the ExpressRoute circuit page for your circuit, select Connections.
2. On the Connections page, select Add.
3. On the Add connection page, give your connection a name, and then select your virtual network
gateway. When the operation has finished, your on-premises network will be connected
through the virtual network gateway to your virtual network in Azure. The connection will be
made across the ExpressRoute connection.
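
The CLI equivalent is roughly the following (it assumes an ExpressRoute virtual network gateway named MyErGateway already exists; all names are hypothetical):

az network vpn-connection create \
  --resource-group MyResourceGroup \
  --name MyErConnection \
  --vnet-gateway1 MyErGateway \
  --express-route-circuit2 MyCircuit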

ExpressRoute Direct and FastPath


Microsoft also provides an ultra-high-speed option called ExpressRoute Direct. This service enables dual
100-Gbps connectivity. It's suitable for scenarios that involve massive and frequent data ingestion. It's
also suitable for solutions that require extreme scalability, such as banking, government, and retail.

You enroll your subscription with Microsoft to activate ExpressRoute Direct. For more information, visit
the ExpressRoute article in the "Learn more" section at the end of this module.

ExpressRoute Direct supports FastPath. When FastPath is enabled, it sends network traffic directly to a
virtual machine that's the intended destination. The traffic bypasses the virtual network gateway,
improving the performance between Azure virtual networks and on-premises networks.

FastPath doesn't support virtual network peering (where you have virtual networks connected
together). It also doesn't support user-defined routes on the gateway subnet.

Availability and connectivity


Microsoft guarantees a minimum of 99.95 percent availability for an ExpressRoute dedicated circuit.

With ExpressRoute enabled, you can connect to Microsoft through one of several peering connections
and have access to regions within the same geopolitical region. For example, if you connect to Microsoft
through ExpressRoute in France, you'll have access to all Microsoft services hosted in Western Europe.

You can also enable ExpressRoute Premium, which provides cross-region accessibility. For example, if
you access Microsoft through ExpressRoute in Germany, you'll have access to all Microsoft cloud
services in all regions globally.

You can also take advantage of a feature called ExpressRoute Global Reach. It allows you to exchange
data across all of your on-premises datacenters by connecting all of your ExpressRoute circuits.

Point-to-site is useful if you have only a few clients that need to connect to a virtual network.

Secure and isolate access to Azure resources by using network security groups and service endpoints
 Identify the capabilities and features of network security groups.
 Identify the capabilities and features of virtual network service endpoints.
 Use network security groups to restrict network connectivity.
 Use virtual network service endpoints to control network traffic to and from Azure services.

Default security rules


When you create a network security group, Azure creates several default rules. These default rules can't
be changed, but can be overridden with your own rules. These default rules allow connectivity within a
virtual network and from Azure load balancers. They also allow outbound communication to the
internet, and deny inbound traffic from the internet.

The default rules for inbound traffic are:

 65000 AllowVnetInBound: Allow inbound traffic coming from any VM to any VM within the subnet.
 65001 AllowAzureLoadBalancerInBound: Allow traffic from the default load balancer to any VM within the subnet.
 65500 DenyAllInBound: Deny traffic from any external source to any of the VMs.
The default rules for outbound traffic are:

 65000 AllowVnetOutBound: Allow outbound traffic going from any VM to any VM within the subnet.
 65001 AllowInternetOutBound: Allow outbound traffic going to the internet from any VM.
 65500 DenyAllOutBound: Deny traffic from any internal VM to a system outside the virtual network.
Augmented security rules
You use augmented security rules for network security groups to simplify the management of large
numbers of rules. Augmented security rules also help when you need to implement more complex
network sets of rules. Augmented rules let you add the following options into a single security rule:

 multiple IP addresses
 multiple ports
 service tags
 application security groups

Suppose your company wants to restrict access to resources in your datacenter, spread across several
network address ranges. With augmented rules, you can add all these ranges into a single rule, reducing
the administrative overhead and complexity in your network security groups.
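
A sketch of such an augmented rule with the CLI, combining several source address ranges and ports into one rule (the NSG name and address ranges are hypothetical):

az network nsg rule create \
  --resource-group MyResourceGroup \
  --nsg-name MyNsg \
  --name AllowDatacenterWeb \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 10.10.0.0/16 10.20.0.0/16 172.16.5.0/24 \
  --destination-port-ranges 80 443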

You can't create or delete system routes. But you can override the system routes by adding custom
routes to control traffic flow to the next hop.

The Next hop type column shows the network path taken by traffic sent to each address prefix. The path
can be one of the following hop types:

 Virtual network: A route is created for each address prefix. The prefix represents each address range created at the virtual-network level. If multiple address ranges are specified, a separate route is created for each address range.
 Internet: The default system route 0.0.0.0/0 routes any address range to the internet, unless
you override Azure's default route with a custom route.
 None: Any traffic routed to this hop type is dropped and doesn't get routed outside the subnet.
By default, the following IPv4 private-address prefixes are created: 10.0.0.0/8, 172.16.0.0/12,
and 192.168.0.0/16. The prefix 100.64.0.0/10 for a shared address space is also added. None of
these address ranges are globally routable.

Within Azure, there are additional system routes. Azure will create these routes if the following
capabilities are enabled:

 Virtual network peering
 Service chaining
 Virtual network gateway
 Virtual network service endpoint

Custom routes
System routes might make it easy for you to quickly get your environment up and running. But there are
many scenarios in which you'll want to more closely control the traffic flow within your network. For
example, you might want to route traffic through an NVA or through a firewall from partners and others.
This control is possible with custom routes.

You have two options for implementing custom routes: create a user-defined route or use Border
Gateway Protocol (BGP) to exchange routes between Azure and on-premises networks.
User-defined routes
You use a user-defined route to override the default system routes so that traffic can be routed through
firewalls or NVAs.

For example, you might have a network with two subnets and want to add a virtual machine in the
perimeter network to be used as a firewall. You create a user-defined route so that traffic passes
through the firewall and doesn't go directly between the subnets.

When creating user-defined routes, you can specify these next hop types:

 Virtual appliance: A virtual appliance is typically a firewall device used to analyze or filter traffic
that is entering or leaving your network. You can specify the private IP address of a NIC attached
to a virtual machine so that IP forwarding can be enabled. Or you can provide the private IP
address of an internal load balancer.
 Virtual network gateway: Use to indicate when you want routes for a specific address to be
routed to a virtual network gateway. The virtual network gateway is specified as a VPN for the
next hop type.
 Virtual network: Use to override the default system route within a virtual network.
 Internet: Use to route traffic to a specified address prefix that is routed to the internet.
 None: Use to drop traffic sent to a specified address prefix.
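
A minimal sketch of the perimeter-network example above, routing a subnet's traffic through an NVA (the names and the NVA's private IP address are hypothetical):

az network route-table create --resource-group MyResourceGroup --name MyRouteTable

az network route-table route create \
  --resource-group MyResourceGroup \
  --route-table-name MyRouteTable \
  --name ToFirewall \
  --address-prefix 10.0.1.0/24 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.2.4

az network vnet subnet update \
  --resource-group MyResourceGroup \
  --vnet-name MyVNet \
  --name FrontendSubnet \
  --route-table MyRouteTable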

With user-defined routes, you can't specify the next hop types VirtualNetworkServiceEndpoint or VirtualNetworkPeering; Azure creates these routes automatically when you enable service endpoints or virtual network peering.

Route selection and priority


If multiple routes are available in a route table, Azure uses the route with the longest prefix match. For
example, if a message is sent to the IP address 10.0.0.2, but two routes are available with the
10.0.0.0/16 and 10.0.0.0/24 prefixes, Azure selects the route with the 10.0.0.0/24 prefix because it's
more specific.

The longer the route prefix, the shorter the list of IP addresses available through that prefix. By using
longer prefixes, the routing algorithm can select the intended address more quickly.

You can't configure multiple user-defined routes with the same address prefix.

If multiple routes share the same address prefix, Azure selects the route based on its type in the
following order of priority:

1. User-defined routes
2. BGP routes
3. System routes

Basic properties of Azure virtual networks


A virtual network is your network in the cloud. You can divide your virtual network into multiple
subnets. Each subnet has a portion of the IP address space that is assigned to your virtual network. You
can add, remove, expand, or shrink a subnet if there are no VMs or services deployed in it.
By default, all subnets in an Azure virtual network can communicate with each other. However, you can
use a network security group to deny communication between subnets. The smallest subnet that is
supported uses a /29 subnet mask. The largest supported subnet uses a /8 subnet mask.

At the end of this module, you'll be able to:

 Identify the solutions available to provision compute on Azure.
 Choose an appropriate provisioning platform based on your requirements.

ARM template sections

"$schema":
"http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",

"contentVersion": "",

"parameters": { },

"variables": { },

"functions": [ ],

"resources": [ ],

"outputs": { }

Parameters

"parameters": {

"adminUsername": {

"type": "string",

"metadata": {

"description": "Username for the Virtual Machine."

},

"adminPassword": {

"type": "securestring",

"metadata": {
"description": "Password for the Virtual Machine."

Variables

"variables": {

"nicName": "myVMNic",

"addressPrefix": "10.0.0.0/16",

"subnetName": "Subnet",

"subnetPrefix": "10.0.0.0/24",

"publicIPAddressName": "myPublicIP",

"virtualNetworkName": "MyVNET"

Functions

"functions": [

"namespace": "contoso",

"members": {

"uniqueName": {

"parameters": [

"name": "namePrefix",

"type": "string"

],

"output": {
"type": "string",

"value": "[concat(toLower(parameters('namePrefix')), uniqueString(resourceGroup().id))]"

],

Resources

"resources": [

"type": "Microsoft.Network/publicIPAddresses",

"name": "[variables('publicIPAddressName')]",

"location": "[parameters('location')]",

"apiVersion": "2018-08-01",

"properties": {

"publicIPAllocationMethod": "Dynamic",

"dnsSettings": {

"domainNameLabel": "[parameters('dnsLabelPrefix')]"

],

Outputs
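
A minimal outputs sketch in the same style (the hostname output shown here is a hypothetical example that returns the public IP's FQDN):

"outputs": {
  "hostname": {
    "type": "string",
    "value": "[reference(variables('publicIPAddressName')).dnsSettings.fqdn]"
  }
}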
In this module, you'll:

 Identify the features and capabilities of virtual machine scale sets.
 Identify the use cases for running applications on virtual machine scale sets.
 Deploy an application on a virtual machine scale set.

Reducing costs by using low-priority scale sets
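
A hedged sketch of creating a scale set from cheaper, evictable capacity (the CLI now surfaces low-priority VMs as Spot; names are hypothetical, and Spot instances can be evicted when Azure needs the capacity back):

az vmss create \
  --resource-group MyResourceGroup \
  --name MyScaleSet \
  --image UbuntuLTS \
  --instance-count 2 \
  --priority Spot \
  --max-price -1 \
  --generate-ssh-keys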

Microsoft Azure provides several different ways to host and execute code or workflows
without using Virtual Machines (VMs) including Azure Functions, Microsoft Power
Automate, Azure Logic Apps, and Azure WebJobs. In this module, you will learn about
these technologies and how to choose the right one for a given scenario.

Business processes modeled in software are often called workflows. Azure includes four
different technologies that you can use to build and implement workflows that integrate
multiple systems:

 Logic Apps
 Microsoft Power Automate
 WebJobs
 Azure Functions

Design-first technologies
When business analysts discuss and plan a business process, they may draw a flow
diagram on paper. With Logic Apps and Microsoft Power Automate, you can take a
similar approach to designing a workflow. They both include user interfaces in which
you can draw out the workflow. We call this approach a design-first approach.

Microsoft Power Automate is a service that you can use to create workflows even when
you have no development or IT Pro experience. You can create workflows that integrate
and orchestrate many different components by using the website or the Microsoft
Power Automate mobile app.

There are four different types of flow that you can create:

 Automated: A flow that is started by a trigger from some event. For example, the
event could be the arrival of a new tweet or a new file being uploaded.
 Button: Use a button flow to run a repetitive task with a single click from your
mobile device.
 Scheduled: A flow that executes on a regular basis such as once a week, on a
specific date, or after 10 hours.
 Business process: A flow that models a business process such as the stock
ordering process or the complaints procedure.

Code-first technologies
The developers on your team will likely prefer to write code when they want to
orchestrate and integrate different business applications into a single workflow. This is
the case when you need more control over the performance of your workflow or need
to write custom code as part of the business process. For such people, Azure includes
WebJobs and Functions.

WebJobs and the WebJobs SDK

WebJobs are a part of the Azure App Service that you can use to run a program or script
automatically. There are two kinds of WebJob:

 Continuous. These WebJobs run in a continuous loop. For example, you could
use a continuous WebJob to check a shared folder for a new photo.
 Triggered. These WebJobs run when you manually start them or on a schedule.

The WebJobs SDK only supports C# and the NuGet package manager.

When you create an Azure Function, you can start by writing the code for it in the portal.
Alternatively, if you need source code management, you can use GitHub or Azure
DevOps Services.
To create an Azure Function, choose from the range of templates. The following list is an
example of some of the templates available to you.

 HTTPTrigger. Use this template when you want the code to execute in response
to a request sent through the HTTP protocol.
 TimerTrigger. Use this template when you want the code to execute according
to a schedule.
 BlobTrigger. Use this template when you want the code to execute when a new
blob is added to an Azure Storage account.
 CosmosDBTrigger. Use this template when you want the code to execute in
response to new or updated documents in a NoSQL database.

Azure Functions can integrate with many different services both within Azure and from
third parties. These services can trigger your function, or send data input to your
function, or receive data output from your function.
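
As an illustration, creating a function app from the CLI might look like this (the app and storage account names are hypothetical, and the storage account must already exist):

az functionapp create \
  --resource-group MyResourceGroup \
  --name my-function-app \
  --storage-account mystorageaccount \
  --consumption-plan-location westeurope \
  --runtime dotnet \
  --functions-version 4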

There are several HPC and batch processing choices available on Azure. You talk with an
Azure expert who advises you to focus on three options: Azure Batch, Azure VM HPC
Instances, and Microsoft HPC Pack.
In a series of 100 tasks and 10 nodes, for example, Batch schedules the first 10 tasks
onto those 10 nodes. Batch immediately allocates later tasks when nodes finish
processing. For spiky workloads, you can configure scaling rules, which Batch also
handles automatically. If you provision 100 VMs with no Batch context, you must code
these scheduling and work allocation mechanisms by hand.

1.

You're trying to provision several H-series Azure VMs in the Azure portal to solve some
complex financial equations. How can you resolve the errors you are receiving?

Use the Azure Virtual Machines pricing detail page to ensure that the problem you want
to solve is supported by the kind of VM you're trying to deploy.

Tell Azure that your subscription needs to support a greater number of cores than is
allowed by default.
Because H-series VMs use large numbers of cores, you can quickly reach the limit
for your subscription. Open a support request to increase this limit.

Try again at a different time.

2.

You want to deploy an HB-series VM for a weather modeling startup. However, this type
of VM doesn't appear as an option in the portal. What should you check?

Check that your subscription is allowed to deploy VMs of this type.

Try again at a different time.

Check that this VM is supported in your preferred region of deployment.

HB-series VMs are not supported in all Azure regions.

Microsoft HPC Pack



If you need more flexible control of your high-performance infrastructure, or you want
to manage both cloud and on-premises VMs, consider using the Microsoft HPC Pack.

In researching options for the engineering organization, you've looked at Azure Batch
and Azure HPC Instances. But what if you want to have full control of the management
and scheduling of your clusters of VMs? What if you have significant investment in on-
premises infrastructure in your datacenter? HPC Pack offers a series of installers for Windows that allows you to configure your own control and management plane, and highly flexible deployments of on-premises and cloud nodes.

1.

You're trying to advise your organization on whether to choose Azure Batch or Microsoft HPC Pack. What might be a key factor in choosing HPC Pack?

There's a significant on-premises infrastructure that your organization doesn't want to waste. It can be used for HPC problems.

HPC Pack enables you to manage both on-premises and cloud infrastructure.

You don't want to be responsible for deciding how to optimize the way the HPC work
gets allocated.

There's no remaining licensing budget for the year.


2.

You're trying to set up an HPC Pack topology, starting with on-premises resources.
Which version of Windows Server should you use for the head node?

Windows Server 2012 R2 or later.

You can use Windows Server 2012 R2 or any later version for the head node.

Windows Server 2016 or later.

Windows Server 2019 or later.

By contrast with the exclusively cloud-based Batch, HPC Pack has the flexibility to deploy to on-premises and the cloud. It uses a hybrid of both to expand to the cloud when your on-premises reserves are insufficient.

Think of Microsoft HPC Pack as a version of the Batch management and scheduling
control layer, over which you have full control, and for which you have responsibility.
Deployment of HPC Pack requires Windows Server 2012 R2 or later, and takes careful
consideration to implement.

The underlying infrastructure improves as technology upgrades become available. Batch also lets you use some of the most important 3D rendering packages, like Maya, 3D Studio Max, and Chaos V-Ray. You pay any licensing fees by the hour. Because rendering is particularly taxing on the CPU, deploying H-series VMs into Batch pools provides an efficient solution.

1.

What is a key benefit of using Azure Batch?

You want a fully managed service for your HPC tasks.

Azure Batch is a fully managed service for HPC tasks in Azure.

You want to be able to expand from on-premises into the cloud as needed.
You need the most powerful VMs available on Azure, and the InfiniBand networking
they offer.

2.

You've got a problem that requires you to use 3D Studio Max. You want the flexibility to
pay the licensing fees on demand. What's the best Azure solution for this task?

Azure Batch

Batch also lets you use some of the most important 3D rendering packages, like
Maya, 3D Studio Max, and Chaos V-Ray.

Microsoft HPC Pack

Azure H-Series VMs

As companies acquire larger volumes of data and more sophisticated methods to manipulate it, high-performance computing (HPC) becomes more popular. A few years
ago, HPC-capable hardware and techniques were beyond the budget and expertise of
many organizations. Now, the HPC facilities in Azure put HPC techniques at your
fingertips, and empower you to perform new tasks. You've learned about the solutions
available on Azure for HPC workloads: Azure Batch, HPC virtual machines, and the
Microsoft HPC Pack. You can now choose the best option for your HPC workloads.

Azure Batch is an Azure service that enables you to run large-scale parallel and high-
performance computing (HPC) applications efficiently in the cloud. There's no need to
manage or configure infrastructure. Just schedule the job, allocate the resources you
need, and let Batch take care of the rest.

In this module, you'll learn about using Azure Batch to create and run parallel tasks with
the Azure CLI, and how to use the CLI to check the status of Batch jobs and tasks. This
module also describes how to use the standalone Batch Explorer tool to monitor
ongoing jobs.
A sample parallel task
To get to grips with Azure Batch and the CLI, you decide on a simple proof-of-concept
to demonstrate the different nodes working together in parallel. You will loop a number
of times in the CLI, add a numbered task per iteration of the loop, and later download
and look at the metadata generated by each task. This metadata will show the Azure Batch service scheduling tasks onto different nodes as they are created, so that they all execute their work in parallel.
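
A sketch of that CLI loop (it assumes a Batch job named myjob already exists; the task command just records Batch-supplied metadata about where it ran):

for i in {1..4}
do
  az batch task create \
    --job-id myjob \
    --task-id mytask$i \
    --command-line "/bin/bash -c 'printenv AZ_BATCH_TASK_ID AZ_BATCH_NODE_ID'"
done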

This kind of proof-of-concept actually underlies many real-world applications of Azure Batch. For example, in the OCR scenario, tasks would also install software like ImageMagick in order to convert the uploaded water purification images to the TIF
format, and would then install Tesseract to perform the work of conversion. These tasks
would be partitioned in such a way that each worker node would perform a portion of
the OCR work in parallel with the others in order to complete faster.

Even this proof-of-concept will demonstrate important components of Azure Batch working in concert together. As shown in the graphic below, you'll create a pool, you'll
create worker nodes, you'll create a job, and you'll create tasks, all using the Azure CLI to
issue commands and get immediate feedback.

Advantages of using Azure Batch


Azure Batch is especially well-suited to running large-scale parallel and high-
performance computing (HPC) batch jobs. The service handles everything for you --
managing and scheduling all the nodes and applications required to run your scenarios.
And it's a free service, so you only pay for the underlying compute, storage, and
networking resources that you use.

Once an Azure Batch account has been created, what is the next step to take in the
workflow of setting up tasks to run on Azure Batch?

Sign in to the Batch account

Signing in to the created Batch account is a prerequisite before any other tasks can
be performed.

Create a pool of virtual nodes


Create a Batch job

2.

Which component of Azure Batch allows tasks to be logically grouped together and
have settings in common configured?

Azure Batch account

Azure Batch pool

Azure Batch job

An Azure Batch job allows tasks to be logically grouped together and have settings
in common configured.

In this module, you will:

 Use resource groups to organize Azure resources
 Use tags to organize resources
 Apply policies to enforce standards in your Azure environments
 Use resource locks to protect critical Azure resources from accidental deletion

Many resources can be moved between resource groups, though some services have specific limitations or requirements on moving. Resource groups can't be nested. Before
any resource can be provisioned, you need a resource group for it to be placed in.

On the Azure portal menu or from the Home page, select Resource groups, and select
your newly created resource group. Note that you may also see a resource group called
NetworkWatcherRG. You can ignore this resource group; it's created automatically to enable Network Watcher in Azure virtual networks.

A resource can have up to 50 tags. The name is limited to 512 characters for all types of
resources except storage accounts, which have a limit of 128 characters. The tag value is
limited to 256 characters for all types of resources. Tags aren't inherited from parent
resources. Not all resource types support tags, and tags can't be applied to classic
resources.

Tagging resources can also help in monitoring to track down impacted resources.
Monitoring systems could include tag data with alerts, giving you the ability to know
exactly who is impacted. In our example above, you applied the Department tag with a
value of Finance to the msftlearn-vnet1 resource. If an alarm was thrown on
msftlearn-vnet1 and the alarm included the tag, you'd know that the finance
department may be impacted by the condition that triggered the alarm. This contextual
information can be valuable if an issue occurs.

It's also common for tags to be used in automation. If you want to automate the
shutdown and startup of virtual machines in development environments during off-
hours to save costs, you can use tags to assist in this automation. Add a shutdown:6PM
and startup:7AM tag to the virtual machines, then create an automation job that looks
for these tags, and shuts them down or starts them up based on the tag value. There are
several solutions in the Azure Automation Runbooks Gallery that use tags in a similar manner to accomplish this result.
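
A hedged sketch of applying those tags with the CLI (the resource group and VM names are hypothetical):

VMID=$(az vm show --resource-group MyResourceGroup --name vm1 --query id --output tsv)

az tag update --resource-id $VMID --operation Merge --tags shutdown=6PM startup=7AM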

Use policies to enforce standards



You're organizing your resources better in resource groups, and you've applied tags to
your resources to use them in billing reports and in your monitoring solution. Resource
grouping and tagging have made a difference in the existing resources, but how do you
ensure that new resources follow the rules? You'll take a look at how policies can help
you enforce standards in your Azure environment.

These policies can enforce these rules when resources are created, and can be evaluated
against existing resources to give visibility into compliance.
Policies can enforce things such as only allowing specific types of resources to be
created, or only allowing resources in specific Azure regions. You can enforce naming
conventions across your Azure environment. You can also enforce that specific tags are
applied to resources. You'll take a look at how policies work.

Use Resource Locks to ensure critical resources aren't modified or deleted (as you'll see
in the next unit).

Resource locks are a setting that can be applied to any resource to block modification or
deletion. Resource locks can be set to either Delete or Read-only. Delete will allow all
operations against the resource but block the ability to delete it. Read-only will only
allow read activities to be performed against it, blocking any modification or deletion of
the resource. Resource locks can be applied to subscriptions, resource groups, and to
individual resources, and are inherited when applied at higher levels.
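
For example, a sketch of locking a virtual network against deletion (names are hypothetical; the portal's Delete lock corresponds to the CLI's CanNotDelete lock type):

az lock create \
  --name LockVnetDelete \
  --lock-type CanNotDelete \
  --resource-group MyResourceGroup \
  --resource-name msftlearn-vnet1 \
  --resource-type Microsoft.Network/virtualNetworks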

How are Azure Policy and RBAC different?

At first glance, it might seem like Azure Policy is a way to restrict access to specific
resource types similar to role-based access control (RBAC). However, they solve different
problems. RBAC focuses on user actions at different scopes. You might be added to the
contributor role for a resource group, allowing you to make changes to anything in that
resource group. Azure Policy focuses on resource properties during deployment and for
already-existing resources. Azure Policy controls properties such as the types or
locations of resources. Unlike RBAC, Azure Policy is a default-allow-and-explicit-deny
system.

1. Create a policy definition
2. Assign a definition to a scope of resources
3. View policy evaluation results

"if": {
"allOf": [

"field": "type",

"equals": "Microsoft.Compute/virtualMachines"

},

"not": {

"field": "Microsoft.Compute/virtualMachines/sku.name",

"in": "[parameters('listOfAllowedSKUs')]"

},

"then": {

"effect": "Deny"

Notice the [parameters('listOfAllowedSKUs')] value; this value is a replacement token that will be filled in when the policy definition is applied to a scope. When a parameter is defined, it's given a name and optionally given a value.

Apply an Azure policy


To apply a policy, we can use the Azure portal, or one of the command-line tools such
as Azure PowerShell by adding the Microsoft.PolicyInsights extension.

PowerShell
# Register the resource provider if it's not already registered
Register-AzResourceProvider -ProviderNamespace 'Microsoft.PolicyInsights'
Once we have registered the provider, we can create a policy assignment. For example,
here's a policy definition that identifies virtual machines not using managed disks.

PowerShell
# Get a reference to the resource group that will be the scope of the assignment
$rg = Get-AzResourceGroup -Name '<resourceGroupName>'

# Get a reference to the built-in policy definition that will be assigned
$definition = Get-AzPolicyDefinition | Where-Object { $_.Properties.DisplayName -eq 'Audit VMs that do not use managed disks' }

# Create the policy assignment with the built-in definition against your resource group
New-AzPolicyAssignment -Name 'audit-vm-manageddisks' -DisplayName 'Audit VMs without managed disks Assignment' -Scope $rg.ResourceId -PolicyDefinition $definition

The preceding commands use the following information:


 Name: The actual name of the assignment. For this example, audit-vm-manageddisks was used.
 DisplayName: The display name for the policy assignment. In this case, you're using Audit VMs without managed disks Assignment.
 Definition: The policy definition on which you're basing the assignment. In this case, it's the ID of the policy definition Audit VMs that do not use managed disks.
 Scope: A scope determines what resources or grouping of resources the policy assignment gets enforced on. It could range from a subscription to resource groups. Be sure to replace <scope> with the name of your resource group.
Important facts about management groups
 Any Azure AD user in the organization can create a management group. The
creator is given an Owner role assignment.
 A single Azure AD organization can support 10,000 management groups.
 A management group tree can support up to six levels of depth not including the
Root level or subscription level.
 Each management group can have many children.
 When your organization creates subscriptions, they are automatically added to
the root management group.
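
A brief sketch of creating a management group and moving a subscription into it (the group name and subscription are hypothetical):

az account management-group create --name Contoso --display-name "Contoso Group"

az account management-group subscription add --name Contoso --subscription "My Subscription"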
1.

True or false: You can download published audit reports and other compliance-related
information related to Microsoft’s cloud service from the Service Trust Portal

True

You can download published audit reports and other compliance-related information
related to Microsoft’s cloud service from the Service Trust Portal.

False

2.

Which Azure service allows you to configure fine-grained access management for Azure
resources, enabling you to grant users only the rights they need to perform their jobs?

Locks

Policy

Initiatives

Role-based Access Control

Role-based access control (RBAC) provides fine-grained access management for Azure
resources, enabling you to grant users only the rights they need to perform their jobs.
RBAC is provided at no additional cost to all Azure subscribers.

3.

Which Azure service allows you to create, assign, and manage policies to enforce
different rules and effects over your resources and stay compliant with your corporate
standards and service-level agreements (SLAs)?

Azure Policy

Azure Policy is a service in Azure that you use to create, assign, and manage policies.
These policies enforce different rules and effects over your resources, so those resources
stay compliant with your corporate standards and service-level agreements (SLAs).

Azure Blueprints

Azure Security Center

Role-based Access Control


4.

Which of the following services provides up-to-date status information about the health
of Azure services?

Compliance Manager

Azure Monitor

Service Trust Portal

Azure Service Health

Azure Service Health is the correct answer, because it provides you with a global view of
the health of Azure services. With Azure Status, a component of Azure Service Health,
you can get up-to-the-minute information on service availability.

5.

Where can you obtain details about the personal data Microsoft processes, how
Microsoft processes it, and for what purposes?

Microsoft Privacy Statement

You can obtain the details about how Microsoft uses personal data in the Microsoft
Privacy Statement.

Compliance Manager

Azure Service Health

Trust Center

In this module, you'll explore the monitoring solutions available in Azure. You'll assess
services such as Azure Security Center, Azure Application Insights, and Azure Sentinel, to
analyze infrastructure and applications. You'll also explore how Azure Monitor is used to
unify various monitoring solutions.

1.

Why would you use Azure Security Center?


You want to secure an infrastructure that consists of on-premises and cloud resources.

Azure Security Center helps you secure your on-premises and cloud resources.

You want to secure an infrastructure that consists of only cloud resources.

You want to secure an infrastructure that consists of only on-premises resources.

2.

How can you prevent persistent access to your virtual machines by using Azure Security
Center?

Use playbooks to block access.

Use just-in-time access to prevent persistent access.

With just-in-time access, your virtual machines are only accessed based on rules that
you configure.

Use automation and orchestration to block access.

3.

Which tool allows you to automate your responses to alerts?

Use just-in-time access to automate your response to alerts.

Use adaptive controls to automate your response to alerts.

Use playbooks to automate your response to alerts.

Playbooks are automated procedures that you can run against alerts.

1.

Why would you use Azure Application Insights?

You want to analyze and address problems that affect your cloud infrastructure's
security.

You want to analyze and address problems that affect your on-premises infrastructure's
security.

You want to analyze and address problems that affect your application's health.
You can analyze and address issues such as exceptions, failures, and availability
problems.

2.

How can you continuously monitor your applications from different geographic
locations?

Use availability tests to continuously monitor your application from different geographic
locations.

Availability tests let you monitor your application from multiple locations in the world.

Use an instrumentation key to continuously monitor your application from different geographic locations.

Use Log Analytics to continuously monitor your application from different geographic
locations.

3.

How would you continuously monitor your release pipelines?

Use availability tests to monitor your release pipelines.

Use a continuous monitoring gate to monitor release pipelines.

Use the gate to stop deployment when an issue has been identified. Deployment will
continue automatically when the issue is resolved.

Use the application map to monitor your release pipelines.

1.

Why would you use Azure Sentinel?

You want to improve the development lifecycle for an application that spans across on-
premises and the cloud.

You want a detailed overview of your enterprise, potentially across multiple clouds and
on-premises locations.

Azure Sentinel will help monitor and respond to security threats across your entire
enterprise.
You want to be able to cross-query over data collected from multiple sources that span
on-premises and the cloud.

2.

How do you set up Azure Sentinel on Azure?

Create an Azure Sentinel instance, and then add Azure Sentinel to a workspace.

Connect your data source, create a workspace, and then add Azure Sentinel to that
workspace.

Create a workspace, and then add that workspace to Azure Sentinel.

You'll need to create a Log Analytics workspace.

3.

Sentinel has raised an incident. How can you investigate which users have been
affected?

Use the investigation map, drill down into the incident, and look for data sources.

Use the investigation map, drill down into the incident, and look for user entities
affected by the alert.

Use entities to view users that might have been in the path of a particular threat or
malicious activity.

Use the investigation map, drill down into the incident, and look for playbooks.

1.

You need to write queries to analyze your log data. How would you do this?

Use the Log Analytics agent to write your queries.

Use Log Analytics to write your queries.

You can create and run queries on your logs and view results with Log Analytics.

Use a workspace to write your queries.
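
For example, you can run such a query from the Azure CLI with az monitor log-analytics query; in this sketch the workspace GUID and the Heartbeat query are illustrative placeholders, not values from this module.

Bash

# Run a KQL query against a Log Analytics workspace (workspace GUID is a placeholder)
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "Heartbeat | summarize count() by Computer | take 10"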

2.
How can you automatically collect security-related data from all newly created virtual
machines into one central location?

Use the Log Analytics agent.

The agent gathers security-related information from resources into a workspace.

Use an instance of Application Insights.

Use a cross-resource query.

3.

How can you analyze both security-related data and application performance data
together?

Use the Log Analytics agent to query Azure Security Center and Application Insights
workspaces together.

Use a cross-resource query to query Azure Security Center and Application Insights
workspaces together.

You use cross-resource querying to analyze the log data collected from separate
workspaces.

Use automatic provisioning to query Azure Security Center and Application Insights
workspaces together.
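
Purely as an illustration, a cross-resource query issued from the CLI might look like the sketch below; the workspace GUID and the Application Insights resource name (contoso-insights) are hypothetical. The app() expression lets a workspace query pull in Application Insights data.

Bash

# Query workspace data and Application Insights data together (names are hypothetical)
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "union SecurityEvent, app('contoso-insights').requests | take 10"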

1.

What's the composition of an alert rule?

Resource, condition, log, alert type

Metrics, logs, application, operating system

Resource, condition, actions, alert details

Correct. These elements make up an alert rule.

2.

Which of the following is an example of a log data type?


Percentage of CPU over time.

HTTP response records.

Correct. HTTP response records are examples of log data types.

Database tables.

Website requests per hour.

Create an Ubuntu Linux server with a CPU stress test via cloud-init

cat <<EOF > cloud-init.txt
#cloud-config
package_upgrade: true
packages:
- stress
runcmd:
- sudo stress --cpu 1
EOF

Create the Linux server:

az vm create \
  --resource-group learn-f50f89f2-6d8d-4a01-8b7e-8b1023726b53 \
  --name vm1 \
  --image UbuntuLTS \
  --custom-data cloud-init.txt \
  --generate-ssh-keys

Get the VM ID with the CLI:

VMID=$(az vm show \
  --resource-group learn-f50f89f2-6d8d-4a01-8b7e-8b1023726b53 \
  --name vm1 \
  --query id \
  --output tsv)

Create a new metric alert:

az monitor metrics alert create \
  -n "Cpu80PercentAlert" \
  --resource-group learn-f50f89f2-6d8d-4a01-8b7e-8b1023726b53 \
  --scopes $VMID \
  --condition "max percentage CPU > 80" \
  --description "Virtual machine is running at or greater than 80% CPU utilization" \
  --evaluation-frequency 1m \
  --window-size 1m \
  --severity 3

There are three types of alerts:

1. Metric alerts
2. Log alerts
3. Activity log alerts

There are two types of activity log alerts:

 Specific operations: Apply to resources within your Azure subscription and often have a scope
with specific resources or a resource group. You use this type when you need to receive an alert
that reports a change to an aspect of your subscription. For example, you can receive an alert if
a virtual machine is deleted or new roles are assigned to a user.
 Service health events: Include notice of incidents and maintenance of target resources.
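
As a sketch of the "specific operations" type, the command below would raise an alert when a virtual machine is deleted; the alert name and resource group are placeholders, not values from this module.

Bash

# Activity log alert for the 'VM deleted' example (names are placeholders)
az monitor activity-log alert create \
  --name "VmDeletedAlert" \
  --resource-group "<resource-group>" \
  --condition category=Administrative and operationName=Microsoft.Compute/virtualMachines/delete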

1.

How are smart groups created?


Through a template deployment.

Through the Azure CLI.

Automatically, using machine learning algorithms.

Correct.

2.

Which of the following is NOT a state of a smart group alert?

Failed

Correct. Failed is not a smart group alert state.

New

Acknowledged

Closed

Features of Azure Monitor logs



Azure Monitor is a service for collecting and analyzing telemetry. It helps you get maximum
performance and availability for your cloud applications, and for your on-premises resources and
applications. It shows how your applications are performing and identifies any issues with them.

Because Azure Monitor is an automatic system, it begins to collect data from these sources as soon as
you create Azure resources such as virtual machines and web apps. You can extend the data that Azure
Monitor collects by:

 Enabling diagnostics: For some resources, such as Azure SQL Database, you receive full information about a resource only after you have enabled diagnostic logging for it. You can use the Azure portal, the Azure CLI, or PowerShell to enable diagnostics (a CLI sketch follows this list).
 Adding an agent: For virtual machines, you can install the Log Analytics agent and configure it to
send data to a Log Analytics workspace. This agent increases the amount of information that's
sent to Azure Monitor.
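
A minimal sketch of enabling diagnostics from the CLI, assuming a hypothetical database resource ID, workspace ID, and log category:

Bash

# Send a resource's diagnostic logs to a Log Analytics workspace (IDs and category are assumptions)
az monitor diagnostic-settings create \
  --name "send-to-workspace" \
  --resource "<database-resource-id>" \
  --workspace "<log-analytics-workspace-id>" \
  --logs '[{"category": "SQLInsights", "enabled": true}]'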

1.

What data does Azure Monitor collect?

Data from a variety of sources, such as the application event log, the operating system (Windows and
Linux), Azure resources, and custom data sources

This answer is correct.

Azure billing details


Backups of database transaction logs

2.

What two fundamental types of data does Azure Monitor collect?

Metrics and logs

Azure Monitor collects two types of data: metrics and logs. Metrics are numerical values that describe
some aspect of a system at a particular time. Logs contain different kinds of data, such as event
information, organized into records.

Username and password

Email notifications and errors

Azure Blob storage


Azure Blob Storage is unstructured, meaning that there are no restrictions on the kinds
of data it can hold. Blobs are highly scalable and apps work with blobs in much the
same way as they would work with files on a disk, such as reading and writing data. Blob
Storage can manage thousands of simultaneous uploads, massive amounts of video
data, constantly growing log files, and can be reached from anywhere with an internet
connection.

Blobs aren't limited to common file formats. A blob could contain gigabytes of binary
data streamed from a scientific instrument, an encrypted message for another
application, or data in a custom format for an app you're developing.

Azure Blob storage lets you stream large video or audio files directly to the user's
browser from anywhere in the world. Blob storage is also used to store data for backup,
disaster recovery, and archiving. It has the ability to store up to 8 TB of data for virtual
machines. The following illustration shows an example usage of Azure blob storage.

Azure Data Lake Storage


The Data Lake feature allows you to perform analytics on your data usage and prepare
reports. Data Lake is a large repository that stores both structured and unstructured
data.

Azure Data Lake Storage combines the scalability and cost benefits of object storage
with the reliability and performance of the Big Data file system capabilities. The
following illustration shows how Azure Data Lake stores all your business data and
makes it available for analysis.

Data progresses through a flow diagram from ingest in its native format; prepare, where
data is cleansed, enriched, annotated, and schematized; store, where data is retained for
present and future analysis; then to analyze, where analytics engines like Hadoop and
Spark are used on the data. Data is shown ingested to Azure Data Lake Store from
devices, social media, LOB applications, web sites, relational databases, video,
Clickstream, and sensors. From there, it can be accessed with batch queries, interactive
queries, real-time analytics, machine learning, and data warehouse.

Disk types
When working with VMs, you can use standard SSD and HDD disks for less critical
workloads, and premium SSD disks for mission-critical production applications. Azure
Disks have consistently delivered enterprise-grade durability, with an industry-leading
0% annualized failure rate. The following illustration shows an Azure virtual machine
using separate disks to store different data.

Storage tiers
Azure offers three storage tiers for blob object storage:

 Hot storage tier: optimized for storing data that is accessed frequently.
 Cool storage tier: optimized for data that is infrequently accessed and stored for at least 30 days.
 Archive storage tier: for data that is rarely accessed and stored for at least 180 days, with flexible latency requirements.
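
For example, a blob can be moved between tiers from the CLI; the account, container, and blob names here are hypothetical:

Bash

# Move an existing blob to the Cool tier (names are hypothetical)
az storage blob set-tier \
  --account-name "<storage-account>" \
  --container-name logs \
  --name app-2020.log \
  --tier Cool \
  --auth-mode login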
Encryption for storage services
The following encryption types are available for your resources:

 Azure Storage Service Encryption (SSE) for data at rest helps you secure your data to meet the organization's security and regulatory compliance. It encrypts the data before storing it and decrypts the data before returning it. The encryption and decryption are transparent to the user.
 Client-side encryption is where the data is already encrypted by the client libraries. Azure stores the data in the encrypted state at rest, which is then decrypted during retrieval.
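
Storage Service Encryption is on by default; as a quick, illustrative check (names are placeholders), you can query an account's encryption settings:

Bash

# Show whether blob encryption (SSE) is enabled for the account (names are placeholders)
az storage account show \
  --name "<storage-account>" \
  --resource-group "<resource-group>" \
  --query encryption.services.blob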

Check your knowledge


1.

Suppose you work at a startup with limited funding. Why might you prefer Azure data
storage over an on-premises solution?

To ensure you run on a specific brand of hardware, which will let you form a marketing
partnership with that hardware vendor.

The Azure pay-as-you-go billing model lets you avoid buying expensive hardware.

There are no large, up-front capital expenditures (CapEx) with Azure. You pay monthly
for only the services you use (OpEx).

To get exact control over the location of your data store.

2.

Which of the following situations would yield the most benefits from relocating an on-
premises data store to Azure?

Unpredictable storage demand that increases and decreases multiple times throughout
the year.

Azure data storage is flexible. You can quickly and easily add or remove capacity. You
can increase performance to handle spikes in load or decrease performance to reduce
costs. In all cases, you pay for only what you use.

Long-term, steady growth in storage demand.


Consistent, unchanging storage demand.

3.

A newly released mobile app using Azure data storage has just been mentioned by a
celebrity on social media, seeing a huge spike in user volume. To meet the unexpected
new user demand, what feature of pay-as-you-go storage will be most beneficial?

The ability to provision and deploy new infrastructure quickly

As the user demand increases, the agility to deploy new servers or services as needed
can help scale to meet the increased user load.

The ability to predict the service costs in advance

The ability to meet compliance requirements for data storage

Exercise - Create your Azure SQL database


Here, you'll learn:

 What considerations you need to make when creating an Azure SQL database,
including:
o How a logical server acts as an administrative container for your
databases.
o The differences between purchasing models.
o How elastic pools enable you to share processing power among
databases.
o How collation rules affect how data is compared and sorted.
 How to bring up Azure SQL Database from the portal.
 How to add firewall rules so that your database is accessible from only trusted
sources.

You can control logins, firewall rules, and security policies through the logical server.
You can also override these policies on each database within the logical server.

Because your logical server can hold more than one database, there's also the idea of
eDTUs, or elastic Database Transaction Units. This option enables you to choose one
price, but allow each database in the pool to consume fewer or greater resources
depending on current load.
What are SQL elastic pools?
When you create your Azure SQL database, you can create a SQL elastic pool.

SQL elastic pools relate to eDTUs. They enable you to buy a set of compute and storage
resources that are shared among all the databases in the pool. Each database can use
the resources it needs, within the limits you set, depending on current load.

For your prototype, you won't need a SQL elastic pool because you need only one SQL
database.

What is collation?
Collation refers to the rules that sort and compare data. Collation helps you define
sorting rules when case sensitivity, accent marks, and other language characteristics are
important.

Let's take a moment to consider what the default collation, SQL_Latin1_General_CP1_CI_AS, means.
 Latin1_General refers to the family of Western European languages.


 CP1 refers to code page 1252, a popular character encoding of the Latin
alphabet.
 CI means that comparisons are case insensitive. For example, "HELLO" compares
equally to "hello".
 AS means that comparisons are accent sensitive. For example, "résumé" doesn't
compare equally to "resume".

Because you don't have specific requirements around how data is sorted and compared,
you choose the default collation.
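
If you did have such requirements, the collation can be specified when the database is created; here's a sketch with placeholder names (the --collation parameter belongs to az sql db create):

Bash

# Create a database with an explicit collation (server and database names are placeholders)
az sql db create \
  --resource-group "<resource-group>" \
  --server "<server-name>" \
  --name "<database-name>" \
  --collation SQL_Latin1_General_CP1_CI_AS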

Important: Over time if you realize you need additional compute power to keep up with
demand, you can adjust performance options or even switch between the DTU and
vCore performance models.

az configure --defaults group=learn-adfc7a30-90c1-4ebb-9ac5-6fc6ba5be756 sql-server=vishalpandita

az sql db list
az sql db list | jq '[.[] | {name: .name}]'

Logistics is your database. Like SQL Server, master includes server metadata, such as
sign-in accounts and system configuration settings.

az sql db show --name Logistics

az sql db show --name Logistics | jq '{name: .name, maxSizeBytes: .maxSizeBytes, status: .status}'

az sql db show-connection-string --client sqlcmd --name Logistics

sqlcmd -S tcp:vishalpandita.database.windows.net,1433 -d Logistics -U vishalpandita -P "Dehradun@123" -N -l 30

An allow-all firewall rule was put in place to permit the connection.

1.

Who's responsible for performing software updates on your Azure SQL databases and the underlying
OS?

You are. It's up to you to periodically log in and install the latest security patches and updates.

Microsoft Azure. Azure manages the hardware, software updates, and OS patches for you.

Azure SQL databases are a Platform-as-a-Service (PaaS) offering. Azure manages the hardware, software
updates, and OS patches for you.

No one. Your database stays with its original OS and software configuration.

2.

What is an Azure SQL logical server?

An administrative container for your databases.


You can control logins, firewall rules, and security policies through the logical server.

Another name for an Azure SQL database instance.

A server that defines the logical rules that sort and compare data.

3.

Your Azure SQL database provides adequate storage and compute power. But you find that you need
additional IO throughput. Which performance model might you use?

DTU

vCore

vCore gives you greater control over what compute and storage resources you create and pay for. You
can increase IO throughput but keep the existing amount of compute and storage.

SQL elastic pool

What is a SQL elastic pool?


SQL elastic pools are a resource allocation service used to scale and manage the performance and cost
of a group of Azure SQL databases. Elastic pools allow you to purchase resources for the group. You set
the amount of resources available to the pool, add databases to the pool, and set minimum and
maximum resource limits for the databases within the pool.

The pool resource requirements are set based on the overall needs of the group. The pool allows the
databases within the pool to share the allocated resources. SQL elastic pools are used to manage the
budget and performance of multiple SQL databases.

How many databases to add to a pool?


The general guidance is, if the combined resources you would need for individual databases to meet
capacity spikes is more than 1.5 times the capacity required for the elastic pool, then the pool will be
cost effective.

At a minimum, it is recommended to add at least two S3 databases or fifteen S0 databases to a single pool for it to have potential cost savings.

Depending on the performance tier, you can add up to 100 or 500 databases to a single pool.

Databases can be added using the Azure portal, the Azure CLI, or PowerShell.

When using the portal, you can add a new pool to an existing SQL server. Or you can create a new SQL
elastic pool resource and specify the server.

When using the CLI, call az sql db create and specify the pool name using the --elastic-pool-
name parameter. This command can move an existing database into the pool or create a new one if it
doesn't exist.
When using PowerShell, you can assign new databases to a pool using New-AzSqlDatabase and move
existing databases using Set-AzSqlDatabase.

You can add existing Azure SQL databases from your Azure SQL server into the pool or create new
databases. And you can mix service tiers within the same pool.
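
A sketch of that CLI flow, using a hypothetical pool named FitnessPool and the server variables defined just below; the pool edition shown is an assumption:

Bash

# Create a Standard-tier elastic pool, then create a database inside it
az sql elastic-pool create \
  --resource-group $RESOURCE_GROUP \
  --server $SERVERNAME \
  --name FitnessPool \
  --edition Standard

az sql db create \
  --resource-group $RESOURCE_GROUP \
  --server $SERVERNAME \
  --name FitnessParisDB \
  --elastic-pool-name FitnessPool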

ADMIN_LOGIN="ServerAdmin"
RESOURCE_GROUP=learn-f53e7651-c7bc-4ff4-bd2b-85573e860915
SERVERNAME=FitnessSQLServer-$RANDOM
LOCATION=<location>
PASSWORD=<password>

az sql server create \
  --name $SERVERNAME \
  --resource-group $RESOURCE_GROUP \
  --location $LOCATION \
  --admin-user $ADMIN_LOGIN \
  --admin-password $PASSWORD

az sql db create \
  --resource-group $RESOURCE_GROUP \
  --server $SERVERNAME \
  --name FitnessParisDB

1.

Why is the post-migration stage an important part of a successful migration plan?

Data accuracy and performance

In the post-migration phase, you validate that your data in the new system is accurate and complete,
matching the data in the original source system. Additionally, you can assess performance to ensure
data is returned in the times outlined in your requirements documentation.

Schema validation
Sync and cutover

2.

What is the correct tool for doing an assessment?

Data Migration Assistant

Data Migration Assistant assesses your existing database for any compatibility issues with Azure SQL
Database and generates reports with recommended fixes.

Azure Data Studio

Azure Database Migration Service

Although the online option looks attractive, there's a major downside: cost. The online option requires
creating a SQL Server instance that's based on the Premium price tier. This can become cost prohibitive,
especially when you don't need any of the features of the Premium tier except its support of online
migrations.

1.

What is the chief obstacle to performing online migrations?

The requirement of the Premium model for Azure SQL Database.

The Premium model for Azure SQL Database is expensive. This cost can be a big obstacle to doing online
migrations.

Online migrations are slower.

Online migrations require more personnel to complete.

2.

What is the best practice for doing migrations?

Try to do an offline migration first.

Try to do an offline migration first to see if it will run in an acceptable time frame that doesn't incur the
cost of the Premium database tier.

Try to perform an online migration first.

Try to perform a manual migration first.

3.

Which service is used to perform the data migration?


Azure Database Migration Service

Azure Database Migration Service is used to perform both offline and online data migrations.

Data Migration Assistant

Azure Data Movement Service

SQL Database currently supports three deployment options: single, elastic pool, and managed instance.
We'll focus on the single-database deployment option.

If Active Directory single sign-on is enabled, you can connect by using your Azure identity.

Let's import some data into SQL tables using the bcp tool.

git clone https://github.com/MicrosoftDocs/mslearn-develop-app-that-queries-azure-sql education

mv ~/education/data ~/educationdata

cd ~/educationdata

ls

CREATE TABLE Courses
(
    CourseID INT NOT NULL PRIMARY KEY,
    CourseName VARCHAR(50) NOT NULL
)
Run the bcp utility to create a format file from the schema of the Courses table in the
database. The format file specifies that the data will be in character format (-c) and
separated by commas (-t,).

Bash

bcp "$DATABASE_NAME.dbo.courses" format nul -c -f courses.fmt -t, -S "$DATABASE_SERVER.database.windows.net" -U $AZURE_USER -P $AZURE_PASSWORD
 In the code editor, open the format file, courses.fmt, that was generated by the
previous command.

Bash
code courses.fmt

The file should look like this:

text

14.0
2
1 SQLCHAR 0 12 "," 1 CourseID ""
2 SQLCHAR 0 50 "\n" 2 CourseName SQL_Latin1_General_CP1_CI_AS
 Review the file. The data in the first column of the comma-separated file will go
into the CourseID column of the Courses table. The second field will go into the
CourseName column. The second column is character-based and has a collation
that's associated with it. The field separator in the file is expected to be a
comma. The row terminator (after the second field) should be a newline
character. In a real-world scenario, your data might not be organized this neatly.
You might have different field separators and fields in a different order from the
columns. In that situation, you can edit the format file to change these items on a
field-by-field basis. Press Ctrl+q to close the editor.
 Run the following command to import the data in the courses.csv file in the
format that's specified by the amended courses.fmt file. The -F 2 flag directs
the bcp utility to start importing data from line 2 in the data file. The first line
contains headers.

Bash

bcp "$DATABASE_NAME.dbo.courses" in courses.csv -f courses.fmt -S "$DATABASE_SERVER.database.windows.net" -U $AZURE_USER -P $AZURE_PASSWORD -F 2

Verify that the bcp utility imports 9 rows and doesn't report any errors.

 Run the following sequence of operations to import the data for the dbo.Modules table from the modules.csv file.

1. Generate a format file.

Bash

bcp "$DATABASE_NAME.dbo.modules" format nul -c -f modules.fmt -t, -S "$DATABASE_SERVER.database.windows.net" -U $AZURE_USER -P $AZURE_PASSWORD

2. Import the data from the modules.csv file into the Modules table in the database.

Bash

bcp "$DATABASE_NAME.dbo.modules" in modules.csv -f modules.fmt -S "$DATABASE_SERVER.database.windows.net" -U $AZURE_USER -P $AZURE_PASSWORD -F 2

3. Verify that this command imports 16 rows.

 Perform the following sequence of operations to import the data for the dbo.StudyPlans table from the studyplans.csv file.

1. Generate a format file.

Bash

bcp "$DATABASE_NAME.dbo.studyplans" format nul -c -f studyplans.fmt -t, -S "$DATABASE_SERVER.database.windows.net" -U $AZURE_USER -P $AZURE_PASSWORD

2. Import the data from the studyplans.csv file into the StudyPlans table in the database.

Bash

bcp "$DATABASE_NAME.dbo.studyplans" in studyplans.csv -f studyplans.fmt -S "$DATABASE_SERVER.database.windows.net" -U $AZURE_USER -P $AZURE_PASSWORD -F 2

3. Verify that this command imports 45 rows.

bcp "$DATABASE_NAME.dbo.courses" in courses.csv -f courses.fmt -S


"$DATABASE_SERVER.database.windows.net" -U $AZURE_USER -P $AZURE_PASSWORD
-F 2

bcp "$DATABASE_NAME.dbo.modules" format nul -c -f modules.fmt -t, -S


"$DATABASE_SERVER.database.windows.net" -U $AZURE_USER -P $AZURE_PASSWORD

cat modules.fmt shows the following values:

14.0
2
1 SQLCHAR 0 5 "," 1 ModuleCode SQL_Latin1_General_CP1_CI_AS
2 SQLCHAR 0 50 "\n" 2 ModuleTitle SQL_Latin1_General_CP1_CI_AS

bcp "$DATABASE_NAME.dbo.modules" in modules.csv -f modules.fmt -S


"$DATABASE_SERVER.database.windows.net" -U $AZURE_USER -P $AZURE_PASSWORD
-F 2

bcp "$DATABASE_NAME.dbo.studyplans" format nul -c -f studyplans.fmt -t, -S


"$DATABASE_SERVER.database.windows.net" -U $AZURE_USER -P $AZURE_PASSWORD

cat studyplans.fmt shows the following values:

14.0
3
1 SQLCHAR 0 12 "," 1 CourseID ""
2 SQLCHAR 0 5 "," 2 ModuleCode SQL_Latin1_General_CP1_CI_AS
3 SQLCHAR 0 12 "\n" 3 ModuleSequence ""

Below is a .NET connection string:

Server=tcp:myserver.database.windows.net,1433;Initial Catalog=mydatabase;Persist Security Info=False;User ID=myusername;Password=mypassword;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;

az webapp up \
  --resource-group learn-016c467c-a17c-482d-9c1c-147bd0f301f8 \
  --name $WEBAPPNAME

Cosmos DB lesson starts

We'll start by learning about request units and how to estimate throughput
requirements.

In Azure Cosmos DB, you provision throughput for your containers to run writes, reads,
updates, and deletes. You can provision throughput for an entire database and have it
shared among containers within the database. You can also provision throughput
dedicated to specific containers.

What is a request unit?


Azure Cosmos DB measures throughput using something called a request unit (RU). Request
unit usage is measured per second, so the unit of measure is request units per second (RU/s).
You must reserve the number of RU/s you want Azure Cosmos DB to provision in advance, so it
can handle the load you've estimated, and you can scale your RU/s up or down at any time to
meet current demand.

The number of request units consumed for an operation changes depending on the document size, the
number of properties in the document, the operation being performed, and some additional concepts
such as consistency and indexing policy.

You provision the number of RUs on a per-second basis and you can change the value at any time in
increments or decrements of 100 RUs. You can make your changes either programmatically or by using
the Azure portal. You're billed on an hourly basis.
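
For instance, adjusting provisioned throughput from the CLI might look like the following sketch; the account name and resource group are placeholders, and the Products database and Clothing container names are taken from the exercise later in this module:

Bash

# Scale the container's provisioned throughput to 500 RU/s (account/group are placeholders)
az cosmosdb sql container throughput update \
  --account-name "<cosmos-account>" \
  --resource-group "<resource-group>" \
  --database-name Products \
  --name Clothing \
  --throughput 500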

Script usage: As with queries, stored procedures and triggers consume RUs based on the complexity of
their operations. As you develop your application, inspect the request charge header to better
understand how much RU capacity each operation consumes.

The .NET SDK will automatically retry your request after waiting the amount of time specified in the
retry-after header.

Check your knowledge


1.

True or false: The number of RUs used for a given database operation over the same data varies
over time.

True

False

Azure Cosmos DB ensures that the number of RUs for a given database operation over a given
dataset is deterministic.

2.
Which of the following options affects the number of request units it takes to write a document?

Size of the document

Item property count

Indexing policy

All of the above

All three options (size of the document, item property count, and indexing policy) are considered when provisioning request units.

3.

Which of the following statements is false about Request Units (RUs) in Azure Cosmos DB?

The cost to read a 1 KB item is approximately one Request Unit (or 1 RU).

Requests are rate-limited if you exceed the number of provisioned RU.

Once you set the number of request units, it's impossible to modify this number.

You can increase or decrease the number of request units provisioned to a container or a
database.

If you provision 'R' RUs on an Azure Cosmos container (or a database), Azure Cosmos DB
ensures that 'R' RUs are available in each region associated with your account.
What is a partition strategy?

If you continue to add new data to a single server or a single partition, it will eventually run out
of space. A partitioning strategy enables you to add more partitions to your database when you need
them. This scaling strategy is called scale out or horizontal scaling.

A partition key defines the partition strategy; it's set when you create a container and can't be
changed. Selecting the right partition key is an important decision to make early in your
development process.

In this unit, you'll learn how to choose a partition key that's right for your scenario, which will
enable you to take advantage of Azure Cosmos DB autoscaling.

The storage space for the data associated with each partition key can't exceed 20 GB, which is the size of
one physical partition in Azure Cosmos DB. So, if your single userID or productId record is going to be
larger than 20 GB, think about using a composite key instead so that each record is smaller. An example
of a composite key would be userID-date, which would look like CustomerName-08072018. This
composite key approach would enable you to create a new partition for each day a user visited the site.

Best practices
When you're trying to determine the right partition key and the solution isn't obvious, here are a
few tips to keep in mind.

 Don't be afraid of choosing a partition key that has a large number of values. The more
values your partition key has, the more scalability you have.
 To determine the best partition key for a read-heavy workload, review the top three to
five queries you plan on using. The value most frequently included in the WHERE clause
is a good candidate for the partition key.
 For write-heavy workloads, you'll need to understand the transactional needs of your
workload, because the partition key is the scope of multi-document transactions.

Review: Choosing a Partition Key

For each Azure Cosmos DB container, you should specify a partition key that satisfies the
following core properties:

 Have a high cardinality. This option allows data to distribute evenly across all physical partitions.
 Evenly distribute requests. Remember the total number of RU/s is evenly divided across all
physical partitions.
 Evenly distribute storage. Each partition can grow up to 20 GB in size.

In the next two exercises, you will create a database and container. In the first exercise, you will
use the Azure portal to create your database and container. However, if you would prefer to learn
how to create a database and container programmatically, you can skip ahead to the next
exercise.
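
If you prefer the programmatic route, a minimal CLI sketch might look like this; the account name and the /productId partition key path are assumptions, not values from this module:

Bash

# Create the database, then a container with a partition key and provisioned RU/s
az cosmosdb sql database create \
  --account-name "<cosmos-account>" \
  --resource-group "<resource-group>" \
  --name Products

az cosmosdb sql container create \
  --account-name "<cosmos-account>" \
  --resource-group "<resource-group>" \
  --database-name Products \
  --name Clothing \
  --partition-key-path "/productId" \
  --throughput 1000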

1.

True or false: You can add a partition key to an Azure Cosmos DB container after it has been
created.

True

False

You can set the partition key only when the container is created.

2.

Your organization is planning to use Azure Cosmos DB to store vehicle telemetry data generated
from millions of vehicles every second. Which of the following options for your Partition Key
will optimize storage distribution?

Vehicle Model

Vehicle Identification Number (VIN) which looks like WDDEJ9EB6DA032037

Auto manufacturers have transactions occurring throughout the year. This option will create a
more balanced distribution of storage across partition key values.

In this exercise, you'll use the Azure portal to create an Azure Cosmos DB database named "Products" with a container named "Clothing", and set your partition key and throughput value.

dotnet new console --output myApp

cd myApp

dotnet add package Microsoft.Azure.Cosmos --version 3.0.0

dotnet restore

dotnet build

code .

You'll learn how Azure Migrate can:

 Assess your environment's readiness to move to Azure.


 Estimate monthly costs.
 Get sizing recommendations for machines.

Azure migration framework

You can use a framework of Assess, Migrate, Optimize, and Monitor as a path for migration.
Each stage focuses on a particular aspect of ensuring the success of a migration.

Let's look at what's involved at each stage.
