AZ-203T05
Monitor, troubleshoot,
and optimize Azure
solutions
MCT USE ONLY. STUDENT USE PROHIBITED
Contents
Start Here
Welcome
Welcome to the Monitor, troubleshoot, and optimize Azure solutions course. This course is part of a
series of courses to help you prepare for the AZ-203: Developing Solutions for Microsoft Azure1
certification exam.
Candidates for this exam are Azure Developers who design and build cloud solutions such as applications
and services. They participate in all phases of development, from solution design, to development and
deployment, to testing and maintenance. They partner with cloud solution architects, cloud DBAs, cloud
administrators, and clients to implement the solution.
Candidates should be proficient in developing apps and services by using Azure tools and technologies,
including storage, security, compute, and communications.
Candidates must have at least one year of experience developing scalable solutions through all phases of
software development and be skilled in at least one cloud-supported programming language.
1 https://www.microsoft.com/en-us/learning/exam-az-203.aspx
Course description
In this course students will gain the knowledge and skills needed to ensure applications hosted in Azure are operating efficiently and as intended. Students will learn how Azure Monitor operates and how to use tools like Log Analytics and Application Insights to better understand what is happening in their application. Students will also learn how to implement autoscale, instrument their solutions to support monitoring and logging, and use Azure Cache and CDN options to enhance the end-user experience.
Throughout the course students learn how to create and integrate these resources by using the Azure Portal, Azure CLI, REST, and application code.
Level: Intermediate
Audience:
●● Students in this course are interested in Azure development or in passing the Microsoft Azure Developer Associate certification exam.
●● Students should have 1-2 years of experience as a developer. This course assumes students know how to code and have a fundamental knowledge of Azure.
●● It is recommended that students have some experience with PowerShell or Azure CLI, working in the
Azure portal, and with at least one Azure-supported programming language. Most of the examples in
this course are presented in C# .NET.
Course Syllabus
Module 1: Introduction to Azure Monitor
●● Overview of Azure Monitor
Module 2: Develop code to support scalability of apps and services
●● Implement autoscale
●● Implement code that addresses singleton application instances
●● Implement code that handles transient faults
Module 3: Instrument solutions to support monitoring and logging
●● Configure instrumentation in an app or server by using Application Insights
●● Analyze and troubleshoot solutions by using Azure Monitor
Module 4: Integrate caching and content delivery within solutions
●● Azure Cache for Redis
●● Develop for storage on CDNs
Introduction to Azure Monitor
Log Analytics and Application Insights have been consolidated into Azure Monitor to provide a single
integrated experience for monitoring Azure resources and hybrid environments.
Overview
The following diagram gives a high-level view of Azure Monitor. At the center of the diagram are the data stores for metrics and logs, which are the two fundamental types of data used by Azure Monitor. On the left are the sources that collect telemetry from different monitored resources and populate the data stores. On the right are the different functions that Azure Monitor performs with this collected data, such as analysis, alerting, and streaming to external systems.
As soon as you create an Azure subscription and start adding resources such as virtual machines and web
apps, Azure Monitor starts collecting data. Activity Logs record when resources are created or modified.
Metrics tell you how the resource is performing and the resources that it's consuming.
Extend the data you're collecting into the actual operation of the resources by enabling diagnostics and adding an agent to compute resources. This collects telemetry for the internal operation of the resource and allows you to configure different data sources to collect logs and metrics from Windows and Linux guest operating systems.
Add an instrumentation package to your application to enable Application Insights to collect detailed information about your application, including page views, application requests, and exceptions. You can further verify the availability of your application by configuring an availability test to simulate user traffic.
Insights
Monitoring data is only useful if it can increase your visibility into the operation of your computing environment. Azure Monitor includes several features and tools that provide valuable insights into your applications and the other resources they depend on. Monitoring solutions and features such as Application Insights and Container Insights provide deep insights into different aspects of your application and specific Azure services.
Application Insights
Application Insights monitors the availability, performance, and usage of your web applications, whether they're hosted in the cloud or on-premises. It leverages the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations and diagnose errors without waiting for a user to report them. Application Insights includes connection points to a variety of development tools and integrates with Visual Studio to support your DevOps processes.
Monitoring solutions
Monitoring solutions in Azure Monitor are packaged sets of logic that provide insights for a particular application or service. They include logic for collecting monitoring data for the application or service, queries to analyze that data, and views for visualization. Monitoring solutions are available from Microsoft and partners to provide monitoring for various Azure services and other applications.
Alerts
Alerts in Azure Monitor proactively notify you of critical conditions and potentially attempt to take
corrective action. Alert rules based on metrics provide near real time alerting based on numeric values,
while rules based on logs allow for complex logic across data from multiple sources.
Alert rules in Azure Monitor use action groups, which contain unique sets of recipients and actions that
can be shared across multiple rules. Based on your requirements, action groups can perform such actions
as using webhooks to have alerts start external actions or to integrate with your ITSM tools.
Autoscale
Autoscale allows you to have the right amount of resources running to handle the load on your application. It allows you to create rules that use metrics collected by Azure Monitor to determine when to automatically add resources to handle increases in load, and also save money by removing resources that are sitting idle. You specify a minimum and maximum number of instances and the logic for when to increase or decrease resources.
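The min/max bounds described above always cap whatever instance count the rules ask for. A minimal sketch of that clamping logic (illustrative Python; the function name and values are hypothetical, not the Autoscale implementation):

```python
def clamp_capacity(desired: int, minimum: int, maximum: int) -> int:
    """Clamp a desired instance count to the configured min/max bounds."""
    return max(minimum, min(desired, maximum))

# With a minimum of 2 and a maximum of 10, a rule asking for 14 instances
# is capped at 10, and a rule asking for 1 instance is raised to 2.
print(clamp_capacity(14, 2, 10))  # → 10
print(clamp_capacity(1, 2, 10))   # → 2
```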
●● Azure Tenant: Telemetry related to your Azure tenant is collected from tenant-wide services such as
Azure Active Directory.
●● Azure platform: Telemetry related to the health and operation of Azure itself includes data about the
operation and management of your Azure subscription. It includes service health data stored in the
Azure Activity log and audit logs from Azure Active Directory.
●● Guest operating system: Compute resources in Azure, in other clouds, and on-premises have a guest
operating system to monitor. With the installation of one or more agents, you can gather telemetry
from the guest into the same monitoring tools as the Azure services themselves.
●● Applications: In addition to telemetry that your application may write to the guest operating system,
detailed application monitoring is done with Application Insights. Application Insights can collect data
from applications running on a variety of platforms. The application can be running in Azure, another
cloud, or on-premises.
●● Custom sources: Azure Monitor can collect log data from any REST client using the Data Collector
API. This allows you to create custom monitoring scenarios and extend monitoring to resources that
don't expose telemetry through other sources.
Application Insights
Application Insights is an extensible Application Performance Management (APM) service for web developers on multiple platforms. Use it to monitor your live web application. It will automatically detect performance anomalies. It includes powerful analytics tools to help you diagnose issues and to understand what users actually do with your app.
In addition, you can pull in telemetry from the host environments such as performance counters, Azure
diagnostics, or Docker logs. You can also set up web tests that periodically send synthetic requests to
your web service.
All these telemetry streams are integrated in the Azure portal, where you can apply powerful analytic and
search tools to the raw data.
Overview
The diagram below represents the flow of alerts.
Alert rules are separated from alerts and the actions that are taken when an alert fires.
Alert rule - The alert rule captures the target and criteria for alerting. The alert rule can be in an enabled
or a disabled state. Alerts only fire when enabled.
Manage alerts
You can set the state of an alert to specify where it is in the resolution process. When the criteria specified in the alert rule are met, an alert is created or fired, and it has a state of New. You can change the state when you acknowledge an alert and when you close it. All state changes are stored in the history of the alert.
The following alert states are supported.
●● New: The issue has just been detected and has not yet been reviewed.
●● Acknowledged: An administrator has reviewed the alert and started working on it.
●● Closed: The issue has been resolved. After an alert has been closed, you can reopen it by changing it to another state.
Alert state is different from and independent of the monitor condition. Alert state is set by the user. Monitor condition is set by the system. When an alert fires, the alert's monitor condition is set to fired. When the underlying condition that caused the alert to fire clears, the monitor condition is set to resolved. The alert state isn't changed until the user changes it.
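The independence of the two fields can be sketched as follows (illustrative Python; the class and method names are hypothetical, not the Azure Monitor API):

```python
class Alert:
    """Sketch of an alert's two independent fields: the user-managed
    state and the system-managed monitor condition."""

    def __init__(self):
        self.state = "New"                # set by the user
        self.monitor_condition = "Fired"  # set by the system

    def acknowledge(self):
        self.state = "Acknowledged"

    def close(self):
        self.state = "Closed"

    def underlying_condition_cleared(self):
        # The system resolves the monitor condition,
        # but the user-facing state is left untouched.
        self.monitor_condition = "Resolved"

alert = Alert()
alert.underlying_condition_cleared()
print(alert.state)              # → New (until a user changes it)
print(alert.monitor_condition)  # → Resolved
```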
Alerts experience
The default Alerts page provides a summary of alerts that are created within a particular time window. It
displays the total alerts for each severity with columns that identify the total number of alerts in each
state for each severity. Select any of the severities to open the All Alerts page filtered by that severity.
It does not show or track older classic alerts. You can change the subscriptions or filter parameters to
update the page.
Review questions
Module 1 review questions
Azure Monitor data types
All data collected by Azure Monitor fits into one of two fundamental types: metrics and logs. What kind of information is collected for each fundamental type?
Monitored metrics
Application Insights is aimed at the development team to help you understand how your app is performing and how it's being used. It monitors many metrics; how many can you think of?
●● Performance counters from your Windows Server or Linux server machines, such as those for CPU,
memory, and network usage.
●● Host diagnostics from Docker or Azure.
●● Diagnostic trace logs from your app so that you can correlate trace events with requests.
●● Custom events and metrics that you write yourself in the client or server code to track business
events, such as the number of items sold or games won.
Develop code to support scalability of apps and services
Implement autoscale
Common autoscale patterns
Note: Azure Monitor autoscale currently applies only to Virtual Machine Scale Sets, Cloud Services, App
Service - Web Apps, and API Management services.
●● The scale-out rule is triggered when the virtual machine scale set's average percentage CPU metric
is greater than 85 percent for the past 10 minutes.
●● The scale-in rule is triggered when the virtual machine scale set's average percentage CPU metric is less than 60 percent for the past minute.
{
"id": "/subscriptions/s1/resourceGroups/rg1/providers/microsoft.insights/autoscalesettings/setting1",
"name": "setting1",
"type": "Microsoft.Insights/autoscaleSettings",
"location": "East US",
"properties": {
"enabled": true,
"targetResourceUri": "/subscriptions/s1/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachineScaleSets/vmss1",
"profiles": [
{
"name": "mainProfile",
"capacity": {
"minimum": "1",
"maximum": "4",
"default": "1"
},
"rules": [
{
"metricTrigger": {
"metricName": "Percentage CPU",
"metricResourceUri": "/subscriptions/s1/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachineScaleSets/vmss1",
"timeGrain": "PT1M",
"statistic": "Average",
"timeWindow": "PT10M",
"timeAggregation": "Average",
"operator": "GreaterThan",
"threshold": 85
},
"scaleAction": {
"direction": "Increase",
"type": "ChangeCount",
"value": "1",
"cooldown": "PT5M"
}
},
{
"metricTrigger": {
"metricName": "Percentage CPU",
"metricResourceUri": "/subscriptions/s1/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachineScaleSets/vmss1",
"timeGrain": "PT1M",
"statistic": "Average",
"timeWindow": "PT10M",
"timeAggregation": "Average",
"operator": "LessThan",
"threshold": 60
},
"scaleAction": {
"direction": "Decrease",
"type": "ChangeCount",
"value": "1",
"cooldown": "PT5M"
}
}
]
}
]
}
}
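As a rough illustration of what a metricTrigger describes, the comparison can be sketched like this (illustrative Python with a trimmed-down copy of the scale-out rule above; the rule_fires helper is hypothetical and assumes the metric has already been averaged over the time window):

```python
import json

# Trimmed-down copy of the scale-out rule from the setting above.
rule = json.loads("""{
  "metricTrigger": {
    "metricName": "Percentage CPU",
    "timeWindow": "PT10M",
    "operator": "GreaterThan",
    "threshold": 85
  },
  "scaleAction": {"direction": "Increase", "type": "ChangeCount", "value": "1"}
}""")

def rule_fires(rule: dict, averaged_metric: float) -> bool:
    """Apply the rule's operator to the windowed average of the metric."""
    trigger = rule["metricTrigger"]
    ops = {"GreaterThan": lambda v, t: v > t, "LessThan": lambda v, t: v < t}
    return ops[trigger["operator"]](averaged_metric, trigger["threshold"])

print(rule_fires(rule, 90))  # → True: 90% CPU exceeds the 85 threshold
print(rule_fires(rule, 60))  # → False
```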
Autoscale profiles
There are three types of Autoscale profiles:
●● Regular profile: The most common profile. If you don’t need to scale your resource based on the day
of the week, or on a particular day, you can use a regular profile. This profile can then be configured
with metric rules that dictate when to scale out and when to scale in. You should only have one
regular profile defined.
●● The example profile used earlier in this article is an example of a regular profile. Note that it is also
possible to set a profile to scale to a static instance count for your resource.
●● Fixed date profile: This profile is for special cases. For example, let’s say you have an important event
coming up on December 26, 2017 (PST). You want the minimum and maximum capacities of your
resource to be different on that day, but still scale on the same metrics. In this case, you should add a
fixed date profile to your setting’s list of profiles. The profile is configured to run only on the event’s
day. For any other day, Autoscale uses the regular profile.
"profiles": [{
"name": "regularProfile",
"capacity": {
...
},
"rules": [{
...
},
{
...
}]
},
{
"name": "eventProfile",
"capacity": {
...
},
"rules": [{
...
}, {
...
}],
"fixedDate": {
"timeZone": "Pacific Standard Time",
"start": "2017-12-26T00:00:00",
"end": "2017-12-26T23:59:00"
}
}]
●● Recurrence profile: This type of profile enables you to ensure that this profile is always used on a
particular day of the week. Recurrence profiles only have a start time. They run until the next recur-
rence profile or fixed date profile is set to start. An Autoscale setting with only one recurrence profile
runs that profile, even if there is a regular profile defined in the same setting. The following example
illustrates a way this profile is used:
"recurrence": {
"frequency": "Week",
"schedule": {
"timeZone": "Pacific Standard Time",
"days": [
"Saturday"
],
"hours": [
0
],
"minutes": [
0
]
}
}
}]
The preceding setting shows that each recurrence profile has a schedule. This schedule determines when
the profile starts running. The profile stops when it’s time to run another profile.
For example, in the preceding setting, “weekdayProfile” is set to start on Monday at 12:00 AM. That
means this profile starts running on Monday at 12:00 AM. It continues until Saturday at 12:00 AM, when
“weekendProfile” is scheduled to start running.
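The start-until-next-start behavior can be sketched as follows (illustrative Python; the times mirror the weekdayProfile/weekendProfile example, and the helper is hypothetical):

```python
# Each recurrence profile has only a start; a profile runs until the
# next profile's start time. Times are (weekday_index, hour), Monday=0.
profiles = [
    ("weekdayProfile", (0, 0)),   # starts Monday 12:00 AM
    ("weekendProfile", (5, 0)),   # starts Saturday 12:00 AM
]

def active_profile(now, profiles):
    """Return the most recently started profile at time `now`."""
    started = [(start, name) for name, start in profiles if start <= now]
    if not started:
        # Before the earliest start, wrap around to the latest profile.
        return max(profiles, key=lambda p: p[1])[0]
    return max(started)[1]

print(active_profile((2, 9), profiles))   # Wednesday 9 AM → weekdayProfile
print(active_profile((6, 15), profiles))  # Sunday 3 PM → weekendProfile
```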
Autoscale evaluation
Given that Autoscale settings can have multiple profiles, and each profile can have multiple metric rules,
it is important to understand how an Autoscale setting is evaluated. Each time the Autoscale job runs, it
begins by choosing the profile that is applicable. Then Autoscale evaluates the minimum and maximum
values, and any metric rules in the profile, and decides if a scale action is necessary.
For example, let's say there is a virtual machine scale set with a current capacity of 10. There are two scale-out rules: one that increases capacity by 10 percent, and one that increases capacity by 3 counts. The first rule would result in a new capacity of 11, and the second rule would result in a capacity of 13. To ensure service availability, Autoscale chooses the action that results in the maximum capacity, so the second rule is chosen.
If no scale-out rules are triggered, Autoscale evaluates all the scale-in rules (rules with direction =
“Decrease”). Autoscale only takes a scale-in action if all of the scale-in rules are triggered.
Autoscale calculates the new capacity determined by the scaleAction of each of those rules. Then it
chooses the scale action that results in the maximum of those capacities to ensure service availability.
For example, let's say there is a virtual machine scale set with a current capacity of 10. There are two
scale-in rules: one that decreases capacity by 50 percent, and one that decreases capacity by 3 counts.
The first rule would result in a new capacity of 5, and the second rule would result in a capacity of 7. To
ensure service availability, Autoscale chooses the action that results in the maximum capacity, so the
second rule is chosen.
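The selection logic in this scale-in example boils down to taking the maximum of the candidate capacities (illustrative Python sketch):

```python
current = 10

# Two scale-in rules: halve capacity, or remove 3 instances.
candidates = [current // 2, current - 3]  # [5, 7]

# To protect availability, Autoscale picks the LARGEST resulting capacity.
new_capacity = max(candidates)
print(new_capacity)  # → 7
```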
1 https://docs.microsoft.com/azure/application-insights/app-insights-asp-net
3. Click on Autoscale setting to view all the resources for which autoscale is applicable, along with their current autoscale status.
4. Open the Autoscale blade in Azure Monitor and select a resource you want to scale.
5. Note: The steps below use an app service plan associated with a web app that has App Insights
configured.
6. In the scale setting blade for the resource, notice the current instance count, then click on Enable autoscale.
7. Provide a name for the scale setting, and then click on Add a rule. Notice the scale rule options that open as a context pane on the right-hand side. By default, it sets the option to scale your instance count by 1 if the CPU percentage of the resource exceeds 70%. Change the metric source at the top to Application Insights, select the app insights resource in the Resource dropdown, and then select the custom metric you want to scale on.
8. Similar to the step above, add a scale rule that will scale in and decrease the scale count by 1 if the
custom metric is below a threshold.
9. Set your instance limits. For example, if you want to scale between 2-5 instances depending on the custom metric fluctuations, set Minimum to '2', Maximum to '5', and Default to '2'.
10. Note: If there is a problem reading the resource metrics and the current capacity is below the default capacity, then to ensure the availability of the resource, Autoscale scales out to the default value. If the current capacity is already higher than the default capacity, Autoscale will not scale in.
11. Click on Save.
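The fallback behavior described in the note at step 10 can be sketched as follows (illustrative Python; the helper is hypothetical, not the actual Autoscale implementation):

```python
def fallback_capacity(current: int, default: int, metrics_available: bool) -> int:
    """Sketch of Autoscale's behavior when resource metrics can't be read."""
    if metrics_available:
        return current  # normal rule evaluation applies instead
    if current < default:
        return default  # scale out to the default to protect availability
    return current      # never scale in on missing metric data

print(fallback_capacity(1, 2, metrics_available=False))  # → 2
print(fallback_capacity(5, 2, metrics_available=False))  # → 5
```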
Congratulations. You have now successfully created a scale setting to autoscale your web app based on a custom metric.
2 https://docs.microsoft.com/en-us/azure/azure-monitor/platform/autoscale-common-metrics
2. If the average CPU% across instances goes to 80, autoscale scales out adding a third instance.
3. Now assume that over time the CPU% falls to 60.
4. Autoscale's scale-in rule estimates the final state if it were to scale in. For example, 60 x 3 (current instance count) = 180; 180 / 2 (final number of instances when scaled in) = 90. So autoscale does not scale in, because it would have to scale out again immediately. Instead, it skips scaling in.
5. The next time autoscale checks, the CPU continues to fall to 50. It estimates again: 50 x 3 instances = 150; 150 / 2 instances = 75, which is below the scale-out threshold of 80, so it scales in successfully to 2 instances.
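The flapping estimate in steps 4 and 5 can be sketched as follows (illustrative Python; the helper is hypothetical):

```python
def would_flap(avg_metric, current_count, new_count, scale_out_threshold):
    """Project the per-instance metric after a scale-in; the scale-in is
    skipped if the projection would immediately retrigger scale-out."""
    projected = avg_metric * current_count / new_count
    return projected >= scale_out_threshold

# CPU at 60% across 3 instances: 60 * 3 / 2 = 90 → would retrigger, skip.
print(would_flap(60, 3, 2, scale_out_threshold=80))  # → True (skip scale-in)
# CPU at 50%: 50 * 3 / 2 = 75 → safe to scale in.
print(would_flap(50, 3, 2, scale_out_threshold=80))  # → False
```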
The query will return an array of large JSON objects, one for each VM in your subscription:
[
{
"availabilitySet": null,
"diagnosticsProfile": null,
"hardwareProfile": {
"vmSize": "Standard_B1s"
},
"id": "/subscriptions/9103844d-1370-4716-b02b-69ce936865c6/resourceGroups/VM/providers/Microsoft.Compute/virtualMachines/simple",
"identity": null,
"instanceView": null,
"licenseType": null,
"location": "eastus",
"name": "simple",
"networkProfile": {
"networkInterfaces": [{
"id": "/subscriptions/9103844d-1370-4716-b02b-69ce936865c6/resourceGroups/VM/providers/Microsoft.Network/networkInterfaces/simple159",
"primary": null,
"resourceGroup": "VM"
}]
},
"osProfile": {
"adminPassword": null,
"adminUsername": "simple",
"computerName": "simple",
"customData": null,
"linuxConfiguration": {
"disablePasswordAuthentication": false,
"ssh": null
},
"secrets": [],
"windowsConfiguration": null
},
"plan": null,
"provisioningState": "Creating",
"resourceGroup": "VM",
"resources": null,
"storageProfile": {
"dataDisks": [],
"imageReference": {
"id": null,
"offer": "UbuntuServer",
"publisher": "Canonical",
"sku": "17.10",
"version": "latest"
},
"osDisk": {
"caching": "ReadWrite",
"createOption": "FromImage",
"diskSizeGb": 30,
"encryptionSettings": null,
"image": null,
"managedDisk": {
"id": "/subscriptions/9103844d-1370-4716-b02b-69ce936865c6/resourceGroups/VM/providers/Microsoft.Compute/disks/simple_OsDisk_1_4da948f5ef1a4232ad2f632077326d0a",
"resourceGroup": "VM",
"storageAccountType": "Premium_LRS"
},
"name": "simple_OsDisk_1_4da948f5ef1a4232ad2f632077326d0a",
"osType": "Linux",
"vhd": null,
"writeAcceleratorEnabled": null
}
},
"tags": null,
"type": "Microsoft.Compute/virtualMachines",
"vmId": "6aed2e80-64b2-401b-a8a0-b82ac8a6ed5c",
"zones": null
},
{
...
}
]
Using the --query argument, we can project specific fields to make the JSON object more useful and easier to read. This is useful if you are deserializing the JSON object into a specific type in your code:
az vm list --query '[].{name:name, image:storageProfile.imageReference.offer}'
[
{
"image": "UbuntuServer",
"name": "linuxvm"
},
{
"image": "WindowsServer",
"name": "winvm"
}
]
Using the [ ] operator, you can create queries that filter your result set by comparing the values of
various JSON properties:
az vm list --query "[?starts_with(storageProfile.imageReference.offer, 'WindowsServer')]"
You can even combine filtering and projection to create custom queries that only return the resources
you need and project only the fields that are useful to your application:
az vm list --query "[?starts_with(storageProfile.imageReference.offer, 'Ubuntu')].{name:name, id:vmId}"
[
{
"name": "linuxvm",
"id": "6aed2e80-64b2-401b-a8a0-b82ac8a6ed5c"
}
]
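The same filter-and-project idea can be reproduced in plain code over the JSON the CLI returns (illustrative Python sketch; the sample data is trimmed from the output above, and the second VM's id is a placeholder):

```python
import json

# Trimmed sample of `az vm list` output.
vms = json.loads("""[
  {"name": "linuxvm", "vmId": "6aed2e80-64b2-401b-a8a0-b82ac8a6ed5c",
   "storageProfile": {"imageReference": {"offer": "UbuntuServer"}}},
  {"name": "winvm", "vmId": "00000000-0000-0000-0000-000000000000",
   "storageProfile": {"imageReference": {"offer": "WindowsServer"}}}
]""")

# Filter: offer starts with 'Ubuntu'. Project: keep only name and id.
result = [
    {"name": vm["name"], "id": vm["vmId"]}
    for vm in vms
    if vm["storageProfile"]["imageReference"]["offer"].startswith("Ubuntu")
]
print(result)  # → [{'name': 'linuxvm', 'id': '6aed2e80-64b2-401b-a8a0-b82ac8a6ed5c'}]
```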
The authentication file, referenced here as azure.auth, contains the information necessary to access your subscription using a service principal. The authorization file will look similar to the format below:
{
"clientId": "b52dd125-9272-4b21-9862-0be667bdf6dc",
"clientSecret": "ebc6e170-72b2-4b6f-9de2-99410964d2d0",
"subscriptionId": "ffa52f27-be12-4cad-b1ea-c2c241b6cceb",
"tenantId": "72f988bf-86f1-41af-91ab-2d7cd011db47",
"activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
"resourceManagerEndpointUrl": "https://management.azure.com/",
"activeDirectoryGraphResourceId": "https://graph.windows.net/",
"sqlManagementEndpointUrl": "https://management.core.windows.net:8443/",
"galleryEndpointUrl": "https://gallery.azure.com/",
"managementEndpointUrl": "https://management.core.windows.net/"
}
If you do not already have a service principal, you can generate a service principal and this file using the
Azure CLI:
az ad sp create-for-rbac --sdk-auth > azure.auth
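Because the file is plain JSON, any JSON parser can read the fields a client needs (illustrative Python sketch using the placeholder values shown above):

```python
import json

# Illustrative: the fields a client typically reads from azure.auth.
auth = json.loads("""{
  "clientId": "b52dd125-9272-4b21-9862-0be667bdf6dc",
  "clientSecret": "ebc6e170-72b2-4b6f-9de2-99410964d2d0",
  "subscriptionId": "ffa52f27-be12-4cad-b1ea-c2c241b6cceb",
  "tenantId": "72f988bf-86f1-41af-91ab-2d7cd011db47",
  "resourceManagerEndpointUrl": "https://management.azure.com/"
}""")

print(auth["subscriptionId"])  # → ffa52f27-be12-4cad-b1ea-c2c241b6cceb
```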
The properties have both synchronous and asynchronous versions of methods to perform actions such as
Create, Delete, List, and Get. If we wanted to get a list of VMs asynchronously, we could use the
ListAsync method:
var vms = await azure.VirtualMachines.ListAsync();
foreach(var vm in vms)
{
Console.WriteLine(vm.Name);
}
You can also use a query mechanism integrated into your language, like Language-Integrated Query (LINQ) in C#, to filter your VM list to a specific subset of VMs that match your filter criteria:
var allvms = await azure.VirtualMachines.ListAsync();
// Filter with LINQ to a single VM, such as the "simple" VM shown earlier
var targetvm = allvms.FirstOrDefault(vm => vm.Name == "simple");
Console.WriteLine(targetvm?.Id);
The INetworkInterface interface has a property named PrimaryIPConfiguration that will get the
configuration of the primary IP address for the current network adapter:
INicIPConfiguration targetipconfig = targetnic.PrimaryIPConfiguration;
The INicIPConfiguration interface has a method named GetPublicIPAddress that will get the IP
address resource that is public and associated with the current specified configuration:
IPublicIPAddress targetipaddress = targetipconfig.GetPublicIPAddress();
Finally, the IPublicIPAddress interface has a property named IPAddress that contains the current IP
address as a string value:
Console.WriteLine($"IP Address:\t{targetipaddress.IPAddress}");
Your application can now use this specific IP address to communicate directly with the intended compute
instance.
1. The application invokes an operation on a hosted service. The request fails, and the service host
responds with HTTP response code 500 (internal server error).
2. The application waits for a short interval and tries again. The request still fails with HTTP response
code 500.
3. The application waits for a longer interval and tries again. The request succeeds with HTTP response
code 200 (OK).
The application should wrap all attempts to access a remote service in code that implements a retry policy matching one of the strategies listed above. Requests sent to different services can be subject to different policies. Some vendors provide libraries that implement retry policies, where the application can specify the maximum number of retries, the amount of time between retry attempts, and other parameters.
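A common strategy such libraries implement is exponential backoff, where the wait doubles after each failed attempt (illustrative Python sketch; the function and parameter names are hypothetical):

```python
def backoff_delays(max_retries: int, base_seconds: float = 1.0):
    """Delay before each retry doubles: 1s, 2s, 4s, ..."""
    return [base_seconds * (2 ** attempt) for attempt in range(max_retries)]

print(backoff_delays(4))  # → [1.0, 2.0, 4.0, 8.0]
```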
An application should log the details of faults and failing operations. This information is useful to operators. If a service is frequently unavailable or busy, it's often because the service has exhausted its resources. You can reduce the frequency of these faults by scaling out the service. For example, if a database service is continually overloaded, it might be beneficial to partition the database and spread the load across multiple servers.
public async Task OperationWithBasicRetryAsync()
{
    int currentRetry = 0;
    for (;;)
    {
        try
        {
            // Call the external service; exit the loop on success.
            await TransientOperationAsync();
            break;
        }
        catch (Exception ex)
        {
            Trace.TraceError("Operation Exception");
            currentRetry++;
            if (currentRetry > this.retryCount || !IsTransient(ex))
            {
                throw;
            }
        }
        await Task.Delay(delay);
    }
}
The statement that invokes this method is contained in a try/catch block wrapped in a for loop. The for loop exits if the call to the TransientOperationAsync method succeeds without throwing an exception. If the TransientOperationAsync method fails, the catch block examines the reason for the failure. If it's believed to be a transient error, the code waits for a short delay before retrying the operation.
The for loop also tracks the number of times that the operation has been attempted, and if the code fails three times, the exception is assumed to be more long lasting. If the exception isn't transient or it's long lasting, the catch handler will throw the exception. This exception exits the for loop and should be caught by the code that invokes the OperationWithBasicRetryAsync method.
Review questions
Module 2 review questions
Autoscale thresholds
When using autoscale, why is it important to ensure the maximum and minimum values are different and have an adequate margin between them? Look at the example below: why wouldn't it be recommended to use autoscale settings with the same or very similar threshold values for the out and in conditions?
●● Increase instances by 1 count when Thread Count >= 600
●● Decrease instances by 1 count when Thread Count <= 600
Transient errors
Why can transient errors be difficult to diagnose and fix?
Node.appendChild(b)});try{c.cookie=d.cookie}catch(a){}c.queue=[];for(var f=["Event","Exception","Metric","PageView","Trace","Dependency"];f.length;)b("track"+f.pop());if(b("setAuthenticatedUserContext"),b("clearAuthenticatedUserContext"),b("startTrackEvent"),b("stopTrackEvent"),b("startTrackPage"),b("stopTrackPage"),b("flush"),!a.disableExceptionTracking){f="onerror",b("_"+f);var g=e[f];e[f]=function(a,b,d,e,h){var i=g&&g(a,b,d,e,h);return!0!==i&&c["_"+f](a,b,d,e,h),i}}return c
}({
instrumentationKey:"<your instrumentation key>"
});
window.appInsights=appInsights,appInsights.queue&&0===appInsights.queue.length&&appInsights.trackPageView();
</script>
Insert the script just before the </head> tag of every page you want to track. If your website has a master page, you can put the script there. For example, in an ASP.NET MVC project, you'd put it in Views\Shared\_Layout.cshtml.
The script contains the instrumentation key that directs the data to your Application Insights resource.
Detailed configuration
There are several parameters you can set, though in most cases, you shouldn't need to. For example, you
can disable or limit the number of Ajax calls reported per page view (to reduce traffic). Or you can set
debug mode to have telemetry move rapidly through the pipeline without being batched.
To set these parameters, look for this line in the code snippet, and add more comma-separated items
after it:
})({
instrumentationKey: "..."
// Insert here
});
1 https://github.com/Microsoft/ApplicationInsights-JS/blob/master/API-reference.md#config
2 https://www.nuget.org/packages/Microsoft.ApplicationInsights.DependencyCollector
3 https://docs.microsoft.com/en-us/azure/azure-monitor/app/configuration-with-applicationinsights-config
You can get a full example of the config file by installing the latest version of the Microsoft.ApplicationInsights.WindowsServer4 package. Here is the minimal configuration for dependency collection that is equivalent to the code example.
<?xml version="1.0" encoding="utf-8"?>
<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
<InstrumentationKey>Your Key</InstrumentationKey>
<TelemetryInitializers>
<Add Type="Microsoft.ApplicationInsights.DependencyCollector.HttpDe-
pendenciesParsingTelemetryInitializer, Microsoft.AI.DependencyCollector"/>
</TelemetryInitializers>
<TelemetryModules>
<Add Type="Microsoft.ApplicationInsights.DependencyCollector.Dependen-
cyTrackingTelemetryModule, Microsoft.AI.DependencyCollector">
<ExcludeComponentCorrelationHttpHeadersOnDomains>
<Add>core.windows.net</Add>
<Add>core.chinacloudapi.cn</Add>
<Add>core.cloudapi.de</Add>
<Add>core.usgovcloudapi.net</Add>
<Add>localhost</Add>
<Add>127.0.0.1</Add>
</ExcludeComponentCorrelationHttpHeadersOnDomains>
<IncludeDiagnosticSourceActivities>
<Add>Microsoft.Azure.ServiceBus</Add>
<Add>Microsoft.Azure.EventHubs</Add>
</IncludeDiagnosticSourceActivities>
</Add>
</TelemetryModules>
<TelemetryChannel Type="Microsoft.ApplicationInsights.WindowsServer.
TelemetryChannel.ServerTelemetryChannel, Microsoft.AI.ServerTelemetryChan-
nel"/>
</ApplicationInsights>
4 https://www.nuget.org/packages/Microsoft.ApplicationInsights.WindowsServer
●● For a .NET Framework Windows app, you can also install and initialize the Performance Counter collector module.
Full example
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DependencyCollector;
using Microsoft.ApplicationInsights.Extensibility;
using System.Net.Http;
using System.Threading.Tasks;

namespace ConsoleApp
{
    class Program
    {
        static void Main(string[] args)
        {
            TelemetryConfiguration configuration = TelemetryConfiguration.Active;
            configuration.InstrumentationKey = "removed";
            configuration.TelemetryInitializers.Add(new OperationCorrelationTelemetryInitializer());
            configuration.TelemetryInitializers.Add(new HttpDependenciesParsingTelemetryInitializer());

            var telemetryClient = new TelemetryClient(configuration);
            using (var module = new DependencyTrackingTelemetryModule())
            {
                module.Initialize(configuration);

                // run app...
                telemetryClient.TrackTrace("Hello World!");
            }
        }
    }
}
Example code
public partial class Form1 : Form
{
    private TelemetryClient tc = new TelemetryClient();
    ...
    private void Form1_Load(object sender, EventArgs e)
    {
        // Alternative to setting ikey in config file:
        tc.InstrumentationKey = "key copied from portal";
        tc.Context.Device.OperatingSystem = Environment.OSVersion.ToString();
    }
}
What is a Component?
Components are independently deployable parts of your distributed/microservices application. Developers and operations teams have code-level visibility, or access to telemetry generated by, these application
components.
●● Components are different from "observed" external dependencies, such as SQL and Event Hubs, which your team/organization may not have access to (code or telemetry).
●● Components run on any number of server/role/container instances.
●● Components can be separate Application Insights instrumentation keys (even if the subscriptions are different) or different roles reporting to a single Application Insights instrumentation key. The preview map experience shows the components regardless of how they are set up.
One of the key objectives of this experience is the ability to visualize complex topologies with hundreds of components.
Click on any component to see related insights and go to the performance and failure triage experience
for that component.
Set cloud_RoleName
Application Map uses the cloud_RoleName property to identify the components on the map. The
Application Insights SDK automatically adds the cloud_RoleName property to the telemetry emitted by
components. For example, the SDK will add a web site name or service role name to the cloud_RoleName property. However, there are cases where you may want to override the default value. To override
cloud_RoleName and change what gets displayed on the Application Map:
.NET
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;
namespace CustomInitializer.Telemetry
{
public class MyTelemetryInitializer : ITelemetryInitializer
{
public void Initialize(ITelemetry telemetry)
{
if (string.IsNullOrEmpty(telemetry.Context.Cloud.RoleName))
{
//set custom role name here
telemetry.Context.Cloud.RoleName = "RoleName";
}
}
}
}
Node.js
var appInsights = require("applicationinsights");
appInsights.setup('INSTRUMENTATION_KEY').start();
appInsights.defaultClient.context.tags["ai.cloud.role"] = "your role name";
appInsights.defaultClient.context.tags["ai.cloud.roleInstance"] = "your role instance";
appInsights.defaultClient.addTelemetryProcessor(envelope => {
envelope.tags["ai.cloud.role"] = "your role name";
envelope.tags["ai.cloud.roleInstance"] = "your role instance"
});
Client/browser-side JavaScript
appInsights.queue.push(() => {
appInsights.context.addTelemetryInitializer((envelope) => {
envelope.tags["ai.cloud.role"] = "your role name";
envelope.tags["ai.cloud.roleInstance"] = "your role instance";
});
});
1. Type a name for the dashboard.
2. Have a look at the Tile Gallery for a variety of tiles that you can add to your dashboard. In addition to adding tiles from the gallery, you can pin charts and other views directly from Application Insights to the dashboard.
3. Locate the Markdown tile and drag it on to your dashboard. This tile allows you to add text formatted in markdown, which is ideal for adding descriptive text to your dashboard.
4. Add text to the tile's properties and resize it on the dashboard canvas.
5. Click Done customizing at the top of the screen to exit tile customization mode.
1. In the top right, a notification will appear indicating that your tile was pinned to your dashboard. Click Pinned to dashboard in the notification to return to your dashboard, or use the dashboard pane.
2. That tile is now added to your dashboard. Select Edit to change the positioning of the tile. Click and drag it into position and then click Done customizing. Your dashboard now has a tile with some useful information.
1. Select Pin to dashboard on the right. This adds the view to the last dashboard that you were working with.
2. In the top right, a notification will appear indicating that your tile was pinned to your dashboard. Click Pinned to dashboard in the notification to return to your dashboard, or use the dashboard blade.
3. That tile is now added to your dashboard.
1. Keep the Dashboard name the same and select the Subscription Name to share the dashboard. Click Publish. The dashboard is now available to other services and subscriptions. You can optionally define specific users who should have access to the dashboard.
2. Select your Application Insights resource in the home screen.
3. Click Analytics at the top of the screen to open the Analytics portal.
4. Type the following query, which returns the top 10 most requested pages and their request count:
requests
| summarize count() by name
| sort by count_ desc
| take 10
You can retrieve information from the activity logs through the portal, PowerShell, Azure CLI, Insights
REST API, or Insights .NET Library.
PowerShell
1. To retrieve log entries, run the Get-AzureRmLog command. You provide additional parameters to filter the list of entries. If you do not specify a start and end time, entries for the last hour are returned. For example, to retrieve the operations for a resource group during the past hour, run:
Get-AzureRmLog -ResourceGroup ExampleGroup
2. The following example shows how to use the activity log to research operations taken during a specified time. The start and end times are specified in a date format.
Get-AzureRmLog -ResourceGroup ExampleGroup -StartTime 2015-08-28T06:00 -EndTime 2015-09-10T06:00
3. Or, you can use date functions to specify the date range, such as the last 14 days.
Get-AzureRmLog -ResourceGroup ExampleGroup -StartTime (Get-Date).AddDays(-14)
4. Depending on the start time you specify, the previous commands can return a long list of operations for the resource group. You can filter the results by providing search criteria. For example, if you are trying to research how a web app was stopped, you could run the following command:
Get-AzureRmLog -ResourceGroup ExampleGroup -StartTime (Get-Date).AddDays(-14) | Where-Object OperationName -eq Microsoft.Web/sites/stop/action
5. For this example, the output shows that a stop action was performed by [email protected]:
Authorization :
Scope : /subscriptions/xxxxx/resourcegroups/ExampleGroup/providers/Microsoft.Web/sites/ExampleSite
Action : Microsoft.Web/sites/stop/action
Role : Subscription Admin
Condition :
Caller : [email protected]
CorrelationId : 84beae59-92aa-4662-a6fc-b6fecc0ff8da
EventSource : Administrative
EventTimestamp : 8/28/2015 4:08:18 PM
OperationName : Microsoft.Web/sites/stop/action
ResourceGroupName : ExampleGroup
ResourceId : /subscriptions/xxxxx/resourcegroups/ExampleGroup/providers/Microsoft.Web/sites/ExampleSite
Status : Succeeded
SubscriptionId : xxxxx
SubStatus : OK
6. You can look up the actions taken by a particular user, even for a resource group that no longer exists.
7. You can focus on one error by looking at the status message for that entry:
((Get-AzureRmLog -Status Failed -ResourceGroup ExampleGroup -DetailedOutput).Properties[1].Content["statusMessage"] | ConvertFrom-Json).error
8. This returns:
code message
---- -------
DnsRecordInUse DNS record dns.westus.cloudapp.azure.com is already used by
another public IP.
Azure CLI
To retrieve log entries, run the az monitor activity-log list command.
az monitor activity-log list --resource-group <group name>
REST API
The REST operations for working with the activity log are part of the Insights REST API. To retrieve activity
log events, see List the management events in a subscription5.
5 https://msdn.microsoft.com/library/azure/dn931934.aspx
If you have already configured Application Insights for your web app, open its Application Insights
resource in the Azure portal.
Or, if you want to see your reports in a new resource, go to the Azure portal, and create an Application
Insights resource.
●● URL: Can be any web page you want to test, but it must be visible from the public internet. The URL can include a query string. So, for example, you can exercise your database a little. If the URL resolves to a redirect, we follow it up to 10 redirects.
●● Parse dependent requests: If this option is checked, the test requests images, scripts, style files, and other files that are part of the web page under test. The recorded response time includes the time taken to get these files. The test fails if all these resources cannot be successfully downloaded within the timeout for the whole test. If the option is not checked, the test only requests the file at the URL you specified.
●● Enable retries: If this option is checked, when the test fails, it is retried after a short interval. A failure is reported only if three successive attempts fail. Subsequent tests are then performed at the usual test frequency. Retry is temporarily suspended until the next success. This rule is applied independently at each test location. We recommend this option. On average, about 80% of failures disappear on retry.
●● Test frequency: Sets how often the test is run from each test location. With a default frequency of five
minutes and five test locations, your site is tested on average every minute.
●● Test locations are the places from where our servers send web requests to your URL. Our minimum number of recommended test locations is five, to ensure that you can distinguish problems in your website from network issues. You can select up to 16 locations.
●● Note: We strongly recommend testing from multiple locations, with a minimum of five. This prevents false alarms that may result from transient issues with a specific location. In addition, we have found that the optimal configuration is for the number of test locations to equal the alert location threshold + 2. Enabling the "Parse dependent requests" option results in a stricter check; the test could fail for cases that may not be noticeable when manually browsing the site.
●● Success criteria:
●● Test timeout: Decrease this value to be alerted about slow responses. The test is counted as a failure
if the responses from your site have not been received within this period. If you selected Parse
dependent requests, then all the images, style files, scripts, and other dependent resources must have
been received within this period.
●● HTTP response: The returned status code that is counted as a success. 200 is the code that indicates
that a normal web page has been returned.
●● Content match: a string, like “Welcome!” We test that an exact case-sensitive match occurs in every
response. It must be a plain string, without wildcards. Don't forget that if your page content changes
you might have to update it.
●● Alert location threshold: We recommend a minimum of 3/5 locations. The optimal relationship
between alert location threshold and the number of test locations is alert location threshold =
number of test locations - 2, with a minimum of five test locations.
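The two sizing recommendations above can be captured in a small helper. This is an illustrative sketch; the function name is ours, not part of any Azure API:

```python
def recommended_alert_location_threshold(test_locations):
    """Guidance above: alert location threshold = test locations - 2,
    with a minimum of five test locations (so the threshold is at least 3)."""
    if test_locations < 5:
        raise ValueError("use at least five test locations")
    return test_locations - 2
```

With the minimum of five locations this yields the recommended threshold of 3; with the maximum of 16 locations it yields 14.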
Review questions
Module 3 review questions
Application Insights for web pages
What types of information are gathered if you add the SDK script to your app or web page?
Each data value is associated with a key which can be used to look up the value from the cache. Redis works
best with smaller values (100 KB or less), so consider chopping up bigger data into multiple keys. Storing
larger values is possible (up to 500 MB), but increases network latency and can cause caching and
out-of-memory issues if the cache isn't configured to expire old values.
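One way to follow that guidance is to split a large value across several keys and reassemble it on read. The sketch below illustrates the idea in plain Python, with a dictionary standing in for the cache; the function names and key scheme are ours:

```python
CHUNK_SIZE = 100 * 1024  # keep each cached value at roughly 100 KB or less

def split_into_chunks(key, value, chunk_size=CHUNK_SIZE):
    """Break a large value into multiple cache entries keyed key:0, key:1, ..."""
    chunks = {}
    for i in range(0, max(len(value), 1), chunk_size):
        chunks[f"{key}:{i // chunk_size}"] = value[i:i + chunk_size]
    return chunks

def reassemble(key, cache):
    """Read key:0, key:1, ... back out of the cache and join them."""
    parts = []
    index = 0
    while f"{key}:{index}" in cache:
        parts.append(cache[f"{key}:{index}"])
        index += 1
    return "".join(parts)
```

In a real application the chunk entries would be written with StringSet and read back with StringGet, but the key-splitting logic is the same.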
Summary
A database is great for storing large amounts of data, but there is an inherent latency when looking up
data. You send a query. The server interprets the query, looks up the data, and returns it. Servers also
have capacity limits for handling requests. If too many requests are made, data retrieval will likely slow
down. Caching will store frequently requested data in memory that can be returned faster than querying
a database, which should lower latency and increase performance. Azure Cache for Redis gives you access
to a secure, dedicated, and scalable Redis cache, hosted in Azure, and managed by Microsoft.
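The read path described in this summary, check the cache first and fall back to the database, is the classic cache-aside pattern. Here is a minimal illustrative sketch, with a plain Python dict standing in for the Redis cache and a function standing in for the database query:

```python
def get_with_cache(key, cache, query_database):
    """Cache-aside: return from cache when present; otherwise query the
    database and store the result for the next caller."""
    if key in cache:
        return cache[key]          # fast path: served from memory
    value = query_database(key)    # slow path: hit the database
    cache[key] = value
    return value
```

After the first lookup for a key, subsequent lookups never touch the database until the cached entry is evicted or expires.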
Name
The Redis cache will need a globally unique name. The name has to be unique within Azure because it is
used to generate a public-facing URL to connect and communicate with the service.
The name must be between 1 and 63 characters, composed of numbers, letters, and the '-' character. The
cache name can't start or end with the '-' character, and consecutive '-' characters aren't valid.
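The naming rules above translate directly into a validation check. An illustrative Python sketch (the function name is ours):

```python
import re

# One alphanumeric, then optional groups of (at most one '-') + alphanumeric:
# this forbids leading/trailing '-' and consecutive '-' characters.
NAME_PATTERN = re.compile(r"^[A-Za-z0-9](?:-?[A-Za-z0-9])*$")

def is_valid_cache_name(name):
    """Check the Azure Cache for Redis naming rules described above."""
    return 1 <= len(name) <= 63 and bool(NAME_PATTERN.match(name))
```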
Resource Group
The Azure Cache for Redis is a managed resource and needs a resource group owner. You can either
create a new resource group, or use an existing one in a subscription you are part of.
Location
You will need to decide where the Redis cache will be physically located by selecting an Azure region. You
should always place your cache instance and your application in the same region. Connecting to a cache
in a different region can significantly increase latency and reduce reliability. If you are connecting to the
cache outside of Azure, then select a location close to where the application consuming the data is
running.
Important: Put the Redis cache as close to the data consumer as you can.
Pricing tier
As mentioned in the last unit, there are three pricing tiers available for Azure Cache for Redis.
●● Basic: An ideal cache for development/testing. It is limited to a single server, 53 GB of memory, and 20,000 connections. There is no SLA for this service tier.
●● Standard: A production cache which supports replication and includes a 99.9% SLA. It supports two servers (master/slave), and has the same memory/connection limits as the Basic tier.
●● Premium: An enterprise tier which builds on the Standard tier and includes persistence, clustering, and scale-out cache support. This is the highest performing tier, with up to 530 GB of memory and 40,000 simultaneous connections.
You can control the amount of cache memory available in each tier by choosing a cache level: C0-C6 for Basic/Standard and P0-P4 for Premium. Check the pricing page1 for full details.
Tip: Microsoft recommends you always use Standard or Premium Tier for production systems. The Basic
Tier is a single node system with no data replication and no SLA. Also, use at least a C1 cache. C0 caches
are really meant for simple dev/test scenarios since they have a shared CPU core and very little memory.
The Premium tier allows you to persist data in two ways to provide disaster recovery:
1. RDB persistence takes a periodic snapshot and can rebuild the cache using the snapshot in case of failure.
2. AOF persistence saves every write operation to a log that is saved at least once per second. This creates bigger files than RDB but has less data loss.
There are several other settings which are only available to the Premium tier.
1 https://azure.microsoft.com/pricing/details/cache/
Clustering support
With a Premium tier Redis cache, you can implement clustering to automatically split your dataset among
multiple nodes. To implement clustering, you specify the number of shards, up to a maximum of 10. The cost
incurred is the cost of the original node multiplied by the number of shards.
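As a quick illustration of that billing rule (the function name is ours, not an Azure API):

```python
def clustered_cache_cost(node_cost_per_month, shard_count):
    """Premium-tier clustering: total cost = cost of one node x number of shards.
    Azure allows at most 10 shards per clustered cache."""
    if not 1 <= shard_count <= 10:
        raise ValueError("shard count must be between 1 and 10")
    return node_cost_per_month * shard_count
```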
Command                  Description
ping                     Ping the server. Returns "PONG".
set [key] [value]        Sets a key/value in the cache. Returns "OK" on success.
get [key]                Gets a value from the cache.
exists [key]             Returns '1' if the key exists in the cache, '0' if it doesn't.
type [key]               Returns the type associated to the value for the given key.
incr [key]               Increments the value associated with the key by '1'. The value must be an integer or double. Returns the new value.
incrby [key] [amount]    Increments the value associated with the key by the specified amount. The value must be an integer or double. Returns the new value.
del [key]                Deletes the value associated with the key.
flushdb                  Deletes all keys and values in the database.
Redis has a command-line tool (redis-cli) you can use to experiment directly with these commands. Here
are some examples.
> set somekey somevalue
OK
> get somekey
"somevalue"
> exists somekey
(integer) 1
> del somekey
(integer) 1
> exists somekey
(integer) 0
Here's an example of working with the INCR commands. These are convenient because they provide
atomic increments across multiple applications that are using the cache.
> set counter 100
OK
> incr counter
(integer) 101
> incrby counter 50
(integer) 151
> type counter
string
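The INCR family is useful precisely because the read-modify-write happens inside the server as one atomic step. The toy Python cache below illustrates the idea, using a lock to stand in for Redis executing each command atomically (class and method names are ours):

```python
import threading

class TinyCache:
    """Toy in-memory cache illustrating why INCR must be atomic."""
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def set(self, key, value):
        with self._lock:
            self._data[key] = value

    def incrby(self, key, amount=1):
        # The whole read-modify-write happens under one lock, mirroring how
        # Redis executes INCR/INCRBY as a single atomic command; two clients
        # can never both read 100 and both write 101.
        with self._lock:
            new_value = int(self._data.get(key, 0)) + amount
            self._data[key] = new_value
            return new_value
```

Without this atomicity, two applications doing GET then SET could race and lose an increment.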
tions using the original primary key. Microsoft recommends periodically regenerating the keys - much
like you would your personal passwords.
Warning: Your access keys should be considered confidential information; treat them like you would a
password. Anyone who has an access key can perform any operation on your cache!
You can pass this string to StackExchange.Redis to create a connection to the server.
Notice that there are two additional parameters at the end:
●● ssl - ensures that communication is encrypted.
●● abortConnection - allows a connection to be created even if the server is unavailable at that moment.
There are several other optional parameters2 you can append to the string to configure the client library.
Tip: The connection string should be protected in your application. If the application is hosted on Azure,
consider using an Azure Key Vault to store the value.
Creating a connection
The main connection object in StackExchange.Redis is the StackExchange.Redis.ConnectionMultiplexer class. This object abstracts the process of connecting to a Redis server (or group of servers). It's optimized to manage connections efficiently and intended to be kept around while you need access to the cache.
You create a ConnectionMultiplexer instance using the static ConnectionMultiplexer.Connect or ConnectionMultiplexer.ConnectAsync method, passing in either a connection string or a ConfigurationOptions object.
2 https://github.com/StackExchange/StackExchange.Redis/blob/master/docs/Configuration.md#configuration-options
Here's a simple example:
using StackExchange.Redis;
...
var connectionString = "[cache-name].redis.cache.windows.net:6380,password=[password-here],ssl=True,abortConnect=False";
var redisConnection = ConnectionMultiplexer.Connect(connectionString);
// ^^^ store and re-use this!!!
Once you have a ConnectionMultiplexer, there are 3 primary things you might want to do:
1. Access a Redis Database. This is what we will focus on here.
2. Make use of the publisher/subscriber features of Redis. This is outside the scope of this module.
3. Access an individual server for maintenance or monitoring purposes.
Tip: The object returned from GetDatabase is a lightweight object, and does not need to be stored.
Only the ConnectionMultiplexer needs to be kept alive.
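The guidance above, create the ConnectionMultiplexer once and reuse it, is commonly implemented with a lazy singleton (in C# you would typically wrap it in Lazy<ConnectionMultiplexer>). The Python sketch below illustrates the pattern; the class and names are ours:

```python
import threading

class LazyConnection:
    """Create the expensive connection object once and hand the same
    instance to every caller, mirroring the ConnectionMultiplexer guidance."""
    def __init__(self, factory):
        self._factory = factory      # e.g. lambda: ConnectionMultiplexer.Connect(...)
        self._instance = None
        self._lock = threading.Lock()

    def get(self):
        if self._instance is None:
            with self._lock:
                if self._instance is None:     # double-checked locking
                    self._instance = self._factory()
        return self._instance
```

Every caller shares one connection; the factory runs exactly once no matter how many times get() is called.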
Once you have an IDatabase object, you can execute methods to interact with the cache. All methods
have synchronous and asynchronous versions; the asynchronous versions return Task objects to make them compatible with
the async and await keywords.
Here is an example of storing a key/value in the cache:
bool wasSet = db.StringSet("favorite:flavor", "i-love-rocky-road");
The StringSet method returns a bool indicating whether the value was set (true) or not (false). We
can then retrieve the value with the StringGet method:
string value = db.StringGet("favorite:flavor");
Console.WriteLine(value); // displays: "i-love-rocky-road"
db.StringSet(key, value);
StackExchange.Redis represents keys using the RedisKey type. This class has implicit conversions to
and from both string and byte[], allowing both text and binary keys to be used without any complication. Values are represented by the RedisValue type. As with RedisKey, there are implicit conversions in place to allow you to pass string or byte[].
Method             Description
CreateBatch        Creates a group of operations that will be sent to the server as a single unit, but not necessarily processed as a unit.
CreateTransaction  Creates a group of operations that will be sent to the server as a single unit and processed on the server as a single unit.
KeyDelete          Deletes the key/value.
KeyExists          Returns whether the given key exists in the cache.
KeyExpire          Sets a time-to-live (TTL) expiration on a key.
KeyRename          Renames a key.
KeyTimeToLive      Returns the TTL for a key.
KeyType            Returns the string representation of the type of the value stored at the key. The types that can be returned are: string, list, set, zset, and hash.
3 https://github.com/StackExchange/StackExchange.Redis/blob/master/src/StackExchange.Redis/Interfaces/IDatabase.cs
The Execute and ExecuteAsync methods return a RedisResult object which is a data holder that
includes two properties:
●● Type which returns a string indicating the type of the result - “STRING”, "INTEGER", etc.
●● IsNull a true/false value to detect when the result is null.
You can then use ToString() on the RedisResult to get the actual return value.
You can use Execute to perform any supported commands - for example, we can get all the clients
connected to the cache (“CLIENT LIST”):
var result = await db.ExecuteAsync("client", "list");
Console.WriteLine($"Type = {result.Type}\r\nResult = {result}");
We could use the Newtonsoft.Json library to turn an instance of this object into a string and store it in the cache:
var stat = new GameStat("Soccer", new DateTime(1950, 7, 16), "FIFA World Cup",
    new[] { "Uruguay", "Brazil" },
    new[] { ("Uruguay", 2), ("Brazil", 1) });
string serializedValue = Newtonsoft.Json.JsonConvert.SerializeObject(stat);
bool added = db.StringSet("event:1950-world-cup", serializedValue);
We could retrieve it and turn it back into an object using the reverse process:
var result = db.StringGet("event:1950-world-cup");
var stat = Newtonsoft.Json.JsonConvert.DeserializeObject<GameStat>(result.ToString());
Console.WriteLine(stat.Sport); // displays "Soccer"
Azure CDN
In Azure, the Azure Content Delivery Network (Azure CDN) is a global CDN solution for delivering
high-bandwidth content that is hosted in Azure or in any other location. Using Azure CDN, you can cache
publicly available objects loaded from Azure Blob storage, a web application, a virtual machine, or any
publicly accessible web server. Azure CDN can also accelerate dynamic content, which cannot be cached,
by taking advantage of various network optimizations by using CDN POPs. An example is using route
optimization to bypass Border Gateway Protocol (BGP).
Here’s how Azure CDN works.
1. A user (Example User) requests a file (also called an asset) by using a URL with a special domain name,
such as endpoint_name.azureedge.net. This name can be an endpoint hostname or a custom
domain. The Domain Name System (DNS) routes the request to the best-performing POP location,
which is usually the POP that is geographically closest to the user.
2. If no Edge servers in the POP have the file in their cache, the POP requests the file from the origin
server. The origin server can be an Azure web app, Azure cloud service, Azure storage account, or any
publicly accessible web server.
3. The origin server returns the file to an Edge server in the POP.
4. An Edge server in the POP caches the file and returns the file to the original requestor (Example User).
The file remains cached on the Edge server in the POP until the Time to Live (TTL) specified by its
HTTP headers expires. If the origin server didn't specify a TTL, the default TTL is seven days.
5. Additional users can then request the same file by using the same URL that the original requestor
(Example User) used, which can also be directed to the same POP.
6. If the TTL for the file hasn't expired, the POP Edge server returns the file directly from the cache. This
process results in a faster, more responsive user experience.
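Steps 2 through 6 describe a cache-or-fetch decision driven by TTL. The sketch below models that logic in plain Python; it is a conceptual illustration only, not the actual CDN implementation, and the class and names are ours. The seven-day fallback comes from step 4:

```python
import time

DEFAULT_TTL_SECONDS = 7 * 24 * 60 * 60  # default TTL when the origin sets none

class EdgeCache:
    """Toy model of a POP edge server's cache-or-fetch-from-origin decision."""
    def __init__(self, fetch_from_origin):
        self._fetch = fetch_from_origin   # callable: url -> (body, ttl_seconds or None)
        self._cache = {}                  # url -> (body, expires_at)

    def get(self, url, now=None):
        now = time.time() if now is None else now
        entry = self._cache.get(url)
        if entry is not None and now < entry[1]:
            return entry[0]               # TTL not expired: serve from cache (step 6)
        body, ttl = self._fetch(url)      # miss or expired: go to the origin (step 2)
        expires_at = now + (ttl if ttl is not None else DEFAULT_TTL_SECONDS)
        self._cache[url] = (body, expires_at)   # cache it (step 4)
        return body
```

A second request for the same URL within the TTL never reaches the origin server, which is the faster, more responsive path described in step 6.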
To list every CDN profile associated with your subscription, run:
az cdn profile list
If you want to filter this list down to a specific resource group, you can use the --resource-group parameter:
az cdn profile list --resource-group ExampleGroup
To create a new profile, use the create verb of the az cdn profile command group:
az cdn profile create --name DemoProfile --resource-group ExampleGroup
By default, the CDN will be created by using the standard tier and the Akamai provider. You can customize this further by using the --sku parameter and one of the following options:
●● Custom_Verizon
●● Premium_Verizon
●● Standard_Akamai
●● Standard_ChinaCdn
●● Standard_Verizon
After you have created a new profile, you can use that profile to create an endpoint. Each endpoint
requires you to specify a profile, a resource group, and an origin URL:
az cdn endpoint create --name ContosoEndpoint --origin www.contoso.com --profile-name DemoProfile --resource-group ExampleGroup
You can customize the endpoint further by assigning a custom domain to the CDN endpoint. This helps
ensure that users see only the domains you choose instead of the Azure CDN domains:
az cdn custom-domain create --name FilesDomain --hostname files.contoso.com --endpoint-name ContosoEndpoint --profile-name DemoProfile --resource-group ExampleGroup
Caching rules
Azure CDN caching rules specify cache expiration behavior both globally and with custom conditions.
There are two types of caching rules:
●● Global caching rules. You can set one global caching rule for each endpoint in your profile that affects
all requests to the endpoint. The global caching rule overrides any HTTP cache-directive headers, if
set.
●● Custom caching rules. You can set one or more custom caching rules for each endpoint in your profile.
Custom caching rules match specific paths and file extensions; are processed in order; and override
the global caching rule, if set.
For global and custom caching rules, you can specify the cache expiration duration in days, hours,
minutes, and seconds.
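The precedence just described, custom rules processed in order and overriding the global rule, can be sketched as follows. This is illustrative only; it assumes later matching rules take precedence, and the function name and rule representation are ours:

```python
def effective_ttl(path, global_ttl, custom_rules):
    """custom_rules: ordered list of (path_prefix, ttl_seconds).
    Matching custom rules override the global rule; when several match,
    the one processed last wins."""
    ttl = global_ttl
    for prefix, rule_ttl in custom_rules:
        if path.startswith(prefix):
            ttl = rule_ttl      # processed in order: a later match overrides
    return ttl
```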
You can also preload assets into an endpoint. This is useful for scenarios where your application creates a
large number of assets, and you want to improve the user experience by prepopulating the cache before
any actual requests occur:
az cdn endpoint load --content-paths '/img/*' '/js/module.js' --name ContosoEndpoint --profile-name DemoProfile --resource-group ExampleGroup
Review Questions
Module 4 review questions
Azure Redis Cache
Can you describe what Azure Cache for Redis is and what its tiers of service are?