Cheat Sheet: Azure Developer Associate (AZ-204)
Quick Bytes for you before the exam!
The information provided in this cheat sheet is for educational purposes only; it was created in our effort to help aspirants
prepare for the Microsoft Azure AZ-204 certification exam. Though references have been taken from the Azure
documentation, it is not intended as a substitute for the official docs. The document can be reused, reproduced, and
printed in any form; ensure that appropriate sources are credited and the required permissions are obtained.
750+ Hands-on-Labs
Hands-on Labs - AWS, GCP, Azure (Whizlabs)
1
AZ-204 Whizcard Index (topic and page number)
Azure App Service & Web Apps
● Azure App Service and Web Apps (page 10)
● Diagnostics Logging and Autoscaling (page 13)
Azure Blob Storage
● Set and retrieve properties and metadata (page 77)
● Storage Policies, DLM, and Static Site Hosting (page 85)
Implement Caching for Solutions
● Azure Cache for Redis (page 126)
● Azure Content Delivery Network (CDN) (page 132)
Application Insights
● Instrument app service to use Application Insights (page 176)
● Monitor and Analyze metrics, logs, and traces (page 180)
● Implement Application Insights web tests & alerts (page 182)
2
Containerized Solutions
Azure Container Registry (ACR): A service that allows you to build, store, and manage
container images and artifacts in a private registry for all types of container deployments. It is a
managed registry service based on the open-source Docker Registry 2.0.
Azure Container Instances (ACI): Provides a fast and simple way to run a container on
Azure, without having to manage virtual machines or adopt a higher-level orchestration service. It is a
great solution for any scenario that can operate in discrete containers, including common
applications, task automation, and build jobs.
Azure Container Apps (ACA): A fully managed environment that lets you run
microservices and containerized applications on a serverless platform built on top of Azure
Kubernetes Service.
● Containers simplify delivery of distributed applications, and have become increasingly popular as
organizations shift to cloud-native development and hybrid multicloud environments.
● The Azure container registry is Microsoft’s own hosting platform for Docker images.
● Azure Container Registry is a private registry service for building, storing, and managing container
images and related artifacts. A typical workflow is to create a container registry instance in the
Azure portal, push a container image to it with Docker commands, and then pull and run the
image from the registry.
● Azure Container Registry is a multi-tenant service in which the data endpoint storage accounts are
managed by the registry service. Managed storage brings benefits such as load balancing,
splitting of contentious content, and multiple copies for higher concurrent content delivery.
3
What is Docker?
Docker is a platform designed to simplify the process of developing, deploying, and managing
applications by using containerization technology. Put simply, Docker allows you to package
an application and all its dependencies into a single, lightweight unit called a container. These
containers run consistently across environments, from a developer's laptop to a
production server, ensuring that an application behaves predictably and is easily scalable.
Key characteristics include: Portability, Layered Structure, Versioning, Reusability, and Security.
4
Whizlabs Hands-on-labs:
● Azure Container Registry (whizlabs.com)
● Build Docker images and learn about container registries (whizlabs.com)
References links:
● Create a service connection and build and publish Docker images to Azure Container
Registry - Azure Pipelines | Microsoft Learn
● Build and push Docker images to Azure Container Registry with Docker templates -
Azure Pipelines | Microsoft Learn
What is Containerization?
Containerization is a way to package and run software applications, just like bare-metal or
virtualized deployments. Containerization provides the following features:
● Isolation: Containerization keeps each application separate, as if it had its own box.
One app won't interfere with another application's environment, even though both
run on the same hardware.
● Consistency: Containers are like standardized boxes, ensuring that the application works
the same way on different computers. This makes it easier for developers to build and test
software because they know it will behave the same everywhere.
● Efficiency: Containers are lightweight and use resources efficiently. This makes it possible
to run many containers on one computer, saving time and money.
Use cases: You can pull images from Azure Container Registry to various deployment targets:
● Scalable orchestration systems
● Azure services
Azure Container Registry is available in multiple service tiers. These tiers offer predictable
pricing and several options to align with the capacity and usage patterns of your registry.
5
Source: Discover the Azure Container Registry - Training | Microsoft Learn
Whizlabs Hands-on-labs:
● Create a container registry by using a Bicep file (whizlabs.com)
● Create a geo-replicated container registry by using an ARM template (whizlabs.com)
● Build Docker images and learn about container registries (whizlabs.com)
● Artifact Registry vs Container Registry (whizlabs.com)
6
Azure Container Apps
Azure Container Apps is a serverless platform that allows you to manage less infrastructure
and save costs when running containerized applications. Instead of worrying about server
configuration, container orchestration, and deployment details, Container Apps provides all the
up-to-date server resources you need to keep your applications stable and secure.
Azure Container Apps lets you run microservices and containerized applications on a serverless
platform. With container apps, you enjoy the benefits of running containers while leaving
behind the worries of manually configuring cloud infrastructure and complex container
orchestrators.
7
(Source: Azure Container Apps | Microsoft Azure)
Reference Links:
Azure Container Apps | Microsoft Azure
Quickstart: Deploy your first container app using the Azure portal | Microsoft Learn
Quickstart: Deploy your first container app with containerapp up | Microsoft Learn
Comparing Container Apps with other Azure container options | Microsoft Learn
8
Azure Container Instances (ACI)
Containers are becoming the preferred way to package, deploy and manage cloud applications.
Azure Container Instances provides a fast and simple way to run a container on Azure, without
having to manage virtual machines or adopt a higher-level orchestration service.
Azure Container Instances is a great solution for any scenario that can operate in discrete
containers, including common applications, task automation, and build jobs. For scenarios
where you need full container orchestration, including service discovery, automatic scaling and
coordinated application upgrades across multiple containers, we recommend Azure Kubernetes
Service (AKS).
9
Use Azure Container Instances to run serverless Docker containers on Azure with simplicity and
speed. When you don't need a full container orchestration platform like Azure Kubernetes
Service, deploy the application to an on-demand container instance.
Whizlabs Hands-on-labs:
Deploy Azure Container Instances (whizlabs.com)
Create an Azure Container Instance with a public IP address using Terraform (whizlabs.com)
Deploying a container instance using Bicep (whizlabs.com)
Deploying a container instance using ARM template (whizlabs.com)
Reference Links:
Serverless containers in Azure - Azure Container Instances
Quickstart - Deploy Docker container to container instance - Portal
Run container images in Azure Container Instances - Training | Microsoft Learn
Run Docker containers with Azure Container Instances - Training | Microsoft Learn
Azure App Service and Web Apps
Azure App Service is an HTTP-based service for hosting web applications, REST APIs, and mobile
back ends. You can develop in your favorite programming language or framework, and applications
can be easily deployed and scaled in Windows and Linux-based environments.
App Service adds the power of Microsoft Azure to your application, such as security,
load balancing, autoscaling, and automated management. You can also take advantage of its
DevOps capabilities, such as continuous deployment from Azure DevOps, GitHub, Docker Hub, and
other sources, package management, staging environments, custom domains, and TLS/SSL certificates.
10
Why Azure App Service and Web Apps?
Azure App Service is a platform-as-a-service (PaaS) offering on Microsoft Azure that enables
developers to quickly build, deploy, and scale web, mobile, and API applications. Azure Web
Apps is a specific type of Azure App Service that focuses on hosting web applications.
Azure Web Apps provides a fully managed platform for building and hosting web applications
using popular programming languages such as .NET, Java, Node.js, Python, and PHP. It includes
features like automatic scaling, load balancing, traffic management, continuous deployment,
and monitoring. Azure App Service can host web apps natively on Linux for supported
application stacks.
11
Key features :
● Security, Compliance, Multiple languages and frameworks
● Managed production environment, Containerization and Docker
● DevOps optimization and Serverless code
● Global scale with high availability, Connections to SaaS platforms and on-premises data
● Authentication and authorization with Application templates
● Visual Studio, API and mobile features
● Azure App Service costs a bit more than Azure Web Apps, but given the additional
features and benefits you get with Azure App Service, we think the extra cost is justified.
12
● Azure Web Apps is a specific type of Azure App Service that focuses primarily on hosting
web apps, while Azure App Service is a broader category of PaaS offerings that includes
Azure Web Apps and other related services.
➔ Dedicated: The Basic, Standard, Premium, PremiumV2, and PremiumV3 tiers run apps on
dedicated Azure VMs. Only apps in the same App Service plan share the same compute
resources. The higher the tier, the more VM instances are available for scale-out.
➔ Isolated: The Isolated and IsolatedV2 tiers run dedicated Azure VMs in dedicated Azure
virtual networks. It provides network isolation on top of compute isolation for your apps. This
provides maximum scale-out capabilities.
References Links:
Implement Azure App Service web apps - Training | Microsoft Learn
1. Diagnostics Logging
When managing a web application, you should be ready for anything that can go wrong, from
500 errors to users telling you that your website is down. App Service Diagnostics is an
intelligent and interactive experience that helps you troubleshoot your app with no
configuration required. When you do run into issues with your app, App Service Diagnostics
points out what's wrong and guides you to the right resources for a faster and easier
troubleshooting experience.
App Service apps can also be debugged with the help of built-in diagnostics. You can add
instrumentation to your application, enable diagnostic logging, and retrieve the data that
Azure has logged.
13
The following list describes the types of logging available and what each records.
● Application logging: Records the messages that your application code produces. The messages are generated by the web framework of your choice, or by your application code itself using the standard logging mechanism of your language. Each message is assigned one of the following categories: Critical, Error, Warning, Info, Debug, or Trace.
● Web server logging: Raw HTTP request data in the W3C extended log file format. Each log message includes data such as the HTTP method, resource URI, client IP, client port, user agent, response code, and so on.
● Detailed error logging: Copies of the .html error pages that would have been sent to the client browser. App Service can save the error page each time an application error with HTTP code 400 or higher occurs; for security reasons, detailed error pages shouldn't be sent to clients in production.
● Failed request tracing: Detailed tracing information on failed requests, including a trace of the IIS components that processed the request and how long each component took. A folder containing the XML log file and the XSL stylesheet to view it is generated for each failed request.
● Deployment logging: Helps determine why a deployment failed. Deployment logging has no configurable settings; it happens automatically.
14
Below are the steps to enable application logging:
A. In the Azure portal, navigate to your app and select App Service logs.
B. To enable either Filesystem Application Logging or Blob Application Logging (or both),
select On. The Filesystem option is for short-term debugging and turns itself off after 12 hours.
For long-term logging, use the Blob option; writing logs to blob storage requires a blob
storage container.
15
C. As indicated in the accompanying table, you can also adjust the Level of detail recorded in the log.
Level - Categories
Disabled - None
Verbose - All categories: trace, debug, info, warning, error, and critical
D. Set the disk quota for the application logs in Quota (MB), and enter the number of days that
the logs should be kept under Retention Period (Days), as your application requires.
16
3. Enable Web server Logging
A. To store logs on blob storage for Web server logging, choose Storage; alternatively,
choose File System to store logs on the App Service file system.
B. Enter the number of days that the logs should be kept under Retention Period (Days).
C. After you're done, choose Save.
17
Log Detailed Errors
● To log the error page or failed request tracing for Windows apps, navigate to your app in
the Azure portal and select App Service logs.
● Under Detailed Error Logging or Failed Request Tracing, select On, and then select Save.
● By default, the Failed Request Tracing feature captures a log of requests that failed with
HTTP status codes between 400 and 600. To specify custom rules, you can override the
<traceFailedRequests> section in the web.config file.
● Python applications can send logs to the application diagnostics log by using the
OpenCensus library.
● The System.Diagnostics.Trace class allows ASP.NET applications to log information
to the application diagnostics log. For example:
System.Diagnostics.Trace.TraceError("If you're seeing this, something bad happened");
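As a small illustrative sketch (the class and message strings below are made up, not taken from this cheat sheet), an ASP.NET app can write application log messages at several levels with System.Diagnostics.Trace once application logging is enabled:

using System.Diagnostics;

public class OrderService
{
    public void ProcessOrder(string orderId)
    {
        // Each call writes to the App Service application diagnostics log at the matching level.
        Trace.TraceInformation($"Processing order {orderId}");
        Trace.TraceWarning($"Order {orderId} is taking longer than expected");
        Trace.TraceError($"Order {orderId} failed validation");
    }
}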
18
Stream Logs
Make sure the log type you want is enabled before you start streaming real-time logs. App
Service streams any information written to the console output, or to files with the extensions
.txt, .log, or .htm, that are stored in the D:\home\LogFiles directory.
Events in the stream may appear out of order because some types of logging buffer writes to
the log file. For example, an application log entry that occurs when a user visits a page may
appear in the stream after the corresponding HTTP log entry for the page request.
● Azure portal: Go to your app and choose "Log stream" to start streaming logs.
● Azure CLI - Use the following command to stream logs in real time in Cloud Shell:
az webapp log tail --name appname --resource-group myResourceGroup
● Local console: Install Azure CLI and log in to your account in order to stream logs in the local
console. Once logged in, adhere to the Azure CLI instructions.
For Linux/container apps, the ZIP file contains console output logs for both the Docker host and
the Docker container. For a scaled-out app, the ZIP file contains one set of logs for each
instance. In the App Service file system, these log files are the contents of the /home/LogFiles
directory.
Configuring settings for Transport Layer Security (TLS), APIs, and service connections is
crucial for ensuring secure and reliable communication between systems. Here's a
breakdown of key aspects to consider:
19
Transport Layer Security (TLS) Configuration:
TLS is the foundation of secure communication over networks. Proper configuration ensures
data confidentiality and integrity. Key aspects include:
● TLS Version Selection: Prioritize the latest stable TLS version (currently TLS 1.3). Disable
older, vulnerable versions like TLS 1.0 and 1.1. This mitigates risks associated with known
exploits.
● Cipher Suite Configuration: Cipher suites define the cryptographic algorithms used for
key exchange, encryption, and message authentication. Choose strong cipher suites that
use modern algorithms like AES-256-GCM or ChaCha20-Poly1305. Avoid weak or
outdated ciphers, such as those using RC4 or MD5. Order cipher suites based on
preference, placing the strongest ones first.
● Certificate Management: Digital certificates are essential for identity verification and
establishing trust. Use certificates issued by trusted Certificate Authorities (CAs).
Implement a robust certificate lifecycle management process, including timely renewal
and revocation of certificates. This prevents disruptions and security breaches due to
expired or compromised certificates.
● Perfect Forward Secrecy (PFS): Enable PFS, which ensures that even if a server's private
key is compromised, past communication remains secure. This is achieved through
ephemeral key exchange algorithms like Diffie-Hellman Ephemeral (DHE) or
Elliptic-Curve Diffie-Hellman Ephemeral (ECDHE).
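For App Service itself, the minimum TLS version is a platform setting configured in the portal or CLI rather than in application code. For code you host yourself, the following is a minimal ASP.NET Core sketch (the endpoint and response text are illustrative) that restricts Kestrel to TLS 1.2 and TLS 1.3:

using System.Security.Authentication;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;

var builder = WebApplication.CreateBuilder(args);

// Reject protocol versions older than TLS 1.2.
builder.WebHost.ConfigureKestrel(kestrel =>
{
    kestrel.ConfigureHttpsDefaults(https =>
    {
        https.SslProtocols = SslProtocols.Tls12 | SslProtocols.Tls13;
    });
});

var app = builder.Build();
app.MapGet("/", () => "Secure endpoint");
app.Run();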
20
● Error Handling: Design informative and secure error responses. Avoid revealing sensitive
information in error messages. Use appropriate HTTP status codes to indicate the type of
error encountered.
● API Versioning: Use API versioning to manage changes to the API over time. This allows
clients to continue using older versions while new versions are deployed, ensuring
backward compatibility.
Service connections establish communication between applications and external services such as
databases, message queues, and other APIs. Secure and reliable service connections are vital for
application functionality; the key considerations are summarized in the table below.
By carefully configuring these settings, organizations can establish secure, reliable, and efficient
communication between their systems and protect sensitive data. Remember to consult the
official documentation for the specific technologies and services you are using for detailed
configuration instructions.
21
Category: Transport Layer Security (TLS)
● TLS Version: Specifies the version of the TLS protocol used for secure communication. Use the latest stable version (TLS 1.3 is recommended); disable older, insecure versions like TLS 1.0 and 1.1.
● Cipher Suites: Defines the set of cryptographic algorithms used for key exchange, encryption, and message authentication. Prioritize strong cipher suites that use modern algorithms (e.g., AES-256-GCM, ChaCha20-Poly1305); avoid weak or outdated ciphers.
● Certificate Management: Involves obtaining, installing, and managing the digital certificates used for authentication and encryption. Use certificates from trusted Certificate Authorities (CAs) and implement proper certificate lifecycle management (renewal, revocation).
Category: API Settings
● Authentication/Authorization: Specifies how clients are authenticated and authorized to access the API. Use robust authentication mechanisms like OAuth 2.0, API keys (with proper rotation), or mutual TLS, and implement granular authorization controls to restrict access to specific resources.
● Rate Limiting: Limits the number of requests a client can make to the API within a given time frame. Helps prevent abuse and overload; define appropriate rate limits based on expected usage patterns.
● Input Validation: Validates the data received by the API to prevent injection attacks and other security vulnerabilities. Implement thorough input validation on all API endpoints; sanitize and escape user-provided data.
● Error Handling: Defines how the API responds to errors. Provide informative error messages without revealing sensitive information, and use appropriate HTTP status codes to indicate the type of error.
Category: Service Connections
● Connection String/URL: Specifies the address and credentials required to connect to a service (e.g., database, message queue). Store connection strings securely (e.g., using environment variables or a secrets management service); avoid hardcoding credentials in code.
● Authentication Method: Defines how the application authenticates with the service. Use strong authentication methods like OAuth 2.0, service accounts, or managed identities.
● Network Configuration: Involves configuring network settings such as firewalls, virtual networks, and private endpoints to secure the connection between the application and the service. Use network segmentation to isolate services, implement firewall rules to restrict access, and use private endpoints to connect to services over a private network.
● Retry/Timeout Policies: Defines how the application handles transient errors and timeouts when connecting to a service. Implement retry mechanisms with exponential backoff and set appropriate timeouts to prevent indefinite blocking.
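The retry/timeout guidance above can be sketched as follows. This is a hand-rolled example (the attempt count and delays are arbitrary) rather than an official Azure SDK policy; in practice, libraries such as Polly or the built-in retry options of the Azure SDKs are usually preferable.

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class TransientHttp
{
    // An explicit timeout prevents requests from blocking indefinitely.
    private static readonly HttpClient Client = new HttpClient { Timeout = TimeSpan.FromSeconds(10) };

    public static async Task<HttpResponseMessage> GetWithRetryAsync(string url, int maxAttempts = 4)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                HttpResponseMessage response = await Client.GetAsync(url);
                bool transient = (int)response.StatusCode >= 500 || (int)response.StatusCode == 429;
                if (!transient || attempt == maxAttempts)
                {
                    return response; // caller inspects the final status code
                }
            }
            catch (HttpRequestException) when (attempt < maxAttempts)
            {
                // Network-level failure: treat as transient and retry.
            }

            // Exponential backoff: wait 1s, 2s, 4s, ... before the next attempt.
            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt - 1)));
        }
    }
}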
22
2. Autoscaling
Definition - Autoscaling is a cloud technique or process that adjusts the available resources to
match the current demand. Autoscaling scales in and out, as opposed to scaling up and down.
When the environment changes, autoscaling adjusts by adding or removing web servers and
distributing the load among them. Autoscaling merely modifies the number of web servers; it
has no influence on the CPU, memory, or storage capacity of the web servers that run the
application.
Define your autoscaling rules carefully. For example, a Denial-of-Service (DoS) attack is likely to cause
a significant spike in incoming traffic. Trying to handle a surge in requests caused by a DoS
attack would be costly and ineffective. These aren't legitimate requests, and handling them would be
a mistake. A better approach is to detect and filter requests made during such an attack before
they reach your service.
23
A request to a web application that requires extensive processing over a sizable dataset may
exhaust the processing and memory resources of an instance.
Autoscaling is also not the ideal strategy for managing long-term growth. Your web
application may start with a small user base and gain popularity over time. Autoscaling carries
overhead related to monitoring resources and deciding when to trigger a scaling event. If you can
predict the rate of growth, manually scaling the system over time may be a more cost-effective approach.
Another factor is the number of instances of a service. You might expect to run only a few
instances of a service most of the time. However, in this scenario, your service remains
vulnerable to outages or unavailability whether autoscaling is enabled or not. The fewer
instances you start with, the less capacity you have to handle an increasing workload while
autoscaling spins up additional instances.
Autoscaling can be set up to determine when to scale in and out based on resource utilization
and a variety of other criteria. Additionally, autoscaling can be set up to happen on a schedule.
You will learn how to define the parameters that can be utilized to auto scale a service in this
unit.
Autoscaling is a feature of the App Service plan used by the web application. When the
workload on the web application grows, Azure starts additional instances of the hardware
defined in the App Service plan.
To prevent runaway autoscaling, an App Service plan has an instance limit. Plans in higher
pricing tiers have a higher limit. This limit is the maximum number of instances that
autoscaling can create.
● Autoscale conditions
You indicate how to autoscale by defining autoscale conditions. Azure provides two
autoscaling options:
24
➔ Adjust the scale according to a measure, like the number of HTTP requests waiting to
be processed or the length of the disk queue.
➔ Scale in accordance with a timetable to a given instance count. You can plan to scale
out, for instance, on a specified day of the week or at a specific time of day.
Additionally, you can choose an end date, after which the system scales back.
You can only scale out to a certain number of instances when you scale to a particular
instance count. Metric and schedule-based autoscaling can be combined in the same
autoscale condition if you need to scale out gradually. Therefore, you may set up the
system to scale out only during specific hours of the day in the event that the volume of
HTTP requests exceeds a specified threshold.
➔ CPU Percentage. This metric indicates the CPU utilization across all instances. A high
value shows that instances are approaching their CPU limits, which may result in delays
when handling client requests.
➔ Memory Percentage. This metric captures the memory occupancy of the application
across all instances. A high value suggests that free memory could be running low,
which could cause one or more instances to fail.
➔ Disk Queue Length. This metric represents the number of outstanding I/O requests
across all instances. A high value may indicate disk contention.
➔ Http Queue Length. This metric shows how many client requests are waiting for
processing by the web application. If this number is large, client requests may fail with
HTTP 408 (Timeout) errors.
➔ Data In. This metric is the number of bytes received across all instances.
➔ Data Out. This metric is the number of bytes sent by all instances.
You can also scale based on metrics for other Azure services. For example, if the web
application processes requests received from a Service Bus queue, you might need to spin up
additional instances of the web application when the number of items held in the queue
exceeds a given threshold for a period of time.
25
● Autoscale Actions
An autoscale rule specifies an autoscale action to perform when a metric crosses a threshold.
Autoscale actions can be scale-out or scale-in. A scale-out action increases the number of
instances, whereas a scale-in action reduces the instance count. An autoscale action uses an
operator (such as less than, greater than, equal to, and so on) to determine how to react to the
threshold.
Scale-out actions typically use the greater than operator to compare the metric value to the
threshold. Scale-in actions typically use the less than operator to compare the metric value to
the threshold. Alternatively, an autoscale action can set the instance count to a specific level,
rather than increasing or decreasing the number of instances.
An autoscale action has a cool-down period, specified in minutes. During this interval, the scale
rule won't be triggered again. This allows the system to stabilize between autoscale events.
Keep in mind that it takes time to start or shut down instances, so any metrics gathered might
not show significant changes for several minutes. The minimum cool-down period is five minutes.
A. Scale out by 1 if the length of the HTTP queue is more than 10.
B. Scale out by one if the CPU use is more than 70%.
C. Scale in by one if the HTTP queue length is zero.
D. Scale in by one if the CPU use falls below 50%.
When deciding whether to scale out, the autoscale action is performed if any of the scale-out
rules is met (that is, when CPU utilization is above 70% or the HTTP queue length exceeds 10).
When scaling in, the autoscale action runs only when all of the scale-in rules are met (that is,
when CPU utilization falls below 50% and the HTTP queue length drops to zero). If you need to
scale in when only one of the scale-in rules is met, you must define the rules in separate
autoscale conditions.
26
● An App Service Plan only performs manual scaling by default. You can adjust your scale
settings by using the condition groups that appear when you select Custom autoscale.
● The maximum burst is the highest number of instances that your App Service plan can
scale out to in response to incoming HTTP requests. For Premium v2 and v3 plans, you can
set a maximum burst of up to 30 instances. The maximum burst must be equal to or
greater than the number of workers selected for the App Service plan.
● Go to the web app's left menu and choose Scale out (App Service Plan) to enable
automatic scaling. After changing the Maximum burst value and choosing Automatic
(preview), click the Save button.
27
Limit the quantity of web application instances.
An app-level setting to set the minimum number of instances is called Always Ready Instances.
Up to the App Service Plan's chosen maximum burst, more instances are added if the load is
greater than what the always-ready instances can manage.
Go to the web app's left menu and choose Scale out (App Service Plan) to define the minimum
number of instances. After making changes to the Always ready instances value, click Save.
● Go to the web app's left menu and choose Scale out (App Service Plan) to define the
maximum number of instances. After updating the Maximum scale limit and choosing
Enforce scale out restriction, click Save.
28
ii. Scale out with rules
An App Service Plan only performs manual scaling by default. You can adjust your scale
settings by using the condition groups that appear when you select Custom autoscale.
29
Include scale parameters
You can add and modify your own custom scale criteria to the automatically established default
scale condition after you enable autoscaling. Keep in mind that every scale condition has two
options: it can scale to a specified instance count or it can scale depending on a metric.
When none of the other scale conditions are in effect, the default scale condition is carried out.
It is also possible to set the minimum and maximum number of instances to be created using a
metric-based scaling condition. The maximum quantity cannot go over the restrictions set forth
in the pricing tier. A schedule specifying when the condition should be applied may also be
included for any scale condition that differs from the default.
30
Observe the autoscaling process
The Run history chart in the Azure portal allows you to monitor when autoscaling has taken
place. This graph illustrates the changes in the number of cases throughout time as well as the
autoscale conditions that led to each variation.
31
No scale action can occur if your setup has a minimum of two, a maximum of two, and two
instances currently running. Keep an adequate margin between the minimum and maximum
instance counts, which are inclusive. Autoscale always scales between these limits.
ii. For diagnostics metrics, you can select Average, Minimum, Maximum, or Total as the
statistic to scale by. The Average statistic is the most commonly used.
Based on real-world scenarios, we advise choosing distinct scale-out and scale-in thresholds
carefully. Autoscale settings like the following, which use the same or very similar threshold
values for the scale-out and scale-in conditions, are not recommended:
Let's examine an example of what can cause behavior that may seem unclear. Consider the
following sequence.
1. Assume there are two instances to begin with, and the average number of threads per
instance grows to 655.
2. Autoscale scales out, adding a third instance.
3. Next, assume the average thread count per instance falls to 525.
4. Before scaling in, autoscale tries to estimate what the final state would be. For example,
525 threads x 3 current instances = 1,575 threads; divided across 2 instances (the count after
scaling in), that is 787.5 threads per instance. This means that autoscale would have to scale
out again right away, even if the average thread count stayed the same or decreased slightly.
And if it scaled out again, the whole process would repeat, creating an endless cycle.
The goal of estimation during a scale-in is to prevent "flapping"—a condition in which actions
related to scale-in and scale-out are continuously performed back and forth. When selecting
the same scale-out and scale-in thresholds, bear this behavior in mind.
We advise selecting a sufficient buffer between the in and scale-out criteria. Take into
consideration the following improved rule combination as an example.
In this example: Let's say there are two instances to begin with. When the average CPU% across
all instances reaches 80, autoscale scales out and adds a third instance. Now assume that over
time the CPU% gradually drops to 60.
32
Autoscale's scale-in rule estimates the final state if it were to scale in. For example,
60 x 3 current instances = 180; divided across 2 instances (the count after scaling in), that is 90
per instance. Autoscale doesn't scale in, because it would have to scale out again immediately;
instead, it skips the scale-in.
The next time autoscale checks, the CPU has continued to fall, to 50. It estimates again:
50 x 3 instances = 150; 150 / 2 instances = 75. Because this is below the scale-out threshold of
80, autoscale scales in to 2 instances successfully.
iv.Scaling considerations when a profile has several rules configured
In certain situations, a profile may need to have more than one rule set.
When several rules are set, services employ the following set of autoscale rules.
On scale-out, autoscale runs if any rule is met. On scale-in, autoscale requires all rules to be met.
For example, assume you have the following four autoscale rules:
● Scale in by 1 if CPU is less than 30%.
● Scale in by 1 if Memory is less than 50%.
● Scale out by 1 if CPU is more than 75%.
● Scale out by 1 if Memory is more than 75%.
Next, the following takes place:
● We scale out if the CPU is 76% and the memory is 50%.
● We scale out if the CPU is 50% and the memory is 76%.
Conversely, autoscale does not scale in when the CPU is at 25% and the memory is at 51%,
because the memory scale-in rule isn't met. If the CPU is 29% and the memory is 49%, both
scale-in rules are satisfied and autoscale scales in automatically.
v. The default instance count is important because autoscale scales your service to that count
when metrics aren't available. Therefore, choose a default instance count that is safe for your
workloads.
vi. Autoscale posts an entry to the Activity Log if any of the following conditions occur:
❖ Autoscale issues a scale operation.
❖ The autoscale service successfully completes a scale action.
❖ The autoscale service fails to take a scale action.
❖ Metrics aren't available for the autoscale service to make a scale decision.
❖ Metrics are available again (recovered) and can be used to make a scale decision.
❖ You can also use an Activity Log alert to monitor the health of the autoscale engine. In
addition to Activity Log alerts, the Notifications tab on the autoscale setting lets you
configure email or webhook notifications so you're notified of successful scale actions.
33
Deploying Code and Containers to Azure App Service Web Apps
Azure App Service Web Apps enable running web applications without managing infrastructure.
Here's a simplified guide:
Deployment Overview
Supported Languages
Deployment Methods
1. CI/CD Pipelines: Automate deployments with Azure DevOps or GitHub Actions.
2. Git: Push code directly for quick updates.
3. ZIP/WAR: Upload packaged files via the Azure Portal or CLI.
4. FTP/S: Transfer files using legacy tools.
Steps to Deploy
1. Create Web App: Use Azure Portal/CLI; select runtime stack and OS.
2. Configure Deployment: Set up your preferred deployment method.
3. Deploy: Push code or upload packages; Azure auto-configures the environment
34
● Isolation: Packages dependencies and configurations together.
● Scalability: Simplifies scaling container instances.
Deployment Options
1. Single Container: Use images from Docker Hub or Azure Container Registry; specify in
App Service settings.
2. Multi-Container with Docker Compose: Define services in a docker-compose.yml file for
complex applications.
Deployment Steps
1. Build and Push Image:
○ Create a Dockerfile and build the image locally.
○ Push the image to a container registry.
2. Create App Service:
○ Choose 'Docker Container' as the publish option in the Azure Portal.
○ Select a single container or Docker Compose.
3. Configure Settings:
○ Add the image name, registry credentials, and environment variables.
4. Deploy and Monitor:
○ Start the app and verify functionality.
○ Use Azure monitoring tools for logs and performance insights.
Additional Features and Best Practices
● Scaling: Set automatic scaling based on demand or custom metrics.
● Deployment Slots:
○ Use staging slots for updates without downtime.
○ Configure slot-specific settings and swap seamlessly after testing.
● Security: Enable SSL/TLS, use Managed Identity, and integrate with Azure Active
Directory.
● Monitoring: Leverage Azure Monitor and Application Insights for real-time analytics and
alerts.
Deployment slots enhance reliability, reduce downtime, and streamline workflows by allowing
thorough testing before promoting changes to production.
35
● Realistic Testing: Test changes in a staging environment that mimics production.
● Easy Rollbacks: Quickly revert to a previous version if issues occur.
36
▪ It allows you to write less code and maintain less infrastructure, while saving on costs.
▪ Azure provides and maintains all the infrastructure in the background for an Azure function app.
▪ Azure Functions supports triggers and bindings.
▪ Triggers are ways to start execution of your code.
▪ Bindings are ways to simplify coding for input and output data.
▪ A function app provides an execution environment in Azure for your functions. It therefore
serves as the unit of deployment and management for your functions. A function app is made
up of one or more individual functions that are managed, deployed, and scaled together. All of
the functions in a function app share the same deployment method, pricing plan, and runtime
version. Think of a function app as a way to organize and collectively manage your functions.
Creating and configuring a Function APP
Creating and configuring a Function App
You can create and deploy a function app through several methods; the main ones are:
1. The Azure portal
2. Azure PowerShell or the Azure CLI
3. ARM templates
When you create a function app in the Azure portal, you must choose a hosting plan for your app.
★ Let's first understand the different hosting plans and the differences between them.
★ The following hosting plans are available for Azure Functions:
1. Consumption plan - This is the default hosting plan. You pay for compute resources only
while your functions are running, and it scales automatically. Instances of the Functions host
are dynamically added and removed based on the number of incoming events.
2. Premium plan - Automatically scales based on demand using pre-warmed workers, which
run applications with no delay after being idle. It runs on more powerful instances and can
connect to virtual networks.
3. Dedicated (App Service) plan - Run your functions within an App Service plan at regular App
Service plan rates. Best suited for long-running scenarios where Durable Functions can't be used.
37
Hosting plans and scaling
The following table compares the scaling behaviors of the various hosting plans. Unless
otherwise stated, maximum instances are stated on a per-function app (Consumption) or
per-plan (Premium/Dedicated) basis.
2. Click on it; the Function App window will open. In that window, click Create Function App.
The following window will open.
1. In this window, select your subscription and select an existing resource group or create a
new one. Give your function app a meaningful name.
2. The next option is whether to deploy code or a container image. Select the appropriate
option based on your requirement.
38
3. If you chose Code, the next option is to select your coding language (runtime stack) and its
runtime version. Select the appropriate options. Depending on your runtime stack choice, the
operating system option may be enabled or disabled; for some stack and version combinations
only one operating system is supported, and it is selected for you.
39
5. The next window is Storage. Select your storage account and click Next.
NOTE: On any plan, a function app requires a general-purpose Azure Storage account, which
supports Azure Blob, Queue, Files, and Table storage. This is because Functions relies on
Azure Storage for operations such as managing triggers and logging function executions, and
some storage account types don't support queues and tables. The same storage account used
by your function app can also be used by your triggers and bindings to store your application
data. However, use a separate storage account for storage-intensive operations.
40
6. Select the appropriate networking and monitoring options as per your requirement. If
monitoring is required, select Yes; otherwise select No, and go to the deployment section.
7. If your function needs continuous deployment, select that option; otherwise leave the
default settings as they are.
8. Under Tags, create tags if necessary and then click Review + create. The review window will
open. Cross-check your configuration once again in the review window and click Create.
9. Once the deployment is complete, click Go to resource and you will be navigated to your
function app's overview window.
41
Azure Function
A function runs as a result of triggers. It is necessary for a function to have exactly one trigger
since it determines how the function is called. Data connected with triggers is frequently
supplied as the function's payload.
An additional resource can be declaratively linked to a function by binding to it; bindings can be
connected as input bindings, output bindings, or both. The function receives parameters that
contain data from bindings.
You can combine various bindings to make it work for you. A function may have one or more
input and/or output bindings; bindings are optional.
You can avoid hardcoding access to other services by using triggers and bindings. Function
parameters are used to receive data (like the contents of a queue message) for your function.
You send data (for example, to create a queue message) by using the return value of the function.
In the function.json file, every trigger and binding has a direction property:
● For triggers, the direction is always in.
● Input and output bindings use in and out.
● Some bindings support a special direction, inout. If you use inout, only the Advanced editor is
available via the Integrate tab in the portal.
When you use attributes in a class library to configure triggers and bindings, the direction is
provided in an attribute constructor or inferred from the parameter type.
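As a hedged sketch of the attribute-based (in-process C#) model, the function below combines a queue trigger with a blob input binding; the queue name, container name, and connection setting are placeholders rather than values from this cheat sheet:

using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class QueueToBlobFunction
{
    // Trigger (direction in): a new queue message starts the function.
    // Input binding (direction in): the blob whose name matches the queue message text is supplied as a Stream.
    [FunctionName("QueueToBlob")]
    public static void Run(
        [QueueTrigger("orders", Connection = "AzureWebJobsStorage")] string queueMessage,
        [Blob("order-docs/{queueTrigger}", FileAccess.Read, Connection = "AzureWebJobsStorage")] Stream orderDoc,
        ILogger log)
    {
        log.LogInformation($"Queue message: {queueMessage}, blob size: {orderDoc?.Length ?? 0} bytes");
    }
}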
42
4. Accept the default settings to create a new Application Insights instance on the Monitoring
tab and a new storage account on the Storage tab. You can also use an existing storage
account or Application Insights instance.
5. To review the app configuration you selected, select Review + Create. To provision and
deploy the function app, select Create.
6. Select Go to resource to view your new function app. You've successfully created your
new function app.
7. Next, you create a function in the new function app. On the Functions tab, select Create in
Azure portal.
43
8.Choose the Azure Blob Storage trigger template.
9.Use the settings as specified in the image below .
44
11. Next, create the demo-workitems container. Find your function's storage account and
go to that storage account.
45
14. Now that you have a blob container, you can test the function by uploading a file to
the container. Back in the Azure portal, browse to your function, expand the Logs pane at the
bottom of the page, and make sure that log streaming isn't paused.
15. In a separate browser window, go to your resource group in the Azure portal, and
select the storage account.
16. Select Containers, and then select the demo-workitems container.
17. Select Upload, and then select the folder icon to choose a file to upload.
46
18. Browse to a file on your local computer, such as an image file, choose the file. Select
Open and then Upload.
19. Go back to your function logs and verify that the blob has been read.
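For reference, the template-generated blob trigger function used above looks roughly like the following in the in-process C# model (the exact template code may differ slightly):

using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class BlobTriggerDemo
{
    // Runs whenever a blob is created or updated in the demo-workitems container.
    [FunctionName("BlobTriggerDemo")]
    public static void Run(
        [BlobTrigger("demo-workitems/{name}", Connection = "AzureWebJobsStorage")] Stream myBlob,
        string name,
        ILogger log)
    {
        log.LogInformation($"Blob trigger processed blob. Name: {name}, Size: {myBlob.Length} bytes");
    }
}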
47
B. Example of triggers and bindings Timers.
1. We will use the same function app created in previous example to create a timer function.
2. In your function app, select Functions, and then create in azure portal.
3. Select the Timer trigger template.
4. Configure the new trigger with the settings as specified in the image, and then select Create.
5. In your function, select Code + Test and expand the Logs.
48
6. Verify execution by viewing the information written to the logs.
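For reference, a timer trigger function like the one created above looks roughly like this in the in-process C# model (the five-minute NCRONTAB schedule is just an example):

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class TimerTriggerDemo
{
    // NCRONTAB expression "0 */5 * * * *" runs the function every five minutes.
    [FunctionName("TimerTriggerDemo")]
    public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
    {
        log.LogInformation($"Timer trigger executed at: {DateTime.UtcNow:o}");
    }
}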
49
4. Under Template details, use Httpdemo for the New Function name, select Anonymous from the
Authorization level drop-down list, and then select Create. Azure creates the HTTP trigger
function. You can now run the new function by sending an HTTP request; a sketch of what such
a function looks like in code appears at the end of these steps.
5. In your new HTTP trigger function, select Code + Test from the left menu, and then select
Get function URL from the top menu.
6. In the Get function URL dialog, select default from the drop-down list, and then select the
Copy to clipboard icon.
7. Copy and paste the function URL into the address bar of your browser. To initiate the
request, append the query string ?name=<your_name> to the end of this URL and press
Enter. The browser should display a response message that echoes back your query string
value.
50
If the request URL includes an access key (?code=...), it means you selected Function
instead of Anonymous as the access level when defining the function. In that case, append
&name=<your_name> instead.
8. Trace information is written to the logs when your function executes. To view the trace
output, expand the Logs arrow at the bottom of the Code + Test page in the portal, and call
your function again.
9. Once you're done, clean up your resources by deleting the function app and functions you
created.
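The HTTP trigger function created in the steps above is roughly equivalent to the following in-process C# function (shown with the Anonymous authorization level; the exact template code may differ):

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class Httpdemo
{
    [FunctionName("Httpdemo")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("HTTP trigger function processed a request.");

        // Echo back the ?name=<your_name> query string value.
        string name = req.Query["name"];
        if (string.IsNullOrEmpty(name))
        {
            return new BadRequestObjectResult("Pass a name in the query string, for example ?name=Azure.");
        }
        return new OkObjectResult($"Hello, {name}");
    }
}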
51
Azure Cosmos DB
Perform operations on containers and items
A fully managed NoSQL database, Azure Cosmos DB is intended to offer low latency, elastic
throughput scaling, well-defined semantics for data consistency, and high availability.
Your databases can be set up to be globally dispersed and accessible across all Azure regions.
Locate the data close to your consumers' locations to reduce latency. The locations of your users
and the application's global reach will determine which regions are necessary.
You can add or remove regions linked to your account with Azure Cosmos DB at any moment. To
add or remove a region, your application doesn't need to be stopped or redeployed.
52
You manage the data in your Azure Cosmos DB account by creating databases, containers, and
items after you create an account.
1. Azure Cosmos DB databases - You can create one or more Azure Cosmos DB databases
under your account. A database is analogous to a namespace. A database is the unit of
management for a set of Azure Cosmos DB containers.
2. Azure Cosmos DB containers - An Azure Cosmos DB container is the unit of scalability for
both provisioned throughput and storage. A container is horizontally partitioned and then
replicated across multiple regions. The items that you add to the container are automatically
grouped into logical partitions based on the partition key, and the logical partitions are
distributed across physical partitions. The throughput on a container is evenly distributed
across its physical partitions.
One of the following modes is available for configuring throughput when creating a
container:
● Dedicated provisioned throughput mode: A container's allotted throughput is
guaranteed by the SLAs and is only used by that container.
● Shared provisioned throughput mode: With the exception of containers that are
configured with dedicated supplied throughput, these containers share the
provisioned throughput with the other containers in the same database. Stated
differently, each of the "shared throughput" containers shares the database's
provisioned throughput.
Implementation
1. First, we need to create an Azure Cosmos DB Account.
2. Log in to the Azure portal.
3. From the Azure portal navigation pane, select + Create a resource.
4. Search for Azure Cosmos DB, then select Create/Azure Cosmos DB to get started.
5. Which API best suits your workload? page, select Create in the Azure Cosmos DB for
NoSQL box.
6. In the Create Azure Cosmos DB Account - Azure Cosmos DB for NoSQL page, enter the
basic settings for the new Azure Cosmos DB account.
● Subscription: Select the subscription you want to use.
● Resource Group: Select Create new if you don't have one, or choose an existing
one, e.g., az204rg.
● Account Name: Enter a unique name to identify your Azure Cosmos account.
The name can only contain lowercase letters, numbers, and the hyphen (-)
character. It must be between 3-31 characters in length.
53
● Location: Use the location that is closest to your users to give them the fastest
access to the data.
● Capacity mode: Select Serverless.
7. Choose Review + Create.
8. After checking the account settings, choose Create. It takes a few minutes to create
the account; wait for the portal page to show that your deployment is complete.
9. Select Go to resource to go to the Azure Cosmos DB account page.
10. Now we will create a .NET application to interact with the Azure Cosmos DB account we
created. Create a new .NET console application by using the console template with the
dotnet new command.
11. Import the Microsoft.Azure.Cosmos NuGet package using the dotnet add package
command.
54
12.Build the project with the dotnet build command.
To connect to the Azure Cosmos DB for NoSQL API, create an instance of the CosmosClient class.
This class is the starting point for performing any operations against the database.
There are three main ways to connect to an API for NoSQL account using the CosmosClient class:
● Connect with an API for NoSQL endpoint and a read/write key.
● Connect with an API for NoSQL connection string.
● Connect with Microsoft Entra ID.
55
17.Replace any existing code with the using Microsoft.Azure.Cosmos statement in the
Program.cs file.
18.After the using statement, insert the following bit of code. The code snippet
incorporates some error checking along with constants and variables into the class.
Make sure you update the EndpointUri and PrimaryKey placeholder values in
accordance with the instructions provided in the code comments.
56
19. Below the Main method, create a new asynchronous task named CosmosAsync. It
instantiates our new CosmosClient and adds code to call the methods you add later to
create a database and a container.
22. After saving your work, run the dotnet build command in a Visual Studio Code
terminal to check for any issues. If the build succeeds, run the dotnet run command. The
expected messages are shown in the console.
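Because the code for steps 17-22 appears only as screenshots in the original, the following is a minimal sketch of what Program.cs might contain; the endpoint, key, and database/container names are placeholders:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public class Program
{
    // Replace with the URI and PRIMARY KEY from the Keys blade of your Azure Cosmos DB account.
    private static readonly string EndpointUri = "<your-account-endpoint>";
    private static readonly string PrimaryKey = "<your-account-key>";

    private static CosmosClient cosmosClient;

    public static async Task Main(string[] args)
    {
        try
        {
            Console.WriteLine("Beginning operations...");
            await CosmosAsync();
        }
        catch (CosmosException ce)
        {
            Console.WriteLine($"{ce.StatusCode} error occurred: {ce.Message}");
        }
    }

    private static async Task CosmosAsync()
    {
        // Instantiate the CosmosClient used for all subsequent operations.
        cosmosClient = new CosmosClient(EndpointUri, PrimaryKey);

        // Create the database and container if they don't already exist.
        Database database = await cosmosClient.CreateDatabaseIfNotExistsAsync("az204Database");
        Container container = await database.CreateContainerIfNotExistsAsync("az204Container", "/categoryId");

        Console.WriteLine($"Created database: {database.Id} and container: {container.Id}");
    }
}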
57
24. Now we will create an item in the created container. Using Visual Studio Code, create a
new file named Item.cs. Then, open the file in the editor.
Create a base record type named Item that carries the three properties you want to use
in all items for this container: id, categoryId, and type.
58
26. Create another new file named Category.cs. Open this file in the editor now.
27.Create a new type named Category that inherits from the Item type. Ensure the type
passes its values to the base implementation, and set the type variable to output the
name of the Category type.
59
29. Finally, create one last file named Product.cs. Open this file in the editor too.
30.Create a new type named Product that inherits from Item and adds a few new
properties: name, price, archived, and quantity.
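The record types described in steps 24-30 are shown only as screenshots in the original; they might look roughly like this (property and type names follow the steps above):

// Item.cs - base record with the properties shared by every item in the container.
public record Item(string id, string categoryId, string type);

// Category.cs - inherits from Item and sets the type discriminator to the type name.
public record Category(string id, string categoryId)
    : Item(id, categoryId, nameof(Category));

// Product.cs - inherits from Item and adds a few product-specific properties.
public record Product(string id, string categoryId, string name, double price, bool archived, int quantity)
    : Item(id, categoryId, nameof(Product));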
60
33. Open the Program.cs file and add the code shown below.
35.Create a new PartitionKey instance using the same value as the categoryId property for
the Category instance you created earlier.
36.Use the UpsertItemAsync method to create or replace the item passing in an object for
the item to create and a partition key value.
61
37.Print various properties of response to the console including: The unique identifier of
the underlying item, the type of the underlying item, and the request charge in RUs.
38. Save the code and run it from the console. A new item is created in the container.
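A minimal sketch of the upsert described in steps 35-38, assuming the container variable created earlier (the id and partition key values are illustrative):

// Create a Category item; id and categoryId values are placeholders.
Category surfboards = new(id: "category-surfboards", categoryId: "gear-surf-surfboards");

// The partition key must use the same value as the item's categoryId property.
PartitionKey partitionKey = new(surfboards.categoryId);

// Create the item, or replace it if an item with the same id already exists.
ItemResponse<Category> response = await container.UpsertItemAsync(surfboards, partitionKey);

Console.WriteLine($"Item id: {response.Resource.id}");
Console.WriteLine($"Item type: {response.Resource.type}");
Console.WriteLine($"Request charge: {response.RequestCharge:0.00} RUs");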
62
40.Use Container.ReadItemAsync to point read a specific item by using the id property and
the partition key value.
41.Get your serialized generic type using the Resource property of the ItemResponse class.
42.Output the unique identifier and request charge for the point read operation.
43. Run the code; you should see output similar to the following in the command prompt window.
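Similarly, a sketch of the point read from steps 40-43 (the id and partition key values are illustrative):

// Point read: fetch one item by its id and partition key value.
ItemResponse<Product> readResponse = await container.ReadItemAsync<Product>(
    id: "product-1",
    partitionKey: new PartitionKey("gear-surf-surfboards"));

Product product = readResponse.Resource;
Console.WriteLine($"Read item id: {product.id}");
Console.WriteLine($"Request charge: {readResponse.RequestCharge:0.00} RUs");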
63
Azure Cosmos DB
Consistency Level & Change Feed Notifications
Consistency levels
Instead of seeing data consistency as two extremes, Azure Cosmos DB views it as a spectrum of
options. There are several consistency options all over the spectrum, with eventual consistency
and strong consistency being at opposite ends. These options enable developers to make
fine-grained decisions and trade-offs regarding performance and high availability.
Azure Cosmos DB offers five well-defined levels. From strongest to weakest, the levels are:
● Strong
● Bounded staleness
● Session
● Consistent prefix
● Eventual
Each level provides availability and performance trade-offs. The following image shows the different
consistency levels as a spectrum.
Regardless of the region from which the reads and writes are delivered, the number of regions
linked to your Azure Cosmos DB account, or whether your account is set up with a single or
multiple write region, the consistency levels are region-agnostic and guaranteed for all
operations.
Read consistency applies to a single read operation scoped within a logical partition or a
partition-key range. The read operation can be issued by a remote client or a stored procedure.
64
65
(Image ref: https://learn.microsoft.com/en-us/azure/cosmos-db/consistency-levels)
3. Session consistency: Reads are guaranteed to respect the consistent-prefix, monotonic
reads, monotonic writes, read-your-writes, and write-follows-reads guarantees in session
consistency inside a single client session. This presupposes that there is only one "writer"
session or that numerous writers share the session token.
66
Assume that within transactions T1 and T2, two write operations are carried out on
documents Doc 1 and Doc 2. The user never sees "Doc 1 v1 and Doc 2 v2" or "Doc 1 v2 and
Doc 2 v1" for the same read or query action when the client does a read in any replica.
Instead, they see "Doc 1 v1 and Doc 2 v1" or "Doc 1 v2 and Doc 2 v2".
Eventual consistency is the weakest form of consistency because a client might read values that
are older than the ones it read previously. Eventual consistency is ideal when the application
doesn't require any ordering guarantees. Examples include counts of retweets, likes, and
non-threaded comments.
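The default consistency level is configured on the account (in the portal or CLI); from the .NET SDK, a client or an individual request can relax, but not strengthen, that level. A minimal sketch (the endpoint and key are placeholders):

using Microsoft.Azure.Cosmos;

// Client-level consistency: applies to all requests made through this client.
CosmosClient client = new CosmosClient(
    "<your-account-endpoint>",
    "<your-account-key>",
    new CosmosClientOptions { ConsistencyLevel = ConsistencyLevel.Session });

// Request-level consistency: relax a single read to eventual consistency.
ItemRequestOptions readOptions = new ItemRequestOptions
{
    ConsistencyLevel = ConsistencyLevel.Eventual
};
// Example usage:
// await container.ReadItemAsync<Product>("product-1", new PartitionKey("gear-surf-surfboards"), readOptions);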
67
Change feed notification
A persistent record of changes to a container in the order they happen is called a change feed in
Azure Cosmos DB. Azure Cosmos DB's change feed support operates by monitoring an Azure
Cosmos DB container for any modifications. The amended documents are then sorted and
output in the order that they were modified.
The output can be split among one or more consumers for processing in parallel, and the
persistent modifications can be handled progressively and asynchronously.
2. Change feed processor: The change feed processor is included in the Azure Cosmos DB
.NET V3 and Java V4 SDKs. It simplifies the process of reading the change feed and efficiently
distributes the event processing across multiple consumers.
68
The change feed processor implementation consists of four key parts:
● The monitored container: The data used to construct the change feed is stored in the
monitored container. The container's change feed reflects any inserts and updates made
to the monitored container.
● The lease container: The lease container acts as a state store and coordinates processing of
the change feed across multiple workers. The lease container can be stored in the same
account as the monitored container or in a separate account.
● The compute instance: A compute instance hosts the change feed processor to listen for
changes. Depending on the platform, it could be represented by a virtual machine (VM), a
Kubernetes pod, an Azure App Service instance, or an actual physical machine. It has a
unique identifier, referred to as the instance name throughout this text.
● The delegate: The code that specifies what you, the developer, intend to happen to each
batch of changes that the change feed processor reads is known as the delegate.
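As a rough illustration only, the following .NET SDK v3 sketch wires these four parts together; the cosmosClient variable, the container names, the ToDoItem type, and the processor name are assumptions:

using System;
using System.Collections.Generic;
using System.Threading;
using Microsoft.Azure.Cosmos;

// cosmosClient, the database, and both containers are assumed to already exist.
Container monitoredContainer = cosmosClient.GetContainer("TasksDb", "Items");
Container leaseContainer = cosmosClient.GetContainer("TasksDb", "leases");

ChangeFeedProcessor processor = monitoredContainer
    .GetChangeFeedProcessorBuilder<ToDoItem>(
        "demoProcessor",                                    // processor name
        async (IReadOnlyCollection<ToDoItem> changes, CancellationToken ct) =>
        {
            // The delegate: handle each batch of changes read from the change feed.
            foreach (ToDoItem item in changes)
            {
                Console.WriteLine($"Changed item: {item.id}");
            }
        })
    .WithInstanceName("consoleHost")                        // identifies this compute instance
    .WithLeaseContainer(leaseContainer)                     // the lease container
    .Build();

await processor.StartAsync();

// Assumed POCO matching documents in the monitored container.
public class ToDoItem { public string id { get; set; } }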
69
5. Set up these details under the Global Distribution tab. For this QuickStart, you can leave the
default values as is:
● Geo-Redundancy: Disable
● Multi-region Writes: Disable
● Availability Zones: Disable
70
12.Accept the default settings to create a new Application Insights instance on the Monitoring
tab and a new storage account on the Storage tab. You can also use an existing storage account
or Application Insights instance.
13.Select Review + create to review the app configuration you chose, and then select Create to
provision and deploy the function app.
14.Select Go to resource to view your new function app. Next, you create a function in the new
function app.
15.In your function app, select Functions on the Overview tab, and then select the Create in
Azure portal option.
16.On the New Function page, enter cosmos in the search field and then choose the Azure
Cosmos DB trigger template.
71
17.Configure the new trigger with the settings as specified below:
72
19.To display the template-based function code, select Code + Test.
This function template writes the number of documents and the first document ID to the logs.
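The in-portal template is a small C# script similar to the following sketch (the exact code the portal generates may differ by runtime version; MyDocument is an illustrative type):

using System.Collections.Generic;
using Microsoft.Extensions.Logging;

public class MyDocument
{
    public string Id { get; set; }
}

public static void Run(IReadOnlyList<MyDocument> input, ILogger log)
{
    // Log how many documents changed and the ID of the first one.
    if (input != null && input.Count > 0)
    {
        log.LogInformation("Documents modified: " + input.Count);
        log.LogInformation("First document Id: " + input[0].Id);
    }
}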
20.Now we will create a container. Open a second instance of the Azure portal in a new tab in
the browser.
21.Search for Azure Cosmos DB, go to your Azure Cosmos DB account, and then click Data
Explorer.
22.Under the NoSQL API, choose the Tasks database and select New Container.
23.In Add Container, use the settings shown in the table below the image.
73
24.Click on Ok. After the container specified in the function binding exists, you can test the
function by adding items to this new container.
25.Now we can test the function. Expand the new Items container in Data Explorer, choose
Items, then select New Item.
26.Replace the contents of the new item with the following content, then choose Save.
{
"id": "task1",
"category": "general",
"description": "some task"
}
27.Switch to the first browser tab that contains your function in the portal. Expand the function
logs and verify that the new document has triggered the function. See that the task1
document ID value is written to the logs.
74
Azure Blob Storage
Set and retrieve properties and metadata
Blob storage is Microsoft's object storage solution for the cloud. Blob storage is optimized for
storing massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a
particular data model or definition, such as text or binary data.
Blob storage is intended for use in:
● delivering documents or images straight to a browser.
● putting files in storage for later access.
● audio and video streaming.
● logging data onto files.
● archiving, disaster recovery, and backup and restore data storage.
● storing data so that it can be analysed by an Azure or on-premises service.
75
Access Tiers for block blob
Depending on usage patterns, Azure Storage offers various solutions for accessing block blob
data. Azure Storage has access tiers that are each tailored to a specific data consumption
pattern. You can save your block blob data as cheaply as possible by choosing the appropriate
access tier.
Access tiers that are offered are:
● Hot tier: The Hot access tier is optimized for storage account objects that are accessed
frequently. The Hot tier has the lowest access costs but the highest storage costs. By default,
new storage accounts are created in the Hot tier.
● Cool tier: The Cool access tier is best suited for large volumes of data that are rarely
accessed and stored for at least 30 days. Compared to the Hot tier, the Cool tier has higher
access costs and lower storage costs.
● Cold tier: An online tier optimized for storing data that is rarely accessed or modified but
still requires fast retrieval. Data in the Cold tier should be stored for at least 90 days. The
Cold tier has lower storage costs and higher access costs compared to the Cool tier.
● Archive tier: Archive is available only for individual block blobs. The Archive tier is best
suited for data that can tolerate several hours of retrieval latency and that will remain in
the tier for at least 180 days. Although accessing data in the Archive tier is more expensive
than in the Hot or Cool tiers, it is the most economical option for storing data.
For instance, if your storage account is named demostorageaccount, the default endpoint for Blob
storage is: https://demostorageaccount.blob.core.windows.net
76
2. Containers
A container is similar to a file system directory and holds a collection of blobs. A storage
account can contain an unlimited number of containers, and a container can store an unlimited
number of blobs.
Because a container name is part of the unique URI (Uniform Resource Identifier) used to address
the container and its blobs, it must be a valid DNS name. When naming a container, follow these
guidelines:
● A container name must be between 3 and 63 characters long.
● Container names can contain only lowercase letters, digits, and the dash (-) character.
They must begin with a letter or a number.
● Container names cannot contain consecutive dash characters.
3. Blobs
● Binary and text data are stored in block blobs. Block blobs consist of separate data blocks
that are manageable. Block blobs are capable of holding up to 190.7 TiB.
● Similar to block blobs, append blobs are composed of blocks but are tailored for add
operations. Append blobs are perfect in situations like virtual machine data logging.
● Random access files up to 8 TB in size are stored in page blobs. Page blobs act as drives for
Azure virtual machines and store virtual hard drive (VHD) files.
Creating a container
1. Select the storage account in which you want to create a container. If you don't have an
existing storage account, create one first; a storage account is required to create a
container.
2. Go to Containers under Data storage and click + Container to create a new container.
77
3. Enter an appropriate name for the container and click Create.
4. Upload a text file with some content into the new container. (The same can be done from code, as sketched below.)
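The same steps can also be performed with the Azure.Storage.Blobs client library; a minimal sketch (connection string, container, and file names are placeholders):

using System.IO;
using System.Text;
using Azure.Storage.Blobs;

// Connection string from the storage account's Access keys blade (placeholder).
string connectionString = "<storage-account-connection-string>";
BlobServiceClient serviceClient = new BlobServiceClient(connectionString);

// Create the container if it doesn't already exist.
BlobContainerClient containerClient = serviceClient.GetBlobContainerClient("democontainer");
await containerClient.CreateIfNotExistsAsync();

// Upload a small text file into the container.
using var stream = new MemoryStream(Encoding.UTF8.GetBytes("some content"));
await containerClient.UploadBlobAsync("sample.txt", stream);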
Container Properties
System properties: System properties exist on every Blob storage resource. Some of them can be
read or set, while others are read-only. Under the covers, some system properties correspond to
certain standard HTTP headers. The Azure Storage client library for .NET maintains these
properties for you.
The code example below shows how to set properties; here we set the content type and content
language of a blob.
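For instance, a sketch along these lines (using Azure.Storage.Blobs; containerClient is assumed to be an existing BlobContainerClient and the blob name and values are illustrative) sets the content type and content language on a blob while preserving its other headers:

using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

BlobClient blobClient = containerClient.GetBlobClient("sample.txt");

// Read the current properties first so headers we don't set aren't cleared.
BlobProperties properties = await blobClient.GetPropertiesAsync();

BlobHttpHeaders headers = new BlobHttpHeaders
{
    ContentType = "text/plain",
    ContentLanguage = "en-us",

    // Preserve the existing values for the remaining headers.
    CacheControl = properties.CacheControl,
    ContentDisposition = properties.ContentDisposition,
    ContentEncoding = properties.ContentEncoding,
    ContentHash = properties.ContentHash
};

await blobClient.SetHttpHeadersAsync(headers);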
The code example that follows retrieves the system properties of a container and outputs the
values of those properties to a console window:
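A sketch of that retrieval with Azure.Storage.Blobs (containerClient is an assumed, existing BlobContainerClient):

using System;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

BlobContainerProperties properties = await containerClient.GetPropertiesAsync();

Console.WriteLine($"ETag: {properties.ETag}");
Console.WriteLine($"Last modified: {properties.LastModified}");
Console.WriteLine($"Lease status: {properties.LeaseStatus}");
Console.WriteLine($"Public access: {properties.PublicAccess}");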
Metadata names must follow the naming rules for C# identifiers. Metadata names preserve the case
in which they were created, but they are case-insensitive when set or read. If two or more
metadata headers with the same name are submitted for a resource, Blob storage concatenates the
two values and returns HTTP response code 200 (OK).
The GetProperties and GetPropertiesAsync methods retrieve metadata along with the system
properties shown earlier. The example below shows how to set and then read container metadata.
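A sketch of setting and then reading container metadata (keys and values are illustrative; containerClient is assumed to exist):

using System;
using System.Collections.Generic;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

// Set metadata on the container.
var metadata = new Dictionary<string, string>
{
    { "docType", "textDocuments" },
    { "category", "guidance" }
};
await containerClient.SetMetadataAsync(metadata);

// GetPropertiesAsync returns system properties and metadata together.
BlobContainerProperties properties = await containerClient.GetPropertiesAsync();
foreach (KeyValuePair<string, string> item in properties.Metadata)
{
    Console.WriteLine($"{item.Key}: {item.Value}");
}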
Azure Blob storage
Storage Policies, Data Lifecycle Management(DLM), and Static Site Hosting
A stored access policy can be used to modify a signature's start time, expiration time, or
permissions. Once a signature has been issued, it can also be revoked using a stored access policy.
A queue, share, table, or container can have up to five access policies specified at once. One
access policy is associated with each SignedIdentifier field, which has its own Id field. Setting
more than five access policies at once results in a 400 status code (Bad Request) being returned
by the service.
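A sketch of creating a stored access policy on a container with the .NET client library (the policy identifier, permissions, and times are examples; containerClient is assumed to exist):

using System;
using System.Collections.Generic;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

// One signed identifier represents one stored access policy (maximum of five per container).
var identifier = new BlobSignedIdentifier
{
    Id = "demo-read-policy",
    AccessPolicy = new BlobAccessPolicy
    {
        PolicyStartsOn = DateTimeOffset.UtcNow,
        PolicyExpiresOn = DateTimeOffset.UtcNow.AddDays(1),
        Permissions = "r"   // read-only
    }
};

await containerClient.SetAccessPolicyAsync(
    PublicAccessType.None,
    new List<BlobSignedIdentifier> { identifier });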
84
Azure Storage offers the following access tiers for blob data:
● Hot: optimized for frequent access to data.
● Cool: designed to hold data that is kept for at least 30 days and is not commonly
accessed.
● Cold tier: designed to hold data that is kept for at least ninety days and is not commonly
accessed. In comparison to the cool tier, the cold tier has greater access fees and lower
storage expenses.
● Archive: optimized for rarely accessed data that can tolerate flexible retrieval latency, on
the order of hours, and that is stored for at least 180 days.
The various access tiers are subject to the following considerations:
● Blobs can have their access tier changed either during or after upload.
● At the account level, you can only set the hot and cool access tiers. Only at the blob level
is it possible to establish the archive access tier.
● Although the availability of data in the Cool and Cold access tiers is slightly lower than in
the Hot tier, these tiers still offer high durability and comparable retrieval latency and
throughput.
● Data in the Archive tier is stored offline. The Archive tier offers the lowest storage costs,
but also the highest access costs and latency.
● The Hot and Cool tiers support all redundancy options. The Archive tier supports only LRS, GRS,
and RA-GRS.
● Data storage limits are set at the account level, not per access tier. You can use your entire
limit in one tier or spread it across all tiers.
85
As data ages and is accessed less frequently, it can be moved to cooler access tiers. Lifecycle
management policy rules can be used to transition aging data to cooler tiers and accomplish this
automatically.
Lifecycle Policies:
A lifecycle management policy is a JSON document that contains a set of rules. Every rule
definition in a policy includes a filter set and an action set. The filter set limits rule actions
to a certain set of objects within a container, or to objects with certain names. The action set
applies the tier or delete actions to the filtered set of objects:
JSON:
{
"rules": [
{
"name": "rule1",
"enabled": true,
"type": "Lifecycle",
"definition": {...}
},
{
"name": "rule2",
"type": "Lifecycle",
"definition": {...}
}
]
}
A policy is a collection of rules: at least one rule is required in a policy, and you can define
up to 100 rules in a policy.
86
Each rule within the policy has several parameters:
A filter set and an action set are included in every rule specification. The filter set restricts the
items in a container or their names to which rule actions can be applied. The filtered set of
items is subject to the tier or delete actions by the action set.
The following example rule filters the account to act on block blobs in democontainer whose names
begin with foo, and then:
● Tiers blobs to Cool storage 30 days after last modification
● Tiers blobs to Archive storage 90 days after last modification
● Deletes blobs 2,555 days (seven years) after last modification
● Deletes blob snapshots 90 days after snapshot creation
{
  "rules": [
    {
      "name": "ruleFoo",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "democontainer/foo" ]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 },
            "delete": { "daysAfterModificationGreaterThan": 2555 }
          },
          "snapshot": {
            "delete": { "daysAfterCreationGreaterThan": 90 }
          }
        }
      }
    }
  ]
}
Implementation
1. Use the Azure portal to enable last access time tracking by doing the following steps:
2. Go into the Azure portal and select your storage account.
3. Choose Lifecycle management from the Data management section.
4. Select the "Enable access tracking" checkbox.
88
5. Using the Azure portal, PowerShell, Azure CLI, or an Azure Resource Manager template,
you can add, modify, or remove a lifecycle management policy.
Using the Azure portal, there are two methods for adding a policy.
7. To see or modify lifecycle management policies, select Lifecycle Management under
Data management.
9. On the Details form, choose Add a rule and give the rule a name. You can also adjust the Rule
scope, Blob type, and Blob subtype settings. In the example that follows, the scope is set to
filter blobs, which adds the Filter set tab.
10.Select Base blobs to set the conditions for your rule. In the following example, blobs are
moved to cool storage if they haven't been modified for 30 days.
89
11.Click on Add. Your policy will get added.
12.Code View:
Go to your storage account in the Azure interface.
13.To see or modify lifecycle management policies, select Lifecycle Management under
Data management.
14.Choose the tab for Code View. You can specify a lifecycle management policy in JSON on
this tab.
The following example shows a JSON lifecycle policy that moves a block blob whose name begins with
log to the cool tier if the blob hasn't been modified for more than 15 days.
90
{
"rules": [
{
"enabled": true,
"name": "move-to-cool",
"type": "Lifecycle",
"definition": {
"actions": {
"baseBlob": {
"tierToCool": {
"daysAfterModificationGreaterThan": 15
}
}
},
"filters": {
"blobTypes": [
"blockBlob"
],
"prefixMatch": [
"sample-container/log"
]
}
}
}
]
}
91
User Authentication and Authorization
There are various ways in which we can implement authentication and authorization in Azure. Below
are some of the techniques:
The Microsoft Identity Platform is a cloud identity service that lets you build applications where
your users and customers can sign in using their Microsoft identities or social accounts.
The Microsoft identity platform for developers is a set of tools that includes an authentication
service, open-source libraries, and application management tools.
The Microsoft identity platform helps you build applications that your users and customers can
sign in to using their Microsoft identities or social accounts. It also provides authorized access
to your own APIs or to Microsoft APIs, such as Microsoft Graph.
92
● Open-source libraries: The Microsoft Authentication Libraries (MSAL) and support for other
standards-compliant libraries. The platform adheres to industry standards by implementing
human-readable scopes.
● Developer content:Technical documentation that includes code samples, API references,
tutorials, quickstarts, and how-to guides.
The Microsoft identity platform lets developers integrate modern identity and security
capabilities such as Conditional Access, step-up authentication, and passwordless authentication.
Applications integrated with the Microsoft identity platform automatically benefit from this
innovation, so you don't have to implement it yourself.
B.Microsoft Entra ID
Microsoft Entra ID is a cloud-based identity and access management service that enables your
employees to access external resources. Example resources include Microsoft 365, the Azure
portal, and thousands of other SaaS applications.
To delegate identity and access management functions to Microsoft Entra ID, an application must be
registered with a Microsoft Entra tenant. Registering your application with Microsoft Entra ID
creates an identity configuration that allows the application to integrate with Microsoft Entra
ID. When you register an application through the Azure portal, you choose whether it is a
single-tenant application (accessible only in your tenant) or a multitenant application
(accessible in other tenants).
Whenever you register an application in the portal, an application object (the globally unique
instance of the app) and a service principal object are automatically created in your home tenant.
Your app is also assigned a globally unique ID (the app or client ID). You can then add secrets,
certificates, and scopes in the portal to make your app work, customize the app's branding in the
sign-in dialog, and more.
Application Object
An application is defined by one and only one application object in Microsoft Entra. The
application object resides in the Microsoft Entra tenant where the application was registered
(sometimes referred to as the application's "home" tenant). The application object is used as a
template or blueprint to create one or more service principal objects; a service principal is
created in every tenant where the application is used. Similar to a class in object-oriented
programming, the application object has some static properties that are applied to all created
service principals (application instances).
93
Three parts of an application are described by the application object: resources that the
application may require access to, actions that the application may take, and how the service
can issue tokens to access the application.
● Managed identity: This kind of service principal is used to represent a managed identity.
Managed identities let applications connect to resources that support Microsoft Entra
authentication. When a managed identity is enabled, a service principal representing that managed
identity is created in your tenant. Service principals representing managed identities can be
granted access and permissions, but they cannot be updated or modified directly.
● Legacy: This kind of service principal represents a legacy app, which is an app created before
app registrations were introduced or an app created through legacy experiences. A legacy service
principal can have credentials, service principal names, reply URLs, and other properties that an
authorized user can edit, but it does not have an associated app registration. The service
principal can be used only in the tenant where it was created.
The service principal is the local representation of your application for use in a specific
tenant, whereas the application object is the global representation of your application for use
across all tenants. The application object serves as a template from which common and default
properties are derived to create corresponding service principal objects. An application object
has:
● a one-to-one relationship with the software application, and
● a one-to-many relationship with its corresponding service principal object(s).
A service principal must be created in each tenant where the application is used so that the
application can establish an identity for sign-in and/or access to resources secured by the
tenant. A single-tenant application has only one service principal (in its home tenant), created
and consented to during application registration. A multitenant application also has a service
principal created in each tenant where a user from that tenant has consented to its use.
C.Microsoft Graph
The entry point to data and intelligence in Microsoft 365 is Microsoft Graph. You can leverage
the uniform programmability paradigm it offers to access the massive amounts of data found in
Windows 10, Microsoft 365, and Enterprise Mobility + Security.
Ref - learn.microsoft.com
Three key elements of the Microsoft 365 platform make data access and flow easier:
● There is a single endpoint for the Microsoft Graph API: https://graph.microsoft.com. You can
use REST APIs or SDKs to access the endpoint. Microsoft Graph also includes a powerful set of
services that manage user and device identity, access, compliance, and security, and that help
protect organizations against data leakage or loss.
● In order to improve Microsoft 365 experiences like Microsoft Search, Microsoft Graph
connectors operate in the incoming direction, sending data from outside the Microsoft
cloud into Microsoft Graph services and applications. Many popular data sources, including
Box, Google Drive, Jira, and Salesforce, have connectors available.
95
● A collection of tools called Microsoft Graph Data Connect makes it easier to deliver
Microsoft Graph data to well-known Azure data stores in a safe and scalable manner. You
can utilize the Azure development tools to create intelligent applications by using the
cached data as data sources.
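As an illustration of the single endpoint, here is a minimal REST call to Microsoft Graph using HttpClient; the access token is assumed to have been acquired already (for example via MSAL) with the User.Read scope:

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

static async Task<string> GetMyProfileAsync(string accessToken)
{
    using var client = new HttpClient();
    client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", accessToken);

    // Single Microsoft Graph endpoint; /me returns the signed-in user's profile.
    HttpResponseMessage response =
        await client.GetAsync("https://graph.microsoft.com/v1.0/me");
    response.EnsureSuccessStatusCode();

    return await response.Content.ReadAsStringAsync();
}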
Implementation
Microsoft Entra ID needs to be informed about the application you develop in order for it to
communicate with the Microsoft identity platform. This guide explains how to register an
application on the Azure portal within a tenant.
2. If you have access to more than one tenant, select the tenant from the Directories +
Subscriptions menu using the Settings icon in the top menu before registering the
application.
6. Choose Accounts in this organizational directory only under Supported account types. To
learn more about the various account kinds, click the Help me pick menu item.
96
● The Redirect URI (optional) can be specified later.
8. When registration is complete, the application's Overview pane is displayed. Record the
Directory (tenant) ID and the Application (client) ID; they are used in your application
source code.
97
Create an application for authentication.
1) Open Visual Studio and create a new project.
2) Search for and choose the ASP.NET Core Web App template, and then select Next.
3) Enter a name for the project, such as DemoWebApp.
4) Choose a location for the project or accept the default option, and then select Next.
5) Accept the defaults for Framework, Authentication type, and Configure for HTTPS. Authentication
type can be set to None because this tutorial covers the sign-in process itself.
6) Select Create.
9) Starting from the Overview page of the app created earlier, under Manage, select Certificates
& secrets and select the Certificates (0) tab.
98
10) Click on the upload certificate.
11) Select the folder icon, then browse for and select the certificate that was created
previously. The certificate is available in the app folder that was created through Visual Studio.
99
12) Enter a description for the certificate and select Add.
13) Save the Thumbprint value, which will be used in the next step.
14) In your visual studio, open appsettings.json and replace the file contents with the following
snippet:
100
Code:
{
  "AzureAd": {
    "Instance": "https://login.microsoftonline.com/",
    "TenantId": "Enter the tenant ID obtained from the Azure portal",
    "ClientId": "Enter the client ID obtained from the Azure portal",
    "ClientCertificates": [
      {
        "SourceType": "StoreWithThumbprint",
        "CertificateStorePath": "CurrentUser/My",
        "CertificateThumbprint": "Enter the certificate thumbprint obtained from the Azure portal"
      }
    ],
    "CallbackPath": "/signin-oidc"
  },
  "DownstreamApi": {
    "BaseUrl": "https://graph.microsoft.com/v1.0/me",
    "Scopes": "user.read"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*"
}
101
17) In the Azure portal, under Manage, select App registrations, and then select the application
that was previously created.
18) In the left menu, under Manage, select Authentication.
19) In Platform configurations, select Add a platform, and then select Web.
102
In the top menu of Visual Studio, select Tools > NuGet Package Manager > Manage NuGet
Packages for Solution.
22) With the Browse tab selected, search for and select Microsoft.Identity.Web.UI. Select the
Project checkbox, and then select Install and once prompted click on I Agree.
23) Open Program.cs and replace the entire file contents with the following code:
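A minimal sketch of what such a Program.cs can look like when using Microsoft.Identity.Web (this is an approximation based on the settings above, not necessarily the exact template code):

using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.Identity.Web;
using Microsoft.Identity.Web.UI;

var builder = WebApplication.CreateBuilder(args);

// Sign in with the Microsoft identity platform using the "AzureAd" section of
// appsettings.json, and acquire tokens to call Microsoft Graph (user.read).
builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(builder.Configuration.GetSection("AzureAd"))
    .EnableTokenAcquisitionToCallDownstreamApi(new[] { "user.read" })
    .AddInMemoryTokenCaches();

// Razor Pages plus the built-in sign-in/sign-out UI from Microsoft.Identity.Web.UI.
builder.Services.AddRazorPages().AddMicrosoftIdentityUI();

var app = builder.Build();

if (!app.Environment.IsDevelopment())
{
    app.UseExceptionHandler("/Error");
    app.UseHsts();
}

app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseRouting();

app.UseAuthentication();
app.UseAuthorization();

app.MapRazorPages();
app.MapControllers();

app.Run();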
24) Expand Pages, right-click Shared, and then select Add > Razor page.
25) Select Razor Page - Empty, and then select Add.
26) Enter _LoginPartial.cshtml for the name, and then select Add.
27) Open _LoginPartial.cshtml and add the following code for adding the sign in and sign out
experience:
28) Open _Layout.cshtml and add a reference to _LoginPartial created in the previous step. This
single line should be placed between </ul> and </div>:
104
29) Under Pages, open the Index.cshtml.cs file and replace the entire contents of the file with the
following snippet. Check that the project namespace matches your project name.
30) Open Index.cshtml and add the following code to the bottom of the file. This will handle how
the information received from the API is displayed:
105
31) Run the application. When the sign-in window appears, select the account to sign in with.
Ensure the account matches the criteria of the app registration.
32) Upon selecting the account, a second window appears indicating that a code will be sent to
your email address. Select Send code, and check your email inbox.
33) Open the email from the sender Microsoft account team, and enter the 7-digit single-use code.
Once entered, select Sign in.
106
34) For Stay signed in, you can select either No or Yes.
35) The app will ask for permission to sign in and access data. Select Accept to continue.
107
36) The web app now displays profile data acquired from the Microsoft Graph API.
108
Shared Access Signatures (SAS)
A Shared Access Signature (SAS) is a signed URI that refers to one or more storage
resources and contains a token containing unique query parameters. The token indicates how
the resource can be accessed by the client.
Shared Access Signature (SAS) provides secure delegated access to resources in your
storage account. With SAS, you have granular control over how a client can access your data.
For example: What resources the client may access, What permissions they have to those
resources, and How long the SAS is valid.
A signature, one of the query parameters, is constructed from the SAS parameters and signed
with the key used to create the SAS. This signature is used by Azure Storage to authenticate
access to the storage resource.
A common scenario where a SAS is useful is a service where users read and write their own data to
your storage account.
In the scenario where a storage account stores user data, there are two common design
patterns:
109
Source: Microsoft Documentation (Choose when to use shared access signatures)
● User delegation SAS: A user delegation SAS is secured with Microsoft Entra credentials
and also by the permissions specified for the SAS. A user delegation SAS applies to Blob
storage only.
● Service SAS: A service SAS is secured with the storage account key. A service SAS
delegates access to a resource in the following Azure Storage services: Blob storage,
Queue storage, Table storage, or Azure Files.
● Account SAS: An account SAS is secured with the storage account key. An account SAS delegates
access to resources in one or more of the storage services. All of the operations available via
a service SAS or user delegation SAS are also available via an account SAS.
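A sketch of generating a service SAS for a blob with the .NET library (blobClient is assumed to have been created with the storage account key so that it can sign the SAS; permissions and lifetime are illustrative):

using System;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

if (blobClient.CanGenerateSasUri)
{
    // Read-only access to this blob for one hour.
    Uri sasUri = blobClient.GenerateSasUri(
        BlobSasPermissions.Read,
        DateTimeOffset.UtcNow.AddHours(1));

    Console.WriteLine($"SAS URI: {sasUri}");
}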
110
Source: Microsoft Documentation (Discover shared access signatures)
References Links:
● Grant limited access to data with shared access signatures (SAS) - Azure Storage
● Create shared access signature (SAS) tokens for your storage containers and
blobs Implement shared access signatures - Training | Microsoft Learn
111
Azure Key Vault
Managed hardware security module (HSM) pools and vaults are the two types of containers
that the Azure Key Vault service supports. Software, HSM-backed keys, secrets, and certificates
can all be stored in vaults. Only HSM-backed keys are supported by managed HSM pools.
112
● Key management: Azure Key Vault can also be used as a key management solution. Azure Key Vault
makes it easy to create and control the encryption keys used to encrypt your data.
● Certificate Management: Public and private Secure Sockets Layer/Transport Layer Security
(SSL/TLS) certificates for usage with Azure and your internal connected resources can be
conveniently provisioned, managed, and deployed with Azure Key Vault.
There are two service tiers for Azure Key Vault: Standard, which uses a software key for
encryption, and Premium, which has keys safeguarded by a hardware security module (HSM).
Visit the Azure Key Vault price page to learn how the Standard and Premium tiers differ from
one another.
Benefits
● Centralized application secrets: You can manage the dissemination of your application secrets
by centrally storing them in Azure Key Vault. For instance, you can safely store the connection
string in Key Vault rather than in the app's code. URIs allow your apps to safely retrieve the
data they require. Specific versions of a secret can be retrieved by the applications using these
URIs.
● Safely keep secrets and keys: Before a caller (user or program) may get entry to a key vault,
they must first be properly authenticated and authorized. Microsoft Entra ID is used for
authentication. Key Vault access policy or Azure role-based access control (Azure RBAC) can be
used for authorization. Key vault access policy is used when trying to access data stored in a
vault, and Azure RBAC is utilized for managing the vaults. With the Azure Key Vault Premium
tier, hardware security modules (HSMs) provide hardware protection for Azure Key Vaults in
addition to software protection.
● Monitor access and use: You can keep an eye on things by turning on logging for your vaults.
You have authority over your logs; you can remove records that you no longer require or
secure them by limiting access. Configuring Azure Key Vault allows you to:
113
o replicating your Key Vault's contents both to a secondary region and within a
region. High availability is ensured by data replication, which eliminates the
requirement for administrator intervention to start the failover.
o offering normal PowerShell, Azure CLI, and portal administration options.
o automating tasks such as enrollment and renewal of certificates that you buy from public
certificate authorities.
Authentication
You must authenticate with Key Vault before you can do any operations on it. To authenticate to
Key Vault, there are three options:
● Managed identities for Azure resources: When you deploy an application on a virtual machine, you
can assign the VM an identity that has access to Key Vault. Identities can also be assigned to
other Azure resources. The advantage of this approach is that the app or service isn't responsible
for rotating the initial secret; Azure automatically rotates the service principal client secret
associated with the identity. This approach is recommended as a best practice.
● Service principal and certificate: Key Vault access is possible with the usage of a service
principal and certificate that go hand in hand. Since the developer or owner of the program
must rotate the certificate, we do not advise using this method.
● Service principal and secret: We do not advise using a service principal and secret for Key
Vault authentication, even though you are able to. Rotating the bootstrap secret automatically
to gain access to Key Vault is a challenging task.
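For instance, a minimal sketch of reading a secret with DefaultAzureCredential, which uses the managed identity when running in Azure (vault and secret names are placeholders):

using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

var client = new SecretClient(
    new Uri("https://<your-key-vault-name>.vault.azure.net/"),
    new DefaultAzureCredential());

// Retrieve the latest version of the named secret.
KeyVaultSecret secret = await client.GetSecretAsync("DemoConnectionString");
Console.WriteLine($"Retrieved secret '{secret.Name}'.");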
Using distinct keys, Perfect Forward Secrecy (PFS) secures connections between client
systems of customers and Microsoft cloud services. RSA-based 2,048-bit encryption key
lengths are also used for connections. It is more difficult for someone to intercept and
access data that is in transit thanks to this combination.
114
Securing App configuration using Azure key vault
1. In the Azure portal, select the Create a resource option located in the upper-left corner:
2. Type "Key Vault" into the search bar, then choose it from the drop-down menu.
3. Choose Key vaults from the results list on the left.
4. Go to Key Vaults and choose Add.
5. Enter the following details in the "Create key vault" section on the right:
115
6. Maintain the default settings for the other Create key vault choices.
7. Click Review+create
8. The info you supplied will be verified by the system and shown. Select "Create."
Right now, the only account that is allowed to access this new vault is your Azure account.
1. Open the Azure portal and log in. Search for App Configuration and click Create. Enter an
appropriate name, leave the default settings as they are, and click Review + create.
116
2. Go to Configuration explorer and select + Create > Key-value.
117
3. Once you've chosen + Create > Key value, enter the following values:
118
Adding reference to secret in code
1. Run the following command to create an ASP.NET Core web app in a new
DemoAppConfig folder:
3. Execute the subsequent command. The connection string for your App Configuration
store is stored in a secret called ConnectionStrings:AppConfig, which is stored by the
command using Secret Manager. The connection string from your App Configuration
store should be used in place of the placeholder <your_connection_string>. The
connection string is located in the Azure portal's App Configuration store under Access
Keys.
119
4. To help avoid unintentionally sharing secrets within source code, Secret Manager saves
the secret outside of your project tree. It is exclusively utilized for local web app testing.
Use the application settings, connection strings, or environment variables to store the
connection string when the app is deployed to Azure, such as App Service. Alternatively,
you can connect to App Configuration using managed identities or any other Microsoft
Entra identity to completely skip connection strings.
Add a Settings.cs file at the root of your project directory. It defines a strongly typed Settings
class for the configuration you're going to use. Replace the namespace with the name of
your project.
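A sketch of such a Settings class (the property names are placeholders and must match the keys you created in Configuration explorer, for example under a TestApp:Settings section):

// Settings.cs (sketch) - adjust the namespace and property names to your project and keys.
namespace DemoAppConfig
{
    public class Settings
    {
        public string BackgroundColor { get; set; }
        public long FontSize { get; set; }
        public string Message { get; set; }
    }
}

In Program.cs, the app then calls AddAzureAppConfiguration with the connection string, calls ConfigureKeyVault with a credential (for example DefaultAzureCredential) so that Key Vault references stored in App Configuration can be resolved, and binds the configuration section to the Settings class with builder.Services.Configure<Settings>(...).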
6. Open Index.cshtml.cs in the Pages directory, and update the IndexModel class with the
following code. Add the using Microsoft.Extensions.Options namespace at the beginning
of the file, if it's not already there.
7. Open Index.cshtml in the Pages directory, and update the content with the following
code.
121
8. To build the app using the .NET Core CLI, navigate to the root directory of your project.
Run the following command in the command shell:
To launch the web application locally when the build has successfully finished, type the
following command:
122
123
Azure Cache for Redis
One popular method for enhancing a system's scalability and performance is caching. In order
to achieve this, frequently accessed data is temporarily copied to quick storage that is near to
the program. Caching can greatly speed up client application response times by supplying data
faster if the fast data storage is situated closer to the application than the original source.
Based on the Redis program, Azure Cache for Redis offers an in-memory data store. Redis
enhances an application's scalability and performance when it significantly utilizes backend data
stores. Because frequently used data may be swiftly written to and read from the server
memory, it can handle high volumes of application requests. Redis provides modern
applications with a vital low-latency and high-throughput data storage solution.
1. Data cache - Databases are often too large to load directly into a cache. Instead, data can be
loaded into the cache only as needed (the cache-aside pattern). When modifications are made to
the data, the system can update the cache as well, and the update is subsequently shared with
other clients.
2. Content Cache - A lot of websites are created using templates that contain static elements like
banners, footers, and headers. These constants shouldn't be altered frequently. When using
an in-memory cache instead of backend datastores, static content can be accessed more
quickly.
3. Session store - Shopping carts and other user history information that a web application might
link to user cookies are frequent uses for this approach. When a cookie is passed through and
validated with each request, storing too much in it can negatively impact performance. A
common method queries the data in a database using the cookie as a key. It is quicker to
associate data with a user using an in-memory cache, such as Azure Cache for Redis, than it is
to interface with a whole relational database
4. Job and message queuing - When the activities involved in a request take a while to complete,
applications frequently add tasks to a queue. Extended operations are queued for sequential
processing, frequently by a different server. Task queue is the term for this kind of job deferral.
5. Distributed transactions -Applications can occasionally need to issue many instructions to a
backend datastore in order to complete an action in one go. Either every order must be
successful, or everything must be rolled back to the original state. It is possible to run many
commands as a single transaction with Azure Cache for Redis.
125
Cache for Redis available service tiers:
● Basic: An OSS Redis cache running on a single virtual machine. This tier is best for noncritical
and development/test workloads and has no service-level agreement (SLA).
● Standard: An OSS Redis cache replicated across two virtual machines.
● Premium: High-performance OSS Redis caches. This tier offers more features, lower latency,
higher throughput, and better availability. Premium caches run on more powerful VMs than Basic
or Standard caches.
● Enterprise: High-performance caches powered by Redis Labs' Redis Enterprise software. This tier
supports Redis modules such as RediSearch, RedisBloom, and RedisTimeSeries, and offers even
higher availability than the Premium tier.
● Enterprise Flash: Cost-effective large caches powered by Redis Labs' Redis Enterprise software.
This tier extends Redis data storage to non-volatile memory on the virtual machine (VM), which
is cheaper than DRAM, reducing the overall cost of memory per GB.
Expiration Policies:
The following are some of the Azure Cache for Redis policies:
● volatile-lru: The default policy. It evicts the least recently used keys out of the keys that
have an "expire set".
● allkeys-lru: Evicts any key using the least recently used (LRU) algorithm.
● volatile-random: Removes a random key from among the keys that have an "expire set".
● 10-minute idle timeout: Azure Cache for Redis closes connections that have been inactive for
10 minutes.
The policy can be changed in the cache's settings in the portal.
Azure Cache for Redis automatically removes a key once its assigned time-out has elapsed. Time-out
values can be set with commands such as EXPIRE and SETEX (or SET with an expiry option). Commands
that delete or overwrite the contents of a key, such as DEL, SET, GETSET, and the *STORE commands,
clear any existing time-out. After the expiration time has passed, the key is no longer returned
to callers that request it, although it might not be physically removed from memory for some time
afterward.
126
1. Name - The name of the Redis cache must be globally unique. Since the name is used to
create a public-facing URL for connecting to and interacting with the service, it must be
unique within Azure.
The name must consist of letters, numbers, and the '-' character and be between one
and sixty-three characters long. The '-' character is not allowed to begin or finish the
cache name, and consecutive '-' characters are invalid.
2. Location - Your application and cache instance should always be located in the same area.
Reliability can be greatly lowered and latency can rise dramatically when connecting to a
cache located elsewhere. Choose a location close to the application that is using the data if
you are connecting to the cache from outside of Azure.
3. Cache type - The cache's size, functionality, and performance are all determined by the tier.
Visit Azure Cache for Redis pricing for further details.
4. Clustering Support - You can use clustering with the Premium, Enterprise, and Enterprise
Flash tiers to divide your dataset among several nodes automatically. You can designate up
to ten shards in order to apply clustering. The amount spent is equal to the initial node's
cost times the number of shards.
To interact with your Redis instance, you can use the console available in the Azure portal.
127
Giving values an expiration time
Caching is important because it lets us keep frequently used values in memory. However, we also
need a way for values to expire when they become stale. In Redis, this is done by assigning a time
to live (TTL) to a key.
The key is immediately erased after the TTL expires, just like if the DEL command had been sent.
Notes about TTL expirations are provided here.
Expirations can be precisely set in milliseconds or seconds.
There is always a 1 millisecond resolution for expiry times.
Redis stores the date at which a key will expire; information about expirations is replicated and
persisted on disk, so time effectively continues to pass even while your Redis server is stopped.
Below is an example of how to set an expiration:
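A sketch with the StackExchange.Redis client (the connection string comes from the cache's Access keys blade; the key name and values are illustrative):

using System;
using StackExchange.Redis;

// Connect using the primary connection string from the Azure portal (placeholder).
ConnectionMultiplexer redis = ConnectionMultiplexer.Connect(
    "<cache-name>.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False");

IDatabase db = redis.GetDatabase();

// Set a value with a 60-second time to live (TTL).
db.StringSet("demo:counter", "42", TimeSpan.FromSeconds(60));

// Inspect the remaining TTL; once it elapses the key is removed automatically.
TimeSpan? ttl = db.KeyTimeToLive("demo:counter");
Console.WriteLine($"TTL remaining: {ttl}");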
Azure Content Delivery Network (Azure CDN)
A content delivery network (CDN) is a distributed network of servers that can deliver web content
to users efficiently. To reduce latency, CDNs store cached content on edge servers close to end
users.
CDNs are commonly employed for the delivery of static content, including HTML pages,
style sheets, documents, graphics, and client-side scripts. Regardless of a user's location in
respect to the data centre hosting the application, the main benefits of utilizing a CDN are
reduced latency and faster content delivery. Because the web application does not have to
process requests for the content that is hosted in the CDN, CDNs can also aid in reducing strain
on the application.
With Azure Content Delivery Network (CDN), developers can quickly deliver high-bandwidth content
to users worldwide by caching their content at strategically placed physical nodes around the
globe. Azure CDN can also accelerate dynamic content, which cannot be cached, by using CDN POPs
and various network optimizations, for example by using route optimization to bypass Border
Gateway Protocol (BGP) routes.
When delivering web site assets, Azure CDN offers the following advantages:
● Enhanced user experience and better performance for end users, particularly when
utilizing applications that need several round trips to load content.
● Large scaling is necessary to better manage sudden, high loads, like those that occur
during a product introduction.
● In order to reduce the amount of traffic transmitted to the origin server, user requests
are distributed and content is served straight from edge servers.
Azure Content Delivery Network working
130
1. A user (Alice) requests a file (also called an asset) by using a URL with a special domain
name, such as <endpoint name>.azureedge.net. This name can be an endpoint hostname or a custom
domain. DNS routes the request to the best-performing POP location, which is usually the POP
geographically closest to the user.
2. The POP requests the file from the origin server if none of the edge servers in its cache has it.
Any publicly available web server, an Azure Web App, an Azure Cloud Service, or an Azure
Storage account can act as the origin server.
3. The file is returned to an edge server in the POP by the origin server.
4. The file is cached by an edge server in the POP and then sent back to Alice, the original
requester. Until the time-to-live (TTL) indicated by its HTTP headers expires, the file is kept in
cache on the edge server in the POP hierarchy. The default TTL in the event that the origin
server failed to specify one is seven days.
5. Then, using the same URL that Alice used, other users can request the same file and be sent
to the same POP.
6. The POP edge server returns the file straight from the cache if the file's TTL hasn't expired. A
quicker, more responsive user experience is the outcome of this approach.
Features: Among the main capabilities that Azure CDN provides are:
● Dynamic site acceleration
● CDN caching rules
● HTTPS custom domain support
● Azure diagnostics logs
● File compression
● Geo-filtering
131
2. List CDN profiles and endpoints
In order to prevent us from trying to create duplicates, the procedure below first lists
every profile and endpoint in our resource group. If it discovers a match for the profile and
endpoint names given in our constants, it notes that information for later use.
132
Once the profile is created, we will create an endpoint
133
● Azure Front Door Premium is designed with enhanced security in mind.
4. Enter the following data for the necessary settings on the Create a Front Door profile page.
134
● Subscription - select your subscription
● Resource Group - select already created resource group or else click on create new and
create new resource group
● Name - Enter a name for your profile. In this example it is demoAzureFrontdoor.
● Tier - Choose the Premium or Standard tier. The Standard tier is optimized for content
delivery; the Premium tier builds on the Standard tier and is focused on security. See
Comparison of Tiers. (ref. Microsoft Learn)
● Endpoint Name - Enter a globally unique name for your endpoint.
● Origin type - Select the resource type for your origin. In this example, we select Custom.
● Origin host name - Enter the hostname for your origin.
● Origin host name - Enter the hostname for your origin
● Private Link - If you would like to have a private connection between your Azure Front
Door and your origin, enable private link service. Supported services are limited to
internal load balancers, Storage Blobs, and App services.
● Caching - To cache contents closer to your users utilizing the Microsoft network and
edge POPs of Azure Front Door, select this check box.
● WAF Policy - To enable this functionality, click Create new or choose an existing WAF
policy from the options.
Note - When creating an Azure Front Door profile, you must choose an origin from the same
subscription in which the Front Door is created.
5. Select Review + create, and then select Create to deploy your Azure Front Door.
135
Azure API Management Instance
API Management offers the essential features (developer engagement, business insights, analytics,
security, and protection) needed to make an API program successful. Each API consists of one or
more operations, and each API can be added to one or more products. To use an API, developers
subscribe to a product that contains that API; they can then call the API's operations, subject to
any usage policies that are in effect.
2. The management plane is the administrative interface where you configure your API program. You
can use it to:
3. The Developer portal is an automatically generated, fully customizable website containing the
documentation of your APIs. Using the developer portal, developers can:
136
● Manage API keys
Products
Developers gain access to APIs through products. In API Management, a product contains one or more
APIs and is configured with a title, description, and usage terms. Products can be open or
protected. Protected products must be subscribed to before they can be used, whereas open products
can be used without a subscription.
Groups
Groups are used to control which products are visible to developers. The following immutable
system groups are part of API Management:
Administrators: They are responsible for creating the operations, products, and APIs that
developers utilize as well as managing instances of the API Management service. Administrators
of Azure subscriptions are included in this group.
Developers: Users of your developer site with authentication who create apps with your APIs.
Developers can create apps that invoke API operations by gaining access to the developer
portal.
Guests: Unauthenticated users of the developer portal. Certain read-only privileges, such as the
ability to browse APIs but not call them, may be assigned to them.
In addition to these system groups, administrators can create custom groups or use external groups
from associated Microsoft Entra tenants.
Policies
A policy is a collection of statements that are executed sequentially on the request or response
of an API. Many policy statements are available; popular ones include format conversion from XML
to JSON and call-rate limiting to restrict the number of incoming calls from a developer.
Unless otherwise specified by the policy, policy expressions can be used as text values or
attribute values in any API Management policy. Policy expressions serve as the foundation for
some policies, including the Control flow and Set variable policies.
Depending on your needs, policies can be enforced at several scopes: global (across all APIs),
product, individual API, or API operation.
137
API Gateways
Proxying API queries, implementing policies, and gathering telemetry are all handled by the
service component known as the API Management gateway (sometimes referred to as the data
plane or runtime).
In between clients and services is an API gateway. It routes client requests to services in reverse
proxy fashion. Additionally, it may carry out a number of cross-cutting functions like rate
limitation, SSL termination, and authentication. Clients must submit requests directly to
back-end services if a gateway is not deployed. However, exposing services to customers directly
may have the following issues:
● Complicated client code may be the outcome. The client has to manage failures robustly and
monitor several endpoints.
● Between the client and the backend, coupling is created. The client must understand how
each service is broken down. This makes it more difficult to retain clients and to restructure
services.
● Calls to various services may be necessary for a single operation.
● Every service that interacts with the public must manage issues like SSL, client rate
limitation, and authentication.
● A client-friendly protocol like HTTP or WebSocket must be exposed by services. This reduces
the variety of communication protocols available.
● Publicly accessible services need to be protected from potential attacks by hardening them.
By severing the client from the service, a gateway aids in resolving these problems.
Types of Gateways
Both managed and self-hosted gateways are provided by API Management:
Managed: For each API Management instance across all service tiers in Azure, the managed
gateway is the default gateway component. Wherever the backends hosting the APIs are
located, all API traffic is routed through Azure using the managed gateway.
Self-hosted: A containerized variant of the standard managed gateway, the self-hosted gateway
is an optional feature. When running the gateways off of Azure in the same settings as the API
backends, it's helpful in hybrid and multi cloud scenarios. Customers with hybrid IT
infrastructure may manage APIs hosted on-premises and across clouds using a single API
Management service in Azure thanks to the self-hosted gateway.
Create API
1. Sign into Azure portal
2. Choose the option to Create a resource from the Azure portal menu. On the Azure Home
page, you may also choose Create a resource.
138
3. Choose Integration > API Management from the Create a resource page.
● Organization name - The name of your organization. This name appears in a number of places, such
as the title of the developer portal and the sender of email notifications.
● Administrator email - The email address to which API Management sends all notifications.
● Pricing tier - To test the service, choose the Developer tier. You cannot use this tier in
production.
139
5. Select Review + create
2. Choose your API Management instance from the list of services on the page.
140
3. Review the properties of your service in the overview page.
141
Configure Access to APIs
1. In the API Management instance you created, select APIs under APIs to create an API.
2. Then click Echo API. Here you can create GET, POST, DELETE, and other operations.
142
3. Now click on test to test the API.
Policy Configuration
The policy definition is a simple XML document that describes a sequence of inbound and outbound
statements. You can edit the XML directly in the definition window.
The configuration is divided into four sections: inbound, backend, outbound, and on-error. The
specified policy statements are executed in order for a request and a response.
143
<policies>
<inbound>
<!-- statements to be applied to the request go here -->
</inbound>
<backend>
<!-- statements to be applied before the request is forwarded to
the backend service go here -->
</backend>
<outbound>
<!-- statements to be applied to the response go here -->
</outbound>
<on-error>
<!-- statements to be applied if there is an error condition go here -->
</on-error>
</policies>
144
3. Policies can also be edited directly in the XML.
4. For this example, we will implement a set-status (set status code) policy, similar to the sketch below. Save the changes.
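A policy of this kind could look roughly like the following (the status code and reason are illustrative; here the set-status policy is placed in the outbound section):

<policies>
    <inbound>
        <base />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
        <set-status code="200" reason="OK" />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>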
Azure Event Grid and Hub
A.Azure Event Grid
Azure Event Grid is a serverless event broker that you can use to integrate applications using
events. Event Grid delivers events to subscriber destinations such as applications, Azure
services, or any endpoint to which Event Grid has network access. The events can originate from
other applications, SaaS services, or Azure services. Publishers emit events but have no
expectation about how the events are handled; subscribers decide which events they want to handle.
Developing applications using event-based architectures is made simple using Event Grid.
Storage blobs and resource groups are two Azure services that Event Grid supports natively.
Moreover, Event Grid supports specific subjects for your own events.
Filters can be used to multicast events to numerous endpoints, route particular events to
distinct endpoints, and ensure the dependable delivery of your events.
The following image shows how Event Grid connects sources and handlers, and isn't a
comprehensive list of supported integrations.
146
● Events - What happened.
The smallest piece of data that completely captures a systemic event is called an event.
Every event contains certain common details, such as its origin, the time it happened, and
its unique identification. Additionally, each event has unique information that is only
pertinent to that particular kind of event. An event pertaining to the creation of a new file
in Azure Storage, for instance, contains information about the file, including its
lastTimeModified value. Or, the URL of the capture file is present in an Event Hubs event.
Events up to 64 KB in size are covered by the general availability (GA) service-level agreement
(SLA). Support for events up to 1 MB is currently in preview. Events over 64 KB are charged in
64-KB increments.
● Event subscriptions - The endpoint or built-in mechanism to route events, sometimes to more
than one handler. Subscriptions are also used by handlers to intelligently filter incoming
events.
You can choose the events on a particular topic you want to receive from Event Grid by
subscribing. An endpoint for handling the event is provided by you at the time of
subscription creation. The events that are sent to the endpoint can be filtered. You can
apply a subject pattern or event type filter. If an event subscription is only required
temporarily and you don't want to bother about maintaining it, set an expiration date.
147
● Event handlers - The app or service reacting to the event.
From an Event Grid perspective, an event handler is the place where the event is sent. The
handler takes some further action to process the event. Event Grid supports several handler
types: you can use a supported Azure service or your own webhook as the handler. Depending on
the type of handler, Event Grid follows different mechanisms to guarantee delivery of the
event. For HTTP webhook event handlers, the event is retried until the handler returns a status
code of 200 - OK. For an Azure Storage queue, events are retried until the Queue service
successfully processes the message push into the queue.
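Besides the CLI walkthrough below, events can also be published to a custom topic from code; a sketch with the Azure.Messaging.EventGrid library (the topic endpoint and access key are placeholders):

using System;
using Azure;
using Azure.Messaging.EventGrid;

// Topic endpoint and access key come from the custom topic in the Azure portal (placeholders).
var client = new EventGridPublisherClient(
    new Uri("https://<your-topic-endpoint>"),
    new AzureKeyCredential("<topic-access-key>"));

var egEvent = new EventGridEvent(
    "myapp/vehicles/motorcycles",                  // subject
    "recordInserted",                              // event type
    "1.0",                                         // data version
    new { make = "Ducati", model = "Monster" });   // data payload

await client.SendEventAsync(egEvent);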
Implementation
1. Login to azure portal. Launch the Cloud Shell:
2. Select Bash as the shell.
3. Run the following commands to create the variables. Replace <myLocation> with a region
near you.
let rNum=$RANDOM*$RANDOM
myLocation=<myLocation>
myTopicName="az204-egtopic-${rNum}"
mySiteName="az204-egsite-${rNum}"
mySiteURL="https://${mySiteName}.azurewebsites.net"
4. Create a resource group for the new resources you're creating.
148
5. Register the Event Grid resource provider by using the az provider register command.
az provider register --namespace Microsoft.EventGrid
It can take a few minutes for the registration to complete. To check the status run the
following command.
az provider show --namespace Microsoft.EventGrid --query "registrationState"
6. Create a custom topic by using the az eventgrid topic create command. The name must
be unique because it's part of the DNS entry.
7. Copy the following command, specify a name for the web app (Event Grid Viewer
sample), and press ENTER to run the command. Replace <your-site-name> with a unique
name for your web app. The web app name must be unique because it's part of the DNS
entry.
sitename=<your-site-name>
8. Run the az deployment group create to deploy the web app using an Azure Resource
Manager template.
149
The deployment may take a few minutes to complete. After the deployment has
succeeded, view your web app to make sure it's running. In a web browser, navigate to:
https://<your-site-name>.azurewebsites.net
9. You subscribe to an Event Grid topic to tell Event Grid which events you want to track and
where to send those events. The following example subscribes to the custom topic you
created, and passes the URL from your web app as the endpoint for event notification.
The endpoint for your web app must include the suffix /api/updates/.
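The subscription command isn't shown in the source; a sketch that uses the topic's resource ID and the viewer app deployed in step 8 (the subscription name is illustrative):
topicId=$(az eventgrid topic show --name $myTopicName \
    --resource-group $myResourceGroup --query "id" --output tsv)

az eventgrid event-subscription create \
    --source-resource-id $topicId \
    --name az204ViewerSub \
    --endpoint "https://${sitename}.azurewebsites.net/api/updates"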
10. Let's trigger an event to see how Event Grid distributes the message to your endpoint.
First, let's get the URL and key for the custom topic.
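A sketch of how the $endpoint and $key variables used by the curl command below might be populated:
endpoint=$(az eventgrid topic show --name $myTopicName \
    --resource-group $myResourceGroup --query "endpoint" --output tsv)
key=$(az eventgrid topic key list --name $myTopicName \
    --resource-group $myResourceGroup --query "key1" --output tsv)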
To simplify this article, you use sample event data to send to the custom topic. Typically,
an application or Azure service would send the event data. The following example
creates sample event data:
event='[ {"id": "'"$RANDOM"'", "eventType": "recordInserted", "subject":
"myapp/vehicles/motorcycles", "eventTime": "'`date +%Y-%m-%dT%H:%M:%S%z`'",
"data":{ "make": "Ducati", "model": "Monster"},"dataVersion": "1.0"} ]'
11. The data element of the JSON is the payload of your event. Any well-formed JSON can go in this field. You can also use the subject field for advanced routing and filtering.
curl is a utility that sends HTTP requests. Here, use curl to send the event to the topic.
curl -X POST -H "aeg-sas-key: $key" -d "$event" $endpoint
12. You've triggered the event, and Event Grid sent the message to the endpoint you configured when subscribing. View your web app to see the event you just sent.
B. Azure Event Hubs
The "front door" of an event pipeline, also known as an event ingestor in solution
architectures, is represented by Azure Event Hubs. To separate the creation of an event stream
from the consumption of such events, an event ingestor is a part or service that stands in
between event publishers and event consumers. By offering a uniform streaming platform with
a time retention buffer, Event Hubs helps to separate the needs of event producers and
attendees.
The following are the essential elements of Event Hubs:
● An Event Hubs client is the primary way developers interact with the Event Hubs client library. There are several Event Hubs clients, each dedicated to a specific use, such as publishing or consuming events.
● An Event Hubs producer is a type of client that serves as a source of telemetry data, diagnostics information, usage logs, or other log data. It might be part of an embedded device solution, a mobile device application, a game title running on a console or other device, a client- or server-based business solution, or a website.
● An Event Hubs consumer is a client that reads data from Event Hubs and processes it. Processing can involve filtering, aggregation, and complex computation, as well as distributing or storing the information in its raw or transformed form. Event Hubs consumers are often robust, high-scale platform infrastructure components with built-in analytics capabilities, such as Azure Stream Analytics and Apache Spark.
● A partition is an ordered sequence of events held in an event hub. Partitions are a means of organizing data around the degree of parallelism required by event consumers. In the partitioned consumer pattern that Azure Event Hubs offers, each consumer reads only a specific partition, or subset, of the message stream. Newer events are appended to the end of this sequence as they arrive. The number of partitions is specified when an event hub is created and can't be changed afterwards.
● A consumer group is a view of an entire event hub. Consumer groups enable multiple consuming applications to read the event stream independently, each at its own pace and with its own position in the stream. It's recommended that there's only one active consumer for a given partition and consumer group pairing, and there can be at most five concurrent readers on a partition per consumer group. Each current reader receives all of the events from its partition, so if multiple readers are on the same partition, they receive duplicate events.
● An event receiver is any entity that reads event data from an event hub. Event Hubs consumers connect via an AMQP 1.0 session, and the Event Hubs service delivers events through the session as they become available. All Kafka consumers connect via the Kafka protocol 1.0 and later.
● Throughput units (standard tier) or processing units (premium tier) are prepurchased units of capacity that control the throughput capacity of Event Hubs.
The following figure shows the Event Hubs stream processing architecture:
1. In the Azure portal, select All services in the left menu, and select star (*) next to Event
Hubs in the Analytics category. Confirm that Event Hubs is added to FAVORITES in the
left navigation menu.
2. Select Event Hubs under FAVORITES in the left navigation menu, and select Create on the
toolbar.
3. On the Create namespace page, take the following steps:
i) Select the subscription in which you want to create the namespace.
ii) Select the resource group you created in the previous step.
iii) Enter a name for the namespace. The system immediately checks to see if the name
is available.
iv) Select a location for the namespace.
v) Choose Basic for the pricing tier. If you plan to use the namespace from Apache
Kafka apps, use the Standard tier. The basic tier doesn't support Apache Kafka
workloads. To learn about differences between tiers, see Quotas and limits, Event
Hubs Premium, and Event Hubs Dedicated articles.
vi) Leave the throughput units (for the standard tier) or processing units (for the premium tier) setting as it is. To learn about throughput units or processing units, see Event Hubs scalability.
vii) Select Review + Create at the bottom of the page.
viii) On the Review + Create page, review the settings, and select Create. Wait for the deployment to complete.
4. Confirm that you see the Event Hubs Namespace page similar to the following example:
5. On the Overview page, select + Event hub on the command bar.
6. Type a name for your event hub, then select Review + create.
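If you prefer the CLI over the portal, a rough equivalent of the steps above (all names are illustrative; note that the Basic tier doesn't support Kafka workloads):
az eventhubs namespace create --name az204-eh-ns \
    --resource-group az204RG --location eastus --sku Basic

az eventhubs eventhub create --name az204-hub \
    --namespace-name az204-eh-ns --resource-group az204RG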
Azure Service Bus and Azure Queue Storage queues
Storage queues and Service Bus queues are the two kinds of queuing mechanisms that Azure
supports.
Service Bus queues are part of a broader Azure messaging infrastructure that supports queuing, publish/subscribe, and more advanced integration patterns. They're designed to integrate applications or application components that may span multiple communication protocols, data contracts, trust domains, or network environments.
Storage queues are part of the Azure Storage infrastructure. They let you store large numbers of messages, which can be accessed from anywhere in the world via authenticated HTTP or HTTPS calls. A queue message can be up to 64 KB in size, and a queue can contain millions of messages, up to the total capacity limit of the storage account. Queues are commonly used to create a backlog of work to process asynchronously.
Storage queues and Service Bus queues have slightly different feature sets; you can choose one or both, depending on what your specific solution requires.
Solution architects and developers should take the following recommendations into consideration when deciding which queuing technology best serves the needs of a particular solution.
Consider using storage queues
When should a solution architect or developer use storage queues?
● Your application must store more than 80 GB of messages in a queue.
● Your application needs to track the progress of processing a message in the queue. This is helpful if a worker handling a message crashes, because another worker can use that information to continue where the previous worker left off.
● You require server-side logs of all of the transactions executed against your queues.
Standard                           Premium
Variable throughput                High throughput
Variable latency                   Predictable performance
Pay-as-you-go variable pricing     Fixed pricing
N/A                                Ability to scale workload up and down
Message size up to 256 KB          Message size up to 100 MB
1. Queues
● Queues provide First In, First Out (FIFO) message delivery to one or more competing consumers. That is, receivers typically receive and process messages in the order in which they were added to the queue, and each message is received and processed by only one message consumer. Because messages are stored durably in the queue, producers (senders) and consumers (receivers) don't have to process messages at the same time.
● A related benefit is load-leveling, which lets producers and consumers send and receive messages at different rates. Many applications see their system load vary over time, while the processing time required for each unit of work is typically constant. With a queue acting as an intermediary between message producers and consumers, the consuming application only has to handle average load rather than peak load.
● Connecting message producers and consumers through queues provides an inherent loose coupling between the components. Because producers and consumers aren't aware of each other, a consumer can be upgraded without any effect on the producer.
● You can create queues using the Azure portal, PowerShell, the CLI, or Resource Manager templates, and then send and receive messages using clients.
● A queue allows a single consumer to process each message. In contrast, topics and subscriptions provide a publish/subscribe pattern that enables one-to-many communication, which is useful for scaling to large numbers of recipients. Each published message is made available to every subscription registered with the topic. Depending on the filter rules set on those subscriptions, a message sent by the publisher to a topic can be received by one or more subscribers, and subscriptions can use additional filters to restrict the messages they want to receive.
● Publishers send messages to a topic in the same way they send messages to a queue. Consumers, however, don't receive messages directly from the topic. Instead, they receive messages from subscriptions of the topic. A topic subscription resembles a virtual queue that receives copies of all the messages sent to the topic, and consumers receive messages from a subscription the same way they receive them from a queue.
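Before the portal steps that follow, here is a hedged CLI sketch of creating both messaging patterns described above: a queue for point-to-point delivery, and a topic with a subscription for publish/subscribe (all names are illustrative; topics require at least the Standard tier):
az servicebus namespace create --name az204-sb-ns \
    --resource-group az204RG --location eastus --sku Standard

az servicebus queue create --name orders-queue \
    --namespace-name az204-sb-ns --resource-group az204RG

az servicebus topic create --name orders-topic \
    --namespace-name az204-sb-ns --resource-group az204RG

az servicebus topic subscription create --name all-orders \
    --topic-name orders-topic --namespace-name az204-sb-ns --resource-group az204RG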
4. Click Review + create and then click Create.
Create Queue
1. Under Entities, click Queues.
2. Click + Queue; a side panel opens. Fill in the required details and click Create.
Introduction to Azure Service Bus, an enterprise message broker - Azure Service Bus | Microsoft
Learn
3. Queue - A queue contains a set of messages. For example, the URL https://myaccount.queue.core.windows.net/images-to-download addresses the images-to-download queue in the myaccount storage account.
4. Message - A message, in any format, of up to 64 KB. For version 2017-07-29 or later, the maximum time-to-live can be any positive number, or -1, which indicates that the message doesn't expire. If this parameter is omitted, the default time-to-live is seven days.
Implementation of queue
1. Create a storage account if you haven't already created one.
2. Create an app
● To build a new console app named QueueApp, use the dotnet new command in a console window (such as cmd, PowerShell, or Azure CLI). This command creates a basic C# "hello world" project with a single source file named Program.cs.
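A sketch of the commands described above, plus adding the Azure Storage Queues SDK package the app is assumed to use:
dotnet new console -n QueueApp            # scaffold the console app
cd QueueApp
dotnet add package Azure.Storage.Queues   # queue client library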
4. Add using statements
● From the project directory's command line, type code . to launch Visual Studio Code in the current directory. Keep the command-line window open; you'll need it to run more commands later. If you're prompted to add C# assets required for building and debugging, click Yes.
● Open the Program.cs source file and, after the using System; line, add the following namespaces. This application uses types from these namespaces to connect to Azure Storage and work with queues.
Because the app uses cloud resources, the code runs asynchronously.
● Update the Main method to run asynchronously by replacing the void return value with async Task.
● Save the Program.cs file.
Before you can make any calls into Azure APIs, you have to obtain your credentials from the Azure portal.
7. Get your credentials from the Azure portal
iv. In the Access keys pane, select Show keys.
v. Find the Connection string value in the key1 section and select the Copy to clipboard icon to copy it. In the next part, you assign the value of the connection string to an environment variable.
Once the connection string has been copied, write it to a new environment variable on the local machine running the application. Open a console window and follow your operating system's instructions to set the environment variable, replacing <yourconnectionstring> with your actual connection string.
On Windows, you need to launch a new command window after adding the environment variable.
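As an alternative to copying the value from the portal, a sketch that fetches the connection string with the CLI and exports it in Bash (the storage account and resource group names, and the environment variable name your code reads, are assumptions; on Windows use setx instead of export):
connStr=$(az storage account show-connection-string \
    --name az204storageacct --resource-group az204RG \
    --query "connectionString" --output tsv)
export AZURE_STORAGE_CONNECTION_STRING="$connStr"   # variable name must match what the app reads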
9. Restart programs
After adding the environment variable, restart any running programs that need to read it. For example, restart your editor or development environment before continuing.
Add the connection string to the program so that it can access the storage account.
ii. Optional: By default, the maximum time-to-live for a message is seven days, but any positive number can be used as the message time-to-live. To add a message that never expires, use TimeSpan.FromSeconds(-1) in your call to SendMessageAsync, as the following line does:
await theQueue.SendMessageAsync(newMessage, default, TimeSpan.FromSeconds(-1), default);
iii. Save the file.
To receive a message from the queue, create a new method. It's important to delete the message from the queue as soon as it has been successfully received, so that it isn't processed more than once.
Messages sent to the queue with SDK versions earlier than v12 are automatically Base64-encoded. That feature was removed in v12, so a message retrieved with the v12 SDK isn't Base64-decoded by default; you must explicitly Base64-decode the contents yourself.
ii. Save the file
At the end of a project, it's good practice to check whether you still need the resources you created; leaving resources running can be expensive. If the queue still exists but is empty, ask the user whether to delete it.
14. Check for command-line arguments
Finally, wait for user input by calling Console.ReadLine before ending the process.
i. Expand the Main method to check for command-line arguments and wait for user input.
1. To build the project, execute the following dotnet command from the project
directory's command line.
2. Execute the following command to add the first message to the queue once the
project has successfully built.
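The exact commands aren't reproduced in the source; a minimal sketch of steps 1-3, assuming the app treats its command-line arguments as the message text:
dotnet build                          # build the project
dotnet run -- "First queue message"   # send a message to the queue
dotnet run                            # no arguments: receive and delete the next message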
3. Launch the application without any command-line options to obtain and eliminate
the initial message in the queue.
dotnet run
4. Run the program again until all of the messages are gone. If you run it once more, you'll see a message stating that the queue is empty and a prompt asking whether you want to delete the queue.
Azure Monitor Application Insights
Azure Application Insights, a feature of Azure Monitor, is a robust tool for tracking, diagnosing,
and optimizing the performance of live web applications. It delivers detailed insights into
application health, user behavior, and overall performance.
Key Capabilities
1. Application Health Monitoring
● Dashboard Overview: Instantly access vital metrics and application performance data.
● Architecture Visualization: Understand component dependencies using the application
map.
● Real-Time Metrics: Track live application activity for quick insights.
● Transaction Analysis: Trace individual operations to diagnose performance issues.
● Uptime Monitoring: Use automated tests to ensure application availability.
● Failure Diagnostics: Detect and resolve failures to enhance reliability.
● Performance Metrics: Identify and address bottlenecks to improve speed and efficiency.
2. Comprehensive Monitoring and Alerts
● Issue Alerts: Receive notifications about critical issues based on predefined metrics.
● Detailed Metrics and Logs: Analyze usage patterns and investigate problems through
detailed data.
● Diagnostic Streams: Export logs and metrics for external analysis or integration.
● Custom Dashboards: Create tailored reports and visualizations to track performance
indicators.
3. User Behavior Analysis
● Engagement Insights: Track how users interact with your application.
● Journey Analysis: Identify where users drop off and optimize the user flow.
● Navigation Patterns: Map user paths to enhance engagement and usability.
● User Grouping: Segment users by common characteristics for targeted analysis.
4. Code Optimization Tools
● Performance Profiling: Detect bottlenecks in .NET applications with the profiler.
● Snapshot Debugging: Capture exceptions for faster issue resolution.
● AI Recommendations: Leverage AI insights to improve code efficiency and performance.
Azure Application Insights equips developers with the tools to monitor and improve application
performance, ensure reliability, and enhance the user experience, making it an indispensable
part of modern application management.
APM tools are useful for monitoring applications from development, through test, and into
production in the following ways:
● Proactively understand how an application is performing.
● Reactively review application execution data to determine the cause of an incident.
Instrument an app or service to use Application Insights
Azure Application Insights enables monitoring, anomaly detection, and insights into application
performance and usage.
Steps to Integrate Application Insights
1. Create an Application Insights Resource
○ In the Azure Portal, create a new Application Insights resource.
○ Select the resource group, region, and note the Instrumentation Key or
Connection String.
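If you'd rather script this, a hedged CLI sketch for creating the resource (requires the application-insights CLI extension; names are illustrative):
az extension add --name application-insights
az monitor app-insights component create \
    --app az204-appinsights --location eastus \
    --resource-group az204RG --application-type web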
5. Verify Telemetry
○ Check metrics, logs, and telemetry data in the Azure Portal.
Benefits
● Real-Time Monitoring: Track application performance and usage live.
● Diagnostics: Quickly identify and resolve issues using rich telemetry data.
● User Analytics: Analyze user behavior and engagement.
● Custom Metrics: Monitor specific events and metrics tailored to your application.
Integrating Application Insights provides valuable data to enhance performance, improve
reliability, and make informed decisions.
Application Insights feature overview
There are several ways to start monitoring and analyzing app performance:
● At run time: instrument your web app on the server. Ideal for applications that are already deployed; no code changes are required.
● At development time: add Application Insights to your code. Allows you to customize telemetry collection and send more telemetry.
● Instrument your web pages for page view, AJAX and other client-side telemetry.
● Analyze mobile app usage by integrating with Visual Studio App Center.
● Availability Tests - Regularly ping your website from our servers.
Application Insights log-based metrics
Application Insights log-based metrics allow you to analyze the health of your monitored
apps, create powerful dashboards, and configure alerts.
There are two types of metrics:
● Behind the scenes, Log-based metrics are translated from stored events into Kusto queries
● Standard metrics are stored as pre-aggregated time series.
Because standard metrics are pre-aggregated at collection time, they perform better at query time, which makes them the best choice for dashboarding and real-time alerting. Log-based metrics have more dimensions, which makes them a better choice for data analysis and ad hoc diagnostics.
Use the Namespace selector to switch between log-based and standard metrics in Metrics
Explorer.
● The selected Time range is translated into an additional where timestamp... clause to pick only the events from the selected time range. For example, a chart showing the most recent 24 hours of data adds where timestamp > ago(24h) to the query.
● The selected Time granularity is placed into the final summarize clause as bin(timestamp, [time grain]).
● Any selected Split by dimension is translated into an additional summarize property. For example, if you split your chart by location and plot using 5-minute time granularity, the summarize clause ends with summarize ... by bin(timestamp, 5m), location.
Availability metrics:
Metrics in the Availability category allow you to see the health of your web application as
observed from points around the world. Configure availability tests to start using any metrics
from this category.
(Source: Microsoft Documentation)
If all components are roles in the same Application Insights resource, this discovery step is not
necessary. The initial load for such an application consists of all its components.
One of the key goals with this experience is to be able to visualize complex topologies with
hundreds of components. Click on any component to see related insights and jump to the
performance and failure triage experience for that component.
Application Map uses the cloud role name property to identify the components on the map. You can manually set or override the cloud role name and change what's displayed on the application map.
Application Insights availability tests
Once you have your web app or website up and running, you can set up recurring tests to
monitor availability and responsiveness. Application Insights regularly sends web requests to
your application from points around the world. It can alert you if your application is not
responding or is responding too slowly.
You can set up availability tests for any HTTP or HTTPS endpoint that's accessible from the public internet. You don't need to make any changes to the website you're testing; in fact, it doesn't even have to be your own site. You can test the availability of a REST API that your service depends on.
Reference Links:
● Monitor app performance - Training | Microsoft Learn
● Configure monitoring for ASP.NET with Azure Application Insights
● Application Insights overview - Azure Monitor | Microsoft Learn
● Azure Application Insights log-based metrics - Azure Monitor | Microsoft Learn
● Select an availability test - Training | Microsoft Learn
● Review TrackAvailability() test results - Azure Monitor | Microsoft Learn
● Troubleshoot app performance by using Application Map - Training
Extra Learning Concepts
A. Deploying code to a web app
1. On your computer, open a terminal window and navigate to a working directory. Use the dotnet new webapp command to build a new .NET web application, and then change into the newly created app's directory.
2. Use the dotnet run command to launch the application locally from within the same terminal session.
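A sketch of the commands described in steps 1-2 (the app name matches the DemoAzureWebApp workspace referenced later):
dotnet new webapp -n DemoAzureWebApp   # scaffold a new ASP.NET Core web app
cd DemoAzureWebApp
dotnet run                             # run it locally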
4. Go to Extensions and install the Azure Tools extension in Visual Studio Code.
5. In Visual Studio Code, open the Command Palette by selecting View > Command Palette.
6. Search for and select "Azure App Service: Create New Web App (Advanced)".
7. Respond to the prompts as follows:
● If prompted, sign in to your Azure account.
● Select your Subscription.
● Select Create new Web App... Advanced.
● For Enter a globally unique name, use a name that's unique across all of Azure (valid
characters are a-z, 0-9, and -). A good pattern is to use a combination of your company
name and an app identifier.
● Select Create new resource group and provide a name like az204RG.
● Select an operating system (Windows or Linux).
● Select Create a new App Service plan, provide a name, and select the F1 Free pricing tier.
● Select Skip for now for the Application Insights resource.
● Select Add Config when prompted.
8. When prompted, choose Yes so that whenever you work in the "DemoAzureWebApp" workspace in Visual Studio Code, the app is automatically deployed to the App Service app.
9. When publishing completes, select Browse Website in the notification and select Open when prompted.
You see the ASP.NET Core 7.0 web app displayed in the page.
10. Open Index.cshtml.
11. Replace the first <div> element with the following code:
15. When publishing completes, select Browse Website in the notification and select Open
when prompted.
16. You see the updated ASP.NET Core 7.0 web app displayed in the page.
17. To add a new app setting, select New application setting. If you're using deployment slots, you can specify whether your setting is swappable; in the dialog, you can also stick the setting to the current slot.
18. To update application settings in bulk, click Advanced edit and edit the settings as needed.
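The same settings can be applied from the CLI; a sketch with illustrative setting names:
az webapp config appsettings set \
    --name <your-webapp-name> --resource-group az204RG \
    --settings ENVIRONMENT=Production FEATURE_FLAG=true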
19. To add a connection string for your app: if you're using a .NET application, you can specify the connection string in web.config; for other languages, add it by clicking New connection string in the Configuration window.
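A hedged CLI sketch for adding a connection string (the name and value are placeholders):
az webapp config connection-string set \
    --name <your-webapp-name> --resource-group az204RG \
    --connection-string-type SQLAzure \
    --settings MyDbConnection="Server=tcp:<server>.database.windows.net;Database=<db>;"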
20. To use certificates for your web app, click Certificates under Settings, then select Public key certificates (.cer) > Add certificate.
21. Select your existing certificate if you have any and click on Add.
Static Website hosting
1. To begin, log in to the Azure portal.
2. Find your storage account and click it to bring up the Overview pane for that account.
3. On the Overview pane, select the Capabilities tab, then select Static website to open the static website configuration page.
4. Select Enabled to enable static website hosting for the storage account.
5. In the Index document name box, enter a default index page (for example, index.html). The default index page is displayed when a user navigates to the root of your static website.
6. In the Error document path box, enter a default error page (for example, 404.html). The default error page is displayed when a user tries to access a page that doesn't exist on your static website.
7. To complete configuring the static site, click Save.
8. A confirmation notification appears, and the Overview pane displays your static website endpoints along with other configuration details.
9. In the Azure portal, go to the storage account that hosts your static website. Select Containers from the left navigation pane to view the list of containers.
10. Select the $web container in the Containers pane to open its Overview pane.
11. In the Overview pane, click the Upload icon to open the Upload blob pane. Select the Files field to open the file browser, navigate to the file you want to upload, select it, and then click Open to populate the Files field. Optionally, check Overwrite if files already exist.
12. If you want the browser to display the file's contents, the content type must be set to text/html. To verify this, select the name of the blob you uploaded in the previous step to open its Overview pane, and make sure the CONTENT-TYPE property is set to that value.
13. From your storage account's Overview pane, select the static website URL and check that it shows your content.
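The same flow can be scripted; a sketch with an illustrative storage account name (depending on your permissions, you may also need to supply an account key or --auth-mode login):
az storage blob service-properties update \
    --account-name az204storageacct \
    --static-website --index-document index.html --404-document 404.html

az storage blob upload --account-name az204storageacct \
    --container-name '$web' --file index.html \
    --content-type 'text/html' --overwrite

az storage account show --name az204storageacct \
    --resource-group az204RG --query "primaryEndpoints.web" --output tsv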